id
stringlengths
30
36
source
stringclasses
1 value
format
stringclasses
1 value
text
stringlengths
5
878k
no-problem/9910/astro-ph9910510.html
ar5iv
text
# Untitled Document Speckle interferometric observations of the collision of comet Shoemaker-Levy 9 with Jupiter S K Saha<sup>1</sup>, R Rajamohon<sup>1</sup>, P Vivekananda Rao<sup>2</sup>, G Som Sunder<sup>2</sup>, R Swaminathan<sup>2</sup> and B Lokanadham<sup>2</sup> <sup>1</sup>Indian Institute of Astrophysics, Bangalore 560034, India <sup>2</sup>Astronomy Department, Osmania University, Hyderabad 500007, India Abstract Speckle interferometric technique has been used to obtain a series of short exposure images of the collision of comet Shoemaker-Levy 9 with Jupiter during the period of July 17-24, 1994 using the Nasmyth focus of 1.2 meter telescope of Japal-Rangapur Observatory, Hyderabad. The technique of Blind Iterative Deconvolution (BID) was used to remove the atmospherically induced point spread function (PSF) from these images to obtain diffraction limited informations of the impact sites on Jupiter. Key words: Speckle Imaging, Image Reconstruction, Jupiter, Shoemaker-Levy 9 1. Introduction The impact of the collision of the comet Shoemaker-Levy 9 (1993e) with the gaseous planet Jupiter during the period 16th.-22nd. July, 1994, has been observed extensively worldwide, as well as from the Hubble space Telescope. Several observatories in India too had planned observations of the crash phenomena starting from the observations in the visible part of the electro magnetic spectrum to the radio frequencies (Cowsik, 1994). As a part of the programmes, we had developed an interferometer to record the images of the collision of the fragments of Comet Shoemaker-Levy 9 (SL 9) with Jupiter during the period 17-24th. July, 1994, with a goal of achieving features with a resolution of 0.3-0.5 arc sec., in the optical band, using 1.2 meter telescope at Japal-Rangapur Observatory (JRO), Osmania University, Hyderabad. Though, monsoon condition prevailed over large part of the country, we were able to record more than 600 images of the entire planetary disk of Jupiter during the said period. In this paper, we describe the observational technique using interferometer, as well as the image processing technique used to restore the degraded images of Jupiter. 2. Observations The image scale at the Nasmyth focus (f/13.7) of 1.2 meter telescope of JRO, was enlarged by a Barlow lens arrangement (Saha et al., 1987, Chinnappan et al., 1991). The set up was modified to suit to requirement of sampling 0.11 arc sec/pixel of the CCD (at 0.55 $`\mu `$) which is essentially the diffraction limit of the said telescope. A set of 3 filters were used to image Jupiter, viz.,: (i) centered at 5500 $`\AA `$, with FWHM of 300 $`\AA `$, (ii) centered at 6110 $`\AA `$, with FWHM of 99 $`\AA `$, and (iii) RG9 with a lower wavelength cut-off at 8000 $`\AA `$. A 1024$`\times `$1024 pixel water cooled CCD with a pixel size 22$`\mu `$ was used as a detector. 50 speckle-grams were sequentially recorded (each of 100 m sec exposure) in each of the 3 filters. The exposure time was chosen to obtain a good signal-to noise ratio. Since the smearing due to the equatorial rotation of Jupiter is about 0.15 arc sec/min., one can afford to accumulate speckle-grams for 2-3 minutes, if one expected to attain a resolution of 0.5 arc sec. In this experiment, we have recorded 10 speckle-grams/min. Therefore, 20-30 frames with good enough signal-to-noise ratio at the desired spatial frequencies are required to perform speckle reconstruction. 600 images were recorded on July 17, 1994 soon after the fragment E of the Comet SL-9 collided with Jupiter. On July 24, 1994, 80 more images were recorded. A liquid nitrogen cooled 512$`\times `$512 CCD was used to record 3 images of Jupiter in integrated light on July 22, 1994. 3. Data Processing Atmospherically induced phase fluctuations distort incoming plane wave-fronts from the distant objects which reach the entrance pupil of telescope with patches of random excursions in phase. Such phase distortions restrict the effective angular resolution of most telescopes to 1 second of arc or worse. Speckle interferometry (Labeyrie, 1970) recovers the diffraction limited spatial Fourier spectrum and image features of the object intensity distribution from a series of short-exposure ($`<`$ 20 m sec.) images. Schemes like Knox-Thompson algorithm (Knox-Thompson, 1974), triple correlation (Lohmann et al., 1983) have been successfully employed to restore the Fourier phase of an extended object. All these schemes require statistical treatment of a large number of images. Often, it may not be possible to record a large number of images within the time interval over which the statistics of the atmospheric turbulence remains stationary. There are a number of schemes, viz., Maximum Entropy Method (Jaynes, 1982), CLEAN algorithm (Hogbom, 1974) and Blind Iterative Deconvolution (BID) technique (Ayers and Dainty, 1988) being applied to restore the image using some prior information about the image. Here, we employed a version of BID developed by P. Nisenson (Nisenson, 1991), on degraded images of Jupiter. In this technique (see Bates and McDonnell, 1986), the iterative loop is repeated enforcing image-domain and Fourier-domain constraints until two images are found that produce the input image when convolved together. The image-domain constraint of non-negativity is generally used in iterative algorithms associated with optical processing to find effective supports of the object and or point spread function (PSF) from a speckle-gram. Here, the Weiner filter was used to estimate one function from an initial guess of the PSF. The algorithm has the degraded image $`c(x,y)`$ as the operand. An initial estimate of the point spread function (PSF) $`p(x,y)`$ has to be provided. The degraded image is deconvolved from the guess PSF by Wiener filtering, which is an operation of multiplying a suitable Wiener filter (constructed from the Fourier transform $`P(u,v)`$ of the PSF) with the Fourier transform $`C(u,v)`$ of the degraded image as follows $`O(u,v)=C(u,v)\frac{P^{}(u,v)}{P(u,v)P^{}(u,v)+N(u,v)N^{}(u,v)}`$ where $`O`$ is the Fourier transform of the Deconvolved image and $`N`$ is the noise spectrum. This result $`O`$ is transformed to image space, the negatives in the image are set to zero, and the positives outside a prescribed domain (called object support) are set to zero. The average of negative intensities within the support are subtracted from all pixels. The process is repeated until the negative intensities decrease below the noise. A new estimate of the PSF is next obtained by Wiener filtering the original image $`c(x,y)`$ with a filter constructed from the constrained object $`o(x,y)`$. This completes one iteration. This entire process is repeated until the derived values of $`o(x,y)`$ and $`p(x,y)`$ converge to sensible solutions. 4. Results The flat field corrections, as well as bias subtractions were made for all the Jupiter images acquired on 17th. and 24th. July ’94, using IRAF image processing package and analyzed on SPARC ultra Workstation. The images were converted to specific formats to make them IRAF compatible. The results for the Jupiter images obtained on 17th. July ’94 and on 24th. July ’94 were arrived at after 150 iterations. Since the combined PSF of the atmosphere and the telescope varies as a function of time, the value of support radius of the PSF had been chosen accordingly. The value of the Weiner filter parameters also had chosen according to the intensity of each of the images differently. Figure 1 shows the speckle-gram of the Jupiter obtained on 24th. July ’94, through the green filter centered at 5500 $`\AA `$, with FWHM of 300 $`\AA `$. The satellite Io can be seen on top left. Care has been taken to avoid the satellite while reconstructing the images. Figure 2 shows the deconvolved image of the same. The complex structure of the spots were identified and compared with Hubble space telescope observation. The chief result of this reconstruction is the enhancement in the contrast of spots. The complex spot at the East is due to impacts by fragments $`Q_2`$, R, S, D and G. The spot close to the centre is due to K and L impacts. Figure 3 depicts the reconstructed PSF. 5. Discussion and Conclusions The uniqueness and convergence properties of the Deconvolution algorithm are uncertain for the evaluation of the reconstructed images if one uses BID directly. The support radius of the PSF was estimated from the observations around the same time. The present scheme of BID has been tested by reconstructing the Fourier phase of the computer simulated convolved functions of binary star and the PSF caused by the atmosphere and the telescope which was used as an input. It is found that this software preserves the photometric quality of the reconstructions. The same software was used to retrieve the Fourier phase of two binary stars obtained at the Cassegrain end of the 2.34 meter Vainu Bappu Telescope (VBT), at Vainu Bappu Observatory (VBO), Kavalur, India (Saha and Venkatakrishnan 1997). The authors have found the magnitude difference for the reconstructed objects compatible with the published values. In the case of compact sources, it is essential and possible to put a support constraint on the object obtained from the auto-correlation. Whereas, in this case of an extended object of a complex structure, the auto-correlation is of not great help to constrain the individual features. Convergence could not be obtained in the absence of any constraints. The estimated support radius of the PSF was utilized in this case and a successful convergence was obtained. Thus in the case of complex objects, the prior knowledge of the PSF support radius seems to be vital for the reconstructions. Though the chief problem of this software is that of convergence, it is indeed an art to decide when to stop the iterations. The results are also vulnerable to the choice of various parameters like the support radius, the level of high frequency suppression during the Wiener filtering, etc. The availability of prior knowledge on the PSF, in this case, of the degraded image was also found to be very useful. It is to be seen how the convergence could be improved (cf. Jefferies and Christou, 1993). For the present, it is noteworthy that such reconstructions are possible using single speckle frames. Acknowledgments: The authors are grateful to Prof. R Cowsik, Director, Indian Institute of Astrophysics, Bangalore for the encouragement during execution of the project, as well as to Dr. P. Nisenson of Center for Astrophysics, Cambridge, USA, for the BID code as well as for useful discussions. The personnel of the mechanical division of IIA, in particular Messrs F Gabriel, K Sagayanathan and T Simon, helped in the fabrication of the instrument. The help rendered by B Nagaraj Naidu of electronic division of IIA ,Bangalore and M J Rosario of VBO, Kavalur, during the observations are gratefully acknowledged. References Ayers G.R. & Dainty J.C., 1988, Optics Letters, 13, 547. Bates R.H.T. & McDonnell M.J., 1986, ”Image Restoration and Reconstruction”, Oxford Engineering Science 16, Clarendon Press, Oxford. Chinnappan V., Saha S.K. & Fassehana, 1991, Kod. Obs. Bull. 11, 87. Cowsik R., 1994, Current Sci., 67 , . Hogbom J., 1974, Ap.J. Suppl., 15, 417. Jaynes E.T., 1982, Proc. IEEE, 70, 939. Jefferies S.M. & Christou J.C., 1993, ApJ., 415, 862. Knox K.T. & Thompson B.J., 1974, Ap.J. Lett., 193, L45. Labeyrie A., 1970, A and A, 6, 85. Lohmann A.W., Weigelt G. & Wirnitzer B., 1983, Appl. Opt., 22, 4028. Nisenson P., 1991, in Proc. ESO-NOAO conf. on High Resolution Imaging by Interferometry ed. J M Beckers & F Merkle, p-299. Saha S.K. & Venkatakrishnan P., 1997, Bull. Astron. of India (To appear). Saha S.K., Venkatakrishnan P., Jayarajan A.P. & Jayavel N., 1987, Current Science, 56, 985. Figure captions 1. Fig. 1: 1a, 1b, 1c show the greyscale, 2-D contour and 3-D speckle-gram of Jupiter obtained on 24th. July ’94 respectively. The numbers on the axes of the 2-D contour map denote pixel numbers with each pixel being equal to 0.1 arc sec. The satellite Io can be seen on top left. 2. Fig. 2: 2a, 2b, 2c are for the Deconvolved image of the Jupiter on 24th. July ’94. 3. Fig. 3: 3-D map of the reconstructed Point Spread Function (PSF).
no-problem/9910/astro-ph9910194.html
ar5iv
text
# Non-equilibrium Kinematics in Merging Galaxies ## 1. Evolution of Velocity Moments in Merging Galaxies The global kinematics of merging galaxies are often used to infer dynamical masses, or study evolution of merger remnants onto the fundamental plane (e.g., Lake & Dressler 1986; Shier et al. 1994; James et al. 1999). In systems well out of equilibrium, these measurements may not yield true estimates of the velocity dispersion of the system. For example, in a merger where the nuclei have not yet coalesced, much of the kinetic energy of the the system may be in bulk motion of the nuclei, rather than in pure random stellar motions. Such conditions could in principle lead to systematic errors in dynamical masses or fundamental plane properties. Equally important is the timescale over which any merger-induced kinematic irregularities are mixed away through violent relaxation or mixing. To examine the evolution of the kinematic moments of a galaxy merger, Figure 1 shows the projected velocity moments in an N-body model of an equal mass galaxy merger. The data is constructed to simulate observations with modest spatial resolution of $``$ 1 kpc. The low order moments of the velocity distribution very quickly evolve to their final value – violent relaxation in the inner regions is extremely efficient. Even during the final coalescence phase, the velocity dispersion of the merger is essentially unchanging, except for extreme situations where the remnant is viewed almost exactly along the orbital plane. This analysis suggests that studies which place mergers on the fundamental plane are not excessively compromised by possible kinematic evolution of the remnants; instead, luminosity evolution should dominate any changes in the properties of the remnant. At larger radius, the merger remnant possesses a significant rotational component, as transfer of orbital angular momentum has spun up the remnant (e.g., Hernquist 1992). The higher order velocity moments (skew and kurtosis) continue to evolve for several dynamical times, particularly in the outer portions of the remnant where the mixing timescale is long. These higher order moments also vary significantly with viewing angle, reflecting the fact that the merger kinematics maintain a “memory” of the initial orbital angular momentum. As high angular momentum material streams back into the remnant from the tidal debris, incomplete mixing results in extremely non-gaussian line profiles. ## 2. Local Stellar Kinematics and Ghost Masses On smaller scales, however, measurements of local velocity dispersion can give erroneous results if the system has not yet relaxed. Figure 2 shows the merger model “observed” at higher spatial resolution at a time when the nuclei are still separated by a few kpc. Looking along the orbital plane, the nuclei still possess a significant amount of bulk motion. Measured on small scales, this bulk motion shows as a gradient in the projected radial velocity across the two nuclei. Perhaps more interesting is the rise in projected velocity dispersion between the nuclei, where the the velocity profile shows a single broad line with dispersion $``$ 30% higher than in the nuclei themselves. A similar rise is seen between the nuclei of NGC 6240 (Tezca et al. 1999 in prep, referenced in Tacconi et al. 1999), where a central gas concentration exists. The simulations here indicate that such features can arise in double nucleus systems even when no central mass exists, and suggest that dynamical masses inferred this way can be significantly overestimated. In this case, the full analysis of the line profiles results in a better understanding of the dynamical conditions. The gradient across the nuclei again is an indicator of large bulk motions, and the shape of the line profile is rather flat-topped (negative kurtosis), exactly what is expected from the incomplete blending of two separate line profiles. Here, of course, the increase in velocity dispersion is due simply to the projected overlap of the nuclei, but the complete line profile is needed to unravel the complex dynamics. ## 3. Ionized Gas Kinematics and Starburst Winds Finally, while gas kinematics are perhaps the easiest to measure, they give the most ambiguous measurement of the gravitational kinematics of a merging system. Aside from the problems of the evolving gravitational kinematics and line-of-sight projection effects, gas kinematics are also subject to influences such as shocks, radial inflow, and starburst winds. All of these conspire to make a very confusing kinematic dataset. A case in point is the ultraluminous infrared galaxy NGC 6240. This starburst system has a double nucleus separated by $``$ 1.5<sup>′′</sup> and is clearly a late stage merger. Based on H$`\alpha `$ velocity mapping of this system, Bland-Hawthorn et al. (1991) proposed that a $`10^{12}M_{}`$ black hole exists well outside the nucleus, at a projected distance of 6 kpc. The major piece of evidence supporting this claim was a sharp gradient in the ionized gas kinematics, suggestive of a rapidly rotating disk. To study this object in more detail, we (van der Marel et al. in prep) have initiated a program using HST to obtain imaging and longslit spectroscopic data for the inner regions of NGC 6240. Figure 3 shows an F814W image of the center of NGC 6240, along with a narrow band image centered on H$`\alpha `$+\[NII\] (taken using the F673N filter, which for NGC 6240 fortuitously sits on redshifted H$`\alpha `$). The narrow band image shows a clear starburst wind morphology in the ionized gas. Overplotted on Figure 3b is the position of the putative black hole, along with the position angle of the observed velocity gradient. Interestingly, the position lies directly along an ionized filament from the starburst wind, with the kinematic gradient directed orthogonal to the filament’s direction. While our narrow-band data do not go deep enough to study the detailed distribution of ionized gas immediately surrounding the proposed black hole, the image certainly suggests that the observed kinematics may be strongly influenced by the starburst wind, indicating that the black hole may not be real. The strong gradient that was attributed to a black hole may instead be due to kinematic gradients in the starburst wind, or even simple geometry of the wind filament projecting on top of background system emission. We have follow-up STIS spectroscopy planned to further study the complex kinematics in this intriguing system. ### Acknowledgments. This work was sponsored in part by the San Diego Supercomputing Center, the NSF, and STScI. I thank Rebecca Stanek and Sean Maxwell for help with data analysis. ## References Bland-Hawthorn, J., Wilson, A.S., & Tully, R.B. 1991, ApJ, 371, L19. Hernquist, L. 1992, ApJ, 400, 460 James, P., et al. , astro-ph/9906276 Lake, G., & Dressler, A. 1986, ApJ, 310, 605 Shier, L.M., Rieke, M.J., & Rieke, G.H. 1994, ApJ, 433, L9 Tacconi, L.J., et al. , astro-ph/9905031
no-problem/9910/cond-mat9910078.html
ar5iv
text
# Criticality of the “critical state” of granular media: Dilatancy angle in the Tetris model ## I Introduction Granular materials give rise to a number of original phenomena, which mostly result from their peculiar rheological behavior. Even using the most simple description of the grains (rigid equal-sized spherical particles) a granular system displays a rather complex behavior which shows that the origin of this rheology has to be found at the level of the geometrical arrangement of the grains. Guided by these considerations, models have been proposed to account for the geometrical constraints of assemblies of hard-core particles . The motivation of these models is not to reproduce faithfully the local details of granular media, but rather to show that simple geometrical constraints can reproduce under coarse-graining some features observed in real granular media. Along these lines, one of the most impressive examples is the “Tetris” model which, in its simplest version, is basically a spin model with only hard core repulsion interactions. This model has been introduced in order to discuss the slow kinetics of the compaction of granular media under vibrations. In spite of the simplicity of the definition of the model, the kinetics of compaction has been shown to display a very close resemblance to most of the experimentally observed features of compaction and segregation. Our aim is here to consider again the Tetris model and to focus on a basic property of the quasi-static shearing of a granular assembly. It is well known since Reynolds that dense granular media have to dilate in order to accommodate a shear, whereas loose systems contract. This observation is important since it gives access to one of the basic ingredients (the direction of the plastic strain rate) necessary to describe the mechanical behavior in continuum modeling. The dilatancy angle is defined as the ratio of the rate of volume increase to the rate of shearing. Denoting with $`\epsilon _{xy}`$ the component $`xy`$ of the strain tensor $`\epsilon `$, Fig.(1) illustrates an experiment where a shear rate $`\dot{\epsilon }_{xy}`$ is imposed together with a zero longitudinal strain rate $`\dot{\epsilon }_{xx}=0`$, and the volumetric strain rate (here vertical expansion) $`\dot{\epsilon }_{yy}`$ is measured. The direction of the velocity of the upper wall makes an angle $`\psi `$ with respect to the horizontal direction. This angle is called the dilatancy angle, $`\psi `$. In this particular geometry we have $$\mathrm{tan}(\psi )=\frac{\dot{\epsilon }_{yy}}{\dot{\epsilon }_{xy}}$$ (1) More generally, the tangent of the dilation angle is the ratio of the volumetric strain rate ($`\mathrm{tr}(\dot{\epsilon })`$) to the deviatoric part of the strain rate. Numerous experimental studies have confirmed the validity of such a behavior, and have lead to extensions such as what is known in soil mechanics as the “critical state” concept. Assuming that the incremental (tangent) mechanical behavior can be parametrized using only the density of the medium, $`\rho `$, a loose medium will tend under continuous shear towards a state such that no more contraction takes place, i.e. it will assume asymptotically a density $`\rho _c`$ such that $`\psi (\rho _c)=0`$. This state is by definition the “critical state”. Conversely, if the strain were homogeneous, a dense granular media would dilate until it reached the critical state density $`\rho _c`$. However, for dense media, the strain may be localized in a narrow shear band which may allow a further shearing without any more volume change so that the mean density may remain at a value somewhat higher than the critical value. Recent triaxial tests in a scanner apparatus have however shown that in the shear bands the density of the medium was quite comparable to the critical density, thus providing further evidence for the validity of the critical state concept. The word “critical” used in this context has become the classical terminology, but it has no a priori relation with any kind of critical phenomenon in the statistical physics vocabulary . One of the results presented in this article is to show that indeed the critical state of soil mechanics is also a critical point in the sense of phase transitions, for the Tetris model considered here. ## II Model and definition of dilatancy A group of lattice gas models in which the main ingredient is the geometrical frustration has been introduced recently under the name Tetris . The Tetris model is a simple lattice model in which the sites of a square lattice can be occupied by (in its simplest version) a single type of rectangular shaped particle with only two possible orientations along the principal axis of the underlying lattice. A hard core repulsion between particles is considered so that two particles cannot overlap. This forbids in particular that two nearest neighbor sites could be both occupied by particles aligned with the inter-site vector. An illustration of a typical admissible configuration is shown schematically in Fig.(2). More generally one can consider particles that move on a lattice and present randomly chosen shapes and sizes. The interactions in the system obey to the general rule that one cannot have particle overlaps. The interactions are not spatially quenched but are determined in a self-consistent way by the local arrangements of the particles. The definition of the dilation angle as sketched in Fig.(1) is difficult to implement in practice in the Tetris model due to the underlying lattice structure which defines the geometric constraints only for particles on the lattice sites, and not in the continuum. We may however circumvent this difficulty through the following construction illustrated in Fig.(3). We consider a semi-infinite line starting at the origin and oriented along one of the four cardinal directions. This line is (and all the sites attached to it are) pushed in one of the principal directions of the underlying square lattice by one lattice constant. In the following, we will consider only a displacement perpendicular to the line, although a parallel displacement may also be considered. As this set of particles is moved, all other particles which may overlap with them are also translated with the same displacement. In this way, we determine the set $`𝒟`$ of particles which moves. In the sequel, we will show that this domain is nothing but a directed percolation cluster grown from the line. Anticipating on the following, the mean shape of $`𝒟`$ will be shown to be a wedge limited by a generally rough boundary whose mean orientation forms an angle $`\psi `$ with the direction of motion. The angle $`\psi `$ can be shown to be exactly equal to the dilatancy angle as defined previously. Exploiting the non-overlap constraint, we may simply determine the rule for constructing the domain $`𝒟`$. Let us choose the particular case of a displacement in the direction $`(1,0)`$, and consider a non-empty site $`(i,j)`$ which is displaced. The particles which may have to be displaced together with site $`(i,j)`$ can be identified easily: -If the particle in $`(i,j)`$ is horizontal: $``$ $`(i+1,j)`$ if the site is occupied by a particle with any orientation. $``$ $`(i+2,j)`$ if the site is occupied by a horizontal particle. \- If the particle in $`(i,j)`$ is vertical: $``$ $`(i+1,j)`$ if the site is occupied by a particle with any orientation. $``$ $`(i+1,j\pm 1)`$ if the site is occupied by a vertical particle. Using these rules, it is straightforward to identify the cluster of particles $`𝒟`$. The model thus appears to be a directed percolation problem with a mixed site/bond local formulation. Thus unless long range correlations are induced by the construction of the packing, the resulting problem will belong to the universality class of directed percolation. The density of particles, $`p[0,1]`$, in the lattice plays the role of the site or bond presence probability, i.e. the control parameter of the transition. Let us recall, for sake of clarity, some properties of the two-dimensional directed percolation. For $`p<p_c`$ (where $`p_c`$ is the directed percolation threshold), a typical connected cluster extends over a distance of the order of $`\xi _{//}`$ in the parallel direction (the preferential direction) and a distance $`\xi _{}`$ in the perpendicular direction. For $`p>p_c`$ there appears a directed percolating cluster which extends over the whole system. This cluster possesses a network of nodes and compartments. Each compartment has an anisotropic shape similar to the connected cluster below $`p_c`$, characterized by $`\xi _{//}`$ in the parallel direction and $`\xi _{}`$ in the perpendicular direction. On both sides of the percolation transition, the two lengths present the power-law behavior $`\xi _{//}|pp_c|^{\nu _{//}}`$ and $`\xi _{}|pp_c|^\nu _{}`$. ## III Monocrystal Let us first examine a simple geometrical packing. There exist (two) special ordered configurations of particles such that the density can reach unity (one particle per site). This corresponds to a perfect staggered distribution of particle orientations. Thus a simple way of continuously tuning the density is to randomly dilute one of these perfectly ordered states. In this case, if a site is occupied by a particle, its orientation is prescribed. Therefore the above rules can be easily reformulated as a simple directed site percolation problem in a lattice having a particular distribution of bonds (up to second neighbors). Fig.(4) illustrates the specific distribution of bonds corresponding to such an ordered state. For $`p=1`$, suppose that the initial seed is $`(0,j)`$ for $`j0`$ and this line is pushed in the $`x`$ direction. Then the infinite cluster is the set of sites $`(i,j)`$ such that $`ji`$, for a vertical spin at the origin. Thus moving the semi-infinite line (seed) introduces vacancies in the lattice which was initially fully occupied. The system dilates and its dilation angle is $`\psi _1=\pi /4`$. As $`p`$ is reduced, the orientation of the boundary changes up to the stage where it becomes parallel to the $`x`$ axis for $`p=p_R`$. At this point the dilatancy is zero. A motion is possible without changing the volume. This point corresponds precisely to the directed percolation threshold (using the precise rules defined above). From the theory of directed percolation, we can directly conclude that the behavior of the dilatancy angle $`\psi `$ in the vicinity of $`p_R`$ obeys $$\mathrm{tan}(\psi )(pp_R)^{\nu _{//}\nu _{}}$$ (2) where the correlation length exponents are $`\nu _{//}1.732`$ and $`\nu _{}1.096`$ independently of the lattice used. A further decrease of $`p`$ leads to a subcritical regime where only a finite cluster is connected to the initial seed. Only a finite layer of thickness $`\xi _{//}(p_Rp)^{\nu _{//}}`$ along the $`y`$-axis is mobilized. This means that it not possible to define in the same way the dilation angle for $`p<p_R`$ (negative angles). What happens in practice is that for $`p<p_R`$ the shearing produces a compaction of the system in front of the semi-infinite line pushing the system. Fig.(5) summarizes schematically the situation for all the values of $`p`$. The horizontal line corresponds to $`p=p_R`$ and a zero dilation angle. We performed numerical simulations of this problem using a transfer matrix algorithm which allowed to generate system of size up to $`10^4\times 310^4`$. These large system sizes allowed for a very accurate determination of the dilatancy angle as a function of the occupation probability (density) $`p`$. Fig.(5) shows the boundaries of the domains $`𝒟`$ for $`p=0.58`$, close to the directed percolation threshold $`p_R`$, and $`p=0.7`$. Fig.(7) shows the estimated dilatancy angle as a function of the density of particles. Angles are evaluated on lattice of size $`10^4\times 310^4`$ and are averaged over $`100`$ different realizations. The onset of dilatancy is thus estimated to be $$p_R=0.583\pm 0.001$$ (3) The singular variation of $`\psi `$ close to the onset of dilatancy Eq. 2 has been checked to be consistent with our numerically determined values as shown by the dotted curve in Fig.(7) which corresponds to the expected critical behavior. ## IV Random sequential deposition It is worth emphasizing that the directed percolation problem associated with the dilatancy angle determination is simply a site percolation problem in the above special case where each site is assigned only one possible orientation for the particle. In the more general case, the way the cluster is grown locally depends on the specific orientation of the particle. Thus it is a mixed site/bond percolation problem. Therefore, depending on the way the system has been built, the onset for dilatancy, $`p_R`$, will vary. This is illustrated by constructing the system through a random deposition process, i.e. differently from the above procedure. The algorithm used to construct the system is the following. At each time step, an empty site and a particle orientation are chosen at random. If the particle can fit on this site (without overlap with other particles), then the site is occupied, otherwise a new random trial is made. This is similar to the “random sequential” problem often studied in the literature, here adapted to the Tetris model. This procedure leads to a maximum density of particles around $`p_{max}0.75`$ above which it becomes impossible to add new particles. Differently from the previous case, in the random sequential deposition simulations could not have been performed using the transfer matrix algorithm and thus we generated systems of size up to $`500\times 1500`$. We studied the dilatancy angle in such systems stopping the construction at different $`p`$ values, averaging for each $`p`$ over $`100`$ realizations. Fig.(8) shows the estimated dilatancy angle which is definitely different from the data of Fig.(7). In particular the onset of dilatancy is determined to be $$p_R=0.70\pm 0.01$$ (4) However, this procedure is not expected to induce long range correlations in the particle density or orientation, and thus, we expect that the universality class of the model remains unchanged. In particular, the critical behavior Eq. 2 is expected to hold with the same exponents. Although the system sizes are much smaller in the present case, our data are consistent with such a law. ## V Ballistic deposition under gravity Finally we would like to point out another property related to the texture of the medium. Up to now the two procedures followed to generate the packing of particles did not single out any privileged direction. We now construct the packing by random deposition under gravity. Particles with a random orientation are placed at a random $`x`$ position, and large $`y`$. Then the particle falls (along $`y`$) down to the first site where it hits an overlap constraint. In this way, the packing assumes a well defined bulk density $`p0.8`$. We used this construction procedure to generate lattices of size $`500\times 1500`$ (averaged over 500 samples) cutting out the top part of the lattice which is characterized by a very wide interface and a non-constant density profile. On this configuration (and thus at a fixed density) we measured the dilatancy angle for different orientations of the imposed displacement on the wall with respect to “gravity”. Table 1 shows the resulting dilatancy angles obtained for the same density $`p=0.81`$ using different constructions: * the dilution of the ordered state, * the sequential deposition, (in both of these cases the dilatancy angle does not depend on the orientation of the motion). It is worth noticing how a direct comparison between this case and the others is not possible because with the Random Sequential Deposition one cannot obtain densities larger that $`0.75`$. * the ballistic deposition using a displacement along $`y`$ (against gravity), $`y`$ (along gravity), and $`x`$ (perpendicular to gravity). In the latter case, we could study the problem for two orientations of the semi-infinite line ($`x=0`$ and $`y>0`$ or $`y<0`$). We checked that the dilatancy angle was not dependent on this orientation. The data reported in Table 1 indeed shows that the dilatancy angle can be dependent on the direction of the imposed displacement. This measurement is thus sensitive to texture effects. As a side result, we note that the usual characterization of the dilatancy in terms of a single scalar (angle), albeit useful, is generally an oversimplification for textured media. Indeed, a number of studies have revealed that granular media (even consisting of perfect spheres) easily develop a non isotropic texture as can be shown by studying the distribution of contact normal orientations. This remark is almost obvious from a theoretical point of view, however, few attempts have been made to incorporate these texture effects in the dilatancy angle or even more generally in the rheology of granular media. ## VI Conclusion We have shown that dilatancy can be precisely defined in the Tetris model, and that it is a function of the density as it is well known for granular media. The onset of dilatancy, i.e. the “critical state” of soil mechanics, has been shown to corresponds to a directed percolation threshold density, hence justifying the term “critical” in this expression. Form this point of view it is important to stress how any comparison of our approach with real granular materials should be done in the neighborhood of the critical point where we expect a largely universal (in the sense of critical phenomena in the statistical physics vocabulary) behavior. Using different lattices we expect, for instance, to recover the same critical behavior (same exponents) but not the same values for the critical density. To our knowledge this is the first time that such a mapping is proposed. We have also shown that the dilatancy angle was not only determined by the density but also by the packing history. Finally, we have shown from a simple anisotropic construction that texture affects the dilatancy angle, even for a fixed density. ###### Acknowledgements. We wish to thank S. Krishnamurthy, H.J. Herrmann and F. Radjai for useful discussions related to the issues raised in this study. This work has been partially supported from the European Network-Fractals under contract No. FMRXCT980183. ## VII figure captions * Schematic view of shearing of granular media in a shear cell. The upper part of the cell moves only if the medium dilates so that the direction of the motion forms an angle $`\psi `$, the dilatancy angle, with the horizontal direction. * Illustration of the Tetris model. The sites of a square lattice can host elongated particles shown as rectangles. The width and length of the particles induce geometrical frustrations. * Procedure used to define the dilation angle. All particles located on a semi-infinite line (the particles enclosed in the round-edge rectangle on the left-hand side of the lattice) are moved by one lattice unit in the horizontal direction (shown by the arrows). Using the hard-core repulsion between particles, we determine the particles which are pushed (shown in black) and those which may stay in place (grey). For each column we consider the lowest (in general the most external) black site (The gray particles within the cluster of black particles do not play any role in the determination of the dilation angle). The curve connecting all these points defines the profile of the pushed region. The line connecting the first and the last points of this profile determines the angle, $`\psi `$, with respect to the direction of motion. This angle provides the value of the dilation angle for the particular realization considered. The dilation angle is actually measured performing an average over a large number of realizations. * Lattice over which directed site percolation is taking place. The arc bonds connect second neighbors along the $`x`$ axis (horizontal). * Schematic representation of the mobilized region in the shearing procedure. Starting from $`p=1`$, where one has a dilation with an angle of $`\pi /4`$, the dilation angle reduces until $`0`$ (for $`p=p_R`$). A further reduction of $`p`$ brings the system in a subcritical regime where only a finite layer of thickness $`\xi _{//}(p_Rp)^{\nu _{//}}`$ along the $`y`$-axis is mobilized and the system compactifies. * Shapes of the boundaries of two clusters for (a) $`p=0.58`$ (close to the threshold for a vanishing dilatancy) and (b) $`p=0.7`$. The clusters mobilized are above and in both cases the line interpolating linearly between the first and the last point the boundaries defines the dilation angle. * Dilatancy angle as a function of the density $`p`$ in the case of a random dilution of the perfectly ordered Tetris model. The dashed line represents a fit obtained using Eq. (2) with $`p_R=0.583\pm 0.001`$. The relative errors diverge at the transition. * Dilatancy angle as a function of the density $`p`$ in the case of a random sequential deposition. The dashed line represents a fit obtained using Eq. (2) with $`p_R=0.70\pm 0.01`$. The relative errors diverge at the transition.
no-problem/9910/astro-ph9910225.html
ar5iv
text
# WFPC2 Imaging of Young Clusters in the Magellanic CloudsBased on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., (AURA), under NASA Contract NAS 5-26555. ## 1 Introduction In this paper we present new UV, visual and H$`\alpha `$ photometry obtained with the WFPC2 camera on board the HST, of NGC 330, NGC 1818, NGC 2004 and NGC 2100, four young populous clusters in the Magellanic Clouds with main sequence turnoff masses in the range of 9-12 M. A key feature of the evolution of 8-20 M stars is the treatment of convection and the presence and degree of the extension of the convective core. This paper presents the observational data that will form the basis of a quantitative investigation for the presence of convective core extension, and its magnitude, as constrained by these clusters. The further analysis is to be found in a later paper (Keller et al. 1999c ). The fundamental basis for our understanding of the evolution of stars of various masses is derived from the comparison between the observed colour-magnitude diagrams (CMDs) of star clusters and the predictions of stellar evolution theory in the forms of evolutionary tracks and isochrones. This interactive process has provided much insight into the physics of stellar evolution. Even though the grounds of stellar evolution are well understood, there remain a number of points of uncertainty that may potentially impact not only on current estimates of evolutionary parameters of stellar clusters, but also on the more sophisticated results that use such parameters as their basis. One such point of contention is the physical causation of the observed extension of the main sequence (MS) beyond that predicted by no overshoot, non-rotating stellar evolutionary models. Convective core overshoot is commonly proposed as the mechanism for the extension of the convective core. However, more recently the role of rotation has been recognised as offering a more natural way to bring about the same end result, i.e. increased internal mixing, (Maeder maeder98 (1998), Langer & Heger langer98 (1998) and Talon et al. talon97 (1997)). However with these models in their infancy, our discussion here is conducted within the convective core overshoot paradigm. On the theoretical front, discussion of the efficiency of convective core overshoot has been addressed by several authors with different results, ranging from negligible to substantial (see e.g. Bressan bressan81 (1981)). In the absence of hydrodynamical models the amount of mixing is only weakly constrained by physical arguments. In order to ascertain the correct amount to apply within stellar evolutionary models we must infer this amount from the populations within star clusters. The difficulty has been to find a sufficiently large, young and coeval population in which to search for signs of overshooting. Galactic clusters of comparable age to those of the present study contain small numbers of stars and individually offer little insight. The clusters of the present study contain upwards of 10$`\times `$ the mass of their galactic counterparts. Within these clusters the numbers of stars are such that statistically meaningful confrontations with evolutionary theory are possible. The promising nature of the four clusters studied in the present work has lead to several previous ground-based and IUE studies. These studies have revealed several anomalies with evolutionary models which do not include overshooting. Problematic features include: the observed and predicted temperatures of the MS turnoffs differ by up to several thousand degrees; the observed and predicted turnoff luminosities also disagree by as much as 1 mag in $`V`$ (Caloi and Cassatella caloi95 (1995), Caloi et al. caloi93 (1993)); the relative luminosity of the red giants and the turnoff are not consistent with model predictions for a coeval population (Caloi and Cassatella caloi95 (1995)). These features are indicative of some degree of convective overshoot within the population. However, further conclusions from these studies are limited since most have been restricted to the brightest members, the only stars for which accurate effective temperatures were attainable. They extend to just below the MS turnoff. The data we present here are a significant extension to these previous studies. The superior resolution of the WFPC2 camera and far-UV coverage has enabled us to extract accurate effective temperatures for a large sample of stars up to 4 magnitudes in $`V`$ below the MS turnoff. This forms a database suitable for quantitative investigation of convective core overshoot. ## 2 Observations and Data Reduction The data presented in this paper were obtained by the HST using the WFPC2 on June 1997. Exposures in F555W, F160BW and F656N were obtained. Table 1 details the exposures obtained. The F160BW filter ($`\mathrm{\Delta }\lambda `$= 446Å, $`\lambda _e`$ = 1491Å) is a wideband filter with negligible red-leak. The imaging and transmission properties of the F160BW filter have been described by Watson et al. (watson94 (1994)). F656N is a narrow band filter($`\mathrm{\Delta }\lambda `$= 22Å) centred on the H$`\alpha `$ line. This filter was included to identify those stars showing H$`\alpha `$ emission, namely Be stars, of which the four clusters discussed here are known to have large populations (e.g. Keller et al. 1999a ). Biretta and Baggett (biretta (1998)) have examined the noise characteristics of the far-UV flats used in the standard data pipeline. They find that the excessive noise in the F160BW flat field is a serious limitation on the F160BW photometry. We have reprocessed the F160BW data through the standard data pipeline (Holtzman et al. 1995a) for bias subtraction and flat fielding. Following Biretta and Baggett we have used a F255W flat instead of the F160BW. To account for the large-scale vignetting of the F160BW filter in the WFs we form a vignetting correction by taking the ratio F160BW flat/F255W flat then smoothing by a 20 pixel FWHM Gaussian function and then dividing this image into the data flattened with the F255W flat. In both F555W and F160BW multiple exposures were taken, the set of images were combined and cleaned of cosmic rays within the IRAF package using the GCOMBINE task. Care was taken when combining these frames that the central regions of individual stellar profiles were not truncated. When combining, the gcombine task forms the median (or the average in the case of two frames) of each pixel value after rejecting those values that are deemed statistically unreasonable. Such selection is made on the basis of the readnoise, gain and sensitivity noise. In some circumstances a side product of the cleaning/rejection process is that some stars show truncated intensity profiles. We suspect that this maybe due to subpixel shifts of the centroid of the stellar image between images in the combining process. When the PSF is undersampled, as is the case with the WFPC2, it is possible for the central pixel values of a stellar image to differ by several sigma from that expected from noise calculations between images due to small subpixel shifts. These suspect values are rejected, in all cases the lower pixel value is retained. This effect is pronounced amongst the brighter stars. We find a sensitivity noise of 0.1 is optimal within the GCOMBINE task for avoiding this effect and ensuring cosmic ray removal. The fields are relatively crowded in both F555W and F160BW. We found that photometry through a 3px radius (0.15″on PC, 0.3″on WF) aperture was optimal. Appropriate aperture corrections were made to the 0.5″standard adopted by Holtzman (1995b ). The sky brightness was determined from the median within a 5px wide annulus of inner radius 10px. The measured FWHM in the F160BW images varies significantly in all cameras in a radial manner; in its centre the FWHM is around 1.7px and at the edges $``$1.9px with a marked ellipticity. The measured FWHM for the F555W filter is around 1.6px (0.08″on PC, 0.16″on WF). The PSF in F555W does not vary significantly across the field. Tests with a range of apertures have reassured us that the variable PSF in F160BW does not introduce position dependent variations in the extracted photometry through our chosen aperture. F555W and F160BW magnitudes are reported in the conventional system based upon the spectrum of Vega. The zeropoints used were taken from STScI:WFPC2(STSci:WFPC2 (1997)). We have corrected our F160BW magnitudes for the attenuation due to adhering contaminants on the external aperture following the prescription of Holtzman et al (1995b ) and the contamination rate given for chips 1 and 3 in Whitmore (whitmore97 (1997)). The WFPC2 is subject to a variation in charge transfer efficiency across each detector. We have made a correction using a simple ramp model as described in Holtzman et al. (1995b ). A degree of geometric distortion is present in the WFPC2 camera (details in Whitmore whitmore97 (1997)). This is most severe in the F160BW filter. For simple aperture photometry as preformed here these distortions are not problematic, however to minimise the chance of incorrectly matching faint stars between the F160BW and F555W images it was found to be necessary to transform the coordinates of the F160BW frame to those of the F555W frame. Tables 1010, available from CDS, report the photometric results for the four clusters. ## 3 Photometric Uncertainties and Completeness As noted above, flat fields are a major contributor to the uncertainties in our photometry. Whitmore & Heyer (whit97 (1997)) have found that in the optical broad-band filters the uncertainties introduced into aperture photometry of point sources due to flat fields are of the order of 1.5% or less. This is not the case in the far-UV, here Biretta and Baggett (biretta (1998)) report that the F160BW flat in particular is very noisy. The RMS noise is found to be $``$20% near the centre of the field. A most demonstrative improvement offered by the adoption of the F255W flat is the reduction by $``$1/3 in the apparent width of the upper MS in the resultant CMDs. We have compared the photometry in the short and long exposures for both F555W and F160BW. This shows the internal accuracy of the photometry. We see that the internal accuracy is of the order of 0.03 mag for the brightest stars, ie. those with F555W$`<`$14.5, and rises to 0.1 for stars 4 mag. below the saturation level of the long exposure (F555W=19.0 and F160BW=16.5). We find no indication of a systematic dependence of the difference in magnitude between short and long exposures with magnitude. A number of photometric studies of these clusters exist in the literature. Many of the earlier works confine themselves to the outer extremities of the clusters. The first extensive study of these clusters is the $`B`$$`V`$ photographic study of Robertson (robo (1974)). CCD studies include Walker (walker92 (1992): NGC 330), Sebo & Wood (seb94 (1994): NGC 330), Bencivenni et al. (ben91 (1991): NGC 2004), Sagar et al. (Sagar91 (1991): NGC 2004 and NGC 2100) and Balona & Jerzykiewicz (bal92 (1992): NGC 2004 and NGC 2100). However many CCD based studies have taken previous studies, in particular that of Robertson (robo (1974)), as the basis of standardisation. Consequently there exist a limited number of external checks on our F555W photometry. In the case of the F160BW magnitudes there exists no straightforward means to quantify the uncertainties, we derive an estimate from indirect means in section 6. In deference to the frequent use of the magnitudes of Robertson in many intervening discussions we firstly examined our photometric results relative to those of Robertson. For the purposes of comparison we have transformed our F555W magnitudes to Johnson $`V`$ using the transformation of Holtzman et al. (1995b ). The results for all four clusters are shown in figure 1. It is not suprising to see that there is a much greater dispersion in Robertson’s photometry of stars close to the cluster core (open squares) where the degree of crowding is extreme, compared to those more radially distant (filled squares). If we restrict the comparison to those stars without close neighbours in Robertson’s outer B,C,D regions and $`V`$$`<`$14.0 we find a mean difference of +0.01$`\pm `$0.08 mag. The CCD-based photometry of Walker (walker92 (1992)) in NGC 330 is of high precision. We have included in figure 1 the comparison between the photometry of Walker and our own, here we see an agreement to -0.01$`\pm `$0.03 mag. This is quite satisfactory. We have evaluated our completeness limits by the addition of sets of artificial stars to the final frames for each colour. Using the IRAF ADDSTAR routine we added 100 stars and observed the number recovered by the standard reduction procedure (using DAOFIND). This was repeated 100 times. An examination of the results finds our photometry $`>`$90% complete to 19.5 in F555W, 17.5 in F160BW and 21.0 in F656N. This is consistent with the observed stellar luminosity function evident in figures 36 which continues to rise to F555W$``$19, but drops sharply due to incompleteness beyond F555W=19.5 ## 4 Be Star Detection The detection of the Be star population is made from the F555W-F656N, F160BW-F555W diagram. Stars with strong H$`\alpha `$ emission should appear to have a more positive F555W-F656N colour than their non-emission counterparts at similar F160BW-F555W colour. Given that a typical Be star has H$`\alpha `$ emission with a full width at half maximum of $``$7Å and peak H$`\alpha `$ flux 5 times the local continuum flux, a search of this kind with a 22Å filter is readily capable of detecting Be stars. Figures 2a-d. show the diagnostic diagrams. Using Fig. 2a. as a typical case, we note that most main sequence stars lie in a tight, almost horizontal band. The cool supergiants are omitted as they are saturated in H$`\alpha `$. Stars within this band do not have detectable H$`\alpha `$ emission. Above this band are a group of stars that clearly show significant H$`\alpha `$ emission: these are Be stars. The dispersion in F555W-F656N of normal MS stars about the central locus of the band is determined by the photometric errors in both the narrow band H$`\alpha `$ and F555W magnitudes. Both have similar signal to noise characteristics. This uncertainty will depend upon the magnitude of the star. We limit our Be star selection process to stars with F555W$`<`$ 19.0. In order to select a sample of Be stars, we draw a line in each of the F555W-F656N diagrams parallel to the sequence of non-emission line stars and at a distance of 0.5 magnitudes above it. The standard deviation in the F555W-F656N colour of the normal B stars below this cutoff is at most 0.23 (in the case of NGC 2100). Consequently the imposition of this cutoff excludes members of the band of normal main-sequence stars to at least a 2$`\sigma `$ level. Application of such a cutoff introduces a lower limit on the equivalent width of the Be stars detected by our survey. We estimate that this limit is of the order of that within Keller et al. ( 1999a ) of $``$9Å. Spectroscopy in NGC 330 (Keller et al be330 (1998)) found, from a sample of 29 B and Be stars, only one star with an emission equivalent width smaller than this limit. For this reason we do not consider that our lower limit is a significant limitation. All known Be stars within the outer extremities of the clusters are retrieved by our Be detection criteria. Within the cluster cores the high spatial resolution of the present data reveals a large number of Be stars either undetectable or unreliably located within ground-based data. Nebular H$`\alpha `$ emission is visible in the field of NGC 2100. Implicit in the photometric detection technique is sky subtraction on small spatial scales which avoids mis-classification of normal B stars superimposed upon filamentary structure within the background unless the background is particularly clumpy which is not the case within our data. ## 5 Luminosity Function of the Be Fraction We have investigated the fraction of stars along the MS which are Be stars by binning both emission-line and non-emission-lines stars with F160BW$``$F555W$`<`$ $``$1.0 into magnitude bins in F555W. Figure 7a-d. show our results. In each case the Be fraction peaks towards the MS turnoff. Table 8 examines the statistical significance of this trend in more detail. Here we have divided the stars into two groups: the brighter stars within 2 magnitudes of the MS turnoff (i.e. F555W$`<`$16.5) and the fainter stars with 19$`<`$F555W$`<`$16.5, all with F160BW$``$F555W$`<`$$``$1.0. In each case the fraction of Be stars near the MS turnoff are significantly higher than the Be fraction further down the MS. We note that V=19 corresponds to spectral type B9 in the calibration of Zorec and Briot (1991), consequently below this cutoff we would not expect any Be stars. In Keller et al (1999a ) we performed a ground based survey for Be stars within these clusters and their surrounding fields. Within this previous study a similar analysis as that presented above, revealed no statistically significant peak towards the MS turnoff. We note that the observations in Keller et al (1999a ) were restricted by the imposition of a limiting magnitude of $`V`$=17.5 (i.e. spectral type B5). In addition, due to crowding, the inner 15″ of each cluster was unusable. The present sample is also free from significant field contribution, which acts to dilute any difference between the cluster and field populations. Maeder et al. (mae99 (1999)) provide a review of previous studies of the Be fraction within the MCs and various galactic clusters. Amongst the four clusters studied here reference is made to the studies of Grebel (grebel97 (1997): NGC 1818), Grebel et al. (grebel92 (1992): NGC 330) and Kjeldsen & Baade (kjeldsen (1994): NGC 2004). A comparison of the two sets of results shows qualitative agreement in the case of NGC 330 and NGC 2004. In the case of NGC 1818, Grebel (grebel97 (1997)) finds a Be fraction which is consistently higher than our results and does not show signs of a drop off with increasing magnitude. Towards the faint limit of the sample of Be stars indicated by Grebel (grebel97 (1997)) we find a high proportion of objects within crowded regions which do not show signs of strong emission. We posit that the spurious detections in the data of Grebel arise from crowding within the central regions. Fractions as high as seen in the present study are unprecedented when seen in the light of comparable studies of the Be fraction within the surrounding field. Our examination of the field population of Be stars within the MCs described in Keller et al. (1999a ) revealed the fraction of Be stars to be more or less evenly distributed with magnitude at 15%, in contrast to the peaked distribution seen within the clusters. This is in line with the Be fraction seen within the galactic field. Studies drawing a volume-limited sample from within the galactic field face the difficulties of interstellar reddening and depth effects, at best they limit the Be fraction to 5% at B9 rising to around 20% at B2 ($`\mathrm{M}_\mathrm{V}`$=-3.5) (see figure 1 of Zorec and Briot 1997). The evidence is suggestive that there is a fundamental difference between the Be star populations in the clusters and the field which, if borne out by closer scrutiny, would have important implications on the evolutionary status of Be stars. This is however only suggestive at the moment, in a latter paper (Keller et al. 1999d ) we will attempt to resolve this issue through the examination of a larger sample of the field population within the MCs. ## 6 Colour-Magnitude Diagrams The colour-magnitude diagrams (CMDs) for NGC 330, 1818, 2004 and 2100 are shown in figures 36. Figure 3 demonstrates the most pertinent features. The MS is seen to terminate at F555W=15.5. A prominent clump of A supergiants is seen around F160BW$``$F555W$``$1.5. These are core He-burning blue supergiants. In the case of NGC 2004 and NGC 2100 there is little sign of a clump of A supergiants, rather the stars “flow” in a continuous manner from the tip of the MS. Note the omission of the red supergiant population, which is undetectable in F160BW. The CMDs show little sign of significant field contamination. Attention is drawn to the group of hot and luminous stars within NGC 330 which are separated from the bulk of the cluster MS. The stars occupy a region bluer than the MS (F160BW$``$F555W$`>`$3.1 and 16.4$`>`$F555W$`>`$14.8). This region is that typically associated with the position of blue stragglers in lower mass clusters. Like blue stragglers in other clusters the central condensation of this population is remarkable; of the 8 stars all but one (B13 discussed in section 8) reside on the PC chip containing the cluster core (the corresponding figures for the total population are: 407 stars on the PC chip brighter than F555W$`<`$19.0 and 339 on the WF chips). We tentatively identify these stars as blue stragglers. The width of the main sequence is clearly greater than the estimated internal uncertainties discussed above. The width likely results from the combination of position dependent errors from the F255W flatfield (as discussed above), differential reddening across the cluster and the intrinsic width of the main sequence within each cluster, which is notably enhanced by the Be star population. The precise contribution of each component is impossible for us to separate. However we can gain an estimate of the relative importance of the three major constituents through an examination of NGC 1818. In the study of NGC 1818 by Hunter et al. hunter (1997) presents photometry in F336W, F555W and F814W. The F336W filter (analogous to Johnson $`U`$) is a frequently used filter with well determined flatfield. Therefore let us assume that the width of the MS exhibited in the F336W$``$F555W colour is due to differential reddening and the intrinsic width of the MS. The width of the MS in the photometry of Hunter et al. is $`\sigma `$=0.15mag (for stars 17$`<`$F555W$`<`$20). Using the colour excess ratios from Table 10 (discussed below) gives a corresponding $`\sigma `$=0.4mag in F160BW$``$F555W. We see in Fig. 3 a width of $`\sigma `$0.45mag for 16$`<`$F555W$`<`$18. We conclude from this that flatfield errors are a relatively minor contributor to the dispersion evident in our CMDs. We consider a $`\sigma `$=0.1mag an appropriate estimate for the uncertainty introduced by flatfield errors in our quoted F160BW$``$F555W colours. The effects of differential reddening is undoubtably the major contributor to the large width of the MS in the case of NGC 2100. The cluster lies in the vicinity of 30 Dor, a region filled with complex nebulosity. The width of the upper main sequence is $``$0.6 and assuming 0.3 is due to intrinsic width of the MS and flatfield noise this implies a $`\sigma `$E($`B`$$``$$`V`$)$``$$`\pm `$0.04; a large but not unreasonable amount. We have undertaken new IR photometry of these clusters described in detail in Keller (1999b ). The IR colours provide a valuable check on our WFPC2 colours for the brighter members. An examination of the $`V`$$``$$`K`$ and F160BW$``$F555W colours has revealed a number of systems within the clusters which consist of a binary pair of red supergiant and MS stars. These possess F160BW$``$F555W $``$ 0-1, closely matched to those of the A supergiants, and $`V`$$``$$`K`$ colours indicative of a red supergiant. The F160BW$``$F555W, $`V`$$``$$`K`$ colours of these systems are the consequence of the combined light in F555W but the light of only one member in F160BW and $`K`$, namely the blue MS star in F160BW and the red supergiant in $`K`$. The systems highlighted as such are: in NGC 330:A52 and NGC 1818:A18 and A95. ## 7 Transformation to the H-R Diagram Transformation of our measured F160BW$``$F555W colours and F555W magnitudes into temperatures and luminosities was made through interpolation into a grid of synthetically derived colours and bolometric corrections. Distance moduli for the SMC and LMC were taken as 18.85 and 18.45 respectively, in line with current determinations. The theoretical colours were computed for the revised Kurucz (1993) fluxes used in Bessell, Castelli & Plez (bessell98 (1998)) and described in more detail by Castelli (castelli99 (1999)). The non-solar metallicity is taken into account within these models (\[Fe/H\]=-0.5 LMC; \[Fe/H\]=-1.0 SMC). In lower metallicity colours are bluer for a given temperature. For example, a star of 20000K in both the SMC and LMC, appears 0.02 magnitudes bluer in the SMC. For models cooler than 8000K the NOOVER grid of fluxes were used. The passbands used were those in /cdbs/cdbs6/synphot\_tables/ and obtained via ftp from ftp.stsci.edu. We computed the magnitudes for the F160BW, F336W, F439W, F450W, F555W, F675W, F814W bands using a synthetic photometry program that integrated the relative photon numbers for the various WFPC2 bands. The zeropoints were adjusted to produce 0.04 mag for all bands with the Castelli & Kurucz (castellikurucz (1994)) spectrum of Vega ($`T_{eff}`$/log g/\[Fe/H\]/ = 9550/3.95/-0.5). The UBVRI magnitudes were computed through energy integration using the passbands of Bessell (bessell90 (1990)). These magnitudes were also normalised to +0.04. Table 10 (available from ADS) lists the grid of colours and bolometric corrections. To evaluate the effect of interstellar reddening we used the extinction curves of Mathis (Mathis90 (1990)) for the Galaxy, Nandy et al. (Nandy81 (1981)) for the LMC and Prevot et al. (Prevot84 (1984)) for the SMC. The LMC and SMC curves were extrapolated to 90nm and set equal to the galactic curve for the wavelengths redder than $`V`$. The galactic curve for diffuse dust (R($`V`$) = A($`V`$)/E($`B`$$``$$`V`$)= 3.1) was used. We interpolated into the 22 magnitude ratios A($`\lambda `$)/E($`B`$$``$$`V`$) of the extinction curves to produce multiplicative factors at each of the 1221 Kurucz wavelengths for attenuating the model fluxes. Figure 8 shows the extinction curve for the optical and UV wavelengths. The colour excess ratio of E(F160BW$``$F555W) to E($`B`$$``$$`V`$) under the different reddening regimes is given in Table 6. The interstellar reddening to the four clusters was constrained by the position of the MS in the CMD. The line of sight reddening was adjusted to achieve a match between the essentially unevolved MS and that predicted by standard evolutionary models. The reddening to both the LMC and the SMC were considered in two components, half of the total reddening to galactic absorption and the remainder due to intra-Cloud absorption. It was apparent that the “SMC” extinction curve was not appropriate for NGC 330 as it produced too large a F160BW extinction for the probable E($`B`$$``$$`V`$). We adopted the LMC extinction curve for NGC 330. Table 7 reports the values of reddening found in the present study. Uncertainties in these values are of the order of $`\pm `$0.02 mag. The values are in agreement with values in the literature. ### 7.1 Effective Temperatures for the Be Star Population As can be seen in figures 36 the Be population forms a secondary sequence at apparently cooler temperatures than the mean locus of non-emission stars in each cluster. The presence of a circumstellar envelope around Be stars is known to give rise to flux excess in various optical and infra-red bands. For this reason it is not clear that the observed colours and visual magnitudes are representative of the effective temperature and luminosities of the underlying star. We have sought an appropriate method for removing the effect of the circumstellar reddening on the underlying star within the Be system. Numerous studies have revealed that the flux emitted by a Be star in the $`V`$ band is stronger than emitted by a similar B star without emission due to Balmer continuum emission (Zorec & Briot zorec91 (1991); Kaiser kai89 (1989)). A similar study by Zorec and Briot (zor85 (1985)) in the UV revealed no such excess in UV flux amongst Be stars. This study made a comparison of monochromatic magnitudes at $`\lambda `$=1460Å which is fortuitously close to the central wavelength of the F160BW filter. This simplifies our task considerably: we can consider $`\mathrm{\Delta }`$(F160BW$``$F555W)=$`\mathrm{\Delta }`$F555W. Since the $`V`$ excess arises from the circumstellar disk one would expect that the $`V`$ excess would scale as the apparent surface area of the disk. In the same way the strength of H$`\alpha `$ emission is similarly related to the apparent surface area. Zorec and Briot (zor85 (1985)) show a clear correlation between $`V`$ excess and H$`\alpha `$ emission strength. We have used the details of this correlation and the F555W$``$F656N colours of the Be stars within our sample to establish a $`V`$ excess for each object. In order to do so it was first necessary to calibrate the F555W$``$F656N in terms of the H$`\alpha `$ emission strength. This was achieved through the use of the H$`\alpha `$ equivalent widths described in Keller et al. (be330 (1998)) obtained three months following the present observations. Conceivably, during this length of time the emission strength of some of the sample may have changed, however this does not appear to seriously degrade the correlation between the F555W$``$F656N colours and the H$`\alpha `$ emission strength. The $`V`$ excess for each object from the above procedure has been applied to F555W and to the F160BW$``$F555W colours for each object, making the Be stars fainter and bluer. The figures 1013 show the resulting HR diagrams, in which it is clear that the Be population has been rectified, within observational uncertainties, to the locus of non-emission line stars. This gives us considerable confidence in the corrections we have applied to the Be stars within the CMD. These corrected colours are then transformed to the H-R diagram as described in section 7. ## 8 Comparison with Previous Determinations of Effective Temperature A number of studies have previously determined temperatures for a sample of MS stars within these clusters. They have used a variety of techniques: the studies of Caloi et al. (caloi95 (1995); NGC 2004 and NGC 330) and Bohm-Vitense et al (bohmvitense85 (1985); NGC 2100) were based upon IUE spectrophotometry while those of Lennon et al. (Lennon96 (1996)) and Reitermann et al. (Reitermann90 (1990)) are from spectroscopic analysis. Figure 9 compares the $`T_{eff}`$ of these previous studies with that of our own. Three discrepant points stand out, they are A1 and B13 in NGC 330 and D22 in NGC 2004. We have conducted spectroscopy of B13 and A1 using the Double Beam Spectrograph on the 2.3m telescope at Siding Spring Observatory. Our spectra consist of two simultaneously recorded segments, one blue (from 3400-4250Å), the other centred on H$`\alpha `$ (6200-6800Å) at a resolution of (0.6Å/px). We spanned the Balmer discontinuity in our blue spectra to enable an independent assessment of effective temperature. B13 has a temperature of 27 500K from our photometry, making it spectral type B0.5. In our HR diagram of NGC 330, B13 is placed in the region of blue stragglers. Spectroscopically, however, this star has been designated by Lennon et al. as B5III. The $`B`$$``$$`V`$ and $`U`$$``$$`B`$ colours of this system are also discrepant; an observed $`B`$$``$$`V`$ of $``$0.19 (Walker 1992) indicates a temperature of 30000K using the previously specified reddening, whereas the $`U`$$``$$`B`$ colour ($``$0.85) gives 22000K. Our spectrum of this object shows a narrow, weak H$`\alpha `$ emission (equiv. width = 9Å, FWHM = 1.6Å) which is markedly different from the majority of Be stars which in general exhibit much stronger and broader H$`\alpha `$ lines. Lines of Si-IV are strong and the Balmer discontinuity is very small suggesting that the object is of spectral type B1.5-B0 in reasonable agreement with our photometric temperature. The H Balmer lines blueward of H$`\beta `$ show line cores that are unusually broad and flat. This is perhaps an indication that the object is an approximately equal mass binary system within an extended region of H$`\alpha `$ emission. Further observations of this object with better signal to noise are required. A1 has been designated B0.5III by Lennon et al. and Grebel et al. Our observations make it much cooler at 22 100K (i.e. B2; Bessell et al. bessell98 (1998)). Spectroscopically A1 has a Balmer discontinuity too large and a Ca-II K feature too strong for B0.5III, rather we consider it to be B2III, in line with our derived temperature. Existing photometric colours have little to add, $`B`$$``$$`V`$ gives a $`T_{eff}`$ of 30 000K whilst $`U`$$``$$`B`$ gives 22 000K. The third most discrepant point is that of D22 in NGC 2004. In the study of Caloi and Cassatella, the determination of the temperature of D22 in NGC 2004 was recognised as problematic (see discussion therein) due to the apparently discrepant $`B`$$``$$`V`$ colour. If we leave aside these three most discrepant points we are left with a scatter which we consider represents the relative accuracy of temperature determinations drawn from optical and IUE spectroscopy. It is also apparent that there is a minor systematic offset of our temperatures relative to that of previous studies, in the sense that our temperatures are slightly cooler. This amounts to 1500K at 25000K (i.e. 6%) which is unlikely to be significant given the uncertainties in previous measurements at such temperatures. ## 9 H-R Diagrams The resultant H-R diagrams are shown in figures 1013. Also shown in figure 10 are typical error bars for data points on the MS. The red supergiant (RSG) population shown in these figures is that contained within the WFPC2 field. Effective temperatures and luminosities for the RSG are taken from Keller et al (1999b ). We briefly discuss some of the implications of the H-R diagrams here with reference to previous observations; detailed discussion is deferred to Keller et al. (1999c ). Overlain are isochrones from Bertelli et al. (bert94 (1994)) at 0.1 dex spacing. Isochrones for Z=0.008 are used for the LMC clusters and Z=0.004 for NGC 330. The discrepancy in the temperature of the RSG is discussed in Keller et al (1999b ). The ages of the clusters indicated by these isochrones are of order log age=7.2 for NGC 2004 and NGC 2100, 7.4 for NGC 1818 and 7.5 for NGC 330. These ages are within the range of ages determined in previous works. In their study of NGC 330, Chiosi et al (choisi95 (1995)), claim that the cluster data does not permit discrimination between convective criteria. They are unable to fit the cluster CMD with a single age and require instead a large spread of ages within the cluster. To a large degree this can be put down to the insensitivity of the ($`B`$$``$$`V`$) colour to the temperatures in the vicinity of the MS turnoff. The presence of Be stars, which have discrepant $`B`$$``$$`V`$ colours, in the vicinity of the MS turnoff distracts further from the clarity of the turnoff. This makes for a broad turnoff, which enables a range of model isochrones to fit. Our data is a significant improvement upon this. We also note that the tight grouping of the RSGs in these diagrams is inconsistent with a large age spread. Within NGC 2004, Caloi and Cassatella (caloi95 (1995)) find that the luminosities of the upper MS stars are inconsistent with these being progenitors of the RSGs observed within the cluster. The latter concern is resolved by the IR photometry of Keller et al. (1999b ), which finds the temperatures of these red supergiants to be of the order of 400K cooler than previously determined. The consequent change in bolometric correction has brought the luminosity of the RSGs into line with the top of the MS. In figures 1013 we show the redward boundary of the MS for standard “moderate” overshoot models (solid line - this is the stage B of Bertelli et al. bert94 (1994)) and in figures 1113 models without convective core overshoot (Alongi et al padova (1993)). The region redward of this boundary and bluer than the evolved A supergiants is a region which according to evolutionary models is traversed rapidly. This region, the Blue Hertzsprung Gap (BHG), is expected to be devoid of stars. It is clear from the HRDs that the no overshoot models fail to match the density in this region. The predictions of standard overshoot models provide a closer match. This necessity for a degree of overshoot is in agreement with previous IUE studies of Caloi et al. (caloi93 (1993)) and Caloi and Cassatella (caloi95 (1995)). Whilst the standard overshoot models provide a closer match to the observations a number of stars are seen in the four HRDs, which remain within the BHG. The duration of evolution from the red edge of the MS to temperatures cooler than the tip of the blue loop tracks is very rapid, it accounts for 0.07% of the total lifetime for a 12$`\mathrm{M}_{}`$ star (Fagotto et al. fbbc (1994)). Consider the HRD of NGC 2004. Here we see six stars clearly within the BHG. The time spent in the BSG+RSG phases for such a star is 78 times longer than the traversal of the BHG. With thirteen BSG+RSG present we would expect on this basis to find 0.2 stars present within the BHG. The existence of populations within the BHG has been noted previously, the controversy over their evolutionary status remains. Grebel et al (grebel96 (1996)) in their study of NGC 330 have suggested that the population of stars within the BHG is a mixture of rapidly rotating B/Be stars and blue stragglers. In the light of our accurate determination of effective temperatures we can rule out the possibility that these stars are blue stragglers. Neither are any of the interloping stars in this region Be stars from our study or at several other observational epochs (see Keller et al. 1999a ). Four of these BHG interlopers, A01, B22 and B30 in NGC 330 and D12 in NGC 1818, have been the focus of spectroscopic studies (Reitermann et al Reitermann90 (1990), Lennon et al. Lennon96 (1996), Korn et al. kor99 (1999)). These stars appear as normal B type stars, albeit some with evidence of a N overabundance. As discussed above, the temperature and luminosity of the terminus of the MS is particularly sensitive to the degree of extension of the convective core. Perhaps the stars within the BHG are an extension of MS evolution to the red brought about by a degree of internal mixing in excess of that prescribed in standard overshoot models. A more detailed analysis is required in this regard and will be presented in our subsequent paper. ## 10 Summary Our WFPC2 photometry of NGC 330, 1818, 2004 and 2100 has shown that these clusters form an excellent testing ground in which to examine a number of outstanding issues in stellar evolution. The unprecedented resolution offered by the WFPC2 camera provides us with a sample of sufficient size to enable a statistically meaningful confrontation with standard evolutionary models. The far-UV coverage has provided good temperature estimates for the hot main sequence population. With this information it should be possible to ascertain the presence and amount of internal mixing due to convective core overshoot expressed in the population with the present data. SCK acknowledges the support of an APA scholarship and a grant from the DIST Hubble Space Telescope Research Fund.
no-problem/9910/astro-ph9910265.html
ar5iv
text
# Untitled Document SIGNALS OF SUPERSYMMETRIC DARK MATTER AFSAR ABBAS Institute of Physics, Bhubaneswar-751005, India (e-mail : afsar@iopb.res.in) Abstract The Lightest Supersymmetric Particle predicted in most of the supersymmetric scenarios is an ideal candidate for the dark matter of cosmology. Their detection is of extreme significance today. Recently there have been intriguing signals of a 59 Gev neutralino dark matter at DAMA in Gran Sasso. We look at other possible signatures of dark matter in astrophysical and geological frameworks. The passage of the earth through dense clumps of dark matter would produce large quantities of heat in the interior of this planet through the capture and subsequent annihilation of dark matter particles. This heat would lead to large-scale volcanism which could in turn have caused mass extinctions. The periodicity of such volcanic outbursts agrees with the frequency of palaeontological mass extinctions as well as the observed periodicity in the occurrence of the largest flood basalt provinces on the globe. Binary character of these extinctions is another unique aspect of this signature of dark matter. In addition dark matter annihilations appear to be a new source of heat in the planetary systems. Careful measurements on the dynamics of galaxies by Fritz Zwicky back in the 1930’s led him to the conclusion that the mass to light ratio of galaxies and clusters of galaxies required far more mass than explained on the basis of stellar origin of their light. Hence he argued for the existence of invisible dark matter. His ideas were not immediately appreciated and it took several decades for astronomers and physicists to understand the significance of this discovery. Today, on the basis of several experimental observations, it has become clear that to account for the observed motion in the cosmos, gravitational fields much stronger than those attributable to luminous matter are required. As much as $`90\%`$ of the mass in the universe is constituted of this invisible dark matter. This conclusion gets further support from simulations using cosmological models which bring out the necessity for large number of relic particles from the early universe. The ideal candidate for these relic species are the weakly interacting massive particles ( WIMPS ). These WIMPS arise most naturally in Supersymmetric theories. Most of the supersymmetric theories contain one stable particle , the so called Lightest Supersymmetric Particle ( LSP ), which is the candidate for dark matter as a WIMP. The existence of a stable supersymmetric partner particle results from the fact that these models include a conserved multiplicative quantum number, the R-parity. This takes on values of +1 and -1 for particle and supersymmetric partners respectively. As per this conservation principle SUSY particles can only be generated in pairs.This requires that SUSY particles may decay in odd number of particles only. As such the LSP must be stable. However the R-parity may be violated. The quantum number R is given by $$R=(1)^{3B+L+2S}$$ (1) where B is baryon number, L is lepton number and S is the spin. A violation of B or L implies a violation of R number. However sharp bounds on the violation of R have been established. Had LSP been susceptible to the strong or the electromagnetic interactions, then it would have been detectable today, as it would have condensed with ordinary matter. Bounds on the abundance of LSP normalized with respect to the abundance of proton have been calculated $$\frac{n(LSP)}{n(p)}10^{10}(strong)\mathrm{}10^6(electomagnetic)$$ (2) Hence LSP should be practically bereft of the strong or the electromagnetic interactions. It can however take part in gravitational and weak interactions. So these are ideal candidates for the WIMP scenarios of the cosmological dark matter. SUSY dark matter particles are of interest as they occur in a totally different context and not specifically introduced to solve the dark matter problem. Possible candidates for the LSP include the photino ( S=1/2 ), the higgsino ( S=1/2 ), the zino ( S=1/2 ), the sneutrino ( S=0 ) and the gravitino ( S=3/2 ). The above spin 1/2 SUSY particles are called gauginos. In most of the recent theories the favourite LSP is the neutralino which is defined as the lowest mass linear superposition of photino ($`\stackrel{~}{\gamma }`$), zino ($`\stackrel{~}{Z}`$) and the two higgsino states ( $`\stackrel{~}{H_1}`$ , $`\stackrel{~}{H_2}`$ ) : $$\chi =a_1\stackrel{~}{\gamma }+a_2\stackrel{~}{Z}+a_3\stackrel{~}{H_1}+a_4\stackrel{~}{H_2}$$ (3) Within the Minimal Supersymmetric extension of the Standard Model ( MSSM ) , it is convenient to describe the supersymmetry phenomenology at the electroweak scale without too strong theoretical assumptions. Various properties like relic abundances and detection rates have been carefully analyzed recently by several authors . In the soft-breaking Lagrangian one has the trilinear and bilinear breaking parameters. Either looking for signatures of the dark matter or detecting it directly is obviously, a major enterprise today . To understand the properties of the elusive and invisible dark matter, one may try to detect the dark matter directly or look for situations where it would have left its indelible fingerprints. The latter would be referred to as indirect detection. First the direct detection. The most exciting news is that recently there have been intriguing tell-tale signs of dark matter. There are several detectors all over the world trying to catch a dark matter particle, Most of them focus on WIMP-nucleus elastic scattering from target nuclei part of the detector. The putative WIMP would be detected via nuclear recoil energies which are expected to be in the kilo-electronvolt range. The experiment, which has found possible signature of dark matter, is DAMA, which is housed deep underground in the INFN Gran Sasso National Laboratory in Italy . It this detector high atomic-number target nuclei, such as Iodine ( in the form of NaI) and Xenon are used. To help isolate a possible WIMP signal from the background, one focuses on the annual modulation effect. As the earth rotates around the sun, the dynamics are such that its rotational velocity would be in the same direction as that of the solar system with respect to the galaxy in June and opposite in December. This would bring in an annual modulation in the WIMP detection rate. The 100 kg DAMA detector, after two years of data collection on this modulation effect, has enabled the experimental group to announce the possible detection of a 59 Gev WIMP, most likely a neutralino . This is a most significant discovery in the direct dark matter detection set-ups. Further work continues to be done to consolidate or refute this discovery. One may ask for possible indirect signatures of the dark matter in the universe. One has to seek out special and unique scenarios in the astrophysical or geological context to obtain unique signatures of dark matter. A few such scenarios studied by us are described below . While investigating the possibility that a WIMP could explain both the dark matter problem and the solar neutrino problem, Press and Spergel estimated the rate at which the sun or a planet will capture WIMPs. As given by Krauss et al the capture rate for earth is : $$\dot{N_E}=(4.7\times 10^{17}sec^1)\{3ab\}\left[\frac{\rho _{0.3}\sigma _{N,32}}{\overline{v}_{300}^3}\right](\frac{1}{1+m_X^2/m_N^2})$$ (4) where $`m_X`$ is the mass of the DM particle, $`m_N`$ is the mass of a typical nucleus off which the the particle elastically scatters with cross-section $`\sigma _N`$, $`\rho _X`$ is the mean mass density of DM particles in the Solar System, $`\overline{v}`$ is the r.m.s. velocity of dark matter in the Solar System, $`\rho _{0.3}=\rho _X/0.3GeVcm^3`$, $`\sigma _{N,32}=\sigma _N/10^{32}cm^2`$, $`\overline{v}_{300}=\overline{v}/300kms^1`$, and $`a`$ and $`b`$ are numerical factors of order unity which depend on the density profile of the sun or planet. The earth will continue to accrete more and more particles until their number density inside the planet becomes so high that they start to annihilate. One possibility is a flux of upwardly moving neutrinos at the earth’s surface. This has been studied very meticulously and is being used to detect dark matter directly \[7-9\]. We ignore this channel and study other possible outcomes of the said annihilation of dark matter at the centre of earth. This had not been studied earlier. Depending on whether the dark matter is neutralino, photino, gravitino, sneutrino, Majorana neutrino or some other, different annihilation channels are possible \[8-10\]. Note that we are however, looking in particular, at neutralino in the supersymmetry broken scenario of the MSSM as described above . Generally the most significant channels are $`\chi \overline{\chi }q\overline{q}`$ ( quark-antiquark ), $`\chi \overline{\chi }\gamma \gamma `$ ( photons ) and $`\chi \overline{\chi }l\overline{l}`$ ( lepton-antilepton ). We ignore the $`\nu \overline{\nu }`$ which has been well studied by others \[8-10\] and concentrate upon photon producing channels. In the quark channel hadronization will take place through jets and subsequent radiative decay will lead to mesons which in turn will decay through their available channels. Hence : $$\chi \overline{\chi }q\overline{q}(\pi ^0,\eta ,\mathrm{})\gamma +Y$$ (5) All annihilation processes which directly or indirectly create photons, energy is delivered to the core through inelastic collisions. This would lead to the generation of heat in the earth’s core. We wish to study this heat generation in the core through annihilation. This heat is : $$\dot{Q}_E=e\dot{N_E}m_X$$ (6) where $`e`$ is the fraction of annihilations which lead to the generation of heat in the core of the earth. Here e may be as large as unity for the ideal case where the WIMPs annihilate predominantly through photons only. For an order of magnitude estimate let us take it to be $`0.5`$ \[8-10\]. On taking $`ab0.34`$ , $`\rho _{0.3}=1`$, $`\overline{v}_{300}=1`$, $`m_X=55GeV`$ and the cross-section on iron to be $`\sigma _N=10^{33}cm^2`$, one finds that $`10^8W`$ of heat is generated. As the visible matter clumps together to form stars, planets, etc. an interesting question is whether the dark matter also displays this tendency of clumping. Interestingly several dark matter models do suggest that clumps of dark matter arise naturally during the course of evolution of the universe. Silk and Stebbins considered cold dark matter models with cosmic strings and textures appropriate for galaxy formation. They found that a fraction $`10^3`$ of the galactic halo dark matter may exist in the form of dense cores. These may survive up to mass scales of $`10^8M_{}`$ in galaxy halos and globular clusters . Analyzing the stability of these clumps of dark matter, they found that the cores of these clumps will not be affected, although the outer layers may be stripped off by tidal forces. In the cosmic string model, the clumpiness C, defined as the ratio of clumped matter concentration to normal concentration, of dark matter at the present epoch would be $$C10^{12}f_{cl}h^6\mathrm{\Omega }_0^3$$ (7) where $`f_{cl}`$ is the fraction of dark matter in clumps, $`H`$ is the Hubble parameter parametrized as $`100hkm/sMpc^1`$, and $`\mathrm{\Omega }_0`$ is the closure energy density of the Universe. Subsequently Kolb and Thachev studied isothermal fluctuations in the dark matter density during the early universe. If the density of the isothermal dark matter fluctuation or clumps, $`\mathrm{\Phi }=\delta \rho _{DM}/\rho _{DM}`$, exceeds unity, a fluctuation collapses in the radiation-dominated epoch and produces a dense dark matter object. They found the final density of the virialized object $`\rho _F`$ to be $$\rho _F140\mathrm{\Phi }^3(\mathrm{\Phi }+1)\rho _x$$ (8) where $`\rho _x`$ is equilibrium density. For axions, a putative dark matter particle, density fluctuations can be very high, possibly spanning the range $`1<\mathrm{\Phi }<10^4`$. The resultant density in miniclusters can be as much as $`10^{10}`$ times larger than the local galactic halo density. The probability at present of an encounter of the earth with such an axion minicluster is 1 per $`10^7`$ years with $`\mathrm{\Phi }=1`$. Kolb and Tkachev found two types of axion clumps arising from two kinds of initial perturbations: * Fluctuations with $`10^3<\mathrm{\Phi }<1`$ collapse in the matter-dominated epoch. * Fluctuations with $`\mathrm{\Phi }>1`$ collapse in the radiation-dominated epoch. If the dark halo is mostly made of neutralinos, then the clumping factor in the MSSM could be less than $`10^9`$ for all neutralino masses . It has been estimated that these clumps would cross earth with a periodicity of 30-100 Myrs . Thus during the passage of the earth through such clumps at regular intervals, the flux of the incident DM particles will increase by roughly a factor of $`10^9`$. Consequently the value of $`\dot{Q}_E`$ during the passage of a clump will be $`10^{17}`$ W . Improving upon the previous work, Gould obtained greatly enhanced capture rates for the earth ( 10-300 times that previously believed ) when the WIMP mass roughly equals the nuclear mass of an element present in the earth in large quantities, thereby constituting a resonant enhancement. Gould’s formula gives the capture rate for each element in the earth as : $$\dot{N_E}=(4.0\times 10^{16}sec^1)\overline{\rho }_{0.4}\frac{\mu }{\mu _+^2}Q^2f\widehat{\varphi }(1\frac{1e^{A^2}}{A^2})\xi _1(A)$$ (9) where $`\overline{\rho }_{0.4}`$ is the halo WIMP density normalized to $`0.4GeVcm^3`$ , $`Q=N(14sin^2\theta _W)Z`$ $``$ N - 0.124Z, $`f`$ is the fraction of the earth’s mass due to this element, $`A^2=(3v^2\mu )/(2\widehat{v}^2\mu _{})`$, $`\mu =m_X/m_N`$, $`\mu _+=(\mu +1)/2`$, $`\mu _{}=(\mu 1)/2`$, $`\xi _1(A)`$ is a correction factor, $`v=`$ escape velocity at the shell of earth material , $`\widehat{v}=3kT_w/m_X=300kms^1`$ is the velocity dispersion, and $`\widehat{\varphi }=v^2/v_{esc}^{}{}_{}{}^{2}`$ is the dimensionless gravitational potential. In the WIMP mass range 15 GeV-100 GeV this yields total capture rates of the order of $`10^{17}sec^1`$ to $`10^{18}sec^1`$ . According to the equation above, this yields $`\dot{Q}_E10^8W10^{10}W`$ for a uniform density distribution. In the case of clumped DM with core densities $`10^9`$ times the galactic halo density, global power production due to the passage of the earth through a DM clump is $`10^{17}W10^{19}W`$. It is to be noted that this heat generated in the core of the earth is huge and arises due to the highly clumped CDM . If the dark matter is composed of neutralinos, the effect of geological heating may not be in the saturation regime and this may diminish the heat production. The effect also depends on the unknown density inside the dark matter clump. The estimates show that in case of the neutralino, it can reach the right order of magnitude for extreme values of parameters. One should note however, that not only are the parameters of neutralino interactions not well known, but even the nature of the dark matter particles (neutralino or something else?) is not yet established. However, the bottom line is that our estimates should be relevant for non-standard neutralino parameters and/or other dark matter particles. The geothermodynamic theory states that continuous heat absorption by the the lowermost layer of the mantle, the so called D” layer would result from a temporary increase in heat transfer from the core . This process would continue until, due to its decreasing density, this layer becomes unstable, eventually breaking up into rising plumes. This is the only physical possibility as plume production is the most efficient way of heat transfer in earth. The lower mantle origin for plumes concept is strengthened by several recent observations. Firstly, high levels of primordial He-3 reported for Siberian flood basalts support this view. Secondly, high levels of Osmium-187 from the decay of the Rhenium-187, found in high concentrations in the earth’s core, observed in Siberian flood basalts , suggest that some of these rocks may even come from the outer core. Due to its lower density, a typical plume created in this manner would well upwards. In this process, decompression of the plume on account of its ascent in a pressure gradient will lead to partial melting of the plume head, thereby producing copious amounts of basaltic magma . Mantle velocities being $`1`$ m/year , such a plume would take $`5`$ million years to reach the crust. It would then melt its way through the continental crust, thereby producing viscous acidic ( silicic ) magma . The ultimate arrival of such a plume head at the surface could be cataclysmic. Initial explosive silicic volcanism would be followed by periods of large-scale basalt volcanism that ultimately lead to the formation of massive flood basalt provinces such as the Siberian Traps, the Deccan Traps in India and the Brazilian Paraná basalts. Extensive atmospheric pollution would follow; the Deccan Trap flood basalt volcanic episode ( $``$ 65 million years ago ) ejected huge amounts of basalt, tonnes of $`H_2SO_4`$, $`HCl`$, and fine dust . Climatic models predict that this is capable of triggering a chain of events ultimately leading to the depletion of the ozone layer, global temperature changes, acid rain and a decrease in surface ocean alkalinity. Thus, Deccan volcanism has been proposed as a possible cause for the K/T ( Cretaceous/Tertiary ) mass extinction that extinguished the dinosaurs , while the Siberian basalts have been put forth as a possible culprit for the P/T ( Permian/Triassic ) mass extinction . In fact, there exists a striking concordance between the ages of several major flood basalt provinces and the dates of the major palaeontological mass extinctions . Hence it has been proposed by us that all major periodic mass extinctions have been caused by gigantic volcanism which in turn were caused by the heat coming from dark matter annihilations at the centre of earth. So the actual culprit, for all major extinctions including that of dinosaurs, was the invisible dark matter . Collar set forth the hypothesis that the passage of the clump leads to direct extinctions by causing cancers in organisms . If this is correct, then this extinction should precede that due to volcanism by approximately 5 million years. Hence each major extinction should, at higher resolution, be a binary extinction: the first extinction due to the direct passage of the clump (causing cancers in various organisms), ie. the carcinogenic dark matter scenario, and the second extinction due to massive volcanism, ie. the volcanogenic dark matter scenario above. What is the empirical situation regarding this unique prediction of the dark matter extinction scenario ? The Permo-Triassic extinction is the most severe ever recorded in the history of life on earth. It has been estimated that 88 - 96 % of all species disappeared in the final stages of the Permian. However, Stanley and Yang discovered that this biotic crisis in fact consisted of two distinct extinction events. The first and less severe of the two was the Guadalupian crisis at the end of the penultimate stage of the Permian, followed after an interval of approximately 5 million years by the mammoth end-Tartarian event at the P/T boundary. Traditionally, the Signor-Lipps effect has been used to explain the high rates of extinction during the last two stages of the Permo-Triassic extinction. It was generally believed that the actual extinction occurred at the Permo-Triassic boundary during the end of the Tartarian stage, with the high Guadalupian metrics being due to the ‘backward smearing’ of the single grand extinction event. However, Stanley and Yang found that the high rates of extinction of the Guadalupian stage were not artifacts of the Signor-Lipps effect, but represent actual extinction. They conclude that the Permo-Triassic extinction consisted of two separate extinction events: the Guadalupian event when 71 % of marine species died out, and the Tartarian, with an 80 % disappearance of marine species still the largest mass extinction in paleontological history. The occurrence of two mass extinctions within 5 my of one another would be possible only if the causative mechanism of the first one had ceased to operate to allow for the observed recovery. The Siberian flood basalt volcanic episode occurs during the end of the Tartarian and is a possible culprit for the Tartarian extinction. This volcanism commenced less than 600,000 years before the P/T boundary much after the Guadalupian extinction. Hence the Siberian Traps could not have been the cause of the Guadalupian extinction. In addition it is likely for the Late Devonian extinction to also consist of two separate extinction episodes; the Frasnian event followed after an interval by the terminal Fammenian extinction . The occurrence of double extinctions is explained within the volcanogenic dark matter framework as explained above. In fact this is a unique and unambiguous prediction of this model. Just as in the case of the earth, dark matter capture and annihilation in other planets and their satellites would lead to significant heat generation in these bodies for a uniform dark matter halo. This thermal output becomes enormous when clumped dark matter passes through the solar system. There are several evidences of clumpiness of dark matter in galactic halos . This heat should be treated as a new source of heat in the planetary systems, at par with primordial accretional heat and radioactive heating. In may lie in the background or in special circumstances manifest itself more forcefully and directly. As such this new source of heat in the solar system may lead to unique imprints. Such new signatures of the dark matter are found in the generation of the recent completely unexpected discovery of the magnetic field of Ganymede along with the enigmatic Mercurian magnetic field. Standard conventional sources of heat are unable to give a reasonable description of these enigmatic magnetic fields. Careful calculations within the dark matter annihilation scenario enumerated here, explain them in a natural manner. The volcanic hypothesis, despite providing a viable explanation for several features reported for mass extinctions, has always lacked a compelling reason for otherwise supposedly haphazard eruptions to occur in a periodic fashion. When one takes into account that the earth has been cooling ever since its formation ( which implies a consequent decrease in volcanic activity ), this objection becomes a serious weakness. It is hoped that a viable reason for large volcanic eruptions to occur in a periodic manner has been presented here with the introduction of the volcanogenic dark matter scenario. This should strengthen the volcanic hypothesis of mass extinctions and in addition explain the enigmatic magnetic fields of Ganymede and Mercury. References 1 Klapdor-Kleingrothaus H V and Staudt A, ” Non-accelerator particle physics ”, IOP Publishing, Bristol ( UK ) , 1995 2 Bottino A, Donato F, Mignola G, Scopel S, Bell P and Incichitti A, Phys Lett, B 402 (1997) 113 3 Glanz A, Nature, 283 (1999) 13; Cern Courier, June 1999, 17 4 Abbas S and Abbas A, Astroparticle Physics, 8 (1998) 317 5 Kanipe J, New Scientist, (Jan 11 1997) 14 6 Press W H and Spergel D N, Ap J, 296 (1985) 679 7 Krauss L M, Srednicki M and Wilczek F, Phys Rev, D33 (1986) 2079 8 Gaisser T K, Steigman G and Tilav S, Phys Rev, D34 (1986) 2206 9 Freese K, Phys Lett, B167 (1986) 295 10 Bengtsson H-U, Salati P and Silk J, Nucl Phys, B346 (1990) 129 11 Silk J and Stebbins A, Ap J, 411, 439 (1993) 12 Kolb E W and Thachev I I, Phys Rev, D50 (1994) 769 13 Bergstrom L and Ullio P, Nucl Phys, B504 (1997) 27 14 Collar J I, Phys Lett, B368 (1996) 266 15 Gould A, Ap J, 321 (1987) 571 16 Bottino A, Forengo N, Mignola G and Moscoso L, Astroparticle Physics, 3 (1995) 65 17 Courtillot V E, Sc Am, (Oct. 1990) 85 18 Basu A R, Poreda R J, Renne P R, Teichmann F, Vasilev Y R, Sobolev N V and Turrin B D, Science, 269 (1995) 822 19 Walker R J, Morgan J W, Horan M F, Science, 269 (1995) 819 20 Campbell I H, Czamanske G K, Fedorenko V A, Hill R I and Stepanov V, Science, 258 (1992) 1760 21 Officer C B, Hallam A, Drake C L and Devine J D, Nature, 326 (1987) 143 22 Officer C and Page J, ‘The Great Dinosaur Controversy’, Addison-Wesley (1996) 23 Stanley S M and Yang X, Science, 266 (1994) 1340 24 Abbas Samar, Abbas Afsar and Mohanty Shukadev, ”Evidence of Compact Dark Matter in Galactic Halos”, astro-ph/9910187
no-problem/9910/astro-ph9910030.html
ar5iv
text
# Time evolution of galactic warps in prolate haloes ## 1 Introduction Many spiral galaxies, including our Galaxy, have warped discs which resemble characteristic ‘cosmic integral signs’. That is, the outer disc lies above the inner disc plane on one side, and falls bellow that plane on the other side. Although the warping is often seen in neutral hydrogen layers (Sancisi 1976; Bosma 1981), it is also observed in stellar discs (van der Kruit & Searle 1981; Innanen et al. 1982; Sasaki 1987). In the Milky Way, the stellar warp has been detected not only for young stars (Miyamoto, Yoshizawa & Suzuki 1988) but for old stars (Porcel, Battaner & Jiménez-Vicente 1997). In addition, the frequency of warped discs in spiral galaxies is sufficiently large that at least half of spirals are warped both in H i discs (Bosma 1991) and in optical discs (Sánchez-Saavedra, Battaner & Florido 1990; Reshetnikov & Combes 1998). These observations imply that warps must persist for a long time unless they are repeatedly excited. It is true that some galaxies with warped discs (e.g., M31, see Innanen et al. 1982) have nearby companions. However, there do exist warped galaxies (e.g., NGC 4565, see Sancisi 1976) that have no nearby companions being supposed to be responsible for the warp in the recent past. In fact, Reshetnikov & Combes (1998) have revealed that about 21 out of 133 isolated galaxies are warped like an integral sign. This indicates that warps are not necessarily caused by tidal interactions with other galaxies. One explanation for isolated warped galaxies is the gravitational torque of a halo acting on a disc that is ‘misaligned’ to the equatorial plane of the halo. Such a tilted disc embedded in a halo has intrinsic spin, so that it precesses like a top. Since the precessing rate is a function of radius, kinematical warps will wind up and disperse in a short period of time. Once the self-gravity of the disc is taken into account, realistic warped configurations emerge in which a disc precesses coherently like a solid body inside axisymmetric haloes (Sparke 1984; Sparke & Casertano 1988, hereafter SC; Kuijken 1991). Even if a tilted disc is formed in a halo as a different shape from a warped mode, it will be finally turned into the mode within a Hubble time (Hofner & Sparke 1994). However, according to some numerical simulations (Dubinski & Kuijken 1995; Binney, Jiang & Dutta 1998), the warping in oblate haloes is not retained persistently, and so, disappears within a few dynamical times. Thus, the interaction between an oblate halo and a disc will be inappropriate for long-lasting warps. Recently, Smart et al. (1998) have extracted warp-induced motions in the Milky Way by analysing the data obtained with the Hipparcos satellite. Then, they have found that the Galactic warp rotates in the same direction as the Galaxy. As shown by Nelson & Tremaine (1995), oblate haloes lead to the opposite sense of the warp precession to the Galactic rotation, whereas prolate haloes make them rotate in accordance with each other. In addition, they have demonstrated that in some cases, prolate haloes can excite warps. Thus, prolate haloes are favourable to the explanation of Smart et al.’s (1998) finding, if the motions that they found are attributed to the interaction of the Galactic disc with an often assumed massive halo. Cosmological simulations, based on a cold dark matter scenario, have also revealed that dark matter haloes surrounding individual galaxies are highly triaxial and that the fraction of prolate haloes is roughly equal to that of oblate haloes (Dubinski & Carlberg 1991). In spite of these circumstances mentioned above, prolate haloes have somehow often been ignored in previous studies on warps. Therefore, we need to pay attention to the warp arising from prolate haloes. In this paper, we examine how a warp is developed and evolves in prolate haloes in comparison with that in oblate haloes. As a first step, we treat the haloes as external fixed potentials. In Section 2, we describe the models and the numerical method. Results are presented in Section 3. In Section 4, we analyse our results and explain them on the basis of a simplified model. Conclusions are given in Section 5. ## 2 Models and Method We study the evolution of self-gravitating discs embedded in axisymmetric haloes. As shown by Nelson & Tremaine (1995) and by Dubinski & Kuijken (1995), dynamical friction between a disc and a halo plays an important role to precessing bending modes at least for the inner region of the composite system. However, the accurate evaluation of dynamical friction would require a prohibitively huge number of particles to represent the halo as well as the disc. Otherwise, the disc will thicken owning to two-body relaxation originating from Poisson fluctuations. In fact, Dubinski & Kuijken (1995) reported the vertical disc thickening for a self-consistent model with a particle disc, bulge, and halo. As a result, warped structures developed in the disc could not be distinguished from the background particle distribution, which would lead us to an incorrect conclusion about the longevity of the warp. Then, we begin with rigid halo models, as a first step, to unravel the effects of the halo shape on the warp. The density distribution of the halo is an axisymmetric modification of Hernquist’s models (Hernquist 1990) being suitable for spherical galaxies and bulges. Then, the halo density profile is represented, in cylindrical coordinates, by $$\rho _\mathrm{h}(R,z)=\frac{M_\mathrm{h}}{2\pi a^2c}\frac{1}{m\left(1+m\right)^3},$$ (1) where $`M_\mathrm{h}`$ is the halo mass, $`a`$ and $`c`$ are the radial and vertical core radii, respectively, and $$m^2=\frac{R^2}{a^2}+\frac{z^2}{c^2}.$$ (2) The cumulative mass profile and potential of the halo are written, respectively, by (Binney & Tremaine 1987), $$M_\mathrm{h}(R,z)=M_\mathrm{h}\frac{m^2}{\left(1+m\right)^2},$$ (3) and $$\mathrm{\Phi }_\mathrm{h}(R,z)=\frac{GM_\mathrm{h}}{2}_0^{\mathrm{}}\frac{du}{\left(a^2+u\right)\sqrt{c^2+u}\left[1+m\left(u\right)\right]^2},$$ (4) where $$m^2\left(u\right)=\frac{R^2}{a^2+u}+\frac{z^2}{c^2+u}.$$ (5) As a realistic disc model, though a bulge component is not included, we adopt an exponential density profile in the radial direction (Freeman 1970) and an isothermal sheet approximation in the vertical direction (Spitzer 1942) given by $$\rho _\mathrm{d}(R,z)=\frac{M_\mathrm{d}}{4\pi R_\mathrm{d}^2z_\mathrm{d}}\mathrm{exp}\left(\frac{R}{R_\mathrm{d}}\right)\mathrm{sech}^2\left(\frac{z}{z_\mathrm{d}}\right),$$ (6) where $`M_\mathrm{d}`$ is the disc mass, $`R_\mathrm{d}`$ is the disc scale-length, and $`z_\mathrm{d}`$ is the disc scale-height. The discs are truncated radially at $`15R_\mathrm{d}`$ and vertically at $`2z_\mathrm{d}`$. Following Hernquist’s (1993) approach, we approximate the velocity distribution of disc particles using moments of the collisionless Boltzmann equation; the velocities are sampled from Gaussian distributions with means and dispersions derived from the Jeans equations. The Toomre $`Q`$ parameter (Toomre 1964) used to normalize the radial velocity dispersion is set to be 1.5 at the solar radius, $`\mathrm{R}_{}=\left(8.5/3.5\right)R_\mathrm{d}`$. The $`Q`$ profile varies with radius and has a minimum at approximately $`R=2R_\mathrm{d}`$, where the minimum $`Q`$ value is about 1.47. To avoid complications due to an extra component such as a bar, this rather large $`Q`$ distribution is chosen so that the bar instability will not occur in the disc. We have ensured that the disc-halo system thus constructed is really in equilibrium if the disc is initially placed in the equatorial plane of the halo, because the density profile of the disc did not change significantly over several orbital times. However, the disc does not necessarily form in the equatorial plane of the halo. In fact, Katz & Gunn (1991) showed that the disc was misaligned to the symmetry plane of the halo at an angle of typically 30 degrees at its birth. Therefore, in our simulations the disc is initially tilted by 30 degrees with respect to the equatorial plane of the halo. We employ a system of units such that the gravitational constant $`G=1`$, the disc mass $`M_\mathrm{d}=1`$, and the exponential scale-length $`R_\mathrm{d}=1`$. The orbital period at the half-mass radius of the exponential disc, $`R1.7R_\mathrm{d}`$, is $`13.4`$ in our system of units. If these units are scaled to physical values appropriate for the Milky Way, i.e., $`R_\mathrm{d}=3.5\mathrm{kpc}`$ and $`M_\mathrm{d}=5.6\times 10^{10}\mathrm{M}_{}`$, unit time and velocity are $`1.31\times 10^7\mathrm{yr}`$ and $`262\mathrm{km}\mathrm{s}^1`$, respectively. The disc is represented by $`\mathrm{100\hspace{0.33em}000}`$ particles of equal mass. We show the parameters of our models in Table 1, and the rotation curves of each model in Fig. 1. The halo mass is determined so that the disc and halo masses within $`3R_\mathrm{d}`$ are equal to each other. The simulations are run with a hierarchical tree algorithm (Barnes & Hut 1986) using the GRAPE-4, a special-purpose computer for gravitationally interacting particles (Sugimoto et al. 1990; Makino et al. 1997). We adopt an opening angle criterion, $`\theta =0.75`$. Only monopole terms are included in the tree code. The equations of motion are integrated with a fixed time-step, $`\mathrm{\Delta }t=0.1`$, using a time-centred leapfrog method. The Plummer softening length is 0.04$`R_\mathrm{d}`$, or in other words, 0.2$`z_\mathrm{d}`$. ## 3 Results We stopped the simulations at $`t=400`$. This time corresponds to about 30 orbital periods at the half-mass radius of the disc. No bar instability was found in the discs. In either simulation, the total energy was conserved to better than 0.2 per cent. We measured the inclination and the longitude of ascending nodes of the disc relative to the equatorial plane of the halo by calculating the principal moments of inertia for the bound particles. In Fig. 2, we then show the evolving density profiles from an edge-on view of the discs in the precessing frame. In this frame, an observer is always on that line of nodes of the disc which is calculated for the particles within the half-mass radius of the disc, $`R1.7R_\mathrm{d}`$. The precession periods of the discs in the oblate and prolate haloes evaluated with least-squares fits are $`T_{\mathrm{ob}}=266`$ and $`T_{\mathrm{pr}}=306`$, respectively. These values are in excellent agreement with those predicted by linear theory \[see equation (21) of SC\] which gives $`T_{\mathrm{ob}}=266`$ and $`T_{\mathrm{pr}}=308`$. Hofner & Sparke (1995) showed that warped configurations are developed in oblate haloes. We further find from Fig. 2 that such configurations appear in the prolate halo as well as in the oblate one. In addition, Fig. 2 demonstrates that the shape of warp depends on that of halo: using the terminology in SC, the type I warp that bends upward away from the symmetric plane of the halo is developed in the oblate halo, while the type II warp that bends down toward the symmetric plane of the halo is developed in the prolate one. The warp in the oblate halo decayed and disappeared almost completely by the end of the simulation, while the warp in the prolate one persisted to the end. To see the differential precession of the disc, we divided the distance between the centre and $`5R_\mathrm{d}`$ evenly into 10 annuli, and calculated the inclination and azimuthal angles of each annulus which contains at least $`\mathrm{2\hspace{0.33em}000}`$ particles. Fig. 3 shows, on the polar diagram, the line of ascending nodes of each annulus with respect to the equatorial plane of the halo. There are two differences between the oblate and prolate halo models. One is the sense of the precession; for the oblate halo the warp rotates in the opposite direction to the disc rotation, while for the prolate one it rotates in the same direction. The other is the behaviour of the winding of the warp; for the oblate halo the warp winds up tightly with time, while for the prolate one the longitude of each annulus is kept almost aligned. We find from Fig. 3 that the inner region of the disc, $`R3R_\mathrm{d}`$, precesses at almost a constant rate independent of radius both in the prolate and oblate haloes. Since the disc mass is equal to the halo mass within $`3R_\mathrm{d}`$ for our models, the self-gravity of the disc is dominant as compared to that of the halo within such radius. We thus understand that this behaviour arises from the predominance of the self-gravity of the disc over that of the halo, as shown by Lovelace (1998). On the other hand, the outer disc $`(R3R_\mathrm{d})`$ precesses at a different rate from radius to radius. To make clear the difference in precession at large radii between the oblate and prolate halo models, we present in Fig. 4 the time evolution of the longitude (top row) and that of the inclination angle (bottom row) with respect to the equatorial plane of the halo for the outer and inner discs which correspond to the annulus between $`4.5R_\mathrm{d}`$ and $`5.0R_\mathrm{d}`$, and to that between $`1.5R_\mathrm{d}`$ and $`2.0R_\mathrm{d}`$, respectively. Fig. 4 shows that the precession rate of the outer disc increases with decreasing inclination angle, and vice versa. For the prolate halo, at the beginning of the simulation, the inclination angle decreased, and the precession rate increased. In the subsequent evolution, the longitude of the outer disc passed through that of the inner disc as the inclination decreased. After the passage of the longitude of the outer disc, the inclination increased, and the precession rate decreased. For the oblate halo, on the other hand, the inclination increased initially, and the precession rate decreased. The difference in longitude between the inner and outer discs became larger with increasing inclination, and so, the warp wound up with time as seen in Fig. 3. The difference in inclination angle between the inner and outer discs in the oblate halo is larger than that in the prolate one at the end of the simulation. However, for the oblate halo, the warp disappeared as seen in Fig. 2, because the azimuthal angle of the inner disc was different from that of the outer disc. ## 4 Physical interpretation As found in the previous section, the prolate halo is plausible for the maintenance of galactic warps in that the winding problem is avoided. However, there remains a question as to what makes the difference in evolution of warps between the oblate and prolate halo models. Since Lovelace (1998) has shown that the self-gravity of the disc can synchronize the precession rate in the inner region, we need to explain the different behaviour of the warp in the outer region between the oblate and prolate halo models. Then, we simplify the $`N`$-body models used in our simulations and construct a three-component system consisting of an outer disc, an inner disc, and an axisymmetric halo in order to pay special attention to the torque between the inner and outer discs in addition to that from the halo. Here, we approximate an outer disc as a ring. ### 4.1 Simple model In this subsection, we solve the equation of motion for the outer ring in order to examine whether the results found in the $`N`$-body simulations can be reproduced. For this purpose, we consider the axisymmetric Binney (1981) potentials as models of the halo and the inner disc, $`\mathrm{\Phi }_\mathrm{h}`$ $`=`$ $`{\displaystyle \frac{1}{2}}V_{\mathrm{c},\mathrm{h}}^2\mathrm{ln}\left(R_{\mathrm{c},\mathrm{h}}^2+R^2+{\displaystyle \frac{z^2}{q_\mathrm{h}^2}}\right),`$ (7) $`\mathrm{\Phi }_\mathrm{d}`$ $`=`$ $`{\displaystyle \frac{1}{2}}V_{\mathrm{c},\mathrm{d}}^2\mathrm{ln}\left(R_{\mathrm{c},\mathrm{d}}^2+R^2+{\displaystyle \frac{z^2}{q_\mathrm{d}^2}}\right),`$ (8) where $`V_\mathrm{c}`$ is the asymptotic circular velocity, $`R_\mathrm{c}`$ is the core radius, and $`q`$ is the potential flattening with the subscripts ‘h’ and ‘d’ denoting the halo and the disc, respectively. The potentials of the halo and the inner disc are fixed. The inner disc is tilted by an angle of 30 degrees relative to the symmetric plane of the halo and given a constant precession rate that is calculated from linear theory (SC). The dynamics of the outer ring are solved on the basis of Euler’s equation of motion for a rigid body (Goldstein 1980). The radius and angular speed of the outer ring, $`\mathrm{\Omega }`$, are set to be $`5.0`$ and $`0.15`$, respectively. The parameters of this model are summarized in Table 2. The values of the potential flattening, $`q_\mathrm{h}`$ and $`q_\mathrm{d}`$, are adjusted to those which are evaluated from the halo and disc models at $`5R_\mathrm{d}`$ employed in the $`N`$-body simulations (see Table 1). We determine the values of $`V_{\mathrm{h},\mathrm{d}}`$ and $`V_{\mathrm{c},\mathrm{d}}`$ in the same manner. Fig. 5 presents the time evolution of the longitude and inclination of the outer ring, which corresponds to Fig. 4 in the $`N`$-body simulations. We can see that the precession rate of the outer ring increases with decreasing inclination angle with respect to the equatorial plane of the halo, and vice versa. Moreover, for the prolate halo the inclination angle decreases at the beginning, while for the oblate one it increases. Thus, for the prolate halo, the precession of the outer ring can pass through that of the inner disc. After the passage, the inclination angle increases and the precession rate decreases. This means that the longitude of the inner disc and the outer ring remains almost aligned. Therefore, the behaviour of the outer ring is the same as that seen in the $`N`$-body simulations. Since we have found that the simple model can reproduce the main properties of the $`N`$-body simulations, we can rely on this model to figure out the physical mechanism of warps in the oblate and prolate haloes as described below. ### 4.2 The precession The precession of the outer ring is caused by the torque due to the halo and the inner disc. We take a coordinate system in which the $`z`$-axis is along the symmetry axis of the halo, and the $`x`$-axis is along the line of nodes of the outer ring (where the outer ring intersects with the $`z=0`$ plane). The $`xy`$-plane is in the inertial frame. The geometry is shown in Fig. 6. The $`x`$-component of the torque exerted by the halo on the outer ring is given by $$T_{\mathrm{h},x}=_0^{2\pi }r𝑑\varphi \lambda \left(F_{\mathrm{h},y}zF_{\mathrm{h},z}y\right)\lambda \frac{m}{2\pi r},$$ (9) where $`\lambda `$ is the line density of the outer ring, and $`m`$ is the mass of the outer ring. The $`y`$\- and $`z`$-components, $`F_{\mathrm{h},y}`$ and $`F_{\mathrm{h},z}`$, of the force due to the halo per unit mass are, respectively, written by $$F_{\mathrm{h},y}=\mathrm{\Omega }_\mathrm{h}^2y,F_{\mathrm{h},z}=\mu _\mathrm{h}^2z,$$ (10) where $`\mathrm{\Omega }_\mathrm{h}`$ and $`\mu _\mathrm{h}`$ are the halo contributions of the orbital and vertical frequencies, respectively. The position vector of a point on the outer ring is $$𝑹=(r\mathrm{cos}\varphi ,r\mathrm{cos}i\mathrm{sin}\varphi ,r\mathrm{sin}i\mathrm{sin}\varphi )0\varphi 2\pi ,$$ (11) where $`i`$ is the inclination angle of the outer ring with respect to the equatorial plane of the halo. Then, on the assumption that $`\mathrm{\Omega }_\mathrm{h}`$ and $`\mu _\mathrm{h}`$ are constant on the ring, we obtain $$T_{\mathrm{h},x}=\frac{mr^2}{2}\left(\mathrm{\Omega }_\mathrm{h}^2\mu _\mathrm{h}^2\right)\mathrm{sin}i\mathrm{cos}i.$$ (12) The potential of an exponential disc in the outer region is approximated as \[see equation (2P-5) of Binney & Tremain (1987)\], $$\mathrm{\Phi }_\mathrm{d}(R,z)\frac{GM_\mathrm{d}}{r}\left[1+\frac{3R_\mathrm{d}^2\left(R^22z^2\right)}{2r^4}\right].$$ (13) Thus, the $`y`$\- and $`z`$-components of the force due to the inner disc per unit mass are, respectively, given by $$F_{\mathrm{d},y}=\frac{GM_\mathrm{d}}{r^2}\frac{y}{r}\left[1+\frac{15R_\mathrm{d}^2\left(R^22z^2\right)}{2r^4}\frac{3R_\mathrm{d}^2}{r^2}\right],$$ (14) and $$F_{\mathrm{d},z}=\frac{GM_\mathrm{d}}{r^2}\frac{z}{r}\left[1+\frac{15R_\mathrm{d}^2\left(R^22z^2\right)}{2r^4}+\frac{6R_\mathrm{d}^2}{r^2}\right].$$ (15) Provided that the line of nodes with respect to the equatorial plane of the halo is aligned between the inner disc and the outer ring, the $`x`$-component of the torque exerted by the inner disc is $$T_{\mathrm{d},x}=\frac{mr^2}{2}\frac{9R_\mathrm{d}^2GM_\mathrm{d}}{r^5}\mathrm{sin}\delta \mathrm{cos}\delta ,$$ (16) where $`\delta `$ is the inclination angle of the outer ring relative to the inner disc plane, and its sign is positive when the inclination angle of the inner disc is larger than that of the outer ring, and vice versa. Next, the absolute value of the total angular momentum of the outer ring, $`L`$, is $`mr^2\mathrm{\Omega }`$, where $`\mathrm{\Omega }`$ is the orbital frequency, if the precession rate is rather smaller than the orbital frequency. The perpendicular component to the $`z`$-axis is $`mr^2\mathrm{\Omega }\mathrm{sin}i`$. The change in $`L_x`$ over an infinitesimally small time $`\mathrm{\Delta }t`$ is $`mr^2\mathrm{\Omega }\mathrm{sin}i\omega _\mathrm{p}\mathrm{\Delta }t`$, where $`\omega _\mathrm{p}`$ is the precession rate of the outer ring. Thus, $$\dot{L_x}=mr^2\mathrm{\Omega }\mathrm{sin}i\omega _\mathrm{p}.$$ (17) This should be equal to the torque on the outer ring, so that the precession rate is derived as $$\omega _\mathrm{p}=\frac{\mathrm{\Omega }_\mathrm{h}^2\mu _\mathrm{h}^2}{2\mathrm{\Omega }}\mathrm{cos}i\frac{9R_\mathrm{d}^2GM_\mathrm{d}}{2\mathrm{\Omega }r^5}\frac{\mathrm{sin}\delta \mathrm{cos}\delta }{\mathrm{sin}i}.$$ (18) If the warp is of the type I shape such as Fig. 7a, $`i`$ is nearly equal to $`\delta `$. It follows from equation (18) that the precession rate is proportional to $`\mathrm{cos}i`$. In this case, the second term of equation (18) is negative because $`\delta `$ is positive, and the first term is also negative because $`\mathrm{\Omega }_\mathrm{h}<\mu _\mathrm{h}`$ for the oblate halo. Therefore, the precession rate increases with increasing inclination as seen in Fig. 4, and the difference in longitude between the outer ring and the inner disc becomes larger with time. If the warped disc is of the type II shape such as Fig. 7b, $`\delta `$ is negative. Taking into consideration $`\mathrm{\Omega }_\mathrm{h}>\mu _\mathrm{h}`$ for the prolate halo, both terms in equation (18) are positive. Moreover, $`i`$ will become smaller than $`|\delta |`$ with decreasing inclination, so that the second term of equation (18) is dominant. Therefore, the precession rate increases with decreasing inclination, and vice versa as seen in Fig. 4. ### 4.3 The inclination The time evolution of the inclination is also explained simply by the torque on the outer ring. The geometry is the same as that used in the previous subsection except that the $`xy`$-plane is in the precessing frame of the outer ring. The $`y`$-component of the torque, $`T_y`$, changes that of the angular momentum, $`L_y=mr^2\mathrm{\Omega }\mathrm{sin}i`$, and so, affects the inclination angle, $`i`$. At the beginning, $`L_y`$ suffers no change from the torque due to the halo because of the axisymmetric nature of the halo. If the longitude of the outer ring is the same as that of the inner disc, we obtain $`T_y=0`$, and so, the inclination remains unchanged. If the longitude of the outer ring is smaller than that of the inner disc, i.e., a warp like a trailing spiral, $`T_y`$ becomes positive. Hence, $`L_y`$ increases, so that the inclination angle of the outer ring decreases. Since the torque on the inner disc is the opposite direction to that of the outer ring according to Newton’s third law, the inclination of the inner disc increases. As a result, the type II warp is generated. Similarly, a warp like a leading spiral leads to the type I warp. If the precession rate in the prolate halo decreases with radius, the warped configuration becomes similar to a trailing spiral because the sense of the precession is the same direction as the disc rotation. Consequently, the type II warp is produced, which leads to the situation where the inclination of the outer ring decreases, and the precession rate increases. In the subsequent evolution, the longitude of the outer ring passes through that of the inner disc. At the point, the warped configuration turns into that similar to a leading spiral, which, this time, leads to the situation where the inclination increases and the precession rate decreases. This kind of self-regulation enables the warped disc to avoid the differential precession. For the oblate halo, however, once the precession of the outer ring recedes from that of the inner disc, the difference in precession rate continues to increase, so that the warp winds up tightly. ### 4.4 Comparison with discrete bending modes Our oblate halo model cannot sustain the warped disc, though SC found that long-lived warps do exist in some halo models. This discrepancy is considered to originate from the radial extent of a disc. SC showed that a discrete mode with an eigenfrequency $`\omega `$ may exist only if $`\mathrm{\Omega }\mu <\omega <\mathrm{\Omega }+\mu `$ is satisfied at the edge of the disc, where $`\mathrm{\Omega }`$ and $`\mu `$ are the orbital and vertical frequencies, respectively. For oblate haloes, $`\mathrm{\Omega }\mu `$ and $`\omega `$ are negative. Since $`\mathrm{\Omega }\mu `$ tends to zero with radius, there is some radius where $`\omega `$ is equal to $`\mathrm{\Omega }\mu `$. Beyond such a radius, the condition of the existence of discrete modes is violated. According to SC, bending modes become continuous in frequency $`\omega `$, if a warped disc embedded in an oblate halo extends beyond the radius at which $`\omega =\mathrm{\Omega }\mu `$ holds. Consequently, such continuous modes will be propagated with a group velocity and disappear (Hunter & Toomre 1969). As is found from Fig. 8a, such a resonance radius emerges at $`R4.8R_\mathrm{d}`$ in our oblate halo model. This implies that there would be no discrete bending mode and that a warped configuration would be coerced to disperse. Thus, the disappearance of the warping for the oblate halo could be due to the existence of the resonance. On the other hand, in our prolate halo model, the condition of $`\mathrm{\Omega }\mu <\omega <\mathrm{\Omega }+\mu `$ is exactly satisfied within the truncated radius, 15 $`R_\mathrm{d}`$, as shown in Fig. 8b. Therefore, a discrete mode can exist in our prolate halo model. Our adopted disc model is nothing special in the sense that the observed light distributions of galactic discs are well-described by an exponential law, though a constant mass-to-luminosity ratio throughout the disc is assumed. Our simulations suggest that as long as a disc is not truncated abruptly, real galactic discs would not have a discrete bending mode if surrounding haloes are oblate. ## 5 Conclusions In this paper, we have examined the time evolution of warped discs in the oblate and prolate haloes using $`N`$-body simulations. The haloes were represented by fixed external potentials in which self-gravitating discs were embedded. Then, we have found the warped configurations both in the oblate and prolate haloes. While the warping in the oblate halo continued to wind up with time and finally disappeared, the warping in the prolate halo survived to the end of the simulation by regulating the line of nodes of the warped disc to be straight. We have shown that this difference in winding between the oblate and prolate haloes can be attributed to the gravitational torque between the inner and outer discs. Observationally, some galaxies show the straight line of nodes of the warp within the Holmberg radius, beyond which the warp is traced as a leading spiral (Briggs 1990). Others show that the line of nodes is delineated as a trailing spiral (Christodoulou, Tohline, Steiman-Cameron 1988; Bosma 1991). These observations appear to favour the view that we just witness the different phases of the evolving warped discs in prolate haloes, because only warped discs in prolate haloes can change the spirality of the line of nodes according to our simulations. Putting together our simulations and the observations mentioned above, we can infer that warps are formed and maintained in prolate haloes. However, our fixed halo models are quite simplified. In particular, such models cannot include the effect of dynamical friction between the warped disc and the halo. Dubinski & Kuijken (1995) and Nelson & Tremaine (1995) have shown that dynamical friction plays an important role to precessing discs. Therefore, we will need the simulations of warped discs embedded in live haloes to determine the precise evolution of warps, though a huge number of particles will be required to incorporate the effect of dynamical friction accurately into such simulations, and to avoid disc heating due to an insufficient number of halo particles. This line of investigation is in progress (Ideta et al. in preparation). ## Acknowledgments We are grateful to Prof. S. Inagaki for his critical reading of the manuscript. We thank E. Ardi and Y. Kanamori for useful discussions, and Dr. J. Makino and the anonymous referee for valuable comments on our paper. MI thanks Dr. J. Makino for giving him an opportunity to use the GRAPE-4 and providing him with a tree code available on it. MI is also indebted to A. Kawai for his technical advice on the use of the GRAPE-4. TT and MT acknowledge the financial support from the Japan Society for the Promotion of Science.
no-problem/9910/astro-ph9910399.html
ar5iv
text
# The dwarf nova RZ Leonis : photometric period, “anti-humps” and normal alpha disk∗ ## 1 Introduction: about RZ Leonis Dwarf novae are interacting binary stars in which a Roche-lobe filling main-sequence secondary looses mass through the $`L_1`$ point. The transferred mass falls along a ballistic trajectory towards the heavier white dwarf primary, forming an accretion disk. The disk undergoes semi-periodic collapse during which matter is accreted by the compact primary. The result is a release of gravitational energy which is observed as a system brightening. This is called a dwarf nova outburst. Dwarf novae, a subclass of cataclysmic variable stars (CVs), have been reviewed by Warner (1995a). RZ Leonis is a long cycle-length large-amplitude dwarf nova with only 7 outbursts recorded since 1918 (e.g Vanmuster & Howell 1996) and with an estimated distance from earth between 174 and 246 pc (Sproats et al. 1996). Humps in the light curve of RZ Leo repeating with a 0$`\stackrel{d}{.}`$0708(3) period were observed by Howell & Szkody (1988). They concluded that this dwarf nova is a candidate for SU UMa star probably seen under a large inclination. This assumption is supported by the finding of broad double emission-lines in the optical spectrum (Cristiani et al. 1985, Szkody & Howell 1991). Orbital humps are observed in some high-inclination dwarf novae (e.g. Szkody 1992), they probably reflect the pass of the disk-stream interacting region (often named hot spot or bright spot) along the observer’s line of sight. The study of the hot spot variability of RZ Leo is potentially useful to constrain models of gas dynamics in close binary systems. In the classical view, the hot spot is formed during the shock interaction of matter in the gaseous stream flowing from $`L_1`$ (the inner Lagrangian point) with the outer boundary of the accretion disk. This picture was consistent with photometric observations of dwarf novae during many years. However, this view conflicts with recents observations indicating anomalous hot spots in many systems. For example, in many cases, Doppler tomography does not show the effect of a hot spot at all, or indicates that the hot spot is not in the place where we would expect a collision between the gaseous stream and the outer boundary of the disk (e.g. Wolf et al. 1998). To explain these findings, the possibility of gas stream overflow has been worked out. In this view the hot spot is formed behind the white dwarf by the ballistic impact of a deflected stream passing over the white dwarf (e.g. Armitage & Livio 1998, Hessmann 1999). However, this scenario has not yet been confirmed by observations. Furthermore, recent three-dimensional numerical simulations indicate the absence of a shock between the stream and the disk. The interaction between the stream and the common envelope of the system forms and extended shock wave along the edge of the stream, whose observational properties are roughly equivalent to those of a hot spot in the disk (Bisikalo et al. 1998). This paper is aimed to confirm the reported photometric period and to establish a long-term hump ephemeris. We also expect to detect systematic luminosity trends and get insights about the hump variability and hot spot nature. Interestingly, we find some phenomena conflicting, in many ways, with the classical scenario of the hot spot forming region. ## 2 The observations and data reduction CCD images were obtained at six observing runs during 1991–1999 at Las Campanas Observatory (LCO) and ESO La Silla Observatory, Chile. Exposure times were between 250 and 300 s. Details of the observations are given in Table 1. All science images were corrected for bias and were flat-fielded using standard IRAF<sup>1</sup><sup>1</sup>1IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation routines. Instrumental magnitudes were calculated with the phot aperture photometry package, which is adequate due to the RZ Leo’s uncrowded field. The optimum aperture radius defined by Howell (1992) was used. This radius matches the $`HWHM`$ of the point spread function, minimizing the noise contribution due to sky pixels and readout noise. In this paper we are interested in differential photometry. This technique, reviewed by Howell (1992), involves the determination of time series $`VC`$ and $`CCH`$, among instrumental magnitudes of variable ($`V`$), comparison ($`C`$) and check ($`CH`$) stars in the same CCD field. A finding chart of RZ Leo showing the check and comparison stars is shown in Fig. 1. The photometric error of $`VC`$ was derived from the standard deviation of the $`CCH`$ differences. In general, the intrinsic variance (not due to variability but to noise) associated to each differential light curve $`\sigma _{VC}`$ and $`\sigma _{CCH}`$ are related by a scale factor $`\mathrm{\Gamma }`$ depending on the relative brightness of the sources (Howell & Szkody 1988, Eq. 13). This factor is of order of unity if the three sources are of similar brightness or if the variable is of similar brightness to the check star and the comparison is brighter. These criteria are completely fulfilled in our observations. Table 1 shows mean $`V`$ magnitudes along with the comparison and check stars used every night. The star labeled $`C1`$ in Fig. 1 (for which $`V`$ = 14.201 is given by Misselt 1996), was used to shift the differences to an non-differential magnitude scale. On the other hand, $`UBV`$ magnitudes taken at HJD 244 8333.5981, 244 8333.6044 and 244 8333.6131 were properly calibrated with photometric standard stars, yielding $`V`$ = 18$`\stackrel{m}{.}`$56 $`\pm `$ 0$`\stackrel{m}{.}`$04, $`BV`$ = 0$`\stackrel{m}{.}`$17 $`\pm `$ 0$`\stackrel{m}{.}`$07 and $`UB`$ = -1$`\stackrel{m}{.}`$02 $`\pm `$ 0$`\stackrel{m}{.}`$08. ## 3 Results ### 3.1 The humps: a distinctive character of the light curve The differential light curves shown in Fig. 2 indicate the presence of prominent humps on March 1991 and 1995. The humps are roughly symmetrical lasting by about 65 minutes and followed by a slow magnitude decrease (March 1991) or by a secondary low-amplitude hump (March 1995). This picture sharply contrasts with that observed on early 1998 (Fig. 3). The humps are completely absent on January-February 1998 and re-appear (with secondary humps) on March 1998 and January 1999 (Fig. 4). A remarkable feature is the absorption like feature seen on January 1998. We will show in the next section that this feature is the “embrion” of the fully developed humps seen two months later. ### 3.2 The long-term light curve Fig. 5 shows the long-term light curve of RZ Leo during 1987–1999. It is evident that the quiescence mean magnitude changes by several tenths of magnitude in a few years and at 3.5 $`\times `$ 10<sup>-3</sup> mag d<sup>-1</sup> during January and March 1998. Unfortunately, the faintness of the object has prevented a continuous monitoring, so the long-term data are inevitably undersampled. ### 3.3 Searching for a photometric period We removed the long-term fluctuations normalizing the magnitudes to a common nightly mean. Then we applied the Scargle (1982) algorithm, implemented in the $`MIDAS`$ $`TSA`$ package, which obeys an exponential probability distribution and is especially useful for smooth oscillations. In this statistics, the false alarm probability $`p_0`$ depends on the periodogram’s power level $`z_0`$ through $`z_0\mathrm{ln}N/p_0`$, for small $`p_0`$, where $`N`$ is the number of frequencies searched for the maximum power (Scargle 1982, Eq. 19). In our search we used $`N`$ = 20000, so the 99% confidence level (i.e. those corresponding to $`p_0`$ = 0.01) corresponds to a power $`z_0`$ = 14.5. The range of frequency scanned was between the Nyquist frequency, i.e. 1.7 $`\times 10^3`$ c/d and 1 c/d. After applying the method to the whole dataset many significant aliases appeared around a period 0$`\stackrel{d}{.}`$076. Apparently, the light curve was characterized by a non-coherent or non-periodic oscillation. We decided to start with our more restricted dataset of March 1991. The corresponding periodogram, shown in Fig. 6, shows a strong period at 0$`\stackrel{d}{.}`$0756(12) (108.9 $`\pm `$ 1.7 m, the error correspond to the half width at half maximum of the periodogram’s peak) flanked by the $`\pm `$ 1 c d<sup>-1</sup> aliases at 0$`\stackrel{d}{.}`$070 (the period found by Howell & Szkody 1988) and 0$`\stackrel{d}{.}`$082. The ephemeris for the time of hump maximum is: $$T_{max}=2448333.6186(35)+0\stackrel{d}{.}0756(12)E$$ (1) In order to search for possible period changes we constructed a $`OC`$ diagram based on timings obtained measuring the hump maxima. These timings, given in Table 2, were compared with a test period of 0$`\stackrel{d}{.}`$0756. The $`OC`$ differences versus the cycle number are shown in Fig. 7. Apparently, the period is not changing in a smooth and predictable way. In principle, the $`OC`$ differences are compatible with non-coherent humps and/or period jumps. To explore both possibilities, we searched for seasonal periods. Only datasets of March 1991, 1995 and 1998 were dense enough to construct periodograms. The results, given in Table 3, suggest a non-coherent signal rather than a variable period. In summary, the data are compatible with humps repeating with a period of 0$`\stackrel{d}{.}`$0756(12) but in a non-coherent way. Armed with a photometric period, we constructed seasonal mean light curves. Only nights with fully developed humps were included. The results, shown in Fig. 8, clearly show secondary humps around photometric phase 0.5. These mean light curves are provided as a hint for future light curve modeling. ### 3.4 “Anti-humps” and long-term hump evolution A review of the observations of early 1998 including the diskovery of “anti-humps” was given by Mennickent & Sterken (1999). Here we present a more complete analysis of the phenomenon. Fig. 9 shows in detail the events of early 1998. The light curves have been binned with a period 0$`\stackrel{d}{.}`$0756, accordingly to Table 3. The evolution of the hump is singular. It starts as a 0$`\stackrel{m}{.}`$15 absorption feature (07/01/98) then disappear from the light curve (11/01/98 and 06/02/98) and then re-appears like a small wave (07/02/98) and fully developed symmetrical hump (18/03/98 and 19/03/98). Secondary humps are also visible, with amplitude roughly 60% the main hump amplitude. On February 7 a secondary absorption hump is also visible, along with the main absorption feature. These “anti-humps” appear at the same phases where normal humps develop a month later. A close inspection to the data of February 7 reveals another alternative interpretation: the observed minima could define the base of the humps. We have rejected this hypothesis for three reasons: (1) it does not fit the ephemeris, indicating a possible shift of the hump maximum by about 0.2 cycles, (2) the peak-to-peak distance between main and secondary maxima should be 0.3 cycles instead 0.5 cycles, which is observed the other 3 nights and (3) the secondary maximum should be about 80% of the main peak, contrasting with a value of 60% observed other nights. We provide an interpretation for this phenomenon in the next Section. Fig. 10 shows the hump’s amplitude roughly anticorrelated with the nightly mean magnitude, as occurs in VW Hyi (Warner 1975). As shown in Fig. 9, this anti-correlation is not only due to the increase of hump brightness, but is also a true rise of the total systemic luminosity, through the whole orbital cycle. The outlier in Fig. 10 is a measure by Howell & Szkody (1988) which is a rather doubtful point. In fact, accordingly to these authors, since their primary goal was to obtain differential photometry – not absolute photometry – they calibrated their magnitudes using only a few standards per night. They give a formal error of 0$`\stackrel{m}{.}`$03 for the zero point of RZ Leo, but with so few standards observed, not in the same CCD field, it is difficult to control systematic errors due to variable seeing and atmospheric transparency. In the following, we will omit this outlier from our diskussion. Returning to Fig. 10, we observe that the hump disappears when $`V19`$ and attains maximum amplitude when $`V18.4`$. Surprisingly, the hump becomes “negative” (i.e. an absorption feature) when the system drops below $``$ 19 mag. A linear least squares fit to the hump amplitude $`\mathrm{\Sigma }`$ yields: $$\mathrm{\Sigma }=0.88(8)0.82(11)(V18)$$ (2) where $`V`$ refers to the nightly mean $`V`$ magnitude. ## 4 Discussion ### 4.1 A Moving hot spot ? Any reasonable model for the photometric variability of RZ Leo should reproduce the non-coherent humps and their amplitude variations. It is currently assumed that the humps reflect the release of gravitational energy when the gas stream hits the accretion disk. The disk’s luminosity is produced by the same process when disk gas slowly spirals towards the central white dwarf (e.g. Warner 1995a). An explanation for the varying humps could be a hotpsot moving along the outer disk rim. A bright spot co-rotating with the binary should reflect the binary orbital period, but random translations of the hot spot along the outer disk rim should produce a non-coherent signal. Support for this view arises from the evidence of moving hot spots in some dwarf novae, e.g. KT Per (Ratering et al. 1993) and WZ Sge (Neustroev 1998). The large scatter observed in the $`OC`$ diagram of RZ Leo (up to 0.4 cycles) is atypical for dwarf novae. For example, U Gem (Eason et al. 1983), IP Peg (Wolf et al. 1993) and V 2051 Oph (Echevarría & Alvarez 1993) show quasi-cyclic period variations, of small amplitude, on time scale of years. In these cases, the $`OC`$ residuals are always lower than 0.02 cycles. The interpretation of the changes observed in the above stars is still controversial. ### 4.2 RZ Leonis in the context of WZ Sge stars #### 4.2.1 Evidence for a normal $`\alpha `$ disk In this Section we analyze the events of early 1998. The observed correlation between the hump amplitude and mean brightness might provide important clues on the numerical value of the disk viscosity. The hot spot and disk bolometric luminosity can be approximated by (Warner 1995a, Eq. 2.21a and 2.22a): $$L_s\frac{GM_1\dot{M}_2}{r_d}$$ (3) $$L_d1/2\frac{GM_1\dot{M}_d}{R_1}$$ (4) where $`M_1`$ and $`R_1`$ are the mass and radius of the primary, $`r_d`$ the disk’s radius and $`\dot{M}_2`$ and $`\dot{M}_d`$ the mass transfer and mass accretion rates, respectively. In the following we assume that $`L_s`$ is proportional to the hump’s peak luminosity and $`L_d`$ is proportional to the cycle mean luminosity. The disk luminosity so defined includes some contribution from the hot spot, but it is difficult to exclude the wide and long-lasting humps from the analysis. It is apparent from Fig. 9 and 10 that the increase of hot spot luminosity is followed by an increase of the disk’s luminosity. This effect seems to be true, and not simply a consequence of the hump rising. Accordingly to Eq. 3 and 4, the events of early 1998 may be interpreted as follows: a mass transfer burst starts at the secondary in January 1998 and then continues with increasing $`\dot{M}_2`$, until March 1998. The burst, evidenced in the rising of hump’s luminosity in Fig. 9 triggers an increase of mass accretion rate inside the disk, as observed in the rising of the total systemic luminosity. The time scale for matter diffusion across the accretion disk, is called the viscous time scale (Pringle 1981): $$t_\nu \frac{r_d^2}{\nu _K}$$ (5) where the viscosity is given by the Shakura & Sunyaev (1973) ansatz: $$\nu _K=\alpha c_sH$$ (6) with $`H`$ the half-thickness of the disk and $`c_s`$ the sound velocity. Replacing Eq. 6 in 5 and using typical parameters $`H/r`$ = 0.01, $`r=10^{10}`$ cm, $`c_s=20\times 10^5`$ cm s<sup>-1</sup> we obtain $$\alpha \frac{5\times 10^5}{t_\nu }$$ (7) For the diffusion process observed in RZ Leo $`t_\nu `$ $``$ 6.0 $`\times `$ 10<sup>6</sup> s (70 days), we find $`\alpha `$ = 0.08, a common value among dwarf novae (Verbunt 1982). This value contrasts with the low $`\alpha `$ ($`<<`$ 0.01) invoked to explain the long recurrence times and large amplitude outbursts of some dwarf novae, in particular WZ Sge (Meyer-Hofmeister et al. 1998). Since our observations indicate a rather normal $`\alpha `$, the long recurrence time must be explained by another cause. In this context it is worthy to mention the hypothesis of inner disk depletion. The removal of the inner disk by the influence of a magnetosphere (Livio & Pringle 1992) or the effect of mass flow via a vertically extended hot corona above the cool disk (also referred as “coronal evaporation”, Meyer & Meyer-Hofmeister 1994, Liu et al. 1997, Mineshige et al. 1998) naturally explains the long recurrence times. Spectroscopic evidence indicates that the inner disk depletion might be a common phenomenon in SU UMa stars (Mennickent & Arenas 1998, Mennickent 1999). #### 4.2.2 Evidence for a main sequence like secondary It has been suggested that many large amplitude dwarf novae have bounced off from the orbital period minimum (at $``$ 80 min) and are evolving to longer orbital periods with very old, brown-dwarf like secondaries (Howell et al. 1997). This view is supported by the finding of undermassive secondaries in WZ Sge (Ciardi et al. 1998) and V 592 Her (van Teeseling et al. 1999) and the suspection – based on the “superhump” mass ratio – of this kind of objects in AL Com and EG Cnc (Patterson 1998). In principle, the large amplitude and long cycle length of RZ Leo suggest that this star is an ideal candidate for a post-period minimum system and therefore, for an undermassive secondary. Since superhumps have not been yet detected in this star, the only way to investigate this view is analyzing the flux distribution. We have compiled data from different sources. They are generally non-simultaneous, and may contain possibly significant variations in the emission of the CVs. However, to minimize this effect, we have excluded data taken during outburst, and we have considered data from as few sources as possible and as close together in time as possible. The flux distributions of RZ Leo and other dwarf noave with recognized brown-dwarf like secondaries (and available photometric data) are compared in Fig. 11. The optical-IR flux of a steady disk, scaled to fit the UBV data of RZ Leo, is also shown.<sup>2</sup><sup>2</sup>2In general, the flux distribution of a CV is dominated by the accretion disk in optical wavelengths and by the secondary star in the infrared; the white dwarf and boundary layer mostly contribute to the EUV and X-ray radiation (e.g. Frank et al. 1992). The optical-IR radiation of a infinitely large, steady, optically thick disk can be approximated by a $`\lambda ^{7/3}`$ law (Lynden-Bell 1969). We find that, in contrast with that observed in the objects with undermassive secondaries, the flux distribution of RZ Leo does not drop in the red wavelengths, but rises with respect to the disk’s contribution. This is expected if the secondary were a main-sequence red dwarf. In fact, the $`VK`$ color of RZ Leo (viz. 3.65, Sproats el at. 1996) is representative of a main sequence M0 star (Bessell & Brett, 1988). This is consistent with the finding that most secondaries stars for cataclysmic variables with $`P_o`$ $`<`$ 3 h are close to the solar abundance main sequence defined by single field stars (Beuermann et al. 1998). The above arguments probably rule out the possibility of an undermassive secondary in RZ Leo. Our results indicate that large amplitude – long cycle length – dwarf novae might not necessarily correspond to objects in the same evolutive stage. We have shown that, in spite of the extreme cycle length and outburst amplitude, RZ Leo can not be properly named a WZ Sge like star, as suggested in the Ritter & Kolb catalogue (1998). ### 4.3 Anti-humps The ratio between hot spot and disk luminosity, for the case of an optically thick, steady state accretion disk and a simple planar bright spot on the edge of the disk, is (Warner 1995a, Eq. 2.71): $$\frac{L_s^V}{L_d^V}=f\frac{\mathrm{tan}i}{1+1.5\mathrm{cos}i}\frac{\dot{M}_2}{\dot{M}_d}\frac{R_1}{r_d}10^{0.4(B_{sp}B_d)}$$ (8) where $`i`$ is the systemic inclination, $`f`$ an efficience factor $``$ 1 and $`B_{sp}`$ and $`B_d`$ are the bolometric corrections ($`<`$ 0) for the spot and disk respectively. Roche-lobe geometry and the assumption of a disk radius 70% the Roche lobe radius (an usually good approximation for dwarf novae), yield to: $$\frac{R_1}{r_d}\frac{0.40q^{2/3}}{P_o^{3/2}(hr)}$$ (9) In addition, hot spot and disk temperatures inferred for dwarf novae indicate $`B_{sp}`$ $``$ $`B_d`$ (Warner 1995a’s diskussion after Eq. 2.73 and references therein). Therefore we obtain: $$\frac{L_s^V}{L_d^V}h(i,q,P_o)\frac{\dot{M_2}}{\dot{M_d}}$$ (10) where $`h(i,q,P_o)=\frac{f\mathrm{tan}i}{1+1.5\mathrm{cos}i}\frac{0.40q^{2/3}}{P_o^{3/2}(hr)}`$ is a function with a numerical value in the range 0.03–0.3 for most practical purposes. The condition of “anti-humps” is given by: $$\frac{L_s^V}{L_d^V}<1$$ (11) The above equations suggest that the apparition of “anti-humps” in a single system depends on the relative values of $`\dot{M}_d`$ and $`\dot{M}_2`$. In particular, for RZ Leo, assuming the orbital and photometric periods equals, a mass ratio of 0.15, i.e. representative for dwarf novae below the period gap (e.g. Mennickent et al. 1999), and a moderate inclination angle of 65<sup>o</sup>, this occurs when $`\dot{M}_2<6.25\dot{M}_d`$ (assuming $`f`$ =1). The rarity of the phenomenon indicates that $`L_s<L_d`$ is a condition rarely fulfilled among dwarf novae and that $`\dot{M}_2`$ is probably always larger or equal than $`\dot{M}_d/h`$. Systems with large amplitude humps are candidates for $`\dot{M}_2`$ $`>`$ $`\dot{M}_d/h`$ whereas high inclination systems with no prominent humps (e.g. WX Cet, Mennickent 1994) are candidates for $`\dot{M}_2`$ $``$ $`\dot{M}_d/h`$. We can estimate the mass accretion rate from the recurrence time: $$\dot{M}_d=\frac{880}{T_s(d)}\times 10^{15}gs^1T_s900d$$ (12) (Eq. 37 by Warner 1995b). Using a supercycle length $`T_s>`$ 2 yr we obtain $`\dot{M}_d<1.2\times 10^{15}`$ g s<sup>-1</sup>. This implies that $`\dot{M}_2<7.5\times 10^{15}`$ g s<sup>-1</sup> is required to develop “anti-humps”. This condition is easily satisfied if the mass transfer rate is driven by gravitational radiation, as expected for a dwarf novae below the period gap. In this case, using the system parameters given above, we estimate: $$\dot{M}_2^{GR}=2.2\times 10^{15}gs^1$$ (13) from Eq. 9.20 by Warner (1995a). Since the mass accretion rate $`\dot{M}_d`$ is proportional to the viscosity (e.g. Cannizzo et al. 1998), an extremely low $`\alpha `$ disk is not a good site for developing “anti-humps”. The reason is that, in this case, the condition imposed on the mass transfer rate to satisfy Eq. 11 is too strong, requiring, probably, unrealistic low $`\dot{M}_2`$ values. Therefore the presence of “anti-humps” in RZ Leo is consistent with the normal $`\alpha `$ found in the previous section. ## 5 Conclusions * The light curve of RZ Leo during a time interval of 11-years is characterized by highly variable humps. * A non-coherent photometric period of 0$`\stackrel{d}{.}`$0756(2) is consistent with the data. * The hump amplitude is anti-correlated with the stellar mean brightness. * A new phenomenon was reported: the presence of “anti-humps” when the system is faint. * Anti-humps might result from a regime of very low mass transfer rate and normal alpha disks. * Non-coherent humps are compatible with a non-steady hot spot. * The rapid response of the accretion disk to the enhanced mass transfer rate evidenced by the phenomena of early 1998 (Fig. 9) suggests a disk with a normal viscosity parameter $`\alpha `$ $``$ 0.08. * The possibility of an undermassive secondary is rejected by arguments concerning the observed optical and infrared flux distribution. ###### Acknowledgements. This work was partly supported by Fondecyt 1971064 and DI UdeC 97.11.20-1. Support for this work was also provided by the National Science Foundation through grant number GF-1002-98 from the Association of Universities for Research in Astronomy, Inc., under NSF Cooperative Agreement No. AST-8947990. C. Sterken acknowledges a research grant of the Fund for Scientific Research Flanders (FWO).
no-problem/9910/astro-ph9910191.html
ar5iv
text
# Compton Dragged Gamma–Ray Bursts associated with Supernovae ## 1. Introduction In the leading scenario for GRBs and afterglows, the gamma–ray event is produced by internal shocks in a hyper–relativistic inhomogeneous wind (Rees & Mészáros 1994) while the afterglow is produced as the fireball drives a shock wave in the external interstellar medium (Mészáros & Rees 1997). Even if there is a large consensus that both gamma–rays and afterglow photons are produced by the synchrotron process, recently some doubts have been cast on the synchrotron interpretation for the burst itself (Liang 1997; Ghisellini & Celotti 1998; Ghisellini, Celotti & Lazzati 1999). The nature of the progenitor is still a matter of active debate, as the sudden release of a huge amount of energy in a compact region generating a fireball does not keep trace of the way this energy has been produced. For this reason, the study of the interactions of the fireball with the surrounding medium seems to be the most powerful mean to unveil the GRB progenitor. At least two models are in competition: the merging of a binary system composed of two compact objects (Eichler et al. 1989) and the Hypernova–Collapsar model (Woosley 1993, Paczyński 1998), i.e. the core collapse of a very massive star to form a black hole. After the discovery and the multiwavelength observations of many afterglows, circumstantial evidence has accumulated for GRB exploding in dense regions, associated to supernova–like phenomena. In fact, (a) host galaxies have been detected in many cases (Sahu et al. 1997; see Wheeler 1999 for a review), and some of them show starburst activity (Djorgovski et al. 1998, Hogg & Fructher 1999); (b) large hydrogen column densities have sometimes been detected in X–ray afterglows (Owens et al. 1998); (c) non–detections of several X–ray afterglows in the optical band can be due to dust absorption (Paczyński 1998); (d) a possible iron line feature has been detected in the X–ray afterglow of GRB 970508 (Piro et al. 1999, Lazzati et al. 1999) and (e) the rapid decay with time of several afterglows can be explained by the presence of a pre–explosion wind from a very massive star (Chevalier & Li 1999). More recently, the possible presence of supernova (SN) emission in the late afterglows light curves of GRB 970228 (Galama et al. 1999, Reichert 1999) and GRB 980326 (Bloom et al. 1999) has added support in favor of the association of some GRBs with the final evolutionary stages of massive stars. Although in these models the available energy is larger than in the case of compact binary mergers, the very small efficiency of internal shocks (see, e.g., Spada, Panaitescu & Mészáros 1999) seems to be inconsistent with the fact that more energy can be released during the burst proper than the afterglow (Paczyński 1999, see also Kumar & Piran 1999). In this letter we show that if GRBs are associated with supernovae, Compton drag inside the relativistic wind can produce both the expected energetics and the peak energy of the spectrum of a classical long duration GRB. In this new scenario the efficiency is not limited by internal shock interactions, and the successful modeling of afterglows with external shocks is left unaffected. The Compton drag effect has been already invoked for GRBs by Zdziarski et al. (1991) and Shemi (1994). Cosmic background radiation (at high redshift), central regions of globular clusters and AGNs were identified as plausible sources of soft photons, but none of these scenarii was able to account for all the main properties of GRBs. However, the growing evidence of association of GRB explosions with star–forming regions and supernovae opens new perspectives for this scenario. ## 2. Compton drag in a relativistic wind We consider a relativistic ($`\mathrm{\Gamma }1`$) wind of plasma propagating in a bath of photons with typical energy $`ϵ_{\mathrm{seed}}`$. A fraction $`\mathrm{min}(1,\tau _\mathrm{T})`$ of the photons are scattered by the inverse Compton (IC) effect to energies $`ϵ\mathrm{\Gamma }^2ϵ_{\mathrm{seed}}`$, where $`\tau _\mathrm{T}`$ is the Thomson opacity of the wind. Due to relativistic aberration, the scattered photons propagate in a narrow cone forming an angle $`1/\mathrm{\Gamma }`$ with the velocity vector of wind propagation. By this process, a net amount of energy $`E_{\mathrm{CD}}`$ is converted from kinetic energy of the wind to a radiation field propagating in the direction of the wind itself, where $`E_{\mathrm{CD}}\mathrm{min}(1,\tau _\mathrm{T})Vu_{\mathrm{rad}}(\mathrm{\Gamma }^21)`$. $`V`$ is volume filled by the soft photon field of energy density $`u_{\mathrm{rad}}`$ swept up by the wind. Let us assume that the GRB fireball, instead of being made by a number of individual shells (see e.g. Lazzati et al. 1999), is an unsteady (both in velocity and density) relativistic wind, expanding from a central point. After an initial acceleration phase, the density of the outflowing wind decreases with radius as $`n(r)r^2`$, giving a scattering probability $`\mathrm{min}[1,(r/r_0)^2]`$, where $`r_0`$ is the radius at which the scattering probability equals unity. After the first scattering, the photons propagates in the same direction of the flow and the probability of a second scattering is reduced by a factor $`\mathrm{\Gamma }^2`$. If such a wind flows in a radiation field with energy density $`u_{\mathrm{rad}}(r)`$, the total energy transferred to the photons when the fireball reaches a distance $`R`$ is given by<sup>1</sup><sup>1</sup>1All the calculations are made in spherical symmetry. In case of beaming, all the quoted numbers should be considered as equivalent isotropic values.: $$E_{\mathrm{CD}}(R)=4\pi \mathrm{\Gamma }^2\left[_0^{r_0}u_{\mathrm{rad}}(r)r^2𝑑r+_{r_0}^Rr_0^2u_{\mathrm{rad}}(r)r^2𝑑r\right]$$ (1) where for simplicity we assume that a constant $`\mathrm{\Gamma }`$ has been reached (see also Section 3). The transparency radius $`r_0`$ depends on the baryon loading of the fireball, which is parameterized by $`\eta _\mathrm{b}E/(Mc^2)`$, where $`E/M`$ is the ratio between the total energy and the rest mass of the fireball. Then $`r_0`$ is given by<sup>2</sup><sup>2</sup>2Here and in the following we adopt the notation $`Q=10^xQ_x`$, using cgs units: $$r_0=5.9\times 10^{13}E_{52}^{1/2}\eta _{\{\mathrm{b},2\}}^{1/2}\mathrm{cm}.$$ (2) ### 2.1. A simple scenario We initially consider a simple scenario which can illustrate the basic features of the Compton drag effect. Let us assume that the GRB is triggered at a time $`\mathrm{\Delta }t`$ (of the order of a few hours) after the explosion of a supernova (Woosley et al. 1999; Cheng & Dai 1999). By this time, the supernova ejecta, moving with velocity $`\beta _{\mathrm{SN}}c`$, have reached a distance $`R_{\mathrm{SN}}=v_{\mathrm{SN}}\mathrm{\Delta }t5.4\times 10^{13}\beta _{\{\mathrm{SN},1\}}(\mathrm{\Delta }t/5\mathrm{hr})`$ cm. Let us also imagine that the supernova explosion is asymmetric, e.g. with no ejecta in the polar directions. Despite this asymmetry, the ejecta uniformly fill with radiation the entire volume within $`R_{\mathrm{SN}}`$. If $`R_{\mathrm{SN}}<r_0`$, the energy extracted by Compton drag is: $`E_{\mathrm{CD}}`$ $`=`$ $`{\displaystyle \frac{4\pi R_{\mathrm{SN}}^3}{3}}\mathrm{\Gamma }^2u_{\mathrm{rad}}R_{\mathrm{SN}}r_0`$ (3) $`E_{\mathrm{CD}}`$ $`=`$ $`{\displaystyle \frac{4\pi r_0^3}{3}}\mathrm{\Gamma }^2u_{\mathrm{rad}}\left(3{\displaystyle \frac{R_{\mathrm{SN}}}{r_0}}2\right)R_{\mathrm{SN}}>r_0.`$ (4) According to Woosley et al. (1994), the average luminosity of a type II supernova<sup>3</sup><sup>3</sup>3This luminosity decreases by a factor $`100`$ for type Ibc supernovae, while the typical frequency increases by a factor of 10. during $`\mathrm{\Delta }t`$ is of the order of $`L_{\mathrm{SN}}10^{44}`$ erg s <sup>-1</sup>, with a black body emission at a temperature $`T_{\mathrm{SN}}10^5`$ K. It follows that in this case $`u_{\mathrm{rad}}=aT_{\mathrm{SN}}^47.6\times 10^5T_{\{\mathrm{SN},5\}}^4`$ erg cm<sup>-3</sup> (consistent with $`R_{\mathrm{SN}}`$ assumed above). The efficiency $`\xi `$ of Compton drag in extracting the fireball energy is very large; from Eq. 3 we obtain: $$\xi \frac{E_{\mathrm{CD}}}{E}0.6E_{52}^1\beta _{\{\mathrm{SN},1\}}^3\left(\frac{\mathrm{\Delta }t}{5\mathrm{h}}\right)^3T_{\{\mathrm{SN},5\}}^4\mathrm{\Gamma }_2^2,$$ (5) Note here that a high efficiency can be reached even for $`\mathrm{\Gamma }100`$. Note that the drag itself can limit the maximum speed of the expansion – even in a wind with a very small barion loading – as discussed in Sect. 3. Each seed photon is boosted by $`2\mathrm{\Gamma }^2`$ in frequency, yielding a spectrum peaking at $`h\nu 2\mathrm{\Gamma }^2(3kT_{\mathrm{SN}})0.5\mathrm{\Gamma }_2^2T_{\{\mathrm{SN},5\}}`$ MeV. ### 2.2. A more realistic scenario The previous scenario requires that the GRB explodes a few hours after a supernova. There is however a plausible alternative, independent of whether the massive ($`>30M_{}`$) star (assumed to be the progenitor of the GRB) ends up with a supernova explosion or not, and can produce a gamma–ray burst even if the relativistic flow and the core collapse of the progenitor star are simultaneous or separated by a relatively small time interval (Woosley et al. 1999; MacFadyen, Woosley & Heger 1999). In fact there is a somewhat general consensus (e.g. MacFadyen & Woosley 1999; Aloy et a. 1999, but see also Khokhlov et al. 1999) that a relativistic wind can flow in a relatively baryon free funnel created by a bow shock following the collapse of the iron core of the star. Even if details of this class of models are still controversial, the formation of the funnel seems to be a general outcome. Let us estimate its luminosity and more precisely the amount of energy in radiation crossing the funnel walls at a time $`t_\mathrm{f}`$ after its creation. With respect to the total luminosity of the star, assuming it radiates at its Eddington limit $`L_{\mathrm{Edd}}`$, there would be a reduction by the geometrical factor equal to the ratio of the funnel to star surfaces, which is of the order of the funnel opening angle $`\vartheta `$. However, immediately after its creation, the funnel luminosity is much larger than $`\vartheta L_{\mathrm{Edd}}`$, due to two effects which we discuss in turn. First the walls of the funnel contain an enhanced amount of radiation with respect to the surface layers of the star: the radiation once “trapped” in the interior of the star can escape through the funnel walls, thus enhancing the luminosity inside the funnel for a short time. Photons produced at a distance $`s`$ from the wall surface cross it at a time $`t_\mathrm{f}\tau _\mathrm{s}s/c=\sigma ns^2/c`$, where $`\sigma `$ is the relevant cross section. This compares with the Kelvin time $`t_\mathrm{K}\sigma nR_{}^2/c`$ needed for radiation to reach the star surface, yielding $`s/R_{}(t_\mathrm{f}/t_\mathrm{K})^{1/2}`$. After the time $`t_\mathrm{f}`$, the radiation produced in the layer of width $`ds`$ crosses the funnel surface carrying the energy $`dE_\mathrm{f}\vartheta \tau _{}L_{\mathrm{Edd}}ds/c`$, the corresponding luminosity being: $$L_\mathrm{f}\frac{\vartheta }{2}L_{\mathrm{Edd}}\left(\frac{\tau _{}R_{}}{ct_\mathrm{f}}\right)^{1/2}.$$ (6) For $`t_\mathrm{f}=100`$ s and a $`10M_{}`$ star with $`R_{}10^{13}`$ cm ($`\tau _{}10^8`$), this effect can enhance the funnel luminosity by $`10^4`$. Let us now consider a second plausible enhancing factor. If the funnel has been produced by the propagation of a bow–shock in the star, the matter in front of the advancing front is compressed, with a pressure increase of $`^2`$, where $``$ is the Mach number of the shock in the star. This (optically thick) gas then flows along the sides of the funnel and relaxes adiabatically to the pressure of the external matter (its original pressure). The result is that the funnel is surrounded by a sheath (cocoon) with density lower than that of the unshocked stellar material by a factor $`^{3/2}`$ (a polytropic index of $`4/3`$ has been used in the adiabatic cooling). The diffusion of photons through this rarefied gas into the funnel is then even faster, resulting in a further increase of the luminosity by $`^{3/4}200`$, where a shock speed $`\beta _{\mathrm{sf}}c=0.1c`$ (MacFadyen & Woosley 1999) and a sound speed $`\beta _\mathrm{s}c=10^4c`$ have been assumed. By taking into account both effects, the funnel luminosity corresponds to: $$L_\mathrm{f}L_{\mathrm{Edd}}\frac{\vartheta }{2}\left(\frac{\tau _{}R_{}}{ct_\mathrm{f}}\right)^{1/2}\left(\frac{\beta _{\mathrm{sf}}}{\beta _\mathrm{s}}\right)^{3/4}10^{45}\vartheta _1\frac{M_{}}{10M_{}}\mathrm{erg}\mathrm{s}^1,$$ (7) which leads to an energy loss for Compton drag $`L_{\mathrm{CD}}\mathrm{\Gamma }^2L_\mathrm{f}10^{49}\vartheta _1\mathrm{\Gamma }_2^2(M_{}/10M_{})`$, to be compared with the observed luminosity $`L_{\mathrm{GRB}}10^{49}\pi \vartheta _1^2`$ erg s<sup>-1</sup>. Here the average luminosity is considered over the entire burst duration: for single pulses, we should take into account an extra factor $`\mathrm{\Gamma }^2`$ in Eq. 7 due to the Doppler contraction of the observed time. The typical radiation temperature associated with this luminosity, assuming a black body spectrum, is enhanced with respect to the temperature of the star surface by $`[L_\mathrm{f}/(\vartheta L_{\mathrm{Edd}})]^{1/4}(\tau _{}R_{})^{1/8}(ct_\mathrm{f})^{1/8}(\beta _{\mathrm{sf}}/\beta _\mathrm{s})^{3/16}`$. Adopting the numerical values used above, the enhancement is of the order of 50, corresponding to a funnel temperature $`T_\mathrm{f}2\times 10^5`$ K (for a surface temperature of the star of $``$5000 K). This value is similar to the one estimated in the simple scenario of the previous subsection and thus leads to similar Compton frequencies. ## 3. Properties of the observed bursts If the wind is homogeneous the spectrum of the scattered photons resembles that of the incident photons, i.e. a broad black–body continuum peaked at a temperature $`T_{\mathrm{drag}}2\mathrm{\Gamma }^2T`$. While the observed characteristic photon energy would be therefore $`ϵ0.5\mathrm{\Gamma }_2^2T_5(1+z)^1`$ MeV, in good agreement with the observed distribution of peak energies of BATSE GRBs (assuming again $`\mathrm{\Gamma }=100`$, see below), the spectrum would not reproduce the observed smoothly broken power–law shape (Band et al. 1993). The assumptions of a perfectly homogeneous wind and of an isothermal radiation field are however very crude, and one might reasonably expect that different regions of the wind are characterized by different values of $`\mathrm{\Gamma }`$ and different soft field temperatures. If we assume, e.g., that the temperature of the soft photon field varies with radius according to a power–law $`T(r)r^\delta `$, the time integrated spectrum will have a high energy power–law tail $`F(\nu )\nu ^{\frac{33\delta }{\delta }}`$. In addition, the bulk Lorentz factor of the flow is likely to vary on a timescale much shorter than the integration time required to obtain a spectrum with the BATSE data ($`1`$ s), and hence the analysed spectra are the superposition of drag spectra by many different Lorentz factors. A third effect adding power to the high energy tails of the spectrum is the reflection of up–scattered photons in the pre–supernova wind. This photons are scattered again by the fireball and can reach energies of $`0.5\mathrm{\Gamma }`$ MeV $`50\mathrm{\Gamma }_2`$ MeV. The computation of the actual spectrum resulting from all these effects depends from many assumptions and is beyond the scope of this work. The effects described above, which can increase the funnel luminosity over the Eddington limit, take place in non–stationary conditions. At the wind onset, it is likely that the temperature gradient in the walls of the funnel is large, but this is soon erased due to the high luminosity of the walls. This causes both the total flux and the characteristic frequency of the soft photons to decrease, and hence a hard–to–soft trend is expected. Moreover, it has been shown by Liang & Kargatis (1996) that the peak frequency of the spectrum in a single pulse at time $`t`$ is strongly related to the flux of the pulse integrated from the beginning of the pulse to the time $`t`$. In our scenario, this behaviour can be easily accounted for if we consider a shell slowed down by the drag itself: the Lorentz factor (and hence the peak frequency of the spectrum) at a time $`t`$ is related to the energy lost by the shell, i.e. to the integral of the flux from the beginning of the pulse to the time $`t`$. The observed minimum variability time–scale is related to the typical size of the region containing the dense seed photon field, which corresponds to either $`R_{}`$ or $`R_{\mathrm{SN}}`$ depending on which of the two scenarios described above applies. The relevant light crossing time – divided by the time compression factor – is thus $$t_{\mathrm{var}}\frac{R}{c\mathrm{\Gamma }^2}3\times 10^2R_{13}\mathrm{\Gamma }_2^2\mathrm{s}.$$ (8) Longer time–scales are instead expected if the relativistic wind is smooth and continuous. Another interesting feature of this scenario is the possibility that the bulk Lorentz factor of the wind is self–consistently limited by the drag itself. The pressure of the soft photons starts braking the fireball in competition with the pressure of internal photons. The limiting Lorentz factor is hence reached when the internal pressure $`p_{\mathrm{fb}}^{}(T_0/\mathrm{\Gamma })^4`$ is balanced by the pressure of the external photons as observed in the fireball comoving frame $`p^{}\mathrm{\Gamma }^2T_{\mathrm{SN}}^4(1+\tau _\mathrm{T})^1`$, where $`\tau _\mathrm{T}`$ is the scattering optical depth of the wind. This gives: $$\mathrm{\Gamma }_{\mathrm{lim}}2\times 10^4T_{\{\mathrm{SN},5\}}^{1/2}E_{52}^{1/4}R_{\{0,7\}}^{5/8}\eta _{\{\mathrm{b},5\}}^{1/8},$$ (9) where $`R_0`$ is the radius at which the fireball is released. Equation 9 reduces to $`\mathrm{\Gamma }_{\mathrm{lim}}10^4(T_{0,11}/T_{\mathrm{SN},5})^{2/3}`$ if the fireball becomes transparent before reaching the coasting phase. With such high $`\mathrm{\Gamma }`$ the Compton drag would be maximally efficient, causing the fireball to immediately decelerate until its $`\mathrm{\Gamma }`$ reaches the value given by $`L_{\mathrm{CD}}=L_\mathrm{f}\mathrm{\Gamma }^2`$, implying: $$\mathrm{\Gamma }=\left(\frac{L_{\mathrm{kin}}}{L_\mathrm{f}}\right)^{1/2}\mathrm{\hspace{0.17em}300}\left(\frac{L_{\{\mathrm{kin},50\}}}{L_{\{\mathrm{f},45\}}}\right)^{1/2}.$$ (10) These limits are in general smaller than the maximum $`\mathrm{\Gamma }`$ set by the baryon load only, but still in agreement with values recently inferred for GRB 990123 (Sari & Piran 1999). In addition, it is likely that the external parts of the relativistic wind, which are in closer connection with the funnel walls, are dragged more efficiently then the central ones, since at the beginning the soft photons coming from the walls can penetrate only a small fraction of the funnel before being up–scattered by relativistic electrons. This may result in a polar structured wind, with higher Lorentz factors along the symmetry axis, gradually decreasing as the polar angle increases. ## 4. Discussion A crucial requirement of our model is the association of GRBs with the final evolutionary stages of very massive stars, as these provide the large amount of seed photons emitted at distances $`10^13`$ cm from the central trigger, which are neeeded for the Compton drag to be efficient. The efficiency of conversion of bulk kinetic energy of the flow into gamma–ray photons is large, solving the observational challenge of gamma–ray emission being more energetic than the afterglow one (Paczyński 1999). Furthermore in this scenario there is no requirement for either efficient acceleration in collisionless shocks or the presence/generation of an intense (equipartition) magnetic field, although Poynting flux may still be important in accelerating the outflow (being more efficient than neutrino reconversion into pairs). We have investigated the main properties of a GRB produced by Compton drag in a relativistic wind in a very general case. A moderately beamed burst ($`\vartheta 10^{}`$, Woosley et al. 1999) can be thus produced and, without any fine tuning of the parameters, the basic features of classic GRBs are accounted for. In particular, the peak energy of the burst emission simply reflects the temperature of the supernova seed photons, up–scattered by the square of the bulk Lorentz factor. The simplest hypothesis predicts a quasi–thermal spectrum, however it is easy to imagine an effective multi–temperature distribution which would depend on unconstrained quantities such as the variation of the spectrum of the SN photons with radius and the degree of inhomogeneity of the wind. Although in this scenario there is no requirement for internal shocks to set up, they can of course occur, contributing a small fraction of the observed gamma–ray flux. On the other hand, the wind is expected to escape from the funnel of the star with still highly relativistic motion, so that an external shock can be driven in the interstellar medium and produce an afterglow, similar to the scenario already studied by several authors. It is likely that this afterglow would develop in a non–uniform density medium, due to the presence of the massive star wind occurring before the supernova explosion (Chevalier & Li 1999). We thank Andy Fabian, Francesco Haardt, Piero Madau and Giorgio Matt for many stimulating discussions. DL thanks the Institute of Astronomy for the kind hospitality during the preparation of this work. The Cariplo Foundation (DL) and Italian MURST (AC) are acknowledged for financial support.
no-problem/9910/astro-ph9910244.html
ar5iv
text
# Design and Testing of a Prototype Pixellated CZT Detector and Shield for Hard X-Ray Astronomy ## 1 INTRODUCTION Hard X-ray and gamma-ray detectors made of Cadmium Zinc Telluride (CZT) hold great promise for advancing the state of X-ray and gamma-ray astronomy instrumentation. The intrinsic energy resolution of semiconductor detectors is far greater than that of scintillators, and the use of pixel or strip electrode readouts allows far greater spatial resolution. The high density of CZT permits the photoelectric absorption of photons up to $`500`$ keV with reasonable thicknesses ($`5`$ mm), and the high bandgap allows detectors to operate at room temperature (as opposed to germanium). It has been shown that the poor hole transport properties of CZT can be overcome by special readout electrode geometries that are sensitive to the motion of electrons only. We have been pursuing a program to develop CZT detectors for astronomy applications, focusing specifically on the needs of a wide-field-of-view survey telescope operating in the hard X-ray band between 20 and 600 keV, such as the EXIST or EXIST-LITE concept. In this paper we describe the CZT instruments we are preparing for a balloon flight in April 2000 as piggyback experiments on the Harvard EXITE2 payload. They are designed to test several of the key techniques that are needed to construct a large-area CZT detector plane, and to measure the CZT background at balloon altitudes with two different shielding configurations under consideration for a wide-field survey instrument. ## 2 TECHNICAL ISSUES IN CONSTRUCTING A HARD X-RAY SURVEY TELESCOPE Many technical issues must be addressed before hard X-ray detectors suitable for astronomy can be constructed. The only practical method for imaging between 100 keV and 500 keV is the coded aperture technique, which requires a large area, position-sensitive detector. The wide field of view ($`45^{}`$) of an individual survey telescope module together with the thick detectors needed for high energy response require relatively large pixels (1.5–2.0 mm) to avoid projection effects. Our work so far had thus focused on thick (5 mm) detectors with 1–2 mm pixels. At the same time, the resistivity of the material must be large to keep the noise due to leakage current from degrading the energy resolution, a problem made worse by large pixels. Thus we have also investigated the use of blocking contacts made from PIN junctions to reduce leakage current noise. The sensitivity intended for EXIST ($`<0.1`$ mCrab) requires several square meters of detector material. Presently the method of CZT crystal growth yielding the highest resistivity is the high-pressure Bridgman (HPB) technique. Although crystals grown in this manner have high resistivity, the process is costly since the yield of defect-free crystals any larger than 10 mm $`\times `$ 10 mm is low. A major issue is then the construction of a large area detector by closely tiling thousands of small (1 cm<sup>2</sup>) pixellated crystals with minimal dead space at at minimum cost, while incorporating the associated readout electronics. Recently IMARAD Imaging Systems has begun producing CZT using a modified horizontal Bridgman (HB) process which allows the growth of larger crystals (40 mm $`\times `$ 40 mm) at higher yield and thus lower cost. The drawback is that the crystals have lower resistivity and thus higher leakage current. We are investigating IMARAD CZT detectors using various blocking contacts to reduce this leakage current noise. The main technical challenges are then creating a closely-tiled array out of small detector elements, reading out the thousands of pixels with the detectors and electronics in a compact package, and processing the signals from each channel. We have already begun working with IDE Corporation to test low-noise preamps and shaping amps in the form of application specific integrated circuits (ASICs) that are made as small as possible. The new CZT experiment described in detail in Section 4 begins to meet these challenges by placing two 10 mm $`\times `$ 10 mm $`\times `$ 5 mm CZT detectors with $`2`$ mm pixels next to each other on a common carrier board in such a way that the pixel pitch is preserved. The pixels are mounted in “flip-chip” fashion such that the pixels are read out directly into traces on the carrier board and fed into a 32-channel ASIC controlled by a PC/104 single-board computer. One detector is eV Products HPB material, the other IMARAD HB material with blocking contacts, which allows comparitive measures of in-flight background and performance under very similar conditions. ## 3 DESIGN OF THE BACKGROUND EXPERIMENT Another critical issue for hard X-ray astronomy instrumentation is the background level in the detector system. Astronomical sources are faint in the hard X-ray range, and in coded aperture telescopes the noise per pixel is determined by the total counts in the entire detector. To achieve good signal-to-noise, therefore, the detector background must be kept to a minimum. Typically the background in balloon payloads is due to a combination of diffuse cosmic gamma-rays with gamma-ray photons and energetic particles resulting from cosmic ray interactions in the atmosphere and in the payload itself. Effective shielding requires a detailed knowledge of the physical processes that produce background counts in a given detector material. These processes must be determined by measurements and simulations. In May 1997 we flew a simple CZT background experiment consisting of a single-element CZT detector (10 mm $`\times `$ 10 mm $`\times `$ 2 mm) completely shielded with a passive Pb/Sn/Cu cup covering the front and sides and actively-shielded by a large BGO crystal to the rear. We found that the background rate in this CZT/BGO detector was reduced by a factor of $`6`$ when BGO triggers were used to veto events, and that the resulting “good event” rate ($`9\times 10^4`$ cts cm<sup>-2</sup> s<sup>-1</sup> keV<sup>-1</sup> at 100 keV) could be explained using GEANT simulations that included only gamma-rays leaking through the shields and produced in the surrounding passive material. Gamma-ray interactions alone could not, however, explain the six-times higher background that was rejected by the BGO, indicating that an internal activation component was also present that was effectively vetoed by the active shield. An additional goal in flying the new pixellated detectors is to make another measurement of the flight background spectrum with new shielding configurations, as well as to study its spatial distribution. The large reduction in background achieved with the active shield in the CZT/BGO experiment led us at first to consider an active collimator of CsI combined with a rear CsI shield for the current imaging detector experiment. (BGO was not considered due to its cost.) However, detailed simulations have found that surrounding CZT detectors with thick material leads to increased background from activation, even if that material is active, and that a thin passive collimator with an active rear shield is preferable. We investigated the reduction in gamma-ray-generated background expected from an active collimator by performing two simple Monte-Carlo simulations using the CERN Program Library simulations package GEANT: a CZT detector at the bottom of a square well active collimator ($`45^{}`$ field of view) made of 2.5 cm thick CsI, and a detector at the bottom of a square graded passive collimator made of 4 mm of Pb, 1 mm of Sn, and 1 mm of Cu. In both cases the CZT sat in front of a 2.5 cm thick CsI rear shield. Only the interactions of cosmic and atmospheric gamma-rays were considered (the passive collimator was assumed to be surrounded by plastic scintillator that vetoed gamma-rays produced locally from particle interactions). The spectra recorded per volume of CZT are shown in Figure 1. The CsI shield threshold was 50 keV. The spectrum recorded with the passive collimator is practically identical to that found with the active collimator at low energies, where aperture flux dominates, and is only $`50`$% higher at several hundred keV, suggesting that un-vetoed Compton scatters have only a modest effect on the overall background. This, together with the activation simulations mentioned above, indicates that an active collimator may not be optimal. The factor of 6 reduction observed by the CZT/BGO experiment indicates that a large internal background is being generated in the CZT and vetoed by the BGO. This background is not due to shield-leakage gamma-rays, but presumably particle interactions such as prompt (n,$`\gamma `$) reactions. Thus we believe an active shield must be included in any CZT hard X-ray telescope. Though it has little effect on the gamma-ray component, would an active collimator further reduce the internal background component? In addition to our CZT/BGO balloon flight experiment, two other CZT background measurements at balloon altitudes have been performed using an active rear shield and active or passive collimation, and these have influenced the design of the present experiment. In 1995, a group from Goddard Space Flight Center flew the CZT experiment PoRTIA, containing a 25.4 mm $`\times `$ 25.4 mm $`\times `$ 1.9 mm planar detector, in a number of configurations, including passively-collimated to a $`10^{}`$ field of view while sitting on top of a thick NaI crystal. In 1998 groups from Washington University, St. Louis (WUSTL) and the University of California-San Diego (UCSD) flew a 12 mm $`\times `$ 12 mm $`\times `$ 2 mm CZT detector with orthagonal strip electrodes in several different configurations, including one with active CsI shielding to the rear and sides with a passive collimator ($`20^{}`$ field of view), and one in which the passive collimator was replaced with an active NaI collimator. In Figure 1 we have included the spectrum recorded by PoRTIA in the passive collimator/active rear shield, and the spectra measured by the WUSTL/UCSD detector in both the active shield/passive collimator and fully actively-shielded cases. All spectra are shown per volume to allow for differences in detector thickness. The PoRTIA background is $`3`$ times higher than the GEANT spectrum at all but the lowest energies, where the larger aperture flux from a 45 field of view dominates. It is not clear why the PoRTIA background is so much higher in this configuration, but it could be caused by local gamma-ray production in the passive collimator. The WUSTL/UCSD passively-collimated spectrum is slightly higher than the GEANT prediction above 150 keV while the actively-collimated spectrum agrees quite well with it. The active collimator reduces the background above 100 keV by a factor of 1.6–2. Below 100 keV the higher aperture flux assumed in the GEANT spectrum dominates. (The WUSTL/USCD experiment also used a depth-sensing technique to reduce background at low energies, although this correction has not been applied to the spectra shown in Figure 1.) The WUSTL/UCSD background was 10 times higher when both the rear and collimating shields were turned off. These results indicate that active shielding, together with plastic particle shielding around passive material, is essential for achieving low backgrounds in CZT hard X-ray telescopes, and the simple GEANT simulations represent the “best-case” scenario of no internal background. The rear active shield does most of the work, however, and the additional reduction from the active collimator may not be worth the added complexity and the volume taken up by thick scintillator crystals, especially if small segments of the CZT array need to be collimated individually. We therefore decided not to fly an active collimator with our imaging detectors. To measure directly the importance of active shielding for our wide field of view survey telescope application, we have decided to fly two simultaneous background experiments on the next flight of the EXITE2 payload. The new pixellated, tiled detectors will be flown entirely passively-shielded, with a passive collimator (45 field of view) surrounded by plastic scintillator in the front and a passive/plastic rear shield in back. At the same time, the CZT/BGO detector will be flown again with the Pb/Sn/Cu cup in front replaced by a passive/plastic collimator identical to that on the pixellated CZT detector. ## 4 DESCRIPTION AND TESTING OF INSTRUMENT The present experiment tests several of the key elements discussed in Section 2 that are needed for the development of coded-aperture CZT hard X-ray survey telescopes. Two thick CZT detectors (10 mm $`\times `$ 10 mm $`\times `$ 5 mm) will be fabricated and flown in a tiled arrangement such that the pixel pitch is preserved across both detectors. This is the first important step in building up a large detector area out of small crystal elements. One of the detectors will be eV Products HPB material with gold contacts, similar to detectors we have tested at length. The second detector will be IMARAD HB material made with blocking contacts to reduce its leakage current and improve energy resolution. The exact choice of contact material for the IMARAD detector to be flown has yet to be determined, but for initial testing we have inserted the IMARAD Au/In detector we have tested previously in the lab. This detector was manufactured for us by IMARAD with indium pixels and a gold cathode that acts as a blocking contact. Both detectors will have 1.9 mm pixels on a 2.5 mm pitch (the IMARAD standard pixel size, for compatibility) and will thus operate within the “small-pixel regime.” An outer guard ring will prevent surface leakage current around the edges. This requires making the outer pixels slightly smaller so that the two detectors may be tiled together while preserving pixel pitch. The two pixellated detectors will be mounted in a “flip-chip” style on a specially-designed printed circuit board carrier card (made of standard FR-4 PCB material). Figure 3 shows the IMARAD Au/In detector together with the flip-chip carrier board and its cover. Gold pads are arranged to match each pixel and connect it to a pin via traces on the underside of the board. To allow us to remove and replace detectors easily, the electrical connection between the pixels and traces is made with conductive rubber pads held in place with a conductive epoxy made by TRA-CON, Inc. The top cover provides the negative high voltage to the cathode through another conductive rubber pad, which is connected by a wire to the bias voltage pin on the board. The cover is made of G10 material 0.06” thick and holds the detector in place between the rubber pads when it is screwed down. Figure 3 shows the assembled carrier board with the IMARAD Au/In detector in place. We have measured the transmission of the G10/rubber cover using the low energy lines of a <sup>241</sup>Am source. We find transmissions of 6% at 17 keV (Np line), 60% at 26.34 keV, and 90% at 60 keV. The low energy absorption appears to be dominated by the G10, so to optimize the response down to 20 keV we will investigate alternative cover materials. The two CZT detectors are read out by a 32-channel VA-TA ASIC manufactured by IDE Corp. The flip-chip carrier card plugs directly into a custom-made circuit board that contains the ASICs and associated bias resistors and decoupling capacitors. The VA-TA combination is attractive because it includes a self-trigger and MUX to output all 32 channels for each event. This will allow us to study the contribution of multiple-pixel events to the background and possibly to correct for the effects of charge-spreading between adjacent pixels. The expected count rate is $`1`$ count cm<sup>-2</sup> s<sup>-1</sup> (see Figure 10), or $`2`$ counts s<sup>-1</sup> from all 32 pixels together, easily low enough to record all channels for each event. The VA-TA ASICs are controlled by a data acquisition (DAQ) board supplied by IDE. We are in the process of writing software to control this DAQ board that will run on a PC/104 single-board computer flown alongside the detectors. This computer will record data into buffers and transfer them, along with housekeeping data, into the main EXITE2 data stream. As described in Section 3, the two tiled flip-chip detectors will be surrounded by a passive/plastic collimator and rear shield. This is shown schematically in Figure 5. In order to keep the instrument weight down, the collimator only surrounds the CZT carrier board ($`3`$ cm across), providing a $`45^{}`$ field of view. Since the current prototype VA-TA board is too large to fit within this space, it was necessary to have the collimator and rear shield physically separated. This required making the rear shield large enough to prevent the detectors from having a line of sight to the outside. Future designs will stipulate that the ASIC and its circuit board fit entirely within the footprint of the detector carrier card, so that they may be assembled vertically and fit within the main shielding. Figure 5 shows the assembled passive/plastic collimator. The passive portion consists of 4.5 mm Pb, 1 mm Sn, and 1 mm Cu slats bolted within an aluminum support frame. Cosmic ray particles will interact in this dense material, generating gamma-ray photons. To prevent these locally-produced gamma-rays from producing background in the CZT, plastic shields surround the passive material to provide a veto pulse when charged particles pass through them. The shields are made of 0.5” thick NE-102 plastic scintillator joined with Bicron BC-600 optical cement into two L-shaped halves. The readout devices for such shields (and active gamma-ray shields as well) must be extremely compact if large-area arrays are to be built up of smaller detector/shield units. We have selected Hamamatsu R7400U miniature photomultiplier tubes (PMTs) as the readout devices for the plastic shields. These PMTs are only 1.5 cm across and 2.6 cm long, have far higher gain than photodiodes or avalanche photodiodes (APDs), and their gain is not temperature dependent as in APDs. One PMT is placed in the corner of each L to read out two sides of the plastic shield. We have tested this arrangement in the lab by setting up a muon telescope: two lab PMTs coupled to plastic scintillators were placed on either side of the shield. Only muons passing through all three scintillators generated a coincident signal, and using this coincidence we could identify pulses from particles interacting anywhere along the L. We found that particles passing through the shield on the opposite end from the readout PMT still produce easily-measurable pulses, and so the two miniature PMTs are able to read out the entire volume of plastic. Pulses from the PMTs are fed into a coincidence-logic card that generates a veto pulse for coincident CZT and plastic triggers. The PC/104 computer will recognize this veto pulse and flag the event. As described in Section 3, identical passive/plastic collimators will be flown with the old single-element CZT/BGO detector and the new pixellated tiled detectors. Figure 7 shows the passive/plastic collimator mounted in front of the CZT/BGO experiment. The CZT detector sits at the rear of the collimator and observes a $`45^{}`$ field of view. The BGO crystal is housed within the cylindrical container directly behind the CZT, and is read out by the large PMT at the rear. This setup will test the passive collimator/active rear shield configuration under consideration for future CZT telescopes. Figure 7 shows the passive/plastic collimator in the flight configuration with the flip-chip detector assembly and VA-TA board. The detector is mounted in the carrier card, visible at the bottom of the collimator. As shown schematically in Figure 5, the VA-TA board extends out from under the collimator as presently constructed. The large passive/plastic shield will be mounted directly under the VA-TA board in flight. This setup will test the completely passively-shielded configuration for direct comparison with the actively-shielded case described above. The ASICs are mounted under the black rectangular cover visible beneath the collimator, and the connector that leads to the DAQ board is shown at the bottom of the figure. Figure 8 shows the 4 $`\times `$ 4 array of “first light” <sup>57</sup>Co spectra from the 16 pixels of the IMARAD Au/In detector mounted on the flip-chip carrier card and read out through the VA-TA ASIC. Three channels are disconnected; it was found that these three conductive rubber pads had become detached during our initial attempts at mounting the detector. In addition, the channels in the top row are unusually noisy. Either a good connection was not made between the pixels and rubber pads, or these channels of the ASIC are picking up excess noise. In any case, the spectra in the lower left portion of the detector are comparable to those taken through more conventionally-mounted detectors, and so prove that our flip-chip mounting scheme is feasible. ## 5 DISCUSSION AND CONCLUSIONS Both the CZT/BGO detector with its passive collimator and the pixellated flip-chip detectors with their passive/plastic shielding will be flown as piggyback experiments on the next flight of the EXITE2 hard X-ray telescope payload. This flight is scheduled to take place in April of 2000 from Ft. Sumner, NM. The expected gamma-ray contributions to the backgrounds of the CZT/BGO and flip-chip CZT detectors have been calculated using GEANT in the same manner as the spectra shown in Figure 1. These simulations then represent a “lower limit” to the total background expected. In Figure 10 we compare them to the PoRTIA and WUSTL/UCSD results as before. Here we plot the backgrounds per area, since the important quantity is the background counts within the area of a pixel. It is obvious that the thicker pixellated detectors in the passive/plastic shield can expect to record a higher background per pixel than would the thinner detectors in the other experiments. The pixellated detector background is higher by a factor of $`1.5`$ even at the lowest energies because these detectors are mounted slightly closer to the front of the collimator than is the CZT/BGO detector, giving them a slightly larger field of view and higher aperture flux. There is good agreement between the expected CZT/BGO background and the completely actively-shielded WUSTL/UCSD background at high energies. This might indicate that the WUSTL/UCSD experiment has successfully rejected most of the internal background and is recording mostly shield leakage photons. Whether the CZT/BGO background will really be this low will depend on the efficiency of the BGO active shield in rejecting prompt internal background. Our previous results indicate that the actual spectrum should lie within a factor of two of that plotted in Figure 10. To compare the CZT/BGO and pixellated experiments without regard to detector thickness, we plot the GEANT spectra as a function of detector volume in Figure 10. The two backgrounds are in fairly close agreement. At low energies the background is dominated by aperture flux from the front. Both the thin CZT/BGO and thick pixellated detectors efficiently absorb these low energy photons, but the thin detector does so in a smaller total volume. Therefore the count rate in the thin detector appears higher when plotted per volume. At high energies the CZT/BGO background is slightly lower due to the rejection of Compton-scattered events. In reality, our results, together with those of the WUSTL/UCSD experiment, indicate that the background in the passive/plastic-shielded detectors will be 6–10 times higher than that in Figures 10 or 10. It will be of great value to measure how much higher it is, and to attempt to model the processes responsible for it. As noted in Section 3, the WUSTL/UCSD experiment makes use of a depth-sensing technique to further lower the background below 100 keV by rejecting low energy events near the bottom of the detector. The method is based on the fact that the ratio of the cathode and pixel pulses for a given event should be proportional to the depth of the interaction. Such a technique would be even more useful in 5 mm thick detectors such as ours, and we will implement it in our application by adding an extra channel to future ASICs to read out the cathode pulse. To fully understand the background measurements we will make, it will be necessary to model the response of the CZT detectors themselves. We have already successfully simulated the simple response of the single-element CZT/BGO detector. Modeling a pixellated detector requires knowledge of the internal electric field and weighting potentials. We have already developed electric field modeling tools based on the commercial software package ES4, and are now modeling the response of our tiled imaging detectors to laboratory X-ray sources. We will also attempt to include cosmic ray interactions and activation precesses (e.g. <sup>110</sup>Cd(n,$`\gamma `$)) in our background simulations in order to understand the internal background processes important in CZT. ## ACKNOWLEDGMENTS We thank F. Harrison and B. Matthews for providing the CZT/BGO detector for further flights. This work was supported in part by NASA grant NAG5-5103. P. Bloser acknowledges support from NASA GSRP grant NGT5-50020.
no-problem/9910/astro-ph9910168.html
ar5iv
text
# Iron Line Reverberation Mapping With Constellation-X ## 1. Introduction X-ray spectroscopic observations of Active Galactic Nuclei (AGN) have provided the first direct probe of the innermost regions of black hole accretion disks which are subject to strong-field general relativistic effects. X-ray illumination of the surface layers of the accretion disk produces a strong iron K$`\alpha `$ fluorescence line (George & Fabian 1991; Matt, Perola & Piro 1991) which is broadened and skewed in a characteristic manner by the Doppler motion of the disk and gravitational redshifting (Fabian et al. 1989; Laor 1991). Observations of Seyfert 1 galaxies by the Advanced Satellite for Cosmology and Astrophysics (*ASCA*) have confirmed the presence of such lines with the expected broad and skewed profiles (Tanaka et al. 1995; Nandra et al. 1997). Alternative mechanisms (i.e. those not involving black hole accretion disks) for the production of such a broad and skewed line profile appear to fail (Fabian et al. 1995). The X-ray emission from a typical AGN shows significant short timescale variability, with large amplitude changes in flux in hundreds of seconds. This variability is likely to be associated with the activation of new X-ray emitting regions either above the accretion disk or in a disk-hugging corona. These flares produce an ‘echo’ in the form of a fluorescent iron line response from the disk. The line profile depends upon the space-time geometry, the accretion disk structure and the pattern of the X-ray illumination. Given the low count rate of typical AGN in *ASCA*, it is necessary to observe for long periods ($`10^4`$ secs) to obtain sufficiently high signal-to-noise to study the Fe K$`\alpha `$ emission line profile. Such long exposure times include several hundred light crossing times of a gravitational radius ($`1GM/c^3=500`$ secs for a $`10^8M_{}`$ black hole) so that the observed line profile is *time-averaged* and details of any variability are lost, and this lack of a timescale does not allow the black hole mass to be determined. The time-averaged line profiles possess a degeneracy that does not allow the spin parameter of the black hole to be simply determined; extremely different space-time geometries and illumination patterns may produce almost identical time-averaged line profiles (Reynolds & Begelman 1997), although their absorption effects differ (Young, Ross & Fabian 1998). Detailed time-resolved observations of such emission line variability using the technique of *reverberation mapping* (Blandford & McKee 1982; Stella 1990; Reynolds et al. 1999, hereafter R99) allows the degeneracies mentioned above to be broken. Reverberation mapping has already been used with great success in the optical and UV wavebands, eg. the AGN Watch results on NGC 5548 (Peterson et al. 1999; AGN Watch). R99 have calculated the iron line response as seen by distant observers for a number of scenarios. Their method is outlined below. The activation of a new X-ray flaring region is approximated by an instantaneous flash from an isotropic point source above the accretion disk, whose location is specified in Boyer-Lindquist coordinates. The accretion disk is assumed to be geometrically thin and confined to the equatorial plane. Whilst the disk may extend out to large radii we are only interested in the line emitting region within $`50`$ Schwarzschild radii ($`50R_\mathrm{s}=100GM/c^2`$). The disk is divided into two regions, that inside the radius of marginal stability, $`r_{\mathrm{ms}}`$, where there are no stable circular orbits and material plunges into the black hole, and that outside $`r_{\mathrm{ms}}`$ where material is assumed to follow essentially Keplerian orbits. $`r_{\mathrm{ms}}`$ is a decreasing function of $`a`$, the dimensionless angular momentum per unit mass of the black hole, decreasing from $`6GM/c^2`$ for a Schwarzschild black hole ($`a=0`$) to $`1.2GM/c^2`$ for a maximally spinning Kerr black hole ($`a=0.998`$), assuming that the accretion disk is in a prograde orbit. The ionization state of the accreting material is determined by considering the ionization parameter $`\xi =4\pi F_\mathrm{x}/n_\mathrm{e}`$, where $`F_\mathrm{x}`$ is the illuminating X-ray flux and $`n_\mathrm{e}`$ is the electron density. Outside $`r_{\mathrm{ms}}`$ the electron density is very large and the disk is ‘cold’, with iron less ionized that Fe XVII. Inside $`r_{\mathrm{ms}}`$ the density drops rapidly as material plunges into the black hole and, if this region is illuminated, it may become photoionized, which affects the strength of the fluorescence line that is produced. We use the following prescription 1. $`\xi <100`$ — cold fluorescence line at 6.4 keV 2. $`100<\xi <500`$ — no line emission due to resonant trapping and Auger destruction of line photons 3. $`500<\xi <5000`$ — a combination of He-like and H-like lines at 6.67 keV and 6.97 keV each with an effective fluorescent yield equal to that for the neutral case 4. $`\xi >5000`$ — no line emission since material is completely ionized. The X-ray efficiency of the source $`\eta _\mathrm{x}`$ is defined as $`\eta _\mathrm{x}=L_\mathrm{x}/\dot{m}`$, where $`L_\mathrm{x}`$ is the X-ray luminosity and $`\dot{m}`$ is the mass accretion rate. In general, the higher the source efficiency the more highly ionized the material within $`r_{\mathrm{ms}}`$ becomes. This simple model of the accretion disk is sufficient for our purposes. Photon paths are traced from the flare to compute the illumination pattern on the disk as a function of time, and the corresponding iron line response. The evolution of the line profile as seen by an observer located at a given inclination to the accretion disk is then calculated. The line response to an instantaneous ($`\delta `$-function) flare is often referred to as the *transfer function*. For further details on the calculation of these transfer functions the reader is referred to R99. R99 note that transfer functions contain a number of robust indicators of the space-time geometry and the location of the X-ray emitting regions. It is possible to determine the location of the flares, whether they be on or above the accretion disk, on the approaching or receding side of the disk, or along its rotation axis. It is also possible to differentiate between Schwarzschild and maximally spinning Kerr black holes. In this paper we simulate Constellation X-ray Mission (*Constellation-X*) observations of the iron line response to such X-ray flares using the transfer functions of R99. *Constellation-X* will be the first X-ray observatory with sufficient sensitivity to detect a significant Fe K$`\alpha `$ line within a light crossing time for nearby bright Seyfert 1 galaxies (the proposed effective area of *Constellation-X* around the iron line energy is about 5 times that of the X-ray Multi-Mirror satellite (*XMM*) which in turn is about 20 times that of *ASCA*). The aim of this work is to assess the feasibility of detecting the various reverberation signatures described in R99 which will act as probes of the black hole spin and mass. We expand upon an issue raised by R99 and show that a ‘red-ward moving bump’ in the iron line profile is a robust signature of a black hole with spin parameter $`a>0.9`$. We also step beyond the single $`\delta `$-function flare case and examine whether reverberation from realistic multiple flare cases can be disentangled using an instrument such as *Constellation-X*. ## 2. Simulation method Our simulations are designed to represent the case of a *Constellation-X* observation of an X-ray bright Seyfert 1 galaxy such as MCG–6-30-15 or NGC 3516. These sources show considerable X-ray continuum flux variability on all timescales, with evidence for flares on short timescales (see Fig. 1). Taking parameters for MCG–6-30-15, a typical average continuum flux in the range 2–10 keV is $`6\times 10^3`$ ph s<sup>-1</sup> cm<sup>-2</sup>. The average equivalent width of the fluorescent iron line is 300 eV (Tanaka et al. 1995) which corresponds to a 2–10 keV flux of $`10^4`$ ph s<sup>-1</sup> cm<sup>-2</sup>. These figures may be used to estimate the line flux expected for a given continuum flux, assuming the efficiency of conversion of continuum to line photons remains unchanged. Initially, we assume a continuum level comparable to the average flux of NGC 3516 ($`1.2\times 10^2`$ ph s<sup>-1</sup> cm<sup>-2</sup>) which is constant apart from an instantaneous ($`\delta `$-function) flare at some time and localized to a point source above the disk. The flare is assumed to have the equivalent of 10000 seconds of continuum flux. This corresponds to a flare lasting 1000 seconds with 10 times the continuum flux or a flare lasting 5000 seconds with twice the continuum flux. The duration of the flare does not significantly influence our results as long as it is relatively short lived, lasting only a few $`GM/c^3`$. Longer flares would result in the transfer functions being blurred over time; a possibility we discuss in section 3.5. We do not specify a particular spectrum for the flare but assume that the fraction of flux from the flare that is converted into line photons is the same as that inferred from the time-averaged spectrum. *Constellation-X* observations of this system, including photon counting statistics, are then simulated using current estimates of this observatory’s (energy dependent) effective area<sup>1</sup><sup>1</sup>1Effective area curves were obtained from the NASA Goddard Space Flight Center web-page: http://constellation.gsfc.nasa.gov/www/area.html. The line profile was added to a power-law continuum of photon index 2 which was subtracted from the overall simulated data in order to yield a simulated, time-varying, iron line profile. The mass of the black hole determines the timescale on which reverberation effects occur. For a $`10^8\mathrm{M}_{}`$ black hole, for example, the light crossing time of one gravitational radius is $`1GM/c^3=500`$ sec. This is the time period over which we simulate and record individual iron line profiles in order to study the observability of various reverberation effects. For higher mass black holes the reverberation signatures would be more readily observed since longer integration times may be used. The converse is true of lower mass black holes. ## 3. Results ### 3.1. Spin parameter Measuring black hole spin (and, implicitly, testing general relativity in the strong field limit) is one of the main motivations for studying iron line reverberation. If an X-ray flare occurs along the symmetry axis above the accretion disk, the light echo will split into two distinct ‘rings’, one of which propagates outwards to large radii simply due to light travel time, and one which propagates asymptotically towards the horizon. This last feature is due to the progressively more severe relativistic time delays suffered by photons passing close to the black hole. It produces a red-bump in the line profile which moves to lower energies as time goes on. Equivalently, it produces a ‘red-wing’ in the transfer function. Fig. 2 shows simulated transfer functions for values of the spin parameter $`a`$ between 0–0.99. For these calculations a low source efficiency $`\eta _\mathrm{x}=10^3`$ was chosen to maximize the region within $`r_{ms}`$ that may produce highly redshifted fluorescent line emission. A more realistic, higher value of $`\eta _\mathrm{x}`$ would result in the region within $`r_{\mathrm{ms}}`$ becoming more highly ionized and less able to produce fluorescent line emission (Reynolds & Begelman 1997). The precise shape of the ‘red-wing’ depends upon the spin parameter. This red-wing only has a pronounced slope in the case of near-extremal Kerr black holes ($`a>0.9`$). Only in these cases will the corresponding line profile possess a red-bump which moves to lower energies as time progresses. We conclude that this feature is a robust signature of near-extremal Kerr black holes. Fig. 3 shows the line profile in simulated $`2GM/c^3`$ (1000 s) *Constellation-X* observations of the extreme Kerr case between 23–$`28GM/c^3`$ (11500 s–14000 s) after the flare is observed at time zero. Shown here is the case of a flare on the spin axis of the hole at a height $`10GM/c^2`$ above the disk plane, viewed almost face-on, at an inclination of $`3^{}`$. The red bump is clearly seen and its red-ward progress with time is an indicator that emission from around a Kerr black hole is being observed. Thus, this signature is readily observable in such a case. As the inclination of the observer increases, the photons in the red-wing are distributed over a greater range of energies and times and hence the detectability of the feature is reduced. Fig. 4a shows a simulated transfer function for a disk inclined at $`30^{}`$ around an extreme Kerr black hole. The flare is assumed to be on the symmetry axis at a height of $`10GM/c^2`$ above the disk plane. Red-tail emission (at $`3\mathrm{keV}`$ and $`t=30GM/c^3`$) can be discerned above the photon noise. For comparison, Fig. 4b shows the same case but with a Schwarzschild black hole. The differences between the transfer functions are subtle but observable. The data in these figures have been rebinned to increase the signal-to-noise ratio. On the basis of these simulations, we estimate that this phenomenon is observable for source inclinations less than $`30^{}`$, although the flare location is also an important consideration in determining the observability of this effect. ### 3.2. Confidence in the determination of the spin parameter The differences between the theoretical transfer functions for the same source location but different values of the black hole spin parameter may be seen in Fig. 2, the difference between the Schwarzschild, $`a=0`$, and nearly maximally spinning Kerr, $`a=0.99`$, cases being the most marked. These theoretical transfer functions may be used as templates to fit an observation of a system inclined at $`30^{}`$ with a flare located $`10GM/c^2`$ along the rotation axis of the disk. If we simulate an observation with a particular value of $`a`$ and attempt to fit that simulation with each of the template transfer functions. Fig. 5a shows the results of fitting five simulated observations of the case $`a=0.99`$. The $`\mathrm{\Delta }\chi ^2`$ values are an indication of the goodness of fit (with a lower $`\mathrm{\Delta }\chi ^2`$ representing a better fit), normalized to $`\mathrm{\Delta }\chi ^2=0`$ at the best fit values. The goodness of fit is seen to improve dramatically as the value of $`a`$ used in the fit is increased indicating that the black hole is spinning rapidly. Fig. 5b shows a similar plot for of fitting simulated observations of the $`a=0`$ case. Again the $`\mathrm{\Delta }\chi ^2`$ values are seen to decline sharply towards the best fitting values, and one may conclude that the black hole is not spinning rapidly. In reality a catalogue of transfer functions would be used to fit for different flare locations black hole spin parameters. ### 3.3. Flare location and the mass of the black hole The location of the X-ray flares and the mass of the central black hole (i.e. the linear dimensions of the whole system) are also important motivations for performing iron line reverberation mapping of AGN. Ideally, one would take a well-measured iron line response to a large flare and compare it to a library of theoretical transfer functions in order to determine the location of the flare and the mass of the hole (from the time scaling of the transfer function). The multi-dimensional parameter space, coupled with the limitations of realistic data, makes this a challenging problem. However, there are qualitative features in the transfer functions of R99 that can be used to estimate the mass and flare location. For most flare locations and observer inclinations, there is a re-emergence of the line flux (usually in the red-wing of the line) as the observed echo of the flare works its way round to the back regions of the disk. The time between the initial line response and this re-emergence is $`1020GM/c^3`$. Both the initial line response and the re-emergence are easily observable with Constellation-X (see Figs. 3, 4b and 6b) for $`M10^8\mathrm{M}_{}`$. Hence masses in this range can be measured to within a factor of two or so. Furthermore, the time between the observed flare and the initial line response can be compared with the above time in order to determine if the flare is at high-latitudes above the accretion disk, or in a disk-hugging corona. ### 3.4. Observing the region within the innermost stable orbit As matter crosses the innermost stable orbit its radial velocity increases rapidly and its density drops dramatically. If this region of the disk is illuminated by hard radiation, the matter becomes ionized and may emit ‘hot’ iron lines at 6.67 keV and 6.97 keV. In the Schwarzschild case where the innermost stable orbit is at a radius $`6GM/c^2`$ with a high latitude illuminating source, this results in a high energy ‘loop’ in the transfer function corresponding to the response from this ionized region (Fig. 6; also see Fig. 1(c) and (d) of R99). For an accretion disk at a given inclination the maximum expected blueshift may be calculated and hence, for a ‘cold’ iron line at 6.4 keV, the maximum line energy can be determined. Significant line response at higher energies would be an indication of fluorescence from ionized matter. Our simulations show that these features would be observable by *Constellation-X* for inclined disks around Schwarzschild black holes (see Figs. 6a and 6b). We have simulated a number of observations and by fitting them with two template transfer functions, one with the ‘loop’ and the other without, conclude that the presence of the ‘loop’ is statistically significant. Conversely, the observation of such loops would imply that the innermost stable orbit is some distance from the event horizon which, in turn, would indicate a slowly-rotating black hole. ### 3.5. More realistic flare models Considering isolated instantaneous flares is, of course, a great simplification of the complicated activity within the nucleus. Motivated by the light curve of Fig. 1 we consider the slightly more realistic scenario of two overlapping flares in a source with continuum flux comparable to that of MCG–6-30-15. Both flares have a duration of 3000 seconds so each of their transfer functions are smeared over $`6GM/c^3`$. One is on the approaching side of the disk having the same intensity as the continuum and the other is on the receding side of the disk with twice the intensity of the continuum. The flare on the receding side precedes the flare on the approaching side by $`6GM/c^3`$. Fig. 7 shows the theoretical line response to this double flare as well as the results of a *Constellation-X* simulation. From the simulated *Constellation-X* data, one can see that there have been two flares and can begin to disentangle the individual transfer functions. The height of the flares above the disk along with their location should be determinable. It is possible that flares significantly brighter than those simulated here may be observed and the fluorescent responses to those would be correspondingly stronger. Other galaxies, such as NGC 3516, are twice as bright as MCG–6-30-15 and this would help reduce the error bars associated with the photon statistics. Gravitational focusing of the flare emission towards the disk (Martocchia & Matt 1996) may further pronounce these reverberation signatures for flares occurring extremely close to the black hole, within $`6GM/c^2`$. In such cases the observed change in continuum flux would represent only a fraction of the flux incident upon the disk. In addition to the bright flares there will be a background of much smaller flares occuring elsewhere on the disk, the response to which may be approximated by a time-averaged line. These flares will appear as noise in the data. ## 4. Conclusions We have demonstrated that many of the iron line reverberation effects noted by R99 are within reach of *Constellation-X*. In particular, *Constellation-X* will be able to search for the red-ward moving bump in the iron line profile which is a robust and generic signature of rapidly rotating black holes. Maximally spinning Kerr and Schwarzschild black holes can be discriminated. It will also allow the time delay between a large flare and the iron line response, as well as the form of the corresponding transfer function to be determined. Comparison with a library of computed transfer functions will allow the mass of the hole and the location of the flare to be measured. Although this is a difficult task due to the multi-dimensional parameter space that one must consider, we note that there are easily determinable quantities that allow the black hole mass and flare location to be approximated. The time delay between the initial response and ‘re-emergence’ of the line flux may be used to estimate the black hole mass to within a factor of 2. The time delay between the change in the continuum and the initial response of the iron line, as well as the energy of this response, may be used to estimate the location of the flare. These studies will open up a new, and extremely powerful, probe of the immediate environment of supermassive black holes. ## 5. Acknowledgments We thank Kazushi Iwasawa and Julia Lee for the unpublished recent light curve of MCG–6-30-15, and Andy Fabian for insightful discussion. AJY acknowledges PPARC (UK) for support. CSR acknowledges support from NASA under LTSA grant NAG 5-6337. CSR also acknowledges support from a Hubble Fellowship grant HF-01113.01-98A awarded by the Space Telescope Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA under contract NAS 5-26555.
no-problem/9910/cs9910009.html
ar5iv
text
# Locked and Unlocked Polygonal Chains in 3DThis research was initiated at a workshop at the Bellairs Res. Inst. of McGill Univ., Jan. 31–Feb. 6, 1998. This is a revised and expanded version of [BDD+99]. Reseach supported in part by FCAR, NSERC, and NSF. ## 1 Introduction A polygonal chain $`P=(v_0,v_1,\mathrm{},v_{n1})`$ is a sequence of consecutively joined segments (or edges) $`e_i=v_iv_{i+1}`$ of fixed lengths $`\mathrm{}_i=|e_i|`$, embedded in space.<sup>1</sup><sup>1</sup>1 All index arithmetic throughout the paper is mod $`n`$. A chain is closed if the line segments are joined in cyclic fashion, i.e., if $`v_{n1}=v_0`$; otherwise, it is open. A closed chain is also called a polygon. If the line segments are regarded as obstacles, then the chains must remain simple at all times, i.e., self intersection is not allowed. The edges of a simple chain are pairwise disjoint except for adjacent edges, which share the common endpoint between them. We will often use chain to abbreviate “simple polygonal chain.” For an open chain our goal is to straighten it; for a closed chain the goal is to convexify it, i.e., to reconfigure it to a planar convex polygon. Both goals are to be achieved by continuous motions that maintain simplicity of the chain throughout, i.e., links are not permitted to intersect. A chain that cannot be straightened or convexified we call locked; otherwise the chain is unlocked. Note that a chain in 3D can be continuously moved between any of its unlocked configurations, for example via straightened or convexified intermediate configurations. Basic questions concerning open and closed chains have proved surprisingly difficult. For example, the question of whether every planar, simple open chain can be straightened in the plane while maintaining simplicity has circulated in the computational geometry community for years, but remains open at this writing. Whether locked chains exist in dimensions $`d4`$ was only settled (negatively, in \[CO99\]) as a result of the open problem we posed in a preliminary version of this paper \[BDD<sup>+</sup>99\]. In piecewise linear knot theory, complete classification of the 3D embeddings of closed chains with $`n`$ edges has been found to be difficult, even for $`n=6`$. These types of questions are basic to the study of embedding and reconfiguration of edge-weighted graphs, where the weight assigned to an edge specifies the desired distance between the vertices it joins. Graph embedding and reconfiguration problems, with or without a simplicity requirement, have arisen in many contexts, including molecular conformation, mechanical design, robotics, animation, rigidity theory, algebraic geometry, random walks, and knot theory. We obtain several results for chains in 3D: open chains with a simple orthogonal projection, or embedded in the surface of a polytope, may be straightened (Sections 2 and 3); but there exist open and closed chains that are locked (Section 4). For closed chains initially taking the form of a polygon lying in a plane, it has long been known that they may be convexified in 3D, but only via a procedure that may require an unbounded number of moves. We provide an algorithm to perform the convexification (Section 5) in $`O(n)`$ moves. Previous computational geometry research on the reconfiguration of chains (e.g., \[Kan97\], \[vKSW96\], \[Whi92\]) typically concerns planar chains with crossing links, moving in the presence of obstacles; \[Sal73\] and \[LW95\] reconfigure closed chains with crossing links in all dimensions $`d2`$. In contrast, throughout this paper we work in 3D and require that chains remain simple throughout their motions. Our algorithmic methods complement the algebraic and topological approaches to these problems, offering constructive proofs for topological results and raising computational, complexity, and algorithmic issues. Several open problems are listed in Section 6. ### 1.1 Background Thinking about movements of polygonal chains goes back at least to A. Cauchy’s 1813 theorem on the rigidity of polyhedra \[Cro97, Ch. 6\]. His proof employed a key lemma on opening angles at the joints of a planar convex open polygonal chain. This lemma, now known as Steinitz’s Lemma (because E. Steinitz gave the first correct proof in the 1930’s), is similar in spirit to our Lemma 5.5. Planar linkages, objects more general than polygonal chains in that a graph structure is permitted, have been studied intensively by mechanical engineers since at least Peaucellier’s 1864 linkage. Because the goals of this linkage work are so different from ours, we could not find directly relevant results in the literature (e.g., \[Hun78\]). However, we have no doubt that simple results like our convexification of quadrilaterals (Lemma 5.2) are known to that community. Work in algorithmic robotics is relevant. In particular, the Schwartz-Sharir cell decomposition approach \[SS83\] shows that all the problems we consider in this paper are decidable, and Canny’s roadmap algorithm \[Can87\] leads to an algorithm singly-exponential in $`n`$, the number of vertices of the polygonal chain. Although hardness results are known for more general linkages \[HJW84\], we know of no nontrivial lower bounds for the problems discussed in this paper. See, e.g., \[HJW84\], \[Kor85\], \[CH88\], or \[Whi97\] for other weighted graph embedding and reconfiguration problems. ### 1.2 Measuring Complexity As usual, we compute the time and space complexity of our algorithms as a function of $`n`$, the number of vertices of the polygonal chain. This, however, will not be our focus, for it is of perhaps greater interest to measure the geometric complexity of a proposed reconfiguration of a chain. We first define what constitutes a “move” for these counting purposes. Define a joint movement at $`v_i`$ to be a monotonic rotation of $`e_i`$ about an axis through $`v_i`$ fixed with respect to a reference frame rigidly attached to some other edges of the chain. For example, a joint movement could feasibly be executed by a motor at $`v_i`$ mounted in a frame attached to $`e_{i1}`$ and $`e_{i2}`$. The axis might be moving in absolute space (due to other joint movements), but it must be fixed in the reference frame. Although more general movements could be explored, these will suffice for our purposes. A monotonic rotation does not stop or reverse direction. Note we ignore the angular velocity profile of a joint movement, which might not be appropriate in some applications. Our primary measure of complexity is a move: a reconfiguration of the chain $`P`$ of $`n`$ links to $`P^{}`$ that may be composed of a constant number of simultaneous joint movements. Here the constant number should be independent of $`n`$, and is small ($`4`$) in our algorithms. All of our algorithms achieve reconfiguration in $`O(n)`$ moves. One of our open problems (Section 6) asks for exploration of a measure of the complexity of movements. ## 2 Open Chains with Simple Projections This section considers an open polygonal chain $`P`$ in 3D with a simple orthogonal projection $`P^{}`$ onto a plane. Note that there is a polynomial-time algorithm to determine whether $`P`$ admits such a projection, and to output a projection plane if it exists \[BGRT96\]. We choose our coordinate system so that the $`xy`$-plane $`\mathrm{\Pi }_{xy}`$ is parallel to this plane; we will refer to lines and planes parallel to the $`z`$-axis as “vertical.” We will describe an algorithm that straightens $`P`$, working from one end of the chain. We use the notation $`P[i,j]`$ to represent the chain of edges $`(v_i,v_{i+1},\mathrm{},v_j)`$, including $`v_i`$ and $`v_j`$, and $`P(i,j)`$ to represent the chain without its endpoints: $`P(i,j)=P[i,j]\{v_i,v_j\}`$. Any object lying in plane $`\mathrm{\Pi }_{xy}`$ will be labelled with a prime. Consider the projection $`P^{}=(v_0^{},v_1^{},\mathrm{},v_{n1}^{})`$ on $`\mathrm{\Pi }_{xy}`$. Let $`r_i=\mathrm{min}_{j\{i1,i\}}d(v_i^{},e_j^{})`$, where $`d(v^{},e^{})`$ is the minimum distance from vertex $`v^{}`$ to a point on edge $`e^{}`$. Construct a disk of radius $`r_i`$ around each vertex $`v_i^{}`$. The interior of each disk does not intersect any other vertex of $`P^{}`$ and does not intersect any edges other than the two incident to $`v_i^{}`$: $`e_{i1}^{}`$ and $`e_i^{}`$; see Fig. 1. We construct in 3D a vertical cylinder $`C_i`$ centered on each vertex $`v_i`$ of radius $`r=\frac{1}{3}\mathrm{min}_i\{r_i\}`$. This choice of $`r`$ ensures that no two cylinders intersect one another (the choice of the fraction $`\frac{1}{3}<\frac{1}{2}`$ guarantees that cylinders do not even touch), and no edges of $`P`$, other than those incident to $`v_i`$, intersect $`C_i`$, for all $`i`$. The straightening algorithm proceeds in two stages. In the first stage, the links are squeezed like an accordion into the cylinders, so that after step $`i`$ all the links of $`P_{i+1}=P[0,i+1]`$ are packed into $`C_{i+1}`$. Let $`\mathrm{\Pi }_i`$ be the vertical plane containing $`e_i`$ (and therefore $`e_i^{}`$). After the first stage, the chain is monotone in $`\mathrm{\Pi }_i`$, i.e., it is monotone with respect to the line $`\mathrm{\Pi }_i\mathrm{\Pi }_{xy}`$ in that the intersection of the chain with a vertical line in $`\mathrm{\Pi }_i`$ is either empty or a single point. In stage two, the chain is unraveled link by link into a straight line. The rest of this section describes the first stage. Let $`\delta =r/n`$. ### 2.1 Stage 1 We describe the Step 0 and the general Step $`i`$ separately, although the former is a special case of the latter. 1. 1. Rotate $`e_0`$ about $`v_1`$, within $`\mathrm{\Pi }_0`$, so that the projection of $`e_0`$ on $`\mathrm{\Pi }_{xy}`$ is contained in $`e_0^{}`$ throughout the motion. The direction of rotation is determined by the relative heights ($`z`$-coordinates) of $`v_0`$ and $`v_1`$. Thus if $`v_0`$ is at or above $`v_1`$, $`e_0`$ is rotated upwards ($`v_0`$ remains above $`v_1`$ during the rotation); see Fig. 2. If $`v_0`$ is lower than $`v_1`$, $`e_0`$ is rotated downwards ($`v_0`$ remains below $`v_1`$ during the rotation). The rotation stops when $`v_0`$ lies within $`\delta `$ of the vertical line through $`v_1`$, i.e., when $`v_0`$ lies in the cylinder $`C_1`$ and is very close to its axis. The value of $`\delta `$ is chosen to be $`r/n`$ so that in later steps more links can be accommodated in the cylinder. Again see Fig. 2. 2. Now we rotate $`e_0`$ about the axis of $`C_1`$ away from $`e_1`$, until $`e_0^{}`$ and $`e_1^{}`$ are collinear (but not overlapping), i.e., until $`e_0`$ lies in the vertical plane $`\mathrm{\Pi }_1`$. After completion of Step 0, $`(v_0,v_1,v_2)`$ forms a chain in $`\mathrm{\Pi }_1`$ monotone with respect to the line $`\mathrm{\Pi }_1\mathrm{\Pi }_{xy}`$. 2. At the start of Step $`i>0`$, we have a monotone chain $`P_{i+1}=P[0,i+1]`$ contained in the vertical plane $`\mathrm{\Pi }_i`$ through $`e_i`$, with $`P_i=P[0,i]`$ in $`C_i`$ and $`v_0`$ within a distance of $`i\delta `$ of the axis of $`C_i`$. 1. As in Step 0(a), rotate $`e_i`$ within $`\mathrm{\Pi }_i`$ (in the direction that shortens the vertical projection of $`e_i`$) so that $`v_i`$ lies within a distance $`\delta `$ of the axis of $`C_{i+1}`$. The difference now is that $`v_i`$ is not the start of the chain, but rather is connected to the chain $`P_i`$. During the rotation of $`e_i`$ we “drag” $`P_i`$ along in such a way that only joints $`v_i`$ and $`v_{i+1}`$ rotate, keeping the joints $`v_1,\mathrm{},v_{i1}`$ frozen. Furthermore, we constrain the motion of $`P_i`$ (by appropriate rotation about joint $`v_i`$) so that it does not undergo a rotation. Thus at any instant of time during the rotation of $`e_i`$, the position of $`P_i`$ remains within $`\mathrm{\Pi }_i`$ and is a translated copy of the initial $`P_i`$. See Fig. 3. 2. Following Step 0(b), rotate $`P_{i+1}`$ about the axis of $`C_{i+1}`$ until $`e_i^{}`$ and $`e_{i+1}^{}`$ are coplanar. At the completion of Step $`i`$ we therefore have a chain $`P_{i+2}=P[0,i+2]`$ in the vertical plane $`\mathrm{\Pi }_{i+1}`$, with $`P_{i+1}`$ in $`C_{i+1}`$ and $`v_0`$ within a distance of $`(i+1)\delta `$ of its axis. The chain is monotone in $`\mathrm{\Pi }_{i+1}`$ with respect to the line $`\mathrm{\Pi }_{i+1}\mathrm{\Pi }_{xy}`$. ### 2.2 Stage 2 Now it is trivial to unfold this monotone chain by straightening one joint at a time, i.e., rotating each joint angle to $`\pi `$, starting at either end of the chain. We have therefore established the first claim of this theorem: ###### Theorem 2.1 A polygonal chain of $`n`$ links with a simple orthogonal projection may be straightened, in $`O(n)`$ moves, with an algorithm of $`O(n)`$ time and space complexity. Counting the number of moves is straightforward. Stage 1, Step $`i`$(a) requires one move: only joints $`v_i`$ and $`v_{i+1}`$ rotate. Step $`i`$(b) is again one move: only $`v_{i+1}`$ rotates. So Stage 1 is completed in $`2n`$ moves. As Stage 2 takes $`n1`$ moves, the whole procedure is accomplished with $`O(n)`$ moves. Each move can be computed in constant time, so the time complexity is dominated by the computation of the cylinder radii $`r_i`$. These can be trivially computed in $`O(n^2)`$ time, by computing each vertex-vertex and vertex-edge distance. However, a more efficient computation is possible, based on the medial axis of a polygon, as follows. Given the projected chain $`P^{}`$ in the plane (Fig. 4a), form two simple polygons $`P_1`$ and $`P_2`$, by doubling the chain from its endpoint $`v_0^{}`$ until the convex hull is reached (say at point $`x`$), and from there connecting along the line bisecting the hull angle at $`x`$ to a large surrounding rectangle, and similarly connecting from $`v_{n1}^{}`$ to the hull to the rectangle. For $`P_1`$ close the polygon above $`P^{}`$, and below for $`P_2`$. See Figs. 4bc. Note that $`P_1P_2`$ covers the rectangle, which, if chosen large, effectively covers the plane for the purposes of distance computation. Compute the medial axis of $`P_1`$ and $`P_2`$ using a the linear-time algorithm of \[CSW95\]. The distances $`r_i`$ can now be determined from the distance information in the medial axes. For a convex vertex $`v_i`$ of $`P_k`$, its minimum “feature distance” can be found from axis information at the junction of the axis edge incident to $`v_i`$. For a reflex vertex, the information is with the associated axis parabolic arc. Because the bounding box is chosen to be large, no vertex’s closest feature is part of the bounding box, and so must be part of the chain. ## 3 Open Chains on a Polytope In this section we show that any open chain embedded on the surface of a convex polytope may be straightened. We start with a planar chain which we straighten in 3D. Let $`P`$ be an open chain in 2D, lying in $`\mathrm{\Pi }_{xy}`$. It may be easily straightened by the following procedure. Rotate $`e_0`$ within $`\mathrm{\Pi }_0`$ until it is vertical; now $`v_0`$ projects into $`v_1`$ on $`\mathrm{\Pi }_{xy}`$. In general, rotate $`e_i`$ within $`\mathrm{\Pi }_i`$ until $`v_i`$ sits vertically above $`v_{i+1}`$. Throughout this motion, keep the previously straightened chain $`P_i=P[0,i]`$ above $`v_i`$ in a vertical ray through $`v_i`$. This process clearly maintains simplicity throughout, as the projection at any stage is a subset of the original simple chain in $`\mathrm{\Pi }_{xy}`$. In fact, this procedure can be seen as a special case of the algorithm described in the preceding section. An easy generalization of this “pick-up into a vertical ray” idea permits straightening any open chain lying on the surface of a convex polytope $`𝒫`$. The same procedure is followed, except that the surface of $`𝒫`$ plays the role of $`\mathrm{\Pi }_{xy}`$, and surface normals play the roles of vertical rays. When a vertex $`v_i`$ of the polygonal chain $`P`$ lies on an edge $`e`$ between two faces $`f_1`$ and $`f_2`$ of $`𝒫`$, then the line containing $`P_i`$ is rotated from $`R_1`$, the ray through $`v_i`$ and normal to $`f_1`$, through an angle of measure $`\pi \delta (e)`$, where $`\delta (e)`$ is the (interior) dihedral angle at $`e`$, to $`R_2`$, the ray through $`v_i`$ and normal to $`f_2`$. This algorithm uses $`O(n)`$ moves and can be executed in $`O(n)`$ time. Note that it is possible to draw a polygonal chain on a polytope surface that has no simple projection. So this algorithm handles some cases not covered by Theorem 2.1. We believe that the sketched algorithm applies to a class of polyhedra wider than convex polytopes, but we will not pursue this further here. ## 4 Locked Chains Having established that two classes of open chains may be straightened, we show in this section that not all open chains may be straightened, describing one locked open chain of five links (Section 4.1). A modification of this example establishes the same result for closed chains (Section 4.2). Both of these results were obtained independently by other researchers \[CJ98\]. Our proofs are, however, sufficiently different to be of independent interest. ### 4.1 A Locked Open Chain Consider the chain $`K=(v_0,\mathrm{},v_5)`$ configured as in Fig. 5, where the standard knot theory convention is followed to denote “over” and “under” relations. Let $`L=\mathrm{}_1+\mathrm{}_2+\mathrm{}_3`$ be the total length of the short central links, and let $`\mathrm{}_0`$ and $`\mathrm{}_4`$ be both larger than $`L`$; in particular, choose $`\mathrm{}_0=L+\delta `$ and $`\mathrm{}_4=2L+\delta `$ for $`\delta >0`$. (One can think of this as composed of two rigid knitting needles, $`e_0`$ and $`e_4`$, connected by a flexible cord of length $`L`$.) Finally, center a ball $`B`$ of radius $`r=L+ϵ`$ on $`v_1`$, with $`0<2ϵ<\delta `$. The two vertices $`v_0`$ and $`v_5`$ are exterior to $`B`$, while the other four are inside $`B`$. See Fig. 5. Assume now that the chain $`K`$ can be straightened by some motion. During the entire process, $`\{v_1,v_2,v_3,v_4\}B`$ because $`L<r`$. Of course $`v_0`$ remains outside of $`B`$ because $`\mathrm{}_0>r`$. Now because $`v_4B`$ and $`\mathrm{}_4=|v_4v_5|=2L+\delta `$ is more than the diameter $`2r=2(L+ϵ)`$ of $`B`$, $`v_5`$ also remains exterior to $`B`$ throughout the motion. Before proceeding with the proof, we recall some terms from knot theory. The trivial knot is an unknotted closed curve homeomorphic to a circle. The trefoil knot is the simplest knot, the only knot that may be drawn with three crossings. See, e.g., \[Liv93\] or \[Ada94\]. Because of the constant separation between $`\{v_0,v_5\}`$ and $`\{v_1,v_2,v_3,v_4\}`$ by the boundary of $`B`$, we could have attached a sufficiently long unknotted string $`P^{}`$ from $`v_0`$ to $`v_5`$ exterior to $`B`$ that would not have hindered the unfolding of $`P`$. But this would imply that $`KP^{}`$ is the trivial knot; but it is clearly a trefoil knot. We have reached a contradiction; therefore, $`K`$ cannot be straightened. ### 4.2 A Locked, Unknotted Closed Chain It is easy to obtain locked closed chains in 3D: simply tie the polygonal chain into a knot. Convexifying such a chain would transform it to the trivial knot, an impossibility. More interesting for our goals is whether there exists a locked, closed polygonal chain that is unknotted, i.e., whose topologically structure is that of the trivial knot. We achieve this by “doubling” $`K`$: adding vertices $`v_i^{}`$ near $`v_i`$ for $`i=1,2,3,4`$, and connecting the whole into a chain $`K^2=(v_0,\mathrm{},v_5,v_4^{},\mathrm{},v_1^{})`$. See Fig. 6. Because $`KK^2`$, the preceding argument applies when the second copy of $`K`$ is ignored: any convexifying motion will have the property that $`v_0`$ and $`v_5`$ remain exterior to $`B`$, and $`\{v_1,v_2,v_3,v_4\}`$ remain interior to $`B`$ throughout the motion. Thus the extra copy of $`K`$ provides no additional freedom of motion to $`v_5`$ with respect to $`B`$. Consequently, we can argue as before: if $`K^2`$ is somehow convexified, this motion could be used to unknot $`KP^{}`$, where $`P^{}`$ is an unknotted chain exterior to $`B`$ connecting $`v_0`$ to $`v_5`$. This is impossible, therefore $`K^2`$ is locked. ## 5 Convexifying a Planar Simple Polygon in 3D An interesting open problem is to generalize our result from Section 2 to convexify a general closed chain. We show now that the special case of a closed chain lying in a plane, i.e., a planar simple polygon, may be convexified in 3D. Such a polygon may be convexified in 3D by “flipping” out the reflex pockets, i.e., rotating the pocket chain into 3D and back down to the plane; see Fig. 7. This simple procedure was suggested by Erdős \[Erd35\] and proved to work by de Sz. Nagy \[dSN39\]. The number of flips, however, cannot be bound as a function of the number of vertices $`n`$ of the polygon, as first proved by Joss and Shannon \[Grü95\]. See \[Tou99\] for the complex history of these results. We offer a new algorithm for convexifying planar closed chains, which we call the “St. Louis Arch” algorithm. It is more complicated than flipping but uses a bounded number of moves, in fact $`O(n)`$ moves. It models the intuitive approach of picking up the polygon into 3D. We discretize this to lifting vertices one by one, accumulating the attached links into a convex “arch”<sup>2</sup><sup>2</sup>2 We call this the St. Louis Arch Algorithm because of the resemblance to the arch in St. Louis, Missouri. $`A`$ in a vertical plane above the remaining polygonal chain; see Fig. 8. Although the algorithm is conceptually simple, some care is required to make it precise, and to then establish that simplicity is maintained throughout the motions. Let $`P`$ be a simple polygon in the $`xy`$-plane, $`\mathrm{\Pi }_{xy}`$. Let $`\mathrm{\Pi }_ϵ`$ be the plane $`z=ϵ`$ parallel to $`\mathrm{\Pi }_{xy}`$, for $`ϵ>0`$; the value of $`ϵ`$ will be specified later. We use this plane to convexify the arch safely above the portion of the polygon not yet picked up. We will use primes to indicate positions of moved (raised) vertices; unprimed labels refer to the original positions. After a generic step $`i`$ of the algorithm, $`P(0,i)`$ has been lifted above $`\mathrm{\Pi }_ϵ`$ and convexified, $`v_0`$ and $`v_i`$ have been raised to $`v_0^{}`$ and $`v_i^{}`$ on $`\mathrm{\Pi }_ϵ`$, and $`P[i+1,n1]`$ remains in its original position on $`\mathrm{\Pi }_{xy}`$. We first give a precise description of the conditions that hold after the $`i`$th step. Let $`\mathrm{\Pi }_z(v_i,v_j)`$ be the (vertical) plane containing $`v_i`$ and $`v_j`$, parallel to the $`z`$-axis. 1. $`\mathrm{\Pi }_ϵ`$ splits the vertices of $`P`$ into three sets: $`v_0^{}`$ and $`v_i^{}`$ lie in $`\mathrm{\Pi }_ϵ`$, $`v_1^{},\mathrm{},v_{i1}^{}`$ lie above the plane, and $`v_{i+1},\mathrm{},v_{n1}`$ lie below it. 2. The arch $`A=P(0,i)`$ lies in the plane $`\mathrm{\Pi }_z(v_0^{},v_i^{})`$, and is convex. 3. $`v_0^{}`$ and $`v_i^{}`$ project onto $`\mathrm{\Pi }_{xy}`$ within distance $`\delta `$ of their original positions $`v_0`$ and $`v_i`$. (Here, $`\delta >0`$ is a constant that depends only on the input positions; it will be specified later.) 4. Edges $`v_{n1}v_0^{}`$ and $`v_i^{}v_{i+1}`$ connect between $`\mathrm{\Pi }_{xy}`$ and $`\mathrm{\Pi }_ϵ`$. 5. $`P[i+1,n1]`$ remains in its original position in $`\mathrm{\Pi }_{xy}`$. See Fig. 8. A central aspect of the algorithm will be choosing $`ϵ`$ small enough to guarantee a $`\delta `$ (see H3) that maintains simplicity throughout all movements. The algorithm consists of an initialization step S0, followed by repetition of steps S1–S4. ### 5.1 S0 The algorithm is initialized at $`i=2`$ by selecting an arbitrary (strictly) convex vertex $`v_1`$, and raising $`\{v_0,v_1,v_2\}`$ in four steps: 1. Rotate $`v_1`$ about the line through $`v_0v_2`$ up to $`\mathrm{\Pi }_ϵ`$. Call its new position $`v_1^{\prime \prime }`$. 2. Rotate $`v_0`$ about the line through $`v_{n1}v_1^{\prime \prime }`$ up to $`\mathrm{\Pi }_ϵ`$. Call its new position $`v_0^{}`$. 3. Rotate $`v_2`$ about the line through $`v_1^{\prime \prime }v_3`$ up to $`\mathrm{\Pi }_ϵ`$. Call its new position $`v_2^{}`$. 4. Rotate $`v_1^{\prime \prime }`$ about the line through $`v_0^{}v_2^{}`$ upwards until it lies in the plane $`\mathrm{\Pi }_z(v_0^{},v_2^{})`$. Call its new position $`v_1^{}`$. So long as the joint at $`v_1^{\prime \prime }`$ is not straight, the $`4`$th step above is unproblematical, simply rotating a triangle from a horizontal to a vertical plane. That this joint does not become straight depends on $`ϵ`$ and $`\delta `$, and will be established under the discussion of S1 below. Ditto for establishing that the first three steps can be accomplished without causing self-intersection. After completion of Step S0, the hypotheses H1–H5 are all satisfied. The remaining steps S1–S4 are repeated for each $`i>2`$. ### 5.2 S1 The purpose of Step S1 is to lift $`v_i`$ from $`\mathrm{\Pi }_{xy}`$ to $`\mathrm{\Pi }_ϵ`$. This will be accomplished by a rotation of $`v_i`$ about the line through $`v_{i1}^{}`$ and $`v_{i+1}`$, the same rotation used in substeps (2) and (3), and in a modified form in (1), of Step S0. Although this rotation is conceptually simple, it is this key movement that demands a value of $`ϵ`$ to guarantee a $`\delta `$ that ensures correctness. The values of $`ϵ`$ and $`\delta `$ will be computed directly from the initial geometric structure of $`P`$. Specifying the conditions on $`ϵ`$ is one of the more delicate aspects of our argument, to which we now turn. Let $`\alpha _j`$ be the smaller of the two (interior and exterior) angles at $`v_j`$. Also let $`\beta _j=\pi \alpha _j`$, the deviation from straightness at joint $`v_j`$. We assume that $`P`$ has no three consecutive collinear vertices. If a vertex is collinear with its two adjacent vertices, we freeze and eliminate that joint. So we may assume that $`\beta _j>0`$ for all $`j`$. #### 5.2.1 Determination of $`\delta `$ As in our earlier Figure 1, the simplicity of $`P`$ guarantees “empty” disks around each vertex. Here we need disks to meet more stringent conditions than used in Section 2. Let $`\delta >0`$ be such that: 1. Disks around each vertex $`v_j`$ of radius $`\delta `$ include no other vertices of $`P`$, and only intersect the two edges incident to $`v_j`$. 2. A perturbed polygon, obtained by displacing the vertices within the disks (ignoring the fixed link lengths), 1. remains simple, and 2. has no straight vertices. It should be clear that the simplicity of $`P`$ together with $`\beta _j>0`$ guarantees that such a $`\delta >0`$ exists. As a technical aside, we sketch how $`\delta `$ could be computed. Finding a radius that satisfies condition (1) is easy. Half this radius guarantees the simplicity condition (2a), for this keeps a maximally displaced vertex separated from a maximally displaced edge. To prevent an angle $`\beta _j`$ from reaching zero, condition (2b), displacements of the three points $`v_{j1}`$, $`v_j`$, and $`v_{j+1}`$ must be considered. Let $`\mathrm{}=\mathrm{min}_j\{\mathrm{}_j\}`$ be the length of the shortest edge, and let $`\beta ^{}=\mathrm{min}_j\{\beta _j\}`$ be the minimum deviation from collinearity. Lemma A.1,which we prove in the Appendix, shows that choosing $`\delta <\frac{1}{2}\mathrm{}\mathrm{sin}(\beta ^{}/2)`$ prevents straight vertices. Let $`\sigma `$ be the minimum separation $`|v_jv_k|`$ for all positions of $`v_j`$ and $`v_k`$ within their $`\delta `$ disks, for all $`j`$ and $`k`$. Condition (2a) guarantees that $`\sigma >0`$. Note that $`\sigma \mathrm{}`$. Let $`\beta `$ be the minimum of all $`\beta _j`$ for all positions of $`v_j`$ within their $`\delta `$ disks. Condition (2b) guarantees that $`\beta >0`$. Our next task is to derive $`ϵ`$ from $`\sigma `$, $`\beta `$, and $`\delta `$. To this end, we must detail the “lifting” step of the algorithm. #### 5.2.2 S1 Lifting Throughout the algorithm, $`v_0^{}`$ remains fixed at the position on $`\mathrm{\Pi }_ϵ`$ it reached in Step S0. During the lifting step, $`v_{i1}^{}`$ also remains fixed, while $`v_i`$ is lifted. Thus $`v_0^{}v_{i1}^{}`$, the base of the arch $`A`$, remains fixed during the lifting, which permits us, by hypothesis H1, to safely ignore the arch during this step. We now concentrate on the $`2`$-link chain $`(v_{i1}^{},v_i,v_{i+1})`$. By H5, $`v_iv_{i+1}`$ has not moved on $`\mathrm{\Pi }_{xy}`$; by H3, $`v_{i1}^{}`$ has not moved horizontally more than $`\delta `$ from $`v_{i1}`$. Let $`\alpha _i^{}`$ be the measure in $`[0,\pi ]`$ of angle $`\mathrm{}(v_{i1}^{},v_i,v_{i+1})`$, i.e., the angle at $`v_i`$ measured in the slanted plane determined by the three points. Because $`v_iv_{i+1}`$ lie on $`\mathrm{\Pi }_{xy}`$ and $`v_{i1}^{}`$ is on $`\mathrm{\Pi }_ϵ`$, $`\alpha _i^{}\pi `$ and the chain $`(v_{i1}^{},v_i,v_{i+1})`$ is kinked at the joint $`v_i`$. Now imagine holding $`v_{i1}^{}`$ and $`v_{i+1}`$ fixed. Then $`v_i`$ is free to move on a circle $`C`$ with center on $`v_{i1}^{}v_{i+1}`$. See Fig. 9. This circle might lie partially below $`\mathrm{\Pi }_{xy}`$, and is tilted from the vertical (because $`v_{i1}^{}`$ lies on $`\mathrm{\Pi }_ϵ`$). The lifting step consists simply in rotating $`v_i`$ on $`C`$ upward until it lies on $`\mathrm{\Pi }_ϵ`$; its position there we call $`v_i^{}`$. #### 5.2.3 Determination of $`ϵ`$ We now choose $`ϵ>0`$ so that two conditions are satisfied: 1. The highest point of $`C`$ is above $`\mathrm{\Pi }_ϵ`$ (so that $`v_i`$ can reach $`\mathrm{\Pi }_ϵ`$). 2. $`v_i^{}`$ projects no more than $`\delta `$ away from $`v_i`$ (to satisfy H3). It should be clear that both goals may be achieved by choosing $`ϵ`$ small enough. We sketch a computation of $`ϵ`$ in the Appendix. The computation of $`ϵ`$ ultimately depends solely on $`\sigma `$ and $`\beta `$—the shortest vertex separation and the smallest deviation from straightness—because these determine $`\delta `$, and then $`r`$ and $`\delta _1`$ and $`\delta _2`$ and $`ϵ`$. Although we have described the computation within Step S1, in fact it is performed prior to starting any movements; and $`ϵ`$ remains fixed throughout. As we mentioned earlier, two of the three lifting rotations used in Step S0 match the lifting just detailed. The exception is the first lifting, of $`v_1`$ to $`v_1^{}`$ in Step S0. This only differs in that the cone axis $`v_0v_2`$ lies on $`\mathrm{\Pi }_{xy}`$ rather than connecting $`\mathrm{\Pi }_{xy}`$ to $`\mathrm{\Pi }_ϵ`$. But it should be clear this only changes the above computation in that the tilt angle $`\psi `$ is zero, which only improves the inequalities. Thus the $`ϵ`$ computed for the general situation already suffices for this special case. #### 5.2.4 Collinearity We mention here, for reference in the following steps, that it is possible that $`v_i^{}`$ might be collinear with $`v_0^{}`$ and $`v_{i1}^{}`$ on $`\mathrm{\Pi }_ϵ`$. There are two possible orderings of these three vertices along a line: 1. $`(v_0^{},v_i^{},v_{i1}^{})`$. 2. $`(v_0^{},v_{i1}^{},v_i^{})`$. The ordering $`(v_i^{},v_0^{},v_{i1}^{})`$ is not possible because that would violate the simplicity condition 2(a), as all three vertices project to within $`\delta `$ of their original positions on $`\mathrm{\Pi }_{xy}`$, and no vertex comes within $`\delta `$ of an edge. Despite this possible degeneracy, we will refer to “the triangle $`\mathrm{}v_0^{}v_{i1}^{}v_i^{}`$,” with the understanding that it may be degenerate. This possibility will be dealt with in Lemma 5.6. We now turn to the remaining three steps of the algorithm for iteration $`i`$. We use the notation $`A^{(k)}`$ to represent the arch $`A=A^{(0)}`$ at various stages of its processing, incrementing $`k`$ whenever the shape of the arch might change. ### 5.3 S2 After the completion of Step S1, $`v_{i1}^{}v_i^{}`$ lies in $`\mathrm{\Pi }_ϵ`$. We now rotate the arch $`A^{(0)}`$ into the plane $`\mathrm{\Pi }_ϵ`$, rotating about its base $`v_0^{}v_{i1}^{}`$, away from $`v_{i1}^{}v_i^{}`$. This guarantees that $`A^{(1)}=A^{(0)}\mathrm{}v_0^{}v_{i1}^{}v_i^{}`$ is a planar weakly-simple polygon. Moreover, while $`\mathrm{}v_0^{}v_{i1}^{}v_i^{}`$ may be degenerate, the chain $`(v_0^{},\mathrm{},v_i^{})`$ lies strictly to one side of the line through $`(v_0^{},v_{i1}^{})`$ and so is simple. See Fig. 10. ### 5.4 S3 Now that $`A^{(1)}`$ lies in its “own” plane $`\mathrm{\Pi }_ϵ`$, it may be convexified without worry about intersections with the remaining polygon $`P[i+1,n1]`$ in $`\mathrm{\Pi }_{xy}`$. The polygon $`A^{(1)}`$ is a “barbed polygon”: one that is a union of a convex polygon ($`A^{(0)}`$) and a triangle ($`\mathrm{}v_0^{}v_{i1}^{}v_i^{}`$). We establish in Theorem 5.7 that $`A^{(1)}`$ may be convexified in such a way that neither $`v_0^{}`$ nor $`v_i^{}`$ move, and $`v_0^{}`$ and $`v_i^{}`$ end up strictly convex vertices of the resulting convex polygon $`A^{(2)}`$. ### 5.5 S4 We next rotate $`A^{(2)}`$ up into the vertical plane $`\mathrm{\Pi }_z(v_0^{},v_i^{})`$. Because of strict convexity at $`v_0^{}`$ and $`v_i^{}`$, the arch stays above $`\mathrm{\Pi }_ϵ`$. See Fig. 11. We have now reestablished the induction hypothesis conditions H1–H5. After the penultimate step, for $`i=n2`$, only $`v_{n1}`$ lies on $`\mathrm{\Pi }_{xy}`$, and the final execution of the lifting Step S1 rotates $`v_{n1}`$ about $`v_0^{}v_{n2}^{}`$ to raise it to $`\mathrm{\Pi }_ϵ`$. A final execution of Steps S1 and S2 yields a convex polygon. Thus, assuming Theorem 5.7 in Section 5.7 below, we have established the correctness of the algorithm: ###### Theorem 5.1 The “St. Louis Arch” Algorithm convexifies a planar simple polygon of $`n`$ vertices. We will analyze its complexity in Section 5.8. We now return to Step S3, convexifying a barbed polygon. We perform the convexification entirely within the plane $`\mathrm{\Pi }_ϵ`$. We found two strategies for this task. One maintains $`A`$ as a convex quadrilateral, and the goal of Step S3 can be achieved by convexifying the (nonconvex) pentagon $`A^{(1)}`$, and then reducing it to a convex quadrilateral. Although this approach is possible, we found it somewhat easier to leave $`A`$ as a convex $`(i+1)`$-gon, and prove that $`A^{(1)}=A^{(0)}\mathrm{}v_0^{}v_{i1}^{}v_i^{}`$ can be convexified. This is the strategy we pursue in the next two sections. Section 5.6 concentrates on the base case, convexifying a quadrilateral, and Section 5.7 achieves Theorem 5.7, the final piece needed to complete Step S3. ### 5.6 Convexifying Quadrilaterals It will come as no surprise that every planar, nonconvex quadrilateral can be convexified. Indeed, recent work has shown that any star-shaped polygon may be convexified \[ELR<sup>+</sup>98a\], and this implies the result for quadrilaterals. However, because we need several variations on basic quadrilateral convexification, we choose to develop our results independently, although relegating some details to the Appendix. Let $`Q=(v_0,v_1,v_2,v_3)`$ be a weakly simple, nonconvex quadrilateral, with $`v_2`$ the reflex vertex. By weakly simple we mean that either $`Q`$ is simple, or $`v_2`$ lies in the relative interior of one of the edges incident to $`v_0`$ (i.e., no two of $`Q`$’s edges properly cross). This latter violation of simplicity is permitted so that we can handle a collapsed triangle inherited from step S1 of the arch algorithm (Section 5.2.4). As before, let $`\alpha _i`$ be the smaller of the two (interior and exterior) angles at $`v_i`$. Call a joint $`v_i`$ straightened if $`\alpha _i=\pi `$, and collapsed if $`\alpha _i=0`$. All motions throughout this (5.6) and the next section (5.7) are in 2D. We will convexify $`Q`$ with one motion $`M`$, whose intuition is as follows; see Fig. 12. Think of the two links adjacent to the reflex vertex $`v_2`$ as constituting a rope. $`M`$ then opens the joint at $`v_0`$ until the rope becomes taut. Because the rope is shorter than the sum of the lengths of the other two links, it becomes taut prior to any other “event.” Any motion $`M`$ that transforms a shape such as $`Q`$ can take on rather different appearances when different parts of $`Q`$ are fixed in the plane, providing different frames of reference for the motion. Although all such fixings represent the same intrinsic shape transformation $`M`$, when convenient we distinguish two fixings: $`M_{02}`$, which fixes the line $`L`$ containing $`v_0v_2`$, and $`M_{03}`$, which fixes the line containing $`v_0v_3`$. The convexification motion $`M`$ is easiest to see when viewed as motion $`M_{02}`$. Here the two $`2`$-link chain $`(v_0,v_1,v_2)`$ and $`(v_0,v_3,v_2)`$ perform a line-tracking motion \[LW92\]: fix $`v_0`$, and move $`v_2`$ away from $`v_0`$ along the fixed directed line $`L`$ containing $`v_0v_2`$, until $`v_2`$ straightens. ###### Lemma 5.2 A weakly simple quadrilateral $`Q`$ nonconvex at $`v_2`$ may be convexified by motion $`M_{02}`$, which straightens the reflex joint $`v_2`$, thereby converting $`Q`$ to a triangle $`T`$. Throughout the motion, all four angles $`\alpha _i`$ increase only, and remain within $`(0,\pi )`$ until $`\alpha _2=\pi `$. See Fig. 12a. Although this lemma is intuitively obvious, and implicit in work on linkages (e.g., \[GN86\]), we have not found an explicit statement of it in the literature, and we therefore present a proof in the Appendix (Lemma A.3). We note that the same motion convexifies a degenerate quadrilateral, where the triangle $`\mathrm{}v_0v_1v_2`$ has zero area with $`v_2`$ lying on the edge $`v_0v_1`$. See Fig. 13. As long as we open $`\alpha _2`$ in the direction, as illustrated, that makes the quadrilateral simple, the proof of Lemma 5.2 carries through. The motion $`M_{02}`$ used in Lemma 5.2 is equivalent to the motion $`M_{03}`$ obtained by fixing $`v_0v_3`$ and opening $`\alpha _0`$ by rotating $`v_1`$ clockwise (cw) around the circle of radius $`\mathrm{}_0`$ centered on $`v_0`$. Throughout this motion, the polygon stays right of the fixed edge $`v_0v_3`$. See Fig. 12b. This yields the following easy corollary of Lemma 5.2: ###### Lemma 5.3 Let $`P=QP^{}`$ be a polygon obtained by gluing edge $`v_0v_3`$ of a weakly simple quadrilateral $`Q`$ nonconvex at $`v_2`$, to an equal-length edge of a convex polygon $`P^{}`$, such that $`Q`$ and $`P^{}`$ are on opposite sides of the diagonal $`v_0v_3`$. Then applying the motion $`M_{03}`$ to $`Q`$ while keeping $`P^{}`$ fixed, maintains simplicity of $`P`$ throughout. #### 5.6.1 Strict Convexity Motion $`M`$ converts a nonconvex quadrilateral into a triangle, but we will need to convert it to a strictly convex quadrilateral. This can always be achieved by continuing $`M_{02}`$ beyond the straightening of $`\alpha _2`$. ###### Lemma 5.4 Let $`Q=(v_0,v_1,v_2,v_3)`$ be a quadrilateral, with $`(v_1,v_2,v_3)`$ collinear so that $`\alpha _2=\pi `$, and such that $`\mathrm{}v_0v_1v_3`$ is nondegenerate. As in Lemma 5.3, let $`P=QP^{}`$ be a convex polygon obtained by gluing $`P^{}`$ to edge $`v_0v_3`$ of $`Q`$, with $`v_0`$ and $`v_3`$ strictly convex vertices of $`P`$. The motion $`M_{02}`$ (moving $`v_2`$ along the line determined by $`v_0v_2`$) transforms $`Q`$ to a strictly convex quadrilateral $`Q^{}`$ such that $`Q^{}P^{}`$ remains a convex polygon (See Fig. 14.) Proof: Because $`v_0`$ and $`v_3`$ are strictly convex vertices, and $`v_1`$ must be strictly convex because $`Q`$ is a nondegenerate triangle, all the interior angles at these vertices are bounded away from $`\pi `$. By assumption, they are also bounded away from $`0`$. Thus there is some freedom of motion for $`v_2`$ along the line determined by $`v_0v_2`$ before the next event, when one of these angles reaches $`0`$ or $`\pi `$. $`\mathrm{}`$ A lower bound on $`\beta ^{}=\pi \alpha _2^{}`$, the amount that $`v_2`$ can be bent before an event is reached, could be computed explicitly in $`O(1)`$ time from the local geometry of $`QP^{}`$, but we will not do so here. ### 5.7 Convexifying Barbed Polygons Call a polygon barbed if removal of one ear $`\mathrm{}abc`$ leaves a convex polygon $`P^{}`$. $`\mathrm{}abc`$ is called the barb of $`P`$. Note that either or both of vertices $`a`$ and $`c`$ may be reflex vertices of $`P`$. In order to permit $`\mathrm{}abc`$ to be degenerate (of zero area), we extend the definition as follows. A weakly simple polygon (Section 5.6, Figure 13) is barbed if, for three consecutive vertices $`a`$, $`b`$, $`c`$, deletion of $`b`$ (i.e., removal of the possibly degenerate $`\mathrm{}abc`$) leaves a simple convex polygon $`P^{}`$. Note this definition only permits weak simplicity at the barb $`\mathrm{}abc`$. The following lemma (for simple barbed polygons) is implicit in \[Sal73\], and explicit (for star-shaped polygons, which includes barbed polygons) in \[ELR<sup>+</sup>98b\], but we will need to subsequently extend it, so we provide our own proof. ###### Lemma 5.5 A weakly simple barbed polygon may be convexified, with $`O(n)`$ moves. Proof: Let $`P=(v_0,v_1,\mathrm{},v_{n1})`$, with $`\mathrm{}v_0v_{n2}v_{n1}`$ the barb. See Fig. 15. The proof is by induction. Lemma 5.2 establishes the base case, $`n=4`$, for every quadrilateral is a barbed polygon. So assume the theorem holds for all barbed polygons of up to $`n1`$ vertices. If both $`v_0`$ and $`v_{n2}`$ are convex, $`P`$ is already convex and we are finished. So assume that $`P`$ is nonconvex, and without loss of generality let $`v_0`$ be reflex in $`P`$. It must be that $`v_1v_{n2}`$ is a diagonal, as it lies within the convex portion of $`P`$. Let $`Q=(v_0,v_1,v_{n2},v_{n1})`$ be the quadrilateral cut off by diagonal $`v_1v_{n2}`$, and let $`P^{\prime \prime }=(v_1,\mathrm{},v_{n2})`$ be the remaining portion of $`P`$, so that $`P=QP^{\prime \prime }`$. $`Q`$ is nonconvex at $`v_0`$. Lemma 5.3 shows that motion $`M`$ (appropriately relabeled) may be applied to convert $`Q`$ to a triangle $`T`$ by straightening $`v_0`$, leaving $`P^{\prime \prime }`$ unaffected. At the end of this motion, we have reduced $`P`$ to a polygon $`P^{}`$ of one fewer vertex. Now note that $`T`$ is a barb for $`P^{}`$ (because $`P^{\prime \prime }`$ is convex): $`P^{}=TP^{\prime \prime }`$. Apply the induction hypothesis to $`P^{}`$. The result is a convexification of $`P`$. Each reduction uses one move $`M`$, and so $`O(n)`$ moves suffice for $`P`$. $`\mathrm{}`$ Note that although each step of the convexification straightens one reflex vertex, it may also introduce a reflexivity: $`v_1`$ is convex in Fig. 15a but reflex in Fig. 15b. We could make the procedure more efficient by “freezing” any joint as soon as it straightens, but it suffices for our analysis to freeze each straightened reflex vertex, thenceforth treating the segment on which it lies as a single rigid link. As is evident in Fig. 15c, the convexification leaves a polygon with several vertices straightened. One of the edges $`e`$ of the barbed polygon is the base of the arch $`A`$ from Section 5.2.2. If either of $`e`$’s endpoints are straightened, then part of the arch will lie directly in the plane $`\mathrm{\Pi }_ϵ`$, and could cause a simplicity violation during the S1 lifting step. Therefore we must ensure that both of $`e`$’s endpoints are strictly convex: ###### Lemma 5.6 Any convex polygon with a distinguished edge $`e`$ can be reconfigured so that that both endpoints of $`e`$ become strictly convex vertices. Proof: Suppose the counterclockwise endpoint $`v_2`$ of $`e`$ has internal angle $`\alpha =\pi `$; see Fig. 16. Let $`v_1`$ be the next strictly convex vertex in clockwise order before $`v_2`$ (it may be that $`v_1`$ is the other endpoint of $`e`$), and $`v_3,v_0`$ be the next two strictly convex vertices adjacent to $`v_2`$ counterclockwise. Let $`Q=(v_0,v_1,v_2,v_3)`$. Then apply Lemma 5.4 to $`Q`$ to convexify $`v_2`$ via motion $`M_{02}`$. Apply the same procedure to the other endpoint of $`e`$ if necessary. $`\mathrm{}`$ Using Lemma 5.5 to convexify the barbed polygon arch, and Lemma 5.6 to make its base endpoints strictly convex, yields: ###### Theorem 5.7 A weakly simple barbed polygon may be convexified in such a way that the endpoints of a distinguished edge are strictly convex. This completes the description of the St. Louis Arch Algorithm, as $`A^{(1)}=A^{(0)}\mathrm{}v_0^{}v_{i1}^{}v_i^{}`$ is a barbed polygon, and Step S4 may proceed because of the strict convexity at the arch base endpoints. ### 5.8 Complexity of St. Louis Arch Algorithm It is not difficult to see that only a constant number of moves are used in steps S0, S1, S2, and S4. Step S3 is the only exception, which we have seen in Lemma 5.5 can be executed in $`O(n)`$ moves. So the resulting procedure can be accomplished in $`O(n^2)`$ moves. The algorithm actually only uses $`O(n)`$ moves, as the following amortization argument shows: ###### Lemma 5.8 The St. Louis Arch Algorithm runs in $`O(n)`$ time and uses $`O(n)`$ moves. Proof: Each barb convexification move used in the proof of Lemma 5.5 constitutes a single move according to the definition in Section 1.2, as four joints open monotonically (cf. Lemma 5.2). Each such convexification move necessarily straightens one reflex joint, which is subsequently “frozen.” The number of such freezings is at most $`n`$ over the life of the algorithm. So although any one barbed polygon might require $`\mathrm{\Omega }(n)`$ moves to convexify, the convexifications over all $`n`$ steps of the algorithm uses only $`O(n)`$ moves. Making the base endpoint angles strictly convex requires at most two moves per step, again $`O(n)`$ overall. Each step of the algorithm can be executed in constant time, leading to a time complexity of $`O(n)`$. Again we must consider computation of the minimum distances around each vertex to obtain $`\delta `$ (Section 5.2.1), but we can employ the same medial axis technique used in Section 2 to compute these distances in $`O(n)`$ time. $`\mathrm{}`$ Note that at most four joints rotate at any one time, in the barb convexification step. ## 6 Open problems Although we have mapped out some basic distinctions between locked and unlocked chains in three dimensions, our results leave many aspects unresolved: 1. What is the complexity of deciding whether a chain in 3D can be unfolded? 2. Theorem 2.1 only covers chains with simple orthogonal projections. Extension to perspective (central) projections, or other types of projection, seems possible. 3. Can a closed chain with a simple projection always be convexified? None of the algorithms presented in this paper seem to settle this case. 4. Find unfolding algorithms that minimize the number of simultaneous joint rotations. Our quadrilateral convexification procedure, for example, moves four joints at once, whereas pocket flipping moves only two at once. 5. Can an open chain of unit-length links lock in 3D? Cantarella and Johnson show in \[CJ98\] that the answer is no if $`n5`$. ### Acknowledgements We thank W. Lenhart for co-suggesting the knitting needles example in Fig. 5, J. Erickson for the amortization argument that reduced the time complexity in Lemma 5.8 to $`O(n)`$, and H. Everett for useful comments. ## Appendix A Appendix ### A.1 Computation of $`ϵ`$ Here we detail a possible computation of $`ϵ`$, as needed in Section 5.2.3. The smallest radius $`r`$ for the circle $`C`$ is determined by the minimum angle $`\beta `$ (the smallest deviation from straightness) and the shortest edge length $`\mathrm{}`$. In particular, $`r\mathrm{}\mathrm{sin}(\beta /2)`$; see Figure 17a,b. Here it is safe to use the $`\beta `$ from the plane $`\mathrm{\Pi }_{xy}`$ because the deviation from straightness is only larger in the tilted plane of $`\mathrm{}v_{i+1},v_i,v_{i1}^{}`$ (cf. Fig. 9), and we seek a lower bound on $`r`$. The tilt $`\psi `$ of the circle leaves the top of $`C`$ at least at height $`r\mathrm{cos}\psi `$. Because $`|v_{i1}^{}v_{i+1}|\sigma `$, the tilt angle must satisfy $`\mathrm{cos}\psi \sigma /\sqrt{\sigma ^2+ϵ^2}`$; see Figure 17c. Thus to meet condition (1), we should arrange that $$\mathrm{}\mathrm{sin}(\beta /2)\frac{\sigma }{\sqrt{\sigma ^2+ϵ^2}}>ϵ$$ which can clearly be achieved by choosing $`ϵ`$ small enough, as $`\mathrm{}`$, $`\beta `$, and $`\sigma `$ are all constants fixed by the geometry of $`P`$. Turning to condition (2) of Section 5.2.3, the movement of $`v_i^{}`$ with respect to $`v_i`$ can be decomposed into two components. The first is determined by the rotation along $`C`$ if that circle were vertical. This deviation is no more than $`\delta _1=r(1\mathrm{cos}\varphi )`$, where $`\varphi `$ is the lifting rotation angle measured at the cone axis $`v_{i1}^{}v_{i+1}`$. Because $`\mathrm{sin}\varphi ϵ/r`$, this leads to $`\delta _1r\left[1\sqrt{1(ϵ/r)^2}\right]`$. The second component is due to the tilt of the circle, which is $`\delta _2=ϵ\mathrm{tan}\psi ϵ^2/\sigma `$; see Figure 17d. The total displacement is no more than $`\delta _1+\delta _2`$. Now it is clear that as $`ϵ0`$, both $`\delta _10`$ and $`\delta _20`$. Thus for any given $`\delta `$, we may choose $`ϵ`$ such that $`\delta _1+\delta _2<\delta `$. ### A.2 Straightening Lemma The following lemma is used to determine $`\delta `$ in Section 5.2.1. ###### Lemma A.1 Let $`ABC`$ be a triangle, with $`|AB|\mathrm{}`$, $`|BC|\mathrm{}`$, and $`\beta \mathrm{}ABC\pi \beta `$. Then for any triangle $`A^{}B^{}C^{}`$ whose vertices are displaced at most $`\delta `$ from those of $`\mathrm{}ABC`$, i.e., $$|AA^{}|<\delta ,|BB^{}|<\delta ,|CC^{}|<\delta ,$$ $`\mathrm{}A^{}B^{}C^{}<\pi `$. Proof: Let $`a`$ be the point on $`BA`$ a distance $`\mathrm{}/2`$ from $`B`$, and let $`c`$ be the point on $`BC`$ a distance $`\mathrm{}/2`$ from $`B`$. Let $`L`$ be the line containing $`ac`$. Set $`\theta =\mathrm{}Bac=\mathrm{}acB`$, and $`\varphi =\mathrm{}aBc=\pi 2\theta `$. Because $`\varphi =\mathrm{}ABC`$, the assumptions of the lemma give $`\beta \pi 2\theta \pi \beta `$, or $`\beta /2\theta (\pi \beta )/2`$. The distance $`d(A,L)`$ from $`A`$ to $`L`$ satisfies $`d(A,L)`$ $``$ $`(\mathrm{}/2)\mathrm{sin}\theta `$ (1) $``$ $`(\mathrm{}/2)\mathrm{sin}(\beta /2)`$ (2) $`>`$ $`\delta .`$ (3) The exact same inequality hold for the distances $`d(B,L)`$ and $`d(C,L)`$, because the relevant angle is $`\theta `$ is each case, and the relevant hypotenuse is $`\mathrm{}/2`$ in each case. Now suppose the three vertices $`A^{}`$, $`B^{}`$, $`C^{}`$ each move no more than $`\delta `$ from $`A`$, $`B`$, and $`C`$ respectively. Then $`L`$ continues to separate $`A^{}`$ and $`C^{}`$ from $`B^{}`$, by the above argument. $`\mathrm{}`$ ### A.3 Quadrilateral Convexification The next results are employed in Section 5.6 on convexifying quadrilaterals. We need the following lemma that states that the reflex joint of a quadrilateral can be straightened in the first place. Let $`Q=v_0v_1v_2v_3`$ be a four-bar linkage with $`v_2`$ a reflex joint. ###### Lemma A.2 A non-convex four-bar linkage can be convexified into a triangle by straightening its reflex joint. Proof: Let $`ray(v_0,v_2)`$ be the ray starting at $`v_0`$ in the direction of $`v_2`$ and refer to Figure 19. Without loss of generality let $`v_0`$ be the origin, $`ray(v_0,v_2)`$ the positive $`x`$-axis and assume the sum of the link lengths $`(l_0+l_1)`$ is smaller than $`(l_2+l_3)`$. Assume that $`v_2`$ is translated continuously along the $`x`$-axis in the positive direction until it gets stuck. Since $`v_2`$ cannot move further it follows that joints $`v_0`$, $`v_1`$ and $`v_2`$ all lie on the $`x`$-axis and joint $`v_1`$ has been straightened. This implies the new interior angle of $`v_2`$, $`\gamma <\pi `$. But before the motion $`v_2`$ was a reflex angle with $`\gamma >\pi `$. Since the angles change continuously there must exist a point during the motion at which $`\gamma =\pi `$. $`\mathrm{}`$ ###### Lemma A.3 When $`d(v_0,v_2)`$ is increased, all joints of the linkage open, that is, the interior angles of the convex joints and the exterior angle of the reflex joint all increase. Proof: We will show that if $`v_2`$ is moved along $`ray(v_0,v_2)`$ in such a way that the distance $`d(v_0,v_2)`$ is increased by some positive real number $`ϵ`$, no matter how small, while $`v_2`$ remains reflex, then all joints open. First note that by Euclid’s Proposition 24 of Book I, $`v_1`$ and $`v_3`$ open, that is, their interior angles increase. Secondly, note that if the interior angle at $`v_0`$ opens then so does the exterior angle at $`v_2`$ (and vice-versa) by applying Euclid’s Proposition 24 to distance $`d(v_1,v_3)`$. Hence another way to state the theorem in terms of distances only is: in a non-convex four-bar linkage the length of the interior diagonal increases if, and only if, the length of the exterior diagonal increases. It remains to show that increasing $`d(v_0,v_2)`$ increases the angle at $`v_0`$. Before proceeding let us take care of the value of $`ϵ`$. While there is no problem selecting $`ϵ`$ too small, we must ensure it is not too big, for otherwise when we increase $`d(v_0,v_2)`$ by $`ϵ`$ the linkage may become convex. From Lemma A.2 we know that as $`d(v_0,v_2)`$ is increased the linkage will become a triangle at some point when joint $`v_2`$ straightens, at which time $`d(v_0,v_2)`$ will have reached its maximum value, say $`l`$. Using the law of cosines for this triangle we obtain $$l^2=l_2^2+l_3^2l_2\{[(l_1+l_2)^2+l_3^2l_0^2]/(l_1+l_2)\}.$$ Therefore if we choose $`ϵ`$ such that $$ϵ<ld(v_0,v_2),$$ then we ensure that $`v_2`$ remains reflex. It is convenient to analyse the situation with link $`v_3v_0`$ as a rigid frame of reference rather than the $`ray(v_0,v_2)`$. Therefore let both $`v_0`$ and $`v_3`$ be fixed in the plane. Then as $`d(v_0,v_2)`$ increases, from Euclid’s Proposition 24 it follows that $`v_1`$ rotates about $`v_0`$ along the fixed circle $`C(v_0,l_0)`$ centered at $`v_0`$ of radius $`l_0`$, $`v_2`$ rotates about $`v_3`$ on the fixed circle $`C(v_3,l_2)`$ centered at $`v_3`$ with radius $`l_2`$, and $`ray(v_0,v_2)`$ rotates about $`v_0`$. Denote the initial configuration by $`Q=v_0v_1v_2v_3`$ and the final configuration after $`d(v_0,v_2)`$ is increased by $`ϵ`$ by $`Q^{}=v_0u_1u_2v_3`$. In other words $`v_1`$ has moved to $`u_1`$, $`v_2`$ has moved to $`u_2`$ and $`ray(v_0,v_2)`$ has moved to $`ray(v_0,u_2)`$. Since the exterior angle at $`v_2`$ is less than $`\pi `$ and link $`v_3v_2`$ rotates in a counterclockwise manner this motion causes $`u_2`$ to penetrate the interior of the shaded circle $`C(v_1,l_1)`$ centered at $`v_1`$ with radius $`l_1`$. Furthermore, $`u_2`$ cannot overshoot this shaded circle and find itself in its exterior after having penetrated it, for this would imply the joint $`u_2`$ is convex, which is impossible for the value of $`ϵ`$ we have chosen. Now, since $`u_2`$ is in the interior of the shaded disk $`C(v_1,l_1)`$ and the radius of this disk is $`l_1`$ it follows that the distance $`d(u_2,v_1)`$ is less than the link length $`l_1`$. Let us therefore extend the segment $`u_2v_1`$ along the $`ray(u_2,v_1)`$ to a point $`u_2^{}`$ so that $`d(u_2,u_2^{})=l_1`$. Note that the figure shows the situation when $`u_2^{}`$ lies in the exterior of $`C(v_0,l_0)`$. If $`u_2^{}`$ lies on $`C(v_0,l_0)`$ it yields $`u_1`$ imediately. If $`u_2^{}`$ lies in the interior of $`C(v_0,l_0)`$ then the arc $`u_1,u_2^{},u_1^{}`$ in the figure would be in the interior of $`C(v_0,l_0)`$. But of course $`u_1`$, the new position of $`v_1`$, must lie on the circle $`C(v_0,l_0)`$. To compute the possible locations for $`u_1`$ we rotate segment $`u_2u_2^{}`$ about $`u_2`$ in both the clockwise and counterclockwise directions to intersect the circle $`C(v_0,l_0)`$ at points $`u_1`$ and $`u_1^{}`$, respectively. Since $`u_2`$ lies on $`ray(v_0,u_2)`$ it follows that $`u_1^{}`$ lies to the left of $`ray(v_0,u_2)`$. But the two links $`v_0u_1`$ and $`u_1u_2`$ must remain to the right of $`ray(v_0,u_2)`$ because the links are not allowed to cross each other. Therefore $`u_1^{}`$ cannot be the final position of link $`v_1`$ and the latter must move to $`u_1`$. Now since $$d(u_2,u_2^{})=l_1>d(u_2,v_1),$$ it follows that $`u_1`$ lies clockwise from $`v_1`$. Therefore link $`v_0v_1`$ has rotated clockwise with respect to $`v_0`$ and since link $`v_0v_3`$ is fixed the interior angle at $`v_0`$ has increased, proving the theorem. $`\mathrm{}`$
no-problem/9910/cond-mat9910493.html
ar5iv
text
# Field-theoretic methods for systems of particles with exotic exclusion statistics ## 1 Introduction Haldane introduced in a generalized exclusion principle defining a quantity $`d(N)`$, the Haldane dimension, which is the dimension of the one-particle Hilbert space associated with the $`N`$-th particle, keeping the coordinates of the other $`N1`$ particles fixed. The statistical parameter, $`g`$, of a particle (‘$`g`$-on’) is defined by (where we add $`m`$ particles) $$g=\frac{d(N+m)d(N)}{m}$$ (1) and the conditions of homogeneity on $`N`$ and $`m`$ are imposed. The system is assumed to be confined to a finite region where the number $`K`$ of independent single-particle states is finite and fixed. Here the usual Bose and Fermi ideal gases have $`g=0`$ for Bose case (i.e. $`d(N)`$ does not depend on $`N`$) and $`g=1`$ for Fermi case – that is the dimension is reduced by one for each added fermion, which is the usual Pauli principle. Haldane also introduced a combinatorial expression (which we will term the Haldane-Wu state-counting procedure ) for the number of ways, $`W`$, to place $`N`$ $`g`$-ons into $`K`$ single–particle states. Then $$W=\frac{[d(N)+N1]!}{[d(N)1]!N!}d(N)=Kg(N1),$$ (2) which was subsequently used by many authors to describe thermodynamical properties of $`g`$-ons. In particular Bernard and Wu and Murthy and Shankar showed that the behavior of the excitations in the Calogero–Sutherland model is consistent with Eqn.(2) for $`g`$-ons, with fractional $`g`$, in general. In Ref the microscopic origin of the Haldane-Wu state-counting procedure was examined. The notion of statistics was considered in a probabilistic spirit. The author assumed that a single level may be occupied by any number of particles, and each occupancy is associated with an a priori probability. These probabilities are determined by enforcing consistency with the Haldane-Wu state-counting procedure and not with Haldane’s definition of exclusion statistics. There was no construction of a Hilbert space and the a priori probabilities may be negative. This approach has been further elaborated in a number of papers . Another probabilistic approach has been developed in . It was pointed out that there is a distinction between Haldane’s dimension and Haldane-Wu state counting procedures. A ‘fractional’ Hilbert space (associated with the non-integer nature of $`d(N)`$) and the corresponding creation-annihilation operators were constructed and a set of probabilities which give Haldane’s dimension was obtained. The paper is organized as follows. In the next section we introduce the notion of a fractional Hilbert space and creation-annihilation operators associated with it. In section 3 we obtain a generalised resolution of unity in terms of coherent states. In section 4 the definition of Haldane’s dimension is considered in detail. We calculate the partition function and the state-counting expression. In section 5 we consider the Haldane-Wu state-counting procedure and make a comparison with the definition of Haldane’s dimension. ## 2 Hilbert space and creation-annihilation operators In this section we recall the main ideas introduced in Ref . The definition of a fractional dimension Hilbert space is connected with state-counting, which we need to calculate the entropy and other thermodynamical quantities of $`g`$-particles. The main idea is to consider the process of inserting the $`N`$-th particle into the system as a probabilistic process (in spirit of Gibbs), i.e. we assume that the probability of such insertion plays the role of Haldane’s measure of the probability to add the $`N`$-th particle to the system. Let us illustrate the idea for the case of a single degree of freedom, $`g=1/p`$ , and provide an interpretation of $`d(N)`$ for that case. Firstly, we have the vacuum state to which we add the first particle. We assume that the nature of the statistics reveals itself at the level of two particles, so $`d(1)=1`$. Now let us assume that the process of insertion of the second particle is a probabilistic one with the probability $`(1g)`$ of success. We interpret this as fractional dimension, $`d(2)`$, of the subspace (corresponding to double occupation) and $`d(2)=1g`$. The conditional probability to add a third particle (with two assumed present) is $`12g`$. Hence the probability of success in adding three particles is $`1\times (1g)(12g)`$. This leads us to the probability of adding $`n`$ particles is: $$\alpha _n=[1g][12g]\mathrm{}[1(n1)g]$$ We see that the probability to find $`N>p`$ particles in the system is equal to zero. Drawing parallels with dimensional regularization we can formulate a geometrical definition of the fractional dimension. In that case the trace of the identity matrix is identified with the value of (non-integer) dimension, $`d(N)`$. In the calculation of thermodynamical quantities such as the partition function or the mean value of an arbitrary operator $`\widehat{O}`$ we must compute the following traces: $$Z=\mathrm{Tr}\left[\mathrm{Id}\mathrm{e}^{\beta H}\right],\widehat{O}=\frac{1}{Z}\mathrm{Tr}\left[\mathrm{Id}\mathrm{e}^{\beta H}\widehat{O}\right]$$ (3) where the Hamiltonian $`H`$, e.g. for an ideal gas, is $$H=\underset{i=1}{\overset{K}{}}ϵ_in_i$$ (4) and the “unit operator”, Id, which completely defines the exclusion statistics of the particles is defined by $$\mathrm{Id}=\underset{n_1,\mathrm{},n_K=0}{\overset{\mathrm{}}{}}\alpha _{n_1,\mathrm{},n_K}|n_1,\mathrm{},n_Kn_K,\mathrm{},n_1|$$ (5) where $`\alpha _{n_1,\mathrm{},n_K}`$ is the probability to find the state $`|n_1,\mathrm{},n_K`$. Then the full dimension of the $`N`$-particle subspace is given by the formula $$W(N)=\mathrm{Tr}\left(\mathrm{Id}|_{_{i=1}^Kn_i=N}\right)=\underset{n_1+\mathrm{}+n_K=N}{}\alpha _{n_1,\mathrm{},n_K}$$ (6) An analog of Haldane’s dimension, $`d(N)`$, for the $`N`$-particle subspace with an arbitrary fixed $`(N1)`$-particle substate is described by the relation $$d(N)=\underset{\mathrm{}=1}{\overset{N}{}}\frac{\alpha _{n_1,\mathrm{},n_{\mathrm{}}+1,\mathrm{},n_K}}{\alpha _{n_1,\mathrm{},n_K}|_{{\scriptscriptstyle n_i}=N1}}$$ (7) The above procedure is completely general, a concrete choice of the probabilities $`\alpha _{n_1,\mathrm{},n_K}`$ is not required. On the basis of the following two assumptions: 1. the definition of the $`N`$-th particle dimension $`d(N)`$ actually yields Haldane’s conjecture $`d(N)=Kg(N1)`$; 2. the Hilbert space of the system with $`K`$ degrees of freedom is factorized into a product of Hilbert spaces corresponding to each degree of freedom. This means $$\mathrm{Id}=\mathrm{Id}_1\mathrm{Id}_2\mathrm{}\mathrm{Id}_K$$ it was shown, in Ref , that there is a single self-consistent way to define $`\alpha _{n_1,\mathrm{},n_K}`$: $$\alpha _{n_1,\mathrm{},n_K}=\underset{i=1}{\overset{K}{}}[1g][12g]\mathrm{}[1(n_i1)g]$$ (8) where $`g=1/p`$, $`p`$ integer. In this case the statistical parameter $`g`$ can take values between $`0`$ and $`1`$. If we weaken the second condition and allow matrix Id to be a direct product of Id’s which correspond to some elementary ‘exclusion cell’ (block) with dimension $`q>1`$ but keeping the first condition, we obtain the following set of probabilities: $$\alpha (\{n_{ij}\}_{i,j=1}^{K,q})=\underset{i=1}{\overset{K}{}}\left[1\frac{1}{p}\right]\left[1\frac{2}{p}\right]\mathrm{}\left[1\frac{1}{p}\left(\underset{j=1}{\overset{q}{}}n_{ij}1\right)\right]$$ (9) with statistical parameter $`g=q/p`$, $`p`$ integer and $`K^{}=qK`$ the full number of single particle states . Note that the statistical parameter can take values greater than 1. Next we can weaken in addition the first condition and allow $`d(N)`$ to be a non-linear function of $`N=_{i=1}^Kn_i`$. Then we find a large variety of probabilities, among them there is a set of probabilities corresponding to the Haldane-Wu state-counting procedure. To allow interactions between exclusons, ‘hopping’ or interaction with some random potential we should develop a second-quantized formalism. Representation for these operators can be found from the following conditions: $$a_i^{}|n_1\mathrm{}n_i\mathrm{}n_K=\beta _{n_1\mathrm{}n_K}|n_1\mathrm{}n_i+1\mathrm{}n_K$$ (10) $$(a_i^{})^{}=a_i=\mathrm{Id}^1(a^{})^{}\mathrm{Id}$$ (11) $$N_i|n_1\mathrm{}n_K=n_i|n_1\mathrm{}n_K$$ (12) The most interesting case is a system consisting from one exclusion cell, that is $`g=K/p`$, $`K`$ the full number of single particle states. In this case coefficients $`\beta `$ depend only on $`n_i`$ and $`n=_kn_k`$: $$\beta _{n_1\mathrm{}n_K}=\beta _{n_i,n}=\sqrt{(n_i+1)\frac{\alpha _n}{\alpha _{n+1}}}$$ (13) A remarkable result is that for the hopping term ($`a_i^{}a_j`$) the dependence on $`\alpha `$ disappears and it can be represented as $$a_i^{}a_j=b_i^{}b_jP(p)$$ (14) where $`b^{},b`$ are the bosonic operators and $`P(p)`$ the projector on a subspace with the number of particles less than or equal to $`p`$. ## 3 Coherent states To illustrate the idea consider the simplest case $`K=1,g=1/p`$ and exchange statistics between $`g`$-particles to be bosonic one. Let us confine ourself for a moment by considering Hamiltonians depending on the number of particles only. Then states $`|n`$ can be chosen to be bosonic ones: $$|n=\frac{1}{\sqrt{n!}}(a^{})^n|0,n=a^{}a$$ $`a_i^{},a_i`$ are bosonic operators. There is a well-known expressions for a trace and a expansion of the unit in terms of the bosonic coherent states: $$\mathrm{Tr}[\widehat{O}]=\frac{1}{\pi }d\overline{z}dz\mathrm{e}^{\overline{z}z}\overline{z}|\widehat{O}|z$$ (15) $$\text{I}=\frac{1}{\pi }d\overline{z}dz\mathrm{e}^{\overline{z}z}|z\overline{z}|=\underset{n}{}|nn|$$ (16) $$|z=\mathrm{e}^{a^{}z}|0,\overline{z}|=0|\mathrm{e}^{a\overline{z}}$$ (17) If we express the matrix Id in terms of the bosonic coherent states then we can directly apply the bosonic technique to the system of exclusons. Having a look on the usual resolution of the unit we can conclude that if we find a function $`F`$ such that $$\frac{1}{\pi }d\overline{z}dzF(\overline{z}z)|z\overline{z}|=\underset{n}{}\alpha _n|nn|=\mathrm{Id}$$ (18) we solve the problem. Rewriting the probabilities in the form $$\alpha _n=\frac{p!}{p^n(pn)!}$$ it can be shown that the following relation for the matrix Id holds $$\mathrm{Id}=_CdtF_p(t)d\overline{z}dz\mathrm{e}^{\overline{z}z}|zt^{1/2}\overline{z}t^{1/2}|$$ (19) $$F_p(t)=\frac{1}{2\pi i}p!\mathrm{e}^{pt}t^{p1}p^p$$ (20) where the contour $`C`$ runs around the origin in the complex plane in the counter clockwise direction. Noting that $$_CdtF_p(t)t^k=\frac{p!}{p^k(pk)!}$$ (21) we see that the partition function takes the form: $$Z_{1/p}=_CdtF_p(t)(1t\mathrm{e}^{\beta ϵ})^1=\underset{k=0}{\overset{p}{}}\frac{p!}{p^k(pk)!}\mathrm{e}^{k\beta ϵ}$$ (22) For fermions ($`p=1`$): $$Z_\mathrm{f}=Z_1=1+\mathrm{e}^{\beta ϵ}$$ To investigate the bosonic limit the following representation for the partition function is useful: $$Z_{1/p}=_0^{\mathrm{}}dt\mathrm{e}^t\left[1+\frac{t\mathrm{e}^{\beta ϵ}}{p}\right]^p$$ (23) when $`p\mathrm{}`$ (bosonic limit) we have $$Z_{1/p}\underset{p\mathrm{}}{}Z_{\mathrm{}}=_0^{\mathrm{}}dt\mathrm{e}^{t+t\mathrm{e}^{\beta ϵ}}=\frac{1}{1\mathrm{e}^{\beta ϵ}}=Z_\text{b}$$ ## 4 Haldane’s dimension procedure In this section we consider in detail the set of probabilities (9) with $`K=1`$, i. e. one exclusion cell ($`q`$ is the number of states): $$\alpha (\{n_j\}_{j=1}^q)=\underset{j=1}{\overset{N1}{}}\left[1\frac{j}{p}\right]=\frac{1}{p^N}\frac{p!}{(pN)!},N=\underset{j=1}{\overset{q}{}}n_j$$ (24) From (24) and (7) we have obviously $$d(N)=q\left[1\frac{N1}{p}\right]=qg(N1),g=\frac{q}{p}$$ The matrix Id in this case has the form $$\mathrm{Id}=\underset{C}{}dtF_p(t)\frac{1}{\pi ^q}\underset{j=1}{\overset{q}{}}\left[\mathrm{d}\overline{z}_j\mathrm{d}z_j\mathrm{e}^{\overline{z}_jz_j}\right]|\{z_j\sqrt{t}\}_{j=1}^q\{\overline{z}\sqrt{t}\}_{j=1}^q|$$ (25) with the same function $`F_p(t)`$ defined in (20) and the usual bosonic coherent states. To express trace of an operator in terms of bosonic coherent states we use the following relation $$\mathrm{Tr}[\mathrm{Id}\widehat{O}]=\frac{1}{\pi ^q}\underset{j=1}{\overset{q}{}}\mathrm{d}\overline{w}_j\mathrm{d}w_j\mathrm{e}^{_j\overline{w}_jw_j}\{\overline{w}_j\}_{j=1}^q|\mathrm{Id}\widehat{O}|\{w\}_{j=1}^q$$ (26) The last relation allows us to calculate a partition function of $`g`$-ons with Hamiltonian $$\widehat{H}=ϵ\widehat{N},\widehat{N}=\underset{i=1\mathrm{}q}{}n_i=\underset{i=1\mathrm{}q}{}a_i^{}a_i$$ (27) with $`a_i^{},a_i`$ being bosonic creation-annihilation operators. From (26,25) we have $$Z_{q/p}=\underset{C}{}dtF_p(t)\frac{1}{\pi ^{2q}}\mathrm{D}\overline{w}\mathrm{D}w\mathrm{D}\overline{z}\mathrm{D}z\mathrm{e}^{\overline{w}w\overline{z}z}\overline{w}|z\sqrt{t}\overline{z}\sqrt{t}|\mathrm{e}^{\beta (\widehat{H}\mu \widehat{N})}|w$$ where $$\mathrm{D}\overline{w}\mathrm{D}w\underset{j=1}{\overset{q}{}}\mathrm{d}\overline{w}_j\mathrm{d}w_j$$ and summations in the exponential are implied. Using the following relation $$\mathrm{exp}[ca^{}a]=\mathrm{N}\left[\mathrm{exp}\left((\mathrm{e}^c1)a^{}a\right)\right]$$ (28) (here N stands for normal form of an operator expression) and the following properties of the coherent states: $$\overline{w}|\mathrm{N}Q(a^{},a)|z=Q(\overline{w},z)\overline{w}|z,\overline{w}|z=\mathrm{exp}(\overline{w}z)$$ (29) we obtain $$Z_{q/p}(ϵ)=\underset{C}{}dtF_p(t)\left(1t\mathrm{e}^{\beta (ϵ\mu )}\right)^q$$ (30) Using (21), after some algebra, expression (30) can be transformed to $$Z_{q/p}(ϵ)=\frac{1}{(q1)!}\underset{0}{\overset{\mathrm{}}{}}dtt^{q1}\mathrm{e}^t\left[1+\frac{t\mathrm{e}^{\beta (ϵ\mu )}}{p}\right]^p$$ (31) Taking $`p\mathrm{}`$ with $`q`$ fixed corresponds to the bosonic case. From (31) we readily obtain $$Z_{q/p}(ϵ)|_p\mathrm{}=\left[1\mathrm{e}^{\beta (ϵ\mu )}\right]^q$$ which is obviously the bosonic partition function. Calculating (31) in the thermodynamical limit ($`q,p\mathrm{}`$ with $`g=q/p`$ fixed) we have $$Z_{q/p}(ϵ)=\left[h^{(1+g)/g}(h+gz)^{1/g}\mathrm{e}^{1/h+1}\right]^q$$ (32) where $$h=h(g)=\frac{1}{2}\left[1(1+g)z+\sqrt{[1(1+g)z]^2+4gz}\right]$$ (33) and $$z\mathrm{e}^{\beta (ϵ\mu )}$$ (34) The distribution function is defined by the relation: $$n=\frac{\widehat{N}}{q}=\frac{_\mu Z}{q\beta Z}$$ (35) From (35) and (32) we have $$n=\frac{z}{h+gz}$$ (36) Putting in (32) and (33) $`g=1`$ we obtain $$Z_1(ϵ)=\left[h^2(1)(h(1)+z)\mathrm{e}^{1/h(1)+1}\right]^q$$ (37) where $$h(1)=\frac{1}{2}\left[12z+\sqrt{1+4z^2}\right]$$ (38) If we further consider the case of low densities ($`z1`$) we have from (37): $$Z_1(ϵ)[1+z]^q$$ which obviously coincides with usual fermionic partition function. Let us turn now our attention to the state-counting corresponding to the Haldane’s dimension formula. In the case of one exclusion cell we have from eq. (6): $$W(N)=\alpha (N)\underset{n_1,\mathrm{},n_q=0}{\overset{\mathrm{}}{}}\delta _{n_1+\mathrm{}+n_q,N}=\alpha (N)W_\mathrm{b}(N)$$ (39) where $$W_\mathrm{b}(N)=\frac{(q+N1)!}{N!(q1)!}$$ (40) is the bosonic statistical weight. For the set of probabilities (24) we obtain the following expression for the state-counting $$W(N)=\frac{1}{p^N}\frac{p!}{(pN)!}\frac{(q+N1)!}{N!(q1)!}$$ (41) which is obviously different from Haldane-Wu state-counting : $$W_\mathrm{H}(N)=\frac{[q+(1g)(N1)]!}{N![qg(N1)1]!}$$ (42) ## 5 Haldane-Wu state-counting procedure Although the procedure of the last section does not lead to the combinatorial expression derived by Haldane and Wu, we may modify the probabilities, $`\alpha `$, so that this is obtained. Comparing (39) and (42) we can easily write down a set of probabilities which provide the Haldane-Wu state-counting: $$\alpha _\mathrm{H}(N)=\frac{(q1)![qg(N1)+N1]!}{(q+N1)![qg(N1)1]!},N=\underset{i=1}{\overset{q}{}}n_i$$ (43) The operator Id in this case takes the form $$\text{Id}^\mathrm{H}=\underset{C}{}dtF_p^\mathrm{H}(t)\frac{1}{\pi ^q}\underset{j=1}{\overset{q}{}}\left[\mathrm{d}\overline{z}_j\mathrm{d}z_j\mathrm{e}^{\overline{z}_jz_j}\right]|\{z_j\sqrt{t}\}_{j=1}^q\{\overline{z}\sqrt{t}\}_{j=1}^q|$$ (44) where $$F_p^\mathrm{H}(t)=\frac{1}{2\pi i}\underset{n=0}{\overset{p}{}}\frac{(q1)![qg(n1)+n1]!}{(q+n1)![qg(n1)1]!}t^{n1}$$ (45) For the partition function we have the following expression: $`Z_{q/p}^\mathrm{H}(ϵ)`$ $`=`$ $`{\displaystyle \underset{C}{}}dtF_p^\mathrm{H}(t)\left(1t\mathrm{e}^{\beta (ϵ\mu )}\right)^q`$ (46) $`=`$ $`{\displaystyle \underset{N=0}{\overset{p}{}}}{\displaystyle \frac{[qg(N1)+N1]!}{N![qg(N1)1]!}}z^N`$ (47) which is identical to the one used by Wu. From (47) we can obtain the statistical distribution in the standard way : $$n\frac{N}{q}=\frac{1}{w(\mathrm{e}^{\beta (ϵ\mu )})+g}$$ (48) where function $`w`$ satisfies the following equation $$w(\mathrm{e}^{\beta (ϵ\mu )})^g\left[1+w(\mathrm{e}^{\beta (ϵ\mu )})\right]^{1g}=\mathrm{e}^{\beta (ϵ\mu )}$$ (49) Turning our attention to an analog of Haldane’s dimension formula for Haldane-Wu state-counting procedure we have from (7) and (43): $$d_\mathrm{H}(N)=\frac{\alpha _\mathrm{H}(N)}{\alpha _\mathrm{H}(N1)}=q\frac{qg(N1)+N1}{q+N1}\underset{j=1}{\overset{N1}{}}\frac{qg(N1)+j1}{qg(N2)+j1}$$ Taking the thermodynamical limit leads us to the following expression $$d_\mathrm{H}(N)=\frac{qgN+N}{1+N/q}$$ (50) At sufficiently small densities $`N/q1`$ we have $$d_\mathrm{H}(N)=qgN+\mathrm{O}(N^2/q)$$ We can conclude that at low densities Haldane’s dimension and Haldane-Wu state-counting procedures are equivalent while in general they are not. ## 6 Conclusion In this paper we have demonstrated the construction of coherent states for particles obeying Haldane exclusion statistics. This construction allows considerable freedom in the definition and permits the construction of states yielding either Haldane’s dimension or Haldane-Wu state-counting. These are demonstrated in the limit of low densities. These results will be used in a future publication to analyse “non-ideal” gas of particles obeying exclusion statistics. ## Acknowledgments. This work was supported by EPSRC grant GR K68356.
no-problem/9910/gr-qc9910088.html
ar5iv
text
# Entropy bound of a charged object and electrostatic self-energy in black holes ## 1 Introduction By arguing from the generalized second law of thermodynamics for the Schwarzschild black hole, Bekenstein conjectured in 1981 the existence of an upper bound on the entropy of any neutral object. This derivation was immediately criticized by Unruh and Wald which point out that quantum effects produce a buoyany force on the box containing the matter since this one is accelerated and, as a consequence, the generalized entropy always increases even if the entropy bound is not verified. Recently, Bekenstein and Mayo , Hod and then Linet derived an upper bound on the entropy of any charged object, initially found by Zaslavskii in another context, by requiring the validity of thermodynamics of Reissner-Nordström or Kerr-Newman black holes linearized with respect to the electric charge. In this proof, it is essential to take into account the electrostatic self-energy of the charged object in the Schwarzschild or Kerr black holes. As in the neutral case, Shimomura and Mukohyama criticized this derivation for the same reasons. In this paper, we are not going to discuss these criticisms but we give understanding of the relation between the entropy bound of a charged object, as obtained in the original method of Bekenstein, and the electrostatic self-energy in static black holes with spherical symmetry. We suppose that the electromagnetic field is a test field minimally coupled to the metric. Without pretending to any rigour, we find a general expression of the electrostatic self-energy in these black holes. Then, we assume the existence of thermodynamics for such black holes. In the neutral case, we obtain immediately the upper bound on the entropy of the object. In the charged case, we obtain an entropy bound which is independent on the choice of the black hole thanks to the found expression of the electrostatic self-energy. The plan of the work is as follows. In section 2, we recall basic definitions about the static black holes with spherical symmetry. We study the electrostatics in these black holes in section 3. The purpose of this section is to conjecture the general expression of the electrostatic self-energy. The derivation of the entropy bound is fulfilled in section 4. We add in section 5 some concluding remarks. ## 2 Static black holes with spherical symmetry We consider a static black hole with spherical symmetry which contains for example a $`U(1)`$-field $`A_\mu ^{(Q)}`$ with charge $`Q`$ and eventually other fields having no charge. The spacetime is asymptotically flat. The black hole is characterized by its mass $`m`$ and for instance the parameter $`Q`$. In the notations of Visser , there exists a coordinate system $`(t,r,\theta ,\phi )`$ in which the metric can be written as $$ds^2=\mathrm{e}^{2\varphi (r)}\left(1\frac{b(r)}{r}\right)dt^2+\left(1\frac{b(r)}{r}\right)^1dr^2+r^2d\theta ^2+r^2\mathrm{sin}^2\theta d\phi ^2$$ (1) where $`b`$ and $`\varphi `$ are two functions of the radial coordinate $`r`$ with the condition $`\varphi (\mathrm{})=0`$. The horizon $`r=r_H`$ is defined by the equation $`b(r_H)=r_H`$ with the sssumptions that $`\varphi `$ and its derivative are finite at $`r=r_H`$. So, metric (1) is valid for $`r>r_H`$. The quantity $`b(\mathrm{})/2`$ is the mass $`m`$ of the black hole. Since there exists a Killing horizon $``$ at $`r=r_H`$, we have the generalization of the Smarr formula $$m=\frac{\kappa 𝒜}{4\pi }\frac{1}{4\pi }_\mathrm{\Sigma }𝑑S_\mu R_\nu ^\mu \xi ^\nu $$ (2) where $`\xi ^\nu `$ is the timelike Killing vector and $`\mathrm{\Sigma }`$ a spacelike hypersurface, which starts from $``$ to reach the spatial infinity, with the surface element $`dS_\mu `$. $`𝒜`$ is the area $`4\pi r_H^2`$ of the horizon $``$. The quantity $`\kappa `$ is the surface gravity of the horizon defined by $`\xi ^\alpha _\alpha \xi ^\beta =\kappa \xi ^\beta `$ at $`r=r_H`$. For metric (1), this latter definition gives the expression $$\kappa =\frac{\mathrm{e}^{\varphi (r_H)}}{2r_H}\left(1\frac{db}{dr}(r_H)\right)$$ (3) which would be a function of $`m`$ and $`Q`$. It will hereafter be assumed that $`b^{}(r_H)<1`$ to get $`\kappa >0`$. For a black hole in any gravitational field theory, it is possible in general to define a function $`S_{bh}`$ of $`m`$ and $`Q`$ which satisfies the first law of the black hole mechanics . Discarding a variation of $`Q`$, we limit ourselves to consider the first law $$dm=\frac{\kappa }{2\pi }dS_{bh}\mathrm{when}dQ=0.$$ (4) Within the Euclidean approach in quantum field theory at finite temperature , it is clear that $`\kappa /2\pi `$ is the Hawking temperature. So, $`S_{bh}`$ is well interpreted as the entropy of the black hole. In general, $`S_{bh}𝒜/4`$. In the system containing a black hole and matter with entropy $`S`$, we assume the generalized second law of thermodynamics $$dS_{bh}+dS0.$$ (5) Metric (1) can support an electrostatic test field with spherical symmetry corresponding to an electric charge $`q`$ inside the horizon such that $`qm`$ and $`qQ`$. The electromagnetic potential $`A_\mu `$, distinct from $`A_\mu ^{(Q)}`$, has only a non-vanishing component $`A_0`$ which satisfies the electrostatic equation $`_r(r^2\mathrm{e}^{\varphi (r)}_rA_0)=0`$ for $`r>r_H`$. From this and the Gauss theorem, we obtain the solution $$A_0(r)=qa(r)\mathrm{with}a(r)=_r^{\mathrm{}}\frac{\mathrm{e}^{\varphi (r)}dr}{r^2}$$ (6) whose electric field has a regular behaviour at the horizon. We denote $`a_H=a(r_H)`$. We admit that a charged black hole with parameters $`m`$, $`Q`$ and electric charge $`q`$ exists since the electrostatic test field $`qa`$ is regular at the horizon of metric (1). The variation of the mass $`dm`$ of the solution to another solution resulting from a small amount $`dq`$ is given by $$dm=\frac{\kappa }{2\pi }dS_{bh}+qa_Hdq\mathrm{when}dQ=0$$ (7) where $`qa_Hdq`$ representes the electrostatic energy at the horizon of the charge $`dq`$ in the exterior field $`qa`$. We can rewrite first law (7) under the useful form $$dS_{bh}=\frac{2\pi }{\kappa }dm\frac{2\pi }{\kappa }qa_Hdq\mathrm{when}dQ=0.$$ (8) We now give the example of the Reissner-Nordström black hole for a $`U(1)`$-field $`A_\mu ^{(Q)}`$ minimally coupled. It is described by metric (1) with $$b^{(RN)}(r)=2M\frac{Q^2}{r}\mathrm{and}\varphi ^{(RN)}(r)=0.$$ (9) Its mass $`m`$ is $`M`$. The horizon is located at $`r_H=r_+=M+\sqrt{M^2Q^2}`$. The entropy $`S_{bh}`$ is $`4\pi r_H^2`$ and the surface gravity is $$\kappa ^{(RN)}=\frac{\sqrt{M^2Q^2}}{(M+\sqrt{M^2Q^2})^2}.$$ (10) The electrostatic potential $`a`$ is $`1/r`$ and consequently $`a_H=1/r_H`$. ## 3 Electrostatics in black holes We consider an electric test charge $`e`$ held fixed in metric (1), satisfying $`em`$ and $`eQ`$. Since $`e`$ and $`Q`$ are two different types of local charge, we can linearly determine the electrostatic field generated by the charge $`e`$ without taking into account the backreaction. For a point charge $`e`$ located at $`r=r_0`$, $`\theta =\theta _0`$ and $`\phi =\phi _0`$, the electrostatic potential $`V`$ obeys the electrostatic equation $`{\displaystyle \frac{}{r}}\left(r^2\mathrm{e}^{\varphi (r)}{\displaystyle \frac{}{r}}V\right)+\mathrm{e}^{\varphi (r)}\left(1{\displaystyle \frac{b(r)}{r}}\right)^1\left[{\displaystyle \frac{1}{\mathrm{sin}\theta }}{\displaystyle \frac{}{\theta }}\left(\mathrm{sin}\theta {\displaystyle \frac{}{\theta }}V\right)+{\displaystyle \frac{1}{\mathrm{sin}^2\theta }}{\displaystyle \frac{^2}{\phi ^2}}V\right]`$ $`=4\pi e\delta (rr_0)\delta (\mathrm{cos}\theta \mathrm{cos}\theta _0)\delta (\phi \phi ).`$ (11) The physical solution to equation (3) must have a regular behaviour at the horizon and no electric flux through the sphere of radius $`r_H`$. The electric flux $`_R[V]`$ for $`R>r_H`$ has to have the value $$_R[V]=_{r=R}r^2\mathrm{e}^{\varphi (r)}_rV\mathrm{sin}\theta d\theta d\phi =\{\begin{array}{cc}4\pi e\hfill & R>r_0\hfill \\ 0\hfill & R<r_0.\hfill \end{array}$$ (12) Moreover, we must add the exterior potential $`qa`$ to the potential $`V`$. The solution to equation (3) is expressed as the sum of the elementary solution in the Hadamard sense, denoted $`V_C`$, and a homogeneous solution. By virtue of the spherical symmetry of metric (1), we induce that the homogeneous term is proportional to $`a`$. Moreover, the operator in equation (3) is self-adjoint and consequently the expression of $`V`$ is symmetric in $`r`$ and $`r_0`$. Thus, $`V`$ has necessarily the form $$V=V_C(r,r_0,\theta ,\theta _0,\phi ,\phi _0)+sea(r_0)a(r).$$ (13) The potentiel $`V_C`$ has the Coulombian form at the neigbourhood of the point charge $`e`$. The parameter $`s`$, having the dimension of length, is to be determined in function of $`m`$ and $`Q`$ by taking into account the global condition (12). We now turn to the example of the Reissner-Nordström metric characterized by $`M`$ and $`Q`$. The electrostatic potential $`V`$ generated by an electric charge $`e`$ at $`r=r_0`$ and $`\theta =0`$ has already determined in closed form $$V^{(RN)}(r,\theta ,\phi )=V_C^{(RN)}(r,\theta ,\phi )+\frac{eM}{rr_0}$$ (14) where $`V_C^{(RN)}`$ has the expression $$V_C^{(RN)}=\frac{e}{rr_0}\frac{(rM)(r_0M)(M^2Q^2)\mathrm{cos}\theta }{[(rM)^2+(r_0M)^22(rM)(r_0M)\mathrm{cos}\theta (M^2Q^2)\mathrm{sin}^2\theta ]^{1/2}}$$ (15) having a singularity at $`r=r_0`$ and $`\theta =0`$. It should be a multiple of the elementary solution derived in isotropic coordinates by Copson . The electric flux $`_{r_H}[V_C^{(RN)}]`$ through the sphere of radius $`r_H`$ is $$_{r_H}[V_C^{(RN)}]=4\pi e\frac{M}{r_0}$$ (16) in accordance with the introduction of the homogeneous term into solution (14). In the Reissner-Nordström black hole, we see that $`s=M`$. In general, it is not possible to find an exact solution to equation (3). However, we are going to consider two cases in which we can determine the parameter $`s`$ appearing in form (13). ### 3.1 Analysis near of the horizon in the case $`\varphi =0`$ In the case of metric (1) with $`\varphi =0`$, the electrostatic equation (3) can be written near the horizon as $`{\displaystyle \frac{}{r}}\left(r^2{\displaystyle \frac{}{r}}V\right)+{\displaystyle \frac{r_H}{(1b^{}(r_H))(rr_H)}}\left[{\displaystyle \frac{1}{\mathrm{sin}\theta }}{\displaystyle \frac{}{\theta }}\left(\mathrm{sin}\theta {\displaystyle \frac{}{\theta }}V\right){\displaystyle \frac{1}{\mathrm{sin}^2\theta }}{\displaystyle \frac{^2}{\phi ^2}}V\right]`$ $`4\pi e\delta (rr_0)\delta (\mathrm{cos}\theta \mathrm{cos}\theta _0)\delta (\phi \phi _0)`$ (17) for $`r`$ and $`r_0`$ near $`r_H`$. In the other hand, the electrostatic equation in a Reissner-Nordström black hole is $`{\displaystyle \frac{}{r}}\left(r^2{\displaystyle \frac{}{r}}V^{(RN)}\right)+{\displaystyle \frac{r^2}{(rr_{})(rr_+)}}\left[{\displaystyle \frac{1}{\mathrm{sin}\theta }}{\displaystyle \frac{}{\theta }}\left(\mathrm{sin}\theta {\displaystyle \frac{}{\theta }}V^{(RN)}\right)+{\displaystyle \frac{1}{\mathrm{sin}^2\theta }}{\displaystyle \frac{^2}{\phi ^2}}V^{(RN)}\right]`$ $`=4\pi e\delta (rr_0)\delta (\mathrm{cos}\theta \mathrm{cos}\theta _0)\delta (\phi \phi _0).`$ (18) Near $`r=r_+`$, equation (3.1) coincides with equation (3.1) if the parameters $`M`$ and $`Q`$ of the Reissner-Nordström metric are such that $$r_H=r_+(M,Q)\mathrm{and}1\frac{db}{dr}(r_H)=\frac{2\sqrt{M^2Q^2}}{r_+(M,Q)}.$$ (19) The elementary solution in the Hadamard sense is uniquely determined in this neighbourhood and therefore we allow $`V_CV_C^{(RN)}`$ for $`r`$ and $`r_0`$ near $`r_H`$ to be true. Hence from result (16), we deduce $$_{r_H}[V_C]4\pi e\frac{M}{r_0}$$ (20) where $`M`$ will be determined from (19). In the limit where $`r_0r_H`$, we get $$_{r_H}[V_C]=4\pi e\left[1\frac{1}{2}\left(1\frac{db}{dr}(r_H)\right)\right].$$ (21) We are now in a position to determine the parameter $`s`$ in form (13) for a black hole with $`\varphi =0`$ since $`V`$ must further satisfy the global condition (12). Taking into account the electric flux (21), we obtain $`s`$ under the form $$s=\frac{1}{a_H}\left(1\kappa r_H\right)$$ (22) where $`\kappa `$ is given by (3) with $`\varphi =0`$. We notice that $`a_H=1/r_H`$. ### 3.2 A particular black hole with $`\varphi 0`$ It is desirable to adopt a metric which allows us to find an exact solution to the electrostatic equation (3) in the case $`\varphi 0`$, without referring to any gravitational theory of gravity. A trivial example is a metric which is conformal to a Reissner-Nordstrom metric. Since equation (3), $`\kappa `$ and $`a_H`$ are conformally invariant then we find again formula (22). In order to study a non-trivial case, we put the following metric $`ds^2=\left(1{\displaystyle \frac{\alpha }{R}}\right)^2\left(1{\displaystyle \frac{2m}{R}}\right)dt^2+\left(1{\displaystyle \frac{2m}{R}}\right)^1dR^2`$ $`+R^2\left(1{\displaystyle \frac{\alpha }{R}}\right)\left(d\theta ^2+\mathrm{sin}^2\theta d\phi ^2\right)`$ (23) in the coordinate system $`(t,R,\theta ,\phi )`$. We take $`0<\alpha <2m`$. Metric (3.2) is defined for $`R>2m`$ and $`R=2m`$ is a horizon. So, metric (3.2) describes a black hole. We can easily go to form (1) of the metric by performing the change of radial coordinate $`r=\sqrt{R(R\alpha )}`$ and we verify that $`\varphi 0`$. The electrostatic potential $`V^{(P)}`$ due to an electric charge $`e`$ located at $`(R_0,\theta _0,\phi _0)`$ is governed by the equation $`{\displaystyle \frac{}{R}}\left[(R\alpha )^2{\displaystyle \frac{}{R}}V^{(P)}\right]+{\displaystyle \frac{R\alpha }{R2m}}\left[{\displaystyle \frac{1}{\mathrm{sin}\theta }}{\displaystyle \frac{}{\theta }}\left(\mathrm{sin}\theta {\displaystyle \frac{}{\theta }}V^{(P)}\right)+{\displaystyle \frac{1}{\mathrm{sin}^2\theta }}{\displaystyle \frac{^2}{\phi ^2}}V^{(P)}\right]`$ $`=4\pi e\delta (rr_0)\delta (\mathrm{cos}\theta \mathrm{cos}\theta _0)\delta (\phi \phi _0).`$ (24) We define a function X by setting $`V^{(P)}=XR/(R\alpha )`$. From (3.2), we obtain $`{\displaystyle \frac{^2}{R^2}}X+{\displaystyle \frac{2}{R}}{\displaystyle \frac{}{R}}X+{\displaystyle \frac{1}{(R\alpha )(R2m)}}\left[{\displaystyle \frac{1}{\mathrm{sin}\theta }}{\displaystyle \frac{}{\theta }}\left(\mathrm{sin}\theta {\displaystyle \frac{}{\theta }}X\right)+{\displaystyle \frac{1}{\mathrm{sin}^2\theta }}{\displaystyle \frac{^2}{\phi ^2}}X\right]`$ $`={\displaystyle \frac{4\pi e}{R_0(R_0\alpha )}}\delta (rr_0)\delta (\mathrm{cos}\theta \mathrm{cos}\theta _0)\delta (\phi \phi _0)`$ (25) which coincides with equation (3.1). So, the elementary solution in the Hadamard sense to equation (3.2) is $$V_C^{(P)}=\frac{R}{R\alpha }\frac{R_0}{R_0\alpha }V_C^{(RN)}(R,\theta ,\phi )$$ (26) for the parameters $`M`$ and $`Q`$ of the Reissner-Nordström background such that $$2m+\alpha =2M\mathrm{and}2m\alpha =Q^2.$$ (27) The electric flux at the infinity of $`V_C^{(P)}`$ can be calculated from expression (15) of $`V_C^{(RN)}`$ $$_{\mathrm{}}[V_C^{(P)}]=4\pi e\frac{R_0M}{R_0\alpha }.$$ (28) In the case of metric (3.2), the electrostatic potential $`a`$ given by (6) has the expression $`1/(R\alpha )`$. By substracting $`4\pi e`$ from (28), we find thereby $`s=M\alpha `$. With the value of $`M`$ deduced from (27), we therefore have the expression of the parameter $`s`$ $$s=m\frac{\alpha }{2}.$$ (29) The surface gravity $`\kappa `$ can be calculated by using the radial coordinate $`R`$ and we find $$\kappa ^{(P)}=\frac{1}{2(2m\alpha )}.$$ (30) We have $`a_H=1/(2m\alpha )`$ and so $`\kappa ^{(P)}/a_H=1/2`$. From general result (22) for black holes in the case $`\varphi =0`$ and this particular case (29) with $`\varphi 0`$, we conjecture that the parameter $`s`$ can be expressed in terms of $`a_H`$ and $`\kappa `$ by the formula $$s=\frac{1}{a_H}\left(1\frac{\kappa }{a_H}\right).$$ (31) ## 4 Entropy bound of a charged object We now consider a charged object with a mass $`ϵ`$, an electric charge $`e`$ and a radius $`\mathrm{}`$ located at the position $`(r_0,\theta _0,\phi _0)`$ in metric (1) characterized by the parameters $`m`$ and $`Q`$. We suppose that this object has its own gravitational field negligeable, i.e. $`ϵm`$ and $`ϵQ`$ and that its electric field satisfies the conditions of validity for a test field mentioned in the previous section. The energy $``$ of the charged object is the sum of the energy obtained by integrating on the energy-momentum of matter and the energy of the electromagnetic field. The energy of the electrostatic field $`V+qa`$ in the background metric is given by $`W_{em}={\displaystyle \frac{q}{4\pi }}{\displaystyle \sqrt{g}g^{00}g^{ij}_iV_jadrd\theta d\phi }`$ $`{\displaystyle \frac{1}{8\pi }}{\displaystyle \sqrt{g}g^{00}g^{ij}_iV_jVdrd\theta d\phi }`$ (32) which $`V`$ is governed by (3) with the global condition (12). The first term in expression (4) is the electrostatic energy $`W_{elect}`$ of the charge $`e`$ in the exterior field $`qa`$. By performing an integration by part, we obtain $$W_{elect}=qea(r_0)$$ (33) since the integral on the sphere of radius $`r_H`$ vanishes according to (12). The second term in expression (4) is infinite for a point charge. The divergence has a Coulombian type resulting from the part $`V_C`$ of the potential $`V`$ expressed in form (13). It should be incorporated at the mass of the charge. Nevertheless, it remains a finite part from the homogeneous term which leads to the electrostatic self-energy $`W_{self}`$ $$W_{self}=\frac{1}{2}e^2s[a(r_0)]^2.$$ (34) With formulas (33) and (34), we obtain $$=ϵ\sqrt{g_{00}(r_0)}+qea(r_0)+\frac{1}{2}e^2s[a(r_0)]^2.$$ (35) The last state in which the charged object is just outside the horizon is defined by the position $`r_0`$ which is related to $`\mathrm{}`$ by the formula $$\mathrm{}2(r_0r_H)^{1/2}r_H^{1/2}\frac{1}{[1b^{}(r_H)]^{1/2}}$$ (36) by assuming that the proper radial distance $`\mathrm{}`$ is very small. Its energy $`_{last}`$ has expression (35) calculated for $`r_0`$ determined by (36). We find thereby $$_{last}\kappa ϵ\mathrm{}+qea_H+\frac{1}{2}se^2a_H^2.$$ (37) We are now in a position to use the original method of Bekenstein for finding the entropy bound for the entropy $`S`$ of this object. We consider in fact this charged object in the black holes defined by metric (1), characterized by $`m`$ and $`Q`$, plus the electrostatic test field $`qa`$, characterized by the electric charge $`q`$. To obtain the expression of $`S_{bh}(m,Q,q)`$ of the charged black hole linearized with respect to $`q^2`$, we perform the integration of the first law (8) under the form $$S_{bh}(m,Q,q)\overline{S}_{bh}(m,Q)\frac{\pi }{\kappa }a_Hq^2.$$ (38) The generalized entropy of the state just ouside the horizon is $`S_{bh}(m,Q,q)+S`$. When the charged object falls in the horizon, the final state is a black hole with the new parameters $$m_f=m+_{last}q_f=q+e\mathrm{and}Q_f=Q.$$ (39) However, the entropy reduces to $`S_{bh}(m_f,Q_f,q_f)`$ in this final state. Now, we write down the generalized second law (5) of thermodynamics $$S_{bh}(m_f,Q_f,q_f)S_{bh}(m,Q,q)+S.$$ (40) The increase of entropy linear in $`_{last}`$ can be calculated from the first law (8). However, we want to keep the terms in $`e^2`$ and this is why we use the linearized expression (38). We obtain $$dS_{bh}=\frac{\pi }{\kappa }\left(2_{last}e^2a_H2eqa_H\right).$$ (41) By inserting (37) into (41), we thus have $$dS_{bh}=2\pi ϵ\mathrm{}+\pi e^2(sa_H1)a_H\frac{1}{\kappa }.$$ (42) With the help of (42), the generalized second law (40) gives the desired entropy bound of the charged object $$S2\pi ϵ\mathrm{}+\pi e^2(sa_H1)a_H\frac{1}{\kappa }$$ (43) which seems to be dependent on the parameters of the considered black hole. With our conjecture (31) on the value of $`s`$, we obtain the entropy bound $$S2\pi \left(ϵ\mathrm{}\frac{1}{2}e^2\right)$$ (44) already derived from thermodynamics of the Schwarzschild black hole. ## 5 Conclusion We have assumed the existence of thermodynamics of static black holes with spherical symmetry but fortunately without having has to know the expression of the entropy $`S_{bh}`$ in terms of the parameters of the black holes. Then, we have found the upper bound (43) on the entropy of a charged object which is dependent on the expression of the electrostatic self-energy. In the neutral case, it reduces immediately to the usual entropy bound . The crucial point is the determination of the electrostatic self-energy (34), i.e. the value of the parameter $`s`$ appearing in form (13). We have obtained very strong indications that $`s`$ can be expressed in terms of the surface gravity $`\kappa `$ and the value $`a_H`$ of the electrostatic potential at the horizon. We have found formula (31): $$s=\frac{1}{a_H}\left(1\frac{\kappa }{a_H}\right).$$ A rigorous proof of this result is beyond our scope. By admitting this result, we then obtain an entropy bound for a charged object where the parameters of the black hole disappear as in the Schwarzschild case .
no-problem/9910/astro-ph9910570.html
ar5iv
text
# QUASARS AS ABSORPTION PROBES OF THE HUBBLE DEEP FIELD1footnote 11footnote 1 Observations reported here were obtained at the Multiple Mirror Telescope Observatory, a facility operated jointly by the University of Arizona and the Smithsonian Institution; and at Kitt Peak National Observatory, National Optical Astronomy Observatories, operated by AURA Inc., under contract with the National Science Foundation. ## 1 INTRODUCTION The Hubble Deep Field (HDF), with its unprecedented depth and its rich resource of complementary data, has opened new avenues for studying galaxy evolution and cosmology (Williams et al. 1996; Livio, Fall & Madau 1998). The northern HDF no longer stands alone as the subject of the deepest image of the sky ever made; it was recently matched by deep Hubble Space Telescope (HST) observations of a southern field (Williams et al. 1998). In this paper, we use the term HDF to refer only to the northern field. Yet, no matter how deep any imaging survey might be, it can only reveal the luminous parts of galaxies, which comprise only 2-3% of the material in the universe. An examination of the cold, diffuse and dark components of the universe along the line of sight toward the HDF would provide an important complement to the study of the luminous matter content between $`0<z<4`$. There are a number of benefits to an absorption survey, using distant QSOs as background probes. Material can be detected in absorption that would be impossible to detect in emission. For example, galaxy halos can be detected using the C IV $`\lambda \lambda `$1548,1550 and Mg II $`\lambda \lambda `$2796,2800 doublets over the entire range $`0<z<4`$ (e.g. Meylan 1995). Moreover, quasar absorption can be detected via Lyman-$`\alpha `$ absorption at an H I column a million times lower than can be seen directly in emission (Rauch 1998). Lyman-$`\alpha `$ absorbers are as ubiquitous as galaxies and they effectively trace the potential of the underlying dark matter distribution (Hernquist et al. 1996; Miralda-Escudé et al. 1996). Finally, given a sufficiently bright background quasar, absorbers can be detected with an efficiency that does not depend on redshift. By contrast, galaxy surveys are inevitably complicated by effects such as Malmquist bias, cosmological dimming, k-corrections and surface brightness selection effects. The detection of a network of QSO absorbers in a volume centered on a deep pencil-beam survey allows several interesting experiments in large scale structure. It allows clustering to be detected on scales in excess of 10 $`h_{100}^1`$ Mpc. Individual QSO sightlines show C IV and Mg II correlation power on scales up to 20 $`h_{100}^1`$ Mpc (Quashnock & Vanden Berk 1998), and multiple sightlines have been used to trace out three-dimensional structures on even larger scales (Sargent & Steidel 1987; Dinshaw & Impey 1996; Williger et al. 1996). Deep pencil-beam galaxy redshift surveys have shown that around half of the galaxies lie in structures with line of sight separations of 50-300 $`h_{100}^1`$ Mpc (Cohen et al. 1996, 1999). In addition, the spatial relationship between quasar absorbers and luminous galaxies can be defined; they are expected to have different relationships to the underlying mass distribution (Cen et al. 1998). We have identified a set of QSOs in the direction of the HDF, suitable for use as background probes of the volume centered on the HDF line of sight. Section 2 contains the multicolor photometry and a simple multicolor QSO selection strategy. Section 3 includes a description of subsequent confirming spectroscopy, a catalog of confirmed QSOs out to $`z3`$ in the magnitude range $`17B21`$, and a discussion of the completeness of the survey and the properties of the confirmed QSOs. Section 4 is a brief summary, along with comments on the applications of these potential absorption line probes. ## 2 PHOTOMETRY AND CANDIDATE SELECTION ### 2.1 Multicolor Photometry The goal of this study was to obtain a grid of absorption probes out to a radius from the HDF of one cluster-cluster correlation scale length, or about 8 $`h_{100}^1`$ Mpc, at the median redshift of the deep galaxy surveys, $`z=0.8`$. For the cosmology we adopt with $`H_0`$ = 100 km s<sup>-1</sup> Mpc<sup>-1</sup> and $`q_0`$=0.5, this corresponds to approximately 30 arcminutes. Our search area was thus the square degree centered on the HDF (B1950.0 12:34:35.5 +62:29:28). Our first broad-band images of the area centered on the HDF were obtained in March to May 1996, using the Steward Observatory 2.3-meter telescope on Kitt Peak and the Whipple Observatory 1.5-meter telescope on Mt. Hopkins. Poor weather for almost all the observing time on these runs limited the value of these data. Nevertheless, useful $`U`$, $`B`$, and $`R`$ band photometry was obtained for 0.12 square degree centered on the HDF. This photometry provided the first QSO candidates for spectroscopic followup, which was conducted April 9-10, 1997 at the Mutiple Mirror Telescope on Mt. Hopkins. Photometry of the complete square degree was ultimately achieved with the KPNO 0.9-meter telescope from April 29 to May 5, 1997. We obtained $`UBR`$ photometry of the survey area using the T2KA 2048$`\times `$2048 CCD. With a 23$`\mathrm{}`$ field of view, the entire square degree was covered with a 3$`\times `$3 mosaic of exposures. Integration times were 60 minutes in the $`U`$ band, 60 minutes in the $`B`$ band, and 30 minutes in the $`R`$ band, each divided into three exposures to facilitate cosmic ray rejection. The seeing for these observations ranged from 1.1 to 2.5 arcseconds FWHM. Images were bias-subtracted, flat-fielded and cleaned of cosmic rays using the standard routines in IRAF. Objects in the reduced images were detected using the Faint Object Classification and Analysis System (FOCAS; Valdes 1982). Next we used the apphot package in IRAF to measure aperture photometry of each object, using a fixed circular aperture 12 pixels (8$`\mathrm{}`$.2) in diameter and the sky value sampled from an annulus around each object with inner and outer diameters of 16 and 24 pixels respectively. Images in the three filters were registered, and positions were measured using the COORDS task in IRAF, with Hubble Space Telescope guide stars in the frames as a reference grid. The internal rms residuals of the astrometric solutions ranged from $`0\mathrm{}.2`$ to $`0\mathrm{}.5`$, which means there was no ambiguity in comparing objects between filters at the magnitude limit of this survey. The computed positions of stars in overlapping regions of the CCD fields matched to within $`0\mathrm{}.5`$ in all cases. As with the original imaging observations in the spring of 1996, many of the 0.9-meter observations in 1997 were obtained in non-photometric conditions. Fortunately, enough good weather was available to obtain flux calibrate each of the $`U`$, $`B`$ and $`R`$ mosaics under photometric conditions. Absolute photometric calibration was achieved by observing standard stars in the globular clusters NGC 4147 and M 92, which were reduced in the same way as the survey data described above. We used the photcal package in IRAF to fit zero points, color terms, and extinction coefficients. The photometric solutions yielded rms errors of 0.02, 0.04 and 0.03 magnitudes respectively for the $`U`$, $`B`$, and $`R`$ bands. The $`10\sigma `$ limiting magnitudes for point sources were $`U=21.6`$, $`B=22.1`$ and $`R=21.8`$. In all bands, point sources brighter than $`15.5`$ mag saturated the CCD readout; this was the practical brightness limit for our photometry. ### 2.2 Candidate Selection Multicolor selection of QSO candidates is a well understood and widely used technique (e.g., Koo, Kron & Cudworth 1986; Warren et al. 1991; Hall et al. 1996). Essentially, the power law energy distributions of QSOs cause them to be displaced from the stellar locus defined primarily by hot main sequence stars and white dwarfs. For this work, we used a straightforward application of the multicolor selection technique based on $`UB`$ and $`BR`$ colors. The $`UB`$ color provides optimal sensitivity to ultraviolet-excess QSOs at $`z<2`$, while the $`BR`$ color allows the detection of the rarer objects at high redshift. Figure 1 shows the $`UB`$ vs. $`BR`$ color-color diagram for the stellar objects in the survey area. For clarity, we have included in Figure 1 only objects with $`B<21.0`$ and present half the error bars. The great majority of objects lie along the stellar locus, which runs from the upper left at ($`UB0.3,BR0.7`$) to ($`UB1.5,BR2.5`$). The outliers blueward of the stellar locus in both $`UB`$ and $`BR`$ are most likely to be QSOs. We chose the boundaries for our QSO candidates based on both visual inspection of Figure 1, which clearly shows the edge of the bulk of the stellar locus, and color-color regions used by similar surveys in the literature. Following the work of Hall et al. (1996), we adopted a two-stage color selection process as follows: we consider QSO candidates to be (1) all objects bluer than $`BR=0.8`$, and (2) all objects with both $`UB0.4`$ and $`BR1.1`$. The boundary in color space of the candidate selection region is represented in Figure 1 by the bent solid line. Given our two-tiered color-color selection strategy, it is natural to divide our candidate selection region into three rectangular sub-regions. As shown in Figure 1, the area “Q1” is bounded by $`UB<0.4`$ and $`BR<0.8`$, and contains most of the candidates. Areas “Q2” ($`UB>0.4`$ and $`BR<0.8`$) and “Q3” ($`UB>0.4`$ and $`0.8<BR<1.1`$) together contain about half the number of candidates in “Q1.” ## 3 QUASARS IN THE DIRECTION OF THE HDF ### 3.1 Spectroscopy of QSO Candidates Slit spectroscopy of the QSO candidates was obtained with the Multiple Mirror Telescope between April 1997 and February 1998. Depending on the observing run, either the Blue Channel Spectrograph (3200-8000 Å coverage, 6 Å resolution) or the Red Channel Spectrograph (3700-7400 Å coverage, 10 Å resolution) was used with a 300 l mm<sup>-1</sup> grating. As with the photometry, the data were reduced using the standard methods in the ccdred and longslit packages in IRAF. Although not all the nights were photometric, relative spectrophotometry was obtained for all spectra using the spectrophotometric standards in Massey & Gronwall (1990). Since our primary scientific goal was to find QSOs bright enough to serve as background probes, we chose a practical limit of $`B21`$, corresponding roughly to the faintest QSO that can be measured at high resolution within a few hours using the largest ground-based telescopes. We thus began by observing all candidates brighter than $`B=21.2`$ within a 10$`\mathrm{}`$ radius of the HDF. Then we observed the brightest candidates in the entire square degree, moving progressively fainter as observing time and conditions allowed. Over the course of some 14 partial nights, with widely varying conditions of transparency and seeing, we observed a total of 61 candidates in the two-color region described above. This number comprises all the stellar objects in that region within the survey area from $`16B20.5`$, several fainter targets, and all such targets with $`B21.1`$ within 10$`\mathrm{}`$ of the HDF. We chose restrictive boundaries for the two-color selection in order to maximize the efficiency in the QSO selection and to produce a relatively complete spectroscopic sample. But the work of Hall et al. (1996), Kennefick et al. (1997) and others have shown that in a UV-excess color plot such as one we use, the region near the end of the stellar locus, though more strongly contaminated by blue stars, can potentially yield additional high-redshift ($`z>2.5`$) QSOs. In the hope of confirming even a few such high-redshift QSOs, we obtained spectra of an additional 29 randomly selected objects that were redward of the outlier boundary we established – that is, in the approximate color ranges $`0.3(UB)<0.4`$ and $`0.8(BR)<1.0`$. Unfortunately, none of these borderline candidates were found to be QSOs. ### 3.2 Confirmed QSOs Our search netted a total of 30 QSOs and 1 AGN. We present the QSO positions, magnitudes, colors and redshifts in Table 1, and their flux-calibrated spectra in Figure 2. The closest object is the AGN, a Seyfert galaxy at $`z=0.135`$; all the others have redshift $`z=0.44`$ or greater. The most distant QSO we identified lies at $`z=2.98`$. All of these QSO identifications were based either on two or more emission lines, or at least one strong, broad emission line which we assumed to be MgII at 2800 Å (where any other choice would have implied another strong line in our spectral window). Since the spectra were all flux-calibrated with relative spectrophotometry, we could also confirm through continuum fitting that the spectra were consistent with a power law energy distribution. Figure 3 shows the approximate positions on the sky of the confirmed QSOs with respect to the HDF and its flanking fields. In addition, we present in Table 2 the results of the spectroscopy of the objects in the color-color regions Q1, Q2 and Q3 (see Figure 1) that did not yield positive QSO confirmations, along with their classification as stars, compact galaxies, or unidentified sources. Together, the objects in Tables 1 and 2 comprise all the objects to the left of the bent solid line in Figure 1 that we have observed. For reference, we include the spectra of the four unidentified sources at the end of Figure 2. Finally, in Table 3 we list the fainter QSO candidates which fall into the outlier region for which we did not obtain spectra, down to a magnitude limit of $`B=22`$. The yield of QSOs within this faint list could potentially double the QSO sample presented in this paper. These tables will hopefully be useful for any future spectroscopic followup efforts within the area covered by this survey. Looking more closely at the distribution of the QSO candidates in color-color space, we find that region Q1 contains 28 of the 30 confirmed QSOs and a 67% fraction of QSOs to candidates. Region Q2 contains 1 AGN and 1 QSO out of 8 candidates, for a 25% fraction of active nuclei; both are relatively low-redshift objects, at $`z=`$0.135 and 0.58 respectively. Region Q3 contains only one QSO and a 9% fraction; but that object has the highest redshift in the sample at z=2.98. We have listed in Table 3 the color-color region of each faint candidate, as a possible indication of how likely the candidate is to be a QSO. ### 3.3 Completeness and QSO Surface Density As discussed in Section 3 above, every candidate from $`16.0B20.5`$ in the regions Q1, Q2 and Q3 was observed spectroscopically. This totaled 53 objects, 26 of which are confirmed as QSOs. Among the 8 fainter candidates observed, 4 are confirmed as QSOs. So in both subsamples and as a whole, the selection efficiency is about 50%. This fraction matches the 46% efficiency achieved by Kennefick et al. (1997) in the magnitude range $`16.5<B<21.0`$, using very similar color criteria with their $`UBV`$ data. There are 21 candidates with $`20.6B21.0`$ fitting our color criteria for which we have no spectra. If we assume that our observational efficiency is well represented by the 4 out of 8 faint candidates that are confirmed as QSOs, the entire square degree of this survey should contain a total of $`41\pm 6`$ QSOs in this magnitude range. This prediction is entirely consistent with the observations of Kennefick et al. (1997), and with predictions from the results of Koo & Kron (1988) and Boyle, Shanks & Peterson (1988). The fractions of $`z<2.3`$ and $`z>2.3`$ QSOs that we observe are 27/30 and 3/30, respectively, which is also consistent at the $`1\sigma `$ level with the above authors. This work is not intended as a study of the quasar luminosity function, nor is completeness required to use these QSOs as absorption probes. However, the consistency of our numbers with those in the literature means that this catalog fairly represents the QSO population in the survey area, and that it has not omitted a large fraction of the QSOs in our magnitude range. ## 4 SUMMARY We have surveyed the square degree centered on the Hubble Deep Field for QSOs which can be used as absorption probes, using a straightforward optical multi-color selection technique. We present the results of our spectroscopic identifications, which include 30 confirmed QSOs and 1 AGN in the magnitude range $`17.6<B<21.0`$ and the redshift range $`0.14<z<2.98`$. We also include a list of quasar candidates for which spectroscopy has not yet been obtained. It is our hope that this work will serve as a starting point for the establishment of a detailed grid of absorption probes, in order to study the non-luminous matter within Hubble Deep Field volume and its relationship to the galaxy distribution. This work was supported by a NASA archival grant for the HST (AR-06337) to the University of Arizona. We thank Paul Hewett for insights into the quasar-hunting business. Additionally, CTL gratefully acknowledges support from NSF grant AST96-17177 to Columbia University.
no-problem/9910/cond-mat9910127.html
ar5iv
text
# Interface dynamics in Hele-Shaw flows with centrifugal forces. Preventing cusp singularities with rotation ## Abstract A class of exact solutions of Hele-Shaw flows without surface tension in a rotating cell is reported. We show that the interplay between injection and rotation modifies drastically the scenario of formation of finite-time cusp singularities. For a subclass of solutions, we show that, for any given initial condition, there exists a critical rotation rate above which cusp formation is prevented. We also find an exact sufficient condition to avoid cusps simultaneously for all initial conditions. This condition admits a simple interpretation related to the linear stability problem. PACS number(s): 47.20.Hw, 47.20.Ma, 47.15.Hg, 68.10.-m The dynamics of the interface between viscous fluids confined in a Hele-Shaw cell has received attention for several decades from physicists, mathematicians and engineers. In particular it has played a central role in the context of interfacial pattern formation. As a free boundary problem it has the particular interest that explicit time-dependent solutions can often be found in the case with no surface tension. As a consequence, the issue of the role of surface tension as a singular perturbation in the interface dynamics has received increasing attention for its potential relevance to a broad class of problems. However, to what extent the physics of the real problem (with finite surface tension) is captured, even at a qualitative level, by the known solutions is still poorly understood. As an initial-value problem, the zero surface tension case is known to be ill-posed. An important aspect related to this fact is that some smooth initial conditions develop finite-time singularities in the form of cusps of the interface. After this blow-up of the solution, the time evolution is no longer defined. Generation of finite-time singularities is in itself interesting in connection with other singular perturbation problems in fluid dynamics, such as in the case of the Euler equations. In the present problem, surface tension acts as the natural regulator curing this singular behavior, but unfortunately the problem with surface tension is much more difficult and usually defies the analytical treatment. Motivated by this fact, and inspired by recent experiments on rotating Hele-Shaw cells, we address here the ’perturbation’ of the original free boundary problem by the presence of a centrifugal field. This new ingredient enriches the problem in a nontrivial way but, as we will see, it still admits explicit solutions, and may thus lead to new analytical insights, both in understanding the generation of finite-time singularities, and in elucidating the dynamical mechanisms of Laplacian growth with and without surface tension. In fact, although the new dimensionless parameter introduced by rotation does not fully regularize the problem, we show that it may prevent the occurrence of finite-time singularities, allowing for smooth, nonsingular solutions for the entire time evolution. This fact alone enlarges the class of (nontrivial) exact solutions without surface tension which are potentially relevant to the physically realizable situations. We study an interface between a fluid with viscosity $`\mu `$ and density $`\rho `$ and one with zero viscosity and zero density in a Hele-Shaw cell with gap $`b`$. The cell can be put in rotation with angular velocity $`\mathrm{\Omega }`$ and fluid can be injected or sucked out of the cell through a hole at the center of rotation, with areal rate $`Q`$. The cases $`Q>0`$ and $`Q<0`$ correspond respectively to injecting or sucking fluid. As in the traditional Hele-Shaw problem, the flow in the viscous fluid is potential, $`𝐯=\varphi `$, but now with a velocity potential given by $`\varphi ={\displaystyle \frac{b^2}{12\mu }}\left(p{\displaystyle \frac{1}{2}}\rho \mathrm{\Omega }^2r^2\right).`$ (1) Incompressibility then yields Laplace equation $`^2\varphi =0`$ for the field $`\varphi `$ (but not for the pressure). The two boundary conditions at the interface which complete the definition of the moving boundary problem are the usual ones, namely, the pressure on the viscous side of the interface $`p=\sigma \kappa `$, where $`\sigma `$ and $`\kappa `$ are respectively surface tension and curvature, and the continuity condition for the normal velocity $`v_n=𝐧\varphi `$. The crucial difference from the usual case is in the boundary condition satisfied by the laplacian field on the interface due to the last term in Eq.(1). This problem is well suited to conformal mapping techniques . The basic idea is to find an evolution equation for an analytical function $`z=f(\omega ,t)`$, which maps a reference region in the complex plane $`\omega `$, in our case the unit disk $`|\omega |1`$, into the physical region occupied by the fluid in the physical plane $`z=x+iy`$, with the physical interface being the image of the region boundary, $`|\omega |=1`$. We consider two types of situations, one in which the viscous fluid is inside the region enclosed by the interface, and one in which it is outside. It can be shown that the evolution equation for the mapping $`f(\omega ,t)`$ in the rotating case can be written in a compact form as $`\mathrm{Im}\{_tf^{}_\varphi f\}={\displaystyle \frac{Q}{2\pi }}\vartheta +{\displaystyle \frac{1}{2}}\mathrm{\Omega }^{}_\varphi \mathrm{H}_\varphi [|f|^2]+d_0_\varphi \mathrm{H}_\varphi [\kappa ],`$ (2) where $`\mathrm{\Omega }^{}=b^2\mathrm{\Omega }^2\rho /12\mu `$, $`d_0=b^2\sigma /12\mu `$, and where we have specified the mapping function at the unit circle $`\omega =e^{i\varphi }`$. The curvature is given by $`\kappa =\mathrm{Im}\{_\varphi ^2f/(_\varphi f|_\varphi f|)\}`$ and the Hilbert transform $`H_\varphi `$ is defined by $`\mathrm{H}_\varphi [g]={\displaystyle \frac{1}{2\pi }}\mathrm{P}{\displaystyle _0^{2\pi }}g(\theta )\mathrm{cotg}\left[{\displaystyle \frac{1}{2}}(\varphi \theta )\right]𝑑\theta .`$ (3) In Eq.(2), $`\vartheta =+1`$ and $`\vartheta =1`$ correspond to the cases with the viscous fluid respectively inside or outside the interface. Explicit time-dependent solutions of Eq.(2) are known only for the case $`d_0=0`$ and $`\mathrm{\Omega }^{}=0`$. Here we report explicit solutions for $`\mathrm{\Omega }^{}0`$. A class of solutions in this context is defined by a functional form of the mapping (with a finite number of parameters) which is preserved by the time evolution. The problem is then reduced to a set of nonlinear ODE’s for those parameters. All solutions we have found fit into the general class of the rational form $`f(\omega ,t)=\omega ^\vartheta {\displaystyle \frac{a_0(t)+_{j=1}^Na_j(t)\omega ^j}{1+_{j=1}^Nb_j(t)\omega ^j}},`$ (4) although not any mapping of this form is necessarily a solution. A more detailed study will be presented elsewhere. The general structure Eq.(4) is known to yield explicit solutions also in the nonrotating case (with different ODE’s for the parameters). However, some important classes of solutions of the usual case ($`\mathrm{\Omega }^{}=0`$) are no longer so for $`\mathrm{\Omega }^{}0`$. This is the case for instance of the superposition of a finite number of logarithmic terms (sometimes referred to as simple pole solutions, when considering the derivative of the mapping rather than the mapping itself). These solutions have the interesting feature of being free of finite-time singularities. On the contrary, the ’multiple-pole’ case (for the derivative of the mapping) turns out to include solutions for both $`\mathrm{\Omega }^{}=0`$ and $`\mathrm{\Omega }^{}0`$ . Finally, for the polynomial case ($`b_j=0`$ for all $`j`$’s), the nonrotating case $`\mathrm{\Omega }^{}=0`$ is known to always yield finite-time singularities in the form of cusps. We will see in the rest of this paper that this scenario is modified in a nontrivial way by the presence of rotation. We focus on the role of rotation in preventing cusp formation in the subclass of polynomial mappings of the form $`f(\omega ,t)=a_0(t)\omega ^\vartheta +a_n(t)\omega ^{n+\vartheta }.`$ (5) For $`a_na_0`$ this describes a $`n`$-fold sinusoidal perturbation of amplitude $`a_n`$ superimposed on a circular interface of radius $`a_0`$. It is convenient to introduce the dimensionless parameter $`\epsilon =(n+\vartheta )a_n/a_0`$. The range of physically acceptable values of $`a_n`$ and $`a_0`$ is given by the condition $`0<\epsilon <1`$ for all $`n`$. We also introduce a scaled mode amplitude $`\delta =a_0a_n`$ which turns out to be the relevant one to characterize the interface instability. To see this, let us first compute the standard linear growth rate. Inserting Eq.(5) into Eq.(2) and linearizing in $`a_n`$, we get, $`{\displaystyle \frac{\dot{a}_n}{a_n}}=\vartheta n\mathrm{\Omega }^{}(\vartheta n+1){\displaystyle \frac{Q}{2\pi a_0^2}}{\displaystyle \frac{d_0}{a_0^3}}n(n^21).`$ (6) The term $`Q/2\pi a_0^2`$, independent of both $`n`$ and $`\vartheta `$, has a purely kinematic origin, associated to the global expansion (or contraction) of the system. It can easily be shown that this quantity would be the growth rate of a mode corresponding to a redistribution of area given by the undistorted flow field with radial velocity $`v=Q/2\pi r`$. This in turn would imply $`a_n(t)a_0(t)=const`$. Accordingly, the marginal modes for $`\delta `$ (which in the rotating case may occur for all $`n`$) will be such that the flow field is undistorted by the interface perturbation, although such perturbation may grow or decay in the original variables $`a_n`$. Analogously, growth or decay of $`\delta `$ will correspond unambiguously to the stability properties of the actual velocity field. In this sense it may be justified to qualify the interface instability as described by $`\delta `$ as ’intrinsic’, as opposed to the ’morphological’ one as described by the amplitude $`a_n`$. In this way the intrinsic growth rate takes the simpler form $`{\displaystyle \frac{\dot{\delta }}{\delta }}=\vartheta n\left(\mathrm{\Omega }^{}Q^{}\right),`$ (7) where we have defined $`Q^{}=Q/2\pi a_0^2`$ and have dropped the surface tension term, since hereinafter we will restrict to the zero surface tension case. We introduce the relevant dimensionless control parameter of our problem, expressing the ratio of centrifugal to viscous forces, as $`P={\displaystyle \frac{\mathrm{\Omega }^{}2\pi R^2}{Q}}={\displaystyle \frac{\pi \rho b^2R^2\mathrm{\Omega }^2}{6\mu Q}},`$ (8) where $`R`$ is a characteristic radius of the interface. Eq.(7) clearly exhibits the competing effects of rotation and injection, although their roles are not quite symmetric. In fact, notice that $`Q^{}`$, which may have both signs, contains a dependence on $`a_0`$. In practice this means that $`Q^{}`$ depends effectively on time. An immediate consequence of this is that the growth of modes is not really exponential and may even be nonmonotonic. The asymmetry between injection and rotation shows up also in the fact that the sign of $`Q`$ determines which of the two effects dominates asymptotically in time. In fact, for positive injection rate the typical radius of the inner fluid is growing while typical interface velocities are decreasing, so centrifugal forces will dominate at long times. On the contrary, for negative injection rate, typical velocities increase while typical radii decrease, so injection will asymptotically dominate over rotation. In view of Eq.(7), the most interesting configurations will be those in which $`Q>0`$, so that injection and rotation have counteracting effects. In the case $`\vartheta =+1`$ (viscous fluid inside), which was experimentally studied in Ref., rotation is always destabilizing. A positive injection rate in this case tends to stabilize the circular interface. However, for fixed $`Q`$, $`Q^{}`$ will decrease with time, so eventually the interface will reach a radius after which all modes are linearly unstable. It is thus expected that, in this case, the formation of cusps can only be delayed but not avoided . The most interesting case from the point of view of preventing cusp formation is $`\vartheta =1`$ and $`Q>0`$, the usual configuration in viscous fingering experiments. In this case, a small rotation rate will slightly affect the linear instability, but will eventually stabilize the growth at long times, so it is conceivable to have a nontrivial evolution starting from an unstable interface but not developing finite-time singularities. We now study the fully nonlinear dynamics of polynomial mappings. Inserting Eq.(5) into Eq.(2) with $`d_0=0`$ we obtain two ordinary differential equations describing the evolution of $`a_0(t)`$ and $`a_n(t)`$. These can be integrated analytically and yield $`a_0^2(t)+\vartheta (n+\vartheta )a_n^2(t)={\displaystyle \frac{Q}{\pi }}t+k_0,`$ (9) $`a_0^{n+\vartheta }(t)a_n^\vartheta (t)=k_ne^{n\mathrm{\Omega }^{}t},`$ (10) where $`k_0`$ and $`k_n`$ are constants to be determined by initial conditions, and where $`n2`$ for $`\vartheta =+1`$ and $`n3`$ for $`\vartheta =1`$. Physically acceptable solutions require that the points in the $`\omega `$plane where $`_\omega f(\omega ,t)=0`$ (noninvertible) should lie outside the unit disk. The occurrence of a cusp is associated to such a point crossing the unit circle $`|\omega |=1`$ at a finite time $`t_c`$, that is, $`\left|{\displaystyle \frac{\vartheta a_0(t_c)}{(n+\vartheta )a_n(t_c)}}\right|=1.`$ (11) If we take the initial value $`a_0(0)`$ as the characteristic length $`R`$, which coincides with the radius of the perturbed circle if we are in the linear regime, and define the dimensionless time $`\tau =\mathrm{\Omega }^{}t`$, condition (11) reads $`\alpha _n\left({\displaystyle \frac{2R^2\tau _c}{P}}+k_0\right)=e^{\beta _n\tau _c},`$ (12) where $`\alpha _n={\displaystyle \frac{(n+\vartheta )^{\frac{n}{n+2\vartheta }}}{n+2\vartheta }}k_n^{\frac{2}{n+2\vartheta }},\beta _n={\displaystyle \frac{2n}{n+2\vartheta }}.`$ (13) Our aim is now at finding conditions such that an initially smooth interface remains smooth for an infinite time. Thus we have to impose that Eq.(11) should not have any solution for $`\tau _c>0`$. The transition between the regions with and without cusps will be defined by the conditions that the two members of Eq.(11) and their derivatives with respect to time are equal, such that the two curves have a common tangent. These two conditions allow us to eliminate $`\tau _c`$, and yield $`x\mathrm{log}xx=\alpha _nk_0`$ (14) where $`x=\frac{2\alpha _n}{P\beta _n}`$, and with $`R=a_0(0)`$ in Eq.(8). We now search for solutions of Eq.(14). For $`\vartheta =+1`$ it can be proven that this equation has no solutions, and therefore all initial conditions must eventually develop a cusp at finite time, as expected from the linear analysis. On the other hand, for $`\vartheta =1`$ the quantity on the rhs of Eq.(14) takes the simple form $`\alpha _nk_0={\displaystyle \frac{n1}{n2}}\left(1{\displaystyle \frac{\epsilon ^2}{n1}}\right)\epsilon ^{\frac{2}{n2}},`$ (15) and a nontrivial critical line $`P_c(\epsilon ;n)`$ can be found for each $`n3`$. This implies that, in the configuration with the viscous fluid outside, for any initial condition there is always a certain rotation rate above which there is no cusp formation. The numerical determination of these curves is shown in Fig.1. The leading behavior for initial conditions in the linear regime, $`\epsilon 1`$, can be found by expanding the lhs of Eq.(14) around $`x=e`$ and is given by $`P_c\frac{n1}{ne}\epsilon ^{\frac{2}{n2}}`$. Notice that there are qualitative differences for small values of $`n`$. For $`n=3`$ the curve starts horizontal at the linear level, implying that a very small rotation rate is sufficient to prevent cusp formation. For $`n=4`$ the threshold curve starts with a finite slope and for $`n>4`$ it has an infinite slope at $`\epsilon =0`$. A more detailed description and analysis of this diagram will be presented elsewhere . An example of rotation preventing cusp formation is shown in Fig.2. In Fig.1 we also see that for any given $`\epsilon `$, the critical $`P_c`$ increases monotonically with $`n`$. If we take the limit $`n\mathrm{}`$ at fixed $`\epsilon `$ we get $`\alpha _nk_01`$. From Eq.(14) this implies $`x=1`$ and consequently we obtain an absolute upper bound $`P_c^{max}=1`$ for all values of $`n`$ and $`\epsilon `$. This implies that, for all initial conditions, there is a critical rotation rate $`\mathrm{\Omega }_c=\left({\displaystyle \frac{6\mu Q}{\pi \rho b^2R^2}}\right)^{\frac{1}{2}}`$ (16) above which cusps are always eliminated. Although Eq.(16) has been derived for the class Eq.(5), it is expected that the existence of a certain $`\mathrm{\Omega }_c`$ and the scaling with physical parameters given by Eq.(16) could be more general. Notice that $`P=1`$ corresponds to the intrinsic marginal stability of the circular shape, $`Q^{}=\mathrm{\Omega }^{}`$. Therefore, the sufficient condition, valid for all initial conditions of the form Eq.(5), for not developing cusp singularities is that a circular interface with radius given by $`a_0(0)`$ be intrinsically stable, in the sense of Eq.(7). Whether deeper consequences can be drawn in a broader context from this inner connection between the linear problem and the possibility of cusp formation is an interesting open question. We acknowledge financial support by the Dirección General de Enseñanza Superior (Spain), Project PB96-1001-C02-02 and the European Commission Project ERB FMRX-CT96-0085.
no-problem/9910/astro-ph9910299.html
ar5iv
text
# X-ray Observations of Gravitationally Lensed Quasars; Evidence for a Hidden Quasar Population ## 1. INTRODUCTION Several attempts have been made to characterize the properties of distant and faint quasars and compare them to those of relatively nearby and bright ones. (Bechtold et al. 1994; Elvis et al. 1994; Vikhlinin et al. 1995; Cappi et al. 1997; Schartel et al. 1996; Laor et al. 1997; Yuan et al. 1998; Brinkmann et al. 1997; Reeves et al. 1997; Fiore et al. 1998). The evolution of quasar properties is in part studied by identifying changes in spectral properties with redshift. Information obtained from estimating the X-ray properties of quasars as a function of X-ray luminosity and redshift may be useful in constraining physical accretion disk models that explain the observed AGN continuum emission. The study of X-ray properties of quasars with relatively low luminosity may also provide clues to the nature of the remaining unresolved portion of the hard component of the cosmic X-ray background (XRB) (see, for example, Inoue et al. 1996; Ueda 1996; Ueda et al. 1998). The Gravitational Lensing (GL) effect has been widely employed as an analysis tool in a variety of astrophysical situations. The study of GL systems (GLS) in the radio and optical community has proven to be extremely rewarding by providing constraints on cosmological parameters $`\mathrm{\Lambda }`$ and H<sub>0</sub> (Kochanek 1996, Falco et al. 1997), by probing the evolution of mass-to-light ratios of high redshift galaxies (Keeton et al. 1998), by probing the evolution of the interstellar medium of distant galaxies (Nadeau et al. 1991), and by determining the total mass, spatial extent and ionization conditions of intervening absorption systems. The study of GL quasars in the X-ray band has been limited until now, the main limiting factor being the collecting area and spectral resolution of current X-ray telescopes combined with the small angular separations of lensed quasar images. One of the main objectives of this paper is to make use of the magnification effect of GL systems to investigate the X-ray properties of faint radio-loud and radio-quiet quasars. In many cases the available X-ray spectra of distant quasars have relatively low signal-to-noise ratio (S/N). This has lead to the development of various techniques to aid the study of faint quasars with 0.2 - 2 keV X-ray fluxes below 1 $`\times `$ 10<sup>-14</sup> erg s<sup>-1</sup> cm<sup>-2</sup>. Most of these observational and analysis techniques employed to date to study the evolution and spectral emission mechanism of faint quasars are in general a slight variation of two distinct approaches. On the one end we find techniques that are based on summing the individual spectra of many faint X-ray sources taken from a large and complete sample (see for example Schartel et al. 1996; Vikhlinin et al. 1995). The goal of stacking is to obtain a single, high signal-to-noise ratio spectrum that contains enough counts to allow spectral fitting with quasar emission models. In some cases, where the initial sample is large enough, the spectra can be summed into bins of X-ray flux, redshift and radio luminosity class. Schartel et al. 1996 applied the stacking technique to a complete quasar sample and found that the mean spectral index for the stacked radio-quiet quasar spectrum was significantly ($`2\sigma `$) steeper than that of the stacked radio-loud quasar spectrum. Several of the assumptions made in this analysis which were related to the general properties of the quasars may, however, strongly influence the further interpretation of the results. It was assumed, for example, that quasar spectra follow single power-laws over rest frame energies ranging between (1+z)$`E_{min}`$ and (1+z)$`E_{max}`$, where $`E_{min}`$ and $`E_{max}`$ are the minimum and maximum energy bounds, respectively, of the bandpass of the X-ray observatory and z is the redshift of the quasar. Analyzed in the observer’s frame, quasar spectra with concave spectral slopes of increasing redshift would appear to be flatter as a consequence of the cosmological redshift. Any implications of evolutionary change within the quasar spectrum derived from the analysis of stacked spectra in observed frames would need to include the effect of cosmological redshift. Detailed spectra of quasars however indicate in general the presence of several components each associated to a different physical process. For example, several known processes contributing to the observed X-ray spectra are Compton reflection of photons in the disk coronae by the disk which becomes significant at rest energies above $``$ 10keV, inverse Compton scattering of photons in the disk coronae by UV photons originating from the disk resulting in a boost of photon energies from the UV range into the soft X-ray range (this is the mechanism that produces the observed power-law spectrum in the 2-10keV range), accretion disk emission, absorption by highly ionized gas (warm absorbers), beamed X-ray emission from jets which may be a large contributor for distant radio-loud quasars, absorption by accretion disk winds, and intervening absorption by damped Lyman alpha systems. The stacked spectrum therefore contains contributions from quasars of different redshifts and possibly spectral shapes that make the interpretation of the results difficult. Vikhlinin et al. 1995, using the ROSAT EMSS sample of 2678 sources, produced stacked spectra within several flux bins. They find a significant continuous flattening of the fitted spectral slopes from higher towards lower X-ray fluxes. One interesting result of their study is that the spectral slope at the very faint end is approximately equal to the slope of the hard (2 - 10keV) X-ray background. The unknown nature of many of the point sources included in the Vikhlinin sample, the inclusion of sources with different redshifts and the calculation of observed rest spectral indices complicates the interpretation of the results. A second technique used to study the general properties of quasars is based on obtaining deep X-ray observations of a few quasars. One advantage of such an approach is that the properties of individual quasars are not smeared out as with stacking methods. The faint fluxes however require extremely long observing times to achieve useable S/N. When total counts are low the quasar X-ray spectra are commonly characterized by a hardness ratio defined as R = (H - S)/(H + S), where H and S are the number of counts within some defined hard and soft energy band in the observer’s frame respectively. In this paper we outline an alternative approach to investigating the emission mechanism of radio-loud and quiet quasars at high redshift. The gravitational lensing magnification of distant quasars allows us to investigate the X-ray properties of quasars with luminosities relatively lower than those of unlensed quasars of similar redshifts. The amplification factors produced by lensing depends on the geometry of the lensing system and for our sample range between 2 and 30. The moderate-S/N spectra of our sample allow us to employ spectral models with multiple power-law slopes and perform fits in rest-frame energy bands. Our analysis makes use of the GL amplification effect to extend the study of quasar properties to unlensed X-ray flux levels as low as a few $`\times `$ 10<sup>-16</sup> erg s<sup>-1</sup> cm<sup>-2</sup>. The limiting sensitivity of the ROSAT All-Sky Survey, for example, on which many recent studies are based is a few $`\times `$ 10<sup>-13</sup> erg s<sup>-1</sup> cm<sup>-2</sup>. For a GL system with a lens that can be modeled well with a singular isothermal sphere (SIS) model the amplification is straightforward to derive analytically. In most observed cases, however, the deflector is a galaxy or cluster of galaxies with a gravitational potential that does not follow the SIS model and more sophisticated potential models need to be invoked to successfully model these GL systems. To estimate the intrinsic X-ray luminosity of GL quasars in our sample we have incorporated magnification factors determined from modeling of the GL systems with a variety of lens potentials. We performed fits of spectral models to the X-ray data in three rest energy bins, soft from 0.5 - 1 keV, mid from 1 - 4 keV, and high from 4 - 20 keV. Working in the quasar rest frame as opposed to the observers rest frame allows us to distinguish between true spectral evolution of quasars with redshift and apparent change in quasar spectra due to the cosmological redshift of quasar spectra through a fixed energy window in the observer’s rest frame. Our search of the ROSAT and ASCA archives yielded 16 GL systems detected in X-rays out of a total of approximately 40 GL candidates. Six of the GL quasars have observed X-ray spectra of medium-S/N. Our search for X-ray counterparts to known GL quasars resulted in the identification of the relatively X-ray bright radio-quiet quasar SBS0909+532 with an estimated 0.2-2 keV flux of about 7 $`\times `$ 10<sup>-13</sup> erg s<sup>-1</sup> cm<sup>-2</sup>. Another interesting result of our search was the identification of a relatively large fraction of radio-quiet GL quasars with BAL quasars. In particular we find that at least 35% of the known radio-quiet GL quasars are BAL quasars. This value is significantly larger than the $``$ 10% value presently quoted from optical surveys. In section 2 we present details of X-ray observations of GL quasars and describe the analysis techniques used to extract and fit the X-ray spectra. Estimates of the flux magnification factors and unlensed luminosities for the GL systems studied in this paper are presented in section 3. A description of the properties of each GL quasar is presented in section 4. Included in this section are results from spectral modeling of several X-ray observations of the variable GL BAL quasar PG1115+080. Finally, in section 5 we summarize the spectral properties of faint quasars as implied by spectral fits to a sample of GL quasar spectra and provide a plausible explanation for the apparently large fraction of GL BAL quasars that we observe. ## 2. X-RAY OBSERVATIONS AND DATA ANALYSIS The X-ray observations presented here were performed with the ROSAT and ASCA observatories. Results for the spectral analyses in the X-ray band for the GL quasars Q0957+561, HE1104-1805, PKS1830-211 and Q1413+117 have already been published (Chartas et al. 1995, 1998; Reimers et al. 1995; Mathur et al. 1997; Green & Mathur, 1996) while results from X-ray spectral analyses for the quasars SBS0909+532, B1422+231, PG1115+080, 1208+1011 and QJ0240-343 are presented here for the first time. We have included in Table 1 several additional GL systems observed in the X-ray band. These observations however yielded either very low-S/N detections or were made with the ROSAT HRI which provides very limited spectral information. X-ray spectra of the GL quasars Q0957+561, 1422+231, HE1104-1805, SBS0909+532 and PG1115+080 with best fit models are presented in Figure 1 through Figure 5 respectively. The wide separation quasar pairs MG 0023+171, Q1120+0195, LBQS 1429-008 and QJ0240-343 are considered problematic GL candidates that may be binary quasars and not gravitational lenses (Kochanek et al. 1999). However, recent STIS spectroscopy of Q1120+0195(UM425) has revealed broad absorption line features in both lensed images thus confirming the lens nature of this wide separation system (Smette et al. 1998). The origin of the detected X-ray emission for the GL systems RXJ0921+4528 and RXJ0911.4+0551 cannot be determined from the presently available poor S/N ASCA GIS and ROSAT HRI observations respectively. The most likely origins are either the lensed quasar and/or a possible lensing cluster. In Table 1 we list the ASCA and ROSAT observations of GL quasars with detected X-rays. The spatial resolution for on axis pointing of the ROSAT HRI, ROSAT PSPC and ASCA GIS is about 5<sup>′′</sup>, 25<sup>′′</sup> and 3 respectively. Thus only for the ROSAT HRI observation of Q0957+561 was it possible to resolve the lensed X-ray images (Chartas et al. 1995). For the data reduction of the ASCA and ROSAT observations we used the FTOOLS package XSELECT. The ASCA SIS data used in this study were all taken in BRIGHT mode and high, medium and low bit rate data were included in the analysis. We created response matrices and ancillary files for each chip using the FTOOLS tasks sisrmg and ascaarf respectively. For the ROSAT PSPC data reduction we used the response matrix pspcb\_gain2\_256.rmf supplied by the ROSAT GOF and created ancillary files using the FTOOLS task pcarf. Net events shown in Table 1 are corrected for background, vignetting, exposure and point spread function effects using the source analysis tool SOSTA (part of the software package XIMAGE). For the spectral analyses we are mostly limited by the energy resolution and counting statistics to fitting simple absorbed power-law models to the data. In addition to spectral fits in the quasar rest frame we also performed spectral fits in the observer’s reference frame to facilitate the comparison of our results to those of previous studies. In Table 2 we show the results from fits of absorbed power-law models within three quasar rest frame energy intervals. In most cases no data are available in the soft interval since the corresponding observed energy interval is redshifted below the low energy quantum efficiency cutoff of the ROSAT XRT/PSPC. To estimate any significant difference of the spectral properties of our GL sample from those of unlensed quasar samples we computed the merit function, $`\frac{\chi ^2}{N}`$, defined by the expression, $$\frac{\chi ^2}{N}=\underset{i=0}{\overset{N}{}}\frac{[\alpha _{\nu ,GL}(i)\alpha _{\nu ,UL}(i)]^2}{\sigma _{UL}^{}{}_{}{}^{2}(i)}$$ (1) where $`\alpha _{\nu ,GL}`$(i) and $`\alpha _{\nu ,UL}`$(i) are the spectral indices of the GL and unlensed quasar sample respectively, $`\sigma _{UL}(i)`$ are the errors of the spectral indices of the unlensed samples and N is the number of spectral indices compared. We computed the merit function between our data set and that of Fiore et al. 1998. For the comparison we computed the spectral indices $`\alpha _S`$(0.1 - 0.8 keV) and $`\alpha _H`$(0.4 - 2.4 keV) in the quasar observed frames. Incorporating the 1$`\sigma `$ uncertainties in the fitted values for the GL spectral indices we obtain a distribution of values for the merit function with a most likely value of $`\frac{\chi ^2}{N}`$ $``$ 0.5 with N = 7. We therefore conclude that there is no significant difference at the $`\mathrm{\Delta }\alpha _\nu `$ = 0.3 level between our lensed sample and the Fiore et al. 1998 sample. Spectral modeling of the radio-loud and (non-BAL) radio-quiet quasars of our GL sample that have unlensed X-ray fluxes ranging between 3 $`\times `$ 10<sup>-16</sup> and 1 $`\times `$ 10<sup>-12</sup> erg s<sup>-1</sup> cm<sup>-2</sup> have indices that are consistent with those of brighter quasars (see Figure 7). In spite of the fact that most of the quasars in our sample have intervening absorption systems (see comments on individual systems) we find that the estimated photon indices for the very faint non-BAL quasars of our sample do not approach the level of 1.4 of the hard X-ray background. In Figure 8 we also show that the photon indices of our GL quasar sample do not show any signs of hardenning over a range of three orders of magnitude in unlensed 2-10keV luminosity. The three apparently harder spectra of Figure 7 and Figure 8 correspond to two BAL quasars and one absorbed blazar of our sample. The X-ray spectra of BAL quasars that are modeled as power-laws with Galactic absorption and no intrinsic absorption will erroneously imply relatively low photon indices. For example, spectral fits of a simple power-law plus Galactic absorption model to the X-ray spectrum of PG1115+080 (Fit 1, Table 8) yields a relatively low photon index of 1.4. The presently available spectra of the two BAL quasars of our sample have poor S/N and can not provide significant constrains on the intrinsic absorber column densities. ## 3. QUASAR UNLENSED LUMINOSITY The apparent surface brightness of gravitationally lensed images is a conserved quantity, however the observed X-ray flux is amplified due to the geometric distortion of the GL images. The lensed quasar images of our sample are not spatially resolved so we only observe the total magnification of the X-ray flux and not the spatial distortion of the image. Gravitational lensing is in general an achromatic effect, however possible differential absorption in the multiple images, microlensing from stars in the lens galaxy and source spectral variability combined with the expected time delay between photon arrival for each image may produce distinct features in the multiply lensed spectrum of the quasar. We estimated the unlensed X-ray luminosity of the quasars in our sample by scaling the lensed luminosity determined from the spectral fits to the GL magnification factors. The magnification parameters were derived from fits of singular isothermal ellipsoid SIE lens models (Keeton, Kochanek, & Falco, 1997) to optical and radio observables (e.g. image and lens positions and flux ratios) and incorporating the best fit parameters to derive the convergence, $`\kappa `$, of the lens. For a SIE lens the convergence $`\kappa (𝐱)`$ is given by, $$\kappa (𝐱)=\frac{b}{x\sqrt{1+e\mathrm{cos}(2(\theta \theta _0))}}$$ (2) where $`b`$ is the best-fit critical radius, $`x`$ is the distance from the lens galaxy center, $`\theta `$ is the position angle of point $`𝐱`$ with respect to the lens galaxy, $`e`$ is the ellipticity parameter of the lens and $`\theta _0`$ is the major axis position angle. The magnification $`\mu (x)`$ of each lensed image for an SIE lens is given by (Kormann, Schneider, and Bartelmann, 1994), $$\mu (x)=\frac{1}{(12\kappa )}$$ (3) In Table 3 we provide GL model parameters and magnification factors for several GL systems. ## 4. COMMENTS ON INDIVIDUAL SOURCES ### 4.1. The Newly Identified GL X-Ray Source SBS 0909+532 The radio-quiet quasar SBS0909+532 was recently identified as a candidate gravitational lens system with a source redshift of 1.377, and an image separation of 1.107<sup>′′</sup>. The lens has not yet been clearly identified, however, GL statistics place the most likely redshift for the lens galaxy at z<sub>l</sub> $``$ 0.5 with 90$`\%`$ confidence bounds of 0.18 $`<`$ z<sub>l</sub> $`<`$ 0.83 (Kochanek et al. 1997). Optical spectroscopy (Kochanek et al. 1997; Oscoz et al. 1997) has identified heavy element absorption lines of CIII, FeII and Mg II at z = 0.83. The optical data at this point cannot clearly discern whether the heavy-element absorber is associated with the lensing galaxy. We searched the HEASARC archive and found a bright X-ray source within 7<sup>′′</sup> of the optical location of SBS0909+532, well within the error bars of the ROSAT pointing accuracy of $``$ 30<sup>′′</sup>. The position of the X-ray counterpart as determined using the detect routine, which is part of the XIMAGE software package, is 09h 13m 1.7s, 52 59 39.5<sup>′′</sup> (J2000), whereas the optical source coordinates of SBS0909+532 are 09h 13m 2.4s, 52 59 36.4<sup>′′</sup> (J2000). This X-ray counterpart was observed serendipitously with the ROSAT PSPC on April 17 1991, April 28 1992 and October 27 1992 with detected count rates of 0.057 $`\pm `$ 0.006, 0.064 $`\pm `$ 0.005 and 0.09 $`\pm `$ 0.01 cnts s<sup>-1</sup> respectively. We performed simultaneous spectral fits in the observer frame to the three ROSAT PSPC observations. The results are summarized in Table 4. We considered two types of spectral models. In fit 1 of Table 4 we incorporated a redshifted power-law plus Galactic absorption and in fit 2 we included additional absorption at a redshift of 0.83 (possible lens redshift). Fits 1 and 2 yield acceptable reduced $`\chi ^2`$(dof) of 1.00(32) and 1.02(31) respectively. We can rule out absorption columns at z = 0.83 of more than 7.5 $`\times `$ 10<sup>20</sup> cm<sup>-2</sup> at the 68.3% confidence level. We also performed spectral fits in the quasar rest frame bands 0.5 - 1keV (soft band) and 1 - 4 keV (mid band). Simple spectral fits with an absorbed power-law assuming Galactic absorption of 1.72 $`\times `$ 10<sup>20</sup> cm<sup>-2</sup> result in spectral indices of $`1.62_{0.64}^{+1.0}`$ and $`2.25_{0.78}^{+0.74}`$ for the soft and mid bands respectively. All errors quoted in this paper are at the 68.3$`\%`$ confidence level unless mentioned otherwise. No X-ray data are presently available for the high energy band 4 - 20 keV. The ROSAT PSPC can only detect photons with energies up to about 3 keV in the rest frame of SBS0909+532. In Figure 4 we show the ROSAT PSPC spectrum of SBS0909+532 together with the best fit absorbed power-law model. ### 4.2. B1422+231 B1422+231 is a well studied quadrupole GLS with the lensed source being a radio-loud quasar at a redshift of 3.62 (Patnaik et al. 1992) and the lens consisting of a group of galaxies at a redshift of about 0.34 (Tonry, 1998). CIV doublets were found at redshifts of 3.091, 3.382, 3.536 and 3.538 (Bechtold et al. 1995). Strong Mg II and Mg I absorption lines at z = 0.647 have been identified in the quasar spectrum (Angonin - Willaime et al. 1993). X-ray observations of B1422+231 were made on Jan 14, 1995 for about 21.5ks and July 17, 1995 for about 13 ks with the ASCA satellite. The spectra were extracted from circular regions of 2.5 in radius centered on B1422+231 and the backgrounds were estimated from similar sized circular regions located on a source-free region on the second CCD. We first modeled the spectra of the two observations separately. Spectral fits in the observer’s frame, incorporating power-law models and absorption due to Galactic cold material, yield photon indices of 1.55$`{}_{}{}^{+0.08}{}_{0.08}{}^{}`$ and 1.46$`{}_{}{}^{+0.1}{}_{0.1}{}^{}`$ for the Jan 1995 and July 1995 observations respectively. We searched for possible departures from single power-law models by considering broken power-law models with a break energy fixed at 4 keV (rest frame). The Jan 14, 1995 data are suggestive of spectral flattening at higher energies while the poor S/N of the July 1995 spectrum cannot significantly constrain the spectral slopes. The 2-10keV X-ray fluxes for the Jan 14, 1995 and July 17, 1995 observations of B1422+231 are estimated to be 1.70$`{}_{}{}^{+0.46}{}_{0.37}{}^{}`$ and 1.93$`{}_{}{}^{+0.40}{}_{0.35}{}^{}`$ $`\times `$ 10<sup>-12</sup> erg s<sup>-1</sup> cm<sup>-2</sup> respectively (fits 1 and 4 in Table 5). Spectral fits in the quasar mid and high rest-frame bands for the Jan 14, 1995 observation with absorbed power-law models and assuming Galactic absorption of 2.52 $`\times `$ 10<sup>20</sup> cm<sup>-2</sup> yielded spectral indices of $`2.02_{0.53}^{+0.46}`$ and $`1.66_{0.12}^{+0.13}`$ respectively. ### 4.3. HE1104-1805 HE1104-1805 is a GL radio-quiet high redshift (z=2.316) quasar with an intervening damped Ly<sub>α</sub> system and a metal absorption system at z = 1.66 and a Mg II absorption system at z =1.32. Recent deep near IR imaging of HE1104-1805 (Courbin, Lidman, & Magain, 1998) detect the lensing galaxy at a redshift of 1.66 thus confirming the lens nature of this system. HE1104-1805 was observed with the ROSAT satellite on June 15 1993 for 13100 sec and with the ASCA satellite on May 31 1996 for 35989 sec with SIS0 and 35597 sec with SIS1. Reimers et al. (1995) have fit the ROSAT spectrum of HE1104-1805 in the 0.2 - 2 keV range with an absorbed power law model and find a photon index of 2.24 $`\pm `$ 0.16, consistent with our fitted value of 2.05 $`\pm `$ 0.2. The main difference between the Reimers et al. and Chartas 1999 models, used for the fits to the ROSAT spectrum of HE1104-1805, is that the former model allows the column density to be a free parameter in the spectral fit while in the latter model the column density is frozen to the Galactic value of 0.045 $`\times `$ 10<sup>22</sup> cm<sup>-2</sup>. For the data reduction of the ASCA SIS0 and SIS1 observations we extracted grade 0234 events within circular regions centered on HE1104-1805 and with radii of 3.2. The background was estimated by extracting events within circular regions in source free areas. High, medium and low bit rate data were combined and only Bright mode data were used in the analysis. We performed several spectral fits to the extracted ASCA spectrum with results summarized in Table 6. A simple spectral fit in the observers frame with an absorbed power-law model yields an acceptable fit with a photon index of 1.91$`{}_{}{}^{+0.06}{}_{0.06}{}^{}`$ and a 2-10 keV flux of about 9.4$`{}_{}{}^{+1.5}{}_{1.4}{}^{}`$ $`\times `$ 10<sup>-13</sup> erg s<sup>-1</sup> cm<sup>-2</sup>. In Figure 3 we show the fit of this model to the ASCA data. Spectral fits to the ASCA and ROSAT X-ray spectra with absorbed power-law models in the mid and high energy bands result in spectral indices of $`1.93_{0.28}^{+0.27}`$ and $`2.01_{0.1}^{+0.1}`$ respectively. For the spectral model we assumed Galactic absorption with N<sub>H</sub> = 4.47 $`\times `$ 10<sup>20</sup> cm<sup>-2</sup>. ### 4.4. The Variable GL BAL Quasar PG1115+080 Recent observations of PG1115+080 in the FAR - UV with IUE (Michalitsianos et al. 1996) suggest the presence of a variable BAL region. In particular OVI$`\lambda `$ 1033 emission and BAL absorption with peak outflow velocities of $``$ 6,000 km s<sup>-1</sup> were observed to vary over timescales of weeks down to about 1 day. Variations in the BAL absorption features may be due to changes in the ionization state of the BAL material that could lead to changes in the column density. A model proposed by Barlow et al. (1992) to explain the 1 day fluctuations considers the propagation of an ionization front in the BAL flow. We expect variations in the BAL column densities to also manifest themselves as large variations in the observed X-ray flux. We searched the HEASARC archives and found that PG1115+080 was observed with the Einstein IPC on Dec 5 1979, the ROSAT PSPC on Nov 21, 1991 and with the ROSAT HRI on May 27 1994. Using the XIMAGE tool detect on the ROSAT HRI and PSPC images of the PG1115+080 observations and searching the NED database we found several X-ray sources within a 15 arcmin radius of PG1115+080. Most of these sources were detected in the ROSAT International X-ray Optical Survey (RIXOS). A list of their coordinates and NED identifications is shown in Table 7. The source extraction regions used in the analysis of the PG1115+080 event files were circles centered on PG1115+080 with radii of 1.5 arcmin and 4 arcmin for the PSPC and Einstein observations of PG1115+080, respectively. We excluded regions containing the nearby RIXOS sources. The background regions were circles in the near vicinity of PG1115+080. We performed various spectral fits to the PG1115+080 data with results summarized in Table 8. The observed Einstein IPC and ROSAT PSPC spectra of PG1115+080 accompanied by best fit models are shown in Figure 5. The X-ray observations of PG1115+080 with the ROSAT HRI show that all the detected X-ray emission is localized within a few arcsecs. We therefore do not expect any contamination from possible extended lenses. The HRI image of the field near PG1115+080 is shown in Figure 9. We modeled the observed spectra as power-laws with Galactic and intrinsic absorption. Our spectral fits to the Nov 21, 1991 observation imply absorption in excess to Galactic with a modeled intrinsic absorption of 1.43$`{}_{}{}^{+1.3}{}_{1.3}{}^{}`$ $`\times `$ 10<sup>22</sup> cm<sup>-2</sup> assuming a power-law photon index of 2.3 appropriate for high redshift radio-quiet quasars (Fiore et al. 1998; Yuan et al. 1998). For photon indices ranging between 2 and 2.6 the best fit values for the intrinsic absorption ranges between 0.2 and 3.5 $`\times `$ 10<sup>22</sup> cm<sup>-2</sup>. To evaluate the statistical significance of the existence of intrinsic absorption we calculated the F statistic formed by taking the ratio of the difference of $`\chi ^2`$ between a fit with only Galactic absorption (fit 3 in Table 8) and a new fit that in addition to Galactic assumes intrinsic absorption (fit 4 in Table 8) to the reduced $`\chi ^2`$ of the new fit. We find an F value of 21 between fits 3 and 4 (see Table 8) implying that the addition of an intrinsic absorption component improves the fit to the Nov 21 1991 observation of PG1115+080 with a probability of exceeding F by chance of about 0.005. Our spectral fits to the Dec 5 1979 observations do not indicate absorption in excess to the Galactic one. In contrast to the Nov 21 1991 observation of PG1115+080, the inclusion of intrinsic absorption into our model for spectral fits to the Dec 5 1979 observation produces a significantly larger reduced $`\chi ^2`$. The best fit value for the intrinsic absorber column for the Nov 21 1991 observation (fit 2, Table 8) is poorly constrained to be 1.2$`{}_{1.1}{}^{}{}_{}{}^{+1.1}`$ $`\times `$ 10<sup>23</sup> cm<sup>-2</sup>. Notice however from Table 8 that this is a very model dependent result. Our spectral model fits to the presently available X-ray observations of PG1115+080 indicate a decrease of about a factor of 13 of the 0.2-2keV flux between Dec 5 1979 and Nov 21 1991 and an increase by a factor of about 5 between the Nov 21 1991 and May 27 1994 observations. Figure 6 shows the estimated 0.2-2keV flux levels of PG1115+080 for the three X-ray observations. The poor S/N of the available spectra make it difficult to discern the cause of the X-ray flux variability. Possible origins may include a change in the column density of the BAL absorber, intrinsic variability of the quasar or a combination of both these effects. ### 4.5. Q1208+1011, Q1413+117, QJ0240-343 Q1208+1011 was observed with the ROSAT PSPC on Dec 16 1991 and June 3 1992, for 2,786 sec and 2,999 sec respectively. These short observations provide a weak constraint of 2.66$`{}_{0.91}{}^{}{}_{}{}^{+2.1}`$ on the mid band 1-4 keV rest-frame photon index. The magnification factor of this lens system is estimated to be approximately 4, assuming a singular isothermal sphere lens potential and a lens redshift of z=1.1349 (Siemiginowska et al. 1998). A recent application of the proximity effect, however, measured in the Lyman absorption spectrum of Q1208+1011 (Giallongo et al. 1998) implies an amplification factor as large as 22. Q1413+117 is a BAL GL quasar observed with the ROSAT PSPC on July 20, 1991 for 27,863sec. Using the standard detect and spectral fitting software tools XIMAGE-SOSTA, XIMAGE-detect and XSELECT-XSPEC we detect Q1413+117 at the 3$`\sigma `$ level in the ROSAT PSPC observation. The ROSAT PSPC and optical (HST) source coordinates of Q1413+117 are 14h 15m 46.4s, +11 29 56.3<sup>′′</sup> (J2000), and 14h 15m 45s, +11 29 42<sup>′′</sup> (J2000) respectively. The ROSAT and HST positions are well within the uncertainty of the ROSAT PSPC pointing accuracy. The improvements made in the processing of the ROSAT raw data by the U.S. ROSAT Science Data Center from the revision 0 product (rp700122) to the revision 2 product (rp700122n00), used in this analysis, may explain the non-detection of Q1413+117 in the Green & Mathur 1995 paper. We fitted the poor S/N PSPC spectrum of Q1413+117 with a power-law model that included Galactic and intrinsic absorption due to cold gas at solar abundances. For photon indices ranging between 2.0 and 2.6 our spectral fits imply intrinsic column densities ranging between 2 and 14 $`\times `$ 10<sup>22</sup> cm<sup>-2</sup>. Recently a pair of bright UV-excess objects, QJ0240-343 A and B, with a separation of 6.1 <sup>′′</sup> were discovered by Tinney (1997). The redshift of both objects was found to be 1.4, while no lens has been detected. Monitoring of this system in the optical indicates that it is variable on timescales of a few years. Spectra taken with the 3.9m Anglo-Australian telescope show a metal-line absorption system at z = 0.543 and a possible system at z = 0.337. QJ0240-343 was observed with the ROSAT PSPC in January 1992 with a detected count rate of 2.9$`\pm `$0.7 $`\times `$ 10<sup>-3</sup> cnts s<sup>-1</sup>. GL theory predicts that the lens for this system lies at about z = 0.5. The geometry of this system is very similar to that of the double lens Q0957+561. The large angular image separation of the proposed GL system QJ0240-343 suggests the presence of a lens consisting of a galaxy cluster. The lens however has yet to be detected and it has been suggested that this may be a binary quasar system. ## 5. DISCUSSION ### 5.1. X-ray Properties of Faint Quasars Our present sample of moderate to high S/N ASCA and ROSAT X-ray spectra of GL quasars contains two radio-loud quasars, Q0957+561 and B1422+231 (see Figures 1 and 2), and three radio-quiet quasars, HE1104-1805 (see Figure 3), SBS0909+532 (see figure 4) and Q1208+1011. Derived photon indices in the soft, mid and hard bands for these objects are presented in Table 2. For the two radio-loud quasars Q0957+561 and B1422+231 we observe a flattening of the spectra between mid and hard bands while for the radio-quiet quasar HE1104-1805 we do not observe any significant change in spectral slope between mid and hard bands. The spectral flattening of radio-loud quasars between mid and hard energy bands has been reported for non-lensed quasars (e.g. Wilkes & Elvis 1987; Fiore et al. 1998; Laor et al. 1997). The present findings for GL quasars are consistent with those for non-lensed quasars and imply that the underlying mechanism responsible for the spectral hardening in the hard band persists for the relatively high redshift GL quasars of our sample with X-ray luminousities that are less (by magnification factors ranging between 2 and 30) than previously observed objects at similar redshifts. Our analysis makes use of the GL amplification effect to extend the study of quasar properties to X-ray flux levels as low as a few $`\times `$ 10<sup>-16</sup> erg s<sup>-1</sup> cm<sup>-2</sup>. The limiting sensitivity of the ROSAT All-sky Survey, for example, on which many recent studies are based, is a few $`\times `$ 10<sup>-13</sup> erg s<sup>-1</sup> cm<sup>-2</sup>. We find that the spectral slopes of the radio-loud and non BAL quiet-quasars of our sample are consistent with those found in quasars of higher flux levels and do not appear to approach the observed spectral index of $``$ 1.4 of the hard X-ray background. Absorption due to known intervening systems in Q0957+561, B1422+231, SBS0909+532, Q1208+1011 and HE1104-1805 apparently does not lead to the spectral hardening observed in the Vikhlinin et al. (1995) sample at flux levels below $``$ 10<sup>-13</sup> erg s<sup>-1</sup> cm<sup>-2</sup>. However, we do find that modeling the two radio-quiet BAL quasars PG1115+080 and Q1413+117 and the radio-loud quasar PKS1830-211, which shows strong X-ray absorption (Mathur et al. 1997), with simple power-law models with Galactic absorption results in very low spectral indices (see Table 2). Similar unlensed sources will therefore contribute to the remaining unresolved portion of the XRB. The presently available sample size of X-ray detected BAL radio-quiet quasars, however, will have to be significantly increased before we can make a statisticly significant quantitative assessment of the BAL quasar contribution to the hard XRB. ### 5.2. X-ray Properties of Gravitationally Lensed BAL Quasars Approximately 10% of optically selected quasars have optical/UV spectra that show deep, high-velocity Broad Absorption Lines (BAL) due mostly to highly ionized species such as C IV, Si IV, N V and O VI. However, a small fraction of BAL quasars show low ionization transitions of Mg II, Al III, Fe II and Fe III as well (e.g. Wampler et al. 1995). The observed absorption troughs are found bluewards of the associated resonance lines and are attributed (see Turnshek et al. 1988) to highly ionized gas flowing away from the central source at speeds ranging between 5,000 and 30,000 km s<sup>-1</sup>. Recent polarization observations (Goodrich 1997) indicate that the true fraction of BAL’s and BAL covering factors may be substantially larger ($`>`$ 30%) than the presently quoted value of 10%. Only a very small number of BAL quasars and AGN have been reported in the literature with detections in the X-ray band (PHL5200, Mrk231, SBS1542+541, 1246-057, and Q1120+0195(UM425)). With this work we also add to the list of X-ray detected BAL quasars the GL quasars PG1115+080, Q1413+117 and possibly RXJ0911.4+0551. We consider PG1115+080 and RXJ0911.4+0551 intermediate BAL quasars because of the relatively low peak velocities of the ouflowing absorbers (see Table 9 for a list of outflowing velocities of GL BAL quasars). The X-ray spectra obtained from X-ray observations of BAL quasars have modest (PHL5200, & Mrk231) to poor S/N and cannot accurately constrain the BAL column densities. Several of the GL quasars of our sample are known to contain intervening and intrinsic absorption. In particular PG1115+080 is known to contain a variable BAL system (Michalitsianos et al. 1996). The available X-ray observations indicate that PG1115+080 is a highly variable X-ray source. The large X-ray flux variations (a factor of about 13 decrease in X-ray flux between December 5 1979 and November 21 1991 and about a factor of 5 increase between November 21 1991 and May 27 1994) may possibly be used to substantially reduce the errors in the determination of the time delay of this GL system. Such a monitoring program will have to await the launch of the Chandra X-ray Observatory (CXO), a.k.a. AXAF, which has the spatial resolution to resolve the lensed images. The GL quasars Q1413+117, Q1120+0195 and RXJ0911.4+0551 have also been detected in X-rays and are known to contain BAL features. Unfortunately the available X-ray data have poor S/N and we have only provided estimates of their X-ray flux and luminosity. Recent high resolution optical and NIR imaging of RXJ0911.4+0551 have resolved the object into four lensed images and a lensing galaxy (Burud et al. 1998). They also detect a candidate galaxy cluster 38<sup>′′</sup> away from the image A1 with an estimated redshift of 0.7. It is possible that a large fraction of the detected X-ray emission in the ROSAT HRI observations of RXJ0911.4+0551 is originating from the cluster of galaxies. An interesting finding made by searching through the literature is the apparantly large fraction of optically detected GL BAL quasars. In particular we found seven GL BAL quasars out of a total of about 20 radio-quiet GL quasar candidates known to date. The probability of finding 7 or more BAL quasars out of a sample of 20 GL radio-quiet quasars assuming a true BAL fraction (amongst radio-quiet quasars only) of 0.11 is about 4 $`\times `$ 10<sup>-3</sup>. In Table 9 we list several properties of these GL BAL quasars. Thus we find that at least 35$`\%`$ of radio-quiet gravitationally lensed quasars contain BAL features which is significantly larger than the 10$`\%`$ fraction of BAL quasars found in optically selected quasar samples (almost all BAL’s are radio-quiet and about 90$`\%`$ of optically selected quasars are radio-quiet). Recently, BAL’s have also been identified in a few radio-loud quasars (Brotherton et al. 1998). These observations suggest that a large fraction of BAL quasars are missed from flux limited optical surveys, a view that has also been proposed by Goodrich (1997) based on polarization measurements of BAL quasars. One plausible explanation for the over-abundance of BAL quasars amongst radio-quiet GL quasars is based on the GL magnification effect which causes the luminosity distributions of BAL quasars and GL BAL quasars to differ considerably such that presently available flux limited surveys of BAL quasars detect more GL BAL quasars. We have created a simple model that can explain the difference between the observed GL BAL fraction of $``$ 35% and the observed non-lensed BAL quasar fraction of $``$ 10%. Our model makes use of the quasar luminosity function as parameterized by Pei (1995), assumes that only 20% of BALs observed in optical surveys of unlensed quasars are attenuated by a factor A (see Goodrich, 1997) and it uses the Warren et al. (1994) optical limits for non-lensed quasars and the CASTLES survey optical limits for lensed quasars (Kochanek et al. 1998). To simplify the analysis we assume an average magnification factor of $`<`$M$`>`$ for the GL quasars rather than incorporate each magnification factor separately. A survey of lensed quasars, with luminosity limits between L<sub>1</sub> and L<sub>2</sub>, will have a true luminosity range, assuming an average lens magnification factor of $`<`$M$`>`$, that lies between $`\frac{L_1}{<M>}`$ and $`\frac{L_2}{<M>}`$ for unattenuated lensed BAL quasars and that lies between $`\frac{L_1A}{<M>}`$ and $`\frac{L_2A}{<M>}`$ for attenuated lensed BAL quasars. Following the arguments of Goodrich (1997) we assume that only an observed fraction of 20% of BAL quasars are attenuated (this is approximately the observed fraction of BAL quasars with significant polarization). Based on what we have just discussed, the observed fraction , $`f_{ogb}(L_1,L_2)`$, of GL BAL quasars in the luminosity range of L<sub>1</sub> to L<sub>2</sub> can be approximated with the observed fraction, $`f_{ob}(\frac{L_1}{<M>},\frac{L_2}{<M>})`$, of non-GL BAL quasars in the observed luminosity range of $`\frac{L_1}{<M>}`$ to $`\frac{L_2}{<M>}`$. $$f_{ogb}(L_1,L_2)=f_{ob}(\frac{L_1}{<M>},\frac{L_2}{<M>})$$ (4) We separate the observed BAL quasar fraction, $`f_{ob}(\frac{L_1}{<M>},\frac{L_2}{<M>})`$, into the fraction that is attenuated, $`f_{oba}(\frac{L_1}{<M>},\frac{L_2}{<M>})`$, and the fraction $`f_{obna}(\frac{L_1}{<M>},\frac{L_2}{<M>})`$ that is not attenuated, $`f_{ob}({\displaystyle \frac{L_1}{<M>}},{\displaystyle \frac{L_2}{<M>}})=`$ $`f_{oba}({\displaystyle \frac{L_1}{<M>}},{\displaystyle \frac{L_2}{<M>}})+`$ $`f_{obna}({\displaystyle \frac{L_1}{<M>}},{\displaystyle \frac{L_2}{<M>}})`$ (5) If we assume that the luminosity distribution of non-attenuated BAL quasars is similar to that of non BAL quasars we expect the fraction $`f_{obna}(\frac{L_1}{<M>},\frac{L_2}{<M>})`$ to be independent of luminosity range and therefore approximately equal to the observed value $`f_{obna}(L_3,L_4)`$ $``$ 8$`\%`$, where L<sub>3</sub> and L<sub>4</sub> are the Warren et at. (1994) optical luminosity limits. As pointed out by Goodrich 1997 the attenuation expected to be present in about 20$`\%`$ of all BAL quasars causes the observed luminosity function for BAL quasars to be considerably different from the true luminosity function for BAL quasars. The ratio of observed, $`f_{oba}`$, to true, $`f_{tba}`$, fraction of attenuated BAL quasars can be determined if one incorporates the effect of attenuation in the quasar luminosity function. In particular if we define $`N(\frac{L_1}{<M>},\frac{L_2}{<M>}z_1,z_2)`$ as the integral of the quasar luminosity function as parametrized by Pei (1995) over the luminosity range $`\frac{L_1}{<M>}`$ and $`\frac{L_2}{<M>}`$ and the redshift range $`z_1`$ and $`z_2`$ then we may write the ratio of observed to true fraction of attenuated BAL quasars within this luminosity range as, $$\frac{f_{oba}(\frac{L_1}{<M>},\frac{L_2}{<M>})}{f_{tba}}=\frac{N(\frac{L_1A}{<M>},\frac{L_2A}{<M>},z_1,z_2)}{N(\frac{L_1}{<M>},\frac{L_2}{<M>},z_1,z_2)}$$ (6) The observed fraction of about 2% of attenuated non-lensed BAL quasars, however, is measured within the Warren et al. (1994) optical limits of L<sub>3</sub> = 2.7 $`\times `$ 10<sup>46</sup> erg s<sup>-1</sup> and L<sub>4</sub> = 3.8 $`\times `$ 10<sup>47</sup> erg s<sup>-1</sup> and redshift range of z<sub>3</sub> = 2 and z<sub>4</sub> = 4.5. We therefore write the ratio of observed to true fraction of attenuated BAL quasars within the L<sub>3</sub> and L<sub>4</sub> range as, $$\frac{f_{oba}(L_3,L_4)}{f_{tba}}=\frac{N(L_3A,L_4A,z_3,z_4)}{N(L_3,L_4,z_3,z_4)}$$ (7) Combining equations 4, 5, 6 and 7 we obtain the following expression for the observed fraction of GL BAL quasars as a function of average GL magnification $`<M>`$ and BAL attenuation factor A, $`f_{ogb}(L_1,L_2)=f_{obna}(L_3,L_4)+f_{oba}(L_3,L_4)`$ $`\times {\displaystyle \frac{N(L_3,L_4,z_3,z_4)}{N(L_3A,L_4A,z_3,z_4)}}{\displaystyle \frac{N(\frac{L_1A}{<M>},\frac{L_2A}{<M>},z_1,z_2)}{N(\frac{L_1}{<M>},\frac{L_2}{<M>},z_1,z_2)}}`$ (8) In Figure 10 we plot the expected observed fraction of GL BAL quasars as a function of attenuation values A and magnification factors $`<M>`$. The magnification effect of GL quasars alone cannot explain the observed enhanced GL BAL quasar fraction of $``$ 35$`\%`$. By combining, however, the magnification effect with the presence of an attenuation of the continuum in a fraction of BAL quasars, as suggested by the polarization observations by Goodrich 1997, our simple model can reproduce the observed GL BAL quasar fraction of $``$ 35$`\%`$. For a range of average magnification factors $`<M>`$ between 5 and 15 we obtain attenuation values A ranging between 5 and 4.5. The range of attenuation values of 4.5 to 5, suggested by the observed fraction of GL BALQSO’s, is close to the range of 3 to 4 implied by the observed polarization distributions of BALQSO’s and non-BAL radio-quiet quasars (Goodrich 1997), especially considering the uncertainties in both analyses. A value of $`<M>`$ $``$ 10 is consistent with typical estimated values for GL quasars, (see, for example, our GL model estimates in Table 3). ## 6. CONCLUSIONS We have introduced a new approach in studying the X-ray properties of faint quasars. Our analysis makes use of the GL amplification effect to extend the study of quasar properties to X-ray flux levels as low as a few $`\times `$ 10<sup>-16</sup> erg s<sup>-1</sup> cm<sup>-2</sup>. For the two radio-loud GL quasars Q0957+561 and B1422+231 we observe a flattening of the spectra between mid and hard bands (rest-frame) while for the radio-quiet quasar HE1104-1805 we do not observe any significant change in spectral slope between mid and hard bands. The present findings in GL quasars are consistent with those of non-lensed quasars and imply that the underlying mechanism responsible for the spectral hardening from mid to hard bands persists for the relatively high redshift GL radio-loud quasars of our sample with X-ray luminousities that are less (by the magnification factors indicated in Table 3) than previously observed objects at similar redshifts. Our results suggest that radio-loud and non-BAL radio-quiet quasars with unlensed fluxes as low as a few $`\times `$ 10<sup>-16</sup> erg s<sup>-1</sup> cm<sup>-2</sup> do not have spectral slopes that are any different from brighter quasars. Modeling the spectra of the two GL BAL quasars and the radio-loud quasar PKS1830-211 in our sample with simple power-laws and including only Galactic absorption leads to spectral indices that are considerably flatter than the average values for quasars. These results therefore imply that BAL quasars and quasars with associated absorption will contribute to the unresolved portion of the hard XRB. We must emphasize, however, that our present sample of GL quasars will need to be enlarged to assess the significance of the contribution of BAL quasars to the XRB. X-ray observations in the near future with the X-ray missions CXO, XMM and ASTRO-E will significantly aid in adding many more GL quasars to this sample. Our analysis of several X-ray observations of the GL BAL quasar PG1115+080 show that it is an extremely variable source. Fits of various models to the spectra obtained during these observations suggest that the X-ray variability is partly due to a variable BAL absorber. The X-ray flux variability in this source can be used to improve present measurements of the time delay. The large variability in the X-ray compared to optical band offers the prospect of substantially reducing the errors in deriving a time delay from cross-correlating image light curves. A precise measurement of the time delay combined with an accurate model for the mass distribution of the lens can be used to derive a Hubble constant that does not depend on the reliability of a “standard candle”. The scheduled monitoring of PG1115+080 with the CXO will provide spatially resolved spectra and light curves for the individual lensed images. One of the significant findings of this work was a surprisingly large fraction of BAL quasars that are gravitationally lensed. In particular we find 7 BAL quasars out of a sample of 20 GL radio-quiet quasars. We have successfully modeled this effect and find that an attenuation factor A $``$ 5 of the BAL continuum of only 20$`\%`$ of all BAL quasars is consistent with the observed GL fraction of 35$`\%`$. We emphasize that the magnification effect alone cannot explain the observed difference between BAL fractions for lensed and non-lensed quasars. One needs to incorporate in addition an attenuation mechanism to produce the observed results. These observations therefore are suggestive of the existence of a hidden population of absorbed high redshift quasars which have eluded detection by present flux limited surveys. As X-ray and optical surveys approach lower flux limits we expect the fraction of BAL quasars found to increase. I would like to thank N. Brandt, M. Eracleous, G. Garmire, and J. Nousek for helpful discussions and comments. This work was supported by NASA grant NAS 8-38252. This research has made use of data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center.
no-problem/9910/hep-ph9910390.html
ar5iv
text
# Electroweak Precision Tests Invited talk at the International Workshop Particles in Astrophysics and Cosmology: from Theory to Observation (València, 3–8 May 1999) ## 1 INTRODUCTION The Standard Model (SM) constitutes one of the most successful achievements in modern physics. It provides a very elegant theoretical framework, which is able to describe all known experimental facts in particle physics. A detailed description of the SM and its impressive phenomenological success can be found in Refs. and , which discuss the electroweak and strong sectors, respectively. The high accuracy achieved by the most recent experiments allows to make stringent tests of the SM structure at the level of quantum corrections. The different measurements complement each other in their different sensitivity to the SM parameters. Confronting these measurements with the theoretical predictions, one can check the internal consistency of the SM framework and determine its parameters. The following sections provide an overview of our present experimental knowledge on the electroweak couplings. A brief description of some classical QED tests is presented in Section 2. The leptonic couplings of the $`W^\pm `$ bosons are analyzed in Section 3, where the tests on lepton universality and the Lorentz structure of the $`l^{}\nu _ll^{}\overline{\nu }_l^{}`$ transition amplitudes are discussed. Section 4 describes the status of the neutral–current sector, using the latest experimental results reported by LEP and SLD. Some summarizing comments are finally given in Section 5. ## 2 QED The most stringent QED test comes from the high–precision measurements of the $`e`$ and $`\mu `$ anomalous magnetic moments $`a_l^\gamma (g_l2)/2`$: $`a_e^\gamma `$ $`=`$ $`\{\begin{array}{cc}(\mathrm{115\hspace{0.17em}965\hspace{0.17em}215.4}\pm 2.4)\times 10^{11}& (\text{Theory})\\ (\mathrm{115\hspace{0.17em}965\hspace{0.17em}219.3}\pm 1.0)\times 10^{11}& (\text{Exp.})\end{array}`$ (3) $`a_\mu ^\gamma `$ $`=`$ $`\{\begin{array}{cc}(\mathrm{1\hspace{0.17em}165\hspace{0.17em}916.0}\pm 0.7)\times 10^9& (\text{Theory})\\ (\mathrm{1\hspace{0.17em}165\hspace{0.17em}923.0}\pm 8.4)\times 10^9& (\text{Exp.})\end{array}`$ (6) The impressive agreement between theory and experiment (at the level of the ninth digit for $`a_e^\gamma `$) promotes QED to the level of the best theory ever build by the human mind to describe nature. Hypothetical new–physics effects are constrained to the ranges $`|\delta a_e^\gamma |<0.9\times 10^{10}`$ and $`|\delta a_\mu ^\gamma |<2.4\times 10^8`$ (95% CL). To a measurable level, $`a_e^\gamma `$ arises entirely from virtual electrons and photons; these contributions are known to $`O(\alpha ^4)`$. The sum of all other QED corrections, associated with higher–mass leptons or intermediate quarks, only amounts to $`+(0.4366\pm 0.0042)\times 10^{11}`$, while the weak interaction effect is a tiny $`+0.0030\times 10^{11}`$; these numbers are well below the present experimental precision. The theoretical error is dominated by the uncertainty in the input value of the electromagnetic coupling $`\alpha `$. In fact, turning things around, one can use $`a_e^\gamma `$ to make the most precise determination of the fine structure constant : $$\alpha ^1=137.03599959\pm 0.00000040.$$ (7) The resulting accuracy is one order of magnitude better than the usually quoted value $`\alpha ^1=137.0359895\pm 0.0000061`$. The anomalous magnetic moment of the muon is sensitive to virtual contributions from heavier states; compared to $`a_e^\gamma `$, they scale as $`m_\mu ^2/m_e^2`$. The main theoretical uncertainty on $`a_\mu ^\gamma `$ has a QCD origin. Since quarks have electric charge, virtual quark–antiquark pairs can be created by the photon leading to the so–called hadronic vacuum polarization corrections to the photon propagator (Figure 1.c). Owing to the non-perturbative character of QCD at low energies, the light–quark contribution cannot be reliably calculated at present; fortunately, this effect can be extracted from the measurement of the cross-section $`\sigma (e^+e^{}\text{hadrons})`$ at low energies, and from the invariant–mass distribution of the final hadrons in $`\tau `$ decays . The large uncertainties of the present data are the dominant limitation to the achievable theoretical precision on $`a_\mu ^\gamma `$. It is expected that this will be improved at the DA$`\mathrm{\Phi }`$NE $`\mathrm{\Phi }`$ factory, where an accurate measurement of the hadronic production cross-section in the most relevant kinematical region is expected . Additional QCD uncertainties stem from the (smaller) light–by–light scattering contributions, where four photons couple to a light–quark loop (Figure 1.d); these corrections are under active investigation at present . The improvement of the theoretical $`a_\mu ^\gamma `$ prediction is of great interest in view of the new E821 experiment , presently running at Brookhaven, which aims to reach a sensitivity of at least $`4\times 10^{10}`$, and thereby observe the contributions from virtual $`W^\pm `$ and $`Z`$ bosons ($`\delta a_\mu ^\gamma |_{\text{weak}}15\times 10^{10}`$). The extent to which this measurement could provide a meaningful test of the electroweak theory depends critically on the accuracy one will be able to achieve pinning down the QCD corrections. ## 3 LEPTONIC CHARGED–CURRENT COUPLINGS The simplest flavour–changing process is the leptonic decay of the $`\mu `$, which proceeds through the $`W`$–exchange diagram shown in Figure 2. The momentum transfer carried by the intermediate $`W`$ is very small compared to $`M_W`$. Therefore, the vector–boson propagator reduces to a contact interaction. The decay can then be described through an effective local 4–fermion Hamiltonian, $$_{\text{eff}}=\frac{G_F}{\sqrt{2}}\left[\overline{e}\gamma ^\alpha (1\gamma _5)\nu _e\right]\left[\overline{\nu }_\mu \gamma _\alpha (1\gamma _5)\mu \right],$$ (8) where $$\frac{G_F}{\sqrt{2}}=\frac{g^2}{8M_W^2}$$ (9) is called the Fermi coupling constant. $`G_F`$ is fixed by the total decay width, $$\frac{1}{\tau _\mu }=\frac{G_F^2m_\mu ^5}{192\pi ^3}\left(1+\delta _{\text{RC}}\right)f\left(m_e^2/m_\mu ^2\right),$$ (10) where $`f(x)=18x+8x^3x^412x^2\mathrm{ln}x`$, and $`\delta _{\text{RC}}=0.0042`$ takes into account the leading higher–order corrections . The measured $`\mu `$ lifetime , $`\tau _\mu =(2.19703\pm 0.00004)\times 10^6`$ s, implies the value $`G_F`$ $`=`$ $`(1.16637\pm 0.00001)\times 10^5\text{GeV}^2`$ (11) $``$ $`(293\text{GeV})^2.`$ The leptonic $`\tau `$ decay widths $`\tau ^{}l^{}\overline{\nu }_l\nu _\tau `$ ($`l=e,\mu `$) are also given by Eq. (10), making the appropriate changes for the masses of the initial and final leptons. Using the value of $`G_F`$ measured in $`\mu `$ decay, one gets a relation between the $`\tau `$ lifetime and leptonic branching ratios : $`B_{\tau e}`$ $`=`$ $`{\displaystyle \frac{B_{\tau \mu }}{0.972564\pm 0.000010}}`$ (12) $`=`$ $`{\displaystyle \frac{\tau _\tau }{(1.6321\pm 0.0014)\times 10^{12}\text{s}}}.`$ The errors reflect the present uncertainty of $`0.3`$ MeV in the value of $`m_\tau `$. The measured ratio $`B_{\tau \mu }/B_{\tau e}=0.974\pm 0.004`$ is in perfect agreement with the predicted value. As shown in Figure 4, the relation between $`B_{\tau e}`$ and $`\tau _\tau `$ is also well satisfied by the present data. The experimental precision (0.3%) is already approaching the level where a possible non-zero $`\nu _\tau `$ mass could become relevant; the present bound $`m_{\nu _\tau }<18.2`$ MeV (95% CL) only guarantees that such effect is below 0.08%. These measurements test the universality of the $`W`$ couplings to the leptonic charged currents. Allowing the coupling $`g`$ to depend on the considered lepton flavour (i.e. $`g_e`$, $`g_\mu `$, $`g_\tau `$), the $`B_{\tau \mu }/B_{\tau e}`$ ratio constrains $`|g_\mu /g_e|`$, while $`B_{\tau e}/\tau _\tau `$ provides information on $`|g_\tau /g_\mu |`$. The present results are shown in Tables 3, 3 and 3, together with the values obtained from the ratios $`R_{\pi e/\mu }\mathrm{\Gamma }(\pi ^{}e^{}\overline{\nu }_e)/\mathrm{\Gamma }(\pi ^{}\mu ^{}\overline{\nu }_\mu )`$ and $`R_{\tau /P}\mathrm{\Gamma }(\tau ^{}\nu _\tau P^{})/\mathrm{\Gamma }(P^{}\mu ^{}\overline{\nu }_\mu )`$ \[$`P=\pi ,K`$\], from the comparison of the $`\sigma B`$ partial production cross-sections for the various $`W^{}l^{}\overline{\nu }_l`$ decay modes at the $`p\overline{p}`$ colliders, and from the most recent LEP2 measurements of the leptonic $`W^\pm `$ branching ratios. The present data verify the universality of the leptonic charged–current couplings to the 0.15% ($`\mu /e`$) and 0.23% ($`\tau /\mu `$, $`\tau /e`$) level. The precision of the most recent $`\tau `$–decay measurements is becoming competitive with the more accurate $`\pi `$–decay determination. It is important to realize the complementarity of the different universality tests. The pure leptonic decay modes probe the charged–current couplings of a transverse $`W`$. In contrast, the decays $`\pi /Kl\overline{\nu }`$ and $`\tau \nu _\tau \pi /K`$ are only sensitive to the spin–0 piece of the charged current; thus, they could unveil the presence of possible scalar–exchange contributions with Yukawa–like couplings proportional to some power of the charged–lepton mass. ### 3.1 Lorentz Structure Let us consider the leptonic decay $`l^{}\nu _ll^{}\overline{\nu }_l^{}`$. The most general, local, derivative–free, lepton–number conserving, four–lepton interaction Hamiltonian, consistent with locality and Lorentz invariance $$=4\frac{G_{l^{}l}}{\sqrt{2}}\underset{n,ϵ,\omega }{}g_{ϵ\omega }^n\left[\overline{l_ϵ^{}}\mathrm{\Gamma }^n(\nu _l^{})_\sigma \right]\left[\overline{(\nu _l)_\lambda }\mathrm{\Gamma }_nl_\omega \right],$$ (13) contains ten complex coupling constants or, since a common phase is arbitrary, nineteen independent real parameters. The subindices $`ϵ,\omega ,\sigma ,\lambda `$ label the chiralities (left–handed, right–handed) of the corresponding fermions, and $`n`$ the type of interaction: scalar ($`I`$), vector ($`\gamma ^\mu `$), tensor ($`\sigma ^{\mu \nu }/\sqrt{2}`$). For given $`n,ϵ,\omega `$, the neutrino chiralities $`\sigma `$ and $`\lambda `$ are uniquely determined. Taking out a common factor $`G_{l^{}l}`$, which is determined by the total decay rate, the coupling constants $`g_{ϵ\omega }^n`$ are normalized to $$1=\underset{n,ϵ,\omega }{}|g_{ϵ\omega }^n/N^n|^2,$$ (14) where $`N^n=2`$, 1, $`1/\sqrt{3}`$ for $`n=`$ S, V, T. In the SM, $`g_{LL}^V=1`$ and all other $`g_{ϵ\omega }^n=0`$. The couplings $`g_{ϵ\omega }^n`$ can be investigated through the measurement of the final charged–lepton distribution and with the inverse decay $`\nu _l^{}ll^{}\nu _l`$. For $`\mu `$ decay, where precise measurements of the polarizations of both $`\mu `$ and $`e`$ have been performed, there exist stringent bounds on the couplings involving right–handed helicities. These limits show nicely that the $`\mu `$–decay transition amplitude is indeed of the predicted V$``$A type: $`|g_{LL}^V|>0.96`$ (90% CL). Figure 6 shows the most recent limits on the $`\tau `$ couplings . The circles of unit area indicate the range allowed by the normalization constraint (14). The present experimental bounds are shown as shaded circles. For comparison, the (stronger) $`\mu `$-decay limits are also given (darker circles). The measurement of the $`\tau `$ polarization allows to bound those couplings involving an initial right–handed lepton; however, information on the final charged–lepton polarization is still lacking. The measurement of the inverse decay $`\nu _\tau l\tau \nu _l`$, needed to separate the $`g_{LL}^S`$ and $`g_{LL}^V`$ couplings, looks far out of reach. ## 4 NEUTRAL–CURRENT COUPLINGS In the SM, all fermions with equal electric charge have identical vector, $`v_f=T_3^f(14|Q_f|\mathrm{sin}^2\theta _W)`$ and axial–vector, $`a_f=T_3^f`$, couplings to the $`Z`$ boson. These neutral current couplings have been precisely tested at LEP and SLC. The gauge sector of the SM is fully described in terms of only four parameters: $`g`$, $`g^{}`$, and the two constants characterizing the scalar potential. We can trade these parameters by $`\alpha `$, $`G_F`$, $$M_Z=(91.1871\pm 0.0021)\text{GeV},$$ (15) and $`m_H`$; this has the advantage of using the 3 most precise experimental determinations to fix the interaction. The relations $$M_W^2s_W^2=\frac{\pi \alpha }{\sqrt{2}G_F},s_W^2=1\frac{M_W^2}{M_Z^2},$$ (16) determine then $`s_W^2\mathrm{sin}^2\theta _W=0.2122`$ and $`M_W=80.94`$ GeV; in reasonable agreement with the measured $`W`$ mass , $`M_W=80.394\pm 0.042`$ GeV. At tree level, the partial decay widths of the $`Z`$ boson are given by $$\mathrm{\Gamma }\left[Z\overline{f}f\right]=\frac{G_FM_Z^3}{6\pi \sqrt{2}}\left(|v_f|^2+|a_f|^2\right)N_f,$$ (17) where $`N_l=1`$ and $`N_q=N_C`$. Summing over all possible final fermion pairs, one predicts the total width $`\mathrm{\Gamma }_Z=2.474`$ GeV, to be compared with the experimental value $`\mathrm{\Gamma }_Z=(2.4944\pm 0.0024)`$ GeV. The leptonic decay widths of the $`Z`$ are predicted to be $`\mathrm{\Gamma }_l\mathrm{\Gamma }(Zl^+l^{})=84.84`$ MeV, in agreement with the measured value $`\mathrm{\Gamma }_l=(83.96\pm 0.09)`$ MeV. Other interesting quantities are the ratios $`R_l\mathrm{\Gamma }(Z\text{hadrons})/\mathrm{\Gamma }_l`$ and $`R_Q\mathrm{\Gamma }(Z\overline{Q}Q)/\mathrm{\Gamma }(Z\text{hadrons})`$. The comparison between the tree–level theoretical predictions and the experimental values, shown in Table 4, is quite good. Additional information can be obtained from the study of the fermion–pair production process $`e^+e^{}\gamma ,Z\overline{f}f`$. LEP has provided accurate measurements of the total cross-section, the forward–backward asymmetry, the polarization asymmetry and the forward–backward polarization asymmetry, at the $`Z`$ peak ($`s=M_Z^2`$): $`\sigma ^{0,f}={\displaystyle \frac{12\pi }{M_Z^2}}{\displaystyle \frac{\mathrm{\Gamma }_e\mathrm{\Gamma }_f}{\mathrm{\Gamma }_Z^2}},`$ $`𝒜_{\text{FB}}^{0,f}={\displaystyle \frac{3}{4}}𝒫_e𝒫_f,`$ $`𝒜_{\text{Pol}}^{0,f}=𝒫_f,`$ $`𝒜_{\text{FB,Pol}}^{0,f}={\displaystyle \frac{3}{4}}𝒫_e,`$ (18) where $`\mathrm{\Gamma }_f`$ is the $`Z`$ partial decay width to the $`\overline{f}f`$ final state, and $$𝒫_f\frac{2v_fa_f}{v_f^2+a_f^2}$$ (19) is the average longitudinal polarization of the fermion $`f`$. The measurement of the final polarization asymmetries can (only) be done for $`f=\tau `$, because the spin polarization of the $`\tau `$’s is reflected in the distorted distribution of their decay products. Therefore, $`𝒫_\tau `$ and $`𝒫_e`$ can be determined from a measurement of the spectrum of the final charged particles in the decay of one $`\tau `$, or by studying the correlated distributions between the final products of both $`\tau ^{}s`$ . With polarized $`e^+e^{}`$ beams, one can also study the left–right asymmetry between the cross-sections for initial left– and right–handed electrons. At the $`Z`$ peak, this asymmetry directly measures the average initial lepton polarization, $`𝒫_e`$, without any need for final particle identification. SLD has also measured the left–right forward–backward asymmetries, which are only sensitive to the final state couplings: $$𝒜_{\text{LR}}^0=𝒫_e,𝒜_{\text{FB,LR}}^{0,f}=\frac{3}{4}𝒫_f.$$ (20) Using $`s_W^2=0.2122`$, one gets the (tree–level) predictions shown in the second column of Table 4. The comparison with the experimental measurements looks reasonable for the total hadronic cross-section $`\sigma _{\text{had}}^0_q\sigma ^{0,q}`$; however, all leptonic asymmetries disagree with the measured values by several standard deviations. As shown in the table, the same happens with the heavy–flavour forward–backward asymmetries $`𝒜_{\text{FB}}^{0,b/c}`$, which compare very badly with the experimental measurements; the agreement is however better for $`𝒫_{b/c}`$. Clearly, the problem with the asymmetries is their high sensitivity to the input value of $`\mathrm{sin}^2\theta _W`$; specially the ones involving the leptonic vector coupling $`v_l=(14\mathrm{sin}^2\theta _W)/2`$. Therefore, they are an extremely good window into higher–order electroweak corrections. ### 4.1 Important QED and QCD Corrections The photon propagator gets vacuum polarization corrections, induced by virtual fermion–antifermion pairs. Their effect can be taken into account through a redefinition of the QED coupling, which depends on the energy scale of the process; the resulting effective coupling $`\alpha (s)`$ is called the QED running coupling. The fine structure constant is measured at very low energies; it corresponds to $`\alpha (m_e^2)`$. However, at the $`Z`$ peak, we should rather use $`\alpha (M_Z^2)`$. The long running from $`m_e`$ to $`M_Z`$ gives rise to a sizeable correction : $`\alpha (M_Z^2)^1=128.878\pm 0.090`$. The quoted uncertainty arises from the light–quark contribution, which is estimated from $`\sigma (e^+e^{}\text{hadrons})`$ and $`\tau `$–decay data. Since $`G_F`$ is measured at low energies, while $`M_W`$ is a high–energy parameter, the relation between both quantities in Eq. (16) is clearly modified by vacuum–polarization contributions. One gets then the corrected predictions $`M_W=79.96`$ GeV and $`s_W^2=0.2311`$. The gluonic corrections to the $`Z\overline{q}q`$ decays can be directly incorporated by taking an effective number of colours $`N_q=N_C\left\{1+\frac{\alpha _s}{\pi }+\mathrm{}\right\}\mathrm{\hspace{0.17em}3.12}`$, where we have used $`\alpha _s(M_Z^2)0.12`$. The third column in Table 4 shows the numerical impact of these QED and QCD corrections. In all cases, the comparison with the data gets improved. However, it is in the asymmetries where the effect gets more spectacular. Owing to the high sensitivity to $`s_W^2`$, the small change in the value of the weak mixing angle generates a huge difference of about a factor of 2 in the predicted asymmetries. The agreement with the experimental values is now very good. ### 4.2 Higher–Order Electroweak Corrections Initial– and final–state photon radiation is by far the most important numerical correction. One has in addition the contributions coming from photon exchange between the fermionic lines. All these QED corrections are to a large extent dependent on the detector and the experimental cuts, because of the infra-red problems associated with massless photons. (one needs to define, for instance, the minimun photon energy which can be detected). These effects are usually estimated with Monte Carlo programs and subtracted from the data. More interesting are the so–called oblique corrections, gauge–boson self-energies induced by vacuum polarization diagrams, which are universal (process independent). In the case of the $`W^\pm `$ and the $`Z`$, these corrections are sensitive to heavy particles (such as the top) running along the loop . In QED, the vacuum polarization contribution of a heavy fermion pair is suppressed by inverse powers of the fermion mass. At low energies ($`s<<m_f^2`$), the information on the heavy fermions is then lost. This decoupling of the heavy fields happens in theories like QED and QCD, with only vector couplings and an exact gauge symmetry . The SM involves, however, a broken chiral gauge symmetry. The $`W^\pm `$ and $`Z`$ self-energies induced by a heavy top generate contributions which increase quadratically with the top mass . The leading $`m_t^2`$ contribution to the $`W^\pm `$ propagator amounts to a $`3\%`$ correction to the relation (16) between $`G_F`$ and $`M_W`$. Owing to an accidental $`SU(2)_C`$ symmetry of the scalar sector, the virtual production of Higgs particles does not generate any $`m_H^2`$ dependence at one loop . The dependence on the Higgs mass is only logarithmic. The numerical size of the correction induced on (16) is $`0.3\%`$ ($`+1\%`$) for $`m_H=60`$ (1000) GeV. The vertex corrections are non-universal and usually smaller than the oblique contributions. There is one interesting exception, the $`Z\overline{b}b`$ vertex, which is sensitive to the top quark mass . The $`Z\overline{f}f`$ vertex gets 1–loop corrections where a virtual $`W^\pm `$ is exchanged between the two fermionic legs. Since, the $`W^\pm `$ coupling changes the fermion flavour, the decays $`Z\overline{d}_i\overline{d}_i`$ get contributions with a top quark in the internal fermionic lines. These amplitudes are suppressed by a small quark–mixing factor $`|V_{td_i}|^2`$, except for the $`Z\overline{b}b`$ vertex because $`|V_{tb}|1`$. The explicit calculation shows the presence of hard $`m_t^2`$ corrections to the $`Z\overline{b}b`$ vertex, which amount to a $`1.5\%`$ effect in $`\mathrm{\Gamma }(Z\overline{b}b)`$. The non-decoupling present in the $`Z\overline{b}b`$ vertex is quite different from the one happening in the boson self-energies. The vertex correction does not have any dependence with the Higgs mass. Moreover, while any kind of new heavy particle, coupling to the gauge bosons, would contribute to the $`W^\pm `$ and $`Z`$ self-energies, possible new–physics contributions to the $`Z\overline{b}b`$ vertex are much more restricted and, in any case, different. Therefore, an independent experimental test of the two effects is very valuable in order to disentangle possible new–physics contributions from the SM corrections. The remaining quantum corrections (box diagrams, Higgs exchange) are rather small at the $`Z`$ peak. ### 4.3 Lepton Universality Tables 6 and 6 show the present experimental results for the leptonic $`Z`$ decay widths and asymmetries. The data are in excellent agreement with the SM predictions and confirm the universality of the leptonic neutral couplings. The average of the two $`\tau `$ polarization measurements, $`𝒜_{\text{Pol}}^{0,\tau }`$ and $`\frac{4}{3}𝒜_{\text{FB,Pol}}^{0,\tau }`$, results in $`𝒫_l=0.1450\pm 0.0033`$ which deviates by $`1.5\sigma `$ from the $`𝒜_{LR}^0`$ measurement. Assuming lepton universality, the combined result from all leptonic asymmetries gives $$𝒫_l=0.1497\pm 0.0016.$$ (21) Figure 8 shows the 68% probability contours in the $`a_l`$$`v_l`$ plane, obtained from a combined analysis of all leptonic observables. Lepton universality is now tested to the $`0.15\%`$ level for the axial–vector neutral couplings, while only a few per cent precision has been achieved for the vector couplings : $`{\displaystyle \frac{a_\mu }{a_e}}=1.0001\pm 0.0014`$ , $`{\displaystyle \frac{v_\mu }{v_e}}=0.981\pm 0.082,`$ $`{\displaystyle \frac{a_\tau }{a_e}}=1.0019\pm 0.0015`$ , $`{\displaystyle \frac{v_\tau }{v_e}}=0.964\pm 0.032.`$ The neutrino couplings can be determined from the invisible $`Z`$–decay width, $`\mathrm{\Gamma }_{\text{inv}}/\mathrm{\Gamma }_l=5.941\pm 0.016`$, by assuming three identical neutrino generations with left–handed couplings and fixing the sign from neutrino scattering data . The resulting experimental value , $`v_\nu =a_\nu =0.50123\pm 0.00095`$, is in perfect agreement with the SM. Alternatively, one can use the SM prediction, $`\mathrm{\Gamma }_{\text{inv}}/\mathrm{\Gamma }_l=(1.9912\pm 0.0012)N_\nu `$, to get a determination of the number of (light) neutrino flavours : $$N_\nu =2.9835\pm 0.0083.$$ (22) The universality of the neutrino couplings has been tested with $`\nu _\mu e`$ scattering data, which fixes the $`\nu _\mu `$ coupling to the $`Z`$: $`v_{\nu _\mu }=a_{\nu _\mu }=0.502\pm 0.017`$. Assuming lepton universality, the measured leptonic asymmetries can be used to obtain the effective electroweak mixing angle in the charged–lepton sector ($`\chi ^2/\text{d.o.f.}=3.4/4`$): $$\mathrm{sin}^2\theta _{\text{eff}}^{\text{lept}}\frac{1}{4}\left(1\frac{v_l}{a_l}\right)=\mathrm{\hspace{0.17em}0.23119}\pm 0.00021.$$ Including also the information provided by the hadronic asymmetries, one gets $`\mathrm{sin}^2\theta _{\text{eff}}^{\text{lept}}=0.23153\pm 0.00017`$, with a $`\chi ^2/\text{d.o.f.}=13.3/7`$. ### 4.4 SM Electroweak Fit The high accuracy of the present data provides compelling evidence for the pure weak quantum corrections, beyond the main QED and QCD corrections discussed in Section 4.1. The measurements are sufficiently precise to require the presence of quantum corrections associated with the virtual exchange of top quarks, gauge bosons and Higgses. Figure 9 shows the constraints obtained on $`m_t`$ and $`m_H`$, from a global fit to the electroweak data . The fitted value of the top mass is in excellent agreement with the direct Tevatron measurement $`m_t=174.3\pm 5.1`$ GeV . The data prefers a light Higgs, close to the present lower bound from direct searches, $`m_H>95.2`$ GeV (95% CL). There is a large correlation between the fitted values of $`m_t`$ and $`m_H`$; the correlation would be much larger if the $`R_b`$ measurement was not used ($`R_b`$ is insensitive to $`m_H`$). The fit gives the upper bound : $$m_H<245\text{GeV}(95\%\text{CL}).$$ (23) The global fit results in an extracted value of the strong coupling, $`\alpha _s(M_Z^2)=0.119\pm 0.003`$, which agrees very well with the world average value $`\alpha _s(M_Z^2)=0.119\pm 0.002`$. As shown in Table 4, the different electroweak measurements are well reproduced by the SM electroweak fit. At present, the larger deviation appears in $`𝒜_{\text{FB}}^{0,b}`$, which seems to be too low by $`2.2\sigma `$. The uncertainty on the QED coupling $`\alpha (M_Z^2)^1`$ introduces a severe limitation on the accuracy of the SM predictions. The uncertainty of the “standard” value, $`\alpha (M_Z^2)^1=128.878\pm 0.090`$ , causes an error of $`0.00023`$ on the $`\mathrm{sin}^2\theta _{\text{eff}}^{\text{lept}}`$ prediction. A recent analysis , using hadronic $`\tau `$–decay data, results in a more precise value, $`\alpha (M_Z^2)^1=128.933\pm 0.021`$, reducing the corresponding uncertainty on $`\mathrm{sin}^2\theta _{\text{eff}}^{\text{lept}}`$ to $`5\times 10^4`$; this translates into a $`30\%`$ reduction in the error of the fitted $`\mathrm{log}\left(m_H\right)`$ value. To improve the present determination of $`\alpha (M_Z^2)^1`$ one needs to perform a good measurement of $`\sigma (e^+e^{}\text{hadrons})`$, as a function of the centre–of–mass energy, in the whole kinematical range spanned by DA$`\mathrm{\Phi }`$NE, a tau–charm factory and the B factories. This would result in a much stronger constraint on the Higgs mass. ## 5 SUMMARY The SM provides a beautiful theoretical framework which is able to accommodate all our present knowledge on electroweak interactions. It is able to explain any single experimental fact and, in some cases, it has successfully passed very precise tests at the 0.1% to 1% level. However, there are still pieces of the SM Lagrangian which so far have not been experimentally analyzed in any precise way. The gauge self-couplings are presently being investigated at LEP2, through the study of the $`e^+e^{}W^+W^{}`$ production cross-section. The $`VA`$ ($`\nu _e`$-exchange in the $`t`$ channel) contribution generates an unphysical growing of the cross-section with the centre-of-mass energy, which is compensated through a delicate gauge cancellation with the $`e^+e^{}\gamma ,ZW^+W^{}`$ amplitudes. The recent LEP2 measurements of $`\sigma (e^+e^{}W^+W^{})`$, in good agreement with the SM, have provided already convincing evidence for the contribution coming from the $`ZWW`$ vertex. The study of this process has also provided a more accurate measurement of $`M_W`$, allowing to improve the precision of the neutral–current analyses. The present LEP2 determination, $`M_W=80.350\pm 0.056`$ GeV, is already more precise than the value $`M_W=80.448\pm 0.062`$ GeV obtained in $`p\overline{p}`$ colliders. Moreover it is in nice agreement with the result $`M_W=80.364\pm 0.029`$ GeV obtained from the indirect SM fit of electroweak data . The Higgs particle is the main missing block of the SM framework. The data provide a clear confirmation of the assumed pattern of spontaneous symmetry breaking, but do not prove the minimal Higgs mechanism embedded in the SM. At present, a relatively light Higgs is preferred by the indirect precision tests. LHC will try to find out whether such scalar field exists. In spite of its enormous phenomenological success, the SM leaves too many unanswered questions to be considered as a complete description of the fundamental forces. We do not understand yet why fermions are replicated in three (and only three) nearly identical copies? Why the pattern of masses and mixings is what it is? Are the masses the only difference among the three families? What is the origin of the SM flavour structure? Which dynamics is responsible for the observed CP violation? Clearly, we need more experiments in order to learn what kind of physics exists beyond the present SM frontiers. We have, fortunately, a very promising and exciting future ahead of us. This work has been supported in part by the ECC, TMR Network $`EURODAPHNE`$ (ERBFMX-CT98-0169), and by DGESIC (Spain) under grant No. PB97-1261.
no-problem/9910/astro-ph9910221.html
ar5iv
text
# Evolution of Clustering and Bias in a ΛCDM Universe ## References Weinberg, D. H., Davé, R., Gardner, J. P., Hernquist, L., & Katz, N. 1999 in “Photometric Redshifts and High Redshift Galaxies”, eds. R. Weymann, L. Storrie-Lombardi, M. Sawicki & R. Brunner, (SF: ASP Conf Series) Pearce, F. R., et al. 1999, ApJ, 521, 99 Katz, N., Hernquist, L., Weinberg, D. H. 1999, ApJ, 523, 463
no-problem/9910/cond-mat9910032.html
ar5iv
text
# Numerical studies of the vibrational isocoordinate rule in chalcogenide glasses ## I Introduction Establishing the microscopic properties of disordered materials based on macroscopic probes is a difficult endeavor: the charateristic isotropy of these materials limits measurements to mostly scalar, orientation-averaged properties, reducing significantly the amount of information accessible compared with, for example, what is available in crystals. This is the case for scattering experiments. X-ray provides, after Fourier transform, only an isotropic radial distribution function. This smooth, structureless curve beyond medium-range order, can be reproduced numerically with a wide range of mutually inconsistent models. Such measurement can at most provide a way of eliminating bad models, but is useless as a tool for the positive identification among the other ones: any model that fails to produce a realistic RDF is clearly incorrect, this leaves, as shown using reverse Monte-Carlo, a wide range of incompatible models. The experimental evidence for an isocoordinate rule in chalcogenide glasses provides yet another example of the difficulty of extracting microscopic information from these disordered materials. This rule states that for a given average coordination, samples with varying compositions will display identical properties. The isocoordinate rule has already been noted for a wealth of mechanical and thermal properties such as ultrasonic elastic constants, hole relaxation, and glass transition temperatures and hardness, and was found to hold for the more complex vibrational density of states : systems as different as Se<sub>40</sub>As<sub>60</sub> and Se<sub>55</sub>As<sub>30</sub>Ge<sub>15</sub>, with an average coordination of $`r=2.6`$, show a similar VDOS in the transverse accoustic (TA) region. The vibrational isocoordinate rule (VIR) has, until now, only been checked experimentally, with the inherent limitations due to atomic species available and glass phase diagrams. This leaves a few questions open regarding the range of validity of this rule as well as its accuracy. In this paper, we present some results on a set of idealized models that provide a bound on these two questions. Analytical study of this type of problem is difficult. Due to the vectorial nature of the problem, even simplified topologies such as the Bethe lattice are difficult to treat meaningfully. With the additional topological disorder, analytical solutions are beyond reach. We report here on the results of direct numerical simulations on a model system with simplified dynamics. ## II Details of the simulation The simulations proceeded as follows. We start with a 4096-atom cell of Sillium – a perfectly tetravalent continuous-random network – constructed by Djordjevic et al. following the prescription of Wooten, Winer and Weaire; this network provides an idealized model with the appropriate initial topology. We then remove bonds at random in the network until we reach the desired concentration of 2-, 3- and 4-fold atoms. In the first stage of the simulation, we do not enforce any extra correlation and the final network corresponds to a perfectly random amorphous alloy. We then relax the network using a Kirkwood potential with interactions based on the table of neighbours, not on distance. $$E=\frac{\alpha }{2}\underset{ij}{}(L_{ij}L_0)^2+\frac{\beta }{8}L_0^2\underset{ijk}{}\left(\mathrm{cos}\theta _{jik}+\frac{1}{3}\right)^2$$ (1) where $`\alpha `$ and $`\beta `$ are taken to be the same for all bonds and $`L_0`$ is the ideal bond length. We take a ratio of three-body to two-body force, $`\beta /\alpha =0.2`$, typical of tetrahedral semiconductors . The resulting network is one of identical atoms except for the coordination. This is not too far from SeAsGe chalcogenides; because they sit side by side in the same row, these elements share very similar masses, elastic properties and Pauling electronegativities. As an additional simplification, we take the same tetrahedral angle for all triplets in the network. Real Se and As, in the respective 2- and 3-fold configuration, have angles that deviate from this value and tend towards 120 degrees. This simplification is less drastic than it appears because of the relatively low coordination, allowing a significant degree of flexibility in the network: angles can then be accomodated at very little elastic cost. A more serious concern is that although the 3 elements have very similar elastic constants in 4-fold environment, then bonding will get stronger as the coordination decreases. Comparison with experimental data shows that this effect shows up mostly in the high frequency TO peak. Moreover, because of the square root scaling, the deviation becomes apparent only between samples at the extreme of the composition scale. To verify the isocoordinate rule, we prepare 3 different compositions at each average coordination from $`r=2.0`$ to $`3.0`$ (except at $`r=2.0`$, where only two different cells are created). We then proceed to distribute at random a desired proportion of 2-, 3- and 4–fold atoms. The three configurations typically correspond to (1) a configuration with a maximum of 3-fold atoms for the given average coordination, (2) one with a maximum of 4–fold, and (3) a composition between the two. For example, at $`r=3.0`$, we create a configuration with 50 percent of 2–old and 50 percent of 4–fold atoms, one with 25, 50 and 25 percent of 2-, 3-, and 4–fold atoms, respectively, and one with 100 percent of 3-fold atoms. This gives us the widest spectrum possible to study the VIR. Because we are not constrained by the glass forming diagram, this is also wider than what can be achieved experimentally. After the topology has been determined, each sample is relaxed with the Kirkwood potential, using periodic boundary conditions. The dynamical matrix is then computed numerically on the fully relaxed configuration. This 12288 $`\times `$ 12288 matrix is then diagonalized exactly in order to obtain the full vibrational properties. The eigenvalues are binned and smoothed with a Gaussian of experimental width to provide the vibrational density of states presented in this paper. We also introduce some chemical correlations to see how sensitive the VIR is to local fluctuations. We study here two types of correlations: phase separation – intoducing some king of homopolar preference – and mixing, with heteropolar bonds. A cost function is introduced in the bond-distributing sub-routine and all other phases of simulation remain the same. ## III Results and discussion ### A The vibrational isocoordinate rule Figure 1 shows the vibrational density of state as a function of average coordination from $`r=2.0`$ to $`3.0`$. This distribution goes through the topological rigidity transition at $`r=2.4`$. First, we note that the VIR is approximately valid for two frequency bands: the transverse accoustic band – below $`f=0.7`$ and transverse optic band – above $`f=1.5`$. This holds for configurations with significant difference of composition, even configurations as different as the 100 percent 3-fold vs. 50-50 of 2- and 4–fold, show fairly similar VDOS in these regions. This is a wider application range than what was measured experimentally; the data reported by Effey and Cappelletti shows good overlap for the TA band but no consistent overlap in the higher frequency region of the VDOS. This is especially true of samples with widely different composition. The experimental shift in the TO peak seems to follow the concentration of Se. This is consistent with the expected increase in stiffness of the low coordinated atoms discussed above. Taking this into account, a second look at the figures indicates clearly that the VIR is not an exact law. The structure of the TA peak in the configurations with $`r=3.0`$ shows significant variations following the concentration of 3-fold coordinated atoms. This shoulder, slightly above $`f=0.5`$, decreases in importance with the average coordination. This effect is not seen experimentally; the range of compositions for the real samples, however, is much narrower than that studied here. Figure 1 also provides some indication about the relation between specific structures in the VDOS and the local environment. Because the VIR holds well, these are features that depend mostly weakly on the details of the composition and cannot be of much help in experimental situations. We have already mentioned the presence of additional modes related to 3-fold coordinated atoms on the high-frequency side of the TA band – a feature that is also present at high coordination in experimental measurement. Although this shoulder is significant, it represents the maximum impact it can have, with a direct comparison between a fully three-fold coordinated sample and one with zero such atoms. Given that the two network have a totally different topology, it is the similarity between the two curves rather than their differences that has to be emphasized. The structure between the TA and the TO bands is more directly sensitive to the details of local structure, especially at low average coordination where more modes become localized. The structure around $`f=1`$ consistently represents the four-fold coordinated atoms. We can compare these structures to a fully four-fold structure (Fig. 2). As we decrease the average coordination of the networks, we go through the topological rigidity threshold, at $`r=2.4`$. Below this value, the network becomes floppy and its macroscopic elastic constants vanish; local rigidity remains, however, and the VDOS is mostly unaffected except for a shift in the position of the peaks and an accumulation of modes at low frequencies. The signature of these zero-frequency modes is reported here in the backward peak formed at low frequencies and the accumulation at the lower-end of the TO peak in networks with large fraction of 2-fold coordinated atoms. The backward peak corresponds to spurious imaginary frequencies associated with floppy modes. Based on the theory of topological rigididity, these modes are localized above $`p_c`$ and span the whole network below this threshold. The floppiness of the network is also reflected in the TO peak. Strikingly, the width of this peak is much more related to the overall coordination of the network than with local environment. From the work of Alben and Weaire, the TA peak has been associated with the overall coordination while the TO peak had been ascribed to the local tetrahedral symmetry. Vibrational density of states of fully four-fold cells generally gives a much wider if somewhat lower TO peak with, in the case of ill-coordinated networks, a very flat structure. Such a wide peak has generally been associated with non-tetrahedral environment but it is clear, based on the results obtained here, that the present of strain is also necessary. Even at $`r=3.0`$ the network contains enough floppy modes to relax a good part of the strain: the optical modes become then more localized and emerge as a relatively sharp feature at the edge of the vibrational spectrum. As the averge coordination decreases the LA (around $`f=1.0`$) and LO ($`f=1.4`$) features also become more prominent. The first is particularly sensitive to the concentration of 4-fold atoms. This is especially clear at the lowest average coordination where this feature is totally absent in the Se<sub>80</sub>As<sub>20</sub> sample. At the same time, the LA peak is shifted upward to about $`f=1.15`$ and can therefore be associated with the presence of 3-fold atoms. This interpretation is in full agreement with the experimental results reported by Effey and Cappelletti. ### B Correlations The above results are all for non-correlated samples. It is not impossible, however, that chemical ordering might take place in chalcogenide glasses. To address this question, we prepared, at $`r=2.6`$, two configurations of nominal Se<sub>40</sub>As<sub>60</sub> with full mixing -heteropolar bonds fully favored – and demixing – homopolar bonds preferred. In the first case, no 2-fold atom is bonded to another 2-fold atom. In the second case, clustering tends to take place. Figure 3 shows the VDOS for these two samples as compared with the non-correlated case already described in the previous section. There is remarkably little impact on the spectrum. Although we see some broadening of the TA peak and a sharp structure at the far left of the TA peak in the demixed case, due to the very floppy selenium chains, the rest of the spectrum is essentially unchanged. Except, that is, for some shifting in the two peaks associated with the LA and LO peaks. Mixed environment seem to contribute more to the peak at 1.15 while demixed environments shift the peak to higher frequencies. The average of these two effects is manifest in the non-correlated spectrum. These changes in the spectrum are minor, however, and too subtle to allow any quantitative extraction from experimental data. The choice of $`r=2.6`$ was based on some peculiar spectra found experimentally for Se<sub>2</sub>As<sub>3</sub>. Although we have been unable to reproduce the experimental behavior, we can conclude from this section that simple two-body correlation is not sufficient to provide the type of structure in the VDOS seen experimentally. More striking local changes need to occur, such as pseudo-molecular constructions, which give rise to localized modes. ## IV Conclusions Following some interesting experiments displaying an isocoordinate rule for chalcogenide glasses, we studied the case of an ideal ternary glass consisting of identical atoms save for their coordination: same mass, same elastic constants same angular forces. It is then possible to examine the nature of this rule without interference from any other contribution to the vibrational density of state. We have examined the isocoordinate rule at average concentrations varying from 2.0 to 3.0. What we find is that the isoccordinate rule is not an exact one. However there is much less variation of the vibrational density of state for a given average concentration, even as the type of atom is varied completely, with, for example, from totally three-fold to a mixture containing only 2 and 4-fold atoms than changing the overall concentration of atoms by 0.1 or 0.2. Moreover, it appears that the VDOS is not very sensitive either to the presence of correlations. Changing considerably the local correlations affects very little even the qualitative structure of the VDOS. All these results, in parallel with the experimental results obtained recently point to the fact that it is difficult to obtain positive information simply from the VDOS: if two samples have a different VDOS, it is clear that they diverge; nothing can be said, however if they have the same VDOS. ## V Acknowledgements We thank Drs. Ronald Cappelletti and Birgit Effey for many stimulating discussions. NM thanks the NSF for support under Grant DMR 9805848, DAD thanks NSF for support under grants DMR 9618789 and DMR 9604921. \[ \] \[ \] \[ \]
no-problem/9910/hep-ph9910455.html
ar5iv
text
# Isospin Violation and the Proton’s Strange Form Factors ## INTRODUCTION The interaction between a proton and a neutral weak boson ($`Z^0`$) involves form factors which are related to the familiar electromagnetic form factors via the standard electroweak model. For example, the proton’s neutral weak vector form factors are $$G_X^{p,Z}(q^2)=\frac{1}{4}[G_X^p(q^2)G_X^n(q^2)]G_X^p(q^2)\mathrm{sin}^2\theta _W\frac{1}{4}G_X^s(q^2),X=E,M,$$ (1) where $`G_{E,M}^p(q^2)`$ and $`G_{E,M}^n(q^2)`$ are the usual electromagnetic form factors of the proton and neutron respectively, and $`G_{E,M}^s(q^2)`$ are called the proton’s strange electric and magnetic form factors. Using Eq. (1), an experimental measurement of $`G_X^{p,Z}(q^2)`$ leads to a determination of $`G_X^s(q^2)`$, which provides information about the effects of strange quarks in the proton. The first measurement of $`G_M^{p,Z}(q^2)`$ was reported two years ago by the SAMPLE Collaboration, and led to $$G_M^s(0.1\mathrm{GeV}^2)=0.23\pm 0.37\pm 0.15\pm 0.19.$$ (2) A linear combination of strange electric and magnetic form factors has been measured by the HAPPEX Collaboration: $$[G_E^s+0.39G_M^s](0.48\mathrm{GeV}^2)=0.023\pm 0.034\pm 0.022\pm 0.026.$$ (3) Further efforts are underway by various groups.<sup>∗\*</sup><sup>∗\*</sup>$``$See in particular the second SAMPLE measurement, Ref. , which appeared after the MENU99 conference. Using a calculation of electroweak corrections as input, they find $`G_M^s(0.1\mathrm{GeV}^2)=+0.61\pm 0.17\pm 0.21`$. It is important to recall that $`G_E^s(q^2)`$ and $`G_M^s(q^2)`$ contain more than just strangeness effects. Even in a world of only two flavours (up and down) $`G_{E,M}^s(q^2)`$ would be nonzero due to isospin violation. Thus, the true effects of strange quarks can only be extracted from an experimental determination of $`G_{E,M}^s(q^2)`$ if isospin violating effects can be calculated. Dmitrašinović and Pollock, and also Miller, have studied the isospin violating contribution to $`G_M^s(q^2)`$ within the nonrelativistic constituent quark model. Ma has used a light-cone meson-baryon fluctuation model. More recently, a model-independent study of isospin violating effects (using heavy baryon chiral perturbation theory) has been published and it is this work which will be emphasized below, after a brief review of attempts to calculate the authentic strange quark effects. ## THE STRANGENESS CONTRIBUTIONS TO $`𝑮_{𝑬\mathbf{,}𝑴}^𝒔\mathbf{(}𝒒^\mathrm{𝟐}\mathbf{)}`$ Many attempts have been made to calculate the contribution of strange quarks to the “strange” electric and magnetic form factors, $`G_{E,M}^s(q^2)`$. In principle a lattice QCD calculation could give the definitive answer, and an exploratory calculation has been performed in the quenched theory. The errors due to finite lattice spacing, finite lattice volume and quenching are not yet known, but the existing results, $`r_s^2_E6\mathrm{d}G_E^s(0)/\mathrm{d}q^2=0.060.16\mathrm{fm}^2`$ and $`G_M^s(0)=0.36\pm 0.20`$, still provide important inputs to the discussion. One might consider using chiral perturbation theory to calculate the strangeness contributions to $`G_{E,M}^s(q^2)`$, but both form factors have a free parameter at their first nonzero order in the chiral expansion, so the magnitude of neither form factor can be predicted from chiral symmetry alone. However, two experimental inputs are sufficient to fix these parameters, and chiral symmetry does determine the $`q^2`$-dependence of the form factors at leading chiral order. This tact has been taken by the authors of Ref. , who use the SAMPLE and HAPPEX measurements as input. Beyond lattice QCD and chiral perturbation theory, there are many models and dispersion relation methods which have been employed in the effort to determine the strange quark contributions to $`G_{E,M}^s(q^2)`$. (The authors of Refs. have collected some predictions from the literature.) The various methods lead to differing results. For $`G_M^s(0)`$, most predictions lie in the range $$0.5\stackrel{<}{}G_M^s(0)\stackrel{<}{}+0.05,$$ (4) and it has often been noted that this tendency toward a negative number does not seem to be supported by the experimental data, Refs. . Predictions for the magnitude and sign of $`r_s^2_E`$ also span a large range. A precise experimental measurement would help to distinguish between the various models of strangeness physics, but only after the isospin violating contribution has been calculated and subtracted. ## THE ISOSPIN VIOLATING CONTRIBUTIONS TO $`𝑮_{𝑬\mathbf{,}𝑴}^𝒔\mathbf{(}𝒒^\mathrm{𝟐}\mathbf{)}`$ In a world with no strange quark, $`G_E^s(q^2)`$ and $`G_M^s(q^2)`$ do not vanish. Instead, $$G_X^s(q^2)G_X^{u,d}(q^2)\mathrm{as}\mathrm{the}\mathrm{strange}\mathrm{quark}\mathrm{decouples},(X=E,M)$$ (5) where $`G_E^{u,d}(q^2)`$ and $`G_M^{u,d}(q^2)`$ are isospin violating quantities. If both the strange and isospin violating components of $`G_{E,M}^s(q^2)`$ are small, then contributions which are both isospin violating and strange are doubly suppressed. The following discussion considers $`G_{E,M}^{u,d}(q^2)`$ in a strange-free world. Constituent quark model calculations have led to a vanishing result for $`G_M^{u,d}(0)`$ and a very mild $`q^2`$ dependence: $`0.001<G_M^{u,d}(0.25\mathrm{GeV}^2)<0`$. There is no symmetry which would force $`G_M^{u,d}(0)`$ to vanish exactly, but perhaps the constituent quark model is trying to anticipate a “small” result. A light-cone meson-baryon fluctuation model permits a large range, $`G_M^{u,d}(0)=0.0060.088`$. Heavy baryon chiral perturbation theory (HBChPT) is a natural tool for the study of $`G_{E,M}^{u,d}(q^2)`$. It is a model-independent approach which employs a systematic expansion in small momenta ($`q`$), small pion masses ($`m_\pi `$), small QED coupling ($`e`$), large chiral scale ($`4\pi F_\pi `$) and large nucleon masses ($`m_N`$). It is appropriate to use $`O(q)O(m_\pi )O(e)`$ with $`4\pi F_\pi m_N`$, and then the HBChPT Lagrangian can be ordered as a single expansion, $$_{\mathrm{HBChPT}}=^{(1)}+^{(2)}+^{(3)}+^{(4)}+^{(5)}+\mathrm{}.$$ (6) For the explicit form of the Lagrangian, see Ref. and references therein. For the present discussion, it is simply noted that $`^{(1)}`$ contains parameters $`g_A`$, $`F_\pi `$ and $`e`$; $`^{(2)}`$ contains 11 parameters (7 strong and 4 electromagnetic); $`^{(3)}`$ contains 43 parameters; $`^{(4)}`$ contains hundreds of parameters and $`^{(5)}`$ has even more. Happily, it will be shown that $`G_E^{u,d}(q^2)`$ is parameter-free at its first nonzero order, and $`G_M^{u,d}(q^2)`$ is parameter-free at its first and second nonzero orders except for a single additive constant. The coupling of a vector current (e.g. $`Z^0`$) to a nucleon begins at first order in HBChPT, $`^{(1)}`$, but is isospin conserving. To be precise, recall the usual notation, $$N(\stackrel{}{p}+\stackrel{}{q})|\overline{f}\gamma _\mu f|N(\stackrel{}{p})\overline{u}(\stackrel{}{p}+\stackrel{}{q})\left[\gamma _\mu F_1^f(q^2)+\frac{i\sigma _{\mu \nu }q^\nu }{2m_N}F_2^f(q^2)\right]u(\stackrel{}{p}),$$ (7) where $`f`$ denotes a particular flavour of quark. The Sachs form factors for that flavour are $$G_E^f(q^2)=F_1^f(q^2)+\frac{q^2}{4m_N^2}F_2^f(q^2),G_M^f(q^2)=F_1^f(q^2)+F_2^f(q^2).$$ (8) An explicit calculation using $`^{(1)}+^{(2)}+^{(3)}`$ leads to isospin violating vector form factors which vanish exactly. At first glance this might seem surprising, but it can be readily understood as follows. An isospin violating factor, such as $`(m_nm_p)/m_p`$, is suppressed by two HBChPT orders. Moreover, the $`F_2`$ term in Eq. (7) has an extra explicit $`1/m_N`$ suppression factor, so isospin violating $`F_2`$ terms cannot appear before $`^{(4)}`$. Meanwhile, $`F_1`$ is constrained by Noether’s theorem (QCD’s flavour symmetries: upness and downness) to be unity plus momentum-dependent corrections, and dimensional analysis therefore requires a large scale, $`m_N`$ or $`4\pi F_\pi `$, in the denominator of all corrections. This demonstrates that both $`G_E^{u,d}(q^2)`$ and $`G_M^{u,d}(q^2)`$ vanish in HBChPT until the fourth order Lagrangian: $`^{(4)}`$. A leading order (LO) calculation of $`G_E^{u,d}(q^2)`$ or $`G_M^{u,d}(q^2)`$ involves tree-level terms from $`^{(4)}`$ plus one-loop diagrams built from $`^{(1)}+^{(2)}`$. Referring to Ref. for details of the calculation and renormalization, the results are $`G_E^{u,d}(q^2)|_{\mathrm{LO}}`$ $`=`$ $`{\displaystyle \frac{4\pi g_A^2m_{\pi ^+}}{(4\pi F)^2}}(m_nm_p)\left[1{\displaystyle _0^1}dx{\displaystyle \frac{1(14x^2)q^2/m_{\pi ^+}^2}{\sqrt{1x(1x)q^2/m_{\pi ^+}^2}}}\right],`$ (9) $`G_M^{u,d}(q^2)|_{\mathrm{LO}}`$ $`=`$ $`\mathrm{constant}{\displaystyle \frac{16g_A^2m_N}{(4\pi F)^2}}(m_nm_p){\displaystyle _0^1}dx\mathrm{ln}\left(1x(1x){\displaystyle \frac{q^2}{m_{\pi ^+}^2}}\right).`$ (10) Notice that the electric form factor contains no unknown parameters, and the magnetic form factor has only a single parameter (an additive constant). The LO results for $`G_E^{u,d}(q^2)`$ and $`G_M^{u,d}(q^2)G_M^{u,d}(0)`$ are plotted in Fig. 1. The contribution of isospin violation to $`r_s^2_E`$ is $`6\mathrm{d}G_E^{u,d}(0)/\mathrm{d}q^2+0.013\mathrm{fm}^2`$. Consider next-to-leading order (NLO). Here, one expects tree-level terms from $`^{(5)}`$ plus one- and two-loop diagrams built from lower orders in the Lagrangian. Since small HBChPT expansion parameters without uncontracted Lorentz indices come in pairs (e.g. $`q^2`$, $`m_\pi ^2`$, $`e^2`$), the $`^{(5)}`$ counterterms can contribute to $`F_1`$ but not to $`F_2`$. Thus $`G_M^{u,d}(q^2)`$ is independent of these parameters at NLO, although $`G_E^{u,d}(q^2)`$ is not. It is also found that no two-loop diagrams contribute to $`G_M^{u,d}(q^2)`$ at NLO, although in principle they could have. Furthermore, unknown coefficients from $`^{(3)}`$ are also permitted to appear within loops, but none of them actually contribute. This means that the NLO corrections to $`G_M^{u,d}(q^2)`$ are basic one-loop diagrams. The explicit result is given in Ref. . It needs to be stressed that the NLO contribution is parameter-free; the only new quantities (with respect to LO) are the well-known nucleon magnetic moments. The LO+NLO result for $`G_M^{u,d}(q^2)G_M^{u,d}(0)`$ is shown in Fig. 2. Notice that the NLO corrections serve to soften the $`q^2`$-dependence. The NLO correction to $`G_M^{u,d}(0)`$ is $$G_M^{u,d}(0)|_{LO+NLO}G_M^{u,d}(0)|_{LO}=\frac{24\pi g_A^2m_{\pi ^+}}{(4\pi F)^2}(m_nm_p)\left(\frac{5}{3}\mu _p\mu _n\right)0.013.$$ (11) The value of $`G_M^{u,d}(0)`$ itself is not determined by chiral symmetry alone, and it receives contributions from physics other than the “pion cloud” of HBChPT (consider, for example, isospin violation due to vector mesons). The pion cloud contribution to $`G_M^{u,d}(0)`$ is estimated in Ref. via a physically-motivated cutoff in HBChPT, and is comparable in size to the NLO contribution of Eq. (11). The full result for the pion cloud contribution to $`G_M^{u,d}(q^2)`$ is shown in Fig. 2 with error bands to reflect truncation of the HBChPT expansion: the narrow band assumes $`|\mathrm{NNLO}||\mathrm{NLO}|m_\pi /m_N`$ and the wide band assumes $`|\mathrm{NNLO}||\mathrm{NLO}|/2`$. Fig. 1. Parameter-free results for $`G_E^{u,d}(q^2)`$ and $`G_M^{u,d}(q^2)G_M^{u,d}(0)`$ at LO in HBChPT. Fig. 2. The pion cloud contribution to $`G_M^{u,d}(q^2)`$ at LO+NLO, with uncertainties due to truncation of the HBChPT expansion. ## CONCLUSIONS The strange vector form factors of the proton are basic to an understanding of proton structure. The contribution due to strange quarks has proven to be a theoretical challenge. Isospin violation also contributes to the so-called “strangeness” form factors, and this contribution must be calculated and subtracted from experimental data before the strange quark contribution can be identified. The present work indicates that chiral symmetry is of great value for discussions of the isospin violating effects. Despite the large number of parameters in the Lagrangian, $`G_E^{u,d}(q^2)`$ is parameter-free at leading order, and $`G_M^{u,d}(q^2)`$ has only one ($`q^2`$-independent) parameter at leading order, and no parameters at next-to-leading order. The isospin violating effects computed here are large compared to some models of the strange quark effects, but small compared to other models. The experimental results for the full “strangeness” form factors in Eqs. (2) and (3) are not precise enough to indicate their size relative to the isospin violating contributions found in this work. It will be interesting to see what future experiments reveal. This work was supported in part by the Natural Sciences and Engineering Research Council of Canada.
no-problem/9910/cond-mat9910205.html
ar5iv
text
# Tunneling between the Edges of Two Lateral Quantum Hall Systems The edge of a two-dimensional electron system (2DES) in a magnetic field consists of one-dimensional (1D) edge-channels that arise from the confining electric field at the edge of the specimen<sup>1-3</sup>. The crossed electric and magnetic fields, E x B, cause electrons to drift parallel to the sample boundary creating a chiral current that travels along the edge in only one direction. Remarkably, in an ideal 2DES in the quantum Hall regime all current flows along the edge<sup>4-6</sup>. Quantization of the Hall resistance, $`R_{xy}=h/Ne^2`$, arises from occupation of N 1D edge channels, each contributing a conductance of $`e^2/h^{711}`$. To explore this unusual one-dimensional property of an otherwise two-dimensional system, we have studied tunneling between the edges of 2DESs in the regime of integer quantum Hall effect (QHE). In the presence of an atomically precise, high-quality tunnel barrier, the resultant interaction between the edge states leads to the formation of new energy gaps and an intriguing dispersion relation for electrons traveling along the barrier. The absence of tunneling features due to the electron spin and the persistence of a conductance peak at zero bias are not consistent with a model of weakly interacting edge states. The edge channels of the QHE is a prototypical one-dimensional electronic system. Other such one-dimensional model systems include semiconductor based quantum wires<sup>12</sup>, carbon nanotubes<sup>13,14</sup>, and molecular chain materials<sup>15</sup>. While these systems force the electrons to move along a preferential direction, the edge channels of a QHE system is unique in its ability to adjust itself spatially as well as energetically. A localized defect is easily avoided by the edge current by simply skirting the impurity potential. In real samples, variations in the potential landscape can generate a complex and intriguing topology of edge channels<sup>16,17</sup>. Consequently, most real edge channels are ill defined, both spatially and energetically, and it becomes difficult to generalize various experimental geometries in terms of 1D edge states. While typical experiments seek to study edge states in lithographically defined geometries<sup>18-19</sup>, their edges typically fluctuate on the scale of a magnetic length. In order to address the energetics of edge channels, well-defined geometries with clearly delineated edges are essential. In particular for tunneling experiments, where distance factors exponentially into the tunneling current, exact knowledge of the shape and the magnitude of the barrier is crucial in making contact with model calculations. Our 2DES-barrier-2DES (2D-2D) tunneling device consists of two stripes of 2DESs separated by an atomically precise 88Å-thick semiconductor barrier, fabricated by cleaved edge overgrowth, as shown in Fig.1a. We explore the lateral tunneling between these two physically separated 2DESs in the QHE regime. Fig.1c shows the differential conductance of the high density sample at 9.2T and 6.0T. At 9.2T, $`dI/dV_{bias}`$ is strongly suppressed around zero bias, while oscillatory conductance peaks appear above a threshold electric field. Each peak represents the onset of an additional tunneling path through the barrier. Successive conductance peaks are nearly equally spaced with a spacing on the order of the cyclotron energy $`\mathrm{}\omega _c`$ ($``$17meV at 10T) and the amplitude of their oscillation decreases. Most of the $`dI/dV_{bias}`$-traces resemble the 9.2T conductance data. However, for certain ranges of magnetic fields, there is no threshold to tunneling and a very sharp and tall conductance peak arises at zero bias, as illustrated by the 6T data. Fig.2 shows an image map of the $`dI/dV_{bias}`$ data as a function of Landau level filling factor, $`\nu =nh/eB`$, and the normalized bias voltage, $`eV_{peak}/\mathrm{}\omega _c`$. Blue and red represent small and large $`dI/dV_{bias}`$ signals, respectively. With scaled axes, the disparate data from two different samples with very different electron densities join and produce a universal tunneling map for the 2D-2D tunneling. Altogether, an intriguing pattern arises with a continuous progression of conductance maxima (black squares and circles) from the low density to the high density data. The $`dI/dV_{bias}`$ plot exhibits gap-like thresholds that define regions of vanishing $`dI/dV_{bias}`$ in which the tunneling is strongly suppressed. Around zero bias, tunneling is suppressed in a bell-shaped region for filling factors $`\nu <1`$. Similar thresholds occur at higher fillings near $`\nu 2`$. Between fillings $`\nu =1.21.5`$ and 2.2 - 2.5, a sharp conductance peak dominates at zero bias. The positions of the oscillatory conductance peaks vary roughly linearly at high fillings and produce a fan of maxima that branches out from zero bias. Some of these branches cross each other above $`\nu >1`$. For fillings $`\nu <1`$ the different branches tend to saturate. In analogy to the edge-state transport in the QHE, we consider the 2D-2D tunneling in terms of 1D edge states in the presence of a barrier. The edge states around the perimeter of the 2D-2D system are the same as in a simple 2DES, whose dispersion relation along the edge is approximately parabolic. This is no longer the case along the barrier. Fig. 3a illustrates the spatial dependence of the Landau level energy in its vicinity. The energy levels E(x) of Fig. 3a correspond to the energy of an electron with orbit guiding center $`x`$. Since $`x=k_y\mathrm{}_{}^2`$, the energy diagram of Fig. 3a is also the 1D dispersion relation, $`E(k_y)`$, for electrons traveling along the barrier in the y-direction with velocity $`v=h^1dE/dk_y=h^1\mathrm{}_{}^2dE(x)/dx`$. Electrons on the left side of the barrier counter-propagate with respect to those on the right side of the barrier. At the intersections of the two sets of rising Landau ladders, there exist two oppositely traveling, degenerate edge states with the same wavevector, $`k_y=x/\mathrm{}_{}^2`$. The degeneracy at the crossings is lifted by the formation of a series of small energy gaps that separate the symmetric and antisymmetric combinations of the underlying wavefunctions as seen in Fig. 3a. Altogether, Fig 3a bears a surprising resemblance with the data of Fig. 2. Electronic transport along the barrier depends crucially on the position of the Fermi energy with respect to this complex 1D barrier level structure<sup>20,21</sup>. When the Fermi level resides below the first maximum, say at $`E_1`$ in Fig. 3a, electrons in both 2DESs follow two separate, counter-propagating tracks along the junction as depicted in inset A of Fig. 3a. Although traveling along the barrier, they have very different $`k_y`$-wave vectors and are practically uncoupled. Therefore, tunneling through the barrier is negligible, corresponding to the absence of a peak at zero bias at low filling factor. As the Fermi level reaches the first maximum at $`E_2`$, the edge states on both sides of the barrier have identical wavevector and resonate, which leads to substantial tunneling. In fact, the tracks of the wavefunctions have insignificantly changed as compared to inset A, but coupling between both edges has vastly increased due to the equivalence of their $`k_y`$-momentum. Consequently, at this particular position of $`E_F`$, the conductance at zero bias turns finite, as observed in the zero bias peak of our data. According to the barrier level scheme in Fig. 3a, the edge states from the second Landau level, N = 1, are also occupied for this range of fillings, but remain uncoupled. As the Fermi level rises slightly above this critical position and into the gap of the dispersion relation, electrons can no longer travel along the barrier, as shown in the inset B of Fig. 3a. Consequently, coupling between both 2DESs should vanish and tunneling should cease. This represents an interesting paradox. If electrons can no longer travel along the barrier and, due to the chirality of the edge channel, are not allowed to backscatter, one would think they need to tunnel. This should give rise to a conductance of $`e^2/h`$, which is, however, two orders of magnitude larger than observed. The detailed tunneling and scattering model at this position of $`E_F`$ remains unclear. In any case, our model does not seem to agree with experiment, in which the zero-bias peak persists for a range of filling factors $`1.2<\nu <1.5`$, rather than existing at just one B-field. This may be related to electron scattering along the barrier, which relaxes k-conservation and washes out the dispersion relation. It is also unclear why the conductance vanishes abruptly at $`\nu 1.5`$. This may coincide with $`E_F`$ reaching the top of the energy gap of the N = 0 branch, where both 2DESs couple once again along the barrier. A sharp zero-bias conduction and a strong suppression of tunneling alternate as $`E_F`$ moves through the higher lying gaps in Fig. 3a. What is the origin of the complex overall pattern of Fig. 2, away from $`V_{bias}=0`$, and what is its relationship to Fig. 3a? At finite bias, one set of Landau levels of Fig. 3a is raised with respect to the other by an energy $`eV_{bias}`$, as shown in Fig. 3b. This results in a shift of the intercepts and the conditions for onset of conduction is moved to a different energy. For example, when $`E_F=E_1`$ in Fig. 3a, tunneling across the barrier is inhibited, due to the absence of coupling of the edge states. In Fig. 3b the application of $`eV_{bias}`$ has shifted the crossing so as to coincide with $`E_1`$. This immensely enhances the coupling between both 2DESs in analogy to position $`E_2`$ in Fig. 3a and provides an explanation for the shift of the onset of tunneling to higher $`V_{bias}`$ for smaller $`\nu `$ in Fig. 2. In general, whenever the Fermi energy coincides with one of the crossing points, electrons can tunnel through the barrier and relax to the Fermi energy of the 2DES, lying $`eV_{bias}`$ below. Each additional coincidence of the levels produces a peak in $`dI/dV_{bias}`$. We can track their positions if we neglect the small gaps at the crossings. The Landau ladders on both sides are approximately described by $`{\displaystyle \frac{E_L}{\mathrm{}\omega _c}}`$ $`=`$ $`\left({\displaystyle \frac{x}{\mathrm{}_{}}}+\sqrt{N_L+1}\right)^2+(N_L+{\displaystyle \frac{1}{2}})`$ (2) $`\mathrm{and}`$ $`{\displaystyle \frac{E_R}{\mathrm{}\omega _c}}`$ $`=`$ $`\left({\displaystyle \frac{x}{\mathrm{}_{}}}\sqrt{N_R+1}\right)^2+(N_R+{\displaystyle \frac{1}{2}}){\displaystyle \frac{eV_{bias}}{\mathrm{}\omega _c}}`$ (3) continued by flat Landau levels for $`\frac{x}{\mathrm{}_{}}\sqrt{N+1}`$. Neglecting $`N_R`$ and $`N_L`$ versus $`eV_{bias}/2\mathrm{}\omega _c`$, which is justified for large sections of Fig. 2, and assuming that $`\frac{E_L}{\mathrm{}\omega _c}\nu `$, which holds for very broad Landau levels, as is likely the case in our lower-mobility specimen, we arrive at $`\nu \left({\displaystyle \frac{eV_{bias}}{2\mathrm{}\omega _c}}+\sqrt{N_L+1}\right)^2+(N_L+{\displaystyle \frac{1}{2}}).`$ (4) as the condition for coincidence. Eq. 1, which, together with its $`LR`$ mirror image describes the barrier level structure of Fig. 3a, is identical to eq. 2, which identifies the peaks in the differential conductance in Fig. 2 as long as $`\frac{E_L}{\mathrm{}\omega _c}`$ is replaced by $`\nu `$ and $`x/\mathrm{}_{}`$ is replaced by $`eV_{bias}/2\mathrm{}\omega _c`$. This resolves the puzzle of the intriguing similarity between both graphs. While we can account for the general features of our experimental results, many aspects of the data remain unresolved and require an explanation beyond our simple model. One such feature is the extended existence of the zero-bias peak, which is expected to occur only at the point of coincidence of $`E_F`$ with the edges of the gaps. The origin of this discrepancy may be a relaxed $`k`$-conservation due to remnant disorder along the barrier, which scatters electrons, broadens the dispersion relation and therefore extends the allowed energy range for strong tunneling. Such a broadening may also account for the conductance which is much reduced compared to $`e^2/h`$: The sharp resonance with universal conductance is broadened into a band of much lower, average conductance. However, the sharpness of the zero-bias peak as a function of voltage bias implies limited broadening. A proper accounting of the role of disorder in 2D-2D tunneling requires a detailed, quantitative analysis of our 2D-2D device. Another puzzling feature of our data is the position in filling factor at which the zero bias conductance peak appears. According to Fig. 3a, the first coincidence of the Landau ladders occurs at $`\nu >4`$, somewhat above the N = 1 Landau level, contrary to the experimental value of $`\nu 1.2`$. Similarly, the next coincidence is expected for $`\nu >6`$, while it is observed at $`\nu 2.2`$. These observations point to a shifting of the levels in addition to a possible broadening. It could arise from self-consistent screening and from an accumulation of charge in the vicinity of the barrier<sup>22</sup>, which modifies the level scheme. Finally, the influence of the electron spin on our experiment as well as on the dispersion relation in Fig. 3a remains unclear. While the Zeeman splitting in GaAs is only 1/70 of the cyclotron splitting, partial and spatially dependent occupation of Landau levels near the barrier can produce large spatially dependent enhancement of the $`g`$-factor<sup>1,2</sup>. This can give rise to additional coincidences – possibly over ranges of fillings and will appear in the tunneling characteristics, in particular, if spin-flip scattering is strong. This absence of spin-dependent features in the tunneling data remains a key puzzle. Numerical studies of our comparably simple physical system should provide guidance as to the relative importance of different mechanisms. ————————— 1. Prange, R.E. & Girvin, S.M. (eds) The Quantum Hall Effect 2nd edn (Springer, New York, 1990). 2. Das Sarma, S. & Pinczuk, A. (eds) Perspectives in Quantum Hall Effects (Wiley Inter- Science, New York, 1997). 3. Halperin, B.I. Quantized Hall conductance, current-carrying edge states, and the existence of extended states in a two-dimensional disordered potential. Phys. Rev. B 25, 2185-2188 (1983). 4.MacDonald, A.H. & Streda, P. Quantized Hall effect and edge currents. Phys. Rev. B 29, 1616-1619 (1984). 5. Apenko, S.M. & Lozovik, Yu. E. J. The quantized Hall effect in strong magnetic fields. Phys. C 18, 1197-1203 (1985). 6. Fontein, P.F. et al. Spatial potential distribution in GaAs/Al<sub>x</sub>Ga<sub>1-x</sub>As heterostructures under quantum Hall conditions studied with the linear electro-optic effect. Phys. Rev. B 43, 12090-3 (1991). 7. Buttiker, M. Absence of backscattering in the quantum Hall effect in multiprobe conductors. Phys. Rev. B 38, 9375-9389 (1988). 8. Streda, P., Kucera, J. & MacDonald, A.H. Edge states, transmission matrices, and the Hall resistance. Phys. Rev. Lett. 59, 1973-1975 (1987). 9. Jain, J.K. & Kivelson, S.A. Landauer-type formulation of quantum-Hall transport: critical currents and narrow channels. Phys. Rev. B 37, 4276-4279 (1988). 10. Haug, R.J., MacDonald, A.H., Streda, P. & von Klitzing. Quantized multichannel magnetotransport through a barrier in two dimensions. K. Phys. Rev. Lett.61, 2797-2800 (1988). 11. Washburn, S., Fowler, A.B., Schmid, H. & Kern, D. Quantized Hall effect in the presence of backscattering. Phys. Rev. Lett. 61, 2801-2804 (1988). 12. Yacoby, A. et. al. Non-universal conductance quantization in quantum wires. Phys. Rev. Lett. 77, 4612-4615 (1996). 13. Wildoer, J.W. G., Venema, L.C., Rinzler, A.G., Smalley, & R.E., Dekker, C. Electronic structure of atomically resolved carbon nanotubes. Nature 391, 59-62 (1998). 14. Odom, T.W., Huang, J., Kim, P. & Lieber, C.M. Atomic structure and electronic properties of single walled carbon nanotubes. Nature 391, 62-64 (1998). 15. Ishiguro, T., Yamaji, K., & Saito, G. Organic Supercondcutors 2nd edn (Springer- Verlag, New York, 1998). 16. Tessmer, S.H., Glicofridis, P.I., Ashoori, R.C., Levitov, L.S., & Melloch, M.R. Surface charge accumulation imaging of a quantum Hall liquids. Nature. 392, 51-54 (1998). 17. McCormick, K.L. et. al. Scanned potential microscopy of edge and bulk currents in the quantum Hall regime. Phys. Rev. B 59, 4654 (1999). 18. Goldman, V.J. & Su, B. Resonant tunneling in the quantum Hall regime: measurement of fractional charge. Science 267, 1010-12 (1995). 19. Tarucha, S., Honda, T., & Saku, T. Reduction of quantized conductance at low temperatures observed in 2 to 10 $`\mu m`$-long quantum wires. Solid State Communications, 94, 413-18 (1995). 20. Ho, T.L. Oscillatory tunneling between quantum Hall systems. Phys. Rev. B 50, 4524-4533 (1994). 21. Girvin, S.M. Private Communication. 22. Wulf, U., Gudmundsson, V. & Gerhardts, R.R. Screening properties of the two- dimensional electron gas in the quantum Hall regime. Phys. Rev. B. 38, 4218-4230 (1988). 23. Pfeiffer, L.N. et. al. Formation of a high quality two-dimensional electron gas on cleaved GaAs. Appl. Phys. Lett 56, 1697-1699 (1990). 24. Chang, A.M., Pfeiffer, L.N., & West, K.W. Observation of chiral Luttinger behavior in electron tunneling into fractional quantum Hall edges. Phys. Rev. Lett. 77, 2538-2341 (1996). We are very grateful to S.M. Girvin for providing us with much insight into the intricate energetics of our experimental geometry. We would also like to thank R. De Picciotto, A. M. Chang, T.L. Ho, and J.P. Eisenstein for valuable discussions. Correspondence and requests for materials should be addressed to W.K.(e-mail: wkang@rainbow.uchicago.edu). Fig. 1. Structure and differential conductance measurement of the 2D-2D tunneling device. (a). The junctions are fabricated by cleaved edge overgrowth in molecular beam epitaxy (MBE)<sup>23,24</sup>. The first growth on a standard (100) GaAs substrate consists of an undoped 13$`\mu m`$ GaAs layer followed by a 88Å-thick digital alloy of undoped Al<sub>0.1</sub>Ga<sub>0.9</sub>As/AlAs, and completed by a 14$`\mu m`$ layer of undoped GaAs. This triple-layer sample is cleaved along the (110) plane in an MBE machine and a modulation-doping sequence is grown over the exposed edge. It consists of a 3500Å-thick AlGaAs layer, delta-doped with Si at a distance of 500Å from the interface. Carriers from the Si impurities transfer only into the GaAs layers of the cleaved edge, forming two stripes of 2DESs of width 13$`\mu m`$ and 14$`\mu m`$ separated from each other by a 88A-thick, Al<sub>0.1</sub>Ga<sub>0.9</sub>As/AlAs barrier. The sample is fabricated into a mesa incorporating the barrier and the two 2DESs. Contacts are made to the 2DESs, far away from the tunneling region. (b) Schematic band structure of the 2D-2D tunneling device. Two different samples with electron density of $`n=1.1\times 10^{11}cm^2`$ and $`n=2.0\times 10^{11}cm^2`$ are studied. From a simultaneously grown monitor wafer we estimate the mobility of the 2DESs in the device to be $`1\times 10^5cm^2/Vsec`$. (c) Two representative traces of differential conductance through the tunneling barrier with electron density $`n=2.0\times 10^{11}cm^2`$. A low-frequency AC-technique (typically 10 $`\mu V`$ amplitude) is employed to measure the differential conductance, $`dI/dV_{bias}`$, through the barrier in the presence of a DC voltage bias, $`V_{bias}`$. The samples are measured at T = 300mK in a magnitude field. Near zero bias differential conductances, $`dI/dV_{bias}`$, nearly vanish, while conductance oscillations on the order of $`10^6\mathrm{\Omega }^1`$ are found at high bias. Fig. 2. Evolution of differential conductance of 2D-2D tunneling devices under high magnetic fields. The Landau level filling factor, $`\nu =nh/eB`$, and the normalized bias, $`eV_{peak}/\mathrm{}\omega _c`$, are used as universal axis for the 2D-2D tunneling data from samples with electron densities, $`n=2.0\times 10^{11}cm^2`$ (top) and $`n=1.1\times 10^{11}cm^2`$ (bottom). Their maxima are indicated by black circles and squares, respectively. Blue (red) regions represent minimum (maximum) conductance. In the blue region, conductances, $`dI/dV_{bias}`$,nearly vanish, while the red areas represent conductances on the order of $`10^6\mathrm{\Omega }^1`$. Fig. 3. Schematic energy dependence of the Landau levels in the vicinity of the barrier. (a). Shown for zero voltage bias. Far away from the barrier Landau levels are equally spaced, $`E_N=(N+1/2)\mathrm{}\omega _c`$. As electrons approach the edge or the barrier their energy rises parabolically. $`{\displaystyle \frac{E}{\mathrm{}\omega _c}}=\left(\left|{\displaystyle \frac{x}{\mathrm{}_{}}}\right|+\sqrt{N+1}\right)^2+(N+{\displaystyle \frac{1}{2}}),N=0,1,2,\mathrm{}\mathrm{and}\left|{\displaystyle \frac{x}{\mathrm{}_{}}}\right|\sqrt{N+1},`$ (5) otherwise $`\frac{E}{\mathrm{}\omega _c}=N+\frac{1}{2}`$ where $`\mathrm{}_{}=\sqrt{eB/h}`$ is the magnetic length. In the vicinity of the barrier, these parabolas anti-cross and create a gapped spectrum. The traces represent the energy of an electron whose guiding center is located at $`x`$. N is the Landau level index, $`E_1`$ through $`E_3`$ represent Fermi energies at different filling factors. Inserts represent the in-plane track of the electronic wavefunction. (b) Same as Fig. 3a but with a bias of $`2\mathrm{}\omega _c`$ applied across the barrier.
no-problem/9910/cond-mat9910178.html
ar5iv
text
# Lattice-Independent Approach to Thermal Phase Mixing ## I Introduction During the past decade, the study of equilibrium and nonequilibrium dynamics of field theories has greatly benefited from the widespread availability of workstations capable of millions of floating point operations per CPU-second. One of the most popular applications of computers in the physical sciences is the examination of phenomena which are generated by nonperturbative effects. These include nonlinear dynamical systems with a few, several, or an infinite number of degrees of freedom. Of these, we are particularly interested here in the latter, as they represent a unique challenge to computational physics. Implementing field theories in the computer implies discretizing not only time but also space: the system is cast on a finite lattice with a discrete spatial step, effectively cutting off the theory both in the infrared (by the lattice size) and in the ultraviolet (by the lattice spacing). Although in classical field theories an ultraviolet cutoff solves the Rayleigh-Jeans ultraviolet catastrophe, the solution comes with a high price tag: whenever there is dynamical mixing of short and long wavelength modes, the results will in general depend on the shortest distance scale in the simulation, the lattice spacing. To be sure, in many instances this dependence on small spatial scales does not affect qualitatively the physics one is interested in: for example, very near criticality for Ising systems, where spatial correlations in the order parameter diverge , or are controllable in some way . However, in many other cases one is interested in achieving a proper continuum limit on the lattice which is independent of the choice of ultraviolet cutoff. These include a wide range of phenomena which have triggered much recent interest, from pattern formation in fluid dynamics to simulations of phase transitions and topological defect formation, which often use stochastic methods . In the present work, we are concerned with curing, or at least greatly alleviating, the lattice-spacing dependence that appears in stochastic simulations of scalar field theories. We will show that it is indeed possible to obtain results which are lattice-spacing independent, as long as proper counterterms are added to the lattice effective potential. Following a suggestion by Parisi , lattice-spacing independent results were recently obtained by J. Borrill and one of us within the context of finite-temperature symmetry restoration in a simple Ginzburg-Landau model . However, that study focused on a regime where the large temperatures needed for symmetry restoration compromised the approach to obtain lattice-spacing independent results, which is based on a perturbative expansion in powers of the temperature. Furthermore, no attempt was made to obtain results which were independent of the renormalization scale. Thus, in that study, the numerical prediction for the critical temperature depends on the particular choice of renormalization scale. Here, we would like to apply an expanded version of the method proposed in Ref. to a related problem, phase mixing in Ginzburg-Landau models. The distinction between phase mixing and symmetry restoration is made clearer through the following argument. Suppose a system described by a Ginzburg-Landau free energy density with odd powers of the order parameter $`\varphi (t,𝐱)`$ is rapidly cooled from a high temperature to a temperature where two phases can coexist. The system was cooled so as to remain entirely in one of the two phases. The odd term(s) could be due to an external magnetic field (linear term), or to the integration of other fields coupled to $`\varphi `$, as in certain gauge theories (cubic term) , or in de Gennes-Landau models of the nematic-isotropic transition in liquid crystals . Due to the odd terms, there is a free-energy barrier for large-amplitude fluctuations between the two phases. \[Of course, small-amplitude fluctuations within each phase are also possible, but less interesting.\] This barrier is usually controlled by the coefficients of the odd terms in the Ginzburg-Landau model, which we will call the control parameter(s). Suppose now that the system is held at the temperature where the two phases have the same free energy densities (sometimes called the critical temperature in the context of discontinuous phase transitions) and that we are free to change the value of the control parameter(s). The question we would like to address is how does the system behave as a function of the control parameter(s), that is, as the free-energy barrier for large-amplitude fluctuations between the two phases is varied. As is well known, the mean-field theory approach breaks down when large-amplitude fluctuations about equilibrium become large enough. Thus, we should expect that the prediction from the Ginzburg-Landau model, that the system remains localized in one phase until the barrier disappears (when the control parameter goes to zero) will eventually be wrong. There will be a critical value for the control parameter beyond which nonperturbative effects lead to the mixing of the two phases. \[Note that due to the odd terms, there is no symmetry to be restored.\] In the language of the Ginzburg-Landau model, the system should at this point be described as having a single well, centered at the mean value of the order parameter. We would like to obtain the lattice-independent critical value of the control parameter for this phase mixing to occur. It is important to distinguish between phase coexistence and phase mixing. As is well-known, phase coexistence will generally occur when a system is cooled into the so-called phase coexistence region of the phase diagram. In this case, the system will relax into its lowest free-energy configuration via spinodal decomposition. Here, we are preparing the system initially outside the phase coexistence region, namely in one particular phase only. In the infinite-volume limit, mean field theory predicts the system will remain there, since, as the two minima are degenerate, the nucleation of a critical droplet would cost an infinite amount of free energy. Phase mixing is a nonperturbative phenomenon characterized by large-amplitude fluctuations not included in the mean-field approach. It signals the breakdown of mean-field theory. This paper is organized as follows: In the next section we describe the continuum model we use and some of its properties. In section 3 we describe the lattice implementation and how simulations using a bare lattice potential give results which depend severely on the lattice spacing. In section 4 we show how to cure this dependence, and also how to make the simulations independent of the choice of renormalization scale. We conclude in section 5 with a brief summary of our results and a discussion of future work. ## II The Model in the Continuum Our starting point is the 2-dimensional Hamiltonian, (we use $`c=k_B=1`$) $$\frac{H[\varphi ]}{T}=\frac{1}{T}d^2x\left[\frac{1}{2}\left(\varphi \varphi \right)+V(\varphi )\right],$$ (1) where the homogeneous part of the free energy density is $$V(\varphi )=\frac{a}{2}(T^2T_2^2)\varphi ^2\frac{\alpha }{3}T\varphi ^3+\frac{\lambda }{4}\varphi ^4.$$ (2) This choice of $`V(\varphi )`$ is inspired by several models of nucleation in the condensed matter and high-energy physics literature, in particular in recent models of the electroweak phase transition . The several parameters in $`V(\varphi )`$ allow one to apply it to several situations of interest. However, we note that here the order parameter is a scalar quantity, and thus the critical behavior of this model belongs to the universality class of the 2-dimensional Ising model . It is quite straight forward to generalize our results to systems in different numbers of spatial dimensions. At the critical temperature $`T_c^2=T_2^2/(12\alpha ^2/9a\lambda )`$, the system exhibits two degenerate free energy minima at $$\varphi =0\mathrm{and}\varphi _+=\frac{2\alpha T_c}{3\lambda },$$ (3) while at the temperature $`T_2`$ the barrier between the two phases disappears. Throughout this work, we will be interested in the behavior of the system at $`T_c`$. One reason for this choice has to do with the use of a perturbative expansion which is in powers of $`T`$; at $`T_c`$ the expansion parameter is sufficiently small, allowing us to stay at 1-loop. Another reason is that we are interested in measuring the breakdown of mean-field theory in terms of parameters controlling the free-energy barrier, and the calculations are much simpler at $`T_c`$, as we will see next. According to the model described by Eq. 2, at $`T_c`$, unless $`\alpha =0`$ (or $`\lambda \mathrm{}`$) there will always be a barrier separating the two phases: at $`T_c`$, this model does not predict phase mixing to occur. It is thus very convenient to introduce the shifted field $$\varphi \varphi ^{}\varphi \frac{T\alpha }{3\lambda },$$ (4) and write the shifted homogeneous free energy density as (dropping the primes) $$V_0(\varphi )=\frac{1}{2}\mu ^2(T)\varphi ^2+\frac{\lambda }{4}\varphi ^4+A(T)\varphi +\mathrm{constants},$$ (5) with $$\mu ^2(T)a(T^2T_2^2)+\frac{T^2\alpha ^2}{3\lambda },$$ (6) and $$A(T)a(T^2T_2^2)\frac{T\alpha }{3\lambda }+\frac{2}{27}\frac{T^3\alpha ^3}{\lambda ^2}.$$ (7) The shifted free energy density is just the usual Ginzburg-Landau free energy density with an external magnetic field $`A(T)`$. Note that $`A(T_c)=0`$ and the two minima are degenerate, as they should be. We now introduce the dimensionless variables $`\theta T/T_2`$, $`\stackrel{~}{t}\sqrt{a}T_2t`$, $`\stackrel{~}{x}\sqrt{a}T_2x`$, $`\stackrel{~}{\varphi }\frac{\varphi }{\sqrt{T_2}}`$, according to which we can write, at $`\theta _c=\left[1\frac{2\stackrel{~}{\alpha }^2}{9\stackrel{~}{\lambda }}\right]^{1/2}`$, $$\stackrel{~}{V}_0=\frac{1}{2}\stackrel{~}{\mu }^2(\theta _c)\stackrel{~}{\varphi }^2+\frac{1}{4}\stackrel{~}{\lambda }\stackrel{~}{\varphi }^4,$$ (8) where $`\stackrel{~}{\lambda }\frac{\lambda }{aT_2}`$, $`\stackrel{~}{\alpha }\frac{\alpha }{a\sqrt{T_2}}`$, and $$\stackrel{~}{\mu }^2=\frac{\mu ^2}{aT_2}=(\theta _c^21)+\frac{\theta _c^2\stackrel{~}{\alpha }^2}{3\stackrel{~}{\lambda }}.$$ (9) Since we will keep the system at $`\theta _c(\stackrel{~}{\alpha },\stackrel{~}{\lambda })`$, the only two control parameters are $`\stackrel{~}{\alpha }`$ and $`\stackrel{~}{\lambda }`$. In what follows, we will fix $`\stackrel{~}{\lambda }=0.1`$ for simplicity. This was also the choice in a previous study of phase mixing in the same system, which did not address the issue of lattice-spacing dependence . We will also drop all tildes, except in the plots and captions, where unshifted, dimensionless variables are used and marked explicitly. ## III Numerical Results: Bare Lattice ### A Description of the Simulation As mentioned in the introduction, we would like to study the behavior of the system described in the previous section when coupling to an external thermal bath promotes fluctuations about equilibrium. We will consider the situation where the system is initially prepared in the phase given by $`\varphi =0`$ in the unshifted potential or, more generically, the left well. Since we are only interested in the final equilibrium value of the system, we will simulate the coupling of the scalar field $`\varphi `$ to the thermal bath using a generalized Langevin equation, $$\frac{^2\varphi }{t^2}=^2\varphi \eta \frac{\varphi }{t}\frac{V_0}{\varphi }+\xi (𝐱,t),$$ (10) where the viscosity coefficient $`\eta `$, set equal to unity in all simulations, is related to the stochastic force of zero mean $`\xi (𝐱,t)`$ by the fluctuation-dissipation relation, $$\xi (𝐱,t)\xi (𝐱^{},t^{})=2\eta \theta \delta (𝐱𝐱^{})\delta (tt^{}).$$ (11) The system is discretized and put on a square lattice with side length, $`L`$, equal to 64 for all the simulations, but several lattice spacings, $`\delta x`$, and time steps, $`\delta t`$, are used. For $`\delta x=`$1.0, 0.8, and 0.2 the respective time steps are $`\delta t=`$ 0.2, 0.1, and 0.02. We have, of course, checked the stability of the program for these choices of lattice parameters. Using a standard second-order staggered leapfrog method (which is second order in both space and time) we can write, $`\dot{\varphi }_{i,m+\frac{1}{2}}`$ $`=`$ $`{\displaystyle \frac{(1\frac{1}{2}\eta \delta t)\dot{\varphi }_{i,m\frac{1}{2}}+\delta t(^2\varphi _{i,m}V_0^{}(\varphi _{i,m})+\xi _{i,m})}{1+\frac{1}{2}\eta \delta t}}`$ (12) $`\varphi _{i,m+1}`$ $`=`$ $`\varphi _{i,m}+\delta t\dot{\varphi }_{i,m+\frac{1}{2}}`$ (13) where $`i`$-indices are spatial and $`m`$-indices temporal, overdots represent derivatives with respect to $`t`$ and primes with respect to $`\varphi `$. The discretized fluctuation-dissipation relation now reads $$\xi _{i,m}\xi _{j,n}=2\eta \theta \frac{\delta _{i,j}}{\delta x^2}\frac{\delta _{m,n}}{\delta t}$$ (14) so that $$\xi _{i,m}=\sqrt{\frac{2\eta \theta }{\delta x^2\delta t}}G_{i,m}$$ (15) where $`G_{i,m}`$ is taken from a zero-mean unit-variance Gaussian. ### B Results from Bare Lattice Simulations Keeping the system always at the critical temperature $`\theta _c`$, we are interested in its behavior as the free-energy barrier between the two equilibrium phases is changed. We will measure the value of the ensemble-averaged and area-averaged order parameter $`\varphi _A(t)1/Ad^2x\varphi (𝐱,t)`$ for several choices of the lattice spacing $`\delta x`$, taking note of its final equilibrium value, $`\overline{\varphi }_{\mathrm{eq}}`$. In figure 1 we show the results for $`\varphi _A(t)`$ for several choices of lattice spacing and $`\alpha =0.45`$. The dependence on lattice-spacing is quite evident; different lattices produce different physics. In figure 2, we show the phase diagram depicting phase mixing as a function of $`\alpha `$ for different choices of the lattice spacing $`\delta x`$. The phase diagram is constructed by defining the “phase-mixing order parameter”, $$\delta _\varphi (\alpha )|\overline{\varphi }_{\mathrm{eq}}\varphi _{\mathrm{max}}|/\varphi _{\mathrm{max}},$$ (16) where $`\varphi _{\mathrm{max}}=\alpha \theta _c/3\lambda `$ is the location of the maximum of the free energy density separating the two phases. Clearly, as $`\alpha `$ decreases, the free-energy barrier decreases and larger-amplitude fluctuations between the two phases become more probable. Below a critical value $`\alpha _c`$, $`\overline{\varphi }_{\mathrm{eq}}`$ just tracks the location of the maximum, indicating complete phase mixing, or the breakdown of the mean field theory of Eq. 1. The problem, though, is that phase mixing, or the breakdown of mean-field theory, occurs for values of $`\alpha _c`$ which are strongly dependent on the value of $`\delta x`$, as can be seen from figure 2. For the range of $`\delta x`$ investigated, $`0.2\delta x1`$, we obtained $`0.355\alpha _c0.40`$. In the next section, we argue that this dependence can be effectively cured by including proper counterterms to the lattice potential. ## IV Approaching the Continuum on the Lattice ### A Computing the Lattice Effective Potential Setting up a continuum system on a lattice introduces two artificial length scales, the ultraviolet momentum cutoff $`\mathrm{\Lambda }=\pi /\delta x`$ and the infrared momentum cutoff $`\mathrm{\Lambda }_L=\pi /L`$, where $`L`$ is the lattice size. In the continuum limit, $`L\mathrm{}`$, and $`\delta x0`$ or, equivalently, the number of degrees of freedom $`N=(L/\delta x)^d\mathrm{}`$. The coupling to the thermal bath induces fluctuations at all allowed length scales. We should thus expect that the lattice simulation is related to a continuum model with both infrared and ultraviolet cutoffs. In order to obtain the lattice effective potential, we start by analyzing the divergences of the related continuous model. For classical field theories in 2 dimensions, the corresponding 1-loop corrected effective potential is given by $`V_{1\mathrm{L}}(\varphi )`$ $`=`$ $`V_0(\varphi )+{\displaystyle \frac{T}{2}}{\displaystyle _{\mathrm{\Lambda }_L}^\mathrm{\Lambda }}{\displaystyle \frac{d^2p}{(2\pi )^2}}\mathrm{ln}\left(p^2+V_0^{\prime \prime }\right)`$ (19) $`+\mathrm{counterterms},`$ where the primes denote derivatives with respect to $`\varphi `$. Performing the integration and making all variables dimensionless we obtain, $`V_{1\mathrm{L}}(\varphi )`$ $`=`$ $`V_0(\varphi )+{\displaystyle \frac{\theta }{8\pi }}V_0^{\prime \prime }\left[1\mathrm{ln}\left({\displaystyle \frac{\mathrm{\Lambda }_L^2+V_0^{\prime \prime }}{\mathrm{\Lambda }^2}}\right)\right]`$ (23) $`{\displaystyle \frac{\theta }{8\pi }}\mathrm{\Lambda }_L^2\mathrm{ln}\left(\mathrm{\Lambda }_L^2+V_0^{\prime \prime }\right)+B\varphi ^2`$ $`+\mathrm{constants},`$ The infrared cutoff does not introduce a divergence as $`\mathrm{\Lambda }_L0`$, but it does introduce finite corrections to $`V_{1\mathrm{L}}`$, or finite size effects, which become small as $`L`$ increases. These become more severe near criticality, but well-known scaling behavior can be used to regulate this dependence . As we will further argue below, for our purposes we can safely set $`\mathrm{\Lambda }_L=0`$. This is not the case for the ultraviolet cutoff. The reader can see now why it is useful to use the shifted potential of Eq. 5 as opposed to the original one of Eq. 2: all divergences are quadratic in $`\varphi `$, simplifying the computations considerably, while the physical results, of course, remain unchanged. This is why we added only the counterterm $`B\varphi ^2`$ above. The counterterm $`B`$ is computed by imposing the renormalization condition $$V_{1L}^{\prime \prime }(\varphi _{\mathrm{RN}})=V_0^{\prime \prime }(\varphi _{\mathrm{RN}})=M^2,$$ (24) where $`M`$ is the arbitrary renormalization scale and we write $`\varphi _{\mathrm{RN}}\sqrt{\frac{M^2+\mu ^2}{3\lambda }}`$. \[Note that $`M`$ here is dimensionless (tilde is dropped), being defined as $`\stackrel{~}{M}=M/T_2`$.\] One obtains, $$B(M)=\frac{\theta }{16\pi }\left[V_0^{\prime \prime \prime \prime }\mathrm{ln}\left(\frac{V_0^{\prime \prime }}{\mathrm{\Lambda }^2}\right)+\frac{(V_0^{\prime \prime \prime })^2}{V_0^{\prime \prime }}\right]_{\varphi =\varphi _{RN}}.$$ (25) Applying this to the shifted potential of Eq. 5, we obtain, for the 1-loop renormalized continuum potential, $`V_{1L}^M(\varphi )`$ $`=`$ $`\left[{\displaystyle \frac{1}{2}}\mu ^2+{\displaystyle \frac{9\lambda \theta }{8\pi }}+{\displaystyle \frac{3\lambda \theta \mu ^2}{4\pi M^2}}\right]\varphi ^2+{\displaystyle \frac{\lambda }{4}}\varphi ^4`$ (28) $`+A\varphi {\displaystyle \frac{3\lambda \theta }{8\pi }}\varphi ^2\mathrm{ln}\left({\displaystyle \frac{\mu ^2+3\lambda \varphi ^2}{M^2}}\right)`$ $`+{\displaystyle \frac{\mu ^2\theta }{8\pi }}\mathrm{ln}(\mu ^2+3\lambda \varphi ^2)+\mathrm{constants}.`$ Recall that at $`\theta _c`$ the linear term proportional to $`A(\theta )`$ vanishes. Since the counterterm cancels the dependence on the ultraviolet cutoff, we define the lattice effective potential as $$V_{\mathrm{latt}}(\varphi )=V_0(\varphi )+B(M)\varphi ^2.$$ (29) In figure 3 we show the results of repeating the simulations of figure 1 but now adding the counterterm to the lattice simulations following Eq. 29. The addition of the counterterm practically eliminates the lattice-spacing dependence of the results. Figure 3 also shows the near elimination of lattice-spacing dependence for $`\alpha =0.40`$. ### B Extracting the Critical Value of the Order Parameter In figures 4 and 5, we show the phase diagrams using $`\delta _\varphi `$ defined in Eq. 16 as a function of $`\alpha `$ for different choices of lattice spacing $`\delta x`$. These are to be compared with figure 2. Figure 4 is for a choice of renormalization scale $`M=1`$, while figure 5 is for $`M=10`$. It is clear that the results for different lattice spacings converge around one value of $`\alpha _c`$. We compute $`\alpha _c`$ as follows: for a given value of $`\alpha `$ we perform several ($`i_{\mathrm{max}}`$) measurements of $`\overline{\varphi }_{\mathrm{eq}}`$ by varying the lattice spacing, which we call $`\overline{\varphi }_{\mathrm{eq}}^i(\alpha )`$. Their average is simply $`\overline{\varphi }_{\mathrm{eq}}(\alpha )=\left[_1^{i_{\mathrm{max}}}\overline{\varphi }_{\mathrm{eq}}^i(\alpha )\right]/i_{\mathrm{max}}`$, while the departure from the average for each measurement is, $`\mathrm{\Delta }\overline{\varphi }_{\mathrm{eq}}^i=|\overline{\varphi }_{\mathrm{eq}}^i\overline{\varphi }_{\mathrm{eq}}|/\overline{\varphi }_{\mathrm{eq}}`$. Near criticality, the results are naturally poorer due to the existence of long-range correlations in the field. We can use this fact to our advantage, since we expect that, at criticality, the departure from the average defined above is maximized, that is, the quantity $$\mathrm{\Delta }\varphi _{\mathrm{eq}}(\alpha )\frac{\underset{1}{\overset{i_{\mathrm{max}}}{}}\mathrm{\Delta }\overline{\varphi }_{\mathrm{eq}}^i}{i_{\mathrm{max}}},$$ (30) reaches a maximum at $`\alpha _c`$. This can be clearly seen from figure 6 for the same choices of lattice spacings (or coarse-graining scales) as in figures 4 and 5. The measured value of $`\alpha _c`$ is now $`\alpha _c0.365\pm 0.005`$, for $`M=1`$, and $`\alpha _c0.435\pm 0.005`$ for $`M=10`$. We have thus achieved lattice-spacing independence on the measurement of $`\alpha _c`$. Clearly, the error in $`\alpha _c`$ could be further decreased by taking a larger number of measurements of $`\overline{\varphi }_{\mathrm{eq}}^i`$. However, since our main goal here is to show the convergence of the results for different lattice spacings, we are not concerned with very high-accuracy measurements. Nevertheless, the values for $`\alpha _c`$ still depend on the renormalization scale, which is arbitrary. In the next subsection, we show how to obtain lattice results which are independent of $`M`$. ### C Achieving independence of renormalization scale on the lattice As with conventional renormalization theory, the renormalized potential should not depend on the choice of renormalization scale . One usually solves the renormalization group equations to find how the couplings vary with the scale. Here, we propose a simpler approach which works quite well on the lattice implementation of scalar field theories. It is an interesting question how to generalize it to more complex models. Consider the 1-loop renormalized potential $`V_{1\mathrm{L}}^M(\varphi )`$ as given in Eq. 28. The superscript $`M`$ is a reminder that this potential is renormalized at a given scale $`M`$. Now consider an equivalent potential renormalized at another scale $`M^{}`$, $`V_{1\mathrm{L}}^M^{}(\varphi )`$. Since the divergences are quadratic, this potential has a shifted mass $`\mu ^2`$. By imposing that the two potentials are identical, $`V_{1\mathrm{L}}^M(\varphi )=V_{1\mathrm{L}}^M^{}(\varphi )`$, we obtain a condition on the shifted mass $`\mu ^2`$, approximating $`\mathrm{ln}\left(\mu ^2+3\lambda M^{}\right)\mathrm{ln}\left(\mu ^2+3\lambda M\right)`$, $$\mu _{}^{}{}_{}{}^{2}\mu ^2+\frac{3\lambda \theta }{4\pi }\mathrm{ln}\left(\frac{M^2}{M^2}\right)\frac{3\lambda \theta \mu ^2}{2\pi }\left[\frac{1}{M^2}\frac{1}{M^2}\right].$$ (31) Thus, we can always relate a theory with a choice of $`M`$ to any other theory with $`M^{}`$ by redefining the mass $`\mu ^2`$ according to Eq. 31. We claim that this is also the case for the lattice effective potential. As an illustration, we show the phase diagram for $`M^{}=10`$ in figure 7, where the results for $`M^{}=10`$ were obtained after scaling $`\mu ^2`$ according to Eq. 31 in the lattice potential of Eq. 29. It is practically indistinguishable from the phase diagram for $`M=1`$ shown in figure 4. Figure 8 demonstrates clearly that $`M^{}=10`$ has the identical $`\alpha _c`$ previously found for $`M=1`$, within our level of accuracy. This is in stark contrast to figure 6, where the values of $`\alpha _c`$ for $`M=1`$ and $`M=10`$ were very different, as evidenced by its “twin peaks” structure. ## V Summary and Outlook We have investigated the continuum limit of lattice simulations of stochastic scalar field theories. In particular, we have proposed a method to obtain not only lattice-spacing independent results, but also results independent of the renormalization scale of the lattice effective potential. We illustrated our approach by examining a Ginzburg-Landau model which exhibits phase mixing depending on the values of the parameters controlling the free-energy barrier for large-amplitude fluctuations between the two low-temperature phases in our model. Thermal fluctuations of the order parameter are induced by coupling it to a thermal bath at fixed temperature $`T_c`$, defined as the temperature where the two phases have the same free energy density. We simulate the dynamics using a generalized Langevin equation with Gaussian noise, which brings the system to its final equilibrium state. The results were presented in terms of phase diagrams which clearly illustrate the effectiveness of our approach. We also proposed a simple way of determining the critical value of the control parameter for phase mixing, which uses the spread in values of the equilibrium order parameter around criticality for different choices of lattice spacing (or coarse-graining scales). Thus, we effectively turn a weakness of lattice simulations into a strength, something that can be useful for the examination of critical phenomena of continuous field theories in fairly small lattices. We plan to expand the present study to investigate the effects of spatio-temporal memory on the dynamics of nonequilibrium fields. Recent results have shown that the effective Langevin equation for self-coupled scalar systems exhibits colored and multiplicative noise . It is possible to expand the two-point function characterizing the noise (or noises) in terms of a “persistence factor”, which defines short or long-term memory, spatial, temporal, or both. The possible impact of this kind of noise on the nonequilibrium dynamics of fields remains largely unexplored. ## VI Acknowledgements C.G. was supported in part by NASA through the New Hampshire NASA Space Grant (NGT5-40010). M.G. was supported in part by an NSF Presidential Faculty Fellows award PHY-9453431. M.G. thanks the Particle Theory Group at Boston University, where parts of this work were developed, for their hospitality. Carmen Gagne: carmen.gagne@dartmouth.edu Marcelo Gleiser: marcelo.gleiser@dartmouth.edu
no-problem/9910/astro-ph9910562.html
ar5iv
text
# Effects of Kerr Spacetime on Spectral Features from X-Ray Illuminated Accretion Discs ## 1 Introduction Radiation emitted in the X-ray band by Active Galactic Nuclei (AGN) and Galactic Black Hole Candidates (BHCs) exhibits the imprints of strong gravitational fields and orbital rapid motion of matter near a black hole. Here we adopt the model with a rotating back hole surrounded by an accretion disc. The very first studies of light propagation in the Kerr metric were performed in the 1970’s (Bardeen, Press & Teukolsky 1972; Cunningham & Bardeen 1973; Cunningham 1975), and it was argued that the observed radiation should be substantially affected by the presence of a black hole and by its rotation. In the last few years, even before the great excitement aroused by the ASCA detection of the relativistic iron K$`\alpha `$ line in the spectrum of the Seyfert 1 galaxy MCG-6-30-15 (Tanaka et al. 1995), many authors modelled the effects of Special and General Relativity on the line profiles under various physical and geometrical assumptions (e.g. Fabian et al. 1989). Calculations of line profiles from a disc-like source in Kerr spacetime have been performed by Laor (1991), Kojima (1991), Hameury, Marck & Pelat (1994), Karas, Lanza & Vokrouhlický (1995), Bromley, Chen & Miller (1997), Fanton et al. (1997), Dabrowski et al. (1997), Čadež, Fanton & Calvani (1998), and others. Nowadays, these studies are particularly relevant for the understanding of AGN in view of the above mentioned detection of line profiles which suggests substantial relativistic effects in many objects (Nandra et al. 1997). Various simplifying assumptions have been adopted in previous calculations. Line profiles have been often calculated independently of the underlying reflection continuum (Guilbert & Rees 1988; Lightman & White 1988), which is however produced along with the line after illumination of the disc by some primary X-ray source. Even when considered, calculations of light propagation were performed in Schwarzschild metric. Matt, Perola & Piro (1991) adopted a weak-field approximation. Recently, Maciołek-Niedźwiecki & Magdziardz (1998), and Bao, Wiita & Hadrava (1998) made use of fully relativistic codes, but still in Schwarzschild metric. Morever, a simple power-law parameterization of the disc emissivity has been usually adopted, e.g. the one which follows from the Page & Thorne (1974) model, while the actual emissivity is substantially more complex and depends on the geometry of the illuminating matter (Matt, Perola & Piro 1991; Martocchia & Matt 1996). Self-consistent calculations of iron lines and continuum together are still missing in the case of Kerr metric. This problem is thus examined in the present paper. Reflected light rays are properly treated as geodesics in the curved spacetime and, furthermore, light propagation from the primary, illuminating source to the reflecting material is also calculated in a fully relativistic approach. The adopted point-like geometry for the primary X-ray emitting region (Sec. 2.1) is clearly a simplification, but can be considered as a rough phenomenological approximation to more realistic models. Off-axis flares, which would be expected from magnetic reconnection above the accretion disc, have been considered by Yu & Lu (1999) and Reynolds et al. (1999) in Schwarzschild and Kerr metric, respectively. More complex scenarios, like hot coronae and non-keplerian accretion flows, would be very interesting to explore but are beyond the scope of this paper. ## 2 An illuminated disc in Kerr metric ### 2.1 The model We adopt the model described by Martocchia & Matt (1996): a geometrically thin equatorial disc of cold (neutral) matter, illuminated by a stationary point-like source on the symmetry axis at height $`h`$. The local emissivity of the disc has been computed following Matt, Perola & Piro (1991), taking into account the energy and impinging angle of illuminating photons, as seen by the rotating matter in the disc. Then, transfer of photons leaving the disc was carried out according to Karas, Vokrouhlický & Polnarev (1992). A similar model of illumination has been used by Henry & Petrucci (1997), who call it an anisotropic illumination model, and by other authors (Reynolds et al. 1999, and references cited therein). Bao, Wiita & Hadrava (1998) used a fixed direction of impinging photons (as if the source were distant, possibly displaced from the rotation axis); however, these authors did not solve the radiation transfer within the disc. This is an important point, as different values of $`h`$ correspond to substantially different illumination of the disc in the local frame corotating with the matter, and consequently to different emissivity laws, $`I(r,h)`$. With decreasing $`h`$, the effect of light bending is enhanced and the fraction of X-ray photons impinging onto the disc is increased with respect to those escaping to infinity and contributing to the direct (primary) continuum component. Moreover, photons arriving to the disc at radii $`\mathrm{}<h`$ are blueshifted, so that the fraction of photons with energies above the photoionization threshold is increased. It has been argued (e.g. Martocchia & Matt 1996) that a way to discriminate between static and spinning black holes could be based on the fact that the innermost stable orbit $`r_{\mathrm{ms}}(a,m)`$ of a fast-rotating black hole lies close to the event horizon and approaches the gravitational radius $`r_\mathrm{g}(a,m)m`$ for a maximally rotating Kerr hole with the limiting value of $`am`$ (we use standard notation, Boyer-Lindquist coordinates<sup>1</sup><sup>1</sup>1We recall that the Boyer-Lindquist radial coordinate is directly related to the circumference of $`r=\mathrm{const}`$ circles in the equatorial plane. Coordinate separation between $`r_{\mathrm{ms}}(a,m)`$ and $`r_\mathrm{g}`$ obviously decreases when $`am`$, but proper radial distance (which has direct physical meaning) between these two circles increases as $`(ma)^{1/6}`$. What is however essential for our discussion of observed radiation fluxes are local emissivities and the total outgoing flux, which is obtained by summing over individual contributions of $`r=\mathrm{const}`$ rings. When expressed in terms of $`r`$, as we see in Fig. 2, the local emissivity is large near the inner edge and becomes very anisotropic when $`a`$ approaches its limiting value for the maximally rotating hole. and geometrized units $`c=G=1`$; e.g. Chandrasekhar 1983). Highly redshifted features would then represent an imprint of photons emitted at extremely small disc radii, which is possible only near fast-rotating black holes. Other explanations are also viable but require more complicated models of the source (compared to purely Keplerian, geometrically thin discs). Reynolds & Begelman (1997) pointed out that the difference between spectra of rapidly versus slowly rotating black holes would be much smaller if efficient line emission is allowed also from free-falling matter inside the last stable orbit, and they applied this assumption to the reddish line profile observed during a low-flux state of MCG–6-30-15 (Iwasawa et al. 1996). If this is the case, the presence of an extended red tail of the line could no longer be used as an evidence for rapid rotation of the black hole, whereas validity of the “spin paradigm” (the often made suggestion that rotating black holes are associated with jet production and radio-loudness) remains preserved. The problem can be solved by calculating in detail the optical thickness and the ionization state of the free-falling matter, as in Young, Ross & Fabian (1998), who noted that the reflection component in MCG–6-30-15 is not consistent with the expected ionization state of the matter inside $`r_{\mathrm{ms}}`$. ### 2.2 The emissivity laws We used a Monte Carlo code to calculate photon transfer within the disc (Matt, Perola & Piro 1991). The resulting local spectra in the frame comoving with the disc matter are shown in Figure 1. Assumptions about local emissivity, disc shape and rotation law can be varied in our code in order to account for different accretion models, but here we describe only the case of standard Keplerian, geometrically thin and optically thick discs for simplicity. Let us note that any radial inflow decreases the observed line widths when compared with the corresponding case of a Keplerian disc. While in other works the disc emissivity has been often described as a power law ($`r^s`$), we made use of the emissivities derived by Martocchia & Matt (1997) through integration of geodesics from the primary source (Fig. 2). The source distance $`h`$ stays here, instead of the power law index $`s`$, as one of the model parameters. ### 2.3 Spectral features Illumination of cold matter in the disc by the primary, hard X-ray flux results in a Compton-reflection component with specific signatures of bound-free absorption and fluorescence. The most prominent iron features are gathered in a narrow energy range: K$`\alpha `$ and K$`\beta `$ lines with rest energies at 6.4 keV and 7.07 keV, respectively, and the iron edge at 7.1 keV. On the other hand, the overall continuum is rather broad, and it is best illustrated in the energy range $`E=2`$$`220`$ keV (Figures 3 and 4). The continuum gets broader with increasing inclination due to Doppler shifts. The large spread in blue- and red-shifts blurs the photoelectric edge at 7.1 keV and results in broad troughs. This can be seen better after increasing the energy resolution, i.e. in the narrow band spectra (next section). The overall spectrum is also sensitive to inclination angles, being more intense when seen pole-on ($`\theta _\mathrm{o}=0`$). Line profiles in real spectra must result from a subtraction of the proper underlying continuum, taking into account the relativistic smearing of the iron edge. This work has been already started by several authors in Schwarzschild metric (Maciołek-Niedźwiecki & Magdziarz 1998; Young, Ross & Fabian 1998; $`\dot{\mathrm{Z}}`$ycki, Done & Smith 1997 and 1998) and developed further here, in the case of sources around rotating black holes. The effects of inclination on the smearing and smoothing of all spectral features may be dramatic in Kerr metric not only because of substantial energy shifts of photons emitted at the innermost radii, but also due to the mutual combination of this effect on both the iron line and the underlying continuum. The spectral features thus spread across a broad range of energies and may become difficult to observe. Similar behaviour could be obtained for static black holes if efficient emission were allowed from inside the innermost stable orbit, but Young, Ross & Fabian (1998) noticed that in this case a large absorption edge beyond 6.9 keV should appear, and this is usually not observed in AGN. ### 2.4 The line profile The qualitative behaviour of the observed line profiles as a function of observer’s inclination is very intuitive. When a disc is observed almost pole-on, the iron line gets somewhat broadened and redshifted because of the deep potential well and transverse Doppler effect due to the rapid orbital motion of the matter. This effect is more pronounced in the extreme Kerr case, when the emitting material, still on stable orbits, extends down almost to the very horizon. The broadening of the observed spectral features is particularly evident when strongly anisotropic emissivity laws, resulting from small $`h`$, are considered. As the disc inclination increases, the iron line becomes substantially broader, with the well-known double-peaked profile due to the Doppler shift of the radiation coming from opposite sides of the disc. The interplay of Doppler effects and gravitational light bending determines the details of the profile. The relative distance of the two horns increases with the inclination angle; then the horns and the iron edge almost disappear as individual and well recognizable features for very high inclinations when the Doppler effect is maximum (disc observed edge-on). Therefore, such horns are well visible only at intermediate inclinations. In this situation the blue peak is substantially higher than the red one, due to Doppler boosting. Quantitative account of all these effects requires to adopt specific models and to calculate profiles numerically. Figures 510 show the line profiles corresponding to different $`h`$ in our model. It is evident that the effects of the anisotropic illumination can be enormous, causing a substantial amount of the re-emitted flux to be highly redshifted, especially in the low-$`h`$ case. When the source is very close to the black hole, because of strong anisotropy only the innermost part of the disc contributes to the line and to the reflection continuum fluxes. As a consequence, spectral features can be huge in Kerr metric, whereas in the Schwarzschild case they gradually disappear if no efficient reemission is possible from $`r<6m`$. The adopted emissivity law is clearly a key ingredient in the calculations of reflected spectra. Flat local emissivity laws apply when the source is distant from the hole ($`hr_\mathrm{g}`$), and result in spectra which show very weak dependence on the black hole angular momentum. On the other hand, with steep emissivities (low $`h`$) the observed spectra strongly depend on $`a/m`$. The detailed line profiles contain a wealth of information which can in principle be compared with observed data, but it may be useful to describe them also by integral quantities which can be determined also from data with lower resolution. In the next section we will consider the line equivalent width (EW), the centroid energy ($`E_{\mathrm{cen}}`$), and the geometrical width ($`\sigma `$). Here, EW is defined in terms of radiation fluxes (line and continuum) as $`\mathrm{EW}=F_{\mathrm{line}}(E)dE/F_{\mathrm{cont}}(E=E_{\mathrm{line}})`$, where the underlying continuum can be either the direct, or the reflected one, or their sum; i.e. $`F_{\mathrm{cont}}=F_{\mathrm{dir}}+F_{\mathrm{ref}}`$. $`E_{\mathrm{line}}`$ is the rest energy of the line, i.e. 6.4 keV for the iron K$`\alpha `$ line. ## 3 Combined effects and integral quantities When dealing with low energy resolution detectors and/or faint sources, a detailed study of the line profile may not be possible. In these cases, one can resort to integral quantities. In Figs. 1113 we present the equivalent widths of the iron K$`\alpha `$ fluorescent line in different cases. As a check, we verified that in the Schwarzschild case the results were in agreement with those of Matt et al. (1992). We carried out calculations for several different values of the model parameters $`h`$, $`a`$, $`\theta _\mathrm{o}`$, and for different sizes of the disc. Apart from the above mentioned strong $`h`$-dependencies of the observed spectra, it turns out that the radial extension of the disc is an important parameter determining the reflection continuum and affecting EWs. In Fig. 11, the height of the source is fixed to $`20m`$. The upper three curves correspond to an outer radius $`r_{\mathrm{out}}=1000m`$, while those below to $`r_{\mathrm{out}}=100m`$. For both sets, the curves refer to (from top to bottom): (i) $`r>r_{\mathrm{in}}=1.23m`$ (the innermost stable orbit in Kerr metric, given the fiducial value of $`a/m=0.9981`$, Thorne 1974); (ii) $`r>r_{\mathrm{in}}=6m`$ (the innermost stable orbit in Schwarzschild metric): we found that the resulting values for static and for rotating holes are very similar in this case; (iii) $`r>r_{\mathrm{in}}=10m`$: values for static and for rotating black holes are almost identical. In the other figures the strong dependence on the source parameter $`h`$ is evident. In our calculations we also accounted for the effect of the light-rays distortion on the primary flux, i.e. for the fact that the solid angle $`\mathrm{\Omega }`$ of the direct continuum which escapes to infinity (arriving directly from the primary source to the observer) diminishes considerably when the source is near the black hole (low $`h`$). The effect is accounted for by introducing an $`h`$-dependent factor $`f(h)=\mathrm{\Omega }_{inf}/\mathrm{\Omega }_{inf,\mathrm{cl}}`$ which multiplies $`F_{\mathrm{dir}}`$ in the definition of EW. As expected, $`f`$ approaches unity if the source is far away from the hole. Figure 12 shows the equivalent width as a function of the cosine of inclination, $`\mu =\mathrm{cos}\theta _\mathrm{o}`$. The slope of EW$`(\mu )`$ gets inverted around $`h10m`$: for larger values of $`h`$ we obtain EW decreasing with $`\mu `$, as in the Schwarzschild case, whereas for $`h\mathrm{}<10m`$ EW increases when $`\mu `$ decreases. This is a direct consequence of both the large efficiency of line emission and the enhanced influence of light bending from the innermost Kerr orbits, which strongly affects the profiles. This behaviour is less pronounced when EWs with respect to the total continuum (direct plus reflected; cf. solid lines) are considered, the reason being that the Compton-reflected contribution $`F_{\mathrm{ref}}`$ increases together with the line contribution, $`F_{\mathrm{line}}`$, when the primary source height decreases, and eventually dominates. In Figure 13, EWs versus three different definitions of the continuum are shown for the sake of illustration: thick solid lines refer to EWs with respect to the total underlying continuum, taking into account the solid angle distortion due to the spacetime curvature; thin solid lines have been computed in a similar manner, but with respect to the direct continuum only; finally, the dash-dotted lines have been plotted with respect to the direct continuum only, and they do not include the solid angle distortion effect. It is worth noticing that, due to efficient emission from the innermost region, in the extreme Kerr case one obtains EW values very much enhanced at low $`h`$’s. Table 1 reports integral quantities (centroid energy, line width, and equivalent width with respect to the total continuum) of the line profiles for selected values of the parameters $`h`$ and $`\theta _\mathrm{o}`$. The table refers to a disc extending up to $`100`$ gravitational radii and a maximally spinning black hole; values for a static hole (with $`r_{\mathrm{in}}=6m`$) are given in parentheses for comparison. In Table 1, $`E_{\mathrm{cen}}`$ and $`\sigma `$ are expressed in keV while EW is in eV. Finally, a word of caution is needed here on the effect of the iron abundance. The EW strongly depends on abundance (e.g. Matt, Fabian & Reynolds 1997) but fortunately the other integral parameters of the line do not so much. Moreover, the reflection component depends on the iron abundance in an easily recognizable way, i.e. changing the depth of the iron edge (Reynolds, Fabian & Inoue 1995). Strong constraints have been derived in the case of MCG–6-30-15 (Lee et al., 1999). Therefore one can hope to separate the influence of poorly known iron abundances from other effects. ## 4 Conclusions We calculated the relativistic effects on both the iron line and the reflection continuum emitted in the innermost regions of accretion discs around spinning black holes in AGN and BHCs. The calculations are fully relativistic with respect both to the primary emission (illumination from a central source) and to the secondary one (disc reflection), so that the line profiles are in this sense computed self-consistently. We found that the adopted geometry of the source copes very well with observed widths and energy centroids of the spectral features around 6.4 keV. However, the final assessment of one of the few viable models requires more detailed comparisons than it has been possible so far with the currently available data. Predicted values should be compared against high-resolution data, together with the results of alternative scenarios, such as quasi-spherical accretion and further line broadening due to Comptonization. The development of a XSPEC compatible code, making use of a big atlas of geodesics, is in progress. This will enable a fast fitting of the data. The results presented here are thus relevant for the near future high-sensitivity X-ray observatories, like XMM, as far as the iron line is concerned. We have to wait for missions like Constellation-X, with its very large sensitivity and broad band coverage, in order to simultaneously examine both the iron line and the reflection continuum in the desired detail. ## Acknowledgments V K acknowledges support from grants GACR 205/97/1165 and 202/99/0261 in Prague. ## APPENDIX: Fitting formulae for the emissivity laws and integral quantities Local emissivities of the disc surface have been plotted in Fig. 2. In Table 2 we provide a practical fit of the emissivities, which can be useful in calculations. We used a simple law of the type: $$ϵ(r)=c_1r^{\lambda _1}+c_2r^{\lambda _2}$$ (1) The adoption of function (1) for the fitting enables comparisons with the power laws, which are commonly used in standard line profile calculations. Attempts to derive the emissivity dependence from radius by “inverting” observed line profiles have also been made, e.g., by Čadež et al. (1999), Dabrowski et al. (1997), and Mannucci, Salvati & Stanga (1992). In Table 3 we provide coefficients of the least-square fitting for the observable quantities: $`E_{\mathrm{cen}}(h)`$, $`\sigma (h)`$, and EW$`(h)`$. Here we used quadratic polynomials of the form $$a_0+a_1h/m+a_2(h/m)^2,$$ (2) which can approximate $`h`$-dependences in the range $`4mh20m`$ with sufficient accuracy. For the whole interval, up to $`h=100m`$, we used more precise spline fits; the corresponding Matlab script is available from the authors (this can be used for the numerical inversion to obtain parameters of the model, i.e. $`a/m`$, $`\theta _\mathrm{o}`$, and $`h/m`$, in terms of the three observables).
no-problem/9910/astro-ph9910251.html
ar5iv
text
# High-Redshift Quasars as Probes of Galaxy and Cluster Formation ## 1. Introduction Hy Spinrad always hated quasars (Spinrad 1979). In those paleolithic days (i.e., before 1987 or so) it was not yet obvious that a powerful quasar lurks in the heart of every one of Hy’s beloved radio galaxies, but in some sense this does not really matter: in the work of the Spinrad School of Observational Cosmology, AGN have been used simply as means to find stellar populations at large redshifts, in order to probe their formation and evolution. This is in principle a viable and sound approach. In the simplest view, the very existence of luminous quasars at large redshifts suggests the existence of their (massive?) host galaxies, at least in the minds of a vast majority of astronomers today. At $`z>4`$, this has some very interesting and non-trivial implications for our understanding of galaxy and structure formation (Turner 1991). At a slightly more complex level, the observed history of the comoving number density of quasars may be indicative of the history of galaxy formation and evolution: the same kind of processes, i.e., dissipative mergers and tidal interactions, may be fueling both bursts of star formation and AGN activity. The peak seen in the comoving number density of quasars around $`z2`$ or 3 (Schmidt, Schneider & Gunn 1995) can then be interpreted in this context: the ostensible decline at high redshifts may be indicative of the initial assembly and growth of quasar central engines and their host galaxies; whereas the decline at lower redshifts may be indicative of the decrease in fueling, as galaxies are carried apart by the universal expansion, as many of the smaller pieces are being consumed, and as the gas is being converted into stars. Qualitatively similar predictions are made by virtually all models of hierarchical structure formation (see, e.g., Cataneo 1999, or Kauffmann & Haehnelt 1999). Due to their brightness, quasars are much easier to find (per unit telescope time) than galaxies at comparable redshifts. It then makes sense to use quasars as probes, or at least as pointers of sites of galaxy formation. Quasars have been used very effectively as probes of the intergalactic medium, and indirectly of galaxy formation, through the studies of absorption line systems. A vast literature exists on this subject, which is beyond the scope of this review; for good summaries, see, e.g., Rees (1998a) or Rauch (1998). A good review of the searches for quasars and related topics was given by Hartwick & Schade (1990). Osmer (1999) provides a modern update. Some of the issues covered in this review have been described by Djorgovski (1998) and Djorgovski et al. (1999). ## 2. Quasars and Galaxy Formation Possibly the most direct evidence for a close relation between quasars and galaxy formation is the remarkable correlation between the masses of central black holes (MBH) in nearby galaxies, and the luminosities ($``$ masses) of their old, metal-rich stellar populations, a.k.a. bulges (Kormendy & Richstone 1995, Magorrian et al. 1998), with MBH’s containing on average $`0.6`$% of the bulge stellar mass. The most natural explanation for this correlation is that both MBH’s and the stellar populations are generated through a parallel set of processes, i.e., dissipative merging and assembly at large redshifts. Quiescent MBH’s are evidently common among the normal galaxies at $`z0`$, and had to originate at some point: as they grow by accretion, their formation $`is`$ the quasar activity. Quasars may thus be a common phase of the early formation of ellipticals and massive bulges. Quasar demographics support this idea. Small & Blandford (1992), Chokshi & Turner (1992), and Haehnelt & Rees (1993) all conclude that an average $`L_{}`$-ish galaxy today should contain an MBH with $`M_{}10^7M_{}`$ or so. These estimates (essentially integrating the known AGN radiation over the past history of the universe) are fully consistent with the actual census of MBH (quasar remnants) in nearby galaxies. Two other pieces of fossil evidence link the high-$`z`$ quasars with the formation of old, metal-rich stellar populations. First, the analysis of metallicities in QSO BEL regions indicates super-solar abundances (up to $`Z_Q10Z_{}`$!) in quasars at $`z>4`$ (Hamann & Ferland 1993, 1999; Matteucci & Padovani 1993). The only places we know such abundances to occur are the nuclei of giant elliptical galaxies. Furthermore, abundance patterns in the intracluster x-ray gas at low redshifts are suggestive of an early, rapid star formation phase in protoclusters populated by young ellipticals (Loewenstein & Mushotzky 1996). A nearly simultaneous formation of quasars and their host galaxies, or at least ellipticals and bulges, is consistent with all of these observations, and it fits naturally in the general picture of hierarchical galaxy and structure formation via dissipative merging (see, e.g., Norman & Scoville 1988, Sanders et al. 1988, Carlberg 1990, Hernquist & Mihos 1995, Mihos & Hernquist 1996, Monaco et al. 1999, Franceschini et al. 1999, etc.). An extreme case of this idea is that quasars are completely reducible to ultraluminous starbursts, as advocated for many years by Terlevich and collaborators (see, e.g., Terlevich & Boyle 1993, and references therein). Most other authors disagree with such a view (cf. Heckman 1991, or Williams & Perry 1994), but (nearly) simultaneous manifestations of both ultraluminous starbursts and AGN, perhaps with comparable energetics, are clearly allowed by the data. It is thus also possible that the early AGN can have a profound impact on their still forming hosts, through the input of energy and momentum (Ikeuchi & Norman 1991, Haehnelt et al. 1998). ## 3. Quasar (Proto)Clustering and Biased Galaxy Formation Producing sufficient numbers of massive host galaxies needed to accommodate the observed populations of quasars at $`z>4`$, say, is not easy for most hierarchical models: such massive halos should be rare, and associated on average with $`4`$ to $`5`$-$`\sigma `$ peaks of the primordial density field (Efstathiou & Rees 1988, Cole & Kaiser 1989, Nusser & Silk 1993). It is a generic prediction that for essentially every model of structure formation such high density peaks should be strongly clustered (Kaiser 1984). This is a purely geometrical effect, independent of any messy astrophysical details of galaxy formation, and thus it is a fairly robust prediction: the formation of the first galaxies (some of which may be the hosts of high-$`z`$ quasars) and of the primordial large-scale structure should be strongly coupled. Quasars provide a potentially useful probe of large-scale structure out to very high redshifts. The pre-1990 work has been reviewed by Hartwick & Schade (1990). A number of quasar pairs on tens to hundreds of comoving kpc scales has been seen (Djorgovski 1991, Kochanek et al. 1999), and some larger groupings on scales reaching $`100`$ Mpc (Crampton et al. 1989, Clowes & Campusano 1991), but all in heterogeneous data sets. Analysis of some more complete samples did show a clustering signal (e.g., Iovino & Shaver 1988, Boyle et al. 1998). The overall conclusion is that quasar clustering has been detected, but that its strength decreases from $`z0`$ out to $`z2`$, the peak of the quasar era, presumably reflecting the linear growth of the large-scale structure. However, if quasars are biased tracers of structure formation at even higher redshifts, associated with very massive peaks of the primordial density field, this trend should reverse and the clustering strength should again start $`increasing`$ towards the larger look-back times. The first hints of such an effect were provided by the three few-Mpc quasar pairs found in the statistically complete survey by Schneider et al. (1994), as pointed out by Djorgovski et al. (1993) and Djorgovski (1996), and subsequently confirmed by more detailed analysis by Kundic (1997) and Stephens et al. (1997). A deeper survey for more such pairs by Kennefick et al. (1996) did not find any more, presumably due to a limited volume coverage. La Franca et al. (1998) find a turn-up in the clustering strength of quasars even at redshifts as low as $`z2`$. It would be very important to check these results with new, large, complete samples of quasars over a wide baseline in redshift. More recently, observations of large numbers of “field” galaxies at $`z33.5`$ by Steidel et al. (1998) identified redshift space structures which are almost certainly the manifestation of biasing. However, the effect (the bias) should be even stronger at higher redshifts, and most of the earliest massive galaxies should be strongly clustered. A search for protoclusters around known high-$`z`$ objects such as quasars thus provides an important test of our basic ideas about the biased galaxy formation. Intriguingly, there is a hint of a possible superclustering of quasars at $`z>4`$, on scales $`100h^1`$ comoving Mpc (cf. Djorgovski 1998). The effect is clearly present in the DPOSS sample (which is complete, but still with a patchy coverage on the sky), and in a more extended, but heterogeneous sample of all QSOs at $`z>4`$ reported to date. The apparent clustering in the complete sample may be an artifact of a variable depth of the survey, which we will be able to check in a near future. Or, it could be due to patchy gravitational lensing magnification of the high-$`z`$ quasars by the foreground large-scale structure; again, we will be able to test this hypothesis using the DPOSS galaxy counts. But it could also represent real clustering of high-density peaks in the early universe, only $`0.51`$ Gyr after the recombination. The observed scale of the clustering is intriguing: it is comparable to that corresponding to the first Doppler peak seen in CMBR fluctuations, and to the preferred scales seen in some redshift surveys (e.g., Broadhurst et al. 1990; Landy et al. 1996). More data are needed to check on this remarkable result. ## 4. Quasar-Marked Protoclusters at z $`>`$ 4? Any single search method for high-$`z`$ protogalaxies (PGs) has its own biases, and formative histories of galaxies in different environments may vary substantially. For example, galaxies in rich clusters are likely to start forming earlier than in the general field, and studies of galaxy formation in the field may have missed possible rare active spots associated with rich protoclusters. We are conducting a systematic search for clustered PGs, by using quasars at $`z>4`$ as markers of the early galaxy formation sites (ostensibly protocluster cores). The quasars themselves are selected from the DPOSS survey (Djorgovski et al. 1999, and in prep.; Kennefick et al. 1995). They are purely incidental to this search: they are simply used as beacons, pointing towards the possible sites of early, massive galaxy formation. The first galaxy discovered at $`z>3`$ was a quasar companion (Djorgovski et al. 1985, 1987). A Ly$`\alpha `$ galaxy and a dusty companion of BR 1202–0725 at $`z=4.695`$ have been discovered by several groups (Djorgovski 1995, Hu et al. 1996, Petitjean et al. 1996), and a dusty companion object has been found in the same field (Omont et al. 1995, Ohta et al. 1995). Hu & McMahon (1996) also found two companion galaxies in the field of BR 2237–0607 at $`z=4.55`$. We have searches to various degrees of completeness in about twenty QSO fields so far (Djorgovski 1998; Djorgovski et al., in prep.). Companion galaxies have been found in virtually every case, despite very incomplete coverage. They are typically located anywhere between a few arcsec to tens of arcsec from the quasars, i.e., on scales $`100+`$ comoving kpc. We also select candidate PGs by using deep $`BRI`$ imaging over a field of view of several arcmin, probing $`10`$ comoving Mpc ($``$ cluster size) projected scales. This is a straightforward extension of the method employed so successfully to find the quasars themselves at $`z>4`$ (at these redshifts, the continuum drop is dominated by the Ly$`\alpha `$ forest, rather than the Lyman break, which is used to select galaxies at $`z23.5`$). The candidates are followed up by multislit spectroscopy at the Keck, which is still in progress as of this writing. As of the mid-1999, about two dozen companion galaxies have been confirmed spectroscopically. Their typical magnitudes are $`R25^m`$, implying continuum luminosities $`LL_{}`$. The Ly$`\alpha `$ line emission is relatively weak, with typical restframe equivalent widths $`2030`$ Å, an order of magnitude lower than what is seen in quasars and powerful radio galaxies, but perfectly reasonable for the objects powered by star formation. There are no high-ionization lines in their spectra, and no signs of AGN. The SFR inferred both from the Ly$`\alpha `$ line, and the UV continuum flux is typically $`510M_{}`$/yr, not corrected for the extinction, and thus it could easily be a factor of 5 to 10 times higher. Overall, the intrinsic properties of these quasar companion galaxies are very similar to those of the Lyman-break selected population at $`z34`$, except of course for their special environments and somewhat higher look-back times. There is a hint of a trend that the objects closer to the quasars have stronger Ly$`\alpha `$ line emission, as it may be expected due to the QSO ionization field. In addition to these galaxies where we actually detect (presumably starlight) continuum, pure Ly$`\alpha `$ emission line nebulae are found within $`23`$ arcsec for several of the quasars, with no detectable continuum at all. The Ly$`\alpha `$ fluxes are exactly what may be expected from photoionization by the QSO, with typical $`L_{Ly\alpha }`$ a few $`\times 10^{43}`$ erg/s. They may represent ionized parts of still gaseous protogalaxy hosts of the quasars. We can thus see and $`distinguish`$ both the objects powered by the neighboring QSO, and “normal” PGs in their vicinity. The median projected separations of these objects from the quasars are $``$ a few $`\times 100h^1`$ comoving kpc, an order of magnitude less than the comoving r.m.s. separation of $`L_{}`$ galaxies today, but comparable to that in the rich cluster cores. The frequency of QSO companion galaxies at $`z>4`$ also appears to be an order of magnitude higher than in the comparable QSO samples at $`z23`$, the peak of the QSO era and the ostensible peak merging epoch. However, interaction and merging rates are likely to be high in the densest regions at high redshifts, which would naturally account for the propensity of some of these early PGs to undergo a quasar phase, and to have close companions. The implied average star formation density rate in these regions is some 2 or 3 orders of magnitude higher than expected from the limits estimated for these redshifts by Madau et al. (1996) for $`field`$ galaxies, and 1 or 2 orders of magnitude higher than the measurements by Steidel et al. at $`z4`$, even if we ignore any SFR associated with the QSO hosts (which we cannot measure, but is surely there). These must be very special regions of an enhanced galaxy formation in the early universe. It is also worth noting that (perhaps coincidentally) the observed comoving number density of quasars at $`z>4`$ is roughly comparable to the comoving density of very rich clusters of galaxies today. Of course, depending on the timescales involved, there must be some protoclusters without observable quasars in them, and some where more than one AGN is present (an example may be the obscured companion of BR 1202–0725). ## 5. Towards the Renaissance at z $`>`$ 5: the First Quasars <br>and the First Galaxies The remarkable progress in cosmology over the past few years, reviewed by several speakers at this meeting, has pushed the frontiers of galaxy and structure formation studies out to $`z>5`$. Half a dozen galaxies, two QSOs (cf. Fan et al. 1999), and one radio galaxy are now known at $`z5`$, with the most distant confirmed object at $`z=5.74`$ (Hu et al. 1999). Remarkably, there is no convincing evidence yet for a high-$`z`$ decline of the comoving star formation rate density out to $`z>4`$ (Steidel et al. 1999). Moreover, the universe at $`z5`$ appears to be already fully reionised (Songaila et al. 1999, Madau et al. 1999), implying the existence of a substantial activity in a population of sources at even higher redshifts. These observational results pose something of a challenge for the models of galaxy formation. Essentially in all modern models, the first subgalactic fragments with masses $`10^6M_{}`$ begin to form at $`z1030`$, and the universe becomes reionised at $`z812`$ (see, e.g., Gnedin & Ostriker 1997, Miralda-Escude & Rees 1997, or Rauch 1998 and references therein). This corresponds to a time interval of only about $`0.51`$ Gyr for a reasonable range of cosmologies. What is not known is what are the first or the dominant ionisation sources which break the “dark ages”: primordial starbursts or primordial AGN? This is one of the fundamental questions in cosmology today, and it dominates many of the discussions about the NGST (see, e.g., Rees 1998b, Haiman & Loeb 1998, or Loeb 1999). Optical searches for quasars at $`z>5`$ have been reviewed by Osmer (1999). There are exciting new prospects of detecting such a population in x-rays using CXO (Haiman & Loeb 1999). The value of such quasars as probes of the earliest phases of galaxy and structure formation during the reionisation era at $`z10\times 2^{\pm 1}`$ cannot be overstated. Some numerical simulations suggest that an early formation of quasars, at $`z8`$, say, is viable in the framework of the currently popular hierarchical models with dissipation (cf. Katz et al. 1994). It is even possible that a substantial amount of QSO activity may predate the peak epoch of star formation in galaxies (Silk & Rees 1998). A catastrophic gravitational collapse of a massive primordial star cluster may be the most natural way of forming the first MBHs, but a variety of other mechanisms have been proposed (e.g., Loeb 1993, Umemura et al. 1993, Loeb & Rasio 1994, etc.). Future observations will tell whether such primordial fireworks marked the end of the dark ages in the universe. ### Acknowledgments. It is a pleasure to acknowledge the work of my collaborators, R. Gal, R. Brunner, R. de Carvalho, S. Odewahn, and the rest of the DPOSS QSO search team. I also wish to thank the staff of Palomar and Keck observatories for their expert assistance during our observing runs. This work was supported in part by the Norris Foundation and by the Bressler Foundation. Ivan King and his LOC crew brought this meeting into existence; thank you all. Finally, many thanks to Hy for introducing me to the joys of low-S/N astronomy and letting me play with the big toys: it was fun (most of the time)! ## References Boyle, B., Croom, S., Smith, R., Shanks, T., Miller, L., & Loaring, N. 1998, preprint \[astro-ph/9805140\] Broadhurst, T., Ellis, R., Koo, D., & Szalay, A. 1990, Nat, 343, 726 Cataneo, A. 1999, MNRAS in press \[astro-ph/9907335\] Carlberg, R. 1990, ApJ, 350, 505 Chokshi, A., & Turner, E. 1992, MNRAS, 259, 421 Clowes, R., & Campusano, L. 1991, MNRAS, 249, 218 Cole, S. & Kaiser, N. 1989, MNRAS, 237, 1127 Crampton, D., Cowley, A., & Hartwick, F.D.A. 1989, ApJ, 345, 59 Djorgovski, S., Spinrad, H., McCarthy, P., & Strauss, M. 1985, ApJ, 299, L1 Djorgovski, S., Strauss, M. , Perley, R., Spinrad, H., & McCarthy, P. 1987, AJ, 93, 1318 Djorgovski, S. 1991, in The Space Distribution of Quasars, ed. D. Crampton, ASPCS, 21, 349 Djorgovski, S., Thompson, D., & Smith, J. 1993, in First Light in the Universe, eds. B. Rocca-Volmerange et al., Gif sur Yvette: Eds. Frontières, p.67 Djorgovski, S.G. 1995, in Science with the VLT, eds. J.R. Walsh & I.J. Danziger, Berlin: Springer Verlag, p. 351 Djorgovski, S., Pahre, M., Bechtold, J., & Elston, R. 1996, Nat, 382, 234 Djorgovski, S.G. 1996, in New Light on Galaxy Evolution, IAU Symp. 171, eds. R. Bender & R. Davies, Dordrecht: Kluwer, p.277 Djorgovski, S.G. 1998, in Fundamental Parameters in Cosmology, Rec. de Moriond, eds. Y. Giraud-Heraud et al., Gif sur Yvette: Eds. Frontières, p. 313 \[astro-ph/9805159\] Djorgovski, S.G., Odewahn, S.C., Gal, R.R., Brunner, R., & de Carvalho, R.R. 1999, in Photometric Redshifts and the Detection of High Redshift Galaxies, eds. R. Weymann et al., ASPCS in press \[astro-ph/9908142\] Djorgovski, S.G., Gal, R. R., Odewahn, S. C., de Carvalho, R. R., Brunner, R., Longo, G., & Scaramella, R. 1999, in Wide Field Surveys in Cosmology, eds. S. Colombi et al., Gif sur Yvette: Eds. Frontières, p.89 \[astro-ph/9809187\] Efstathiou, G., & Rees, M. 1988, MNRAS, 230, P5 Fan, X. et al. (the SDSS collaboration) 1999, AJ, 118, 1 Franceschini, A., Hasinger, G., Miyaji, T., & Malquori, D. 1999, MNRAS in press \[astro-ph/9909290\] Gnedin, N., & Ostriker, J. 1997, ApJ, 486, 581 Haehnelt, M., & Rees, M. 1993, MNRAS, 263, 168 Haehnelt, M., Natarajan, P., & Rees, M. 1998, MNRAS, 300, 817 Haiman, Z., & Loeb, A. 1998, ApJ, 503, 505 Haiman, Z., & Loeb, A. 1999, ApJ, 521, L9 Hamann, F., & Ferland, G. 1993, ApJ, 418, 11 Hamann, F., & Ferland, G. 1999, ARAA in press Hartwick, F.D.A, & Schade, D. 1990, ARAA, 28, 437 Heckman, T. 1991, in Massive Stars in Starbursts, eds. C. Leitherer et al., STScI Symposium No. 5, Cambridge: Cambridge Univ. Press, p.289 Hernquist, L., & Mihos, C. 1995, ApJ, 448, 41 Hu, E., McMahon, R., & Egami, E. 1996, ApJ, 459, L53 Hu, E., & McMahon, R. 1996, Nat, 382, 231 Hu, E., McMahon, R., & Cowie, L. 1999, ApJL in press \[astro-ph/9907079\] Ikeuchi, S., & Norman, C. 1991, ApJ, 375, 479 Iovino, A., & Shaver, P. 1988, ApJ, 330, L13 Kaiser, N. 1984, ApJ, 284, L9 Katz, N., Quinn, T., Bertschinger, E., & Gelb, J. 1994, MNRAS, 270, L71 Kauffmann, G., & Haehnelt, M. 1999, MNRAS in press \[astro-ph/9906493\] Kennefick, J.D., Djorgovski, S.G., & de Carvalho, R. 1995, AJ, 110, 2553 Kennefick, J.D., Djorgovski, S.G., & Meylan, G. 1996, AJ, 111, 1816 Kochanek, C., Falco, E., & Muñoz, J. 1999, ApJ, 510, 590 Kormendy, J., & Richstone, D. 1995, ARAA, 33, 581 Kundic, T. 1997, ApJ, 482, 631 La Franca, F., Andreani, P., & Cristiani, S. 1998, ApJ, 497, 529 Landy, S., Shectman, S., Lin, H., Kirshner, R., Oemler, A., & Tucker, D. 1996, ApJ, 456, L1 Loeb, A. 1993, ApJ, 403, 542 Loeb, A., & Rasio, F. 1994, ApJ, 432, 52 Loeb, A. 1999, this volume Loewenstein, M., & Mushotzky, R. 1996, ApJ, 466, 695 Madau, P., Ferguson, H., Dickinson, M., Giavalisco, M., Steidel, C., & Fruchter, A. 1996, MNRAS, 283, 1388 Madau, P., Haardt, F., & Rees, M. 1999, ApJ, 514, 648 Magorrian, J. et al. 1998, AJ, 115, 2285 Matteucci, F,. and Padovani, P. 1993, ApJ, 419 485 Mihos, C., & Hernquist, L. 1996, ApJ, 464, 641 Miralda-Escude, J., & Rees, M. 1997, ApJ, 478, L57 Monaco, P., Salucci, P., & Danese, L. 1999, MNRAS in press \[astro-ph/9907095\] Norman, C., & Scoville, N. 1988, ApJ, 332, 124 Nusser, A., & Silk, J. 1993, ApJ, 411, L1 Ohta, K., Yamada, T., Nakanishi, K., Kohno, K,. Akiyama, M., & Kawabe, R. 1996, Nat, 382, 426 Omont, A., Petitjean, P., Guilloteau, S., McMahon, R., Solomon, P., & Pecontal, E. 1996, Nat, 382, 428 Osmer, P. 1999, this volume Petitjean, P., Pecontal, E., Vals-Gabaud, D., & Charlot, S. 1996, Nat, 380, 411 Rauch, M. 1998, ARAA, 36, 267 Rees, M. 1998a, in Structure and Evolution of the Intergalactic Medium from QSO Absorption Systems, eds. P. Petitjean & S. Charlot, Gif sur Yvette: Eds. Frontieres, p. 19 Rees, M. 1998b, preprint \[astro-ph/9809029\] Sanders, D., Soifer, B.T., Elias, J., Neugebauer, G., & Matthews, K. 1988, ApJ, 328, L35 Schmidt, M., Schneider, D., & Gunn, J. 1995, AJ, 110, 68 Schneider, D., Schmidt, M., & Gunn, J. 1994, AJ 107, 1245 Silk, J., & Rees, M. 1998, A&A, 331, L1 Small, T., & Blandford, R. 1992, MNRAS, 259, 725 Songaila, A., Hu, E., Cowie, L., & McMahon, R. 1999, ApJL in press Spinrad, H. 1979, private communication Steidel, C., Giavalisco, M., Pettini, M., Dickinson, M., & Adelberger, K. 1996, ApJ, 462, L17 Steidel, C., Adelberger, K., Dickinson, M., Pettini, M., & Kellogg, M. 1998, ApJ, 492, 428 Steidel, C., Adelberger, K., Giavalisco, M., Dickinson, M., & Pettini, M. 1999, ApJ, 519, 1 Stephens, A., Schneider, D., Schmidt, M., Gunn, J., & Weinberg, D. 1997, AJ, 114, 41 Terlevich, R., & Boyle, B. 1993, MNRAS, 262, 491 Turner, E. 1991, AJ, 101, 5 Umemura, M., Loeb, A., & Turner, E. 1993, ApJ, 419, 459 Williams, R., & Perry, J. 1994, MNRAS, 269, 538
no-problem/9910/astro-ph9910516.html
ar5iv
text
# The INES System III: Evaluation of IUE NEWSIPS High Resolution Spectra ## 1 Introduction The IUE NEWSIPS processing system was developed with the aim of creating a “Final Archive” of IUE data to be made available to the astronomical community as a legacy after the end of the project. This archive would include all the IUE spectra re-processed with improved algorithms and up-to-date calibrations. The NEWSIPS processing system is fully described by Nichols and Linsky (1996) and Nichols (1998). Technical details are given in the NEWSIPS Manual (Garhart et al. 1997). The introduction of new techniques to perform the geometric and photometric corrections led to a substantial improvement in the signal-to-noise ratio of the final spectra. Further improvements in the quality of the high resolution data arise from the new method to determine the image background and from the improved ripple correction and absolute calibration (Cassatella et al. 1999, hereinafter Paper II). The background subtraction has always been one of the more critical issues in the processing of IUE high resolution spectra, particularly at the shortest wavelengths, where orders crowd and an accurate estimate of the background is essential for a correct flux extraction. This problem has been overcome in NEWSIPS through the derivation of a bi-dimensional background (see Smith 1999 for a description of the method). The goal of the INES processing system was to correct the deficiencies found during the scientific evaluation of the data processed with NEWSIPS for the IUE Final Archive, and to provide the output data to the users in a simple way requiring a minimum knowledge of the operational and instrumental characteristics of IUE (Wamsteker et al. 1999). The modifications introduced in the processing of low resolution data have been described by Rodríguez-Pascual et al. (1999, Paper I). As for high resolution data, the INES system provides two output products derived from the NEWSIPS MXHI (i.e. high resolution extracted spectra) files: the “concatenated” spectrum, where the spectral orders are merged eliminating the overlap regions, and the “rebinned” spectrum, which is the concatenated spectrum resampled to the low resolution wavelength step. Both concatenated and rebinned spectra include an error vector calibrated in absolute flux units. The inconsistency between the high resolution short and long wavelength scales in NEWSIPS has been corrected for in the INES concatenated spectra. In the first part of this paper we discuss the overall quality of NEWSIPS high resolution spectra in terms of accuracy, stability and repeatability of wavelength and flux measurements (Section 2). The second part deals with the INES processing of high resolution data, describing the order concatenation and rebinning procedures (Sections 3.1 and 3.2, respectively). Finally, the application of the correction to the wavelength scale is discussed (Section 4). ## 2 NEWSIPS Data quality evaluation The overall quality of IUE high resolution spectra processed with NEWSIPS has been evaluated by studying the accuracy and the stability of wavelength determinations, the accuracy of equivalent width measurements, the flux repeatability and the residual camera non–linearities. The analysis is based on a large number of spectra, mainly of the IUE standard stars. The spectra have been corrected for the echelle blaze function and calibrated in terms of absolute fluxes according to the procedure described in Paper II. Hereinafter, wavelengths are assumed to be in the heliocentric reference frame and in vacuum. ### 2.1 Wavelength accuracy To assess the wavelength accuracy three aspects have been considered separately: a) the accuracy and repeatability of wavelength measurements of a given spectral feature in several spectra of the same star, b) the stability of the wavelength scale along the full spectral range, c) the consistency of radial velocity determinations obtained from the SWP, LWP and LWR cameras. #### 2.1.1 Expected accuracy One of the most important limitations to the wavelength accuracy of IUE high resolution spectra are the target acquisition errors at the nominal center of the spectrographs entrance apertures. These, if large enough, can also affect the quality of the ripple correction (see Paper II). From the NEWSIPS dispersion constants and the central wavelengths of the spectral orders it can be readily deduced that the velocity dispersion corresponding to one pixel on the image is practically constant all through the range covered by the cameras, and namely: $`\mathrm{\Delta }`$V=7.73 $`\pm `$ 0.05 km s<sup>-1</sup> for SWP $`\mathrm{\Delta }`$V=7.26 $`\pm `$ 0.09 km s<sup>-1</sup> for LWP $`\mathrm{\Delta }`$V=7.26 $`\pm `$ 0.03 km s<sup>-1</sup> for LWR Taking into account that the plate scales are 1.530, 1.564 and 1.553 arcsec/pix for SWP, LWP and LWR, respectively (Garhart et al. 1997), an acquisition error of 1 arcsec along the high resolution dispersion direction would lead to a constant velocity offset of 5.1 km s<sup>-1</sup> for SWP, 4.6 km s<sup>-1</sup> for LWP and 4.7 km s<sup>-1</sup> for LWR. Since the pointing/tracking accuracy is usually better than 1 arcsec, we can consider 5 km s<sup>-1</sup> as a reasonable upper limit to the expected wavelength accuracy. Wavelength errors substantially larger might arise internally in the data extraction procedures. #### 2.1.2 Repeatability of wavelength determinations To obtain a reliable information on the self-consistency of wavelength determinations, we have attempted to reduce the effects of spectral noise by performing a large number of measurements of selected narrow and symmetric absorption lines from the interstellar medium, which are strong in some of the IUE calibration stars, as well as of many emission lines in RR Tel. Out of the IUE standards, we have selected BD+28 4211, HD 60753, HD 93521, BD+75 325, $`\lambda `$ Lep (HD 34816), $`\zeta `$ Cas (HD 3360) and $`\eta `$ UMa (HD 120315). In addition, we have also used spectra of the star $`\zeta `$ Oph (HD 149757). The present measurements refer only to large aperture spectra. The interstellar lines selected for the SWP range were: SII 1259.520 Å, SiII 1260.412 Å, OI 1302.168 Å, SiII 1304.372 Å and CII 1334.532 Å, 1335.703 Å. For the long wavelength range we have used the K and H components of the MgII doublet at 2796.325 Å and 2803.530 Å and MgI 2852.965 Å. In the case of $`\zeta `$ Oph we have also measured MnII 2576.877 Å, 2594.507 Å and several FeII lines. The cases in which the distortion of the profile due to close-by reseau marks (in particular for OI 1302 and CII 1334) precluded the accurate determination of the line position have not been taken into account in the final statistics. Laboratory wavelengths have been taken form Morton (1991). The mean values of the radial velocities, the corresponding rms deviation and the number of independent measurements are reported in Table 1 for each target and camera. According to this table, the rms repeatability error on radial velocities, averaged over the three cameras is 4.6$`\pm `$1.5 km s<sup>-1</sup>. This value is smaller than the upper limit of about 5 km s<sup>-1</sup> expected from acquisition errors. Considering the presence of spectral noise, we can safely conclude that the repeatability of wavelength (or velocity) determinations is satisfactory. The results in Table 1 indicate that the radial velocities derived from the two long wavelength cameras are consistent while, on the contrary, the velocities derived from SWP spectra are systematically more negative. This, and other considerations about the consistency of radial velocity determinations from the three cameras will be discussed in Section 4. #### 2.1.3 Stability of the wavelength scale along the full spectral range We have studied the accuracy of the wavelength scale over a wide spectral range to look for possible time-dependent distortions across the camera faceplate. To this purpose, we have selected 6 SWP, 11 LWP and 3 LWR spectra of the emission line object RR Tel obtained at different epochs. For each spectrum we have measured the peak wavelengths of several emission lines chosen among those reasonably well exposed and with the cleanest profiles, covering the full spectral range. The highest excitation lines, such as those from \[MgV\], were purposely excluded because they provided systematically higher negative radial velocities probably due to stratification effects within the nebular region. The mean radial velocities of RR Tel are -69.5$`\pm `$6.5 km s<sup>-1</sup>, (SWP), -49.3$`\pm `$3.0 km s<sup>-1</sup> (LWP), and -51.0$`\pm `$4.1 km s<sup>-1</sup> (LWR). The total number of measurements are 106, 170 and 132 for SWP, LWP and LWR, respectively. Since the errors are of the same order as the repeatability errors quoted in the previous section, we conclude that the wavelength scales do no present appreciable distortions over the wavelength range covered and, within the observational errors, are stable over the period of time considered (1983-1994, 1985-1995 and 1978-1983 for SWP, LWP and LWR, respectively). #### 2.1.4 Radial velocity determinations from the Mg II doublet The present analysis has revealed the existence of an inconsistency in the radial velocities derived from the MgII K (2796.32 Å) and H (2803.53 Å) lines as measured in the LWP camera, where the two lines are present in both orders 82 and 83. To quantify this discrepancy, we have measured the velocities of the Mg II interstellar lines in 89 spectra of five IUE standard stars. We find that, in the LWP camera, the radial velocity difference V<sub>H</sub>-V<sub>K</sub> is -10.8$`\pm `$2.5 km s<sup>-1</sup> when measured in order 83, and 11.7$`\pm `$1.5 km s<sup>-1</sup> when measured in order 82. We find also that there is a discrepancy in the velocity of the K line measured in the two orders (V<sub>83</sub>-V<sub>82</sub>=-20.5$`\pm `$0.9 km s<sup>-1</sup>), while the velocities of the H line measured in the two orders are consistent within 2 km s<sup>-1</sup>, on average. Since the velocity derived from other interstellar lines (e.g. MgI 2852.965 Å) is fully consistent with the measurements from the K line in order 83, we conclude that only this line provides correct radial velocity values. In INES concatenated spectra (Section 3.1) the K line comes from order 83 and the H line from order 82. Therefore, there is a systematic difference V<sub>K</sub>-V<sub>H</sub>=-8.8$`\pm `$1.3 km s<sup>-1</sup> between the velocities determined from the two lines, being the correct value only that given by the K line. A similar study performed in LWR spectra, where the K line appears only in order 83, shows that this problem is not present in this camera, where the two Mg II lines provide consistent velocity values: V<sub>K</sub>(order 83)-V<sub>H</sub>(order 82)=-1.0$`\pm `$1.2 km s<sup>-1</sup>. ### 2.2 Background extraction It has been repeatedly pointed out that the background extraction for high resolution spectra processed with IUESIPS was not accurate enough especially shortward of 1400 Å in the SWP camera and 2400 Å in the long wavelength cameras, as denoted by the negative fluxes assigned to the wings of the strongest emission lines and to the core of the saturated absorption lines. As shown below, this effect is not present anymore in spectra processed with NEWSIPS, which makes use of an upgraded background determination procedure (Smith 1999). Overestimating or underestimating the background level leads to underestimating or overestimating the fluxes of the emission lines and the equivalent widths of the absorption lines. In the following, we report the tests done on the accuracy of the equivalent widths to verify the correctness of the background extraction. In addition, we have used the repeatability of the equivalent widths determinations as an indirect test of the stability of the background levels. #### 2.2.1 SWP Figure 1 shows the profiles of the NV doublet emission and of the broad Lyman $`\alpha `$ feature in the longest exposure available of RR Tel. It is clearly seen that the wings of these lines are not assigned negative values. Particularly interesting are the NV “ghost” lines marked with an asterisk, which are still present in NEWSIPS data, but sensibly fainter than in the IUESIPS spectra, most likely due to the optimized extraction slit. The presence of such spurious lines has recently been reported by Zuccolo et al. (1997) and ascribed to overspilling of the strong NV doublet into adjacent orders. To verify the accuracy of the background subtraction, we have compared the equivalent widths of the strongest interstellar lines in four spectra of $`\zeta `$ Oph with those reported by Morton (1975) obtained from Copernicus data. These latter determinations are presumably not affected by background determination problems, unlike the IUE echelle spectra near the short wavelength end of the cameras. The results of the comparison are given in Fig. 2 and in Table 2, which provides the mean and the standard deviation of the four measurements. As it appears clearly from the figure, the NEWSIPS measurements are consistent, within the errors, with the values from Copernicus data, suggesting that the background evaluation for SWP spectra is essentially correct. The stability of the background subtraction has been evaluated by measuring the equivalent width of several strong interstellar lines in a large sample of spectra of two standard stars. The repeatability of the equivalent widths ranges from 10% for the strongest lines to 30% for the faintest ones (Table 4). #### 2.2.2 LWP We tested the accuracy of the background extraction from the cores of strongly saturated absorption lines and the wings of strongly saturated emission lines. Fig. 3 shows a portion of a spectrum of SN1987A centered around the Mg II doublet at 2800 Å. It appears from the figure that the cores of the absorption lines do not become systematically negative, as expected for a correct background extraction. Shown in the same figure is another example, that of the strongly saturated Mg II emission doublet in the longest LWP exposure available of RR Tel (LWP25954): the lines wings do not reach negative fluxes, as required. A second test has been performed in four spectra of $`\zeta `$ Oph, in which we have measured the equivalent widths of some strong interstellar lines, and compared them with the Copernicus values given by Morton (1975). The results are summarized in Fig. 2 and Table 3, where are given the mean and the standard deviation of the four spectra. Finally, we have measured the equivalent widths of the MgII and MgI lines in a large sample of spectra of BD+28 4211 and BD+75 325. The repeatability errors range from 35% for the faint MgI line in BD+75 325, to less than than 10% for the strong lines. Results are shown in Table 4. #### 2.2.3 LWR As for the LWR camera, we have verified the accuracy of the background subtraction by measuring the equivalent widths of six strong FeII and MnII interstellar lines in four spectra of $`\zeta `$ Oph, and compared them with measurements based on Copernicus data. As shown in Fig. 2, there is a good agreement between the two sets of equivalent widths, and no systematic departures are found. The repeatability of equivalent width determinations has been determined measuring the equivalent widths of the MgII doublet in a large sample of spectra of $`\eta `$ UMa and $`\zeta `$ Cas and $`\lambda `$ Lep. The repeatability error ranges from 30% for the weak lines to less than 5% for the strongest ones (Table 4). ### 2.3 Flux repeatability Two different tests have been performed to assess the flux repeatability of high resolution spectra. In the first one we selected restricted samples of spectra obtained close in time, and measured the ripple corrected net fluxes, i.e. without applying the time sensitivity degradation and the temperature dependence corrections. The second test has been performed on larger samples covering extended periods of time, measuring the absolutely calibrated fluxes, which include time and temperature corrections. In all case we have averaged the flux over a narrow wavelength interval free of lines. The flux repeatibility was defined as the percent rms deviation from the mean value. The results are summarized in Table 5. #### 2.3.1 SWP For the first test we have used 41 high resolution spectra of IUE calibration standards grouped into sets of data with a similar exposure level and obtained close enough in time. The test was done in six bands 5 Å wide. In Table 5 (under “set A”) we report the percent rms deviation. The repeatability of spectra obtained sufficiently close in time is about 2%. The second test was made on a larger number of spectra with similar exposure time, without restricting the date of observation (“set B”). This test provides repeatability errors ranging from 3 to 4%. These somewhat larger errors are due to the intrinsic uncertainties of the sensitivity degradation correction algorithm. #### 2.3.2 LWP The tests performed are similar to those described above for the SWP camera. The spectra in “Set A” consist of three groups each containing images obtained in a restricted period of time. The flux repeatability was evaluated in four wavelength bands 5 Å wide. In this cameras, the repeatability errors can reach the 5% level near the short wavelength end of the camera, but are a factor of two lower in the central bands. A similar test performed on a larger set of spectra needing correction for the time-dependent sensitivity degradation (“Set B”) provides errors slightly larger, confirming that the sensitivity degradation algorithm adopted for the LWP camera is essentially correct. #### 2.3.3 LWR The flux repeatability was evaluated in four wavelength bands 5 Å wide. The test performed on spectra taken close in time, “Set A”, shows that the repeatability errors reach 4% near the short wavelength end of the camera, decreasing in the region of maximum sensitivity and increasing again at the longest wavelengths. In the “Set B” spectra the repeatability is worse, reflecting the uncertainties in the time degradation correction and also the instability of the camera after it ceased to be routinely used. ### 2.4 Flux Linearity Despite the linearity correction applied during the processing, residual non linearities are still present in IUE data. This effect has been evaluated in Paper I for low resolution data. In what follows we discuss this effect in high resolution spectra. The method followed consists on studying a set of spectra of the same star with different exposure times obtained, whenever possible, very close in time (preferably on the same observing shift) and with similar camera temperatures. The variation of the flux with the level of exposure (or the exposure time) defines the flux linearity. Unfortunately there exist few sets of high resolution data suitable for this study, and a slightly different approach has been taken here. The results are summarized in Table 6. In general, these results are in good agreement with those derived in Paper I. #### 2.4.1 SWP The most complete set of SWP spectra appropriate to assess the linearity consists of seven images of the white dwarf CD-38 10980 obtained in the period April-August 1991, with exposure times ranging from 50% to 180% of the optimum value (200 min). For these images we have measured the mean flux in five 5 Å wide bands. The fluxes in each band have been averaged together and divided by the mean flux in the two 100% exposures. The ratios so obtained indicate departures from linearity ranging from -6% at 1185 Å to +4% at 1785 Å for the 50% exposure and up to -5% for the 70% exposure. #### 2.4.2 LWP There is not any complete set of high resolution data of the same star which allows to study the LWP camera linearity. We have constructed average spectra of different exposure levels of the two standard stars BD+28 4211 and BD+75 325, and divided them by the corresponding 100% spectrum. The exposure levels covered range from 27% to 207% of the optimum exposure time. The test was performed in four wavelength bands 5 Å wide, selected for being relatively free from strong absorption lines. The maximum departures from linearity, reaching 8%, are found for the 133% level at 2120 Å, and for the 207% level near the regions of maximum sensitivity of the camera. The latter deviation can be easily understood in terms of saturation. #### 2.4.3 LWR The LWR high resolution linearity test has been performed with five images of the standard star HD 93521 obtained in the period July 1980 to February 1981, covering the range of exposure times from 50% to 250% of the optimum value. The test was performed in four bands 5 Å wide. The maximum departures form linearity (up to 25%) are found at the shortest wavelengths, where fluxes are underestimated by 25% for the 50% exposure and overestimated by 12% for the 250% exposure. ## 3 INES Processing of High Dispersion Spectra The starting point for the INES processing of high resolution data are the NEWSIPS MXHI files. The spectra have not been re-extracted from the bi-dimensional files, as in the case of the low resolution data (see Paper I). The INES system provides two output spectra for each high dispersion image: the “concatenated” and the “rebinned” spectra. Both include a modified wavelength scale and the error vector calibrated in absolute flux units. All these aspects are discussed in the following sections. ### 3.1 The high resolution concatenated spectra The main features of the INES concatenated spectra are: a) The overlap regions between adjacent orders are suppressed in such a way that the less noisy portion of the orders is retained. b) The error vector is calibrated in absolute flux units. c) The wavelength scale is modified to make consistent the radial velocities obtained from the short and the long wavelength cameras. The wavelength sampling of the MXHI files has been retained. d) The spectra are provided as FITS tables having the same format as the low resolution spectra, i.e. they contain only four columns: wavelength, absolute flux, error and quality factor. This setting reduces significantly the download time for remote data retrieval, and simplifies considerably the structure of the NEWSIPS MXHI files, which contain additional information, not relevant for most investigations, such as the position of the orders on the bi-dimensional image, the height of the extraction slit and the order number for each extracted point. The procedure followed to concatenate the spectral orders is described in detail in the next paragraphs. #### 3.1.1 The concatenation procedure The most critical point in the concatenation of adjacent echelle orders is a suitable definition of the ”cut wavelengths” so that only the highest quality data points of the overlap region are retained. In IUE high resolution spectra, the signal-to-noise level at the edge of the orders is different in the short and long wavelength cameras. In the LWP and LWR cameras the signal-to-noise is always lower at the short wavelength end of the orders (except in the highest orders, see below). The contrary happens in the SWP camera, where the long wavelength edge of the orders is much noisier than the short wavelength one. Taking this into account, the cut wavelengths have been defined as follows: SWP: $$\lambda _{cut}=\lambda _{start}+(\lambda _{end}\lambda _{start})/3.$$ (1) LWP/LWR: $$\lambda _{cut}=\lambda _{start}+2\times (\lambda _{end}\lambda _{start})/3.$$ (2) where $`\lambda _{cut}`$: cut wavelength between orders m and m-1 $`\lambda _{start}`$: start wavelength of overlap region (order m-1) $`\lambda _{end}`$: end wavelength of overlap region (order m) These expressions are valid for all spectral orders except order 125 in the LWP camera and orders 120 to 125 in the LWR camera, where the S/N ratio in order m is systematically higher than in order m-1 in the overlap region. In these cases only the points of order m are taken. The above defined cut wavelengths (i.e. end wavelengths of order m) can be computed as a function of order number as: $$\lambda _{cut}(m)=A+\frac{B}{m}+\frac{C}{m^2}$$ (3) with the values of A, B and C given in Table 7. For non overlapping orders (lower than 73, 77 and 76 for SWP, LWP and LWR, respectively) only photometrically corrected pixels have been included in the concatenated spectra (see Fig. 6). The concatenated spectra cover the same spectral range as the INES low resolution spectra, i.e. 1150-1980 Å for the SWP camera, and 1850-3350 Å for LWP and LWR. Figures 4, 5 and 6 show examples of the concatenation procedure for the three cameras. #### 3.1.2 The error vector The NEWSIPS processing provides an error vector for the high resolution spectra which is computed simply as the sum along the extraction slit of the noise values for the individual pixels, as derived from the camera noise model. Unlike the “sigma” of the low resolution data, the “sigma” vector in the MXHI files is not flux calibrated but given in FN (Flux Number) units. In the INES high resolution data, the “sigma” spectrum is provided in absolute flux units. The calibration is performed by applying to the MXHI error vector the high resolution calibration and the time sensitivity degradation correction. ### 3.2 The resampled spectra In the INES Archive, each high resolution image has an associated ”rebinned” spectrum, which is obtained by rebinning the ”concatenated” spectrum at the same wavelength step size as low resolution data. This data set represents an important complement to the low resolution archive, and it is especially useful for time variability studies. The rebinned spectra have not been convolved with the low resolution Point Spread Function, and therefore have a better spectral resolution than low dispersion spectra. Examples of rebinned spectra are shown in Figures 7 and 8. The high resolution concatenated spectra (derived as described in the previous Section) are resampled into the low resolution wavelength space following the procedure detailed below. #### 3.2.1 The rebinning procedure The concatenated spectra have been resampled into the INES low resolution wavelength domain as defined in Paper I. The sampling interval is 1.6764 Å/pixel and 2.6693 Å/pixel, for the short wavelength and the long wavelength ranges, respectively, and the wavelength coverage is 1150-1980 Å for SWP and 1850-3350 Å for LWP and LWR. The resampling has been performed so that the total flux is conserved, that is, if n pixels with fluxes $`f_1`$, $`f_2`$, $`\mathrm{}`$, $`f_n`$ are rebinned into one, the total flux in the bin is: $$F=\underset{i=1}{\overset{i=n}{}}(\lambda _i\lambda _{i1})(f_i+f_{i1})/2.$$ (4) where the flux at the bin edges (i=1, i=n) is calculated by linearly interpolating between the two adjacent pixels. The flux of the final pixel is: $$Flux=F/step$$ (5) being “step” the low resolution pixel size defined above. #### 3.2.2 Errors The rebinned error spectrum is computed from the concatenated error spectrum according to the following expression: $$E=\frac{\sqrt{e_i^2}}{n}$$ (6) where $`e_i`$ are the errors of the original pixels in the concatenated spectrum. #### 3.2.3 Flagging The quality flag assigned to each pixel in the resampled spectrum is the sum of the flags of the original high resolution pixels. Only the most relevant quality flags present in the high resolution spectrum have been transmitted to the rebinned spectrum: * -8192: Missing minor frames in extracted spectrum * -1024: Saturated pixel * -16: Microphonic noise (for the LWR camera only) * -8: Potential DMU corrupted pixel * -2: Uncalibrated data point Other flags (e.g. reseau marks) have not been taken into account to avoid that a too large fraction of the pixels in the output spectrum come out flagged with error conditions, despite their quality is not significantly affected. Pixels corresponding to the gaps between non-overlapping orders are flagged with “-2”. ## 4 The Correction to the Wavelength Scale We already pointed out in Section 2 that there is a significant discrepancy between the wavelength scales of short and long wavelength spectra processed with NEWSIPS. This inconsistency, which is well above the accuracy of the wavelength calibration, was already present in IUESIPS data (Nichols-Bohlin and Fesen 1986, 1990). Table 8 presents a summary of the radial velocities of interstellar features in several stars, taken from Table 1, together with values from the literature (v(lit)). The LWP and LWR velocities of Table 1, being consistent with each other, have been averaged together into v(LW). The literature value for RR Tel is from Tackeray (1977). The value quoted for $`\zeta `$ Oph measured in the optical Ca II K and Na I D lines has been taken from Barlow et al. (1995). This value is in good agreement with the velocities derived from GHRS ultraviolet spectra by Savage et al. (1992), -14.9 km s<sup>-1</sup>, and Brandt et al. (1996), -15.4 km s<sup>-1</sup>. The velocities for $`\eta `$ UMa and $`\zeta `$ Cas correspond to measurements of the Ca II K and Na I D optical lines reported by Vallerga et al. (1993). The velocity quoted for $`\lambda `$ Lep refers to the optical Ca II doublet (Frisch et al. 1990), which presents two components at 2 and 18 km s<sup>-1</sup>, which cannot be resolved with IUE. The spectrum of HD 93521 presents up to nine interstellar components, with the two strongest ones located at, approximately, -10 and -60 km s<sup>-1</sup> (Spitzer and Fitzpatrick 1993). In the IUE spectra all these systems appear blended, and therefore, as in the case of $`\lambda `$ Lep, we cannot compare reliably the IUE velocities with the optical values. According to the data in Table 8, the mean difference between long and short wavelength (large aperture) velocities is 17.7 km s<sup>-1</sup>, i.e. SWP velocities are systematically more negative. The mean difference between the long wavelength and the literature values is 8 km s<sup>-1</sup>. A similar test was made to check the consistency between the wavelength scale of spectra taken through the large and small apertures. Being the number of small aperture high resolution spectra very limited, useful data were available only for the star $`\zeta `$ Oph in the SWP and LWR cameras. The wavelength scales of the small and large aperture spectra are fully consistent in the short wavelength range: v(LAP)-v(SAP) = 1.1$`\pm `$6.7 km s<sup>-1</sup>, while for the LWR camera a significant difference is found: v(LAP)-v(SAP) = -13.7$`\pm `$4.1 km s<sup>-1</sup>. The lack of a suitable data set precludes an accurate determination of the offset between the large and small aperture scales in LWP spectra, but the limited tests performed seem to indicate that small aperture velocities are systematically lower, although the actual difference cannot be quantified. The reason for the discrepancy between the short and long wavelength range velocity scales is not clear, while the large/small aperture discrepancy in LW spectra is most likely related to the transfer of the dispersion constants from the small to the large aperture: the dispersion relations were derived from spectra taken through the small aperture and then transferred to the large aperture on the basis of the assumed aperture separations. In order to provide an internally consistent wavelength scale within the INES system, a velocity correction of +17.7 km s<sup>-1</sup> has been applied to the wavelength scale of SWP high resolution spectra. The wavelength scale of LWP/LWR small aperture spectra has been corrected by +13.7 km s<sup>-1</sup>. With these corrections, the INES velocity scale is consistent with the optical determinations. ## 5 Conclusions In this paper we have discussed the overall quality of IUE high resolution spectra processed with the NEWSIPS system. The stability of the wavelength scale ($``$ 5 km s<sup>-1</sup>) is within the limits imposed by the acquisition and tracking accuracy. No appreciable distortions in the wavelength scale over the full spectral range or during the spacecraft lifetime have been found. A discrepancy of 9 km s<sup>-1</sup> has been found in the velocities derived from the two components of the Mg II doublet at 2800 Å in the LWP camera. The correct velocity is provided by the K line measured on spectral order 83. No such discrepancy has been found in LWR spectra. The wavelength scales of NEWSIPS short and long cameras present an inconsistency, which is well above the repeatability errors quoted above. Measurements of narrow interstellar lines have shown that SWP velocities are systematically more negative by -17.7 km s<sup>-1</sup>, on average. A similar discrepancy has been detected in long wavelength small aperture spectra, whose velocities are more negative than those from long wavelength large aperture spectra by -13.7 km s<sup>-1</sup>. The determination of the inter–order background has greatly improved with respect to the the IUESIPS system, especially at the shortest wavelengths, as shown by the absence of negative fluxes in the cores of saturated absorption lines and by the greater accuracy of equivalent width measurements. The INES system derives two spectra from each original NEWSIPS MXHI file. In the first one, the “concatenated” spectrum, the spectral orders are merged together and the overlap regions are suppressed according to an algorithm which computes suitable cut wavelengths which maximize the signal-to-noise ratio. This spectrum contains an error vector calibrated into absolute flux units, which is not available in NEWSIPS data. Since the INES high resolution spectra are obtained from NEWSIPS MXHI files, all previous considerations about the stability of the wavelength scale, the stability and accuracy of the flux scale and the validity of the background extraction are applicable to them. The second output product is the “rebinned” spectrum, which is the “concatenated” spectrum after resampling at the low resolution wavelength step. To correct for the discrepancies found in the NEWSIPS high resolution wavelength scale, a velocity correction of +17.7 km s<sup>-1</sup> for SWP spectra and of +13.5 km s<sup>-1</sup> for LWP/LWR small aperture spectra has been applied to the INES “concatenated” spectra. With this correction, the overall INES velocity scale is self–consistent, and agrees to within 8 km s<sup>-1</sup> with the optical velocity scale. ###### Acknowledgements. We would like to acknowledge the contribution of all VILSPA staff to the development and production of the INES system, and the referee, Dr. J.S. Nichols, for her useful comments.
no-problem/9910/quant-ph9910077.html
ar5iv
text
# EPR states for von Neumann algebras ## I Introduction A key ingredient in the argument of the famous paper of Einstein Podolsky and Rosen was the idea that in suitable states with perfect correlations an “element of reality” of a subsystem could be determined by measuring on a distant system, hence without any perturbation. States with such perfect correlations are nowadays used in many ways in Quantum Information Theory, and even in practice. It was therefore interesting to see a paper in today’s posting on quant-ph in which a mathematical characterization of all such cases of perfect correlation was undertaken. The present note arose from reading this paper, and trying to find the key points in the rather cumbersome arguments. Since this resulted in a much shorter argument applying to a wider context, I compiled these notes for the benefit of other readers of the archive. ## II EPR states and Doubles We will look at the general situation of a quantum system, in which two subsystems are singled out, whose observables are given by two commuting von Neumann algebras $`𝒜`$ and $``$, respectively. That is, $`𝒜`$ is an algebra of bounded operators acting on a Hilbert space $``$, which is closed under limits in the weak operator topology and the \*-operation; the same holds for $``$, and any $`A𝒜`$ and $`B`$ commute. The special case considered in was the most familiar case, namely of a tensor product Hilbert space $`=_1_2,`$ with $`𝒜`$ and $``$ the algebras of observables $`A1\mathrm{I}`$ and $`1\mathrm{I}B`$, respectively. While this covers most situations considered in quantum mechanics, and especially in quantum information theory (see, however, ), this wider framework is needed in quantum field theory and statistical mechanics of systems with infinitely many degrees of freedom. The key feature of the situation is that every observable $`A𝒜`$ can be measured jointly with every $`B`$. Now in we find the following concept: a density operator $`\rho `$ on $``$ is said to be an EPR-state for an observable $`A=A^{}𝒜`$, if there is an observable $`A^{}`$ such that the joint distribution of $`A`$ and $`A^{}`$ with respect to the state is concentrated on the diagonal<sup>*</sup><sup>*</sup>*Actually, consider only vector states, and only require the existence of a $`\stackrel{~}{A}^{}`$ and a Borel function $`g`$ such that $`A^{}=g(\stackrel{~}{A}^{})`$ satisfies the above condition. But since we may then just replace $`A^{}`$ by $`g(\stackrel{~}{A}^{})`$, this only fakes a gain in generality. In other words, $`A𝒜`$ and $`A^{}`$ are equal with probability one with respect to $`\rho `$, or, $$\mathrm{tr}\left(\rho (AA^{})^2\right)=0.$$ (1) We will call $`A^{}`$ the double of $`A`$ in $``$, and denote by $`D(𝒜,,\rho )`$ the subspace of elements $`A𝒜`$ for which a double exists. This is the object determined in in a special case. Now condition (1) can be written as $`\mathrm{tr}\left(X^{}X\right)=0`$ with $`X=\sqrt{\rho }(AA^{})`$, hence implies $`X=0`$, or $$\rho (AA^{})=(AA^{})\rho =0.$$ (2) Obviously, this equation makes sense also for non-hermitian $`A,A^{}`$, so we use it to extend the definition of doubles and of $`D(𝒜,,\rho )`$ to this case as well. Note that for vector states $`\rho =|\psi \psi |`$ this reduces to the two equations $`A\psi =A^{}\psi `$ and $`A^{}\psi =A^{}\psi `$. If $`A_1,A_2D(𝒜,,\rho )`$, we have $`A_1A_2\rho =A_1A_2^{}\rho =A_2^{}A_1\rho =A_2^{}A_1^{}\rho `$, and similarly on the other side, so $`A_2^{}A_1^{}`$ is a double of $`A_1A_2`$. This makes $`D(𝒜,,\rho )`$ an algebra. Since we can choose the double $`A^{}`$ to have the same norm as $`A`$ (truncate by a spectral projection, if necessary. This won’t make a difference on the support of $`\rho `$) a simple compactness argument for weak limits shows that $`D(𝒜,,\rho )`$ is also weakly closed, so it is a von Neumann algebra. To further identify this algebra note that, for $`AD(𝒜,,\rho )`$ and any $`A_1𝒜`$, $`\mathrm{tr}\left(\rho AA_1\right)=\mathrm{tr}\left(\rho A^{}A_1\right)=\mathrm{tr}\left(\rho A_1A^{}\right)=\mathrm{tr}\left(A^{}\rho A_1\right)=\mathrm{tr}\left(A\rho A_1\right)=\mathrm{tr}\left(\rho A_1A\right)`$. That is to say $`D(𝒜,,\rho )`$ is contained in the centralizer of $`\rho `$ in $`𝒜`$, which we will denote by $`C_\rho (𝒜)`$. Note that the centralizer does not depend on the entire density operator $`\rho `$, but only on the linear functional it induces on $`𝒜`$. So in the special case when $`𝒜`$ is isomorphic to the bounded operators on a Hilbert space $`_A`$, we can express this restriction by a density operatorIn this density operator is written as $`\rho _A=L_\psi ^{}L_\psi `$, where $`\psi _A_B`$ is the vector determining $`\rho `$, and $`L_\psi :_A_B`$ is the conjugate linear Hilbert-Schmidt operator they could have defined in a basis free way through the formula $`\psi ,\chi _A\chi _B=L_\psi (\chi _A),\chi _B`$ and an invocation of Riesz’s Theorem. $`\rho _A`$ on $`_A`$. The centralizer in this case is simply the set of operators commuting with $`\rho _A`$. In the trivial case considered in it is easy to see that, conversely, any element of the centralizer indeed has a double. In the more general situation that is not true, but there is one standard situation in which it is. Moreover, the general case can be understood completely in terms of the standard case. In this standard case $`\rho `$ is a vector state, given by a vector $`\psi `$, which is cyclic and separating for $`𝒜`$, i.e., $`𝒜\psi `$ is dense in $``$, and $`A\psi =0`$ for $`A𝒜`$ implies $`A=0`$. In this situation the modular theory of Tomita and Takesaki applies, and we get the following Theorem: TheoremLet $`𝒜`$ be a von Neumann algebra with cyclic and separating vector $`\psi `$, and set $`\rho =|\psi \psi |`$. Then $`D(𝒜,𝒜^{},\rho )=C_\rho (𝒜).`$ Moreover, the double $`A^{}𝒜^{}`$ of any $`AC_\rho (𝒜)`$ is unique. The following proof is sketchy, because it fails to explain modular theory, which is, however, well documented and accessible (e.g., ). The basic object of that theory is the unbounded conjugate linear operator $`S`$ defined by $`SA\psi =A^{}\psi `$. Its polar decomposition $`S=J\mathrm{\Delta }^{1/2}`$ yields an antiunitary involution $`J`$ such that $`J𝒜J=𝒜^{}`$. Then $`A𝒜`$ belongs to the centralizer iff $`\mathrm{\Delta }`$ commutes with $`A`$ in the sense that $`\mathrm{\Delta }^{it}A\mathrm{\Delta }^{it}=A`$, which also implies $`\mathrm{\Delta }A\psi =A\psi `$ and $`\mathrm{\Delta }A^{}\psi =\psi `$. We claim that in that case $`A^{}=JA^{}J𝒜^{}`$ is a double of $`A`$ in $`𝒜^{}`$: we have $`A^{}\psi =JA^{}J\psi =JA^{}\psi =JSA\psi =\mathrm{\Delta }A\psi =A\psi `$. For the uniqueness of the double we only need that $`\psi `$ is cyclic, which is equivalent to $`\psi `$ being separating for $`𝒜^{}`$. Then any two doubles $`A^{}`$ and $`\stackrel{~}{A}^{}`$, which have to satisfy $`A^{}\psi =A\psi =\stackrel{~}{A}^{}\psi `$ must be equal. This concludes the proof. As a corollary we can compute the algebra: $`D(𝒜,,\rho )`$ for $`𝒜^{}`$. Since a double in $``$ is also a double in $`𝒜^{}`$, it is the subalgebra of $`C_\rho (𝒜)`$ for which the doubles $`JA^{}J`$ lie in $``$. That is, $$D(𝒜,,\rho )=C_\rho (𝒜)JJ.$$ (3) To reduce the general case to the case with cyclic and separating vector for $`𝒜`$, one first enlarges the Hilbert space by a suitable tensor factor, so that $`\rho `$ extends to a pure state $`|\psi \psi |`$ on the enlarged space. Denote by $``$ and $`^{}`$ the closed subspaces generated by $`𝒜\psi `$ and $`𝒜^{}\psi `$, respectively. Then, for $`A𝒜`$, we have $`A`$, and if $`A`$ has a double in $`𝒜^{}`$, we get $`AB^{}\psi =B^{}A\psi =B^{}A^{}\psi ^{}`$, which implies that $`A^{}^{}`$. The same arguments apply to the equation $`A^{}\psi =A^{}\psi `$, so we find that both $`A`$ and its double $`A^{}`$ have to commute with both the projection $`R𝒜^{}`$ onto $`^{}`$ and the projection $`R^{}𝒜`$ onto $`^{}`$. Hence any $`AD(𝒜,,\rho )`$ can be split in $`𝒜`$ into $`A=(1\mathrm{I}R^{})A(1\mathrm{I}R^{})+R^{}AR^{}`$, where the first summand has zero as its double, and only the second summand is of interest in this problem. Similarly, any putative double can be split into an irrelevant part $`(1\mathrm{I}R)A^{}(1\mathrm{I}R)`$, which only creates non-uniqueness, and an essential part $`RA^{}R`$. Hence we may restrict consideration to the subspace $`^{}`$ on which $`\psi `$ is indeed cyclic and separating. ## III Concluding Remarks Finally, a comment seems in order about the relevance of the generalization of the concept of EPR-states to the general von Neumann algebraic setting. First of all, in quantum field theory, type I algebras (in von Neumann’s classification; i.e., those considered in ) never appear as the observable algebras of local regions, but interesting insights can be gained from studying and EPR-phenomena where spacelike separated are localized close to each other (see and references therein). Secondly, there is a conclusion in , which may seem striking at first glance, namely that an observable which possesses a double necessarily has discrete spectrum. In view of the present note this becomes immediately clear: it is an artefact of the type I situation, where all centralizers are sums of finite dimensional matrix algebras. As soon as one drops this constraint, the conclusion disappears: a prototype is the trace on a type II<sub>1</sub> factor, where the centralizer is the whole algebra, and many observables with continuous spectrum exist. In fact, such an algebra, which arises as the tensor product of infinitely many qubit pairs with maximal violations of Bell’s inequality, plays a canonical role in the study of extremely strong violations of Bell’s inequalities in .
no-problem/9910/astro-ph9910012.html
ar5iv
text
# Correlation among QPO frequencies and Quiescence-state Duration in Black Hole Candidate GRS 1915+105 ## 1 Introduction X-ray transient source GRS 1915+105 in our galaxy exhibits various types of quasi-periodic oscillations with frequencies ranging from $`0.0010.01Hz`$ to $`67Hz`$ (Morgan et al, 1997; Paul et al. 1998; Yadav et al. 1999). The object is sometimes in a flaring state with regular and quasi-regular bursts and quiescences, while at some other time it is in usual low-hard and high-soft states. While the light curves look very chaotic with no apparent similarity between observations in two different days, some of the features are classifiable: (a) low-frequency QPO ($`\nu _L0.0010.01`$Hz) is due to the transition between burst and quiescence states (which we term as ‘on’-state and ‘off’-state respectively) and vice versa; (b) the intermediate frequency QPO ($`\nu _I110`$Hz) could be due to oscillations of shocks located at tens to hundreds of Schwarzschild radii $`R_g`$ ($`=2GM/c^2`$ is the Schwarzschild radius. Here, $`M`$ is the mass of the black hole, $`G`$, and $`c`$ are the gravitational constant and velocity of light respectively) and (c) very high frequency QPO ($`\nu _H67`$Hz), if at all present, could be due to oscillations of the shocks located at several $`R_g`$. $`\nu _I`$ is generally observed during quiescence states. Typically, a shock located at $`R_s`$ (unless mentioned otherwise, measured hereafter in units of $`R_g`$), produces an oscillation of frequency, $$\nu _I=\frac{1}{t_{ff}}\frac{1}{R}R_s^\alpha \frac{cv_0}{R_g}s$$ $`(1)`$ where, $`R`$ is the compression ratio of the gas at the shock. Here we used a result of Molteni, Sponholz & Chakrabarti (1996, hereafter referred to as MSC96) which states that the time-period of QPO oscillation is comparable to the infall time ($`t_{infall}=t_{ff}RR_s^{3/2}`$) in the post-shock region. However, we assume now that the post-shock velocity is not necessarily $`R_s^{1/2}`$ dependent as in a free fall but could be slowly varying, especially when angular momentum is high. In this case, $`t_{infall}R_s^\alpha `$. Clearly, $`\alpha 3/2`$ for a low angular momentum freely falling matter and $`\alpha 1`$ for a post-shock flow of constant velocity $`v_0c/R`$. Here $`v_0`$ is a dimensionless quantity which is exactly unity for a free-fall gas. For a gas of $`\gamma =4/3`$, $`R7`$ and for $`\gamma =5/3`$, $`R4`$, when the shock is strong. Thus, for instance, for a $`\nu _I=6`$Hz, $`R_s38`$ for $`M=10M_{}`$ and $`\gamma =4/3`$. For $`\nu _H=67`$Hz, $`R_s8`$ for the same parameters. MSC96 and Chakrabarti & Titarchuk (1995, hereafter CT95) postulated that since black hole QPOs show a large amount of photon flux variation, they cannot be explained simply by assuming some inhomogeinities, or perturbations in the flow. If the QPOs are really due to shock oscillations, they should almost disappear at low energy soft X-rays, since these X-rays are produced in pre-shock flow which does not participate in large-scale oscillations. Second, a shock-compressed gas with compression ratio $`R>1`$, must produce outflows or extended corona which pass through sonic points located at at $`R_c=f_0R_s/2`$, where $`f_0=R^2/(R1)`$, if the flow is assumed to be isothermal till $`R_c`$ (Chakrabarti 1998, 1999, hereafter C98 and C99 respectively). In this solution the location of the sonic point $`R_c`$ and ratio between outflow and inflow rates are functions of the compression ratio $`R`$ of the shock alone. Till the sonic point $`R_c`$, matter is subsonic and this subsonic volume is filled in a time of (C99), $$t_{fill}=\frac{4\pi R_c^3<\rho >}{3\dot{M}_{out}},$$ $`(2)`$ where, $`<\rho >`$ is the average density of the sonic sphere, and $`\dot{M}_{out}`$ is the outflow rate. The Compton cooling becomes catastrophic when $`<\rho >R_ck_{es}\stackrel{>}{}1`$, $`k_{es}=0.4`$ is the Thomson scattering opacity. Thus the duration of the off-state (i.e., duration between the end of a burst and the beginning of the next burst) is given by, $$t_{off}=\frac{4\pi R_c^2}{3\dot{M}_{out}k_{es}}.$$ $`(3)`$ We use now a simple relation between inflow and outflow rates given by (C98, C99), $$\frac{\dot{M}_{out}}{\dot{M}_{in}}=R_{\dot{m}}=\frac{\mathrm{\Theta }_{out}}{\mathrm{\Theta }_{in}}\frac{R}{4}[\frac{R^2}{R1}]^{3/2}exp(\frac{3}{2}\frac{R^2}{R1})$$ $`(4)`$ where, $`\mathrm{\Theta }_{in}`$ and $`\mathrm{\Theta }_{out}`$ are the solid angles of the inflow and the outflow respectively. Because of the uncertainties in $`\mathrm{\Theta }_{in}`$, $`\mathrm{\Theta }_{out}`$ and $`\dot{M}_{in}`$ (subscript ‘in’ refers to the accretion rate) we define a dimensionless parameter, $$\mathrm{\Theta }_{\dot{M}}=\frac{\mathrm{\Theta }_{out}}{\mathrm{\Theta }_{in}}\frac{\dot{M}_{in}}{\dot{M}_{Edd}}.$$ $`(5)`$ where, $`\dot{M}_{Edd}`$ is the Eddington rate. Using Eqs. (4-5), we get the following expression for $`t_{off}`$ as, $$t_{off}=\frac{10.47}{(R1)^{1/2}}\frac{R_s^2R_g^2exp(f_0\frac{3}{2})}{\dot{M}_{Edd}\mathrm{\Theta }_{\dot{M}}}s.$$ $`(6)`$ Or, eliminating shock location $`R_s`$ using eq. (1) and $`\alpha =3/2`$, $`v_0=1`$, we obtain, $$t_{off}=14.1\frac{exp(f_0\frac{3}{2})}{R^{4/3}(R1)^{1/2}\mathrm{\Theta }_{\dot{M}}}(\frac{M}{10M_{}})^{1/3}\nu _I^{4/3}s$$ $`(7)`$ For an average shock of strength $`2.5\stackrel{<}{}R\stackrel{<}{}3.3`$, the result is insensitive to the compression ratio. Using average value of $`R=2.9`$ and for $`\mathrm{\Theta }_{\dot{M}}0.1`$ (which corresponds to $`0.1`$ Eddington rate for $`\mathrm{\Theta }_{out}\mathrm{\Theta }_{in}`$) we get, $$t_{off}=461.5(\frac{0.1}{\mathrm{\Theta }_{\dot{M}}})(\frac{M}{10M_{}})^{1/3}\nu _I^{4/3}s.$$ $`(8)`$ Thus, the duration of the off-state must go down rapidly as the QPO frequency increases if the flow geometry and the net accretion rate remains fixed. When one considers a constant velocity post-shock flow, $`\alpha =1`$, and $`v_0=0.066`$ (chosen so as to keep the same numerical coefficient as in eq. 8) the above equation is changed to, $$t_{off}=461.5(\frac{0.1}{\mathrm{\Theta }_{\dot{M}}})(\frac{M}{10M_{}})^1(\frac{v_0}{0.066})^2\nu _I^2s.$$ $`(9)`$ Interestingly, $`v_0=0.066`$ (i.e., a constant velocity of seven percent of the velocity of light) is very reasonable for a black hole accretion. If the hot post-shock gas of height $`R_s`$ intercepts $`n`$ soft photons per second, from the pre-shock Keplerian component (CT95), it should intercept about $`nf_0^2/4`$ soft photons per second when the sonic sphere of size $`R_c`$ is filled in. Thus, the photon flux in the burst state should be about $`f_0^2/4\stackrel{>}{}4`$ times larger compared to the photon flux in off-state. Depending on the degree of flaring, and the fact that the wind is bent backward due to centrifugal force, the interception may be higher. Since $`\nu _L`$ is basically due to recurrences of on and off-states, it is clear that $`\nu _L1/<t_{off}+t_{on}>`$. Here, $`t_{on}`$ is the duration of the burst state which may be very small for extremely regular (spiky) bursts reported by in Taam, Chen & Swank (1997) and Yadav et al. 1999. In this case, $$\nu _L=0.0022(\frac{\mathrm{\Theta }_{\dot{M}}}{0.1})(\frac{10M_{}}{M})\nu _I^2\mathrm{Hz}.$$ $`(10)`$ When on-state has a non-negligible duration ($`t_{on}0`$), it is found to be directly related to the $`t_{off}`$ (Belloni et al., 1997; Yadav et al. 1999). Assuming $`t_{on}t_{off}`$, the $`\nu _L`$ would be less by a factor of two when on states are broad. When the burst is very regular but ‘spiky’ (i.e., with momentary on-state), $`t_{on}0`$. The presence of $`\nu _L`$ for these regular ‘spiky’ bursts are reported in Manickam and Chakrabarti (1999a) Thus, if our shock oscillation solution for QPO is correct, the observations must pass all the following tests: (a) the QPO in the off-state must disappear at low energies, (b) the QPO must generally be absent in the on-state, when the sonic sphere is cooled down, (c) the intermediate QPO frequency must be correlated with $`t_{off}`$ as in Eqs. (8-9) and (d) the photon flux must jump at least a factor of $`4`$ or more when going from quiescence to burst state. In addition, (e) lowest frequency $`\nu _L`$ observed must be correlated to the intermediate QPO frequency $`\nu _I`$ by eq. (10). There are uncertainties regarding the inflow velocity and actual volume-filling time, but we expect that above relations to be satisfied in general. In the present Letter we show that observations do pass through these tests and therefore the shock oscillation model may be the correct picture. In the next Section, we present detailed analysis of some of the observational results on GRS1915+105 available in public archive and show how they point to the shock oscillation model. Finally in §3, we make concluding remarks. ## 2 Observational Results Figure 1 shows a light curve of the first phase of observation of June 18th, 1997, on the right panel. The average count rate (per second) vary from around $`5000`$ in the off-state to about $`24,000`$ in the on-state. The ratio of the fluxes is about $`5`$. The duration of these states vary chaotically. At the mean location of a few off-states (arrows on right axis), observation time is marked in seconds. For each of these off-states, the power density spectrum (PDS) in arbitrary units is drawn in the left panel. The most prominent QPO frequencies ($`\nu _I`$ in our notation) are connected by a dashed curve just to indicate its variation with time. There are some weaker peaks which follow the short dashed curve, indicating that they may be higher harmonics. Observations of this kind for several other days show similar variations in QPO frequencies and details are presented elsewhere (Manickam & Chakrabarti, 1999ab). In Figure 2, variation of $`\nu _I`$ with the duration $`t_{off}`$ of the off-states (triangles) for the whole observation period on June 18th, 1997 in the log-log scale. Observational results from several other days (May 26th, 1997; June 9th, 1997; June 25, 1997; October 7th, 1996; October 25th, 1996) are also plotted on the same curve with circles, filled squares, squares, filled circles and stars respectively. We did not put error bars since in duration scale is it uniformly $`\pm 2`$ seconds, and in frequency scale error bar is decided by the chosen bin-size while obtaining the PDS (In our analysis it remains around 0.15-0.25 Hz, increasing monotonically with QPO frequency). Times at which the photon flux is halved during the on-to-off and off-to-on transitions are taken respectively to be the beginning and the end of an off-state. Equation (9) is plotted in dashed lines with (from uppermost to the lowermost line) $`\mathrm{\Theta }_{\dot{M}}=0.0034`$, $`0.0123`$, $`0.0163`$, $`0.028`$, $`0.0293`$ and $`0.043`$ respectively indicating a slow variation of the accretion rate, provided the collimation property remains the same. Two dotted curves, on the other hand, represent eq. (8), with $`\mathrm{\Theta }_{\dot{M}}=0.06`$ (top) and $`0.093`$ (bottom) respectively. We find that the inverse-squared law (eq. 9) may be a better fit to the observations. Since on Oct. 7th, 1996 the points are lumped closed to the lower right corner, not much could be said about whether it follows our relation or not, but it is to be noted that its general behaviour (low frequency, high duration) follows our result for any reasonable $`\mathrm{\Theta }_{\dot{M}}`$. Table 1 shows the variation of the $`\nu _I`$ (taken from Fig. 2) with days of observations. In 3rd column, the expected $`\nu _L`$ has been put ($`\nu _L/2`$ for Oct. 7 and June 18 results as they show $`t_{on}t_{off}`$). In 4th Column, the observed $`\nu _L`$ is given. Generally, what we observe is that, when the drift of $`\nu _I`$ is large, PDS around $`\nu _L`$ is also broad In any case, observed $`\nu _L`$ agrees with our expectations. Figure 3 shows the PDS of the off-state centered at 1576s (see Fig. 1) of the June 18th observation. The energy range is given in each panel Clearly, QPO disappears completely at low energies, exactly as is expected in the shock oscillation model (MSC96) though QPO frequencies, when present, seem to be energy independent (see, Manickam & Chakrabarti, 1999b for details). The pre-shock flow which emits soft radiation participates little in the oscillation as the fractional change in the Keplerian disk due to shock oscillation is negligible. On the contrary, the fractional change in the size of the post-shock flow during the oscillation is large. Thus, the flux of hard X-rays oscillates as the size of the post-shock region oscillates. This is the cause of QPO in our model. In on-states, when they exist, the QPO is found to be very weak. ## 3 Discussion and Conclusions In this Letter, we have discovered a relation between the QPO frequency in 1-10Hz range with the duration of the quiescence state at which the QPO is observed. We also derived a relation between the low QPO frequency and the intermediate QPO frequency. We analyzed several days of RXTE observations and showed that our relations are satisfied, especially when the average bulk velocity in the post-shock region is constant. We showed that the QPO disappears in the low energy, but is very strong in high energies. The photon flux is found to fluctuate, typically by a factor of $`4`$ or more, indicating that a vertically inflated post-shock region is responsible for interception of the soft photons from a Keplerian disk. This factor seems to be similar to $`f_0^2/4`$ (C98, C99) for any reasonable compression of the gas, which strengthens our belief that the quasi-periodic cooling of the sonic sphere of the outflow from the post-shock region may be responsible for the rapid transitions between on and off-states. Our computation of the duration of the off-states from this considerations are found to be quite reasonable. We find that for $`\nu ^{4/3}`$ law, $`t_{off}`$ is insensitive to the mass of the black hole while for $`\nu ^2`$ law, $`t_{off}`$ is inversely proportional to the mass. Trudolyubov et al. (1999) recently found the duration of ‘hard’ states varies as $`7/3`$ power of the lowest centroid frequency for a group of data while we find an inverse-squared law when we choose the QPO frequency where the power is strongest. Although we chose a specific model for the outflow (C99) for concreteness, the physical processes invoked are generic and the explanation should be valid even when other models for outflows are used (except self-similar models). The shock location near the inner edge of the Keplerian disk can drift on viscous time scale (see appendix of CT95 where the transition from Keplerian to sub-Keplerian is plotted as a function of viscosity). The shocks can evacuate the disk, and form once again, very similar to what was seen in the numerical simulation of Ryu et al. (1997). This drift would cause a drift in frequency as Trudolyubov et al. (1999) recently showed (see also, Belloni et al. 1997; Markwardt, Swank & Taam 1999, Muno et al. 1999). Our model invokes also outflows (which we believe form naturally in the post-shock region and from the transition region from Keplerian to a sub-Keplerian flow) which we find useful to explain variations in photon counts between ‘off’ and ‘on’ states. This work is partly supported by a project (Quasi Periodic Oscillations in Black Hole Candidates) funded by Indian Space Research Organization (ISRO). The authors thank NASA for making RXTE data available and ISRO for creating a Data Bank at their Centre where these data are stored.
no-problem/9910/cond-mat9910101.html
ar5iv
text
# Bloch-like oscillations induced by charge discreteness in quantum mesoscopic rings. (Universidad de Tarapacá, Departamento de Física, Casilla 7-D, Arica, Chile) We study the effect of charge discreteness in a quantum mesoscopic ring wih inductance $`L`$. The ring is pierced by a time depending external magnetic field. When the external magnetic flux varies uniformly, the current induced in the ring oscillates with a frequency proportional to the charge discreteness and the flux variation. This phenomenon is very similar to the well known Bloch’s oscillation in crystals. The similitude is related to the charge discreteness in the charge-current representation, which plays the same role as the constant lattice in crystals. PACS: 03.65 Quantum Mechanics. 05.60.G Quantum Transport Process. 07.50.E Electronics Circuits. 84.30.B Circuits Theory. Recent advances in the development of mesoscopic physics, have allowed an increasing degree of miniaturization and some parallel advances in nanoelectronics. On this respect, the quantization of mesoscopic electrical circuits appears as a natural task to undertake. In this article we discuss the effects of a time depending magnetic flux, $`\varphi _{ext}(t)`$, acting on a mesoscopic ring (perfect conductor) with self-inductance $`L,`$ producing in this way the equivalent to a nondissipative circuit. From the classical point of view, the motion equation for the current can be obtained using energy balance for this nondissipative circuit. The electrical power $`P`$ transferred to a mesoscopic ring, by an external magnetic field $`B_{ext}(t),`$ is given by $$P=I\epsilon =I\left(\frac{d\varphi _{ext}}{dt}\right),$$ (1) where $`I`$ is the induced current; but, in the slow time variation regime, this power is used to overcome the electromotive force in the self-inductance $`L,`$ as the electric current $`I`$ is setting up, i.e., $$P=I\epsilon =I\left(L\frac{dI}{dt}\right).$$ (2) In this way, from (1) and (2), we obtain the relationship: $$L\frac{dI}{dt}=\left(\frac{d\varphi _{ext}}{dt}\right).$$ (3) Because of the similitude between electric circuits and particle dynamics, the quantization of circuits seems straightforward . Nevertheless, as pointed out by Li and Chen , the charge discreteness must be considered in the quantization process. Let $`q_e`$ be the elementary charge and consider the charge operator $`\widehat{Q}`$ as given by (spectral decomposition) $$\widehat{Q}=q_e\underset{n}{}nnn,$$ (4) where $`n`$ is an integer. Following the references , and from the motion equation (3), the Hamiltonian of the ring in the charge representation is given by $$\widehat{H}=\frac{\mathrm{}^2}{2q_e^2L}\underset{n}{}\left\{nn+1+n+1n2nn\right\}$$ $$\frac{d\varphi _{ext}}{dt}q_e\underset{n}{}nnn.$$ (5) Moreover, the current operator $`\widehat{I}=\frac{1}{i\mathrm{}}[\widehat{H},\widehat{Q}]`$ is given explicitly by $$\widehat{I}=\frac{\mathrm{}}{2iLq_e}\underset{n}{}\left\{nn+1n+1n\right\}.$$ (6) The eigenstates and eigenvalues of the operator $`\widehat{I}`$ are easily found. In fact, the eigenstates are $$I_k=\underset{n}{}e^{ikn}n,$$ (7) where the quantum number $`k`$ runs between $`0`$ and $`2\pi `$. The current operator $`\widehat{I}`$ acting on the eigenstates (7) gives: $$\widehat{I}I_k=\frac{\mathrm{}}{Lq_e}\mathrm{sin}(k)I_k,$$ (8) that is, the eigenvalues $`I_k`$ of the operator $`\widehat{I}`$ are $$I_k=\frac{\mathrm{}}{Lq_e}\mathrm{sin}(k),$$ (9) which are bounded since $`\left|I_k\right|\mathrm{}/Lq_e`$. As it was said before, we will study a mesoscopic ring with self-inductance $`L`$ which is pierced by a magnetic field $`B_{ext}(t)`$ producing a time depending flux $`\varphi _{ext}(t)`$. In fact, we will show that, if we begin with one eigenstate of the current operator then, the dynamic evolution is related to a series of states with index $`k`$. Explicitly, $`k(t)=\varphi _{ext}(t)+k_o,`$ and then, for a homogeneously increasing magnetic flux, an oscillating behavior of the current exists. The frequency of the oscillations depends on the external flux variation, and is given by $`\omega =\frac{q_e}{\mathrm{}}\left(\frac{d\varphi _{ext}}{dt}\right),`$which is a constant, if $`\left(\frac{d\varphi _{ext}}{dt}\right)`$ is a constant. We stress the similitude between this behavior and Bloch’s oscillations in crystals under an external dc field \[4-7\]. In our case, it is the charge discreteness which plays a role equivalent to the lattice constant. In order to find the oscillations, we proceed as follows: let $`k(t)`$ be the state at time $`t`$ which is assumed as an eigenstate of the current operator $`\widehat{I}`$. Let $`k(t+\mathrm{\Delta }t)`$ be the state of the systems at time $`t+\mathrm{\Delta }t`$. To show that this state is also an eigenstate of the current operator, we use the first order evolution equation $$k(t+\mathrm{\Delta }t)=k(t)+\frac{\mathrm{\Delta }t}{i\mathrm{}}\widehat{H}k(t).$$ (10) Using the commutator $$[\widehat{I},\widehat{H}]=\frac{\mathrm{}}{2iL}\left(\frac{d\varphi _{ext}}{dt}\right)\underset{n}{}\left\{nn+1+n+1n\right\},$$ (11) and neglecting the second order terms ($`\mathrm{\Delta }t^2`$), we obtain $$\widehat{I}k(t+\mathrm{\Delta }t)=\left(I_k+\frac{\mathrm{\Delta }t}{L}\frac{d\varphi _{ext}}{dt}\mathrm{cos}k\right)k(t+\mathrm{\Delta }t).$$ (12) That is, $`k(t+\mathrm{\Delta }t)`$ is an eigenstate of the current operator with eigenvalue $`\left(I_k+\frac{\mathrm{\Delta }t}{L}\frac{d\varphi _{ext}}{dt}\mathrm{cos}k\right)`$. So, if the state of the system is initially an eigenstate of the current operator, then it always evolves toward a state of the current with quantum number $`k(t).`$ Clearly, from (12) and going to the limit $`\mathrm{\Delta }t0,`$ we obtain the evolution equation for the quantum number $`k`$ (acceleration theorem ): $$\frac{dk}{dt}=\frac{q_e}{\mathrm{}}\frac{d\varphi _{ext}}{dt}.$$ (13) In this way, $`k`$ has a linear behavior with respect to the external flux $$k(t)=\frac{q_e}{\mathrm{}}\varphi _{ext}(t)+k_o,$$ (14) assuming that $`\varphi _{ext}(0)=0.`$ If we consider that the magnetic flux varies uniformly with time: $$\varphi _{ext}(t)=\alpha t,$$ (15) the quantum number $`k`$ becomes uniformly accelerated and then, the current $`k(t)\widehat{I}k(t)`$ oscillates with a frequency $$\omega =\frac{q_e}{\mathrm{}}\alpha .$$ (16) As it was said before, these oscillations in the current, and the charge, are equivalent to Bloch’s oscillations in crystals under an external dc electric field. This analogy is very much related to charge quantization (4), which plays a role similar to the constant lattice in a crystal. Finally we note that a Hamiltonian like that described by equation (5), under an external dc electric field has been extensively studied in solid state physics (tight-binding Hamiltonian). All eigenstates are factorial localized and the spectrum is discrete . Also, we want to emphasize here that, as showed in , the discretization process related to a Hamiltonian like (5) is not univocal.
no-problem/9910/hep-ph9910507.html
ar5iv
text
# 1 Introduction ## 1 Introduction Extra neutral gauge bosons are a feature of many models of physics beond the Standard Model (SM). If discovered they would represent irrefutable proof of new physics, most likely that the SM gauge group must be extended . The search for the $`Z^{}`$ is included in the physics programme of all the present and future high energy collider facilities. In particular, the strategies for the experimental determination of the $`Z^{}`$ couplings to the ordinary SM degrees of freedom, and the relevant discovery limits, have been discussed in the large, and still growing, literature on this subject -. Taking into account the limit $`M_Z^{}>600700\mathrm{GeV}`$ from ‘direct’ searches at the Tevatron , only ‘indirect’ (or virtual) manifestations of the $`Z^{}`$ can be expected at LEP2 and at the planned $`e^+e^{}`$ linear collider (LC) with CM energy $`\sqrt{s}=500`$ GeV . Such effects would be represented by deviations from the calculated SM predictions of the measured observables relevant to the different processes. In this regard, of particular interest for the LC is the annihilation into fermion pairs $$e^++e^{}\overline{f}+f,$$ (1) that gives information on the $`Z^{}ff`$ interaction. In the case of no observed signal within the experimental accuracy, limits on the $`Z^{}`$ parameters to a conventionally defined confidence level can be derived, either from a general analysis taking into account the full set of possible $`Z^{}`$ couplings to fermions, or in the framework of specific models where characteristic relations among the couplings strongly reduce the number of independent free parameters. Clearly, completely model-independent limits can result only in the optimal situation where the different couplings can be disentangled, by means of suitable observables, and analysed independently so as to avoid potential cancellations. The essential role of the initial electron beam polarization has been repeatedly emphasized in this regard, and the potential of the linear collider along these lines has been extensively reviewed, e.g., in Refs. . The same need of a procedure to disentangle the different $`Z^{}`$ couplings arises in the case where deviations from the SM were experimentally observed. Indeed, in this situation, the numerical values of the individual couplings must be extracted from the measured deviations in order to identify the source of these effects and to make tests of the various theoretical models. In what follows, we discuss the role of two particular, polarized, variables $`\sigma _+`$ and $`\sigma _{}`$ in the analysis of the $`Z^{}ff`$ interaction from both points of view, namely, the derivation of model-independent limits in the case of no observed deviation and the sensitivity to individual couplings and model identification in the hypothesis of observed deviations. These observables could directly distinguish the helicity cross sections of process (1) and, therefore, depend on a minimal number of independent free parameters (basically, the product of the $`Z^{}`$ chiral couplings to electrons and to the fermionic final state). They have been previously introduced to study $`Z^{}`$ effects at LEP2 (no polarization there) and manifestations of four-fermion contact interactions at the LC . Here, we extend the analysis of to the case of the LC with polarized beams. For illustration, we will explicitly consider a specific class of $`E_6`$-motivated models and of Left-Right symmetric models. ## 2 Polarized observables The polarized differential cross section for process (1) with $`fe,t`$ is given in Born approximation by the $`s`$-channel $`\gamma `$, $`Z`$ and $`Z^{}`$ exchanges. Neglecting $`m_f`$ with respect to the CM energy $`\sqrt{s}`$, it has the form $$\frac{d\sigma }{d\mathrm{cos}\theta }=\frac{3}{8}\left[(1+\mathrm{cos}\theta )^2\stackrel{~}{\sigma }_++(1\mathrm{cos}\theta )^2\stackrel{~}{\sigma }_{}\right],$$ (2) where, in terms of helicity cross sections $$\stackrel{~}{\sigma }_+=\frac{1}{4}\left[(1+P_e)(1P_{\overline{e}})\sigma _{RR}+(1P_e)(1+P_{\overline{e}})\sigma _{LL}\right],$$ (3) $$\stackrel{~}{\sigma }_{}=\frac{1}{4}\left[(1+P_e)(1P_{\overline{e}})\sigma _{RL}+(1P_e)(1+P_{\overline{e}})\sigma _{LR}\right],$$ (4) with ($`\alpha ,\beta =L,R`$) $$\sigma _{\alpha \beta }=N_C\sigma _{pt}|A_{\alpha \beta }|^2.$$ (5) In these equations, $`\theta `$ is the angle between the initial electron and the outgoing fermion in the CM frame; $`N_C`$ the QCD factor $`N_C3(1+\frac{\alpha _s}{\pi })`$ for quarks and $`N_C=1`$ for leptons, respectively; $`P_e`$ and $`P_{\overline{e}}`$ are the degrees of longitudinal electron and positron polarization; $`\sigma _{\mathrm{pt}}\sigma (e^+e^{}\gamma ^{}l^+l^{})=(4\pi \alpha _{e.m.}^2)/(3s)`$; $`A_{\alpha \beta }`$ are the helicity amplitudes. According to Eqs. (3) and (4), the cross sections for the different combinations of helicities, that carry the information on the individual $`Z^{}ff`$ couplings, can be disentangled via the measurement of $`\stackrel{~}{\sigma }_+`$ and $`\stackrel{~}{\sigma }_{}`$ with different choices of the initial beams polarization. Instead, the total cross section and the forward-backward asymmetry, defined as: $$\sigma =\sigma ^\mathrm{F}+\sigma ^\mathrm{B};A_{\mathrm{FB}}=(\sigma ^\mathrm{F}\sigma ^\mathrm{B})/\sigma ,$$ (6) with $`\sigma ^\mathrm{F}=_0^1(d\sigma /d\mathrm{cos}\theta )d\mathrm{cos}\theta `$ and $`\sigma ^\mathrm{B}=_1^0(d\sigma /d\mathrm{cos}\theta )d\mathrm{cos}\theta `$, depend on linear combinations of all helicity cross sections even for longitudinally polarized initial beams. One can notice the relation $$\stackrel{~}{\sigma }_\pm =0.5\sigma \left(1\pm \frac{4}{3}A_{\mathrm{FB}}\right)=\frac{7}{6}\sigma _{\mathrm{F},\mathrm{B}}\frac{1}{6}\sigma _{\mathrm{B},\mathrm{F}}.$$ (7) Alternatively, one can directly project out $`\stackrel{~}{\sigma }_+`$ and $`\stackrel{~}{\sigma }_{}`$ from Eq. (2), as differences of integrated observables. To this aim, we define $`z^{}>0`$ such that $$\left(_z^{}^1_1^z^{}\right)\left(1\mathrm{cos}\theta \right)^2d\mathrm{cos}\theta =0.$$ (8) Numerically, $`z^{}=2^{2/3}1=0.59`$, corresponding to $`\theta ^{}=54^{}`$,<sup>2</sup><sup>2</sup>2In the case of a reduced angular range $`|\mathrm{cos}\theta |<c`$, one has $`z^{}=(1+3c^2)^{1/3}1`$. and for this value of $`z^{}`$: $$\left(_z^{}^1_1^z^{}\right)\left(1+\mathrm{cos}\theta \right)^2d\mathrm{cos}\theta =8\left(2^{2/3}2^{1/3}\right).$$ (9) From Eq. (2) one can easily see that the observables $`\sigma _+`$ $``$ $`\sigma _{1+}\sigma _{2+}=\left({\displaystyle _z^{}^1}{\displaystyle _1^z^{}}\right){\displaystyle \frac{d\sigma }{d\mathrm{cos}\theta }}d\mathrm{cos}\theta ,`$ (10) $`\sigma _{}`$ $``$ $`\sigma _1\sigma _2=\left({\displaystyle _1^z^{}}{\displaystyle _z^{}^1}\right){\displaystyle \frac{d\sigma }{d\mathrm{cos}\theta }}d\mathrm{cos}\theta `$ (11) are such that $$\stackrel{~}{\sigma }_\pm =\frac{1}{3\left(2^{2/3}2^{1/3}\right)}\sigma _\pm =1.02\sigma _\pm .$$ (12) Therefore, for practical purposes one can identify $`\sigma _\pm \stackrel{~}{\sigma }_\pm `$ to a very good approximation. Although the two definitions are practically equivalent from the mathematical point of view, in the next Section we prefer to use $`\sigma _\pm `$, that are found more convenient to discuss the expected uncertainties and the corresponding sensitivities to the $`Z^{}`$ couplings. Also, it turns out numerically that $`z^{}=0.59`$ in (10) and (11) maximizes the statistical significance of the results. The helicity amplitudes $`A_{\alpha \beta }`$ in Eq. (5) can be written as $$A_{\alpha \beta }=(Q_e)_\alpha (Q_f)_\beta +g_\alpha ^eg_\beta ^f\chi _Z+g_{}^{}{}_{\alpha }{}^{e}g_{}^{}{}_{\beta }{}^{f}\chi _Z^{},$$ (13) in the notation where the general neutral-current interaction is written as $$L_{NC}=eJ_\gamma ^\mu A_\mu +g_ZJ_Z^\mu Z_\mu +g_Z^{}^\mu J_Z^{}^\mu Z_\mu ^{}.$$ (14) Here, $`e=\sqrt{4\pi \alpha _{e.m.}}`$; $`g_Z=e/s_Wc_W`$ ($`s_W^2=1c_W^2\mathrm{sin}^2\theta _W`$) and $`g_Z^{}`$ are the $`Z`$ and $`Z^{}`$ gauge couplings, respectively. Moreover, in (13), $`\chi _i=s/(sM_i^2+iM_i\mathrm{\Gamma }_i)`$ are the gauge boson propagators with $`i=Z`$ and $`Z^{}`$, and the $`g`$’s are the left- and right-handed fermion couplings. The fermion currents that couple to the neutral gauge boson $`i`$ are expressed as $`J_i^\mu =_f\overline{\psi }_f\gamma ^\mu (L_i^fP_L+R_i^fP_R)\psi _f`$, with $`P_{L,R}=(1\gamma _5)/2`$ the projectors onto the left- and right-handed fermion helicity states. With these definitions, the SM couplings are $$R_\gamma ^f=Q_f;L_\gamma ^f=Q_f;R_Z^f=Q_fs_W^2;L_Z^f=I_{3L}^fQ_fs_W^2,$$ (15) where $`Q_f`$ are fermion electric charges, and the couplings in Eq. (13) are normalized as $$g_L^f=\frac{g_Z}{e}L_Z^f,g_R^f=\frac{g_Z}{e}R_Z^f,g_{}^{}{}_{L}{}^{f}=\frac{g_Z^{}}{e}L_Z^{}^f,g_{}^{}{}_{R}{}^{f}=\frac{g_Z^{}}{e}R_Z^{}^f.$$ (16) In what follows, we will limit ourselves to a few representative models predicting new gauge heavy bosons. Specifically, models inspired by GUT inspired scenarios, superstring-motivated ones, and those with Left-Right symmetric origin . These are the $`\chi `$ model occurring in the breaking $`SO(10)SU(5)\times U(1)_\chi `$, the $`\psi `$ model originating in $`E_6SO(10)\times U(1)_\psi `$, and the $`\eta `$ model which is encountered in superstring-inspired models in which $`E_6`$ breaks directly to a rank-5 group. As an example of Left-Right model, we consider the particular value $`\kappa =g_R/g_L=1`$, corresponding to the most commonly considered case of Left-Right Symmetric Model (LR). For all such grand-unified $`E_6`$ and Left-Right models the $`Z^{}`$ gauge coupling in (14) is $`g_Z^{}=g_Zs_W`$ . As they are constrained from present low-energy data and from recent data from the Tevatron , new vector boson effects at the LC are expected to be quite small and therefore should be disentangled from the radiative corrections to the SM Born predictions for the cross section. To this aim, in our numerical analysis we follow the strategy of Refs. -, in particular we use the improved Born approximation accounting for the electroweak one-loop corrections. ## 3 Model independent $`Z^{}`$ search and discovery limits According to Eqs. (3), (4) and (12), by the measurements of $`\sigma _+`$ and $`\sigma _{}`$ for the different initial electron beam polarizations one determines the cross sections related to definite helicity amplitudes $`A_{\alpha \beta }`$. From Eq. (13), one can observe that the $`Z^{}`$ manifests itself in these amplitudes by the combination of the product of couplings $`g_\alpha ^eg_\beta ^f`$ with the propagator $`\chi _Z^{}`$. In the situation $`\sqrt{s}M_Z^{}`$ we shall consider here, only the interference of the SM term with the $`Z^{}`$ exchange is important and the deviation of each helicity cross section from the SM prediction is given by $$\mathrm{\Delta }\sigma _{\alpha \beta }\sigma _{\alpha \beta }\sigma _{\alpha \beta }^{SM}=N_C\sigma _{\mathrm{pt}}\mathrm{\hspace{0.17em}2}\mathrm{Re}\left[\left(Q_eQ_f+g_\alpha ^eg_\beta ^f\chi _Z\right)\left(g_{}^{}{}_{\alpha }{}^{e}g_{}^{}{}_{\beta }{}^{f}\chi _Z^{}^{}\right)\right].$$ (17) As one can see, $`\mathrm{\Delta }\sigma _{\alpha \beta }`$ depend on the same kind of combination of $`Z^{}`$ parameters and, correspondingly, each such combination can be considered as a single ‘effective’ nonstandard parameter. Therefore, in an analysis of experimental data for $`\sigma _{\alpha \beta }`$ based on a $`\chi ^2`$ procedure, a one-parameter fit is involved and we may hope to get a slightly improved sensitivity to the $`Z^{}`$ with respect to other kinds of observables. As anticipated, in the case of no observed deviation one can evaluate in a model-independent way the sensitivity of process (1) to the $`Z^{}`$ parameters, given the expected experimental accuracy on $`\sigma _+`$ and $`\sigma _{}`$. It is convenient to introduce the general parameterization of the $`Z^{}`$-exchange interaction used, e.g., in Refs. : $$G_L^f=L_Z^{}^f\sqrt{\frac{g_Z^{}^2}{4\pi }\frac{M_Z^2}{M_Z^{}^2s}},G_R^f=R_Z^{}^f\sqrt{\frac{g_Z^{}^2}{4\pi }\frac{M_Z^2}{M_Z^{}^2s}}.$$ (18) An advantage of introducing the ‘effective’ left- and right-handed couplings of Eq. (18) is that the bounds can be represented on a two-dimensional ‘scatter plot’, with no need to specify particular values of $`M_Z^{}`$ or $`s`$. Our $`\chi ^2`$ procedure defines a $`\chi ^2`$ function for any observable $`𝒪`$: $$\chi ^2=\left(\frac{\mathrm{\Delta }𝒪}{\delta 𝒪}\right)^2,$$ (19) where $`\mathrm{\Delta }𝒪𝒪(Z^{})𝒪(SM)`$ and $`\delta 𝒪`$ is the expected uncertainty on the considered observable combining both statistical and systematic uncertainties. The domain allowed to the $`Z^{}`$ parameters by the non-observation of the deviations $`\mathrm{\Delta }𝒪`$ within the accuracy $`\delta 𝒪`$ will be assessed by imposing $`\chi ^2<\chi _{\mathrm{crit}}^2`$, where the actual value of $`\chi _{\mathrm{crit}}^2`$ specifies the desired ‘confidence’ level. The numerical analysis has been performed by means of the program ZEFIT, adapted to the present discussion, which has to be used along with ZFITTER , with input values $`m_{top}=175`$ GeV and $`m_H=300`$ GeV. In the real case, the longitudinal polarization of the beams will not exactly be $`\pm 1`$ and, consequently, instead of the pure helicity cross section, the experimentally measured $`\sigma _\pm `$ will determine the linear combinations on the right hand side of Eqs. (3) and (4) with $`|P_e|`$ (and $`|P_{\overline{e}}|`$) less than unity. Thus, ultimately, the separation of $`\sigma _{RR}`$ from $`\sigma _{LL}`$ will be obtained by solving the linear system of two equations corresponding to the data on $`\sigma _+`$ for, e.g., both signs of the electron longitudinal polarization. The same is true for the separation of $`\sigma _{RL}`$ and $`\sigma _{LR}`$ using the data on $`\sigma _{}`$. In the ‘linear’ approximation of Eq. (17), and with $`M_Z^{}\sqrt{s}`$, the constraints from the condition $`\chi ^2<\chi _{\mathrm{crit}}^2`$ can be directly expressed in terms of the effective couplings (18) as: $$|G_\alpha ^eG_\beta ^f|<\frac{\alpha _{e.m.}}{2}\sqrt{\chi _{crit}^2}\left(\frac{\delta \sigma _{\alpha \beta }^{SM}}{\sigma _{\alpha \beta }^{SM}}\right)|A_{\alpha \beta }^{SM}|\frac{M_Z^2}{s}.$$ (20) We need to evaluate the expected uncertainties $`\delta \sigma _{\alpha \beta }`$. To this aim, starting from the discussion of $`\sigma _+`$, we consider the solutions of the system of four equations corresponding to $`P_e=\pm P`$ and $`P_{\overline{e}}=0`$ in Eqs. (3) and (4): $`\sigma _{\mathrm{LL}}`$ $`=`$ $`{\displaystyle \frac{1+P}{P}}\sigma _+(P){\displaystyle \frac{1P}{P}}\sigma _+(P),`$ (21) $`\sigma _{\mathrm{RR}}`$ $`=`$ $`{\displaystyle \frac{1+P}{P}}\sigma _+(P){\displaystyle \frac{1P}{P}}\sigma _+(P),`$ (22) $`\sigma _{\mathrm{LR}}`$ $`=`$ $`{\displaystyle \frac{1+P}{P}}\sigma _{}(P){\displaystyle \frac{1P}{P}}\sigma _{}(P),`$ (23) $`\sigma _{\mathrm{RL}}`$ $`=`$ $`{\displaystyle \frac{1+P}{P}}\sigma _{}(P){\displaystyle \frac{1P}{P}}\sigma _{}(P).`$ (24) From these relations, adding the uncertainties, e.g. $`\delta \sigma _+(\pm P)`$ on $`\sigma _+(\pm P)`$ in quadrature, $`\delta \sigma _{RR}`$ has the form $$\delta \sigma _{RR}=\sqrt{\left(\frac{1+P}{P}\right)^2\left(\delta \sigma _+(P)\right)^2+\left(\frac{1P}{P}\right)^2\left(\delta \sigma _+(P)\right)^2},$$ (25) and $`\delta \sigma _{LL}`$ can be expressed quite similarly. Also, we combine statistical and systematic uncertainties in quadrature. In this case, if $`\sigma _+(\pm P)`$ are directly measured via the difference (10) of the integrated cross sections $`\sigma _{1+}(\pm P)`$ and $`\sigma _{2+}(\pm P)`$, one can see that $`\delta \sigma _+^{stat}`$ has the simple property: $`\delta \sigma _+(\pm P)^{stat}=\left(\sigma ^{SM}(\pm P)/ϵ_{int}\right)^{1/2}`$, where $`_{int}`$ is the time-integrated luminosity, $`ϵ`$ is the efficiency for detecting the final state under consideration and $`\sigma ^{SM}(\pm P)`$ is the polarized total cross section. For the systematic uncertainty, we use $`\delta \sigma _+(\pm P)^{sys}=\delta ^{sys}\left(\sigma _{1+}^2(\pm P)+\sigma _{2+}^2(\pm P)\right)^{1/2}`$, assuming that $`\sigma _{1+}(\pm P)`$ and $`\sigma _{2+}(\pm P)`$ have the same systematic error $`\delta ^{sys}`$. One can easily see that $`\delta \sigma _{LL}`$ can be obtained by changing $`\delta \sigma _+(P)\delta \sigma _+(P)`$ in (25) and that the expression for $`\delta \sigma _{RL}`$ and $`\delta \sigma _{LR}`$ also follow from this equation by $`\delta \sigma _+\delta \sigma _{}`$. Numerically, to exploit Eq. (17) with $`\delta \sigma _{\alpha \beta }`$ expressed as above, we assume the following values for the expected identification efficiencies and systematic uncertainties on the various fermionic final states : $`ϵ=100\%`$ and $`\delta ^{sys}=0.5\%`$ for leptons; $`ϵ=60\%`$ and $`\delta ^{sys}=1\%`$ for $`b`$ quarks; $`ϵ=35\%`$ and $`\delta ^{sys}=1.5\%`$ for $`c`$ quarks. Also, $`\chi _{crit}^2=3.84`$ as typical for 95% C.L. with a one-parameter fit. We take $`\sqrt{s}=0.5`$ TeV and a one-year run with $`_{int}=50fb^1`$. For for polarized beams, we assume 1/2 of the total integrated luminosity quoted above for each value of the electron polarization, $`P_e=\pm P`$. Concerning polarization, in the numerical analysis presented below we take three different values, $`P=`$1, 0.8 and 0.5, in order to test the dependence of the bounds on this variable. As already noticed, in the general case where process (1) depends on all four independent $`Z^{}ff`$ couplings, only the products $`G_R^eG_R^f`$ and $`G_L^eG_L^f`$ can be constrained by the $`\sigma _+`$ measurement via Eq. (17), while the products $`G_R^eG_L^f`$ and $`G_L^eG_R^f`$ can be analogously bounded by $`\sigma _{}`$. The exception is lepton pair production ($`f=l`$) with ($`el`$) universality of $`Z^{}`$ couplings, in which case $`\sigma _+`$ can individually constrain either $`G_L^e`$ or $`G_R^e`$. Also, it is interesting to note that such lepton universality implies $`\sigma _{RL}=\sigma _{LR}`$ and, accordingly, for $`P_{\overline{e}}=0`$ electron polarization drops from Eq. (4) which becomes equivalent to the unpolarized one, with a priori no benefit from polarization. Nevertheless, the uncertainty in Eq. (25) still depends on the longitudinal polarization $`P`$. The 95% C.L. upper bounds on the products of lepton couplings (without assuming lepton universality) are reported in the first three rows of Table 1. For quark-pair production ($`f=c,b`$), where in general $`\sigma _{RL}\sigma _{LR}`$ due to the appearance of different fermion couplings, the analysis takes into account the reconstruction efficiencies and the systematic uncertainties previously introduced, and in Table 1 we report the 95% C.L. upper bounds on the relevant products of couplings. Also, for illustrative purposes, in Fig. 1 we show the 95% C.L. bounds in the plane $`(G_R^e,G_R^b)`$, represented by the area limited by the four hyperbolas. The shaded region is obtained by combining these limits with the ones derived from the pure leptonic process with lepton universality. Thus, in general we are not able to constrain the individual couplings to a finite region. On the other hand, there would be the possibility of using Fig. 1 to constrain the quark couplings to the $`Z^{}`$ to a finite range in the case where some finite effect were observed in the lepton-pair channel. The situation with the other couplings, and/or the $`c`$ quark, is similar to the one depicted in Fig. 1. Table 1 shows that the integrated observables $`\sigma _+`$ and $`\sigma _{}`$ are quite sensitive to the indirect $`Z^{}`$ effects, with upper limits on the relevant products $`|G_\alpha ^eG_\beta ^f|`$ ranging from $`2.210^3`$ to $`4.810^3`$ at the maximal planned value $`P=0.8`$ of the electron longitudinal polarization. In most cases, the best sensitivity occurs for the $`\overline{b}b`$ final state, while the worst one is for $`\overline{c}c`$. Decreasing the electron polarization from $`P=1`$ to $`P=0.5`$ results in worsening the sensitivity by as much as 50%, depending on the final fermion channel. Regarding the role of the assumed uncertainties on the observables under consideration, in the cases of $`e^+e^{}l^+l^{}`$ and $`e^+e^{}\overline{b}b`$ the expected statistics are such that the uncertainty turns out to be dominated by the statistical one, and the results are almost insensitive to the value of the systematical uncertainty. Conversely, for $`e^+e^{}\overline{c}c`$ both statistical and systematic uncertainties are important. Moreover, as Eqs. (3) and (4) show, a further improvement on the sensitivity to the various $`Z^{}`$ couplings in Table 1 would obtain if both initial $`e^{}`$ and $`e^+`$ longitudinal polarizations were available . ## 4 Resolving power and model identification If a $`Z^{}`$ is indeed discovered, perhaps at a hadron machine, it becomes interesting to measure as accurately as possible its couplings and mass at the LC, and make tests of the various extended gauge models. To assess the accuracy, the same procedure as in the previous section can be applied to the determination of $`Z^{}`$ parameters by simply replacing the SM cross sections in Eqs. (19) and (25) by the ones expected for the ‘true’ values of the parameters (namely, the extended model ones), and evaluating the $`\chi ^2`$ variation around them in terms of the expected uncertainty on the cross section. ### 4.1 $`Z^{}`$ couplings to leptons We now examine bounds on the $`Z^{}`$ couplings for $`M_Z^{}`$ fixed at some value. Starting from the leptonic process $`e^+e^{}l^+l^{}`$, let us assume that a $`Z^{}`$ signal is detected by means of the observables $`\sigma _+`$ and $`\sigma _{}`$. Using Eqs. (22) and (21), the measurement of $`\sigma _+`$ for the two values $`P_e=\pm P`$ will allow to extract $`\sigma _{RR}`$ and $`\sigma _{LL}`$ which, in turn, determine independent and separate values for the right- and left-handed $`Z^{}`$ couplings $`R_Z^{}^e`$ and $`L_Z^{}^e`$ (we assume lepton universality). The $`\chi ^2`$ procedure determines the accuracy, or the ‘resolving power’ of such determinations given the expected experimental uncertainty (statistical plus systematic). In Table 2 we give the resolution on the $`Z^{}`$ leptonic couplings for the typical model examples introduced in Section 2, with $`M_Z^{}=1\mathrm{TeV}`$. In this regard, one should recall that the two-fold ambiguity intrinsic in process (1) does not allow to distinguish the pair of values of ($`g_\alpha ^e,g_\beta ^f`$) from the one ($`g_\alpha ^e,g_\beta ^f`$), see Eq. (17). Thus, the actual sign of the couplings $`R_Z^{}^e`$ and $`L_Z^{}^e`$ cannot be determined from the data (in Table 2 we have chosen the signs dictated by the relevant models). In principle, the sign ambiguity of fermionic couplings might be resolved by considering other processes such as, e.g., $`e^+e^{}W^+W^{}`$. Another interesting question is the potential of the leptonic process (1) to identify the $`Z^{}`$ model underlying the measured signal, through the measurement of the helicity cross sections $`\sigma _{RR}`$ and $`\sigma _{LL}`$. Such cross sections only depend on the relevant leptonic chiral coupling and on $`M_Z^{}`$, so that such resolving power clearly depends on the actual value of the $`Z^{}`$ mass. In Figs. 2a and 2b we show this dependence for the $`E_6`$ and the $`LR`$ models of interest here. In these figures, the horizontal lines represent the values of the couplings predicted by the various models, and the lines joining the upper and the lower ends of the vertical bars represent the expected experimental uncertainty at the 95% CL. The intersection of the lower such lines with the $`M_Z^{}`$ axis determines the discovery reach for the corresponding model: larger values of $`M_Z^{}`$ would determine a $`Z^{}`$ signal smaller than the experimental uncertainty and, consequently, statistically invisible. Also, Figs. 2a and 2b show the complementary roles of $`\sigma _{LL}`$ and $`\sigma _{RR}`$ to set discovery limits: while $`\sigma _{LL}`$ is mostly sensitive to the $`Z_\chi ^{}`$ and has the smallest sensitivity to the $`Z_\eta ^{}`$, $`\sigma _{RR}`$ provides the best limit for the $`Z_{LR}^{}`$ and the worst one for the $`Z_\chi ^{}`$. As Figs. 2a and 2b show, the different models can be distinguished by means of $`\sigma _\pm `$ as long as the uncertainty of the coupling of one model does not overlap with the value predicted by the other model. Thus, the identification power of the leptonic process (1) is determined by the minimum $`M_Z^{}`$ value at which such ‘confusion region’ starts. For example, Fig. 2a shows that the $`\chi `$ model cannot be distinguished from the LR, $`\psi `$ and $`\eta `$ models at $`Z^{}`$ masses larger than 2165 GeV, 2270 GeV and 2420 GeV, respectively. The identification power for the typical models are indicated in Figs. 2a and 2b by the symbols circle, diamond, square and triangle. The corresponding $`M_Z^{}`$ values at 95% C.L. for the typical $`E_6`$ and LR models are listed in Table 3, where the $`Z^{}`$ models listed in first columns should be distinguished from the ones listed in the first row assumed to be the origin of the observed $`Z^{}`$ signal. For this reason Table 3 is not symmetric. Analogous considerations hold also for $`\sigma _{LR}`$ and $`\sigma _{RL}`$. These cross sections give qualitatively similar results for the product $`L_Z^{}^eR_Z^{}^e`$, but with weaker constraints because of smaller sensitivity. ### 4.2 $`Z^{}`$ couplings to quarks In the case of process (1) with $`\overline{q}q`$ pair production (with $`q=c,b`$), the analysis is complicated by the fact that the relevant helicity amplitudes depend on three parameters ($`g_\alpha ^e`$, $`g_\beta ^q`$ and $`M_Z^{}`$) instead of two. Nevertheless, there is still some possibility to derive general information on the $`Z^{}`$ chiral couplings to quarks. Firstly, by the numerical procedure introduced above one can determine from the measured cross section the products of electrons and final state quark couplings of the $`Z^{}`$, from which one derives allowed regions to such couplings in the independent, two-dimensional, planes ($`L_Z^{}^e`$,$`L_Z^{}^q`$) and ($`L_Z^{}^e`$,$`R_Z^{}^q`$). The former regions are determined through $`\sigma _{LL}`$, and the latter ones through $`\sigma _{LR}`$. As an illustrative example, in Fig. 3 we depict the bounds from the process $`e^+e^{}\overline{b}b`$ in the ($`L_Z^{}^e`$,$`L_Z^{}^b`$) and ($`L_Z^{}^e`$,$`R_Z^{}^b`$) planes for the $`Z^{}`$ of the $`\chi `$ model, with $`M_Z^{}=1\mathrm{TeV}`$. Taking into account the above mentioned two-fold ambiguity, the allowed regions are the ones included within the two sets of hyperbolic contours in the upper-left and in the lower-right corners of Fig. 3. Then, to get finite regions for the quark couplings, one must combine the hyperbolic regions so obtained with the determinations of the leptonic $`Z^{}`$ couplings from the leptonic process (1), represented by the two vertical strips. The corresponding shaded areas represent the determinations of $`L_Z^{}^b`$, while the hatched areas are the determinations of $`R_Z^{}^b`$. Notice that, in general, there is the alternative possibility of deriving constraints on quark couplings also in the case of right-handed electrons, namely, from the determinations of the pairs of couplings ($`R_Z^{}^e`$,$`L_Z^{}^b`$) and ($`R_Z^{}^e`$,$`R_Z^{}^b`$). However, as observed with regard to the previous analysis of the leptonic process, the sensitivity to the right-handed electron coupling turns out to be smaller than for $`L_Z^{}^e`$, so that the corresponding constraints are weaker. The determinations of the $`Z^{}`$ couplings with the $`c`$ and $`b`$ quarks for the typical $`E_6`$ and LR models with $`M_Z^{}=1\mathrm{TeV}`$, are given in Table 2 where the combined statistical and systematic uncertainties are taken into account. Furthermore, similar to the analysis presented in Section 4.1 and the corresponding Figs. 2a and 2b, we depict in Figs. 4a and 4b the different models identification power as a function of $`M_Z^{}`$, for the reaction $`e^+e^{}\overline{b}b`$ as a representative example. The model identification power of the $`\overline{b}b`$ and $`\overline{c}c`$ pair production processes are reported in Table 3. ## 5 Conclusion We briefly summarize our findings concerning the $`Z^{}`$ discovery limits and the models identification power of process (1) via the separate measurement of the helicity cross sections $`\sigma _{\alpha \beta }`$ at the LC, with $`\sqrt{s}=0.5\mathrm{TeV}`$ and $`_{int}=25fb^1`$ for each value $`P_e=\pm P`$ the electron longitudinal polarization. Given the present experimental lower limits on $`M_Z^{}`$, only indirect effects of the $`Z^{}`$ can be studied at the LC. In general, the helicity cross sections allow to extract separate, and model-indpendent, information on the individual ‘effective’ $`Z^{}`$ couplings ($`G_\alpha ^eG_\beta ^f`$). As depending on the minimal number of free parameters, they may be expected to show some convenience with respect to other observables in an analysis of the experimental data based on a $`\chi ^2`$ procedure. In the case of no observed signal, i.e., no deviation of $`\sigma _{\alpha \beta }`$ from the SM prediction within the experimental accuracy, one can directly obtain model-independent bounds on the leptonic chiral couplings of the $`Z^{}`$ from $`e^+e^{}l^+l^{}`$ and on the products of couplings $`G_\alpha ^eG_\beta ^q`$ from $`e^+e^{}\overline{q}q`$ (with $`l=\mu ,\tau `$ and $`q=c,b`$). From the numerical point of view, $`\sigma _{\alpha \beta }`$ are found to just have a complementary role with respect to other observables like $`\sigma `$ and $`A_{\mathrm{FB}}`$. In the case $`Z^{}`$ manifestations are observed as deviations from the SM, with $`M_Z^{}`$ of the order of 1 TeV, the role of $`\sigma _{\alpha \beta }`$ is more interesting, specially as regards the problem of identifying the various models as potential sources of such non-standard effects. Indeed, in principle, they provide a unique possibility to disentangle and extract numerical values for the chiral couplings of the $`Z^{}`$ in a general way (modulo the aforementioned sign ambiguity), avoiding the danger of cancellations, so that $`Z^{}`$ model predictions can be tested. Data analyses with other observables may involve combinations of different coupling constants and need some assumption to reduce the number of independent parameters in the $`\chi ^2`$ procedure. In particular, by the analysis combining $`\sigma _{\alpha \beta }(l^+l^{})`$ and $`\sigma _{\alpha \beta }(\overline{q}q)`$ one can obtain information of the $`Z^{}`$ couplings with quarks without making assumptions on the values of the leptonic couplings. Numerically, as displayed in the previous Sections,for the class of $`E_6`$ and Left-Right models considered here the couplings would be determined to about $`360\%`$ for $`M_Z^{}=1\mathrm{TeV}`$. Of course, the considerations above hold only in the case where the $`Z^{}`$ signal is seen in all observables. Finally, one can notice that for $`\sqrt{s}M_Z^{}`$ the energy-dependence of the deviations $`\mathrm{\Delta }\sigma _{\alpha \beta }`$ is determined by the SM and that, in particular, the definite sign $`\mathrm{\Delta }\sigma _{\alpha \alpha }(l^+l^{})<0`$ ($`\alpha =L,R`$) is typical of the $`Z^{}`$. This property might be helpful in order to identify the $`Z^{}`$ as the source of observed deviations from the SM in process (1). ## Acknowledgements It is a pleasure to thank N. Paver for the fruitful and enjoyable collaboration on the topics covered here.
no-problem/9910/astro-ph9910134.html
ar5iv
text
# Multifrequency Observations of Giant Radio Pulses from the Millisecond Pulsar B1937+21 ## 1 Introduction Despite their remarkable frequency stability, most radio pulsars exhibit considerable pulse-to-pulse intensity fluctuations. A histogram of the pulse intensity typically shows a roughly exponential or Lorentzian shape (e.g., Manchester & Taylor (1977)), with a tail extending out to perhaps ten times the mean pulse strength. The Crab pulsar (PSR B0531+21), by contrast, exhibits frequent, very strong radio pulses extending to hundreds of times the mean pulse intensity (e.g., Lundgren et al. (1995)). These so-called “giant pulses” exhibit a characteristic power-law intensity distribution—a phenomenon that for many years was observed in no other pulsar. Despite hints from early observations (Wolszczan, Cordes, & Stinebring (1984)), it came as a surprise to find qualitatively similar behavior from PSR B1937$`+`$21, the first (and still fastest known) millisecond pulsar (Sallmen & Backer (1995); Cognard et al. (1996)). The Crab pulsar and PSR B1937$`+`$21 could hardly be less similar in their properties. The Crab pulsar, born in A.D. 1054, is the youngest known pulsar, and B1937+21 is one of the oldest, with a characteristic spin-down age of $`2\times 10^8`$ yrs. The inferred surface magnetic field strength of the Crab pulsar, $`B=4\times 10^{12}`$ G, is about $`10^4`$ times as high as that of B1937+21. Although the Crab is the fastest of the high field pulsars, with a period of $`P=33`$ ms, it is twenty times slower than the 1.56 ms B1937+21. The pulsars’ only identified common feature is their similarly strong magnetic field strength at the velocity-of-light cylinder, $`10^6`$ G, which is higher than that for any other known pulsar (Cognard et al. (1996)). Whether this is coincidence, or in some way responsible for the observed giant pulse behavior, is unknown. All previous studies of giant pulses from B1937+21 have been at or near 430 MHz with the exception of some limited, inconclusive data at 1.4 GHz (Wolszczan et al. 1984). We present below the first multifrequency study of giant pulses in B1937+21. In §2, we describe the observations and giant pulse search algorithms. A comparison of the arrival times of normal and giant pulse emission is discussed in §3. Following this, we discuss pulse morphology in §4 and the limits of accurate timing in §5. Using the arrival data, we present intensity distributions of the giant pulses and the contaminant noise in §6. In §7, we discuss the approximate spectrum of the largest giant pulses. Finally, in §8, we point to open questions and necessary future observations. ## 2 Observations and Signal Processing All observations were made using the Princeton Mark IV instrument (Stairs et al. (1999)) at the 305 m Arecibo telescope, between 1998 February 21 and 1999 August 1. Observations were made at three frequencies, as part of an ongoing timing study of this pulsar. The pulsar signal strength varied because of interstellar scintillation; observations made during times of strong signal were retained for the giant pulse analysis. The data used for this work included 30 minutes ($`10^6`$ pulses) at 430 MHz, 4 hours ($`10^7`$ pulses) at 1420 MHz, and 26 minutes ($`10^6`$ pulses) at 2380 MHz. After completion of the observations, the data were coherently dedispersed in software by convolution with a complex chirp function (Hankins & Rickett (1975)), to remove the progressive phase delays as a function of frequency caused by free electrons in the interstellar medium (ISM). The analysis pipeline has been described by Stairs et al. (1999). At the dispersion measure of PSR B1937$`+`$21(71), with a 10 MHz bandwidth, the dispersive smearing at 430 MHz, 1420 MHz, and 2380 MHz totals 74.16 ms, 2.06 ms, and 0.44 ms, respectively, before coherent dedispersion. After dedispersion, the signals from the two orthogonal polarizations were squared and cross multiplied to produce the four Stokes parameters. Because we were analyzing data taken primarily for other purposes, in most cases insufficient calibration data were available for high precision polarization calibration. We therefore concentrate on analysis of the total intensity data. After coherent dedispersion, the standard timing analysis pipeline was used to fold the data synchronously with the known topocentric period of PSR B1937$`+`$21, to produce average profiles. In a parallel analysis, the data were searched for strong individual pulses. Initial exploratory analysis confirmed that all strong pulses were confined to fairly narrow windows on the tails of the main pulse (MP) and interpulse (IP). At 430 MHz, giant pulses were therefore identified as in Cognard et al. (1996), by measuring the integrated flux density in 150$`\mu `$sec windows located on the tails of the MP and IP. At higher frequencies, where the giant pulses were much narrower (many lasting only a few $`0.1`$$`0.2\mu `$sec bins) and appeared in a region of pulse phase that was significantly wider than the individual pulses, giant pulses were identified by searching for pairs of bins with combined energy greater than a threshold level. Because of the relative computational efficiency of this procedure, we did not limit our search to particular regions of pulse phase. But as at 430 MHz, giant pulses were only detected in the tails of the MP and IP. (Note that this appears to be in conflict with the results of Wolszczan et al. (1984).) Interstellar scintillation strongly modulated the apparent intensity of the pulsar signal from day to day. Scintillation should affect the normal and giant pulse emission identically. Therefore, in order to compare giant pulse intensities observed on different dates, we accounted for these variations by calibrating the average normal emission flux density for each run to the power-law model given by Foster, Fairhead, and Backer (1991). They found that the flux density as a function of frequency could be expressed as $`F`$\[mJy\]$`=(25.9\pm 2.6)\nu ^{2.60\pm 0.05}`$, where $`\nu `$ is in GHz. We use units of \[Jy\] for flux density and \[Jy$`\mu `$sec\] for integrated flux density. ## 3 Giant pulse distribution in time and pulsar phase An important difference between giant pulses in the Crab pulsar and in PSR B1937$`+`$21 appears in the distribution of the pulses with respect to the star’s rotational phase. Individual Crab giant pulses arrive at phases distributed throughout most of the emission envelope of the normal pulse profile. In contrast, the early observations of giant pulses from B1937+21 at 430 MHz found that they arrived only in narrow regions on the tails of the two normal pulse components (Backer (1995); Cognard et al. (1996)). With our greater sensitivity, we confirm this result at 430 MHz and find the same behavior at 1420 and 2380 MHz. Figures 13 show the average flux due to all giant pulses as a function of pulse phase along with the average normal emission. For each run, giant pulses were selected using an intensity threshold chosen to minimize noise contamination. As is clear from these figures, the giant pulses occur well after the normal emission phase; in particular the giant pulses do not produce the “notch” emission on the trailing edge of the MP as had been speculated by Cognard et al. (1996). To characterize the average properties of the giant pulse emission, we have fit the average giant pulse profile at each frequency with a Gaussian model at 2380 and 1420 MHz and, to account for interstellar scattering, with a model consisting of a Gaussian convolved with an exponential tail at 430 MHz. We find best-fitting Gaussian widths (FWHM) of 3$`\mu `$s (4$`\mu `$s) for the MP (IP) giant pulse profile at 2380 MHz and 4.3$`\mu `$s (4.1$`\mu `$s) for the MP (IP) giant pulse profile at 1420 MHz. At 430 MHz, we find that the MP and IP giant pulse profiles can be adequately described as Gaussians with widths 6.6$`\mu `$s and 6.4$`\mu `$s, respectively, convolved with an exponential scattering tail with $`\tau =28\mu `$s. This scattering timescale $`\tau `$ is similar to that estimated from measurements of the normal profile, confirming that the structure observed in the low frequency average giant pulse profile is dominated by propagation effects. (Because our exponential scattering model is probably an oversimplification of the true effects of scattering on the signal (e.g., Sallmen et al. (1999)), our estimates of the intrinsic width of the mean giant pulse profile at 430 MHz should be considered an upper limit.) At high frequency, the windows in which giant pulse emission occur are much narrower than the average pulse emission windows. Indeed, they are remarkably narrow both absolutely and as a fraction of the pulsar period, each corresponding to less than one degree of rotational phase. They are, we believe, the sharpest stable features ever detected in a pulsar profile. It is this property that makes the mean giant pulse emission from PSR B1937$`+`$21 a potentially valuable fiducial point for high-precision timing observations, as discussed in §5 below. We have also investigated the distribution of giant pulses over time, finding consistency with Poisson statistics (so neighboring giant pulses appear uncorrelated). The distribution in time during one observation is illustrated in Fig. 4, which displays all pulses with integrated flux density $`30`$Jy$`\mu `$secs. Also shown are the brightest pulses, with integrated flux density $`80`$Jy$`\mu `$secs. The arrival time uncertainty for any given giant pulse is $`0.5\mu `$secs; this will be discussed further discussed in §4. We find no difference in the distributions of the more powerful and less powerful giant pulses, except for rate. The giant pulse distribution also appears very stable over this half-hour scan, with no apparent drifting or nulling periods, although the density drops off slightly at the end of the scan, especially in the IP, as interstellar scintillation reduces the mean pulsar flux relative to our threshold values. ## 4 Individual Pulse Morphology The average giant pulse profiles constrain the properties of the giant pulse emission region. Also of interest are the properties of individual giant pulses, which constrain the giant pulse emission mechanism itself. To study individual giant pulses, which are narrower than the mean giant pulse envelope, we must account for scattering effects even at frequencies above 430 MHz. As discussed above, scattering by a thin turbulent screen degrades a signal by effectively convolving it with a one-sided exponential tail. (Although more sophisticated models of the ISM will have more complex effects on the observed signal, we find the simple thin-screen model adequately describes the vast majority of giant pulses we have observed.) We have fit all candidate giant pulses with a Gaussian, $`A\text{exp}\{(tt_0)^2/(2\sigma ^2)\}`$, convolved with an exponential tail, $`e^{t/\tau }/\tau `$, with four fit parameters, $`A`$, $`t_0`$, $`\sigma `$, and $`\tau `$. Shown in Fig. 5 are the strongest giant pulses found at each frequency along with their fitted convolved Gaussians. After visually inspecting many giant pulses at all three frequencies, we see no evidence for intrinsic multiple-peaked emission, which contrasts with giant pulses observed from the Crab pulsar (Sallmen et al. (1999)). Although receiver noise dominates the profiles, most pulses show a fast rise followed by an exponential decay, consistent, on average, with scattering. We have more quantitatively verified our model for the pulse morphology by cross correlating all of the giant pulses with a standard giant pulse, shifting by the lag which maximizes the cross correlation (to correct for the observed pulse-to-pulse jitter), then folding. For any choice of the standard giant pulse, this has always produced a single-peaked, short-rise-time, exponential-decaying profile, which, itself, is well fit by our model. Gaussian widths (FWHM) of $`6`$, $`0.2`$, and $`0.2\mu `$secs and scattering timescales of $`\tau 29`$, $`0.2`$, and $`0.2\mu `$secs were determined at 430, 1420, and 2380 MHz, respectively, for both the MP and IP shifted, folded giant pulse profiles. As previously mentioned in §3, our exponential scattering model is probably an oversimplification of the true effects of scattering, implying that our estimates should be considered upper limits. Table Multifrequency Observations of Giant Radio Pulses from the Millisecond Pulsar B1937+21 lists the range of scattering timescales, $`\tau `$ \[$`\mu `$sec\], found at each frequency. Overall, we find the following ranges: $`\tau 13`$$`40\mu `$s (430 MHz), $`\tau \mathrm{}<1.1\mu `$s (1420 MHz), and $`\tau \mathrm{}<0.4\mu `$s (2380 MHz). These timescales are consistent with turbulent scattering, which has a frequency dependence of $`\nu ^{4\mathrm{to}4.4}`$ (e.g., Manchester & Taylor (1977)). In addition, we have determined approximate upper limits to the fitted Gaussian widths (FWHM) at each frequency of $`7\mu `$s (430 MHz), $`0.5\mu `$s (1420 MHz), and $`0.3\mu `$s (2380 MHz), which have not, however, been included in Table Multifrequency Observations of Giant Radio Pulses from the Millisecond Pulsar B1937+21. Scattering also affects the normal emission, primarily observations at lower frequencies. The best-fitting scattering parameters were determined independently in the two pulse components. In the MP, our model finds $`\tau 30\mu `$sec, and in the IP, $`\tau 40\mu `$sec. Both are in good agreement with the scattering time estimated from the giant pulse emission. The IP is well fit by this model, but our fit to the MP underestimates the flux in the tail, probably implying the unresolved presence at this frequency of the “notch” feature that is resolved on the trailing edge of the MP at higher frequencies. (This is not unexpected, since the feature increases in strength relative to the MP peak with decreasing frequency.) Note that scattering not only broadens but also delays the peak of the low-frequency pulsar signal by an amount that depends on the pulse shape; variability of the scattering strength therefore introduces significant timing errors at low frequency. For the 1420 and 2380 MHz normal emission, scattering is much less severe; therefore, we expect no significant scattering delay to the apparent MP and IP peak arrival times. ## 5 Timing High-precision timing measurements benefit from a sharp-edged timing signal. For this reason, the timing properties of the giant pulses are of considerable importance, especially since very high signal-to-noise ratios can be obtained for mean giant pulse profiles by using a signal thresholding technique to eliminate data when no giant pulse is present. High-precision alignment of pulsar profiles at different frequencies is not straightforward when the pulse shape is variable, since the choice of fiducial reference phase for alignment is arbitrary. We have aligned the three profiles in Figs. 13 by the peaks of the normal emission profiles, in the case of 430 MHz after accounting for the delay introduced by scattering. As is evident, at each frequency the giant pulses occur at approximately the same phase relative to the normal emission peaks. The slight delay at 2380 MHz and possibly at 430 MHz with respect to the 1420 MHz giant pulse profile is most likely due to our somewhat *ad hoc* alignment procedure. We present timing characteristics for each observation in Table Multifrequency Observations of Giant Radio Pulses from the Millisecond Pulsar B1937+21. Displayed for each scan are the separation in phase angle of the IP peak following the MP peak (Normal), the separation of the IP giant pulses from the MP giant pulses (Giant), and the delay (in \[$`\mu `$s\]) of the giant pulses with respect to the MP and IP emission peaks (with scattering taken into account for the 430 MHz observations, as will be discussed below). We now discuss each timing column in more detail. The separation of $`57`$$`58\mu `$secs between MP peak and average giant pulse (though only a lower limit of $`49\mu `$secs at 430 MHz), as well as that for the IP of $`65`$$`66\mu `$secs, is the same at all three frequencies. This yields tight constraints on the relative geometry of normal and giant pulse emission regions. The separation between the MP and IP giant pulses is $`189.5^{}`$ at all frequencies, slightly larger than the $`187.6^{}`$ separation the MP and IP normal emission peaks (though only a lower limit of $`185.6^{}`$ can be quoted at 430 MHz). The individual pulse arrival phases are Gaussian distributed, with widths of $`\sigma =1.5`$$`2.0\mu `$s, in good agreement with the width found for the average giant pulse profile (displayed in Figs. 2 and 3). This pulse-to-pulse jitter is evident in Fig. 4, where we have plotted the fractional giant pulse arrival bin versus pulse number for a 1420 MHz observation. It is interesting to ask whether observations of the giant pulse emission from PSR B1937$`+`$21 can be used to carry out higher precision timing studies of the pulsar than have been possible using the relatively broad normal emission profile (e.g., Kaspi, Taylor, & Ryba (1994)). Using normal pulse timing techniques, absolute precisions as small as $`0.12\mu `$s have been obtained for B1937+21 at 1420 MHz (Stairs (1998)). Just as typical long term timing studies depend on long-term stability of the normal emission profile, timing studies using the giant pulses will depend upon long-term stability of the giant pulse emission phase distribution. The consistency of our results with those of Cognard et al. are encouraging, but careful observations at high frequency over a period of years will be needed to test the ultimate power of giant pulse observations for high precision timing. Nevertheless, we have done preliminary estimates of the obtainable timing precision using the current data set. As noted above, a single giant pulse at 1420 MHz can be used to estimate the pulsar phase to $`1.5\mu `$s. The key question for timing is whether this precision can be improved by averaging over multiple pulses. If the pulse-to-pulse phase variations are uncorrelated, we expect the timing precision to improve as $`\sqrt{N_g}`$, where $`N_g`$ is the number of consecutive points averaged. In Fig. 6 , we show the r.m.s. scatter $`\sigma `$ within individual days as a function of $`N_g`$. We find that timing precision improves as expected to the limit of our data sets, $`N_g=64`$, corresponding to a level of 100–300 ns in a ten-minute observation. Also indicated in the figure is the best timing precision achieved using standard pulsar timing techniques, $`120`$ ns. Stairs et al. (1999) suggest that the limiting factor in their timing analysis may be pulse shape variations caused by variable interstellar scattering. If this is correct, then giant pulse observations may ultimately improve on normal pulse observations, because the effects of scattering can be much more easily measured and removed from the data. Whether these high-precision timing results on short timescales will lead to better long term timing depends primarily on the long term stability of the giant pulse emission characteristics, which will be studied in future work. ## 6 Intensity Distribution The primary distinguishing characteristic of the giant pulses observed from the Crab Pulsar and PSR B1937$`+`$21 is, of course, their intensity, and particularly their extended power-law intensity distribution. In Figs. 79, we plot the cumulative intensity distribution at each observing frequency. At low intensities, the distributions are dominated by the chi-square statistics of the noise and/or normal emission, but above a certain threshold the pulse strength distribution is roughly power-law distributed. Given our limited statistics for any given run, this simple model is adequate to describe the observed data. In Fig. 7, we plot the cumulative distribution of the integrated flux densities in $`150\mu `$sec windows after the MP and IP during a single 15-minute observation at 430 MHz (MJD 51364). Also plotted is the $`1.8`$ power-law slope, which Cognard et al. (1996) found for the cumulative giant pulse distribution at this same frequency. The mean signal-to-noise ratio for the normal emission peak in this observation was about $`0.14`$. Contamination from this emission causes a noticeable deviation at low flux levels from what would be seen with giant pulses and receiver noise alone, causing a steepening of the distribution towards low integrated flux densities. Although the distribution of normal pulse signal strengths is not known, we expect that removing normal pulses would produce better accord with a single power law distribution for the giant pulses. In Fig. 8, we have plotted the cumulative distribution of all giant pulses found at 1420 MHz for all $`4`$hours of data (using the threshold detection algorithm described above). Again, generally power-law behavior is observed, with a similar power-law exponent around $`1.8`$. In Fig. 9, we plot the cumulative intensity distribution for a $`26`$-minute 2380 MHz run (MJD 51391). Again we have plot a $`1.8`$ slope for comparison, which appears to fit the MP giant pulses and the most energetic IP giant pulses well, though more data are needed to strengthen this result. We have also calculated the fraction, $`R`$, of the total pulsar emission at each frequency that emerges in the form of giant pulses. We find the following ranges at each frequency for this fraction: $`0.15\%R9\%`$ (430 MHz), $`0.13\%R4\%`$ (1420 MHz), and $`0.10\%R1\%`$ (2380 MHz), where the lower limits were determined directly from Figs. 13 and the upper limits were determined from Figs. 79 by assuming a cumulative distribution with power-law slope of $`1.8`$ over all intensities for both the MP and IP giant pulses. From the folded normal emission alone, however, we can rule out large values for $`R`$ in these calculated ranges, implying that the single power-law model may not be valid at low intensities. Also consistent with Cognard et al. (1996), we find significantly stronger giant pulses following the MP than the IP. At a given frequency (e.g., 1/minute), the ratio of the strongest giant pulse associated with the MP to the strongest associated with the IP is very roughly the same as the ratio of the peak flux density in the MP to that in the IP. Despite the fact that giant pulses are separated in pulse phase from the normal emission, this suggests a relatively close association between the emission processes. ## 7 Spectrum The short timescale of the observed giant pulses imply that they are a relatively broadband phenomenon, with bandwidth greater than their inverse width. The limited bandpass available to the Mark IV instrument prevents stronger statements about the spectra of individual giant pulses. However, the similar arrival distributions and roughly similar arrival rates at each observational frequency point to a broadband phenomenon. Nevertheless, our observations can be used to constrain the average spectral properties of the giant pulse emission. To avoid complications arising from the use of different effective thresholds at each frequency (because of different source strengths and different receiver noise properties), we estimate the giant pulse spectrum by using the most powerful individual pulses observed during a given time period at each frequency. We have, as usual, calibrated our observations to the spectrum for the average normal emission flux density at each frequency from Foster et al. (1991), as discussed in §2. Figure 10 shows the intensities in \[Jy$`\mu `$sec\] of the top eight MP and top eight IP giant pulses at each frequency over 15 minutes, corresponding to the entire MJD 51364 (430 MHz) run and to 15-minute chunks from runs on MJD 50893 (1420 MHz) and MJD 51391 (2380 MHz) We find a somewhat steeper slope of $`3.1`$ for the giant pulse spectrum, compared to the $`2.6`$ slope for the normal emission spectrum. Although the precise slope of the giant pulse emission spectrum depends on the assumed normal emission spectrum, the result that the giant pulse emission is steeper than the normal emission is robust. However, if the apparent narrowing of the giant pulse emission region at higher frequency reflects a narrowing of a sharp emission cone, the slightly steeper spectrum of the giant pulse emission might be understood as the geometric effect of the position of the line of sight through the outer part of the emission region Assuming the giant pulses from the Crab pulsar are powered by curvature radiation, Sallmen et al. (1999) have calculated the necessary number and number density of radiating electrons. We perform the same calculation here for our observed giant pulses from PSR B1937$`+`$21. For coherent curvature radiation, the power emitted by $`N`$ electrons with relativistic factor $`\gamma =(1v^2/c^2)^{1/2}`$ travelling along magnetic field lines with radius of curvature $`\rho _c`$ is $$P_{curv}=N^2\left(\frac{2e^2\gamma ^4c}{3\rho _c^2}\right).$$ (1) From the peak of the largest 1420 MHz giant pulse in Fig. 5, we can calculate the maximum number of electrons needed in one bunch to produce the observed emission (at frequencies greater than 430 MHz). Assuming this pulse is broadband with spectral index, $`3.1`$, PSR B1937$`+`$21 is at a distance of 3.6 kpc, and giant pulse beaming is determined by the beam width $`\theta \gamma ^1`$, we find a total power at the peak of $`\gamma ^2\times 10^{34}`$erg$``$s<sup>-1</sup>. An upper limit on the giant pulse Gaussian FWHM of $`\mathrm{}<0.5\mu `$sec (at high frequencies), gives $`\gamma \mathrm{}>500`$, which implies a substantially smaller power requirement than the total spin-down energy loss rate of $`2\times 10^{36}`$erg$``$s<sup>-1</sup>. Setting $`P_{curv}`$ equal to the observed power yields $$N=10^{19}\left(\frac{\gamma }{500}\right)^3\left(\frac{\rho _c}{10^6\mathrm{cm}}\right).$$ (2) In order to preserve coherence at the highest observational frequency, these electrons must fit within a cube with volume $`\mathrm{}<\lambda ^3`$, where $`\lambda =12.6`$ cm at 2380 MHz, implying an electon density of $`n_e=5\times 10^{15}(\frac{\gamma }{500})^3(\frac{\rho _c}{10^6\mathrm{cm}})`$cm<sup>-3</sup>, a value $`50`$ times greater than the Goldreich-Julian density (Goldreich & Julian (1969)), $`n_{\mathrm{G}\mathrm{J}}=\mathrm{\Omega }B/2\pi ec=10^{14}(\frac{R}{R_{\mathrm{NS}}})^3`$cm<sup>-3</sup>, which is the electron number density required to power the normal emission via curvature radiation. Sallmen et al. (1999) similarly find a giant pulse electron density $`100`$ times greater (for $`\rho _c=10^7`$ cm) than the Goldreich-Julian value for the Crab pulsar. ## 8 Discussion The discovery of giant pulses from a millisecond pulsar was unexpected, and early hopes that identifying commonalities between PSR B1937$`+`$21 and the Crab pulsar might lead to a better understanding of the giant pulse emission mechanism have so far not been realized. As our study has confirmed, the giant pulse emission from PSR B1937$`+`$21 differs fundamentally from the Crab giant pulses, despite their common power-law behavior. The most intriguing characteristic of the high-frequency pulses form PSR B1937$`+`$21 is their very narrow widths and the very limited regions of pulse phase in which they occur. Despite the continued mystery about their origin, it appears likely that giant pulses from PSR B1937$`+`$21 may prove a valuable tool. As we have discussed, their narrow intrinsic width and large flux make them attractive fiducial reference points for timing studies of a pulsar that is already among the most precisely timed (Kaspi et al. 1994). Another intriguing possibility is to use the giant pulses as bright flashbulbs to study scattering in the ISM. Because the intrinsic pulses are very narrow, the pulse shape as observed at the Earth traces out the time delays introduced by multipath scattering. The combination of this information with VLBI studies of the scattering disk is a potentially powerful tool for studying the three dimensional distribution of scattering material. It is important, of course, to identify giant pulse emission from other pulsars. As we have noted, the integrated giant pulse emission from PSR B1937$`+`$21 is too weak to make noticeable features in the average pulse profiles, so careful, single pulse studies are required. Very fine time resolution is needed to avoid substantially smearing the high-frequency pulses from B1937+21 and reducing their signal-to-noise ratio, and it is insufficient to study only the windows of pulse phase where normal emission is found, as has sometimes been the case in past studies with coherent dedispersion instruments. Instruments that use hardware dedispersion followed by sampling (like the Princeton Mark III, Stinebring et al. (1992)) must also preserve sufficient dynamic range in the analog-to-digital conversion to detect and characterize pulses that are far stronger than the typical pulsar emission. Another subtlety concerning observation of giant pulses are possible projection effects. If the PSR B1937$`+`$21 giant pulse emission region is roughly Gaussian, then as the pulsar rotates, the giant pulse emission traces out less than $`1\%`$ of the entire sky, though the likelihood of detecting giant pulses from a known radio pulsar might be substantially enhanced over this estimate by correlations between the angular patterns of the normal and giant emission. Although searches for giant pulse emission from slow pulsars have been unsuccessful, only a very small fraction of millisecond pulsars have been studied sufficiently to detect or rule out giant pulses. A.K. would like to thank I. Stairs, in particular, for her assistance with the analysis, E. Splaver for his thorough explanations, J. Taylor and J. Bell-Burnell for their observing efforts, and the other members of the Princeton Pulsar Group, especially D. Nice for constructive comments on the draft. In addition, S.E.T. thanks his earlier collaborators in this work, I. Cognard and J. Shrauner. This research was funded in part by a grant from the National Science Foundation, which also supports A.K. through a graduate fellowship. Arecibo Observatory is operated by Cornell University for the NSF.
no-problem/9910/cond-mat9910338.html
ar5iv
text
# Cumulant ratios and their scaling functions for Ising systems in a strip geometry ## Abstract We calculate the fourth-order cumulant ratio (proposed by Binder) for the two-dimensional Ising model in a strip geometry $`L\times \mathrm{}`$. The Density Matrix Renormalization Group method enables us to consider typical open boundary conditions up to $`L=200`$. Universal scaling functions of the cumulant ratio are determined for strips with parallel as well as opposing surface fields. Introduction. An universality principle is a cornerstone of contemporary theory of phase transitions. According to this principle, the following sorts of quantities are universal: critical exponents, certain amplitude ratios and scaling functions . They differ each from other in their status. The (bulk) critical exponents are independent on boundary conditions, whereas two other groups are dependent. The critical exponents are known for many models (both exactly and approximately). The collection of results available for amplitude ratios is also rich, but significantly smaller than for exponents; see in Ref. for exhaustive information. Among amplitude ratios, so called cumulant ratios are of great importance. They supply some information on scaling functions (cumulants are proportional to derivatives of these functions at zero values of argument(s)); they measure deviation of magnetization fluctuations at criticality from gaussian distribution; moreover, they are closely related to some versions of renormalization group (it gains also reflection in terminology: cumulant ratios are customarily termed “renormalized coupling constant” in field theory). Cumulant ratios have also been used to locate the critical points and critical lines in many models . Most results for cumulants have been obtained for (partially or completely) periodic boundary conditions. For most extensively studied the two-dimensional Ising model, numerous results are available . However, there are also other, very natural ”open” boundary conditions: “free” (no surface fields), “wall$`++`$” (infinite parallel surface fields), “wall$`+`$” (infinite opposing surface fields). For these open boundary conditions, the number of results is very small. Only papers we know where such results are available are . Motivated by this situation, we state the aim of this paper: Calculation of universal cumulant ratios for the two-dimensional Ising model in a strip geometry under the following boundary conditions: “free”, “wall$`++`$”, “wall$`+`$”. We have calculated cumulant ratios using method called the Density Matrix Renormalization Group (DMRG). Since the DMRG is most powerful for open boundary conditions, it is particularly suited for our goals. Definition of cumulants. We consider the two-dimensional Ising system on a square lattice in a strip geometry ($`L`$ is width of the strip and $`N`$ is its length) with the Hamiltonian $$=J\left[\underset{<i,j>}{}s_is_jH\underset{i}{}s_iH_1\underset{i}{\overset{(1)}{}}s_iH_L\underset{i}{\overset{(L)}{}}s_i\right],$$ (1) where the first sum runs over all nearest-neighbour pairs of sites while the last two sums run over the first and the $`L`$-th column, respectively. $`H`$ is the bulk magnetic field, whereas $`H_1`$ and $`H_L`$ are the surface fields. $`H`$, $`H_1`$ and $`H_L`$ are dimensionless quantities (all of them are measured in units of $`J`$). In the course of calculations of the cumulant in the termodynamic limit, two limiting processes are taken: $`TT_c`$ and $`L\mathrm{}`$. In general, a value of the cumulant do depend on ordering of these limits . In our paper we analyze so-called ”massless” case (analogously as in ): $`T=T_c=2/\mathrm{ln}(1+\sqrt{2})`$ $`2.269185`$ followed by $`L\mathrm{}`$. Therefore, we do not notice the temperature dependence below. We also drop (as unnecessary) explicit dependence on surface fields until discussion of scaling functions. We consider the ratio of moments of magnetization proposed by Binder . Definitions of cumulant ratios for a system in a strip geometry have been widely presented in the literature . Let us first define $$U_L=\underset{N\mathrm{}}{lim}[N(1\frac{1}{3}<M^4><M^2>^2)],$$ (2) where $`M=_is_i`$ is the total (extensive) magnetization. Then, the cumulant ratio $`A_U`$ in question is $$A_U=\underset{L\mathrm{}}{lim}L^1U_L.$$ (3) An equivalent (but more convenient for us) formula for the above cumulant is as follows: Let $`\lambda (L;H)`$ be the largest eigenvalue of transfer matrix for the strip of width $`L`$ ($`T\mathrm{log}\lambda (L;H)`$ is the free energy for one column of spins). Define $$m_2(L)=\frac{\mathrm{d}^2}{\mathrm{d}H^2}\mathrm{log}\lambda (L;H)|_{H=0},$$ (4) $$m_4(L)=\frac{\mathrm{d}^4}{\mathrm{d}H^4}\mathrm{log}\lambda (L;H)|_{H=0}.$$ (5) Then our cumulant is equal to $$A_U=\underset{L\mathrm{}}{lim}r(L)\underset{L\mathrm{}}{lim}m_4(L)/3Lm_2^2(L).$$ (6) Our method of calculation is based directly on the definition (6). We first find logarithms of the largest eigenvalue $`\lambda (L;H)`$ for some values of $`H`$ (at fixed $`L`$). Next, we calculate numerically derivatives (4) and (5), then the ratio $`r(L)`$, and finally perform the extrapolation $`L\mathrm{}`$. Some technical details of calculations. We use the DMRG method for calculations of $`\mathrm{log}\lambda (L;H)`$. Originally, this method has been proposed by White for finding accurate approximations to the ground state and the low-lying excited states of quantum chains. Its heart is recursive construction of the effective Hamiltonian of a very large system using a truncated basis set, starting from an exact solution for small systems. Later, the DMRG was adapted by Nishino for two-dimensional classical systems. The DMRG has been applied successfully to many different problems and now it can be treated as a standard method, which is very flexible, relatively easy to implement and very precise. For a comprehensive review of a background, achievements and limitations of DMRG, see in Refs. . A factor crucial for precision of DMRG is so-called number of states kept $`m`$, describing the dimensionality of effective transfer matrix ; the larger number of states kept, the more accurate the value of the free energy. Using $`m=50`$ we can calculate the free energy with accuracy of the order $`10^{12}`$ for strips of width of the order $`L=200`$. This is an one order more than size of systems available by exact diagonalization of transfer matrix. This fact is crucial for us, because of using the extrapolation procedures. In our calculations we apply the finite system algorithm, developed by White for studying finite systems . Additional factor determining the accuracy of the method is number of sweeps, i.e. number of iterations made in order to obtain self-consistency of results. Our numerical experience shows that in most cases, it is sufficient to apply only one sweep (although in the ”wall$`+`$” case two sweeps are necessary – see below). In our calculations of cumulant ratios, we have also a factor limiting accuracy that is independent on the DMRG method: accuracy of numerical differentiation. In the procedure of numerical differentiation, a suitable choice of increment $`\mathrm{\Delta }H`$ of an argument is of crucial importance. It is clear that $`\mathrm{\Delta }H`$ should be taken as small as possible; on the other hand, due to finite accuracy in calculation of $`\lambda `$, an error of difference quotient increases with decreasing $`\mathrm{\Delta }H`$. The increments used in our calculations have been determined as a compromise between above two tendencies. Additional factor determining the accuracy of numerical differentiation, is a number of points used to calculate the derivative. We use formulas where a derivative is determined from the second-order Taylor expansion (i.e. we need $`n+3`$ values of function for $`n`$-th derivative; this way, an accuracy is of the order $`𝒪((\mathrm{\Delta }H)^3)`$.) Therefore, the $`m_2`$ was determined from $`5`$ points ($`3`$ points in symmetrical case, i.e. $`f(H)=f(H)`$) and from $`7`$ points for $`m_4`$ ($`4`$ points when the symmetry was present). We have tested correctness of our calculations in several ways. One of them was $`L`$-dependence on derivatives $`m_2`$ and $`m_4`$. Finite-Size Scaling (FSS) theory predicts the following dependence on the $`n`$-th derivative of the free energy as a function of the system size $`L`$: $$\frac{\mathrm{d}^nf}{\mathrm{d}H^n}(L)|_{H=0}L^{\stackrel{~}{d}+n\mathrm{\Delta }/\nu },$$ (7) where we have $`\mathrm{\Delta }=15/8`$ and $`\nu =1`$ for the two-dimensional Ising model. $`\stackrel{~}{d}`$ is a dimension of system in “finite-size direction”, i.e. it is a number of linearly independent directions along which a size of the system is finite. For finite systems (for instance a torus), $`\stackrel{~}{d}`$ is equal to space dimension of the system. In our case, the system is infinite in one direction (along the strip) and finite in the second direction (across the strip), so we have to take $`\stackrel{~}{d}=1`$. This assumption gives the following predictions for derivatives: $$m_2L^{\rho _2},m_4L^{\rho _4},$$ (8) where $`\rho _2=11/4`$ and $`\rho _4=13/2`$. An extrapolation procedure has been performed with use of the powerful BST method . Results: the “free” case. It corresponds to zero surface fields $`H_1=H_L=0`$ in the formula (1). We have performed calculations for $`L`$ in the range $`160L200`$ with step $`10`$; these values of $`L`$ were taken in all situations. We took an increment of “bulk” magnetic field $`\mathrm{\Delta }H=5\times 10^6`$, $`m=50`$ and one sweep. The results are listed in the Table. As a byproduct, we have tested the FSS predictions for $`L`$-dependence of derivatives $`m_2`$ and $`m_4`$. Values of corresponding exponents (see Eq. (8) ) are: $`\rho _2=2.7495(3)`$ and $`\rho _4=6.50(3)`$, so predictions of FSS are confirmed in excellent manner. The same conclusion is true in next two situations. As another test of correctness (and quality) of DMRG results, we have calculated ratios by immediate numerical diagonalization of transfer matrix for $`10L18`$ (L even; these values of L have also been used in the next cases). We proceeded as above, i.e. by calculation of logarithm of the largest eigenvalue for some values of bulk field $`H`$, followed by numerical differentiation of $`f(H)`$ and computation of ratio and extrapolation, without any “renormalization”. We took an increment $`\mathrm{\Delta }H=10^4`$. We have obtained $`A_U=1.094(1)`$; $`\rho _2=2.746(1)`$; $`\rho _4=6.46(1)`$. It is seen that the results are fully consistent with the DMRG calculations but less precise; we have the same situation for two other boundary conditions. Results: the “wall$`++`$” case. The “wall$`++`$” boundary condition corresponds to the assumption that all boundary spins have the same value and sign. It is equivalent to putting $`H_1=H_L=\mathrm{}`$ in (1). Numerical experience suggests that it is sufficient to take $`H_1=10`$ – for larger values of $`H_1`$ the changes of the free energy are negligible . The “wall$`++`$” configuration is more intricate, from numerical point of view, than “free” system. The complication is due to the fact that, for parallel surface fields of the same sign, the maximum of the free energy $`f(H)`$ does not appear for $`H=0`$ but it is shifted to a certain non-zero value $`H_0(L)`$. This phenomenon is called the capillary condensation . In order to calculate derivatives and ratios at zero magnetization (i.e. at the maximum of the free energy), we first have to find its position $`H_0(L)`$. FSS predicts the following dependence: $`H_0(L)L^{\mathrm{\Delta }/\nu }`$. From our DMRG calculations, we have obtained the value $`\mathrm{\Delta }/\nu =1.8749(2)`$. For the “wall$`++`$” configuration the free energy is not longer a symmetric function of the bulk field $`H`$, so we have been forced to calculate $`m_2`$ from 5 points and $`m_4`$ from 7 points. We have taken an increment $`\mathrm{\Delta }H=5\times 10^6`$, $`m=50`$ and one sweep. The results are presented in the Table. For exponents of $`m_2`$ and $`m_4`$, we have obtained: $`\rho _2=2.7504(3)`$ and $`\rho _4=6.5024(3)`$. The precision of these results is a little bit less than in the “free” case, although still very satisfactory (three significant digits). However, it should be stressed that here we must do much more numerical computations than in the “free” case, so some lack of precision is inevitable. Exact diagonalization of transfer matrix gave the following values: $`A_U=0.45(4)`$; $`\rho _2=2.75(1)`$; $`\rho _4=6.5(2)`$. Results: the “wall$`+`$” case. One of important physical implications of “$`+`$” boundary condition is the presence of an interface between “$`+`$” and “$``$” phases in the system. It causes large fluctuations, which have an implication in numerical practice, namely, two sweeps are necessary to ensure self-consistency of results. In our calculations, the value of surface field $`H_1=100`$ an increment $`\mathrm{\Delta }H=210^5`$, and $`m=40`$ were taken. The results are listed in the Table. Values of exponents are: $`\rho _2=2.7502(2)`$, $`\rho _4=6.502(2)`$. The procedure of exact diagonalization of transfer matrix gave the following values: $`A_U=0.305(2)`$; $`\rho _2=2.755(1)`$; $`\rho _4=6.50(2)`$. As a matter of some interest, let us remark that for the “wall$`+`$” boundary condition the $`L`$ dependence is much weaker than for the “free” and “wall$`++`$” situations. Table.Values of cumulant ratios for some values of $`L`$ $`L`$ $`r(L)`$, free $`r(L)`$, wall$`++`$ $`r(L)`$, wall$`+`$ 160 -1.098525 0.462556 -0.304831 170 -1.098234 0.462225 -0.304859 180 -1.097964 0.461859 -0.304883 190 -1.097723 0.461525 -0.304902 200 -1.097481 0.461133 -0.304915 $`\mathrm{}`$ -1.0932(3) 0.455(2) -0.3050(1) Scaling functions for ratios. The ”wall$`++/+`$”-type conditions can be treated as limiting case of the system with equal finite parallel/antiparallel ($`++/+`$) surface fields. Another limiting case is the ”free” boundary condition, where the values of surface fields are set to zero. One can expect that for intermediate situations, i.e. finite values of boundary field $`H_1`$, cumulants would be smooth functions of $`H_1`$. Particularly interesting are scaling properties of these functions. The scaling theory predicts that at criticality, the system depends only on one variable, namely dimensionless combination $`\zeta =LH_1^2`$ . In the other words, we expect that the cumulant $`r`$ should depend on the surface field $`H_1`$ and strip width $`L`$ only by combination $`\zeta `$. We have calculated $`r(L,H_1)`$ for both ”$`++`$” and ”$`+`$” boundary conditions for $`L=40,80,120`$ using $`m=40`$, and at full range of scaling variable. The results are presented on Figs. 1a and 1b. It is seen that scaling properties are confirmed in excellent manner. Limiting values of these functions (i.e. for $`\zeta =0`$ and $`\zeta \mathrm{}`$) are fully consistent with our more precise calculations, although convergence of ratio to its limit value is much faster for ”wall$`+`$” than for ”wall$`++`$”. As far we know, scaling functions for cumulants have been almost not studied so far. Only exception is paper ; however, the authors consider scaling functions different from ours. Summary. We have calculated cumulant ratios for Ising strips with three natural boundary conditions, almost not studied so far: “free”, “wall$`++`$” and “wall$`+`$” situations. We have applied the Density Matrix Renormalization Group method followed by numerical differentiation and extrapolation $`L\mathrm{}`$. We claim that our results are very precise (three or four significant digits). The precision is comparable with three other “top quality” methods used in similar calculations: Monte Carlo , some versions of Renormalization Group and analysis of high-temperature series . We have also calculated the quantity which apparently escaped the attention so far (at least for boundary conditions considered by us), namely, the scaling functions for cumulants. Such functions give information of how finite surface fields influence values of cumulants. This influence is significant – in one case (”$`++`$”) even the sign of cumulant changes upon growth of surface field. We do not know how fundamental this phenomenon is. At first glance, it seems to be related to the lack of symmetry (i.e. $`f(H_0+H)f(H_0H)`$) and to a non-zero value of the third derivative of the free energy $`f`$ with respect to $`H`$ at $`H_0`$. However, the other explanations are not excluded and we will discuss it elsewhere. Natural lines of continuation of our investigations are: testing of universality of cumulants and scaling functions (for other models in the two-dimensional Ising universality class, for example the hard squares model) and calculation of higher cumulants. This work is currently in progress. Most of our numerical calculations were performed on Pentium II machines under Linux. We used ARPACK package to calculate eigenvalues. Acknowledgments. This work was partially funded by KBN Grant No. 2P03B10616.
no-problem/9910/physics9910011.html
ar5iv
text
# A Compact 3H(p,𝛾)4He 19.8-MeV Gamma-Ray Source for Energy Calibration at the Sudbury Neutrino Observatory ## 1 Introduction The Sudbury Neutrino Observatory (SNO) is a new heavy water (D<sub>2</sub>O) Čerenkov solar neutrino detector. The detector is unique in its use of 1000 tonnes of D<sub>2</sub>O as target, which allows the detection of electron neutrinos and neutrinos of all active flavours through the following channels: $`\nu _e+d`$ $``$ $`p+p+e^{}1.44\text{ MeV}`$ (1) $`\nu _x+d`$ $``$ $`p+n+\nu _x2.22\text{ MeV}`$ (2) $`\nu _x+e^{}`$ $``$ $`\nu _x+e^{}`$ (3) This ability to measure the total flux of all active flavours of neutrinos originating from the Sun will allow SNO to make a model-independent test of the neutrino oscillation hypothesis. The SNO collaboration needs a high energy calibration point beyond the <sup>8</sup>B solar neutrino energy endpoint of $``$15 MeV. This calibration point is very important in understanding the detector’s energy response because Čerenkov light production is not exactly linear in energy (e.g. energy loss to low energy electrons below the Cěrenkov threshold). As the energy increases, the probability that a photomultiplier tube would get hit by more than one Čerenkov photon increases. Therefore, a calibration point beyond the solar neutrino energy endpoint will provide vital information on this multiple hit effect at energies beyond the solar neutrino endpoint. In the arsenal of calibration sources at SNO, the “$`pT`$” source, which employs the <sup>3</sup>H($`p,\gamma `$)<sup>4</sup>He reaction to generate 19.8-MeV gamma rays, has the highest energy. This $`pT`$ source is the first self-contained, compact, and portable high energy gamma-ray source ($`E_\gamma >`$10 MeV). In this paper various aspects of the construction and operation of the $`pT`$ source are described. In Section 2 the design criteria for a high energy gamma-ray calibration source at SNO are outlined. Attributes of the <sup>3</sup>H($`p,\gamma `$)<sup>4</sup>He reaction are discussed in Section 3. In Section 4, the design of the $`pT`$ source is described. Details involving the fabrication of the scandium tritide target and the assembly of the $`pT`$ source are summarized in Section 5. The experimental setups used in measuring the neutron and the gamma-ray output of the $`pT`$ source are described in Section 6. The results of these measurements can be found in Section 7, followed by the conclusions in Section 8. ## 2 Design Criteria for a High Energy Gamma-Ray Source One way to calibrate the high-energy response (10$`<E<`$20 MeV) of a large water Čerenkov detector like SNO is to use high-energy gamma rays generated from radiative-capture reactions induced by a particle beam. The devices that provide these high-energy gamma rays must be compact enough to be maneuvered to different regions in the D<sub>2</sub>O volume using the SNO calibration source manipulator system. The largest insertion port for calibration devices at SNO can accommodate devices up to about 30 cm in diameter and 75 cm in length. This physical constraint limits the actual size of such calibration devices. Because the SNO detector is essentially a 100% efficient, 4$`\pi `$ detector to gamma rays in the solar neutrino energy regime, one does not need to design a high-energy source with a high gamma-ray production rate. The centroid of the photopeak can be measured to better than 1% in less than an hour with a gamma-ray yield of 0.2 s<sup>-1</sup>. SNO is designed to run with MgCl<sub>2</sub> loaded in the heavy water to detect the free neutron in Reaction (2). The high energy gamma-ray source is required to have a low neutron production rate. This will minimize the signal interference of the gamma rays resulting from thermal neutron capture by <sup>35</sup>Cl in D<sub>2</sub>O in the “salt” running scenario and the dead time in the data acquisition system. A neutron production of less than 10<sup>4</sup> s<sup>-1</sup> is needed for the design goal of $`>`$0.2 $`\gamma `$ s<sup>-1</sup>. The $`pT`$ source must be available to calibrate the SNO detector whenever there is a change to the detector configuration, or when a high energy calibration is called for. An operational lifetime of $`>`$60 hours for the $`pT`$ source will be more than enough to calibrate the SNO detector during its anticipated life span. Electromagnetic interference between this high energy calibration source and the photomultiplier tube array must be minimal. For this reason, accelerator sources like the $`pT`$ source have to be run in direct current mode, instead of pulsed mode, to eliminate possible electromagnetic pickup by the photomultiplier tube array. ## 3 Attributes of a <sup>3</sup>H(p,$`\gamma `$)<sup>4</sup>He Source The <sup>3</sup>H(p,$`\gamma `$)<sup>4</sup>He reaction (see for example, Refs. and ) has a Q-value of 19.8 MeV. Since <sup>4</sup>He does not have a bound excited state, the gamma ray emitted in this reaction is monoenergetic. Building a compact gamma-ray calibration source using this reaction is an attractive proposal for several reasons. First of all, the projectile and the target have unit charge. Therefore, the effect of Coulomb suppression on the cross section for this reaction is less than reactions with other combinations of charged projectiles and targets. Hence, the beam energy and power can be minimised. This allows the beam to be run in a d.c. mode without incorporating a complicated cooling system for the target. As the Q-value of <sup>3</sup>H(p,n)<sup>3</sup>He is -0.763 MeV, the $`pT`$ source is essentially neutron-free if the proton energy is below this threshold. However, isotopic impurities and the isotopic exchange between the beam and the target will give rise to undesirable neutrons through the <sup>2</sup>H(t,n)<sup>4</sup>He, <sup>3</sup>H(d,n)<sup>4</sup>He, and <sup>3</sup>H(t,nn)<sup>4</sup>He reactions. In principle, one can eliminate this neutron production problem by mass analyzing the beam. However, this option is not possible in the $`pT`$ source given the physical size constraint mentioned in the last section. A monoenergetic calibration source like the $`pT`$ source is better than sources with multiple energy lines in calibrating water Čerenkov detectors which generally have poor energy resolution. ## 4 Design of the $`pT`$ Source In order to keep the system as clean as possible, the $`pT`$ source was built with ultra-high vacuum (UHV) hardware. A cross sectional drawing of the $`pT`$ source can be found in Figure 1. The source can essentially be divided into three sections: the gas-discharge line, the ion acceleration line and the target chamber. In the following, the design of these three sections is discussed. The gas-discharge line is a cold-cathode Penning ion source, which runs in d.c. mode with a very modest power consumption. The outer housing of the gas-discharge line consists of two glass-to-stainless-steel adapters<sup>1</sup><sup>1</sup>1Manufactured by Larson Electronic Glass, Redwood City, CA, USA. Each of these adapters is 7.62 cm in length with a piece of 1.27-cm long Pyrex glass to isolate the two ends. The electrodes E1, E2 and E3 are welded to these adapters. The use of these glass-to-stainless-steel adapters provides convenient high voltage isolation between the anode and the cathodes. The placement of the various electrodes in the gas-discharge line was designed using the simulation program MacSimion . In the design, efforts were made to minimise ion loss to the electrode walls; hence, a higher beam current can be attained for a given discharge current. The beam was spread over the target; this reduces the areal power density and improves the target’s longevity. Under the normal running scenario, the cathodes (E1 and E3) are kept at ground, whilst the anode (E2) is maintained at +2 kV d.c. A SAES St-172 getter (model LHI/4-7/200) is used as the hydrogen discharge gas reservoir for the ion source. The getter has 360 mg of a zirconium-vanadium-iron alloy active material, and is mounted to the BNC connector next to E1 in Figure 1. The axial magnetic field required in the discharge is provided by a cylindrical magnet composed of seven 13.34 cm (outer diameter) by 5.88 cm (inner diameter) by 1.91 cm (thick) barium ferrite Feroxdur ceramic rings<sup>2</sup><sup>2</sup>2The magnets are supplied by Master Magnetics, Inc., Castle Rock, CO, USA. (part number CR525C). The maximum magnetic field inside the central bore of the magnet is about 0.06 T. The ion acceleration line is a double-ended glass adapter<sup>3</sup><sup>3</sup>3Manufactured by MDC Vacuum Products Corp., Hayward, CA, USA. (part number DEG-150)., with one end attached to the gas-discharge line and the other connected to the target chamber which is biased at a negative high voltage. In this scheme, the construction of complicated accelerating and focusing electrodes is avoided, and the length can also be kept to a minimum. When the ions exit this acceleration line and enter the target chamber, they have acquired an energy equivalent to the target bias voltage, in addition to their ejection energy from the ion discharge region. At the end of the ion acceleration line in the $`pT`$ source is the target mount flange. The target is secured to a copper heat sink, as shown protruding from the flange in Figure 1, by a stainless steel screw-on cap. This mounting mechanism is designed to allow efficient target mounting in the tritium glovebox in which this operation is to be performed. The total length of the $`pT`$ source is only 50 cm. For deployment in SNO, it will be housed inside a 25.4-cm diameter by 60-cm long stainless steel cylindrical deployment capsule. The dimensions of this capsule are well within the physical limits imposed by the SNO calibration-source-deployment hardware. The expected yield of the $`pT`$ source was calculated. Because the cross section of the <sup>3</sup>H($`p,\gamma `$)<sup>4</sup>He reaction below 50 keV is not well known, the cross section at the operating voltage of the $`pT`$ source had to be extrapolated from existing data. It is shown in Ref. that the long-wavelength approximation formalism developed in Christy and Duck is inadequate in describing the <sup>3</sup>H($`p,\gamma `$)<sup>4</sup>He cross section at low energies because of the reaction’s exceptionally high binding energy. Using the lowest energy data ($`0.1E_p0.75`$ MeV) from Hahn et al. , the reaction cross section $`\sigma (E)`$ was extracted by performing a $`\chi ^2`$ minimization of the S-factor $`S(E)`$, which is related to the cross section $`\sigma (E)`$ as : $$\sigma (E)=\frac{S(E)}{E}\mathrm{exp}\left(\sqrt{\frac{E_G}{E}}\right),$$ (4) where $`E`$ is the energy in the center of mass frame, and $`E_G`$ is the Gamow energy. Because $`S(E)`$ is expected to be a slowly varying function at low energy, it was fitted to the data as a power series: $$S(E)S(0)\left(1+\frac{S^{}(0)}{S(0)}E+\frac{1}{2}\frac{S^{\prime \prime }(0)}{S(0)}E^2\right),$$ (5) where the parameters $`S(0)`$, $`S^{}(0)`$ and $`S^{\prime \prime }(0)`$ were extracted. Details of the extrapolation can be found in Ref. . In Figure 2, the extrapolated $`S(E)`$ for the <sup>3</sup>H($`p,\gamma `$)<sup>4</sup>He reaction is shown. The values of the fitted parameters are listed in Table 1. The cross section at proton energies of 25 keV and 30 keV are 0.19 $`\mu `$b and 0.30 $`\mu `$b respectively. The stopping power required in the yield calculation was calculated using the program SRIM . Figure 3 shows the estimated gamma-ray yield as a function of the the mass-1 content in a 50-$`\mu `$A, 27-keV beam in the constructed $`pT`$ source. This calculation assumed a total mixing of hydrogen isotopes between the beam and the target. The ion beam current was measured in situ by a calorimetric method and by a Faraday cup fitted with a secondary electron suppression scheme. These measurements were made with extra hardware installed in the target chamber of an untritiated model $`pT`$ source. In the calorimetric method, the temperature of a copper target, in which a heater was embedded to calibrate the beam power , was monitored. Beam current measured by both methods agreed with each other. The $`pT`$ source is capable of generating at least 50$`\mu `$A of total (atomic and molecular) beam current at a beam energy of 20 keV. The mass composition of the beam was also measured in situ by lengthening the target chamber and installing a home-built mass spectrometer in the model source. The mass-1 composition was determined to be (0.63$`\pm `$0.09) in the H<sub>2</sub> partial pressure range of 0.3$`\times `$10<sup>-3</sup> to 0.6$`\times `$10<sup>-3</sup> mbar, which is a factor of $``$5 lower than the normal operating pressure of the $`pT`$ source. The normal operating pressure of the source was chosen by considering the beam stability and longevity running in a continuous mode. The mass composition measurement could not be made at the normal operating H<sub>2</sub> pressure of the source due to increased beam scattering in the lengthened target chamber and the inadequate resolution of the spectrometer. ## 5 Construction of the $`pT`$ Source ### 5.1 Fabrication of the Scandium Tritide Target The most common metal hydride films use titanium as the “sorbent” . Singleton and Yannopoulos measured the loss rate of tritium in titanium tritide, yttrium tritide and scandium tritide films at elevated temperatures under several different ambient environments. It was demonstrated that both yttrium and scandium films have a lower tritium loss rate than titanium films under the testing conditions. Although this study was performed using moderately loaded tritiated films (Y:<sup>3</sup>H and Sc:<sup>3</sup>H ratios were $``$1:1), this general observation of scandium tritide films having very good thermal stability is believed to hold even for heavily loaded films. This property is essential for a target system which does not have an external cooling mechanism like the $`pT`$ source. Molybdenum was chosen as the substrate for the scandium film because of the strong adhesion between the two materials . To ensure high adhesion strength of the scandium film to the molybdenum substrate, it was prepared by going through a series of mechanical and chemical treatments prior to film deposition. A substrate disc of diameter 2.86 cm was first cut out from a 1-mm thick sheet of 99.95% pure molybdenum using the electro-discharge machining (EDM) technique. This was to minimise the use of machining oil on the substrate. The substrate was then sandblasted by fine glass beads in order to increase its effective surface area and enhance the film adhesion strength. The scandium film would peel off much more easily from a non-roughened substrate surface. The substrate was then treated chemically in a multi-stage process. It was first cleansed in acetone in an ultrasonic bath for half an hour. The substrate was subsequently ultrasonically cleansed in ethanol, then deionised water, for half an hour in each solvent. This sequence of chemical cleansing ensured that hydrocarbons that might have deposited on the substrate during the EDM process to be removed. The substrate surface was then etched in a 3 M nitric acid bath for 30 seconds. The whole chemical cleansing process was completed by a 30-minute deionised water wash in an ultrasonic bath. Once the substrate had gone through this series of preparation processes, it was mounted to a copper holder in which a 110-W coil heater was embedded and placed inside the ultra-high vacuum (UHV) evaporation system which is described below. The Mo substrate was centered on the 2.54-cm diameter central aperture of the holder. This heater block was outfitted with thermocouples for temperature monitoring. The substrate was baked at 400C in the evaporation system for about four days, then at 250C for about a week to reduce outgassing from its surface. Fabrication of the scandium tritide target, and the subsequent assembly of the $`pT`$ source were performed at the tritium laboratory at Ontario Hydro Technologies (OHT) in Toronto, Ontario, Canada. The schematic of the vacuum system is shown in Figure 4. To ensure that a high vacuum could be achieved in this tritium run, oil-free vacuum pumps and UHV hardware were used in this system. The evaporation chamber is a UHV six-way cross with an outer flange diameter of 15.24 cm. The tritium-compatible glovebox is continuously purged with dry nitrogen. The moisture level in the glovebox is typically 30 to 50 ppm by volume. The nitrogen purge gas is routed through a Zr<sub>2</sub>Fe tritium trap in order to remove its tritium content before venting . The exhaust of the vacuum system is also routed through a Zr<sub>2</sub>Fe trap before venting. Two high-current feedthroughs were connected to the evaporation chamber. A 5-coil conical tungsten evaporation basket<sup>4</sup><sup>4</sup>4R.D. Mathis Company, Part Number B12B-3x.025W was mounted between these feedthroughs. A (26$`\pm `$1)-mg lump of 99.99% pure, sublimed dendritic scandium was placed inside this basket, and positioned directly above the molybdenum substrate in the heater block. The separation between the bottom of the tungsten basket and the molybdenum substrate was (14$`\pm `$2) mm. A stainless steel shroud that had an orifice directly below the evaporation basket was positioned around the feedthrough-basket assembly to prevent deposition on the viewport in the evaporation chamber and to reflect radiation back to the coil to enhance heating efficiency. A quartz oscillator was installed at the end of an evaporator bellows as shown in the setup in Figure 4. When the deposition assembly is inserted into the evaporation chamber, the oscillator can be lowered to the back side of the assembly through an aperture in the main shroud and used to monitor the deposition rate of scandium. The distance between the scandium source (in the tungsten evaporation basket) and the oscillator was 27 cm. As shown in Figure 4, there are two main gas lines connected to the evaporation chamber in the vacuum system of the setup. One of these branches is connected to a 5-g depleted uranium bed. This uranium bed is used to store tritium which can be readily desorbed by raising it to sufficiently high temperature . In Table 2, the isotopic purity of the tritium gas in this bed is shown. Prior to film evaporation, the whole apparatus was baked for over a week at $``$150-200 C to reduce the outgassing rate of the evaporation system. The tungsten evaporation coil was also baked by running a 10 A current through it. The base pressure of the system was $``$6$`\times `$10<sup>-7</sup> mbar during the bakeout. After the baking, the evaporation system reached a base pressure of 5.8$`\times `$10<sup>-8</sup> mbar. After bakeout, the deposition assembly (i.e. the high current feedthrough-evaporation basket assembly) was delivered into the evaporation chamber by winding in the linear translation stage to which the deposition assembly flange was connected. The tungsten evaporation basket was positioned directly above the centre of the molybdenum substrate. The current fed to the tungsten basket was raised at a rate of about 1 A min<sup>-1</sup> during the first thirty minutes of the experiment. This rate was then decreased to 0.2 A min<sup>-1</sup> to lower the outgassing rate of the evaporation hardware. The basket current was raised up to 46 A, at which point the coil temperature was $``$1900 C. This was to ensure that all the scandium, whose melting point is 1539 C, was evaporated. Immediately after the scandium deposition, the deposition assembly was removed from the evaporation chamber by winding out the linear translation stage and closing a gate valve (V2 in Figure 4). Before tritium was let into the evaporation chamber, the evaporation chamber was isolated by closing the remaining gate valves (V1 and V3 in Figure 4) connected to it. These two steps would reduce the amount of tritium used in the subsequent tritiation process. The molybdenum substrate temperature was subsequently raised to 400 C to enhance tritium sorption by the scandium film later on. The uranium tritide bed was first heated to 135 C to drive out the <sup>3</sup>He from tritium decay in the bed. At this temperature, tritium is still “locked” inside the bed. The released <sup>3</sup>He was first pumped out of the system before the uranium bed temperature was raised to 220-240 C at which temperature the tritium is desorbed. In order to measure the amount of tritium sorbed by the scandium film, the tritium gas released from the uranium bed was first trapped in the small volume between valves V6 and V10 (see Figure 4) before releasing to the isolated evaporation chamber. This trap has a volume of (31.9$`\pm `$2.2) cm<sup>3</sup>. With the tritium pressure measured by the pressure transducer connected to this volume, the amount of tritium used could then be determined. In Figure 5 the pressure inside the evaporation chamber is plotted against the time after Doses 1, 7, 9 and 13 were injected. It is clear from the figure that the sorbing capacity of the scandium film decreased as the tritium concentration in the film increased. A total of (8.19$`\pm `$0.57) Ci of tritium gas was injected in 13 different doses into the chamber. It was found that 89.9% of the tritium that was injected into the chamber was absorbed by the (5.7$`\pm `$0.6) mg scandium film on the target heater block. This corresponds to a <sup>3</sup>H/Sc atomic ratio of (2.0$`\pm `$0.2). The target substrate subtended a smaller solid angle than the heater block to which the substrate was mounted. After correcting for the solid angle, the tritium activity on the target substrate was found to be (3.3$`\pm `$0.8) Ci. ### 5.2 Assembly of the $`pT`$ Source The ion source must be cleansed before it could accept the tritiated target. If the outgassing rate of the ion source is too high, the getter would lose most of its capacity on pumping the residual gas in the source, rather than serving its purpose as the hydrogen discharge gas reservoir. The ion source was cleansed chemically and mounted to a tritium-free bakeout system. The ion source was baked at 150 C for about two weeks. The bakeout vacuum system was flushed with argon for approximately 5 to 10 minutes daily during this bakeout period. This flushing procedure did improve the overall cleanliness of the vacuum system. After the target fabrication, the ion source was removed from the bakeout system and wrapped in layers of Parafilm$`^{\text{TM}}`$ which is a flexible, thermoplastic material. It was used to minimise tritiated particles depositing on the outer surface of the ion source once it was taken into the glovebox where the target evaporation system was set up. The tritiated target was removed from the evaporation system and mounted to the $`pT`$ source. The ion source was then connected to the vacuum system as indicated in Figure 4. After the system had reached its base pressure, H<sub>2</sub> was let into the system, and an ion beam was allowed to strike and to bombard the target for 5 minutes. During this time, the beam energy was gradually increased from 0 to 25 keV. This procedure was necessary to cleanse the Penning electrodes by electro-discharge. Contamination on the target, which might have deposited on the target surface during the target mounting process, would also be removed by this brief beam bombardment. It was found that if this step were not carried out, the getter in the source would not be able to handle the residual gas load in the source once sealed. The St-172 getter had to be activated before loading hydrogen to it. To activate the getter, it was heated for 10 minutes at 800C by passing a 4.5 A current through it. Once activated, the getter current was lowered to about 1.6 A in order to maintain a temperature of 200C. The getter was then loaded with hydrogen by allowing an ambient H<sub>2</sub> pressure of 3.3$`\times `$10<sup>-4</sup> mbar into the ion source. After 30 minutes, $``$200 cm<sup>3</sup> mbar of H<sub>2</sub> would have been absorbed by the 360 mg of active material in the getter. The getter loading procedure was completed by turning off the getter current, and by pumping out the residual H<sub>2</sub> gas in the ion source. After the base pressure was reached, the source was isolated and detached from the rest of the vacuum system by closing the metal-seal valve on the source. The source was subsequently removed from the glove box, and its outer surface was de-contaminated. ## 6 Experimental Setup for Measuring the Neutron and Gamma- Ray Yields of the $`pT`$ Source ### 6.1 Gamma-Ray Detection Systems After the $`pT`$ source was constructed at OHT, a quality assurance test was first performed at Queen’s University at Kingston, ON, Canada. The source was subsequently transported to the University of Washington for a measurement of the gamma-ray angular distribution in the <sup>3</sup>H($`p,\gamma `$)<sup>4</sup>He reaction . In the quality assurance test, a 12.7-cm diameter by 7.6-cm long bismuth germanate (Bi<sub>4</sub>Ge<sub>3</sub>O<sub>12</sub>, or BGO) crystal was used as the gamma-ray detector . In the angular distribution measurement, three 14.5-diameter by 17.5 cm cylindrical barium fluoride (BaF<sub>2</sub>) crystals were used. In Figures 6 and 7 the orientations of the $`pT`$ source with respect to the detectors in the two different test systems are shown. ### 6.2 Neutron Detection System Because of beam-target mixing, fast neutrons are generated through the <sup>2</sup>H(t,n)<sup>4</sup>He, <sup>3</sup>H(d,n)<sup>4</sup>He, and <sup>3</sup>H(t,nn)<sup>4</sup>He reactions. The neutron output of the $`pT`$ source during its lifetime was monitored by neutron-proton elastic scattering in organic scintillators. The neutron detector was a 12.7-cm diameter by 5.1-cm thick Bicron BC 501 liquid scintillator, which was optically coupled to a Hamamatsu R1250 photomultiplier tube (PMT). A Piel 112 pulse shape discriminator (PSD) was used to perform pulse shape discrimination on gammas and fast neutrons generated by the $`pT`$ source. The neutron-gamma separation ability in the neutron detection system is demonstrated in Figure 8. ## 7 Gamma-Ray and Neutron Yields of the Source The gamma-ray and neutron production rates by the $`pT`$ source are summarised in this section. The $`pT`$ source was operated in the quality assurance test and in a measurement of the gamma-ray angular distribution in the $`pT`$ reaction. During the 98.8 hours of operational lifetime of the $`pT`$ source, data was taken at beam energies of 22, 27 and 29 keV. ### 7.1 Gamma-Ray Yields In the quality assurance test at Queen’s, the source was run at a beam energy of 22 keV for 3 hours. The gamma-ray output was subsequently increased by raising the beam energy to 27 keV, and ran for another 17.9 hours. Energy calibration of the BGO detector was provided by the <sup>22</sup>Na 0.511-MeV and 1.275-MeV lines, the <sup>1</sup>H(n,$`\gamma `$)<sup>2</sup>H 2.22-MeV line, and the <sup>12</sup>C(4.4 MeV) de-excitation line. In Figure 9 the cosmic-ray-background-subtracted energy spectrum from part of the data taken at 27 keV in the quality assurance test is shown. The figure shows a fit using a response function for the BGO spectrometer generated by GEANT . The measured gamma-ray yield of the $`pT`$ source during its testing at 27 keV is (0.67$`\pm `$0.11) s<sup>-1</sup>. The gamma-ray yield at 22 keV could not be extracted because of low statistics. In the angular distribution measurement, the gamma-ray detectors were energy calibrated by a variety of sealed sources: <sup>137</sup>Cs(0.662 MeV), <sup>207</sup>Bi(1.063 MeV), <sup>12</sup>C(4.44 MeV), and <sup>16</sup>O(6.13 MeV). Without a readily available energy source with an energy close to 19.8 MeV, Monte Carlo simulation using GEANT was relied upon to calculate the response of the detectors. The simulation program was checked against the data taken with a strength calibrated <sup>13</sup>C($`\alpha `$,n)<sup>16</sup>O source. Energy spectra were taken with this <sup>13</sup>C($`\alpha `$,n)<sup>16</sup>O source placed at the centre of the BaF<sub>2</sub> detector system. At the time of this experiment, this source had a strength of (4.1$`\pm `$0.1)$`\times `$10<sup>3</sup> $`\gamma `$ s<sup>-1</sup>. Because of its high neutron output, energy spectra were taken with a 2.5-cm thick slab of lead placed between the source and the detectors to extract the neutron induced spectra. By comparing these two types of spectra, the gamma-ray line shape could then be extracted for each detector. In Figure 10, the GEANT generated line shape is compared to an experimentally determined spectrum. After correcting for the effects of lead absorption, neutron induced background and dead-time, the number of detected gamma rays and efficiency ($`\epsilon _{exp}`$) were extracted. The average ratio between $`\epsilon _{exp}`$ and the GEANT calculated efficiency ($`\epsilon _{MC}`$), $`\epsilon _{exp}/\epsilon _{MC}`$, was found to be (1.01$`\pm `$0.04). The gamma-ray penetration function $`\eta _\gamma (\theta )`$ was measured for the 6.13-MeV gamma-ray line in the three BaF<sub>2</sub> detectors. This source was positioned inside an untritiated model $`pT`$ source, the mechanical construction of which was identical to the real $`pT`$ source, at the location where the tritiated target would be mounted. The gamma-ray detection rate was then measured experimentally in a procedure similar to the efficiency measurement above. By comparing this detection rate and the one without the presence of the model source, the average penetration factor over the solid angle subtended by the detectors $`\eta _\gamma (\theta )_{\mathrm{\Omega }_{det}}`$ was extracted. The average percentage difference between the measured values and the simulated ones is $`\pm `$3%. To extract the gamma-ray yield of the $`pT`$ source, the calibrated “beam-on” data were fitted to a composition of a cosmic-ray background and the 19.8-MeV line shape for an isotropic source located at the target surface in the $`pT`$ source as generated by GEANT simulation. Because the emitted gamma rays in the <sup>3</sup>H($`p,\gamma `$)<sup>4</sup>He reaction have a predominant $`\mathrm{sin}^2\theta `$ angular distribution , the extracted gamma-ray amplitude from the fit was corrected for this distribution. The rate at $`E_p=`$29 keV during the gamma-ray angular distribution measurement in the last 47.2 hours of the source’s lifetime was (0.36$`\pm `$0.03) s<sup>-1</sup>. The gamma-ray production rate in between the quality assurance run and this time was not evaluated because of a noise problem in the electronics system, and the yield could not be extracted reliably. In Figure 11, the gamma-ray production rate was renormalised to that for a 29-keV atomic beam. It is clear that the gamma-ray yield decreased over time and is due to beam-target mixing and target sputtering in the source. This point will be discussed after evaluating the neutron yields in the next section. ### 7.2 Neutron Yields In the $`pT`$ source most of the neutrons are generated through the <sup>3</sup>H+<sup>3</sup>H interaction. Although the discharge gas stored into the hydrogen reservoir in the $`pT`$ source was initially free of any tritium, tritium would get into the discharge gas through beam-target exchange after a period of beam bombardment. Moreover, deuterium present in the discharge gas (at a 1.5$`\times `$10<sup>-4</sup> level) and in the target (at a 1.2$`\times `$10<sup>-3</sup> level) would enhance neutron production by the source through the <sup>3</sup>H(d,n)<sup>4</sup>He reaction. In the following the results of this neutron production measurement are presented. The fast neutron detection efficiency of the liquid scintillator was calibrated using an <sup>241</sup>Am-<sup>9</sup>Be source which generates neutrons through <sup>9</sup>Be($`\alpha `$,n)<sup>12</sup>C. This source has a calibrated neutron strength of (7.1$`\pm `$0.7)$`\times `$10<sup>3</sup> n s<sup>-1</sup> and was placed on the axis of the detector with a separation of 20.6 cm, the same distance between the tritiated target and the neutron detector in the gamma-ray angular distribution runs. Gamma rays and neutrons generated by the source could be cleanly separated by pulse shape discrimination. The net neutron count rate was extracted after the correction of a (7.1$`\pm `$0.1)% dead time and the subtraction of a background rate of 0.7 s<sup>-1</sup>. The detection efficiency ($`\epsilon \mathrm{\Delta }\mathrm{\Omega }/4\pi `$) was found to be $`(3.6\pm 0.4)\times 10^3`$. Neutrons generated by the $`pT`$ source would inevitably be scattered or absorbed by its construction material. Hence the detected neutron rate ($`R_{det}`$) would be less than the actual $`pT`$-source generated rate ($`R_{gen}`$) by a reduction factor $`\eta _n`$. To measure this reduction coefficient, the <sup>241</sup>Am-<sup>9</sup>Be source was placed on the target mount inside the untritiated model source. This model source was then placed in the same orientation to the liquid scintillator as in the gamma-ray angular distribution runs. After correcting for the dead time and background, and comparing the neutron detection rate to that in the calibration runs without the presence of this model source, it was found that the $`pT`$ source hardware absorbed or scattered $`(38\pm 6)`$ % of the neutrons that are generated inside the source. Because there was a variation in beam intensity on target from run to run, the neutron production rate was normalised to the current drawn from the target bias supply in order to provide a fair comparison. This current was a combination of the actual ion current on target and the contribution from secondary electron emission. This current was monitored during all the experimental runs. The $`pT`$ source does not have any internal secondary electron suppression scheme because of physical constraints imposed by the SNO calibration hardware. Two assumptions were made in extracting this neutron generation rate by the $`pT`$ source: 1. the neutrons generated by the $`pT`$ source have the same energy spectrum as fast neutron spectrum from the <sup>241</sup>Am-<sup>9</sup>Be calibration source; 2. the angular distribution of neutrons generated by the $`pT`$ source is isotropic as in the <sup>241</sup>Am-<sup>9</sup>Be case. Neutrons are produced predominantly by the <sup>3</sup>H+<sup>3</sup>H interaction in the $`pT`$ source. The reactions that are energetically possible in this system are: $`{}_{}{}^{3}\text{H}(t,nn)^4\text{He}`$ (6) $`{}_{}{}^{3}\text{H}(t,n_1)^5\text{He}^{}(n)^4\text{He}`$ (7) $`{}_{}{}^{3}\text{H}(t,n_0)^5\text{He}(n)^4\text{He}.`$ (8) In a measurement at a triton energy $`E_t`$=500 keV, the branching ratio for these reactions was found to be 70%:20%:10% (in the same order as they appear above) . The neutron energy spectrum for each of these reactions is somewhat different. Without any final-state effect, the direct three-body breakup reaction in reaction (6) would yield neutrons at an average energy of $`\frac{1}{2}\frac{5}{6}Q`$. With a $`Q`$-value of 11.3 MeV, the neutron energy spectrum from reaction (6) would be a broad peak centered at about 4.7 MeV. This shape is indeed very similar to the neutron spectrum from <sup>9</sup>Be($`\alpha `$,n) sources . The ground state transition (8) yields a 10.4-MeV neutron $`n_0`$, followed by a 0.9-MeV secondary neutron. The neutron detection efficiency for the liquid scintillator is almost null at 0.9 MeV. Reaction (7) is a sequential decay proceeding through a broad <sup>5</sup>He excited state at about 2 MeV. Because of the small branching ratio for this excited state transition, it would not contribute much to the uncertainty in the extracted neutron generation rate by the $`pT`$ source. The uncertainty in the extracted $`pT`$-source neutron rate due to the secondary neutrons is at most 15% if one assumes none of the secondary neutrons from (7) and (8) were detected. The uncertainty in the extracted $`pT`$-source neutron rate due to $`n_0`$ and the 14 MeV monoenergetic neutron from <sup>3</sup>H(d,n)<sup>4</sup>He was estimated to be 9% at $`E_p`$=29 keV. Although the neutron detector was placed in different orientations to the $`pT`$ source in the quality assurance runs and in the gamma-ray angular distribution measurement, continuous beam-target exchange rendered it impossible to extract the neutron angular distribution without the presence of a second neutron detector for normalisation purposes. Wong et al. measured the angular distribution for the <sup>3</sup>H+<sup>3</sup>H system at $`E_t`$=500 keV. They found that the ground state transition neutron group is isotropic to within an accuracy of $`\pm `$10%. They also found that in the neutron energy range of 2 to 7.5 MeV, the continuum neutron group is also isotropic to within an accuracy of $`\pm `$20% in the laboratory angle range of 4 to 100. For the <sup>3</sup>H(d,n)<sup>4</sup>He reaction, the angular distribution is isotropic at and below the resonance . Given these facts, the assumption that the neutrons emitted by the $`pT`$ source are isotropic was made in the yield evaluation. In order to look at the time variation of the neutron production rate by the $`pT`$ source more closely, the neutron production rates for all the runs were renormalised to the same atomic beam energy at 29 keV. In other words, the rate in all of the $`E_p`$=22 keV and 27 keV runs were scaled up by a factor corresponding to the difference in cross section at that atomic beam energy to that at 29 keV. The resulting plot is shown in Figure 11. In Figure 11, it is clear how the neutron production rate in the $`pT`$ source varied over time. The neutron production rate was gradually increasing initially. This is a clear indication of beam-target exchange, as tritium in the target gets into the discharge gas stream. The neutron production rate then began to decrease. This can be explained by the fact that the rate of hydrogen isotope exchange was reaching an equilibrium, and sputtering of the target became the dominant process. The target sputtering effect had caused the build-up of a thin film on the high voltage insulator in the acceleration section of the source. Under the normal operating condition, one end of this insulator is grounded whilst the other end is biased at $``$-30 kV. With a thin conductive film build-up on the insulator, high voltage could no longer be maintained without breakdown. This build-up effect limited the lifetime of the $`pT`$ source to 98.8 hours. The neutron production rate of the $`pT`$ source during calibration in the SNO detector was estimated. Using the highest data point in Figure 11, the maximum neutron generation rate was estimated to be less than (2.5$`\pm `$0.4)$`\times `$10<sup>3</sup> n s<sup>-1</sup>. The uncertainty here does not include the monoenergetic neutron and the secondary neutron contributions discussed above. However, the estimated rate quoted above should be seen as the upper limit of neutron production as it was estimated using the highest data point in the data. In Figure 12, the results of a Monte Carlo simulation of the SNO detector response to neutrons and gamma rays generated by the $`pT`$ source are shown. This simulation was performed using the SNO Monte Carlo and analysis program SNOMAN . In this simulation, fast neutrons generated by the $`pT`$ source were assumed to be monoenergetic at 4.7 MeV. Full $`pT`$ source and deployment capsule geometries were employed in this simulation, but neutron absorbers inside the source’s stainless steel deployment housing were not. This is equivalent to assuming the worst possible neutron leakage into the heavy water. A neutron production rate of 2,500 s<sup>-1</sup> and a gamma-ray production rate of 0.6 s<sup>-1</sup> were assumed. The spectra in the figure represent about 3 hours of run time in the SNO detector. From these figures, it is clear that the neutron production rate of the $`pT`$ source is low enough for an accurate measurement of the 19.8-MeV photopeak. ## 8 Conclusions A functional 19.8-MeV gamma-ray source using the <sup>3</sup>H($`p,\gamma `$)<sup>4</sup>He reaction was built. This $`pT`$ source met all the physical and operational requirements for energy calibration at the SNO detector. This is the first self-contained, compact and portable high energy ($`E_\gamma >`$10 MeV) gamma-ray source of this type. Techniques to fabricate high-quality scandium deuteride and tritide targets were developed. The tritiated target had a Sc:<sup>3</sup>H atomic ratio of 1:2.0$`\pm `$0.2. In the testing of the $`pT`$ source, 19.8-MeV gamma rays from the $`pT`$ reaction were observed and found to be sufficient for calibrating the SNO detector. The neutron production rate by the $`pT`$ source is also low enough that the neutron background would not mask the gamma-ray signal during calibration. Because of the time variation of its output, this $`pT`$ source is not suitable for efficiency calibration. The operational lifetime of the $`pT`$ source was 98.8 hours. Operation was terminated by a thin conducting layer deposited on the high voltage insulator in the ion acceleration line, which caused a high voltage breakdown across the insulator. The origin of this layer was scandium sputtering off the target surface. A second $`pT`$ source has been constructed with minor engineering changes to reduce this deposition effect. Calibration of large water Čerenkov detectors at energies near the solar neutrino endpoint has been a difficult problem. This proof-of-principle experiment of the $`pT`$ source opens a window for more convenient calibration standards in the future. One area in which the $`pT`$ source can be improved is to implement a beam analyser to reduce the beam power on the target, and to reduce the neutron output of the $`pT`$ source. This feature was not instrumented in this project because of stringent constraints on the physical size of calibration sources that can be deployed in the SNO detector. We thank Mel Anaya, Tom Burritt, Mark Hooper, Clive Morton, Hank Simons, and Doug Will for their technical support at various stage of this project. We thank David Sinclair for his careful reading of the manuscipt and his valuable comments. One of us (AWPP) would like to thank the University of British Columbia for a University Graduate Fellowship. This work was supported by the Natural Sciences and Engineering Research Council of Canada, and by the US Department of Energy under Grant Number DE-FG06-90ER40537. Table 1 $`\chi ^2`$ minimisation results in fitting the S-factors from Hahn et al. Table 2 Isotopic composition of the tritium gas used in the target. Figure 1 Cross sectional drawing of the $`pT`$ source. Figure 2 Extrapolated S-factors for the <sup>3</sup>H($`p,\gamma `$)<sup>4</sup>He reaction. Measured S-factors by Perry and Bame and Hahn et al. are shown as data points in this plot. Hahn et al. used a BGO detector and a NaI detector in their measurements, and the results for these two detectors are shown separately here. The solid curve is the $`\chi ^2`$ fitted curve to the combined data in Hahn et al. Figure 3 Estimated gamma-ray yield from the $`pT`$ source. The yield is plotted against the mass-1 fraction $`f_1`$ in a 50$`\mu `$A, 27-keV beam. Hydrogen isotopes in the beam and the target were assumed to be completely mixed. The yield shown here should be treated as the upper limit because target degradation was not taken into account in the calculation. The dotted lines are the calculated uncertainties based on the uncertainties in the physical parameters of the constructed $`pT`$ source and the cross section (). Figure 4 Schematic of the scandium tritide target evaporation vacuum system. Most of the setup is enclosed in a dry nitrogen environment inside a glovebox (from ). Figure 5 Tritium pumping by the scandium film. The pumping curve for Doses 1, 5, 9 and 13 are shown here. Figure 6 Top view of the BGO detector setup for the quality assurance testing of the $`pT`$ source. The separation between the liquid scintillator (LS) and the target of the $`pT`$ source is about 36 cm. Figure 7 Schematic of the BaF<sub>2</sub> detector system. The three BaF<sub>2</sub> detectors were oriented at 45 (D<sub>45</sub>), 90 (D<sub>90</sub>), and 135 (D<sub>135</sub>) to the beam direction, whilst the neutron detector was oriented at 2 to the beam direction. The separation between the centre of the target and the front face of the BaF<sub>2</sub> crystals was 35.6 cm for D<sub>90</sub>, and 25.4 cm for D<sub>45</sub> and D<sub>135</sub>. The neutron detector was located at 20.6 cm from the centre of the target. Figure 8 Timing distribution of liquid scintillator pulses generated by neutrons and gamma rays in a <sup>9</sup>Be($`\alpha `$,n)<sup>12</sup>C source. Neutrons are cleanly separated from the gamma-rays using the pulse shape discrimination scheme outlined in the text. Figure 9 Background-subtracted BGO energy spectrum in the quality assurance run at Queen’s University. The data points constitute the background-subtracted energy spectrum. The histogram shown is a fit using a response function for the BGO spectrometer generated by GEANT. The measured yield of the $`pT`$ source during its running at 27 keV is (0.67$`\pm `$0.11) s<sup>-1</sup>. The excess near 16 MeV was due to statistical fluctuation, as this was not observed in the later running of the source. Figure 10 Comparing GEANT generated gamma-ray line shape to measurement. The data points correspond to the 6.13-MeV line from a calibrated <sup>16</sup>O de-excitation source. The solid histogram is the GEANT generated line shape. Figure 11 Scaled neutron and gamma-ray production by the $`pT`$ source at $`E_p`$=29 keV. The rates were normalised to the current drawn from the target power supply during the runs. Also, the production rates for the $`E_p`$=22 keV and 27 keV runs have been scaled to the $`E_p`$=29 keV level. The scaling was done by assuming a pure atomic beam of protons or tritons since the contribution to the signals from molecular ions are much smaller. The “error bars” on the accumulated beam time for the gamma-ray results represent the time intervals in which the mean production rates were calculated. The gamma-ray yield could not be extracted reliably between the 20-th and the 50-th hour of the source lifetime because a noise problem in the electronics system, which was subsequently eliminated. Figure 12 Monte Carlo simulated SNO photomultiplier tube array response to neutrons and gamma rays that are generated by the $`pT`$ source. The abscissa value, $`N_{hits}`$, is the number of photomultiplier tube hits in the SNO detector. The $`N_{hits}`$-to-energy calibration in this Monte Carlo represents our best estimate, but not the calibrated response of the SNO detector. In the pure D<sub>2</sub>O running scenario (top panel), the peak centering at $`N_{hits}`$50 is the 6.25 MeV photopeak from <sup>2</sup>H(n,$`\gamma `$)<sup>3</sup>H. In the salt running scenario, neutron capture on <sup>35</sup>Cl generates a gamma cascade with a total energy of 8.6 MeV. This is the reason for the broader neutron capture peak in the bottom panel. In these figures, a neutron production rate of 2,500 s<sup>-1</sup> and a gamma-ray production rate of 0.6 s<sup>-1</sup> were assumed. The sharp “peak” in the bottom panel arises from scaling of the Monte Carlo spectrum to correspond to the neutron production rate above. The spectra represent about 3 hours of run time in the SNO detector.
no-problem/9910/astro-ph9910545.html
ar5iv
text
# La Frontera a Alto Corrimiento al Rojo: Historia de la Formación de las Galaxias ## 1 Introducción Ciertas cuestiones que han resultado en grandes avances en la historia de la astronomía y la cosmología tienen, además, un interés especial para la humanidad porque tratan sobre fenómenos totalmente imprescindibles para nuestra existencia en el universo: la fuente de energía del Sol, la formación de sistemas planetarios, y la síntesis de los elementos en el interior de las estrellas son ejemplos que acuden a la mente. La cosmología moderna, después de habernos encaminado por vez primera al estudio científico del origen del universo observable en su totalidad, en el marco de la teoría de la Gran Explosión, trae consigo también otra cuestión dentro de esa categoría: el origen de las fluctuaciones primordiales y la formación de las galaxias. El universo debe empezar con condiciones iniciales prácticamente homogéneas para resultar en el estado presente de homogeneidad a gran escala; pero la existencia de fluctuaciones de densidad primordiales, de una amplitud $`\delta \rho /\rho 10^5`$, es absolutamente necesaria para la formación de las galaxias. La gravedad puede amplificar y llevar las fluctuaciones iniciales al colapso no lineal, pero no puede crearlas. El proceso que generó esas fluctuaciones en el universo primitivo, sumido todavía en el misterio, dió lugar a la gran diversidad del universo no lineal, incluida la existencia de la vida, y evitó la continuación indefinida del universo lineal y homogéneo, conteniendo únicamente un mar de radiación y átomos de hidrógeno y helio. Estamos actualmente en la era de exploración y descubrimiento hacia la frontera de alto corrimiento al rojo o, en otras palabras, de las mayores distancias des de las que es posible recibir mensajes en el universo. Dada la velocidad de la luz, las grandes distancias proporcionan a la cosmología la evidencia de la historia pasada de la formación de las galaxias. Durante las últimas tres décadas, nuestra visión del pasado del universo ha progresado continuamente con el descubrimiento de galaxias y núcleos activos a distancias cada vez mayores, el estudio de fondos de radiación cósmica a distintas frecuencias, y el análisis de espectros de absorción del hidrógeno intergaláctico interpuesto en la dirección de fuentes luminosas. Presentamos en este artículo de revisión un breve resumen del estado actual de la teoría y observaciones sobre la formación de las galaxias y la evolución del medio intergaláctico. No podemos hacer justicia en ese breve artículo al inmenso número de trabajos publicados en diversos temas de gran impacto sobre el estudio de la formación de las galaxias. Entre otros artículos de revisión, Ellis (1997) describe el estudio de galaxias débiles, y Rauch (1998) presenta una excelente exposición de observaciones y teorías del bosque Ly$`\alpha `$ de hidrógeno intergaláctico. Cabe recomendar asimismo varios artículos en libros de conferencias recientes de gran utilidad para ponerse al día en ese campo: véase, por ejemplo, Madau 1999, Steidel 1998a,b. ## 2 Teoría de la Materia Invisible Fría De entre los distintos modelos propuestos de formación de galaxias, la teoría de la Materia Invisible Fría (que abreviaremos MIF; en inglés, Cold Dark Matter) ha resultado ser la de mayor éxito, y claramente favorecida por las observaciones. La teoría postula que la materia invisible, cuya presencia se deduce de las determinaciones dinámicas de la masa de galaxias y cúmulos (p.e., Trimble 1987), consiste en objetos o partículas “frías”, sin dispersión de velocidades inicial, y que las fluctuaciones primordiales de densidad son adiabáticas (es decir, manteniéndose constante la razón de la densidad de fotones, bariones y materia invisible), Gausianas, y con un espectro de potencias invariante de escala (véase por ejemplo, Blumenthal et al. 1984, Ostriker 1993). Generalmente, se supone que las fluctuaciones fueron generadas en un período de inflación por algún proceso que se mantuvo constante mientras el rango de escalas observable en el presente cruzaba el horizonte de acontecimientos, lo cual implica la invariancia de escala. Inicialmente, la teoría MIF se consideró de forma casi exclusiva dentro del modelo cosmológico con densidad de materia crítica, $`\mathrm{\Omega }\rho /\rho _{crit}=1`$ (donde $`\rho `$ es la densidad media de materia y $`\rho _{crit}`$ es la densidad crítica). Desde los inicios de la cosmología observacional, las medidas de la densidad media del universo indicaron un valor $`\mathrm{\Omega }<1`$ (véase, p.e. Gott et al. 1974). Este resultado observacional se ha mantenido hasta la actualidad: la densidad media de luz en la banda B es en el presente $`\rho _B2\times 10^8hL_{}\mathrm{Mpc}^3`$ (Zucca et al. 1997 y referencias incluídas; utilizamos aquí$`H_0=100h\mathrm{km}\mathrm{s}^1\mathrm{Mpc}^1`$), y la razón masa-luminosidad en diversos sistemas colapsados, desde galaxias individuales a los mayores cúmulos, medida a radios suficientemente grandes, tiende generalmente a valores $`150hM/L_B500h`$ (p.e., Bahcall, Lubin, & Dorman 1995), resultando en una densidad de materia $`\overline{\rho }5\times 10^{10}h^2M_{}\mathrm{Mpc}^3`$, y $`\overline{\rho }/\rho _{crit}=\mathrm{\Omega }0.2`$. El valor de $`\mathrm{\Omega }`$ así deducido a partir de la densidad luminosa del universo puede verse afectado por el efecto del sesgo de las galaxias. En principio, la mayor parte de la masa del universo podría ubicarse en los grandes vacíos, sin pertenecer a ninguna estructura virializada conteniendo galaxias donde la masa total pueda determinarse a través de los métodos dinámicos habituales. Esa fue la posibilidad bajo la cual el modelo Einstein-de Sitter ($`\mathrm{\Omega }=1`$) fue forzado a resguardarse de la evidencia observacional. Sin embargo, varios avances durante la última década han confirmado un valor de $`\mathrm{\Omega }0.3`$. En los cúmulos de galaxias de mayor masa, la fracción de la masa constituída por bariones deducida por la intensidad y el espectro de rayos X emitidos por el gas caliente es $`f_{bar}0.06h^{3/2}`$ (S. D. M. White et al. 1993; D. A. White & Fabian 1994). Dada la densidad de bariones deducida de la teoría de nucleosíntesis primordial, $`\mathrm{\Omega }_b0.019h^2`$ (p.e., Burles & Tytler 1998), y el hecho de que la fracción de masa bariónica en los cúmulos más masivos debe ser representativa de la fracción media en el universo (White et al. 1993), deducimos un valor $`\mathrm{\Omega }0.3h^{1/2}`$. Al mismo tiempo, las observaciones recientes de curvas de luz de supernovas tipo Ia deducen el mismo valor de $`\mathrm{\Omega }`$, concluyendo además que la expansión del universo está acelerando de la forma esperada en el modelo del universo con geometría espacial plana, donde la constante cosmológica proporciona la densidad de energía adicional necesaria para llegar a la densidad crítica (Perlmutter et al. 1998, Riess et al. 1998). Otros métodos de medir el valor de $`\mathrm{\Omega }`$ son, por lo general, consistentes con este resultado (por ejemplo, el valor $`\mathrm{\Omega }0.3`$ con una constante cosmológica $`\mathrm{\Lambda }=1\mathrm{\Omega }`$ también es favorecido por el valor de la constante de Hubble y la edad de las estrellas más antiguas en cúmulos globulares). Varias observaciones en el futuro próximo (tales como supernovas Tipo Ia y fluctuaciones en la radiación cósmica de fondo) deberán permitir medir $`\mathrm{\Omega }`$ con mayor precisión, y clarificar si la reciente aceleración del universo se debe a una constante cosmológica o al modelo más general de un campo escalar con presión negativa (p.e., Peebles & Vilenkin 1998 y referencias incluídas). Una vez adoptamos el modelo cosmológico deducido a partir de esas observaciones, las predicciones de la tería MIF se ajustan bien a los datos observacionales sobre estructura a gran escala, tales como la función de correlación espacial de las galaxias, la abundancia de cúmulos de galaxias, la evolución en CR de esas cantidades, y las fluctuaciones en la radiación cósmica de fondo recientemente detectadas. Para obtener esas predicciones, es preciso calcular el espectro de potencias de las fluctuaciones primordiales. El espectro de potencias de las fluctuaciones de densidad se obtiene en la teoría MIF calculando la evolución en el régimen lineal de esas fluctuaciones una vez cruzan el horizonte (para artículos de revisión, véase Efstathiou 1990, Bond 1996, Bertschinger 1996). En el límite de pequeñas escalas, las fluctuaciones entraron en el horizonte durante la época en que la densidad media del universo era dominada por radiación. Los bariones están entonces ligados a la radiación, y la presión radiativa resulta en una oscilación de las fluctuaciones, impidiendo el crecimiento gravitatorio. La materia invisible no interactúa con la radiación, y por lo tanto aumenta sus fluctuaciones de densidad. No obstante, ese crecimiento gravitatorio es muy lento incluso en la ausencia de acoplamiento con la radiación, cuando la radiación domina la densidad de energía. Dado que la densidad de radiación en el presente es $`\mathrm{\Omega }_{rad}=aT_{rad}^4/(\rho _{crit}c^2)=2.47\times 10^5h^2`$ (donde $`T_{rad}=2.76`$ K es la temperatura de la radiación de fondo), la época de igualación de las densidades de materia y radiación corresponde al CR $`1+z=4.04\times 10^4\mathrm{\Omega }h^2`$. La longitud del horizonte en esta época era del orden $`cH^1(z)=cH_0^1\mathrm{\Omega }^{1/2}(1+z)^{3/2}`$, y por lo tanto la longitud comóvil era $`L_{ig}=cH_0^1\mathrm{\Omega }^{1/2}(1+z_{ig})^{1/2}15(\mathrm{\Omega }h)^1h^1\mathrm{Mpc}`$ (donde utilizamos la notación habitual para la constante de Hubble en el presente, $`H_0=100h\mathrm{km}\mathrm{s}^1\mathrm{Mpc}^1`$, y $`H(z)`$ es la constante de Hubble a CR $`z`$). A escalas comóviles mucho menores que el horizonte en la época de igualación, la amplitud de fluctuaciones de densidad debe conservar la forma de las fluctuaciones primordiales cuando emergen del horizonte, ya que las fluctuaciones empezaron a crecer solamente a partir de la época de igualación, y el crecimiento en la época anterior fue muy lento. Eso implica que la amplitud de fluctuaciones tiende a una constante para el caso más habitual de invariancia de escala, con $`n=1`$ (donde el espectro primordial es $`P(k)k^n`$). Pero en el límite de escalas mucho mayores, $`LL_{ig}`$, las fluctuaciones crecen en el régimen lineal proporcionalmente al factor de escala, $`a(1+z)^1`$, desde el momento en que entran en el horizonte; dado que la longitud comóvil del horizonte es proporcional a $`(1+z)^{1/2}`$, el factor total de crecimiento a una época fija debe ser proporcional a $`L^2`$. Mostramos en la Figura 1 la dispersión en la fluctuación relativa de la masa total contenida dentro de una esfera de radio comóvil $`R`$, cuando la evolución lineal de fluctuaciones primordiales se extrapola hasta el presente. Conviene recordar aquí que la distribución de probabilidad de esta masa es Gausiana (dada la suposición de que el campo de densidades es Gausiano), con la dispersión mostrada en la Figura 1. La línea sólida es para el modelo $`\mathrm{\Omega }=0.3`$, $`\mathrm{\Lambda }=0.7`$, $`n=1`$, $`H_0=65\mathrm{km}\mathrm{s}^1\mathrm{Mpc}^1`$, y hemos normalizado las fluctuaciones a $`\sigma _8<\sigma ^2>^{1/2}(R=8h^1\mathrm{Mpc})=0.9`$. La línea de rayas es para el modelo $`\mathrm{\Omega }=1`$, $`\mathrm{\Lambda }=0`$, $`H_0=50\mathrm{km}\mathrm{s}^1\mathrm{Mpc}^1`$, y $`\sigma _8=0.55`$. Las curvas han sido calculadas con las fórmulas presentadas por Hu & Sugiyama (1996). Además del comportamiento esperado en el límite de pequeñas y grandes escalas, vemos también que el modelo de la constante cosmológica dispone de mayor potencia a grandes escalas, debido al mayor valor de la longitud característica $`L_{ig}(\mathrm{\Omega }h)^1`$. La normalización del espectro de fluctuaciones, dada por el parámetro $`\sigma _8`$, debe considerarse como un parámetro ajustable de la teoría. Su valor ha sido escogido aquí para reproducir aproximadamente la abundancia de cúmulos de galaxias en el presente (Eke, Cole, & Frenk 1996). Desde que las fluctuaciones de intensidad de la radiación de fondo de microondas fueron detectadas por el satélite COBE, la normalización viene fijada independientemente por esas fluctuaciones (Bennett et al. 1996; Bunn & White 1997). Esas dos medidas de la amplitud concuerdan para el modelo con constante cosmológica con $`\mathrm{\Omega }0.3`$, mientras que son claramente conflictivas si se supone $`\mathrm{\Omega }=1`$. Es notable el hecho de que cuando se toma el valor de $`\mathrm{\Omega }`$ deducido de las observaciones de la geometría global del universo y la densidad promedio de materia, la teoría MIF predice correctamente la relación entre las fluctuaciones de la radiación de fondo y la abundancia de cúmulos de galaxias, y concuerda además con la correlación espacial de galaxias (Jing, Mo, & Börner 1998 y referencias incluídas), así como la evolución de la abundancia de cúmulos en CR (Bahcall & Fan 1998). ## 3 Colapso gravitatorio no-lineal: formación de halos En la teoría MIF, la materia invisible empieza a formar objetos virializados (o “halos”) por colapso gravitatorio a partir de las escalas más pequeñas, cuando la fluctuación de densidad alcanza un valor $`1`$, entrando en el régimen no lineal. Luego, cuando las fluctuaciones a escalas mayores llegan al colapso, halos de mayor masa se forman por mersión de halos de masa menor colapsados anteriormente. Sólo podemos presentar aquí un resumen muy breve de la teoría de formación de halos y galaxias; para una discusión mucho más extensa, véase White (1996). Un modelo analítico de gran utilidad para entender el proceso del colapso no lineal, llevando a la formación y mersión de halos, es el modelo de Press-Schechter (Press & Schechter 1974). El modelo se basa en la solución analítica del colapso de una perturbación de densidad constante con simetría esférica, en cuyo caso el colapso al punto central sucede cuando se alcanza el valor de la sobredensidad extrapolada linealmente $`\delta =\delta _c=3/5(3\pi /2)^{2/3}=1.686`$ (Peebles 1980). La distribución de la densidad promediada sobre una esfera de radio $`R`$ es una Gausiana con dispersión $`\sigma (R)`$ (Figura 1), y el modelo Press-Schechter consiste en suponer que la fracción de la masa del universo que ha colapsado en halos de masa mayor que $`M=(4\pi /3)\overline{\rho }R^3`$ es igual a $`\mathrm{erf}\{\delta _c/[2^{1/2}\sigma (R)]\}`$. Nótese que esa fracción es en realidad dos veces la fracción del volumen con sobredensidad superior a $`\delta _c`$ en las condiciones iniciales. Aunque la introducción del factor 2 puede justificarse mejor con un tratamiento en el espacio de Fourier (Bond et al. 1991), el modelo de Press-Schechter debe considerarse únicamente como una aproximación que se utiliza frecuentemente debido a su gran simplicidad, y especialmente al hecho de que sus predicciones se ajustan a resultados numéricos de la abundancia de halos con bastante precisión (p.e., Governato et al. 1998; a la práctica, el parámetro $`\delta _c`$ puede considerarse ajustable a los resultados numéricos, en vez de tomar el valor requerido para el colapso esférico). Una vez obtenida la abundancia de halos de masa mayor que $`M`$, se obtiene por diferenciación la abundancia diferencial, y asignamos a cada escala $`R`$ una dispersión de velocidades para el halo obtenida también por el modelo esférico: $`\sigma ^2=(3\pi /2)^{2/3}[H(z_c)R]^2/2`$, donde $`H(z_c)`$ es la constante de Hubble en el momento del colapso. Básicamente, esa relación nos dice que la dispersión de velocidad de un cúmulo es del orden de la velocidad de expansión de Hubble sobre la región comóvil desde la cual el cúmulo ha colapsado, en el momento del colapso. La dispersión de velocidad de un halo (o, análogamente, el cociente de la masa sobre el radio) es la propiedad de un halo que determina las cantidades observables en galaxias y cúmulos, que son la dispersión de velocidad de las galaxias, la temperatura del gas difuso en equilibrio dinámico en el halo (determinada con espectros de rayos X), o la deflección gravitatoria de luz de fondo (medida en lentes gravitatorias). En las Figuras 2(a,b), las tres líneas sólidas gruesas nos dan la dispersión de velocidades de un halo formado a partir de una fluctuación de densidad inicial igual a $`(1,2,3)\sigma `$, en función del CR, para los mismos dos modelos utilizados en la Figura 1 (para el modelo con constante cosmológica en la Fig. 2b, los valores de $`\delta _c`$ y de la dispersión de velocidades correspondiente a cada escala $`R`$ deben ser modificados debido al efecto de repulsión de la constante cosmológica, y el ritmo de crecimiento lineal de fluctuaciones es también distinto; véase p.e., Viana & Liddle 1996). De acuerdo con la distribución Gausiana, la fracción de masa que ha colapsado en halos de mayor dispersión de velocidades es, respectivamente, (64%,10%,0.6%). Tomemos, por ejemplo, el momento presente ($`z=0`$) para el modelo $`\mathrm{\Omega }=0.3`$ (Figura 2b). Ese modelo nos predice que un $`0.6\%`$ de la masa en el presente está en objetos colapsados con dispersión de velocidad mayor que $`1000\mathrm{km}\mathrm{s}^1`$. Esos halos se corresponden evidentemente con los cúmulos de galaxias más masivos, y se formaron a partir del colapso de raras fluctuaciones a escalas grandes, $`R15h^1\mathrm{Mpc}`$, donde $`3\sigma (R)=\delta _c`$. Las líneas de rayas nos dan el valor de la masa de los halos (con un factor 10 en masa separando líneas sucesivas), e indican que la masa de uno de esos cúmulos es $`2\times 10^{15}M_{}`$. La mayor parte de la masa del universo debe encontrarse en objetos formados a partir de fluctuaciones más habituales: una fluctuación $`1\sigma `$ produce un halo con dispersión de velocidades $`200\mathrm{km}\mathrm{s}^1`$, un valor típico de los pequeños grupos de galaxias, tales como el grupo Local, donde residen la mayoría de las galaxias en el universo. La masa indicada en la Figura 2 se refiere siempre a la masa total del halo en el momento de su formación. Por ejemplo, la galaxia de la Vía Láctea pudo formarse a $`z2`$, en un halo con $`\sigma 150\mathrm{km}\mathrm{s}^1`$ de masa $`M10^{12}M_{}`$, y en el presente el Grupo Local está colapsando en un halo mayor. Los dos modelos en la Figura 2 han sido normalizados para ajustar sus predicciones a la abundancia observada de cúmulos. La dispersión de velocidades puede relacionarse con la temperatura del gas difuso en el halo cuando está en equilibrio hidorstático: $`kT/\mu =\sigma ^2`$. Mostramos también la temperatura en el eje vertical. Hemos utilizado aquí la masa media para el caso de gas primordial ionizado: $`\mu 0.6m_H10^{24}\mathrm{g}`$. Una propiedad general de la teoría MIF es que, a medida que las fluctuaciones colapsan a escalas cada vez mayores, los halos deben mergerse y formar nuevos halos con mayor dispersión de velocidad; además, el ritmo de crecimiento de la masa y la dispersión de velocidad es mucho más rápido a alto CR que en el presente. Eso es una consecuencia directa de la forma del espectro de potencias en la Figura 1: a pequeñas escalas, la amplitud de fluctuaciones es prácticamente constante, y por lo tanto la masa de los halos aumenta rápidamente cuando esas fluctuaciones empiezan a colapsar. Pero a mayores escalas, la amplitud disminuye con mucha más rapidez con la escala, y por lo tanto las fluctuaciones a escalas grandes colapsan mucho más tarde. ## 4 Formación de galaxias Una pregunta central en la teoría de formación de galaxias ha sido la siguiente: cuál es la causa de la diferencia entre una galaxia y un cúmulo de galaxias? Evidentemente, los cúmulos de galaxias han colapsado gravitatoriamente, pero deducimos a partir de observaciones de rayos X que la mayor parte de la materia bariónica está en forma de gas difuso y caliente distribuído en el halo. La masa total del gas difuso en un cúmulo masivo puede superar las $`10^{14}M_{}`$, pero no se forman nunca galaxias de masa tan grande. Esa cuestión fue investigada por Binney (1977), Rees & Ostriker (1977), Silk (1977), y especialmente en el contexto de la teoría MIF por White & Rees (1978). En el colapso gravitatorio de un halo, la materia bariónica se calienta inicialmente en ondas de choque, hasta la temperatura necesaria para permanecer en equilibrio hidrostático. Luego, para poder condensar a una galaxia central, es preciso que el gas pierda su energía térmica por procesos de radiación. Si el tiempo necesario para que toda la energía térmica de los bariones sea radiada es corto comparado con el tiempo entre mersiones sucesivas de halos, entonces el gas va a poder concentrarse hacia el centro del halo, y el proceso de colapso gravitatorio puede proseguir hasta llegar a la formación de estrellas (despues de fragmentación de nubes de gas, y posiblemente de haber formado un disco debido a la conservación del momento angular; véase Fall & Efstathiou 1980, Fall & Rees 1985) o de un objeto masivo central, dando lugar a un núcleo activo. En cambio, si el tiempo de enfriamiento es demasiado largo, nuevas mersiones para formar halos de mayor masa van a calentar de nuevo el gas, manteniéndolo en estado difuso. La condición que debemos analizar es, por consiguiente, que el tiempo de enfriamiento del gas sea igual a la edad del universo (que es aproximadamente el tiempo entre mersiones sucesivas). Los procesos de enfriamiento más importantes son la radiación bremsstrahlung, y la excitación y ionización colisional de iones. A temperaturas $`T10^5K`$, el enfriamiento por hidrógeno y helio domina, pero en el rango $`10^5K<T<10^7K`$ los iones de elementos pesados son importantes (p.e., Gaetz & Salpeter 1983). En las figuras 2(a,b), las líneas de puntos indican la dispersión de velocidad de halos en los que el tiempo de enfriamiento es igual a la edad del universo, para tres valores de la metalicidad del gas, $`(0,0.1,1)Z_{}`$. Por encima de las líneas de puntos, el tiempo de enfriamiento es demasiado largo y se predice que el gas caliente se mantiene en el halo, mientras que por debajo de las curvas el gas puede enfriarse y formar una galaxia. El ritmo de enfriamiento por radiación se ha calculado suponiendo que una fracción $`\mathrm{\Omega }_b/\mathrm{\Omega }`$ de la masa del halo está en forma de gas difuso, donde $`\mathrm{\Omega }_b=0.019h^2`$, de acuerdo con las mediciones de la abundancia de deuterio (Burles & Tytler 1998), y suponiendo el valor de la sobredensidad media de los halos en el modelo del colapso esférico (igual a $`18\pi ^2`$ para $`\mathrm{\Omega }=1`$). Si las galaxias pueden formarse en halos por debajo de las líneas de puntos, obtenemos la predicción de que hasta $`z4`$, el colapso de un halo debió resultar siempre en la formación de una galaxia central, pero a menores CR los halos más masivos pueden contener solamente galaxias formadas anteriormente, y la mayor parte del gas difuso no puede enfriarse. Mencionamos aquí que, incluso en los cúmulos de mayor masa, una parte del gas en la región central (con mayor densidad) puede radiar más rápidamente; la razón por la cual eso no da resultado a grandes tasas de formación estelar en las galaxias centrales de cúmulos masivos es una cuestión que permanece sin resolverse (Fabian, Nulsen, & Canizares 1994). Vemos, pues, que la física de enfriamiento del gas proporciona una explicación satisfactoria del límite superior de la masa de las galaxias. Otra cuestión distinta es el límite inferior, y eso nos lleva a una transición fundamental en el universo para la historia de formación de las galaxias: la reionización del medio intergaláctico. ## 5 La reionización del universo En la teoría de la Gran Explosión, la materia del universo debe formar átomos por vez primera cuando la temperatura de la radiación de fondo disminuye hasta $`T3000`$ K, a $`z10^3`$. La materia intergaláctica permanece posteriormente en estado atómico, hasta el momento en que el colapso no-lineal de las perturbaciones lleva a la formación de objetos que emiten radiación ionizante, y pueden ionizar de nuevo el gas difuso. Las observaciones del espectro de fuentes luminosas a alto CR (generalmente, cuásares) demuestran que el medio intergaláctico fue ionizado anteriormente a $`z5`$. La luz de una fuente a longitudes de onda menores que la línea Ly$`\alpha `$ de hidrógeno atómico puede ser absorbida por un átomo a lo largo de su trayectoria a través del universo, en el punto en que la longitud de onda se ha corrido hasta coincidir con la línea Ly$`\alpha `$ . Si una parte importante de la densidad bariónica media del universo estuviera distribuída por el espacio en forma atómica, la profundidad óptica de absorción sería enorme ($`10^5`$ a $`z=3`$), con lo cual el flujo debería disminuir a cero abruptamente a la longitud de onda de Ly$`\alpha `$ (Gunn & Peterson 1965). Lo que se observa en realidad es la presencia de múltiples líneas de absorción que producen un decremento neto del flujo de solamente un $`30\%`$ a $`z=3`$. Eso implica que el medio intergaláctico está altamente ionizado, con una fracción neutra de $`10^5`$. Esa diminuta fracción del hidrógeno intergaláctico puede explicar satisfactoriamente las propiedades de las líneas de absorción observadas (denominadas usualmente como el “bosque Ly$`\alpha `$ ”), originadas en las variaciones de la densidad del gas que no ha colapsado todavía en halos virializados de gran densidad. Varios trabajos recientes, utilizando modelos analíticos y simulaciones numéricas, han mostrado que la teoría MIF predice de forma genérica que el medio intergaláctico ionizado debe dar lugar a ese bosque de líneas (véase McGill 1990; Bi, Börner, & Chu 1992; Bi 1993; Miralda-Escudé & Rees 1993; Cen et al. 1994; Zhang et al. 1995, 1998; Hernquist et al. 1996; Miralda-Escudé et al. 1996). El hecho de que la densidad del gas neutro es proporcional al cuadrado de la densidad del gas cuando se establece el equilibrio entre recombinación y fotoionización por el fondo cósmico ionizante es la causa de que un medio continuo da lugar a un espectro de absorción con la apariencia de “líneas” individuales, cada vez que se encuentra un máximo de la densidad del gas a lo largo de la línea de visual. El espectro de absorción tiene una escala de suavizamiento natural de $`20\mathrm{km}\mathrm{s}^1`$, debido a la dispersión de velocidad térmica del gas fotoionizado (con temperatura $`T2\times 10^4`$ K). El tamaño transversal de las estructuras de absorción Ly$`\alpha `$ debe ser del orden de su dispersión de velocidad multiplicada por la edad del universo, ya que tales estructuras no han tenido todavía tiempo de colapsar y de llegar a un equilibrio hidrostático; observaciones recientes del tamaño transversal en pares de cuásares (Bechtold et al. 1994; Dinshaw et al. 1994) confirman esa predicción. Qué causó la reionización del universo? Dos mecanismos pueden ionizar el gas intergaláctico: fotoionización, o ionización colisional una vez el gas ha sido calentado por ondas de choque provenientes de alguna explosión de gran energía. La fotoionización es el método más eficiente para ionizar el medio de baja densidad, ya que requiere menos energía, y la radiación se transporta con gran eficacia a todas las regiones del espacio. Los dos tipos de fuentes de radiación ionizante conocidas que pueden formarse tan pronto como los primeros halos colapsan a pequeñas escalas son estrellas y núcleos activos (producidos por agujeros negros masivos en el centro de una galaxia). La cantidad de estrellas necesaria para ionizar el universo entero es solamente una parte muy pequeña de todas las estrellas que se han formado hasta el presente, y puede relacionarse fácilmente con la metalicidad media producida. Ya que tanto los elementos pesados provenientes de explosiones supernova como la radiación ionizante son producidas por estrellas masivas, la razón de esas dos cantidades está relativamente fijada: característicamente, una estrella de $`30M_{}`$ fusiona unas $`4M_{}`$ de hidrógeno en la secuencia principal, obteniendo una energía $`0.03M_{}c^2`$, de la cual $`0.01M_{}c^2`$ se emite en fotones ionizantes. Esa misma estrella produce unas $`5M_{}`$ de elementos pesados en la explosión supernova. Si los fotones son absorbidos por hidrógeno intergaláctico (necesitando una energía de $`20`$ eV para cada ionización, o una fracción $`2\times 10^8`$ de la masa-energía en reposo), la masa ionizada es de $`5\times 10^5M_{}`$, y por lo tanto la metalicidad media aumenta sólo hasta $`10^5`$ una vez se ha emitido un fotón ionizante para cada barión. Dado que la metalicidad media en el universo presente es mucho mayor, es evidente que la primera generación de estrellas puede fácilmente ionizar todo el universo (p.e., Couchman & Rees 1986). Evidentemente, los cuásares pueden ser también las fuentes dominantes para la reionización, dado que la eficiencia en convertir la masa acretada por un agujero negro en radiación ionizante es generalmente mucho mayor que para estrellas. Las observaciones de la abundancia de cuásares y de la intensidad del fondo cósmico de radiación ionizante a $`z3`$ indican que los cuásares son probablemente los mayores contribuyentes de esa radiación cósmica, pero la naturaleza de las fuentes causantes de la reionización a mayor CR es todavía incierta (Miralda-Escudé & Ostriker 1990; Madau 1991, 1992, 1999; Haardt & Madau 1996; Rauch et al. 1997; Haiman & Loeb 1997, 1998; Miralda-Escudé, Haehnelt, & Rees 1999). ## 6 Las primeras galaxias Las primeras galaxias donde pudieron formarse las primeras estrellas y cuásares surgieron, en la teoría MIF, en los primeros halos que colapsaron donde el gas pudo enfriarse. El hidrógeno atómico solamente empieza a radiar en líneas de excitación cuando la temperatura supera los $`10^4`$ K; a temperaturas inferiores, las colisiones con electrones térmicos no tienen nunca la energía necesaria para excitación a los niveles atómicos $`n=2`$. El hidrógeno molecular proporciona la única fuente de enfriamiento a menores temperaturas. En la materia primordial, en ausencia total de elementos pesados, el hidrógeno molecular se forma a partir del ión $`H^{}`$ (Saslaw & Zipoy 1967), el cual resulta a su vez de colisiones de hidrógeno con los electrones del residuo de ionización que permanece después de la época de recombinación (Peebles 1968). Ese proceso de formación de moléculas es muy ineficaz (en el presente, el hidrógeno molecular se forma en la superficie de granos de polvo interestelar), y sólo una pequeña fracción del hidrógeno forma moléculas. El enfriamiento resultante sólo es suficiente para resultar en la formación de galaxias cuando la temperatura de un halo supera los $`2000K`$ (Tegmark et al. 1997; Abel et al. 1998). En las Figuras 2(a,b), la línea sólida a mayor CR indica la mínima temperatura para que un halo formado a CR $`z`$ pueda enfriar su gas y formar una galaxia, que hemos reproducido de Tegmark et al. (1997). Vemos que la teoría MIF, con los parámetros obtenidos de observaciones mencionadas antes, predice que las primeras estrellas pudieron formarse a $`z20`$, en halos de dispersión de velocidades de $`5\mathrm{km}\mathrm{s}^1`$ y masa total $`10^7M_{}`$. Esa primera generación de galaxias, formadas a través del enfriamiento por hidrógeno molecular, formó probablemente una cantidad muy pequeña de estrellas debido a la ineficacia de esta forma de enfriamiento (véase también Haiman, Rees, & Loeb 1997). En esta época, el ritmo de mersiones es muy elevado y la masa y dispersión de velocidad de los halos aumenta rápidamente con el tiempo; así pues, inmediatamente después de la formación de esas primeras estrellas, el enfriamiento atómico en halos de $`10^8M_{}`$ empieza a ser activo, dando lugar a una nueva generación de galaxias que debió reionizar el universo. Una vez el medio intergaláctico es reionizado, el enfriamiento es suprimido por la fotoionización, que calienta el gas y disminuye la abundancia de átomos que pueden ser excitados colisionalmente. La línea sólida delgada a bajo CR nos da la temperatura en la cual el tiempo de enfriamiento es menor que el tiempo de Hubble para el gas ionizado (hemos adoptado aquí el modelo de Haardt & Madau 1996 para el fondo de radiación ionizante). Las galaxias pueden formarse solamente por encima de esta línea una vez la reionización se ha completado. En realidad, la fotoionización tiene también otro efecto: el calentamiento del gas durante la reionización puede producirse antes del colapso, y luego la temperatura del gas sube adiabáticamente cuando la densidad aumenta si el enfriamiento no es importante (recordemos que las líneas indicando un tiempo de enfriamiento igual a la edad del universo son para una sobredensidad de $`18\pi ^2178`$, el valor obtenido en el modelo esférico en el momento de la virialización, y que el tiempo de enfriamiento es mayor a menor densidad). Debido a este efecto, el gas en halos con $`\sigma 30\mathrm{km}\mathrm{s}^1`$ no puede en general enfriarse para formar galaxias (p.e., Thoul & Weinberg 1996). Otro efecto que puede disminuir la eficiencia de formación de galaxias en halos de baja dispersión de velocidad es el hecho de que la energía liberada por el propio proceso de formación estelar puede crear un viento galáctico con energía suficiente para calentar y expulsar el gas en el proceso de acreción (p.e., Dekel & Silk 1986). Ese proceso parece ser necesario para evitar un exceso de galaxias de baja luminosidad comparado con las observaciones (White & Frenk 1991; Navarro & Steinmetz 1997). ## 7 Observaciones de galaxias a alto CR Grandes avances en cosmología observacional han tenido lugar recientemente, con el descubrimiento de gran número de galaxias a alto CR. Una de las técnicas de mayor éxito es la selección fotométrica de objetos débiles para detección de la brecha de Lyman (“Lyman break”; Guhathakurta, Tyson, & Majewski 1990; Steidel et al. 1996). Todas las galaxias, cuya luz resulta de la superposición de los espectros de muchas estrellas, muestran en su espectro una caída abrupta del flujo a la longitud de onda del límite de Lyman, como vemos en los modelos de espectros de distintas edades con ritmo constante de formación estelar en la Figura 3, que reproducimos de Bruzual & Charlot (1993). La brecha se produce en la atmósfera de las estrellas masivas, que producen la mayor parte de la radiación ultravioleta. Además, la presencia de hidrógeno atómico en el medio interestelar de la galaxia resulta generalmente en la absorción de la mayor parte de los fotones ionizantes, aumentando la amplitud de la brecha. A alto CR, la brecha de Lyman se corre hasta longitudes de onda en el visible, lo que permite seleccionar las galaxias a partir de sus colores. Por ejemplo, un objeto azul en V-R y muy rojo en B-V debe tener la brecha entre las bandas B y V, situándolo a $`z4`$. El número de galaxias detectadas por este método, de magnitudes en el rango $`23B29`$, ha permitido la primera medición de la función de luminosidad y de la tasa global de formación estelar a alto CR, que resulta ser mucho mayor que la actual (Madau et al. 1996; Steidel et al. 1999). La cantidad total de estrellas formadas en estas galaxias, en el intervalo $`1z5`$, puede dar cuenta de la mayor parte de la población estelar vieja en el universo presente. Sin embargo, esas estimaciones están sujetas todavía a varias incertidumbres: en general, solamente la tasa de formación de estrellas masivas se puede deducir de esas observaciones, y existe la posibilidad de que la función de masa inicial fuera distinta a alto CR. Por otra parte, la tasa total de formación estelar podría ser mucho mayor si galaxias de baja luminosidad que permanecen por debajo de los límites de detección dominaran la emisión total, o si la mayor parte de la radiación ultravioleta es absorbida por polvo interestelar y reemitida en el infrarojo lejano (p.e., Calzetti 1999 y referencias incluídas). Efectivamente, la radiación reemitida por polvo constituye otra forma de detección de galaxias a alto CR. Recientemente, la radiación de fondo en el infrarojo lejano, debida a la combinación de todas las galaxias, ha sido detectada por COBE (Fixsen et al. 1998; Hauser et al. 1998), y fuentes individuales se han descubierto con el nuevo detector SCUBA (Eales et al. 1998; Hughes et al. 1998; Smail et al. 1997), sugeriendo que gran parte de la radiación estelar a alto CR pudo ser procesada por polvo. El origen de esas galaxias a alto CR puede comprenderse simplemente en la teoría MIF a partir de la Figura 4. Las líneas punteadas nos indican aquí la magnitud aparente de una galaxia (en el sistema $`AB`$, donde la magnitud es $`AB=48.62.5log_{10}(f_\nu )`$, y el flujo $`f_\nu `$ se expresa en unidades cgs), formada en un halo de dispersión de velocidad $`\sigma `$ en función de $`z`$, para un modelo de máxima luminosidad en que todos los bariones contenidos en el halo forman estrellas en un intervalo de tiempo igual a la mitad de la edad del universo en el momento del colapso. En otras palabras, el modelo representa la máxima eficiencia de formación estelar posible, donde toda la materia forma estrellas en un tiempo del orden del tiempo dinámico del halo de materia invisible (véase Miralda-Escudé & Rees 1998 para más detalles del modelo). Las galaxias de mayor masa pueden empezar a formarse a $`z4`$, cuando el gas puede enfriarse según la figura 2b. El flujo máximo de esas galaxias corresponde a $`AB22`$. La mayor parte de las galaxias de la brecha de Lyman son algo más débiles, como es de esperar cuando el ritmo de conversión de gas a estrellas es menos eficiente, y cuando una parte de la radiación ultravioleta es absorbida por polvo. Vemos también que las galaxias más masivas pueden formarse más fácilmente cuando el gas ha sido ya enriquecido, debido al mayor ritmo de enfriamiento a alta metalicidad. Puesto que las galaxias detectadas a alto CR de mayor luminosidad están asociadas con las mayores fluctuaciones de densidad a las mayores escalas de colapso gravitatorio, su correlación espacial debería ser mucho mayor que la correlación de la masa, como se espera de los altos picos de densidad en un campo Gausiano (Kaiser 1984). Esa correlación ha sido detectada, y es en general consistente con las expectativas en la teoría MIF (Adelberger et al. 1998; Giavalisco et al. 1998; Kauffmann, Nusser, & Steinmetz 1997). Los estudios de la correlación de galaxias a alto CR abren un nuevo abanico enorme de posibilidades para observar la evolución de estructura a gran escala, que hemos solamente empezado a investigar. A mayor $`z`$, la absorción del flujo ultravioleta a $`\lambda <1216\mathrm{\AA }`$ debido al bosque Ly$`\alpha `$ aumenta rápidamente, de tal forma que a $`z5`$, la caída de flujo en Ly$`\alpha `$ se convierte en la característica más importante para seleccionar objetos a este CR. Así pues, la técnica para encontrar galaxias a $`z>5`$ va a ser muy parecida a la de la brecha de Lyman, sustituyendo ésta por el hoyo de Gunn-Peterson (Gunn & Peterson 1965). Aunque inicialmente el término del hoyo de Gunn-Peterson se utilizó únicamente para referirse a la absorción en Ly$`\alpha `$ producida por el medio intergaláctico atómico antes de la reionización, el medio ionizado puede dar lugar también a un hoyo cuando la fracción neutra es suficientemente grande para que todas las partes del medio intergaláctico, incluso las de menor densidad en los vacíos, absorban esencialmente todo el flujo. Debido al aumento de la densidad media del gas en el universo, y al mayor ritmo de recombinación a alto CR, es de esperar que prácticamente todo el flujo a longitudes de onda menores que la línea Ly$`\alpha `$ sea absorbido a $`z6`$, incluso si la reionización ocurrió a mayor CR (Miralda-Escudé et al. 1999). Finalmente, otra técnica importante de detección de galaxias a alto CR es mediante la línea de emisión de Ly$`\alpha `$ (Thompson et al. 1995; Thommes et al. 1998; Meisenheimer et al. 1998; Hu, Cowie, & McMahon 1998). En regiones de formación estelar, la mayor parte de la radiación ionizante emitida por estrellas jóvenes es generalmente absorbida por hidrógeno interestelar, y la energía se reemite en fotones de recombinación, siendo Ly$`\alpha `$ la línea más brillante. La búsqueda de líneas de emisión de objetos a alto CR tiene la ventaja de que el fondo de cielo puede reducirse observando solamente en una banda estrecha de longitudes de onda, especialmente cuando se seleccionan longitudes de onda de buena transparencia atmosférica (especialmente importante a $`z5`$, cuando la línea Ly$`\alpha `$ se corre hasta el infrarojo). Es evidente a partir de las Figuras 2 y 4 que la dificultad en la detección de galaxias a CR progresivamente mayor aumenta rápidamente para $`z5`$, puesto que el flujo de las galaxias debe disminuir no sólo debido al mayor CR, sino a la baja luminosidad de las primeras galaxias, debida a su menor masa comparado con las galaxias actuales. Esa predicción de la teoría MIF sugiere que en el futuro, las fuentes detectadas a mayor CR podrían ser supernovas, las cuales debieron ocurrir tan pronto como las primeras estrellas se formaron (Miralda-Escudé & Rees 1997). La proyectada misión NGST podría detectar supernovas hasta $`z10`$ (Stockman & Mather 1997). Una posibilidad interesante que podría permitir acelerar el estudio observacional de las primeras estrellas es que los estallidos de rayos gamma, con sus brillantes contrapartidas ópticas (p.e., Metzger et al. 1997), sean un fenómeno asociado con estrellas masivas, ocurriendo por consiguiente a los mayores CR donde existían estrellas. Las contrapartidas ópticas serían detectables en observatorios terrestres, corridas al infrarojo, mientras que en el visual no habría contrapartida debido al hoyo de Gunn-Peterson y absorción por fotoionización. En el caso de los estallidos de rayos gamma, el reto observacional podría consistir en identificar una pequeña fracción de esos eventos a mayor CR que cualquier otra fuente conocida, y distinguirlos de estallidos a menor CR pero con un gran enrojecimiento debido a polvo interestelar cerca de la fuente, que pueden tener características fotométricas similares. ## 8 Conclusiones La cosmología observacional está entrando en una nueva etapa de descubrimiento, con nuevas técnicas para la detección de galaxias débiles que empujan la frontera de alto CR hacia la época de la formación de las primeras galaxias. Al mismo tiempo, la medición precisa de las fluctuaciones en la radiación de fondo, la realización de nuevos escrutamientos de CR de galaxias, y la continuación de la búsqueda de supernovas a alto CR para la medición de la geometría cósmica, prometen fijar los parámetros del universo y del modelo cosmológico. Hemos visto que la teoría MIF reproduce con gran éxito las observaciones acumuladas hasta el presente sobre estructura a gran escala. Sin embargo, el estado actual de la teoría deja muchas preguntas abiertas: cuál es la naturaleza de la materia invisible? Existe realmente una “energía de vacío” que da cuenta de la densidad necesaria para alcanzar la densidad crítica? Cuál es la naturaleza de esta energía de vacío, cuál es su ecuación de estado, y por qué existe? Qué proceso generó las fluctuaciones de densidad? Qué determinó su amplitud? Son las fluctuaciones primordiales exactamente adiabáticas y Gausianas, y es su espectro perfectamente invariante de escala, o existen pequeñas desviaciones de esta simple hipótesis? Esas preguntas, que nos llevan al misterio de la época de inflación y al origen del universo, ocuparán probablemente el centro de interés en el futuro de la cosmología observacional, cuyo avance permitirá también descifrar la historia de la formación de galaxias, desde el colapso de las primeras estrellas hasta el presente. Quisiera agradecer David Weinberg por proporcionar un código para calcular el espectro de potencias, y Gustavo Bruzual, Stephane Charlot y Max Tegmark por permitir la reproducción de resultados de sus trabajos en este artículo. Agradezco también diversas conversaciones con Martin Haehnelt, Martin Rees y David Weinberg.
no-problem/9910/quant-ph9910030.html
ar5iv
text
# Criteria for Continuous-Variable Quantum Teleportation ## 1 Introduction What is quantum teleportation? The original protocol of Bennett et al. specifies the idea with succinct clarity. The task set before Alice and Bob is to transfer the quantum state of a system in one player’s hands onto a system in the other’s. The agreed upon resources for carrying out this task are some previously shared quantum entanglement and a channel capable of broadcasting classical information. It is not allowed to physically carry the system from one player to the other, and indeed the two players need not even know each other’s locations. One of the most important features of the protocol is that it must be able to work even when the state—though perfectly well known to its supplier, a third party Victor—is completely unknown to both Alice and Bob. Because the classical information broadcast over the classical channel can be minuscule in comparison to the infinite amount of information required to specify the unknown state, it is fair to say that the state’s transport is a disembodied transport . Teleportation has occurred when an unknown state $`|\psi `$ goes in and the same state $`|\psi `$ comes out. But that is perfect teleportation. Recent experimental efforts show there is huge interest in demonstrating the phenomenon in the laboratory—a venue where perfection is unattainable as a matter of principle. The laboratory brings with it a new host of issues: if perfect teleportation is unattainable, when can one say that laboratory teleportation has been achieved? What appropriate criteria define the right to proclaim success in an experimental setting? Searching through the description above, there are several heuristic breaking points, each asking for quantitative treatment. The most important among these are: 1. The states should be unknown to Alice and Bob and supplied by an actual third party Victor. 2. Entanglement should be a verifiably used resource, with the possibility of physical transportation of the unknown states blocked at the outset. There should be a sense in which the output is “close” to the input—close enough that it could not have been made from information sent through a classical channel alone. 3. Each and every trial, as defined by Victor’s supplying a state, should achieve an output sufficiently close to the input. When this situation pertains, the teleportation is called unconditional. (If that is impractical, conditional teleportation—where Alice and Bob are the arbiters of success—may still be of interest; but then, at the end of all conditioning, there must be a state at the output sufficiently close to the unknown input.) To date only the Furusawa et al. experiment has achieved unconditional experimental teleportation as defined by these three criteria. The Boschi et al. experiment fails to meet Criteria (1) and (2) because their Victor must hand off a (macroscopic) state-preparing device to Alice instead of an unknown state and because of a variety of low system efficiencies . The Bouwmeester et al. experiment fails to meet Criteria (2) and (3) because their output states—just before they are destroyed by an extra “verification” step—can be produced via communication through a classical channel alone . In a similar vein, the Nielsen et al. experiment fails to meet these criteria because there is no quantum entanglement shared between Alice and Bob at any stage of the process . But the story cannot stop there. Beside striving for simply better input-output fidelities or higher efficiencies, there are still further relevant experimental hurdles to be drawn from Ref. : 1. The number of bits broadcast over the classical channel should be “minuscule” in comparison to the information required to specify the “unknown” states in the class from which the demonstration actually draws. 2. The teleportation quality should be good enough to transfer quantum entanglement itself instead of a small subset of “unknown” quantum states. 3. The sender and receiver should not have to know each other’s locations to carry the process through to completion. And there are likely still more criteria that would seem reasonable to one or another reader of the original protocol (depending perhaps upon the particular application called upon). The point is, these two lists together make it clear that the experimental demonstration of quantum teleportation cannot be a cut and dried affair. On the road toward ideal teleportation, there are significant milestones to be met and passed. Important steps have been taken, but the end of the road is still far from sight. The work of the theorist in this effort is, among other things, to help turn the heuristic criteria above into pristine theoretical protocols within the context of actual experiments. To this end, we focus on Criterion 2 in the context of the Furusawa et al. experiment where the quantum states of a set of continuous variables are teleported (as proposed in Refs. ). The question is, by what means can one verify that Alice and Bob—assumed to be at fixed positions—actually use some quantum entanglement in their purported teleportation? How can it be known that they did not use the resource of a classical channel alone for the quantum state’s transport? What milestone must be met in order to see this? Answering these questions fulfills a result already advertised in Ref. and reported in the abstract of the present paper. Our line of attack is to elaborate on an idea first suggested in Ref. . A cheating Alice and Bob who attempt to make do with a classical channel alone, must gather information about the unknown quantum state if they are to have any hope of hiding their cheat. But then the limitations of quantum mechanics strike in a useful way. As long as the allowed set of inputs contains some nonorthogonal states, there is no measurement procedure that can reveal the state’s identity with complete reliability. Any attempt to reconstruct the unknown quantum state will be flawed necessarily: information gathering about the identity of a state in a nonorthogonal set disturbs the state in the process . The issue is only to quantify how much disturbance must take place and to implement the actual comparison between input and output in an objective, operationally significant way. If the experimental match (or “fidelity”) between the input and output exceeds the bound set by a classical channel, then some entanglement had to have been used in the teleportation process. The remainder of the paper is structured as follows. In the following section, we discuss the motivation behind choosing the given measure of fidelity that we do. We stress in particular the need for a break with traditional quantum optical measures of signal transmission, such as signal-to-noise ratio, etc., used in the area of quantum nondemolition (QND) research . In Section 3, we derive the optimal fidelity that can be achieved by a cheating Alice and Bob whose teleportation measurements are based on optical heterodyning as in the experiment of Furusawa et al. . This confirms that a fidelity of $`1/2`$ or greater is sufficient to assure the satisfaction of Criterion 2 in that experiment. We close in Section 4 with a few remarks about some open problems and future directions. ## 2 Why Fidelity? Ideal teleportation occurs when an unknown state $`|\psi `$ goes into Alice’s possession and the same state $`|\psi `$ emerges in Bob’s. What can this really mean? A quantum state is not an objective state of affairs existing completely independently of what one knows. Instead it captures the best information available about how a quantum system will react in this or that experimental situation .<sup>1</sup><sup>1</sup>1On this bit of foundational theory, it seems most experimentalists can agree. See in particular page S291 of Zeilinger Ref. where it is stated that: “The quantum state is exactly that representation of our knowledge of the complete situation which enables the maximal set of (probabilistic) predictions for any possible future observation. … If we accept that the quantum state is no more than a representation of the information we have, then the spontaneous change of the state upon observation, the so-called collapse or reduction of the wave packet, is just a very natural consequence of the fact that, upon observation, our information changes and therefore we have to change our representation of the information, that is, the quantum state. From that position, the so-called measurement problem is not a problem but a consequence of the more fundamental role information plays in quantum physics as compared to classical physics.” This forces one to think carefully about what it is that is transported in the quantum teleportation process. The only option is that the teleported $`|\psi `$ must always ultimately refer to someone lurking in the background—a third party we label Victor, the keeper of knowledge about the system’s preparation. The task of teleportation is to transfer what he can say about the system he placed in Alice’s possession onto a system in Bob’s possession: it is “information” in its purest form that is teleported, nothing more. The resources specified for carrying out this task are the previously shared entanglement between Alice and Bob and a classical channel with which they communicate. Alice performs a measurement of a specified character and communicates her result to Bob. Bob then performs a unitary operation on his system based upon that information. When Alice and Bob declare that the process is complete, Victor should know with assurance that whatever his description of the original system was—his $`|\psi `$—it now holds for the system in Bob’s possession. Knowing with assurance means that there really is a system that Victor will describe with $`|\psi `$, not that there was a system that he would have described with $`|\psi `$ just before Alice and Bob declared completion (i.e., as a retrodiction based upon their pronouncement) . In any real-world implementation of teleportation, a state $`|\psi _{\mathrm{in}}`$ enters Alice and Bob’s dominion and a different state (possibly a mixed-state density operator) $`\widehat{\rho }_{\mathrm{out}}`$ comes out. As before, one must always keep in mind that these states refer to what Victor can say about the given system (see footnote 2). The question that must be addressed is when $`|\psi _{\mathrm{in}}`$ and $`\widehat{\rho }_{\mathrm{out}}`$ are similar enough to each other that Criterion 2 must have been fulfilled. We choose to gauge the similarity between $`|\psi _{\mathrm{in}}`$ and $`\widehat{\rho }_{\mathrm{out}}`$ by the “fidelity” between the two states. This is defined in the following way<sup>2</sup><sup>2</sup>2In order to form this quantity, we must of course assume a canonical mapping or identification between the input and output Hilbert spaces. Any unitary offset between input and output should be considered a systematic error, and ultimately taken into account by readjusting the canonical mapping. See Ref. for a misunderstanding of this point. The authors there state, “… fidelity does not necessarily recognize the similarity of states which differ only by reversible transformations. … \[This suggests\] that additional measures are required … based specifically on the similarity of measurement results obtained from the input and output of the teleporter, rather than the inferred similarity of the input and output states.” As shown presently, the fidelity measure we propose does precisely that for all possible measurements, not just the few that have become the focus of present-day QND research.: $$F(|\psi _{\mathrm{in}},\widehat{\rho }_{\mathrm{out}})\psi _{\mathrm{in}}|\widehat{\rho }_{\mathrm{out}}|\psi _{\mathrm{in}}.$$ (1) This measure has the nice property that it equals 1 if and only if $`\widehat{\rho }_{\mathrm{out}}=|\psi _{\mathrm{in}}\psi _{\mathrm{in}}|`$. Moreover it equals 0 if and only if the input and output states can be distinguished with certainty by some quantum measurement. The thing that is really important about this particular measure of similarity is hinted at by these last two properties. It captures in a simple and convenient package the extent to which all possible measurement statistics produceable by the output state match the corresponding statistics produceable by the input state. To see what this means, take any observable (generally a positive operator-valued measure or POVM ) $`\{\widehat{E}_\alpha \}`$ with measurement outcomes $`\alpha `$. If that observable were performed on the input system, it would give a probability density for the outcomes $`\alpha `$ given by $$P_{\mathrm{in}}(\alpha )=\psi _{\mathrm{in}}|\widehat{E}_\alpha |\psi _{\mathrm{in}}.$$ (2) On the other hand, if the same observable were performed on the output system, it would give instead a probability density $$P_{\mathrm{out}}(\alpha )=\mathrm{tr}(\widehat{\rho }_{\mathrm{out}}\widehat{E}_\alpha ).$$ (3) A natural way to gauge the similarity of these two probability densities is by their overlap: $$\mathrm{overlap}=\sqrt{P_{\mathrm{in}}(\alpha )P_{\mathrm{out}}(\alpha )}𝑑\alpha .$$ (4) It turns out that regardless of which observable is being considered , $$\mathrm{overlap}^2\psi _{\mathrm{in}}|\widehat{\rho }_{\mathrm{out}}|\psi _{\mathrm{in}}.$$ (5) Moreover there exists an observable that gives precise equality in this expression . In this sense, the fidelity captures an operationally defined fact about all possible measurements on the states in question. Let us take a moment to stress the importance of a criterion such as this. It is not sufficient to attempt to quantify the similarity of the states with respect to a few observables. Quantum teleportation is a much more serious task than classical communication. Indeed it is a much more serious task than the simplest forms of quantum communication, as in quantum key distribution. In the former case, one is usually concerned with replicating the statistics of only one observable across a transmission line. In the latter case, one is concerned with reproducing the statistics of a small number of fixed noncommuting observables (the specific ones required of the protocol) for a small number of fixed quantum states (the specific ones required of the protocol). A full quantum state is so much more than the quantum measurements in these cases would reveal: it is a catalog for the outcome statistics of an infinite number of observables. Good quality teleportation must take that into account. A concrete example can be drawn from the traditional concerns of quantum nondemolition measurement (QND) research. There a typical problem is how well a communication channel replicates the statistics of one of two quadratures of a given electromagnetic field mode , and most often then only for assumed Gaussian statistics. Thinking that quantum teleportation is a simple generalization of the preservation of signal-to-noise ratio, burdened only in checking that both quadratures are transmitted faithfully, is to miss much of the point of teleportation. Specifying the statistics of two noncommuting observables only goes an infinitesimal way toward specifying the full quantum state when the Hilbert space is an infinite dimensional one . This situation is made acute by noticing that two state vectors can be almost completely orthogonal—and therefore almost as different as they can possibly be—while still giving rise to the same $`x`$ statistics and the same $`p`$ statistics. To see an easy example of this, consider the two state vectors $`|\psi _+`$ and $`|\psi _{}`$ whose representations in $`x`$-space are $$\psi _\pm (x)=\left(\frac{2a}{\pi }\right)^{1/4}\mathrm{exp}\left((a\pm ib)x^2\right),$$ (6) for $`a,b0`$. In $`k`$-space representation, these state vectors look like $$\stackrel{~}{\psi }_\pm (k)=\left(\frac{a}{2\pi }\right)^{1/4}\sqrt{\frac{a\pm ib}{a^2+b^2}}\mathrm{exp}\left(\frac{aib}{4(a^2+b^2)}k^2\right).$$ (7) Clearly neither $`x`$ measurements nor $`p`$ measurements can distinguish these two states. For, with respect to both representations, both wave functions differ only by a local phase function. However, if we look at the overlap between the two states we find: $$\psi _{}|\psi _+=\sqrt{\frac{a(a+ib)}{a^2+b^2}}.$$ (8) Taking $`b\mathrm{}`$, we can make these two states just as orthogonal as we please. Suppose now that $`|\psi _+`$ were Victor’s input into the teleportation process, and—by whatever means—$`|\psi _{}`$ turned out to be the output. By a criterion that only gauged the faithfulness of the transmissions of $`x`$ and $`p`$ , this would be perfect teleportation. But it certainly isn’t so! Thus the justification of the fidelity measure in Eq. (1) as a measure of teleportation quality should be abundantly clear. But this is only the first step in finding a way to test Criterion 2. For this, we must invent a quantity that incorporates information about the teleportation quality of many possible quantum states. The reason for this is evident: in general it is possible to achieve a nonzero fidelity between input and output even when a cheating Alice and Bob use no entanglement whatsoever in their purported teleportation. This can come about whenever Alice and Bob can make use of some prior knowledge about Victor’s actions. As an example, consider the case where Alice and Bob are privy to the fact that Victor wishes only to teleport states drawn from a given orthogonal set. At any shot, they know they will be given one of these states, just not which one. Then, clearly, they need use no entanglement to “transmit” the quantum states from one position to the other. A cheating Alice need only perform a measurement $`𝒪`$ whose eigenstates coincide with the orthogonal set and send the outcome she obtains to Bob. Bob can use that information to resynthesize the appropriate state at his end. No entanglement has been used, and yet with respect to these states perfect teleportation has occurred. This example helps define the issue much more sharply. The issue turns on having a general statement of what it means to say that Alice and Bob are given an unknown quantum state? In the most general setting it means that Alice and Bob know that Victor draws his states $`|\psi _{\mathrm{in}}`$ from a fixed set $`𝒮`$; they just know not which one he will draw at any shot. This lack of knowledge is taken into account by a probability ascription $`P(|\psi _{\mathrm{in}})`$. That is: > All useful criteria for the achievement of teleportation must be anchored in whatever $`𝒮`$ and $`P(|\psi _{\mathrm{in}})`$ are given. A criterion is senseless if the states to which it is to be applied are not mentioned explicitly. This makes it sensible to consider the average fidelity between input and output $$F_{\mathrm{av}}=_𝒮P(|\psi _{\mathrm{in}})F(|\psi _{\mathrm{in}},\widehat{\rho }_{\mathrm{out}})d|\psi _{\mathrm{in}},$$ (9) as a benchmark capable of eliciting the degree to which Criterion 2 is satisfied. If $`𝒮`$ consists of orthogonal states, then no criterion whatsoever (short of watching Alice and Bob’s every move) will ever be able to draw a distinction between true teleportation and the sole use of the classical side channel. Things only become interesting when the set $`𝒮`$ consists of two or more nonorthogonal quantum states : for only then will $`F_{\mathrm{av}}=1`$ never be achievable by a cheating Alice and Bob. By making the set $`𝒮`$ more and more complicated, we can define ever more stringent tests connected to Criterion 2. For instance, consider the simplest nontrivial case: take $`𝒮=𝒮_0=\{|\psi _0,|\psi _1\}`$, a set of just two nonorthogonal states (with a real inner product $`x=\mathrm{cos}\theta `$). Suppose the two states occur with equal probability. Then it can be shown that the best thing for a cheating Alice and Bob to do is this. Alice measures an operator whose orthogonal eigenvectors symmetrically bestride $`|\psi _0`$ and $`|\psi _1`$. Using that information, Bob synthesizes one of two states $`|\stackrel{~}{\psi }_0`$ and $`|\stackrel{~}{\psi }_1`$ each lying in the same plane as the original two states, but each tweaked slightly toward the other by an angle $$\varphi =\frac{1}{2}\mathrm{arctan}\left[\left(\frac{1+\mathrm{sin}\theta }{1\mathrm{sin}\theta }+\mathrm{cos}2\theta \right)^1\mathrm{sin}2\theta \right].$$ (10) This (optimal) strategy gives a fidelity $$F_{\mathrm{av}}=\frac{1}{2}\left(1+\sqrt{1x^2+x^4}\right).$$ (11) Even in the worst case (when $`x=1/\sqrt{2}`$), this fidelity is always relatively high—it is always above 0.933 . This shows that choosing $`𝒮_0`$ to check for the fulfillment of Criterion 2 is a very weak test. For an example of the opposite extreme, consider the case where $`𝒮`$ consists of every normalized vector in a Hilbert space of dimension $`d`$ and assume that $`𝒮`$ is equipped with the uniform probability distribution (i.e., the unique distribution that is invariant with respect to all unitary operations). Then it turns out that the maximum value $`F_{\mathrm{av}}`$ can take is $$F_{\mathrm{av}}=\frac{2}{d+1}.$$ (12) For the case of a single qubit, i.e., $`d=2`$, Alice and Bob would only have to achieve a fidelity of $`2/3`$ before they could claim that they verifiably used some entanglement for their claimed teleportation. But, again, this is only if Victor can be sure that Alice and Bob know absolutely nothing about which state he inputs other than the dimension of the Hilbert space it lives in. This last example finally prepares us to build a useful criterion for the verification of continuous quantum-variable teleportation in the experiment of Furusawa et al. . For a completely unknown quantum state in that experiment would correspond to taking the limit $`d\mathrm{}`$ above. If Victor can be sure that Alice and Bob know nothing whatsoever about the quantum states he intends to teleport, then on average the best fidelity they can achieve in cheating is strictly zero! In this case, seeing any nonzero fidelity whatsoever in the laboratory would signify that unconditional quantum teleportation had been achieved. But making such a drastic assumption for the confirmation set $`𝒮`$ would be going too far. This would be the case if for no other reason because any present-day Victor lacks the experimental ability to make good his threat. Any Alice and Bob that had wanted to cheat in the Furusawa et al. experiment would know that the Victor using their services is technically restricted by the fact that only a handful of manifestly quantum or nonclassical states have ever been generated in quantum optics laboratories . By far the most realistic and readily available laboratory source available to Victor is one that creates optical coherent states of a single field mode for his test of teleportation. Therefore in all that follows we will explicitly make the assumption that $`𝒮`$ contains the coherent states $`|\alpha `$ with a Gaussian distribution centered over the vacuum state describing the probability density on that set. As we shall see presently, it turns out that in the limit that the variance of the Gaussian distribution approaches infinity—i.e., the distribution of states becomes ever more uniform—the upper bound for the average fidelity achievable by a cheating Alice and Bob using optical heterodyne measurements is $$F_{\mathrm{av}}=\frac{1}{2}.$$ (13) Any average fidelity that exceeds this bound must have come about through the use of some entanglement. ## 3 Optimal Heterodyne Cheating We now verify Eq. (13) within the context of the Furusawa et al. experiment. There, the object is to teleport an arbitrary coherent state of a finite bandwidth electromagnetic field. (The extension of the single mode theory of Ref. to the multimode case is given in Ref. .) We focus for simplicity on the single mode case. The quantum resource used for the process is one that entangles the number states $`|n`$ of two modes of the field. Explicitly the entangled state is given by $$|E_{\mathrm{AB}}=\frac{1}{\mathrm{cosh}r}\underset{n=0}{\overset{\mathrm{}}{}}(\mathrm{tanh}r)^n|n_\mathrm{A}|n_\mathrm{B},$$ (14) where $`r`$ measures the amount of squeezing required to produce the entangled state. In order to verify that entanglement was actually used in the experiment, as discussed in the previous section, we shall assume that the test set $`𝒮`$ is the full set of coherent states $`|\beta `$, $$|\beta =\mathrm{exp}(|\beta |^2/2)\underset{0}{\overset{\mathrm{}}{}}\frac{\beta ^n}{\sqrt{n!}}|n,$$ (15) where the complex parameter $`\beta `$ is distributed according to a Gaussian distribution, $$p(\beta )=\frac{\lambda }{\pi }e^{\lambda |\beta |^2}.$$ (16) Ultimately, of course, we would like to consider the case where Alice and Bob are completely ignorant of which coherent state is drawn. This is described by taking the limit $`\lambda 0`$ in what follows. It is well known that the measurement optimal for estimating the unknown parameter $`\beta `$ when it is distributed according to a Gaussian distribution is the POVM $`\{\widehat{E}_\alpha \}`$ constructed from the coherent state projectors according to $$\widehat{E}_\alpha =\frac{1}{\pi }|\alpha \alpha |,$$ (17) first suggested by Arthurs and Kelly . This measurement is equivalent to optical heterodyning . These points make this measurement immediately attractive for the present considerations. On the one hand, maximizing the average fidelity (as is being considered here) is almost identical in spirit to the state-estimation problem of Ref. . On the other, in the Furusawa et al. experiment a cheating Alice who uses no entanglement actually performs precisely this measurement. We therefore consider an Alice who performs the measurement $`\{\widehat{E}_\alpha \}`$ and forwards on the outcome—i.e., the complex number $`\alpha `$—to Bob.<sup>3</sup><sup>3</sup>3We caution however that the present considerations do not prove the optimality of heterodyne measurement for an arbitrarily adversarial Alice and Bob—they simply make it fairly plausible. Complete optimization requires the consideration of all POVMs that Alice can conceivably perform along with explicit consideration of the structure of the fidelity function considered here, not simply the variance of an estimator as in the state-estimation problem. More on this issue can be found in Ref. . The only thing Bob can do with this information is generate a new quantum state according to some rule, $`\alpha |f_\alpha `$. Let us make no a priori restrictions on the states $`|f_\alpha `$. The task is first to find the maximum average fidelity $`F_{\mathrm{max}}(\lambda )`$ Bob can achieve for a given $`\lambda `$. For a given strategy $`\alpha |f_\alpha `$, the achievable average fidelity is $`F(\lambda )`$ $`=`$ $`{\displaystyle p(\beta )(p(\alpha |\beta )|f_\alpha |\beta |^2d^2\alpha )d^2\beta }`$ (18) $`=`$ $`{\displaystyle p(\beta )(\frac{1}{\pi }|\alpha |\beta |^2|f_\alpha |\beta |^2d^2\alpha )d^2\beta }`$ (19) $`=`$ $`{\displaystyle \frac{\lambda }{\pi ^2}}{\displaystyle e^{\lambda |\beta |^2}e^{|\alpha \beta |^2}|f_\alpha |\beta |^2d^2\beta d^2\alpha }`$ (20) $`=`$ $`{\displaystyle \frac{\lambda }{\pi ^2}}{\displaystyle e^{|\alpha |^2}f_\alpha |\left(\mathrm{exp}\left((1+\lambda )|\beta |^2+2\mathrm{R}\mathrm{e}\alpha ^{}\beta \right)|\beta \beta |d^2\beta \right)|f_\alpha d^2\alpha }.`$ (21) Notice that the operator enclosed within the brackets in Eq. (21), i.e., $$\widehat{𝒪}_\alpha =\mathrm{exp}\left((1+\lambda )|\beta |^2+2\mathrm{R}\mathrm{e}\alpha ^{}\beta \right)|\beta \beta |d^2\beta ,$$ (22) is a positive semi-definite Hermitian operator that depends only on the real parameter $`\lambda `$ and the complex parameter $`\alpha `$. It follows that $$f_\alpha |\widehat{𝒪}_\alpha |f_\alpha \mu _1(\widehat{𝒪}_\alpha ),$$ (23) where $`\mu _1(\widehat{X})`$ denotes the largest eigenvalue of the operator $`\widehat{X}`$. With this, Bob’s best strategy is apparent. For each $`\alpha `$, he simply adjusts the state $`|f_\alpha `$ to be the eigenvector of $`\widehat{𝒪}_\alpha `$ with the largest eigenvalue. Then equality is achieved in Eq. (23), and it is just a question of being able to perform the integral in Eq. (21). The first step in carrying this out is to find the eigenvector and eigenvalue achieving equality in Eq. (23). This is most easily evaluated by unitarily transforming $`\widehat{𝒪}_\alpha `$ into something that is diagonal in the number basis, picking off the largest eigenvalue, and transforming back to get the optimal $`|f_\alpha `$. (Recall that eigenvalues are invariant under unitary transformations.) The upshot of this procedure is best illustrated by working backward toward the answer. Consider the positive operator $$\widehat{P}=e^{(1+\lambda )|\beta |^2}|\beta \beta |d^2\beta .$$ (24) Expanding this operator in the number basis, we find $$\widehat{P}=\pi \underset{n=0}{\overset{\mathrm{}}{}}(2+\lambda )^{(n+1)}|nn|.$$ (25) So clearly, $$\mu _1(\widehat{P})=\frac{\pi }{2+\lambda }.$$ (26) Now consider the displaced operator $$\widehat{Q}_\alpha =\widehat{D}\left(\frac{\alpha }{1+\lambda }\right)\widehat{P}\widehat{D}^{}\left(\frac{\alpha }{1+\lambda }\right),$$ (27) where $`\widehat{D}(\nu )`$ is the standard displacement operator . Working this out in the coherent-state basis, one finds $`\widehat{Q}_\alpha `$ $`=`$ $`{\displaystyle e^{(1+\lambda )|\beta |^2}|\beta +\frac{\alpha }{1+\lambda }\beta +\frac{\alpha }{1+\lambda }|d^2\beta }`$ (28) $`=`$ $`{\displaystyle \mathrm{exp}\left((1+\lambda )\left|\gamma \frac{\alpha }{1+\lambda }\right|^2\right)|\gamma \gamma |d^2\gamma }`$ (29) $`=`$ $`\mathrm{exp}\left({\displaystyle \frac{|\alpha |^2}{1+\lambda }}\right){\displaystyle \mathrm{exp}\left((1+\lambda )|\gamma |^2+2\mathrm{R}\mathrm{e}\alpha ^{}\gamma \right)|\gamma \gamma |d^2\gamma }`$ (30) $`=`$ $`\mathrm{exp}\left({\displaystyle \frac{|\alpha |^2}{1+\lambda }}\right)\widehat{𝒪}_\alpha .`$ (31) Using this in the expression for $`F(\lambda )`$ we find, $`F(\lambda )`$ $`=`$ $`{\displaystyle \frac{\lambda }{\pi ^2}}{\displaystyle \mathrm{exp}\left(\left(1\frac{1}{1+\lambda }\right)|\alpha |^2\right)f_\alpha |\left(\widehat{D}\left(\frac{\alpha }{1+\lambda }\right)\widehat{P}\widehat{D}^{}\left(\frac{\alpha }{1+\lambda }\right)\right)|f_\alpha d^2\alpha }`$ (32) $``$ $`{\displaystyle \frac{1}{\pi }}{\displaystyle \frac{\lambda }{2+\lambda }}{\displaystyle \mathrm{exp}\left(\frac{\lambda }{1+\lambda }|\alpha |^2\right)d^2\alpha }`$ (33) $`=`$ $`{\displaystyle \frac{1+\lambda }{2+\lambda }}.`$ (34) Equality is achieved in this chain by taking $$|f_\alpha =D\left(\frac{\alpha }{1+\lambda }\right)|0=|\frac{\alpha }{1+\lambda }.$$ (35) Therefore the maximum average fidelity is given by $$F_{\mathrm{max}}(\lambda )=\frac{1+\lambda }{2+\lambda }.$$ (36) In the limit that $`\lambda 0`$, i.e., when Victor draws his states from a uniform distribution, we have $$F_{\mathrm{max}}(\lambda )\frac{1}{2},$$ (37) as advertised in Ref. . It should be noted that nothing in this argument depended upon the mean of the Gaussian distribution being $`\beta =0`$. Bob would need to minimally modify his strategy to take into account Gaussians with a non-vacuum state mean, but the optimal fidelity would remain the same. ## 4 Conclusion Where do we stand? What remains? Clearly one would like to develop a toolbox of ever more stringent and significant tests of quantum teleportation—ones devoted not only to Criterion 2, but to all the others mentioned in the Introduction as well. Significant among these are delineations of the fidelities that must be achieved to ensure the honest teleportation of nonclassical states of light, such as squeezed states. Some work in this direction appears in Ref. , but one would like to find something more in line with the framework presented here. Luckily, a more general setting for this problem can be formulated as it will ultimately be necessary to explore any number of natural verification sets $`𝒮`$ and their resilience with respect to arbitrarily adversarial Alice and Bob teams. ## 5 Acknowledgments We thank Jason McKeever for suggesting the nice example in Eq. (6) and thank J. R. Buck and C. M. Caves for useful discussions. This work was supported by the QUIC Institute funded by DARPA via the ARO, by the ONR, and by the NSF. SLB was funded in part by EPSRC grant GR/L91344. CAF acknowledges support of the Lee A. DuBridge Fellowship.
no-problem/9910/gr-qc9910095.html
ar5iv
text
# Proposed Experiments to Test the Unified Description of Gravitation and Electromagnetism through a Symmetric Metric ## Abstract If gravitation and electromagnetism are both described in terms of a symmetric metric tensor, then the deflection of an electron beam by a charged sphere should be different from its deflection according to the Reissner-Nordstrøm solution of General Relativity. If such a unified description is true, the equivalence principle for the electric field implies that the photon has a nonzero effective electric charge-to mass ratio and should be redshifted as it moves in an electric field and be deflected in a magnetic field. Experiments to test these predictions are proposed. Of all the unification schemes for gravitation and electromagnetism suggested so far, the simplest is the one through a symmetric metric tensor $`g_{\mu \nu }`$ . In this scheme gravitation and electromagnetism curve the spacetime in exactly the same way, as a result of which the interpretation of the metric tensor as the gravitational field proper must be given up. If this scheme of unified description does indeed correspond to reality, it must possess testable deviations from Einstein’s general relativity (hereafter GR) as well as new physical phenomena. The purpose of this letter is, therefore, to propose experiments through which this new scheme can be tested. To this end, we shall discuss three topics and their experimental implications. I. The Line Element for a Spherically Symmetric Distribution of Matter and Charge: In Einstein’s GR theory, the gravitational field around a spherical distribution of mass $`M`$ and charge $`Q`$ located at $`r=0`$ is described by the field equation $$R^{\mu \nu }=\frac{8\pi G}{c^4}T_{EM}^{\mu \nu },$$ (1) where $`T_{EM}^{\mu \nu }`$ is the usual traceless tensor of the electromagnetic field of the charge $`Q`$. The spherically symmetric solution of eq.(1) for the line element (the invariant interval) is known as the Reissner-Nordstrøm solution . It is given by <sup>1</sup><sup>1</sup>1We use the conventions of Misner, Thorne, and Wheeler for metrics, curvatures, etc. $`ds^2=\left(12{\displaystyle \frac{GM}{c^2r}}+{\displaystyle \frac{Gk_eQ^2}{c^4r^2}}\right)c^2dt^2+\left(12{\displaystyle \frac{GM}{c^2r}}+{\displaystyle \frac{Gk_eQ^2}{c^4r^2}}\right)^1dr^2+`$ $`r^2d\theta ^2+r^2sin^2\theta d\varphi ^2,`$ (2) where $`G`$ and $`k_e`$ are the gravitational and electric constants, c is the speed of light. In our scheme, the equation describing the dynamical effects of the gravitational as well as the electric field around such a mass and electric charge distribution on a test particle of mass $`m`$ and electric charge $`q`$ is $$R^{\mu \nu }=0.$$ (3) The solution of eq.(3) is similar to the Schwarzschild solution and is easily found to be $`ds^2=\left(12{\displaystyle \frac{GM}{c^2r}}+2{\displaystyle \frac{q}{m}}{\displaystyle \frac{k_eQ}{c^2r}}\right)c^2dt^2+\left(12{\displaystyle \frac{GM}{c^2r}}+2{\displaystyle \frac{q}{m}}{\displaystyle \frac{k_eQ}{c^2r}}\right)^1dr^2`$ $`+r^2d\theta ^2+r^2sin^2\theta d\varphi ^2.`$ (4) Comparison of the third terms in $`g_{00}`$ of equations (2) and (4) reveal the philosophy of our unification. In eq.(1), the electric field of the charge distribution contributes to the gravitational field of the matter. Whereas in our scheme, there is an equivalence principle for the electromagnetic field as well , and the right-hand side of eq.(3) is zero, as opposed to eq.(1) of GR; the electric field does not contribute to the gravitational field, it asserts itself separately. To test which of the third terms in $`g_{00}`$ of equations (2) and (4) reflects the physical reality, consider a positively charged metallic sphere of radius $`R`$, mass $`M`$, and electric charge $`Q`$. The electric potential on the surface of the sphere is $$V(R)=\frac{k_eQ}{R},$$ (5) in terms of which the $`g_{00}`$ are $`g_{00}^{RN}=\left(12{\displaystyle \frac{m_G}{r}}+{\displaystyle \frac{GR^2}{k_ec^4r^2}}V(R)^2\right);`$ $`g_{00}^{MGR}=\left(12{\displaystyle \frac{m_G}{r}}+2{\displaystyle \frac{q}{m}}{\displaystyle \frac{R}{c^2r}}V(R)\right),`$ (6) where the first one corresponds to the Reissner-Nordstrøm (RN) solution and the second one to ours, which we call “modified general relativity” (hereafter MGR), and $`m_G=GM/c^2`$. Now, for a sphere of $`M=1kg`$, $`R=5cm`$, and an electric potential of $`10^3V`$ on the surface of the sphere, we have, for an electron just grazing the sphere <sup>2</sup><sup>2</sup>2Note that in the Reissner-Nordstrøm case, the contribution of the electric charge of the sphere to its gravitational field turns out to be much smaller than the mass term $`2GM/c^2r`$ for reasonable values of $`r`$. $`g_{00}^{RN}=\left(11.48\times 10^{26}+9.19\times 10^{49}\right)1;`$ $`g_{00}^{MGR}=\left(11.48\times 10^{26}3.91\times 10^3\right)0.996.`$ (7) Thus, the space around such a charged sphere is extremely closed to being flat in the Reissner-Nordstrøm case and is approximated perfectly by the metric of special relativity, the Minkowski metric. In our case, however, there is a great deal of deviation from flatness that can assert itself in the trajectory of an electron moving in the viscinity of the sphere. The trajectory of an electron ($`q=e`$) moving in the gravitational and electric fields, however weak they are, of such a sphere is described by $$\frac{d^2x^\mu }{ds^2}+\mathrm{\Gamma }_{\alpha \beta }^\mu \frac{dx^\alpha }{ds}\frac{dx^\beta }{ds}=\frac{q}{mc^2}F_\alpha ^\mu \frac{dx^\alpha }{ds},$$ (8) in the Reissner-Nordstrøm case with $`\mathrm{\Gamma }_{\alpha \beta }^\mu `$ calculated from eq.(2), and by $$\frac{d^2x^\mu }{ds^2}+\mathrm{\Gamma }_{\alpha \beta }^\mu \frac{dx^\alpha }{ds}\frac{dx^\beta }{ds}=0,$$ (9) in our scheme with $`\mathrm{\Gamma }_{\alpha \beta }^\mu `$ calculted from eq(4). To simplify the notation, let us, as usual, write the line element in the form $$ds^2=e^\eta c^2dt^2+e^\eta dr^2+r^2d\theta ^2+r^2sin^2\theta d\varphi ^2.$$ (10) The nonzero components of $$\mathrm{\Gamma }_{\alpha \beta }^\mu =\frac{1}{2}g^{\mu \nu }\left(g_{\nu \alpha ,\beta }+g_{\nu \beta ,\alpha }g_{\alpha \beta ,\nu }\right),$$ (11) that we need in our calculation are <sup>3</sup><sup>3</sup>3The other nonzero components of $`\mathrm{\Gamma }_{\alpha \beta }^\mu `$ that are not required in our calculation have not been quoted here. $`\mathrm{\Gamma }_{01}^0`$ $`=`$ $`\mathrm{\Gamma }_{10}^0={\displaystyle \frac{1}{2}}{\displaystyle \frac{d\eta }{dr}},\mathrm{\Gamma }_{13}^3=\mathrm{\Gamma }_{31}^3={\displaystyle \frac{1}{r}},`$ (12) Using $$A_\mu =(\mathrm{\Phi }_E,\stackrel{}{A})=(k_e\frac{Q}{r},0),$$ (13) the nonzero components of the electromagnetic field strength tensor $$F_{\mu \nu }=\frac{A_\nu }{x^\mu }\frac{A_\mu }{x^\nu },$$ (14) are $$F_{01}=F_{10}=k_e\frac{Q}{r^2}.$$ (15) Confining the motion of the electron in the $`\theta =\pi /2`$ plane not only simplifies the calculation a lot but also the experiment to be described later. We, then, obtain the following euations from eq.(8) for the coordinates $`x^0=ct`$ and $`x^3=\varphi `$ $$\frac{d^2t}{ds^2}+\frac{d\eta }{dr}\frac{dr}{ds}\frac{dt}{ds}=\frac{q}{mc^3}e^\eta \frac{k_eQ}{r^2}\frac{dr}{ds},$$ (16) $$\frac{d^2\varphi }{ds^2}+\frac{2}{r}\frac{dr}{ds}\frac{d\varphi }{ds}=0,$$ (17) where we have put $`d\theta /ds=0`$. A further simplification is achieved by trading the equation for the coordinate $`x^1=r`$ with the one that follows from the condition of timelike geodesics $$g_{\mu \nu }\frac{dx^\mu }{ds}\frac{dx^\nu }{ds}=1,$$ (18) which gives $$e^\eta \left(\frac{dr}{ds}\right)^2+r^2\left(\frac{d\varphi }{ds}\right)^2e^\eta c^2\left(\frac{dt}{ds}\right)^2+1=0.$$ (19) Equations (16) and (17) can be integrated to yield, respectively $$\frac{dt}{ds}=\frac{e^\eta }{c}\left(\frac{qk_eQ}{mc^2}\frac{1}{r}+a\right),$$ (20) $$r^2\frac{d\varphi }{ds}=h,$$ (21) where $`a`$ and $`h`$ are integration constants. Noting that $`dr/ds=(dr/d\varphi )(d\varphi /ds)`$ and inserting equations (20) and (21) in eq.(19) and then dividing by $`e^\eta `$ we get $$\left(\frac{du}{d\varphi }\right)^2+u^2e^\eta \left(\frac{qk_eQ}{mc^2}u+a\right)^2\frac{1}{h^2}+\frac{e^\eta }{h^2}=0,$$ (22) where we have set $`u=1/r`$. For the Reissner-Nordstrøm solution we now put $`e^\eta 1`$. Differentiating eq.(22) with respect to $`\varphi `$ and removing the factor $`du/d\varphi `$ we get $$\frac{d^2u}{d\varphi ^2}+u=\frac{m_E}{h^2}+\frac{m_E^2}{h^2}u$$ (23) where we have set the constant $`a=1`$ so that when $`h=l/mc`$, with $`l=mr^2\dot{\varphi }`$ being the ordinary angular momentum, the first term on the right-hand side of eq.(23) agrees with the Newtonian (hereafter N) expression $$\frac{d^2u}{d\varphi ^2}+u=\frac{m^2c^2}{l^2}m_E.$$ (24) Here $$m_E=\frac{q}{m}\frac{k_eQ}{c^2}=\frac{q}{mc^2}RV(R)$$ (25) has the dimension of length and corresponds to $`m_G=GM/c^2`$ in the Schwarzschild solution. Eq.(23) describes the trajectory of a charged test particle when $`g_{11}g_{00}1`$ in the Reissner-Nordstrøm solution. Hence, it also describes exactly the trajectory of a test charge in an electric field according to special relativity. The second term on the right-hand side of eq.(23) is a special relativistic correction to the Newtonian result. As for eq.(9), we get $$\frac{d^2t}{ds^2}+\frac{d\eta }{ds}\frac{dt}{ds}=0$$ (26) instead of eq.(16), and $$\frac{dt}{ds}=\frac{e^\eta }{c}$$ (27) instead of eq.(20) with $`a=1`$. Equations (17) and (21) do not change. Proceeding as before, we find $$\frac{d^2u}{d\varphi ^2}+u=\frac{m_E}{h^2}+3m_Eu^2.$$ (28) Recall that terms involving $`m_G`$ on the right-hand sides of equations (23) and (28) have been dropped because $`m_G<<m_E`$ for the metallic sphere we are considering. It should be noted that when $`m_E`$ is replaced with $`m_G`$ in eq.(28), the equation of a neutral test particle of mass $`m`$ moving in the Schwarzschild field of a spherical mass $`M`$ is obtained. Since $`mch`$ is the conserved angular momentum of the test charge in its rest frame, we need to express $`h`$ in terms of $`l`$, the ordinary angular momentum of the test charge in the laboratory frame (with respect to the coordinate time $`t`$). In the Reissner-Nordstrøm case,equations (20) and (21), with $`e^\eta =1`$, yield $$h=\frac{l}{mc}\left(1+m_Eu\right),$$ (29) and in our scheme equations (21) and (27), with $`e^\eta =\left(12m_Eu\right)^1`$, yield $$h=\frac{l}{mc}\left(12m_Eu\right)^1.$$ (30) Equations (23) and (28) then reduce to $$\frac{d^2u}{d\varphi ^2}+u=\frac{m^2c^2}{l^2}\frac{m_E}{\left(1+m_Eu\right)},$$ (31) which is the orbit equation for the Reissner-Nordstrøm solution, and $$\frac{d^2u}{d\varphi ^2}+u=\frac{m^2c^2}{l^2}m_E\left(12m_Eu\right)^2+3m_Eu^2,$$ (32) which is the orbit equation in our scheme. We now propose the following experiment to distinguish between the two equations; (31) and (32): Consider a vacuum chamber in the shape of a rectangular metallic box. Let a metallic ball of radius $`R`$ positively charged to a potential of $`V(R)`$ be hanged freely from an insulating thread. Let an electron gun be located at angle $`\alpha `$ on the equatorial plane of the ball at a distance $`r_i`$ from the ball’s center. The point of emergence of the electrons may be taken to be on the negative $`y`$ axis and thus has $`\varphi =3\pi /2`$. Put a calibrated phosphorous screen on the positive y axis at $`\varphi =5\pi /2`$. Make a large enough glass window on the side of the box facing the screen (or monitor the position of the electron beam on the screen electronically) to observe where the electron beam hits on the screen. Equations (31) and (32) can be solved numerically for $`u`$, and hence for $`r`$. The two initial conditions required are $`u(\varphi =3\pi /2)=r_i^1`$ and $`du/d\varphi (\varphi =3\pi /2)=\sqrt{1sin^2\alpha }/(r_isin\alpha )`$, where as above $`\alpha `$ is the angle the initial velocity $`v_i`$ of the electrons makes with the positive y axis. In obtaining the second initial condition we have made use of $`dr/dt=\dot{r}=(dr/d\varphi )(d\varphi /dt)=r^{}\dot{\varphi }`$, $`v^2=\dot{r}^2+r^2\dot{\varphi }^2`$, and $`l=mr^2\dot{\varphi }=mv_ib`$, where $`b=r_isin\alpha `$ is the impact parameter of the electrons. The solutions of the equations (31) and (32) can thus be found numerically at any value of the angle $`\varphi `$, and especially on the positive $`y`$ axis. We have tabulated some examplary results in Tables 1 and 2 <sup>4</sup><sup>4</sup>4In our calculations we have used the relativistic expression $`eV_{AC}=m_ec^2/\sqrt{1v_i^2/c^2}m_ec^2`$ to calculate $`v_i`$, the initial velocity of the electrons. For an anode-cathode voltage of $`V_{AC}=1000V`$ for the electron gun, this gives $`c/v_i=16.0077`$, whereas the nonrelativistic expression gives $`c/v_i=15.9843`$. The positions in the Tables are very sensitive to variations in $`c/v_i`$.. It is seen that in all cases the prediction of our scheme for the position of the electron beam on the screen is distinctly different from the Newtonian and Reissner-Nordström (RN) predictions. One may be curious as to why the dispersion (see the last columns in Tables 1 and 2), $`r_Nr_{MGR}`$, between the Newtonian and Modified General Relativistic trajectories decreases as the potential $`V(R)`$ of the sphere increases. For weak potentials the curvature of spacetime is small and the angle between the two trajectories is large, as a result of which the two trajectories disperse more from each other at large distances from the sphere <sup>5</sup><sup>5</sup>5The same phenomenon occurs in gravity between the Newtonian and general relativistic trajectories of a neutral test particle moving in the gravitational (Schwartzschild) field of a spherical mass. Replace $`m_E`$ with $`m_G`$ in equations (24) and (28) to get the gravitational equations.. Therefore, by measuring the position of the electron beam on the screen the correct theory can be distinguished. In Figures 1-4, the trajectory of the electrons is drawn according to the three theories, where again the differences in the trajectories are seen with certainty. For an anticipated difference of about 3-5 cm between $`r_N`$ and $`r_{MGR}`$, a rectangular metallic box with dimensions $`130cm\times 30cm\times 30cm`$ with a circular lid near the top of one end, and a glass window on the side facing the screen may be built very easily <sup>6</sup><sup>6</sup>6If evacuating the box is not a problem, a longer box can be built to obtain larger $`r_Nr_{MGR}`$ (see Table 1). Figures 3 and 4, on the other hand, suggest that a much smaller box could be used for very large V(R) and anode-cathode voltage for the electron gun. Mathematically this is true. But the distances and angles must then be determined with perfect precision.. A rotary-diffusion pump system can easily obtain the desired vacuum required for the electron gun to work. Care must be taken to set the angles and the distances as precisely as possible because the solutions are very sensitive to variations in them. One may wonder, if in scattering experiments of the Rutherford type a deviation in the cross-section should have been seen due to the electrical curvature of the spacetime. For the scattering of $`\alpha `$ particles off gold nuclei, the correction term $`2(q/m)_\alpha k_eQ_{Gold}/(c^2r)=1.2\times 10^{16}/r`$ to $`g_{00}^{MGR}`$ turns out to be between $`10^3`$ and $`10^{16}`$ for $`10^{13}mr1m`$, where $`r`$ is the position of the alpha-particle from the target nucleus. So, within the precision of these experiments, no deviation from the cross-section can be seen. II. The Electrical Redshift of Light: If true, one immediate and dramatic consequence of the gravito-electromagnetic unified description in our scheme is that light should undergo a redshift as it travels against a uniform electric field. The existence of the electrical redshift can be inferred from the eqivalence principle for the electric field . Consider a cabin and two clocks seperated by a horizontal distance $`d`$ in it, all with the same $`q/m`$ ratio. For definiteness, assume the charges are positive. Let the cabin be accelerating to the left at the rate $`a=(q/m)E`$ to simulate an electric field $`E`$ directed to the right. An inertial observer describes the following chain of events: The right and left-hand clocks are both accelerating to the left with acceleration $`a`$. The right-hand clock is sending photons to the left-hand clock at the rate $`\nu _R`$ photons per second. It takes time $`t=d/c`$ for a photon to reach the left-hand clock, during which time the velocity of the left-hand clock increases by $`\mathrm{\Delta }v=(q/m)Ed/c`$. Therefore the rate $`\nu _L`$ that the photons are detected by the left-hand clock is decreased by a Doppler redshift $$\nu _L=\nu _R\left(1\frac{\mathrm{\Delta }v}{c}\right)=\nu _R\left(1\frac{q}{m}\frac{Ed}{c^2}\right).$$ (33) This means that the frequency of a photon detected by the left-hand clock undergoes a Doppler shift exactly as in eq.(33). Therefore the fractional change in the frequency of the photons is $$\frac{\mathrm{\Delta }\nu }{\nu }=\frac{\nu _L\nu _R}{\nu _R}=\frac{q}{m}\frac{Ed}{c^2},$$ (34) where now $`\nu `$ refers to the photon frequency. Then according to the eqivalence principle, the same redshift must be observed as light travels to the left in a uniform static electric field $`E`$ directed to the right. Note, strange as it may sound though, that the above argument implies that the photon behaves in an electric field as if it has a nonzero “effective electric charge” and hence an electric charge-to-mass ratio $`(q/m)_\gamma `$ <sup>7</sup><sup>7</sup>7This is similar to the gravitational situation in which the photon has “effective” gravitational and inertial masses and $`(m_g/m_i)_\gamma =1`$. In the electrical case, however, we do not know the value of $`(q/m_i)_\gamma `$. It must be determined from the experiment. <sup>8</sup><sup>8</sup>8Having found out that photons have a nonzero electric-charge- to mass ratio, we point out that the cabin and the clocks then must have this very same ratio so that the equivalence principle for the electric field is applicable. However, one should not conclude from this that, in reality the atoms (clocks) emitting and absorbing the photons must have the same electric charge-to-mass ratio as the photons. This can be seen by excluding the clocks from the cabin in the above thought experiment, or from the conservation of energy argument as applied to a particle moving in a uniform electric field and converting to a photon. This argument does not involve any “clocks”.. Note, however, that the above argument does not fix the sign of the effective charge of the photon. If the effective charge is negative, photons then would be redshifted as they moved in the same direction as the electric field. Hence, assuming a positive “effective electric charge” for the photon, the conservation of energy of a particle moving in an electric field and then converting to a photon, just like a particle falling in a gravitational field and then converting to a photon , yields the same redshift as in eq.(34). An experiment of the Pound-Rebka-Snider type can be done to verify the redshift and/or to put a limit on the $`(q/m)_\gamma `$ of the photon. A $`\mathrm{\Delta }\lambda /\lambda `$ of $`10^{15}`$ should be seen for a voltage difference of about $`100V`$ between the detection and emission points of the photons if $`(q/m)_\gamma =1C/kg`$ <sup>9</sup><sup>9</sup>9Note, as we have pointed out in , that in a different system of units the electric charge $`q`$ and the mass $`m`$ may be measured in the same unit. In such a system of units, $`(q/m)_\gamma =\pm 0/0\pm 1`$ seems more likely.. If, on the other hand, $`(q/m)_\gamma =0.1C/kg`$ or $`0.01C/kg`$, the required voltage difference would be about $`10^3V`$ or $`10^4V`$, respectively. Before we end this section, we would like to remark that a nonzero $`(q/m)_\gamma `$ implies that light would be deflected or scattered off as it passess a charged spherical object just as it is deflected by a massive spherical object like the sun. The magnitude of the deflection, however, is so small, even for $`(q/m)_\gamma =1C/kg`$, that a laboratory experiment does not seem possible. III. The Deflection of Light in a Magnetic Field: Another consequence of a nonzero elctric charge-to mass ratio for the photon is that light would be deflected in a magnetic field. Consider a uniform static magnetic field $`B`$ directed downward in the $`z`$ direction. Let a light beam be emitted from a point and travel in the $`xy`$ plane so that the velocity of the light beam is perpendicular to the magnetic field. The light beam should travel in a counterclockwise circle of radius $$R=\frac{1}{(q/m)_\gamma }\frac{c}{B},$$ (35) which follows from the equality of the centripetal and magnetic forces on a single photon. Let $`d`$ be a straight distance that a photon would have travelled had it been not deflected by the magnetic field. Then the deflection $`\mathrm{\Delta }`$, the distance from the end of the distance $`d`$ to the actual position of the photon on the circle, is $$\mathrm{\Delta }=\frac{1}{(q/m)_\gamma }\frac{c}{B}\left\{1cos\left[sin^1\left(\left(\frac{q}{m}\right)_\gamma \frac{Bd}{c}\right)\right]\right\}.$$ (36) Tabulated in Table 3 are the deflections for $`(q/m)_\gamma =1C/kg`$ a light beam would suffer as a function of $`B`$ and the straight distance $`d`$, the distance light is allowed to travel when $`B=0`$. We see that a deflection of a tenth of a millimeter is expected for $`B=1T`$ and $`d=250m`$. A uniform magnetic field extending to a desired length can easily be obtained by placing a number of electromagnets end-to-end. The positions of a light beam on a “film” in the absence and presence of the magnetic field can be measured. The distance between the two positions would be the anticipated deflection. In this letter, we have proposed three experiments to test whether or not gravitation and electromagnetism have a unified description through a symmetric metric tensor. The experiment of the deflection of an electron beam by a positively charged sphere, which is to show if a distribution of electric charge curves the spacetime independently of its gravitational field, is the simplest one and shoud be done first. The other two experiments depend strongly on the predicted electric-charge-to-mass ratio for the photon. A negative result in these experiments would still be useful to place an upper limit on $`(q/m)_\gamma `$. Acknowledgements We are grateful to Prof. Mahjoob O. Taha for invaluable discussions. We thank Mr Cüneyt Elibol, Dr Orhan Özhan , and Dr Arif Akhundov for various comments. References M. Özer, On the Equivalence Principle and a Unified Description of Gravitation and Electromagnetism, gr-qc/9910062. A. Einstein, Ann. d. Phys. 49(1916)769. H. Reissner, Ann. d. Phys. 50(1916)106. G. Nordstrøm, Proc. Kon. Ned. Akad. Wet. 20(1918)1238. C. W. Misner, K. S. Thorne, and J. A. Wheeler, Gravitation, W. H. Freeman and Company, 1973. K. Schwarzschild, Berl. Ber. (1916)189. See, for example, ref., p.187. R. V. Pound and G. A. Rebka, Phys. Rev. Lett., 4(1960)337. R. V. Pound and J. L. Snider, Phys. Rev., B 140(1965)788.
no-problem/9910/cond-mat9910074.html
ar5iv
text
# Quantum Spin Systems: From Spin Gaps to Pseudo Gaps ## 0.1 Introduction <sup>1</sup><sup>1</sup>footnotetext: and Moscow Inst. of Physics and Technology, 141700 Dolgoprudny, Russia<sup>2</sup><sup>2</sup>footnotetext: and LPTM, Univ. de Cergy, 2 Av. A. Chauvin, 95302 Cergy-Pontoise Cedex, France There is a general consensus that part of the unusual physics of doped two-dimensional spin systems, i.e. the observation of pseudo gaps and high temperature superconductivity, can be mapped onto one dimension. As the pseudo gaps are evident not only in transport and thermodynamic measurements but also in NMR spectroscopy they certainly involve spin degrees of freedom. It was predicted that the binding of mobile holes in spin ladders can lead either to a superconducting or a charge-ordered ground state. The observation of superconductivity in the spin ladder/chain compound $`\mathrm{Sr}_{14\mathrm{x}}\mathrm{Ca}_\mathrm{x}\mathrm{Cu}_{24}\mathrm{O}_{41}`$ and the discussion of a phase separation into 1D spin and charge stripes in high temperature superconductors (HTSC) and related compounds encouraged this assumption . However, since for $`\mathrm{Sr}_{14\mathrm{x}}\mathrm{Ca}_\mathrm{x}\mathrm{Cu}_{24}\mathrm{O}_{41}`$ there is some evidence of a crossover toward a two-dimensional system and a possible vanishing of the spin gap under pressure it is not clear whether two-leg or the recently studied three-leg ladders provide useful analogs to HTSC . Therefore, an investigation of the excitation spectrum of low dimensional spin systems, in particular in compounds with a spin gap is important and may shed some light on the similarities and differences between both classes of materials. ## 0.2 Structural Elements of Low Dimensional Spin Systems In the systems discussed here the low energy excitations are mainly due to the spin degrees of freedom. The magnetic properties may often be described by the Heisenberg exchange spin Hamiltonian. If, in addition, the exchange is restricted to low dimensions then chains, spin ladders, and further systems with a more complex exchange pattern are realized. Two building principles are used to reduce the superexchange of a 3d ion-oxygen configuration to less than three dimensions. These are on the one hand an enlarged distance or missing bridging oxygen between two 3d ion-sites or on the other hand a superexchange path with an angle close to 90. Due to the Kanamori-Goodenough rule (vanishing superexchange via perpendicular oxygen O2p-orbitals) a non collinear exchange path leads to a magnetic insulation of, e.g. neighboring CuO chains. In this way compounds representing chains, zigzag double chains or ladders with different numbers of legs are realized. Fig. 0.1 shows a comparison of several possible 3d ion-oxygen configurations. Compounds that incorporate these structural elements exhibit a number of unusual properties which are related to strong quantum fluctuations. ## 0.3 Excitation Spectrum and Phase Diagram The excitation spectrum of a one-dimensional spin system (spin chain) with nearest neighbor exchange coupling is characterized by a degeneracy of the singlet ground state with triplet excitations in the thermodynamic limit . Assuming negligible spin anisotropies the ground state is not magnetically ordered even for T=0 and there are gapless excitations. The spin-spin correlations are algebraically decaying. The elementary excitations in such a system are therefore described as massless asymptotically free pairs of domain wall-like solitons or s=1/2 spinons. A quantum phase transition from this gapless critical state into a gapped spin liquid state may be induced by a dimerization, i.e. an alternation of the coupling constants between nearest neighbors, or by a sufficient frustration due to competing next nearest neighbor antiferromagnetic exchange. This gapped state is characterized by extremely short ranged spin-spin correlations and may be described as an arrangement of weakly interacting spin dimers . A simple representative of the quantum disordered state is the two-leg spin ladder with a larger exchange coupling along the rungs than along the legs of the ladder . The singlet ground state is composed of spin dimers on the rungs. An excitation in the picture of strong dimerization corresponds to breaking one dimer leading to a singlet-triplet excitation $`\mathrm{\Delta }_{01}`$. Studies on three-, four- or five-leg ladders led to the conjecture that ladders with an even number of legs have a spin gap while odd-leg ladders are gapless . A family of compounds that may represent these systems are the Sr cuprates, e.g. the two-leg ladder compound $`\mathrm{SrCu}_2\mathrm{O}_3`$ and the system $`\mathrm{Sr}_{14\mathrm{x}}\mathrm{Ca}_\mathrm{x}\mathrm{Cu}_{24}\mathrm{O}_{41}`$ that is composed of a chain and a ladder subcell and moreover shows superconductivity under pressure . In the limit of an infinite number of coupled chains a two-dimensional Heisenberg system is obtained and the spin gap vanishes. This limit has also been used to study the two-dimensional high temperature superconductors. Within this framework, also weakly doped two- and three-leg ladder were theoretically investigated . ## 0.4 Magnetic Bound States in $`\mathrm{𝐂𝐮𝐆𝐞𝐎}_\mathrm{𝟑}`$ and $`\mathrm{𝐍𝐚𝐕}_\mathrm{𝟐}𝐎_\mathrm{𝟓}`$ A salient feature of low dimensional quantum spin systems with a gapped excitation spectrum is the existence of magnetic bound states, i.e. triplet excitations that are confined to bound singlet or triplet states . These states are characterized by a well-defined excitation with an energy reduced with respect to the energy of a two-particle continuum of ”free” triplet excitations. In the case of a spin chain the binding energy originates from frustration and/or interchain interaction. In general, these states may therefore be used to study the triplet-triplet interaction, the coupling parameters and the phase diagram of the system. Magnetic bound states of singlet character may be investigated using light scattering experiments. The light scattering process involved results from a spin-conserving exchange mechanism . For these investigations spin-Peierls compounds are very promising as they show a transition from a homogeneous to a dimerized phase for temperatures below the spin-Peierls temperature $`\mathrm{T}_{\mathrm{SP}}`$. Therefore, excitations of these systems may be characterized due to their behavior in dependence on temperature, i.e. as function of the dimerization of the spin system. Magnetic bound states have been identified in light scattering experiments on $`\mathrm{CuGeO}_3`$ as a single and on $`\mathrm{NaV}_2\mathrm{O}_5`$ as multiple singlet states . Raman spectra of $`\mathrm{CuGeO}_3`$ shown in Fig. 0.2 show for T$`<`$$`\mathrm{T}_{\mathrm{SP}}`$=14 K additional dimerization-induced modes which are zone-folded phonons with the exception of one mode at 30 cm<sup>-1</sup>. The Fano-lineshape of these modes at 104 cm<sup>-1</sup> and 224 cm<sup>-1</sup> is caused by spin-phonon coupling. The excitation at $`30\mathrm{cm}^1`$ is identified as a singlet bound state. Its energy $`\mathrm{\Delta }_{00}`$=30 cm<sup>-1</sup> $`\sqrt{3}\mathrm{\Delta }_{01}`$, with $`\mathrm{\Delta }_{01}`$=16.8 cm<sup>-1</sup> the singlet-triplet gap and the quasi-linear increase of its intensity with decreasing temperature support this interpretation . Corresponding experiments on the compound $`\mathrm{NaV}_2\mathrm{O}_5`$ with T<sub>SP</sub>=34 K given in Fig. 0.3, show more transition-induced modes. Using the criteria discussed above, three modes at 67, 107 and 134 cm<sup>-1</sup> are candidates for singlet bound states. In addition there is a decrease of the background scattering intensity for frequencies $`\mathrm{\Delta }\omega `$$`<`$120 cm<sup>-1</sup> which is indicative of 2$`\mathrm{\Delta }_{01}`$ in agreement with magnetic susceptibility data . This compound differs from the spin chain system $`\mathrm{CuGeO}_3`$ in the sense that it represents a quarter-filled spin ladder that only for T$`>`$ T<sub>SP</sub> may be mapped on a spin chain . Furthermore, there is strong evidence that the transition at T<sub>SP</sub> is not a spin-Peierls transition but an electronically driven dimerization connected with a charge ordering of the s=1/2 V<sup>4+</sup> and V<sup>5+</sup> on the rungs of the ladders . ## 0.5 The Doped Chain/Ladder System $`\mathrm{𝐒𝐫}_{\mathrm{𝟏𝟒}𝐱}\mathrm{𝐂𝐚}_𝐱\mathrm{𝐂𝐮}_{\mathrm{𝟐𝟒}}𝐎_{\mathrm{𝟒𝟏}}`$ In the compound $`\mathrm{Sr}_{14\mathrm{x}}\mathrm{Ca}_\mathrm{x}\mathrm{Cu}_{24}\mathrm{O}_{41}`$ , which incorporates both $`\mathrm{CuO}_2`$ chains and $`\mathrm{Cu}_2\mathrm{O}_3`$ ladders, a substitution of Sr by the isovalent Ca together with applied pressure leads first to a transfer of holes from the chains to the ladders followed by a delocalization of the holes . Superconductivity is observed for pressures around 3GPa with a maximum transition temperature of T<sub>c</sub>=12 K . Ca-substitution and applying pressure reduces the b and c axis parameters leading to strong changes of the electronic properties, e.g., a reduction of the anisotropy in the resistivity. In samples with x=11.5 the anisotropy of the resistivity $`\rho _a`$/$`\rho _c`$ at T=50 K decreases from 80 (P=0) to 10 (P=4.5 GPa), i.e. it shifts towards a more two-dimensional behavior . For x=0 a singlet-triplet gap in the chains, $`\mathrm{\Delta }_{01\mathrm{chain}}`$=140 K or 125 K, has been determined using magnetic susceptibility and NMR experiments , while a gap in the ladder of $`\mathrm{\Delta }_{01\mathrm{ladder}}`$=375 K in neutron scattering experiments or 550 K in NMR has been observed. For x$``$0 the gap in the chain system rapidly disappears. However, the effect on the gap in the ladder system is unclear. While in NMR experiments a strong decrease of the gap with substitution from $`\mathrm{\Delta }_{01\mathrm{ladder}}`$=550 K (x=0) to 270 K (x=11.5) has been observed , the corresponding neutron experiments show no change at all . In optical conductivity measurements inspired by similar results in HTSC the opening of a ”pseudo gap” is claimed . Finally, with applied pressure NMR experiments indicate a change of the gap in the ladder to a ”pseudo spin gap” . Although the coexistence of this gap with superconductivity would be a very important piece of evidence, these results could up to now neither be proved nor disproved by other methods. Concerning the origin of the smaller gap in the chains a dimerization and charge ordering is discussed. Indeed, superstructure peaks that increase in intensity for temperatures below 50 K are observed in X-ray scattering on samples with x=0 . Surprisingly, the corresponding dimers are formed in the chains between the Cu spins that are separated by 2 times the distance between the nearest neighbor Cu ions. The distance between two neighboring dimers is 4 times the distance of nearest neighbor Cu ions. Therefore, the dimerization corresponds to ordered Zhang-Rice singlets on the chains. The importance of these singlet states is also discussed for the 2D HTSC . In NMR experiments on $`\mathrm{Sr}_{14\mathrm{x}}\mathrm{Ca}_\mathrm{x}\mathrm{Cu}_{24}\mathrm{O}_{41}`$ the existence of both Cu<sup>2+</sup> and Cu<sup>3+</sup> in the chains has been verified . In Raman scattering experiments with light polarization parallel to the ladder direction (bb) both gaps are identified as a renormalization of the scattering intensity to lower frequency at 2$`\mathrm{\Delta }_{01\mathrm{chain}}`$=280 K and 2$`\mathrm{\Delta }_{01\mathrm{ladder}}`$=700 K. These values are close to the frequencies found in the above discussed neutron experiments (see Fig. 0.4 and phonon spectra in Ref. ), and differ substantially from the NMR results. The signatures of both chain and ladder gaps weaken and broaden with increasing temperature till they disappear for temperatures above 100 and 350 K for the chain and ladder, respectively. Furthermore, additional modes are observed at low temperatures at 360 and 375 cm<sup>-1</sup>. Although these modes may be phonons, it is interesting to note that their energies correspond to 1.48$`\mathrm{\Delta }_{01\mathrm{ladder}}`$ and 1.54$`\mathrm{\Delta }_{01\mathrm{ladder}}`$, respectively, making them candidates for singlet bound states of the ladder. In Fig. 0.5 Raman spectra of $`\mathrm{Sr}_{14\mathrm{x}}\mathrm{Ca}_\mathrm{x}\mathrm{Cu}_{24}\mathrm{O}_{41}`$ with different x=0, 2, 5 and 12 are compared. Strong changes of the phonon lines in the frequency range 120 cm<sup>-1</sup> $`<`$$`\mathrm{\Delta }\omega `$$`<`$350 cm<sup>-1</sup> are evident that may be related to a change of the commensurability of the chain and ladder subcells. In addition, the gap of the chain subsystem is suppressed with increasing Ca substitution. In contrast to these effects, the signature of the gap in the ladder subsystem is only broadened but not shifted in frequency. This supports the negligible substitution dependence of $`\mathrm{\Delta }_{01\mathrm{ladder}}`$ observed in neutron scattering. The additional modes that are tentatively attributed to bound states are also not influenced by Ca substitution. ## 0.6 Conclusion The low energy excitation spectrum of low dimensional spin systems has been under intense investigation during the last years. Both CuGeO<sub>3</sub> and NaV<sub>2</sub>O<sub>5</sub> can be considered as model compounds as a spin gap opens below a phase transition temperature T<sub>SP</sub>. In inelastic light scattering experiments this spin gap is evidenced by a renormalization of the background intensity below 2$`\mathrm{\Delta }_{01}`$. Furthermore, well defined singlet bound states consisting of two triplet excitations are found. As their multiplicity and binding energy crucially depend on system parameters their analysis gives a wealth of information on the principal magnetic interactions in the system. These bound states are the magnetic analog of exciton states in semiconductors. It has been argued that the pseudo gap in HTSC can be understood in terms of a spin gap. In this context, the investigation of $`\mathrm{Sr}_{14\mathrm{x}}\mathrm{Ca}_\mathrm{x}\mathrm{Cu}_{24}\mathrm{O}_{41}`$ should be very useful, as this substance consists of both ladders and dimerized chains and becomes superconducting. It therefore can be understood as a link between the low dimensional spin gap systems and HTSC. Inelastic light scattering on $`\mathrm{Sr}_{14\mathrm{x}}\mathrm{Ca}_\mathrm{x}\mathrm{Cu}_{24}\mathrm{O}_{41}`$ samples with x=0 shows, in close analogy to $`\mathrm{CuGeO}_3`$ and $`\mathrm{NaV}_2\mathrm{O}_5`$, a drop in intensity for frequencies below 2$`\mathrm{\Delta }_{01}`$. Possibly, magnetic bound states for the ladder emerge as well. For x $``$ 0 the gap in the chains vanishes in agreement with results of other methods. On the other hand, the gap of the ladder persists even for x=12, the doping concentration for which superconductivity occurs under applied pressure. It will be of particular interest to follow the evolution of the spin gap approaching the superconducting phase. Therefore, measurements under hydrostatic pressure are highly desirable and under preparation. A comparison of NMR, neutron and Raman scattering results shows that the first method does not sample the same physical quantity as the other two. This problem is not fully understood. The question how this spin gap and the pseudo gap as observed in HTSC are related could be addressed by these investigations. It is questionable whether the spin gap and the pseudo gap can be directly identified in $`\mathrm{Sr}_{14\mathrm{x}}\mathrm{Ca}_\mathrm{x}\mathrm{Cu}_{24}\mathrm{O}_{41}`$ as proposed in Ref. as the energy scales of the superconducting gap and the spin gap are different by more than an order of magnitude. Nevertheless, the study of these low dimensional quantum spin systems is of fundamental importance for the understanding of collective quantum phenomena in strongly correlated electron systems, such as magnetism and superconductivity. Acknowledgement: Single crystalline samples were kindly provided by M. Weiden, E. Morre, C. Geibel, and F. Steglich (MPI-CPfS, Dresden), U. Ammerahl, G. Dhalenne, and A. Revcolevschi (Univ. Paris-Sud), J. Akimitsu (Aoyama-Gakuin Univ., Tokyo). Further support by the DFG under SFB341 and the BMBF under Fkz 13N6586/8 and 13N7329/3, and by the INTAS Project 96-410 are kindly acknowledged.
no-problem/9910/gr-qc9910105.html
ar5iv
text
# Long-range acceleration induced by a scalar field external to gravity and the indication from Pioneer 10/11, Galileo and Ulysses Data ## 1 Introduction Recently, results from an almost twenty years study of radio metric data from Pioneer 10/11, Galileo and Ulysses spacecraft have been published by a team of the NASA (Anderson et al. ), indicating an apparent anomalous, constant, acceleration acting on the spacecraft with a magnitude of the order $`8.510^8`$ cm/s<sup>2</sup>, directed towards the Sun, to within the accuracy of the Pioneers’ antennas and a steady frequency drift, a ”clock acceleration”, of about $`610^9`$ Hz/s. A number of potential causes have been ruled out by the authors, namely gravity from Kuiper belt, gravity from the Galaxy, spacecraft ”gas leaks”, anisotropic heat (coming from the RTGs) reflection off of the back of the spacecraft high-gain antennae (Katz’s proposal , see Anderson et al. ), radiation of the power of the main-bus electrical systems from the rear of the craft (Murphy’s proposal , see Anderson et al. ), errors in the planetary ephemeris, and errors in the accepted values of the Earth’s orientation, precession, and nutation, as well as nongravitational effects such as solar radiation pressure, precessional attitude-control maneuvers and a possible nonisotropic thermal radiation due to the Pu<sup>238</sup> radioactive thermal generators. Indeed, according to the authors, none of these effects explain the apparent acceleration and some are 3 orders of magnitude or more too small, so they conclude that there is an unmodeled acceleration towards the Sun of $`(8.09\pm 0.20)10^8`$ cm/s<sup>2</sup> for Pioneer 10, $`(8.56\pm 0.15)10^8`$ cm/s<sup>2</sup> for Pioneer 11, $`(12\pm 3)10^8`$ cm/s<sup>2</sup> for Ulysses and $`(8\pm 3)10^8`$ cm/s<sup>2</sup> for Galileo. The authors plan to utilize two different transmission frequencies in further analysis to give an answer to whether there is some unknown interaction of the radio signals with the solar wind. Since no ”standard physics” plausible explanations for the residual acceleration has been found so far, the authors considered the possibility that the origin of the anomalous signal is the effect of a modification of gravity, for instance by adding a Yukawa force to the Newtonian or Milgrom’s proposed modification of gravity (Milgrom ). They concluded however that neither easily works. If the cause is dark matter, the amount needed to be consistent with the accuracy of the ephemeris should be only of order a few times $`10^6M_{}`$ even within the orbit of Uranus (Anderson et al. ). Above all, the authors point out that the residual acceleration is too large to have remained undetected in the planetary orbits of the Earth and Mars. Indeed, the Viking ranging data limit any unmodeled radial acceleration acting on the Earth and Mars to no more than $`0.110^8`$ cm/s<sup>2</sup>. Because of this severe constraint, the authors argue that, if the anomalous radial acceleration is of gravitational origin, it probably violates the principle of equivalence. But, an alternative is that the anomalous acceleration is asymptotically constant, rather than constant at all radii from the center of the solar system to the present location of Pioneer 10/11 spacecraft. In this paper we propose an alternative explanation, based on the possible existence of a long range (non gravitational) scalar field, $`\varphi `$, which respects the (weak) equivalence principle. This possibility was previously introduced by Mbelek , to account for the rotational curves of spiral galaxies, as an alternative for dark matter. It gives, for the Pioneer 10/11 spacecraft, the correct order of magnitude for both the anomalous acceleration, $`a_P`$, and the clock acceleration, $`a_t`$. It proves to remain consistent with the planetary orbits determined from the Viking data. As for the ordinary matter, the $`\varphi `$-field is a gravitational source through its energy-momentum tensor. A forthcoming paper will present the fundamental symmetry that may support it, from the background of classical fields theory. The plan of this paper is as follows : in section 2, we set the $`\varphi `$-field equation. Then, after linearization we divide space in three characteristic regions and find approximate exterior solutions for a static spherically symmetric source, actually the Sun. In section 3, Einstein equations are solved in the weak fields approximation to account for the metric tensor in the presence of the $`\varphi `$-field (out of the Sun). In section 4, the equation of motion of a test body in the presence of the $`\varphi `$-field is established. Solutions are found in the weak fields and low velocity limit. Then the anomalous long-range acceleration $`a_P`$ is derived for the different regions of space. In section 5, an interpretation of the data is proposed. In section 6, the steady frequency drift $`a_t`$ is derived by using the equivalence principle. We finally conclude by an estimation of the cosmological constant by exploiting the declining part of the rotational curve (RC) of the dwarf galaxy DDO 154. ## 2 The scalar field equation A manner to generate a constant radial acceleration could result from the introducion of a linear potential term in the Lagrangian of a test particle. An example is provided by the exterior solution of the locally conformal invariant Weyl gravity for a static, spherically symmetric source (Mannheim and Kazanas ). Unfortunately, Perlick and Xu , by matching the exterior solution to an interior one that satisfies the weak energy condition and a regularity condition at the center, show that this leads to contradiction of Mannheim and Kazanas’s suggestion. They conclude that the conformal Weyl gravity is not able to give a viable model of the solar system. This paper presents an alternative solution, under the form of a real scalar field, external to gravity but which satisfies the equivalence principle. We show below that it leads to the desired ” Pioneer effect ”, although it does not modify, as required, the orbital properties of the internal planets. The field $`\varphi `$ obeys the equation $$_\nu ^\nu \varphi =U^{}(\varphi )J,$$ (1) where the symbol $`_\nu `$ stands for the covariant derivative compatible with the Levi-Civita connection. Equation (1) may be derived from Einstein equations provided that the energy-momentum tensor of the $`\varphi `$-field is of the form $`T_{\mu \nu }^{(\varphi )}=_\mu \varphi _\nu \varphi g_{\mu \nu }[\frac{1}{2}_\lambda \varphi ^\lambda \varphi U(\varphi )J𝑑\varphi ]`$ (up to a positive multiplicative dimensionality constant, $`\kappa `$, for $`\varphi `$ is dimensionless in this paper) and the energy-momentum tensor of the ordinary matter (matter or radiation other than the $`\varphi `$-field) is divergenceless (e.g., zero for the exterior solution and of the perfect fluid form for the interior solution). In the rest of the paper, we apply the weak field approximation to the real classical scalar field $`\varphi `$ and to the gravitational potentials, so that Newtonian physics apply. For a weak gravitational field, equ(1) above will write merely $$_\mu ^\mu \varphi =U^{}(\varphi )J.$$ (2) The potential $`U`$ denotes the self-interaction of $`\varphi `$, and we note $`U^{}(\varphi )=\frac{U}{\varphi }`$. The source term $`J`$, an external source function, takes gravity into account, as a source for the field $`\varphi `$. Of course, $`\varphi `$ also acts as a source for gravity (through Einstein equations). We consider this latter action below in section $`3`$. In the weak field approximation, $`J`$ depends on the Newtonian gravitational potential $`V_N`$, the only relevant scalar quantity related to a weak gravitational field. Thus, we write at first order $`J=J(\frac{V_N}{c^2})\frac{V_N}{r_0^2c^2}`$, where the constant $`r_0`$ defines a characteristic length scale (see subsection $`\mathrm{5.1.1}`$ for an estimation of $`r_0`$). The minus sign comes from the requirement that the effect of $`\varphi `$ is similar to that of gravitation, so that $`\frac{d\varphi }{dr}`$ and $`\frac{dV_N}{dr}`$ have the same sign (as we will see, the $`\varphi `$-field generates an acceleration term $`\frac{d\varphi }{dr}`$, up to a positive multiplicative factor). This is in accordance with our previous study on the RC of spiral galaxies in which we found the positivity of $`\frac{d\varphi }{dr}`$ necessary for the $`\varphi `$-field mimics a great part of the missing mass . In the solar system, $`\frac{d\varphi }{dr}`$ remains positive. The scalar field $`\varphi `$ is positive definite throughout this paper. Here we explore the effect of $`\varphi `$ in the solar system, i. e., in the potential $`V_N=c^2r_s/2r`$ created by the static central mass of the the Sun, $`r`$ being the radius from the centre (we choose as usual a zero value of the Newtonian potential at infinity), and $`r_s`$ the Schwarzschild radius of the Sun. The problem has spherical symmetry, so that equation (2) yields finally $$\frac{d^2\varphi }{dr^2}+\frac{2}{r}\frac{d\varphi }{dr}=U^{}(\varphi )+\frac{r_s}{2rr_0^2}.$$ (3) We will calculate the resulting $`\varphi `$-field and show that it creates an asymptotically constant acceleration: we solve the equation with the limiting condition (imposed by the weak fields approximation) that the field $`\varphi `$ and its derivative $`\frac{d\varphi }{dr}`$ are bounded for any given region of space. In addition, the field $`\varphi `$ must vanish (up to an additive constant) if one sets $`M=0`$, since the central mass $`M`$ is its source ; the same condition applies to $`\frac{d\varphi }{dr}`$. As a first step to resolve equ(3), let us neglect for the moment the contribution of the self-interaction. The solution is $$\varphi =C+\frac{r_s}{4r_{0}^{}{}_{}{}^{2}}r\frac{A}{r}$$ (4) and thence, $$\frac{d\varphi }{dr}=\frac{r_s}{4r_0^2}+\frac{A}{r^2},$$ (5) where A and C are constants of integration. The constant of integration $`A`$ of the dimension of a length obviously depends on $`r_s`$ since the central mass is the source of the $`\varphi `$-field. Accordingly, we may set $`A=\zeta r_s/2`$, where $`\zeta `$ is a positive dimensionless constant that we will assume hereafter of the order unity. The positivity of $`\zeta `$ is inferred by the positivity of the spatial derivative $`\frac{d\varphi }{dr}`$ at any distance from the centre. Note that the particular value $`\zeta =1`$ involves the identity of the potential term $`A/r`$ with the Newtonian one $`V_N/c^2`$. We will show that, in a certain radius range and at sufficiently large distances from the centre, this represents the true solution to equ(3): the radial acceleration induced by the $`\varphi `$-field (neglecting its self-interaction), proportional to $`\frac{d\varphi }{dr}`$, remains asymptotically constant. This will be referred to as the ”Pioneer effect” throughout. In order to solve the complete equation, we need to know the form of $`U(\varphi )`$, although we will see that many results remain independent of this choice. To illustrate, we choose here a quartic self-interaction potential $`U=U(0)+\frac{1}{2}\mu ^2\varphi ^2+\frac{\sigma }{4}\varphi ^4`$, where $`\sigma <0`$ is the self-coupling coefficient of the scalar field ; $`\frac{1}{2}\mu ^2\varphi ^2`$ is the ”mechanical” mass term with $`\mu =\frac{m_\varphi c}{\mathrm{}}`$, $`m_\varphi `$ denoting the mass of the scalar field. The reason for choosing a quartic polynomial form is that the corresponding quantum field theory should be renormalizable (Madore ). This potential presents two extrema : one minimum at $`\varphi _I=\varphi (r_I)`$, with $`U^{}(\varphi _I)=0`$ and $`U^{\prime \prime }(\varphi _I)>0`$ ; and one maximum at $`\varphi _{III}=\varphi (r_{III})=\mu /\sqrt{\sigma }`$, with $`U^{}(\varphi _{III})=0`$ and $`U^{\prime \prime }(\varphi _{III})<0`$ ($`U^{\prime \prime }(\varphi )=\frac{^2U}{\varphi ^2}`$). There is also an inflexion point at $`\varphi _{II}=\varphi (r_{II})=\mu /\sqrt{3\sigma }`$, with $`U^{}(\varphi _{II})>0`$ but $`U^{\prime \prime }(\varphi _{II})=0`$. Since $`\varphi `$ increases monotonically with respect to $`r`$, this corresponds to three regions I, II, III in space with $`r_I<r_{II}<r_{III}`$. Moreover, the monotony of $`\varphi `$ together with the relation $`\varphi _{III}=\sqrt{3}\varphi _{II}`$ that links $`\varphi (r_{III})`$ to $`\varphi (r_{II})`$ involves (using the solution (4), in the first approximation) : $$r_{III}\sqrt{3}r_{II}$$ (6) Let us call $`\varphi _0`$, generically, a local extremum of $`U(\varphi )`$ or $`U^{}(\varphi )`$. In the neighbour region of space, $`\varphi \varphi _01`$, equation (3) may be solved in the weak field approximation by linearizing the function $`U^{}(\varphi )`$ about $`\varphi _0`$. This yields $$\frac{d^2\varphi }{dr^2}+\frac{2}{r}\frac{d\varphi }{dr}U^{}(\varphi _0)U^{\prime \prime }(\varphi _0)(\varphi \varphi _0)=\frac{r_s}{2r_{0}^{}{}_{}{}^{2}r}$$ (7) * in the first region of space (region I), $`\varphi _0=\varphi _I`$ * in region II, $`\varphi _0=\varphi _{II}`$ * in region III, $`\varphi _0=\varphi _{III}`$. Besides, as for the Higgs mechanism of symmetry breaking which too involves a scalar field and a quartic self-interaction potential, an analogy can be made with a well known phenomenon in solid state physics : the Meissner effect which is a phase transition of the second kind, between the superconducting to the normal state. Here, the field $`\varphi `$ plays the role of the magnetic flux and $`U^{\prime \prime }(\varphi )`$ plays the role of the difference $`\mathrm{\Delta }T=T_cT`$ between the temperature, $`T`$, of the solid and its critical temperature, $`T_c`$. * In region I, equ(7) reads $$\frac{d^2\varphi }{dr^2}+\frac{2}{r}\frac{d\varphi }{dr}\mu ^2(\varphi \varphi _I)=\frac{r_s}{2rr_0^2},$$ (8) with solution $$\varphi =\varphi _I\frac{\overline{\lambda }^2}{2r_{0}^{}{}_{}{}^{2}}\frac{r_s}{r}(1e^{(rr_I)/\overline{\lambda }})$$ (9) which implies $$\frac{d\varphi }{dr}=\frac{\overline{\lambda }^2}{2r_{0}^{}{}_{}{}^{2}}\frac{r_s}{r^2}[1(1+\frac{r}{\overline{\lambda }})e^{(rr_I)/\overline{\lambda }}],$$ (10) where $`\overline{\lambda }=1/\mu `$ characterizes the dynamical range of the $`\varphi `$-field in region I. Clearly, it is necessary that $`r_I=0`$ for the solution (9) be consistent with the conditions on $`\varphi `$ and $`\frac{d\varphi }{dr}`$. Let us notice that, for a sufficiently massive $`\varphi `$-field ($`m_\varphi 10\sqrt{2}\mathrm{}/cr_{}=1.810^{17}eV/c^2`$, where $`r_{}`$ denotes the radius of the central mass), $`\frac{d\varphi }{dr}`$ is smaller by a factor 1/100 than the same quantity that would be involved by relation (5). This means that the $`\varphi `$-field is expelled out from the central region I, so that the Pioneer effect is destroyed here. This situation appears analogous to the Meissner effect where the magnetic flux is expelled out in the superconducting state ($`\mathrm{\Delta }T>0`$). That the $`\varphi `$-field is expelled from region I ($`U^{\prime \prime }(\varphi )>0`$), grants that the orbits of the internal planets are not modified. Its significant action on matter is restricted to regions II and III. Figure 1 shows the predicted curve $`y=a_P/a_P^{\mathrm{}}`$ versus $`x=r/\overline{\lambda }`$ for region I. * In region II and about $`r_{II}`$, an approximate solution $`\varphi ^{}`$ is obtained by solving equation (7) on account that $`\varphi _0=\varphi _{II}`$ ; one finds : $$\varphi ^{}=C+\frac{r_s}{4r_{0}^{}{}_{}{}^{2}}r\frac{A}{r}+\frac{U^{}(\varphi _{II})}{6}r^2$$ (11) and $$\frac{d\varphi ^{}}{dr}=\frac{r_s}{4r_{0}^{}{}_{}{}^{2}}+\frac{A}{r^2}+\frac{U^{}(\varphi _{II})}{3}r,$$ (12) where $`A`$ and $`C`$ are constants of integration. Clearly, the extra potential $`U^{}(\varphi _{II})r^2/6`$ will behave like a positive cosmological constant type term. Let us notice that the condition of the weak field approximation, $`\varphi \varphi _{II}1`$, is always satisfied as long as $`rr_{II}r_{0}^{}{}_{}{}^{2}/r_s`$ in as much as the $`\mathrm{\Lambda }`$ term is neglected. In region II below or beyond $`r_{II}`$, an improved solution $`\varphi =\varphi ^{}+\delta \varphi `$ is obtained in the first approximation by adding to the previous solution $`\varphi ^{}`$ a correction term $`\delta \varphi `$. This involves : $$\frac{d^2\delta \varphi }{dr^2}+\frac{2}{r}\frac{d\delta \varphi }{dr}U^{\prime \prime }(\varphi ^{})\delta \varphi =0$$ (13) Now, the ”curvature” $`U^{\prime \prime }(\varphi )`$, and hence its mean value, is positive between $`r_I`$ and $`r_{II}`$ but negative between $`r_{II}`$ and $`r_{III}`$. In the first approximation, one may write $`U^{\prime \prime }(\varphi <\varphi _{II})U^{\prime \prime }(\varphi _I)=1/\overline{\lambda }^2`$ and $`U^{\prime \prime }(\varphi >\varphi _{II})k^2`$, with $`k`$ a positive constant. We show in the following that $`\lambda =\frac{2\pi }{k}`$ defines a wavelength related to the part of region II beyond $`r_{II}`$. So, replacing $`U^{\prime \prime }(\varphi ^{})`$ by the value $`k^2`$, equation (13) becomes in the first order approximation : $$\frac{d^2\delta \varphi }{dr^2}+\frac{2}{r}\frac{d\delta \varphi }{dr}+k^2\delta \varphi =0.$$ (14) The solution of the above equation is of the form $$\delta \varphi =\frac{B}{r}\mathrm{sin}(kr\mathrm{\Phi }_{II}),$$ (15) where $`B`$ is a constant of integration and $`\mathrm{\Phi }_{II}`$ is a phase offset. Consequently, on account of the solution (11) and the continuity of $`\varphi `$ at the radius $`r_{II}`$, the first order solution of equation (3) writes in region II beyond $`r_{II}`$ : $$\varphi =C+\frac{r_s}{4r_{0}^{}{}_{}{}^{2}}r\frac{A}{r}+\frac{U^{}(\varphi _{II})}{6}r^2+\frac{B}{r}\mathrm{sin}k(rr_{II}),$$ (16) $$\frac{d\varphi }{dr}=\frac{r_s}{4r_{0}^{}{}_{}{}^{2}}+\frac{A}{r^2}+\frac{B}{r^2}[\frac{2\pi r}{\lambda }\mathrm{cos}(\frac{2\pi (rr_{II})}{\lambda })\mathrm{sin}(\frac{2\pi (rr_{II})}{\lambda })]+\frac{U^{}(\varphi _{II})}{3}r.$$ (17) Below $`r_{II}`$, the solution is of the form : $$\varphi =\varphi _I\frac{\overline{\lambda }^2}{2r_{0}^{}{}_{}{}^{2}}\frac{r_s}{r}(1e^{r/\overline{\lambda }})+C+\frac{r_s}{4r_{0}^{}{}_{}{}^{2}}r\frac{A}{r}+\frac{U^{}(\varphi _{II})}{6}r^2$$ (18) The continuity of $`\varphi `$ at the radius $`r_{II}`$ involves : $$\varphi _I=\frac{\overline{\lambda }^2}{2r_{0}^{}{}_{}{}^{2}}\frac{r_s}{r_{II}}(1e^{r_{II}/\overline{\lambda }})$$ (19) Further, the above solution involves a critical radius $`r_c`$ at which the solutions of both regions I and II are connected. This critical radius is a solution of the following equation : $$C+\frac{r_s}{4r_{0}^{}{}_{}{}^{2}}r\frac{A}{r}+\frac{U^{}(\varphi _{II})}{6}r^2=0.$$ (20) The constant $`C`$ is determined from relation (20) by requiring that $`r_s`$ and $`A`$ vanish whenever one sets $`M`$ equal to zero. One finds $`C=U^{}(\varphi _{II})/6`$ $`r_{c}^{}{}_{}{}^{2}`$ and therefore : $$r_c=\sqrt{2\zeta }r_0$$ (21) We will neglect throughout the contribution of cosmological constant type terms to the dynamics of the ordinary matter at the scale of the solar system since this is known to be very small at present epoch. * In region III, the solution is of the form : $$\varphi =\varphi _{III}+\frac{D}{r}\{1\mathrm{cos}[\frac{2\pi }{\lambda ^{}}(rr_{III})]\}$$ (22) which implies $$\frac{d\varphi }{dr}=\frac{D}{r^2}\{1\mathrm{cos}[\frac{2\pi }{\lambda ^{}}(rr_{III})]\frac{2\pi r}{\lambda ^{}}\mathrm{sin}[\frac{2\pi }{\lambda ^{}}(rr_{III})]\},$$ (23) where $`D`$ is a constant of integration and $`\lambda ^{}=2\pi /\sqrt{U^{\prime \prime }(\varphi _{III})}`$ defines a wavelength for the $`\varphi `$-field in region III. Hence, $`the`$ $`\varphi `$-$`field`$ $`would`$ $`have`$ $`a`$ $`damped`$ $`oscillatory`$ $`behavior`$ $`in`$ $`the`$ $`regions`$ $`of`$ $`space`$ $`where`$ $`U^{\prime \prime }(\varphi )<0`$. ## 3 Einstein equations ### 3.1 The gravitational field sources The metric tensor $`g_{\mu \nu }`$ is solution of the Einstein equations $$R_{\mu \nu }\frac{1}{2}Rg_{\mu \nu }=\frac{8\pi G}{c^4}T_{\mu \nu }.$$ (24) In the presence of the scalar field $`\varphi `$, its right-hand side $$T_{\mu \nu }=T_{\mu \nu }^{}+T_{\mu \nu }^{(\varphi )}$$ (25) incorporates the energy-momentum tensor of the ordinary matter, $`T_{\mu \nu }^{}`$, and the energy-momentum tensor of the $`\varphi `$-field itself. Let us emphasize that the scalar field considered in this paper is external to gravity (like the electromagnetic field) but obeys the equivalence principle (unlike the electromagnetic field). ### 3.2 The weak fields approximation Let us denote as $`g_{\mu \nu }^{}`$ (resp. $`g^{\mu \nu }`$) the solution of the Einstein equations for $`\varphi =0`$ and $`g_{\mu \nu }`$ (resp. $`g^{\mu \nu }`$) the components of the metric tensor in the presence of the $`\varphi `$-field (all greek indices run over 0, 1, 2, 3 and $`x^0=ct`$) ; $`R_{\mu \nu }`$ denotes the Ricci tensor, $`R=g^{\mu \nu }R_{\mu \nu }`$ is the curvature scalar (Einstein’s summation convention is adopted throughout this paper) and the $`\mathrm{\Gamma }_{\alpha \beta }^{\mu ^{}s}`$ are the Christoffel symbols. Hereafter, whenever we assume spherical symmetry : $`x^1=r`$, $`x^2=\theta `$, $`x^3=\phi `$ (for the sake of simplicity, for planar motion $`\phi =\frac{\pi }{2}`$ in the following), otherwise the $`x^{i^{}s}`$ denote the Cartesian coordinates ($`i=1,2,3`$). Einstein equations rewrite $$R_{\mu \nu }=\frac{8\pi G}{c^4}[(T_{\mu \nu }^{}\frac{1}{2}T^{}g_{\mu \nu })+(T_{\mu \nu }^{(\varphi )}\frac{1}{2}T^{(\varphi )}g_{\mu \nu })],$$ (26) where $`T^{}=g^{\alpha \beta }T_{\alpha \beta }^{}`$ is the trace of $`T_{\mu \nu }^{}`$ and $`T^{(\varphi )}=g^{\alpha \beta }T_{\alpha \beta }^{(\varphi )}`$ is the trace of $`T_{\mu \nu }^{(\varphi )}`$. In the weak field approximation, one gets in particular : $`T_{00}^{(\varphi )}\frac{1}{2}T^{(\varphi )}g_{00}=\kappa (U(\varphi )+J𝑑\varphi )`$ and $`T_{00}^{}\frac{1}{2}T^{}g_{00}=\frac{1}{2}\rho c^2`$ (weak gravitational field approximation). Furthermore, one has in the first approximation $$R_{00}=\frac{1}{2}^2g_{00}.$$ (27) So, we may write : $$g_{00}=1+2\frac{V_NV_\varphi }{c^2}$$ (28) with $$^2V_\varphi =\frac{8\pi G}{c^2}\kappa (U(\varphi )+J𝑑\varphi )$$ (29) $`^2V_N=4\pi G\rho `$ and $`g_{00}^{}=1+2V_N/c^2`$, where $`\rho `$ is the density of the ordinary matter. Derivating partially equ(29) with respect to $`\varphi `$ then comparing with equ(2) yields : $$\frac{V_\varphi }{\varphi }=(\frac{V_\varphi }{\varphi })_{r=r_I}+\frac{8\pi G}{c^2}\kappa \varphi ,$$ (30) on account that the derivative $`\frac{V_\varphi }{\varphi }`$ should be bounded even when extrapolated at $`r=0`$. ## 4 Equation of motion The equation of motion of a test body in the presence of the scalar field $`\varphi `$ writes in curved spacetime : $$\frac{du^\mu }{ds}+\mathrm{\Gamma }_{\alpha \beta }^\mu u^\alpha u^\beta =^\mu \varphi +\frac{d\varphi }{ds}u^\mu .$$ (31) This means that a force term $`F^\mu =mc^2[^\mu \varphi (d\varphi /ds)u^\mu ]`$ enters in the right-hand side of the equation of motion of a test body of mass $`m`$ in the presence of the $`\varphi `$-field. The first term of the right-hand side, $`^\mu \varphi `$, is analogous to the electric part of the electromagnetic force whereas the second one $`(d\varphi /ds)u^\mu `$ is analogous to the magnetic part. Both terms are necessary to satisfy the unitarity of the velocity 4-vector ($`u_\mu u^\mu =1`$, hence $`u_\mu (u^\nu _\nu )u^\mu =0`$. Equation (31) may be derived from the Lagrangian : $$L=\frac{mc^2}{2}e^\varphi (g_{\mu \nu }u^\mu u^\nu +1).$$ (32) ### 4.1 Motion in weak fields with low velocity In the weak fields and low velocity limit, equation (31) simplifies to $$\frac{d^2x^i}{dt^2}=c^2\mathrm{\Gamma }_{00}^ig^{ii}\frac{\varphi }{x^i}c^2+\frac{d\varphi }{dt}\frac{dx^i}{dt},$$ (33) Now, $`g^{ii}1`$ and $`\mathrm{\Gamma }_{00}^i1/2g^{ii}g_{00}/x^i`$. Hence, the equation of motion rewrites in vectorial notation : $$\frac{d^2\stackrel{}{r}}{dt^2}=\stackrel{}{}V_Nfc^2\stackrel{}{}\varphi +\frac{d\varphi }{dt}\frac{d\stackrel{}{r}}{dt}$$ (34) where we have set $`f=\frac{(V_\varphi /c^2)}{\varphi }1`$. In next section $`4.2`$, we show that $`f`$ is positive. ### 4.2 Derivation of the long-range acceleration $`a_P`$ The projection of equation (34) above in plane polar coordinates $`(r,\theta )`$ yields, assuming $`dr/dtc\sqrt{f}`$, the radial component of the acceleration vector, $$a_r=\frac{GM}{r^2}f\frac{d(\varphi c^2)}{dr}.$$ (35) In the low velocity limit, the tangential component of the acceleration vector, $`a_\theta `$, is equal to zero (conservation of the angular momentum). Clearly, relation (35) is of the form : $$a_r=(a_N+a_P)$$ (36) where $`a_N=\frac{GM}{r^2}`$ is the magnitude of the Newtonian radial acceleration and $`a_P`$ is the radial acceleration induced by the scalar field, $$a_P=fc^2\frac{d\varphi }{dr}.$$ (37) Relation (37) applies to any region of space out of the central mass. Besides, since $`d\varphi /dr>0`$, $`f`$ must be positive for $`a_P`$ mimics a missing mass gravitational field (see section $`7`$ below). Hence, it follows that the radial acceleration induced by the scalar field will be directed towards the central mass as observed for Pioneer 10/11, Ulysses and Galileo. #### 4.2.1 Region I In region I and for $`r\overline{\lambda }`$, the scalar field is expelled out and consequently equation (34) simplifies to : $$\frac{d^2\stackrel{}{r}}{dt^2}=(1+f_0\frac{\overline{\lambda }^2}{r_{0}^{}{}_{}{}^{2}})\stackrel{}{}V_N$$ (38) or equivalently $$\frac{d^2\stackrel{}{r}}{dt^2}=G\frac{M+M_{hidden}}{r^2}\stackrel{}{u}_r,$$ (39) where $`f_0=f(0)`$, $`\stackrel{}{u}_r=\stackrel{}{r}/r`$ is the radial unitary vector and $`M_{hidden}=f_0(\overline{\lambda }/r_0)^2M`$ mimics a hidden mass term (so that the true dynamical mass of the Sun differs from its luminous mass, $`M_{}`$, by the amount $`f_0(\overline{\lambda }/r_0)^2M_{}`$). It is worth noticing that the kind of missing mass which is invoked here mimics a spherical distribution of dark matter located within the Sun rather than a solar halo dark matter. In this respect, we will distinguish the hidden mass from dark matter. Throughout, hidden mass means extra terms involving the $`\varphi `$-field and that mimic a mass term. Both hidden mass and dark matter define the missing mass. Furthermore, since the $`\varphi `$-field respects the equivalence principle, equation (38) involves a maximum shift $`Z`$ on the frequency of a photon given by : $$Z=(1+f_0\frac{\overline{\lambda }^2}{r_{0}^{}{}_{}{}^{2}})\mathrm{\Delta }V_N/c^2.$$ (40) Equation (40) above allows us as yet to put an upper bound on the possible value of $`f_0`$. Indeed, analysis of the data from the tests of local position invariance (”the outcome of any local non-gravitational experiment is independent of where and when in the universe it is performed”) yields a limit $`f_0<210^4r_{0}^{}{}_{}{}^{2}/\overline{\lambda }^2`$ (see C. M. Will ). The local position invariance is one of the three pieces of the equivalence principle and, since the $`\varphi `$-field respects the equivalence principle, this is a crucial test for this field. As we will see further, the $`\varphi `$-field passes the current tests. Indeed, one finds that $`f_0`$ is of the order $`10^6`$ (see subsection 5.1.2). In addition, the study of the possible effect of dark matter on the motion of the outer planets involves that the missing mass within the Sun is necessarily less than $`10^6M_{}`$ (see Anderson et al. ). Thence, we may conclude that $`\overline{\lambda }r_0`$. Clearly, a $`\varphi `$-field of mass $`m_\varphi 1.810^{17}eV/c^2`$ passes all the current tests. #### 4.2.2 Region II Let us neglect for the moment the contribution of the damped oscillations. We will also neglect the $`\varphi `$-term in relation (30) so that $`ff_0`$ (i.e., we neglect the anharmonic terms). Replacing $`\frac{d\varphi }{dr}`$ by the expression (5) obtained for region II, relation (37) yields : $$a_P=a_P^{\mathrm{}}(1+2\frac{r_{}^{}{}_{0}{}^{}{}_{}{}^{2}}{r^2})$$ (41) where we have set $$r_{}^{}{}_{0}{}^{}=\sqrt{\zeta }r_0$$ (42) and $$a_P^{\mathrm{}}=\frac{f_0}{2}\frac{GM}{r_0^2}$$ (43) turns out to be the asymptotic radial residual acceleration. Besides, combining relations (21) and (42) above yields $$r_c=\sqrt{2}r_{}^{}{}_{0}{}^{}$$ (44) and thence $$a_P(r=r_c)=2a_P^{\mathrm{}}$$ (45) which is also the maximum possible value for $`a_P`$. Relation (44) may also be derived by requiring the continuity of the derivative $`d\varphi /dr`$ at radius $`r_c`$ (neglecting the $`\mathrm{\Lambda }`$ term). ## 5 Interpretation of the data It has been questionned why the Pioneer effect has gone undetected in the planetary orbits of the Earth and Mars. Precisely, the Viking ranging data limit any unmodeled radial acceleration acting on Earth and Mars to no more than $`0.110^8`$ cm/s<sup>2</sup>. Indeed, since the Pioneer effect is expected in region II but not in region I, there must be some critical radius $`r_c`$ which allows one to distinguish between these two region of space within the solar system. As region II is defined about the radius $`r_{II}`$, one may reasonably consider that $`r_c`$ is of the order $`r_{II}/10`$ and accordingly the anomalous acceleration $`a_P`$ should be negligibly small below the radius $`r_c/2`$. Indeed, our estimate of $`r_{II}`$, given in subsection $`5.2`$, is in accordance with the estimate of $`r_c`$ given in subsection $`\mathrm{5.1.1}`$ and both estimations corroborate the fact that the Pioneer effect is negligibly small below the asteroid belt. Our scalar field, external to gravity but which respects the equivalence principle, provides a solution to both the anomalous radial acceleration observed on the spacecraft and the absence of a comparable effect on the Earth or Mars. Indeed, it is worth noticing that all the spacecraft which undergo the Pioneer effect were located at radii well beyond the orbital radius of Mars when the data were received from them (the closest spacecraft, Galileo and Ulysses, were in the vicinity of Jupiter). Moreover, ”no magnitude variation of $`a_p`$ with distance was found, within a sensitivity of $`210^8`$ cm/s<sup>2</sup> over a range of $`40`$ to $`60`$ AU”. On account of these facts, we conclude that the Pioneer effect is a distance effect and $`a_p`$ is rather asymptotically constant within the regions hitherto crossed by the spacecraft. Above all, the scalar field approach leads to the same conclusion. ### 5.1 Estimate of $`a_P`$ for Pioneer 10/11 using Ulysses data To start with, let us recall that no magnitude variation of $`a_P`$ with distance was found , within a sensitivity of $`210^8`$ cm/s<sup>2</sup> over a range of $`40`$ to $`60`$ AU (the data analysis of unmodeled accelerations began when Pioneer 10 was at 20 AU from the Sun). Thus we may set $`a_Pa_P^{\mathrm{}}`$ for the Pioneer 10/11. Since we need to be given at least one point in the curve $`a_P`$ versus r to be able to determine all the parameters needed, our strategy will consist to use a piece of information from the Ulysses data (the nearest point) to compute the Pioneer 10/11 data (the farthest points). It is worth noticing that the piece of information considered by itself gives no information on the magnitude of the long-range acceleration of the spacecraft. It is this feature that makes the adopted procedure relevant. To compute $`a_P^{\mathrm{}}`$ we need to estimate first $`r_0`$ and $`f_0`$. #### 5.1.1 estimate of $`r_0`$ and $`r_c`$ As one can see, for $`r=2r_{}^{}{}_{0}{}^{}`$, relation (41) implies that $`a_P=\frac{3}{2}a_P^{\mathrm{}}`$. Now, this was observed for Ulysses in its Jupiter-perihelion cruise out of the plane of the ecliptic (at $`5.5`$ AU). This is also consistent with Galileo data (strongly correlated with the solar radiation pressure ; correlation coefficient equal to 0.99) if one adopts for the solar radiation pressure (directed away from the Sun) a bias contribution to $`a_P`$ equal to $`(4\pm 3)10^8`$ cm/s<sup>2</sup>. Hence, we conclude that $`r_{}^{}{}_{0}{}^{}`$ is approximately equal to half of Jupiter’s orbital radius, that is $`r_{}^{}{}_{0}{}^{}2.75`$ AU and consequently $`r_c3.9`$ AU on account of relation (44). Let us assume for the moment $`\zeta `$ equal to unity ; this leads to conclude that $`r_02.75`$ AU. Figure 2 shows the shape predicted for the curve $`a_P/a_P^{\mathrm{}}`$ versus the radius. The plot starts from the radius $`r_c=3.9`$ AU to the radius $`r=60`$ AU. #### 5.1.2 estimate of $`f_0`$ and derivation of the magnitude of $`a_P^{\mathrm{}}`$ In our study on the RC of spiral galaxies, we found that $`f_0`$ is of the order $`\frac{v_{max}^2}{c^2}`$, where $`v_{max}`$ denotes the maximum rotational velocity. This seems to be a general order of magnitude for this parameter. So, in what follows, we derive an estimation of $`f_0`$ using the relation $`f_0\frac{v_{max}^2}{c^2}`$, where $`v_{max}`$ is a maximum velocity to be determined for the solar system. In the case of interest in this paper, the $`\varphi `$-field under consideration though external to gravity is generated from the Sun (or any other star we would have considered). Therefore, it seems natural that $`v_{max}`$ should be a typical velocity that is related to the matter components of this star and not a peculiar orbital velocity. A suitable value for $`v_{max}^2`$ (see Ciufolini ), perhaps the best for it involves thermodynamics parameters solely, is given by the ratio $`P_c/\rho _c`$ (assuming the perfect gas), where $`P_c`$ and $`\rho _c`$ denote respectively the central pressure and mass density of the star under consideration. Taking the value of $`T_c`$ given by solar models (Stix , Brun et al. ), the expression $`v_{max}^2=\frac{P_c}{\rho _c}`$ gives for the Sun : $`f_0=(1.72\pm 0.04)10^6`$ and $`a_P^{\mathrm{}}=(6.8\pm 0.2)10^8`$ cm/s<sup>2</sup>, in good agreement with the recent results which give $`a_P^{\mathrm{}}=(7.29\pm 0.17)10^8`$ cm/s<sup>2</sup> as the most accurate measure of the anomalous acceleration of Pioneer 10 (Turyshev et al. ). Further, the value computed for $`a_P^{\mathrm{}}`$ may be corrected to $`a_P^{\mathrm{}}=(7.23\pm 0.2)10^8`$ cm/s<sup>2</sup> by identifying $`\lambda `$ with $`r_0`$ (see subsection $`5.2`$ below) : hence, $`\zeta 1.07`$. ### 5.2 Damped oscillations and vanishing of $`a_P`$ Figure 1 of the paper of Anderson et al. shows an almost harmonic oscillation of $`a_P`$ (nothing is said about this by the authors themselves though) for Pioneer 10 which starts at the radius $`r_{II}=56.7\pm 0.8`$ AU with an amplitude $`a_{Pm}`$ of the order $`\frac{1}{4}10^8`$ cm s<sup>-2</sup> (this is derived by comparison with the uncertainty on $`a_P`$ for Pioneer 10) and a wavelength $`\lambda =2.7\pm 0.2`$ AU the value of which turns out to be quite identical to that of $`r_0`$ (let us notice by passing that $`r_{II}/\lambda `$ is an integer (= $`21`$)). With these observational data, relation (17) involves, for $`\zeta 1`$ and $`B=A/16`$, $`a_{Pm}0.2610^8`$ cm s<sup>-2</sup> between $`56`$ AU and $`60`$ AU as can be seen in figure 3. Furthermore, the calculations carried out in subsection $`2`$ lead to predict the decline of $`a_P`$ in the form of damped oscillations beyond $`r=r_{III}`$. Hence, since $`r_{III}>\sqrt{3}r_{II}`$, we may confidently expect the decline of $`a_P`$ to occur only beyond $`r=96.8`$ AU. Let us emphasize that the spatial periodicity $`\lambda `$ involves a temporal periodicity $`T_P=\lambda /v_P`$, where $`v_P`$ is the speed of the spacecraft. Hence, the periodicity of one year found for Pioneer 10 has nothing to do with the orbital periodicity of the Earth. Indeed, the coincidence just comes from the fact that $`v_P=2.66`$ AU/yr for Pioneer 10 (at least since 1987). As a consequence, $`T_P`$ should be greater than one year for Pioneer 11 since Pioneer 10 is faster mooving than Pioneer 11. ## 6 Derivation of the steady frequency drift using the equivalence principle Since, the $`\varphi `$-field obeys the equivalence principle, the steady frequency drift may be explained in another way than the Doppler effect thanks to this principle (cf. Misner et al ). Actually, the steady frequency drift and the corresponding ”clock acceleration” $`a_t=2.810^{18}s/s^2`$ shown by the Compact High Accuracy Satellite Motion Progam analysis of Pioneer 10 data may also be interpreted as the analogous of the gravitational redshift linked to the extra potential term $`V_P=a_P𝑑r`$ associated to the scalar field. Indeed, the frequency drift $`\frac{d\mathrm{\Delta }\nu }{dt}`$ as well as the clock acceleration $`a_t=\frac{d(\frac{\mathrm{\Delta }\nu }{\nu })}{dt}`$ follow from the relation $`\frac{\mathrm{\Delta }\nu }{\nu }=\frac{V_P(r_{})V_P(r)}{c^2}=\frac{1}{c^2}_r_{}^{r_{}+ct}a_P𝑑r`$, where $`r_{}`$ denotes the orbital radius of the Earth and $`r=r_{}+ct`$ (one way, as considered by the authors) is the distance of the spacecraft from the Earth. Therefore, on account that $`dr=cdt`$ for the photons (one way), one obtains the observed relation $`a_P=a_tc`$. In this way, the identity $`a_P=a_tc`$ seems more natural since, in this approach, it is indeed the photons received on Earth from the spacecraft that are concerned instead of the spacecraft themselves. ## 7 Conclusion In this paper, we have presented a possible explanation of the ”Pioneer effect”, without being in conflict with the Viking data or the planetary ephemeris. This is based on a possible interaction of the spacecraft with a long-range scalar field, $`\varphi `$, which respects the equivalence principle. Like any other form of matter-energy, the $`\varphi `$-field is a gravitational source through its energy-momentum tensor. Conversely, its source is the Newtonian potential of the ordinary matter (in this case, the Sun). The calculations were performed in the weak fields approximation, with $`U`$ a quartic self-interaction potential. They gave, near the spacecraft, a residual radial acceleration directed towards the Sun, with a magnitude $`a_P`$ asymptotically constant (in region II). Both $`a_P`$ and the corresponding clock acceleration, $`a_t`$, computed from our formulas are in fairly good agreement with the observed values. Moreover, a scalar field of mass $`m_\varphi 1.810^{17}eV/c^2`$ will be expelled from region I in a way quite analogous to the Meisner effect in a superconducting medium. This limits $`a_P`$ to no more than $`0.110^8`$ cm/s<sup>2</sup>, from the radius of the Sun to $`r_c=3.9`$ AU. It is also found that the $`a_P`$ term should be accompanied with damped oscillations in the intermediary region between region II and region III. We also predict beyond $`r_{III}=97`$ AU (that is, about the year 2009 or so, for Pioneer 10) the vanishing of $`a_P`$ in the form of damped oscillations. Furthermore, the scalar field theory, as developped in this paper, also gives good fits for the rotational curves of spiral galaxies as shown in a previous study. Moreover, the same field acts at the cosmological scales like a cosmological constant. Preliminary estimations (Mbelek and Lachièze-Rey, in preparation) from the dynamics of the external region of the dwarf galaxy DDO 154, actually the sole galaxy for which the edge of the mass distribution has been reached (see Carignan & Purton ) led to a value $`\mathrm{\Omega }_\mathrm{\Lambda }=0.43`$ (H<sub>0</sub>/100 km s<sup>-1</sup> Mpc<sup>-1</sup>)<sup>-2</sup> in fairly good agreement with the value $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$ deduced from the Hubble diagram of the high-redshift type Ia supernovae (Perlmutter et al. , Schmidt et al. , Riess et al. , Garnavich et al. ) for H<sub>0</sub> about $`75`$ km s<sup>-1</sup>/Mpc.
no-problem/9910/cond-mat9910444.html
ar5iv
text
# Coulomb Blockade Ratchet ## Abstract We investigate the transport properties of a new class of ratchets. The device is constructed by applying an ac voltage to the metallic single electron tunneling transistor, and a net transport current is induced by the time-dependent bias-voltage, although the voltage value is on the average zero. The mechanism underlying this phenomenon is the Coulomb blockade of the single electron tunneling. The directions and the values of the induced net currents can be well controlled by the gate-voltage. A net transport current has also been observed even in the absence of the external bias-voltages, which is attributed to the noise in the circuit. The principle of mechanical ratchets has been analyzed in the course of the development of modern physics. Recently, the idea of Feynman’s thermal ratchet has been generalized to account for the macroscopic motion of particles in a unbiased asymmetric periodic potential such as material transport in biological systems. A quantum rectifier using the superposition of a sinusoidal oscillation and its second harmonic with a phase shift as bias-voltage applied to a periodic symmetric potential has been predicted to be able to create a net current flow. The prediction has been confirmed by experiments using a 2D-array of triangular-shaped anti-dots. A ballistic rectifier using a single anti-dot as an asymmetric artificial scatterer in a semiconductor microjunction has been demonstrated experimentally to guide carriers in a predetermined spatial direction, thus behaving like a four-diode-bridge rectifier. A geometric quantum ratchet has also been realized by applying an ac voltage bias to a triangular-shaped quantum dot to create a net current. In these two cases, there are no periodic potentials in the system. Instead, the charge carriers move randomly in an asymmetric structure in the direction of the transport current, and can drift out to create a current. In this letter, we present our investigation of a novel class of ratchets. In our device, we do not need external electrostatic potentials to confine the charge carriers, namely electrons, neither do we need specially prepared geometrical confinement. The central part of our ratchet is formed by a metallic grain of arbitrary shape, which is brought within a distance of a couple of Ångstrom from two metallic leads. Electrons can tunnel between the leads and the grain. If the grain is sufficiently small, and coupled properly to voltage sources, such a device is actually the single electron tunneling transistor (SET), where the central grain is called the island, which is coupled to the gate voltage $`V_g`$ via the gate capacitance $`C_g`$, and the leads are directly coupled to the transport voltages. The transport current as a function of dc bias- and gate-voltage has been extensively investigated both theoretically and experimentally in recent years. It becomes clear that in such a device with sufficiently large tunnel resistances at sufficiently low temperatures, tunneling of even a single electron is not allowed for vanishing gate voltage as long as the dc bias-voltage is smaller than the threshold value $`V_c=e/2(C_1+C_2+C_g)`$, where $`C_1`$ and $`C_2`$ are the capacitances of the tunnel junctions. As a result, there is a finite bias-voltage, but no current flows. This phenomenon is known as the Coulomb blockade, which can be lifted out by increasing the transport voltage or tuning the gate voltage. Distinct Coulomb staircases in the $`IV`$ curves have been observed in our device when the parameters of the two tunnel junctions are made different from each other. For the SET used in our present investigation, the $`IV`$ curve is shown in Fig. 1 for $`V_g=0`$ and $`T=4.2K`$. The corresponding parameters of the device are found to be $`C_1=7.6`$ aF, $`R_1=1.0`$ M$`\mathrm{\Omega }`$, $`C_2=7.6`$ aF, $`R_2=105.0`$ M$`\mathrm{\Omega }`$, $`C_g=2.0`$ aF, and $`E_c=4.7`$ meV. If the circuit is biased by an ac voltage, the average bias voltage is zero, the average transport current would be, prima facie, also zero just as in the case of vanishing transport voltage. This is certainly true for classical tunnel junctions, where the capacitances of the junctions are large, and thus the Coulomb charging energies are negligible compared to the thermal energy even at fairly low temperatures so that the I-V characteristics are Ohmic. For our device with the charging energy of $`4.7meV`$, however, the I-V characteristics at liquid helium temperature are strongly nonlinear due to the Coulomb blockade, and the relation $`I(V)=I(V)`$ is in general not satisfied. Therefore the net current is nonvanishing, even if the average transport voltage is zero. To get a pronounced net current induced by the ac bias-voltage, the two tunnel junctions should have different parameters so that the I-V curves are strongly asymmetric with respect to the applied voltages. In the experiment reported here, we choose $`R_1R_2`$, and indeed we have observed clearly transport currents induced by ac bias-voltages, which have well-defined lineshapes extended periodically for a wide range of the gate-voltage. Moreover, we can control both the direction and the magnitude of the net currents by tuning the gate-voltage. The experimental data are found to agree very well with the theoretical calculations based on the constant charging energy model. Since the only physical reason that we obtain the net current by applying ac bias-voltage is the Coulomb blockade of the single electron tunneling, we call our device a Coulomb blockade ratchet. The rectifier effect in a lateral 2D quantum dot SET made in a semiconductor nanostructure has been observed by Weis et al. However, in a semiconductor SET the transport cannot be simply attributed to the Coulomb blockade, but depends on the details of the device such as material, external confinement, and coupling of the energy levels in the quantum dot to the leads. On the contrary, our device is merely characterized by junction resistances, junction capacitances, and gate capacitance, and the induced net current is perfectly periodic in the gate-voltage, and determined explicitly by the above-mentioned parameters of the device. In addition to the case of the ac biased circuit, we have also studied the transport properties of the SET driven by noise sources, which lead to detectable net currents, even in the absence of any applied ac voltages. The transport theory of the SET in the parameter range of our devices biased by dc voltages is well-established. The dc current of the SET can be calculated via $`I(V)=e{\displaystyle \underset{n=\mathrm{}}{\overset{\mathrm{}}{}}}\sigma (n,V)[\mathrm{\Gamma }_1^+(n,V)\mathrm{\Gamma }_1^{}(n,V)]`$ (1) $`=e{\displaystyle \underset{n=\mathrm{}}{\overset{\mathrm{}}{}}}\sigma (n,V)[\mathrm{\Gamma }_2^+(n,V)\mathrm{\Gamma }_2^{}(n,V)].`$ (2) Here the tunneling rates of an electron tunneling from (-) or onto (+) the central island with n excess electrons via the first (1) or the second (2) junction are labeled by $`\mathrm{\Gamma }_{1,2}^\pm (n,V)`$, and the probability of $`n`$ excess electrons on the island is denoted by $`\sigma (n,V)`$. Since the resistances of the tunnel junctions in our experiments are typically of the order of 100 $`M\mathrm{\Omega }`$, and the capacitances are of the order of 10 $`aF`$, the RC time is thus of the order of $`10^9s`$, which is much larger than the tunneling time. The latter is of the order of $`10^{15}s`$ for metallic tunnel junctions. In the experiments, we have varied the frequencies of the ac voltages $`\omega /2\pi `$ between $`100Hz`$ and $`10MHz`$. In this frequency range the period of the bias signal $`𝒯=2\pi /\omega `$ is much larger than the intrinsic characteristic time-scales of the SET, such as electron tunneling time or capacitance charging time, so that the SET can follow the variation of the ac signals adiabatically. Hence the induced net current is given by the mean value of the transport voltage, $`I_{\mathrm{net}}=I[V(\phi )]`$, where $`I[V(\phi )]`$ is given by Eq. 1 with $`V(\phi )=V_{\mathrm{am}}\mathrm{cos}(\phi )`$ and $`\phi =\omega t`$ . Since $`I[V(\phi )]`$ is a periodic function of time, the mean value of it can be calculated either within a time interval much larger that the period $`𝒯`$, or within a period of the applied ac voltage. Apparently, the net current expressed in the above formula has no explicit frequency-dependence, which is confirmed by our experiments for various frequencies between $`100Hz`$ and $`10MHz`$. The fabrication and operation of our SET has been published elsewhere, and will not be repeated here. The only difference is that the island and the leads in the present device are made of palladium instead of gold. The tunnel gaps were deliberately tuned to be strongly asymmetric in resistance in order to enhance the ratchet effect. To give a comprehensive description of the device, we have investigated the net current as a function of the gate voltage for various amplitudes of the ac bias voltages. For large amplitudes of the bias-voltages, we found that by decreasing the amplitude of the ac bias-voltages, the amplitudes of the induced currents get smaller rapidly, and the lineshapes shrink. Typical curves are shown in Fig. 2 (dots) for large amplitude of the applied ac voltage where the lineshapes are sine-like, and in Fig. 3 (dots) for intermediate amplitude of the applied ac voltage where the variation of the net current is relatively slow in the Coulomb blockade regime. In the figures the corresponding theoretical curves of the net current induced by the applied bias-voltage as a function of the gate-voltage calculated according to Eq. 1 are also shown (dashed lines). By further decreasing amplitude of the applied ac voltage the induced net current is expected to decrease and finally vanish for zero amplitude. However, in the experiments, we observed a fairly pronounced transport current as a periodic function of the gate-voltage, even when our applied ac bias-voltage was vanishing, as shown by the dots in Fig. 4. We explain this by considering that the input terminal of the device picks up the external noise that acts as a background, time-dependent bias-voltage with zero average values. In our experimental setup this is reasonable because there are coaxial cables several meters long between the instruments/filtering at room temperature, and the sample. The presence of noise sets the lower limit for the amplitude of the induced current. By measuring the input noise to the SET in the frequency range of $`10Hz`$ to $`100kHz`$, we found that the dominating noise source has a broad spectrum with almost constant intensity smaller than $`10^4mV/\sqrt{Hz}`$. However this broad band noise is unlikely to induce the measured net current, because the induced net current depends strongly on the amplitude of the ac bias-voltage. For the parameters of our device, the net current induced by the applied ac bias-voltage becomes smaller in the order of magnitude if the amplitude of the ac bias-voltage decreases from a few $`mV`$ to a value smaller than $`1mV`$. This is at least one order of magnitude larger than the rms value of the broad band noise voltage in our circuit. Even if the rms value of the broad band noise is larger than $`1mV`$, the amplitude of the signal in each small frequency interval is still very small, while the bandwidth of the noise spectra remains large. As a consequence, the contribution of the broad band noise to the induced net current is negligible. To consolidate this argument, we have deliberately applied a broad band noise source with the rms value of input voltage as large as $`8.5mV`$ to the device. As shown by the crosses in Fig. 4, this additional noise source has indeed little effect on both the amplitude and the lineshape of the net current as compared to the one of the unbiased device. We attribute the net current in the unbiased device to the noise source with large amplitudes in the high frequency range. The measured data can be fitted reasonably well by the standard sequential tunneling theory if one chooses the amplitude of the fitting ac bias-voltage to be $`2.3mV`$, as shown by the dashed line in Fig. 4. Furthermore, we have calculated the induced net current as a function of the gate-voltage in the presence of both the applied ac voltage and the circuit noise voltage. The circuit noise is modeled by the ac voltage with the amplitude obtained from Fig. 4, and with the frequency much higher than that of the applied voltage. Then the current for a given point of the applied voltage is calculated as the mean value with respect to the circuit noise in a time interval much smaller than the period of the applied voltage, but much larger than the period of the modeled ac voltage of the circuit noise, i.e. $`\stackrel{~}{I}[V(\phi )]=I[V_{\mathrm{am}}\mathrm{cos}(\phi )+\stackrel{~}{V}_{\mathrm{am}}\mathrm{cos}(\stackrel{~}{\phi })]`$, and the observed current is thereafter calculated as the mean value with respect to the applied ac signal, $`\stackrel{~}{I}_{\mathrm{net}}=\stackrel{~}{I}[V(\phi )]`$. The results are shown by the solid lines in Fig. 2 and Fig. 3. For a fairly large amplitude of the ac bias-voltage, the influence of the circuit noise is weak, as shown in Fig. 2, while for an intermediate amplitude of the ac bias-voltage, the influence is significant. As shown in Fig. 3, the result of the ac plus noise bias-voltage agrees very well with the experimental data for the whole curve, while the one of the pure ac bias-voltage gives the correct amplitude and the lineshape near to the resonance at $`C_gV_g=e/2`$ module e, yet a somewhat smaller value near to the Coulomb blockade at $`C_gV_g=0`$ module e. From the above analysis, it seems that the measured unbiased net current is very likely to be induced by the circuit noise with large amplitudes in the high frequency range. In summary, we have investigated the Coulomb blockade ratchet using the metallic single electron tunneling transistor. We have observed fairly long periods of the net currents induced by the ac bias-voltages, which agree very well with the sequential tunneling theory. In the absence of applied voltages, we also observed pronounced, periodic net currents, which are probably induced by the circuit noise. We have hence shown how the background noise is, in a ratchet-like fashion, transformed by the single electron device into a net current of electrons. The authors would like to thank H. Linke for valuable comments on our manuscript, and A. Löfgren, P. Omling, and H. Xu for stimulating discussions.
no-problem/9910/nucl-th9910051.html
ar5iv
text
# Nuclear Matter with Quark-Meson Coupling II: Modeling a Soliton Liquid ## 1 Introduction This is the second of two papers that seek to select and develop a soliton model of nucleons for application to dense matter and, in particular, the transition to deconfined quark matter. In the previous paper, hereafter referred to as (I), we presented the motivation for our choice of a particular class of nontopological soliton models, the Friedberg-Lee type models , which have explicit quark degrees of freedom and a dynamical confinement mechanism resulting from a composite scalar gluon field that forms a solitonic bag in which constituent quarks then reside. We considered extensions of these models that include explicit meson degrees of freedom coupled linearly to the quarks in order to obtain a reasonable description of nuclear interactions. For simplicity, we have studied — and here also study — only models that include scalar and vector mesons, neglecting in these initial investigations any possible explicit effects of pions in dense matter (although the scalar meson can be considered an effective two-pion resonance). We avoid any double counting of hadronic degrees of freedom by including only nuclear constituent quarks in our calculations. Gluons enter our calculations only through the scalar glueball field; perturbative gluonic effects are ignored in nuclear matter. The distinguishing feature between the various models we considered in (I) is the precise form of the coupling between the quarks and the glueball field. We compared the various FL models by studying dense matter within the Wigner-Seitz approximation. In general, one can find parameter choices that produce reasonable results for free nucleon properties independent of the particular form of the quark-glueball coupling. It is in dense matter, where quark bags begin to touch, that the various models are distinguished. We found that models which have a quark-glueball coupling in accord with the dictates of the chiral chromodielectric model ($`\chi `$CD) show a behavior more in line with phenomenology. In these models, the quark-gluon coupling is of leading order two or greater in the glueball field, which is essential for the elimination of transitions to unphysical quark plasma phases at unrealistically low densities. Furthermore, it was found that coupling the quarks to a scalar meson field ensures saturation. The quark-meson coupling is taken to be independent of the glueball field within the mean field approximation to avoid unphysical transitions in dense matter, as was detailed in (I). Here we wish to further study the model selected in (I) by going beyond the relatively rough approximations used there in modeling the liquid state. The Wigner-Seitz approximation consists in assuming that each nucleon is confined by interactions with its nearest neighbors to a given volume, equal to the inverse of the baryon density, known as the Wigner-Seitz cell. For a solid cubic lattice, for example, the Wigner-Seitz cell is a cube. Here, we are interested not in the solid but rather the liquid state, and so the usual choice is to take the Wigner-Seitz cell to be a sphere in the hope that one thereby better models the disorder of a liquid. This, however, is clearly not enough: if we look at models of the liquid state used by physical chemists in order to describe molecular liquids (models that passed out of use several decades ago after the development of large-scale computers enabling the use of molecular dynamics and Monte Carlo techniques), the Wigner-Seitz approximation employed in (I) corresponds to the cell model of Lennard-Jones and Devonshire , which does much better at reproducing the solid state than the liquid state . Instead, we shall employ a refinement of the cell model — namely, the Significant Structure Theory of Jhon and Eyring — which introduces holes into the system in order to account for the disorder present in liquids. As a matter of fact, in (I) we assumed that the Wigner-Seitz cell is simply a sort of “average snapshot” of the nuclear medium felt by the quarks inside an otherwise freely moving nucleon. Thus we proceeded by subtracting away energy due to spurious center of mass motion (due to the fact that the nucleon was not constructed by putting the quarks in a good momentum state), and then took the kinetic energy of the system to be that of a free Fermi gas. Clearly, this approximation can only be justified at low densities. As the density increases the motion of an individual nucleon is affected by the medium, and this leads us to consider the Wigner-Seitz cell not just as a boundary upon the quark wave functions that build up a nucleon, but also as a restriction upon the motion of the nucleon itself. This leads to the considerations of the previous paragraph, which shall be further developed in Sec. 3. The outline of the paper is as follows. The nontopological soltion model used is reviewed in Sec. 2. In Sec. 3 we discuss various attempts to model a soliton liquid, then motivate and introduce the particular model based on siginificant liquid structures that we shall use here. In Sec. 4 we present the resulting equations of state for nuclear matter and discuss the transition to quark matter. A general summary and discussion of the two papers is given in Sec. 5. ## 2 The Model The nontopological soliton model we study here is based upon the chiral chromodielectric model of Fai, Perry and Wilets . In its full version, the model contains quark and gluon degrees of freedom. A scalar glueball field $`\sigma `$ couples to the quarks, and colored gluons $`A_\mu ^a`$are treated perturbatively. The scalar field provides absolute confinement of both quarks and gluons and gives consituents a mass. Meson exchange is surely present in this model, but for simplicity we alter the original $`\chi `$CD by dropping the gluon field $`A_\mu ^a`$ and ignoring sea quarks. Instead, as in quark-meson coupling models, we introduce a scalar meson $`\varphi `$. The vector meson $`V_\mu `$, which provides repulsion in quantum hadrodynamics, is not necessary here since the soliton structure provides repulsion between nucleons, and so for simplicity we set $`V_\mu =0`$. We assume the scalar meson couples linearly to the quarks and take the quark-meson vertex to be independent of $`\sigma `$. The Lagrangian density for our model is $``$ $`=`$ $`\overline{\psi }\left[i\gamma ^\mu _\mu g(\sigma )g_s\varphi \right]\psi +{\displaystyle \frac{1}{2}}_\mu \sigma ^\mu \sigma U(\sigma )`$ (1) $`+{\displaystyle \frac{1}{2}}_\mu \varphi ^\mu \varphi {\displaystyle \frac{1}{2}}m_s^2\varphi ^2{\displaystyle \frac{1}{4}}F_{\mu \nu }F^{\mu \nu },`$ where $$U(\sigma )=\frac{a}{2!}\sigma ^2+\frac{b}{3!}\sigma ^3+\frac{c}{4!}\sigma ^4+B.$$ (2) We have set the current quark masses to zero. The parameters are as chosen in (I). The constants of the potential are $`a=50`$fm<sup>-2</sup>, $`b=1300`$fm<sup>-1</sup> and $`c=10^4`$, so that $`U(\sigma )`$ has a local minimum at $`\sigma =0`$ and a global minimum at $`\sigma =\sigma _v=0.285`$fm<sup>-1</sup>, the vacuum value. The mass of the glueball excitation associated with the $`\sigma `$ field is $`m_{GB}=\sqrt{U^{\prime \prime }(\sigma _v)}=1.82`$GeV and the bag constant is $`B=46.6`$MeV/fm<sup>3</sup>. The quark-$`\sigma `$ coupling is $$g(\sigma )=g_\sigma \sigma _v[\frac{1}{\kappa (\sigma )}1],$$ (3) where we choose the chromodielectric function $`\kappa (\sigma )`$ to be $$\kappa (\sigma )=1+\theta (x)x^3[3x4+\kappa _v];x=\sigma /\sigma _v,$$ (4) In the following we take $`g_\sigma =3`$ and $`\kappa _v=.1`$. We solve the Euler-Lagrange equations in the mean field approximation, replacing the glueball field $`\sigma `$ by the classical soliton solution $`\sigma (\stackrel{}{r})`$ and the meson field by its expectation value in the nuclear medium $`<\varphi >=\varphi _0`$. The resulting equations for the quark and the scalar soliton field are solved in a Wigner-Seitz cell of radius $`R`$ by implementing boundary conditions based upon Bloch’s theorem, as detailed in (I). The quark spinor in the lowest band is assumed to be an s-state $$\psi _k=\left(\begin{array}{c}u_k(r)\\ i\sigma \widehat{r}v_k(r)\end{array}\right)\chi ,$$ (5) and we make the simplifying assumption of identifying the bottom of the lowest band by the demand that the derivative of the upper component of the Dirac function disappears at $`R`$, and the top of that band by the demand that the value of the upper component is zero at $`R`$ . The resulting equations for the spinor components are $$\frac{du_k}{dr}+\left[g(\sigma )g_s\varphi _0+ϵ_k\right]v_k=0$$ (6) $$\frac{dv_k}{dr}+\frac{2v_k}{r}+\left[g(\sigma )g_s\varphi _0ϵ_k\right]u_k=0.$$ (7) The equation for the soliton field is $$^2\sigma +U^{}(\sigma )+g^{}(\sigma )\rho _s(r)=0.$$ (8) The quark density $`\rho _q`$ and the quark scalar density $`\rho _s`$ are given by $$\rho _q(r)=\frac{n_q}{4\pi \overline{k}^3/3}_0^{\overline{k}}d^3k\left[u_k^2(r)+v_k^2(r)\right],$$ (9) $$\rho _s(r)=\frac{n_q}{4\pi \overline{k}^3/3}_0^{\overline{k}}d^3k\left[u_k^2(r)v_k^2(r)\right],$$ (10) where the band is filled up to $`\overline{k}`$. The quark functions are normalized to unity in the Wigner-Seitz cell. The boundary conditions for the soliton field are $`\sigma ^{}(0)=\sigma ^{}(R)=0`$. The boundary conditions for the quark functions at the origin are given by $`u(0)=u_0`$ and $`v(0)=0`$, where $`u_0`$ is determined by the normalization condition $$_0^R4\pi r^2𝑑r(u(r)^2+v(r)^2)=1.$$ (11) The boundary conditions at $`r=R`$ are given by $`u_b^{}(R)=0`$ and $`v_b(R)=0`$ for the bottom of the lowest band, and $`u_t(R)=0`$ for the top of this band. Using these equations we can solve for the corresponding $`ϵ_b`$ and $`ϵ_t`$. We assume the tight-binding dispersion relation $`ϵ_k=ϵ_b^2+(ϵ_tϵ_b)\mathrm{sin}^2(\pi s/2)`$, with $`s=k/k_t`$, and that the band is filled right to the top — that is, $`\overline{k}=k_t`$. With this dispersion relation and filling, the nucleon energy is given by $$E_N=3n_q_0^1𝑑ss^2\left\{ϵ_b+(ϵ_tϵ_t)\mathrm{sin}^2\left(\frac{\pi s}{2}\right)\right\}+_0^R4\pi r^2𝑑r\left[\frac{1}{2}\sigma ^{}(r)^2+U(\sigma )\right].$$ (12) In order to correct for the spurious center of mass motion in the Wigner-Seitz cell the nucleon mass at rest is taken to be $$M_N=\sqrt{E_N^2<P_{cm}^2>_{WS}},$$ (13) where $`<P_{cm}^2>_{WS}=n_q<p_q^2>_{WS}`$ is the sum of the expectation values of the squares of the momenta of the $`n_q`$=3 quarks. At low density the band width vanishes and the quarks are confined in separate bags. Then we can assume the individual nucleons move around as a gas of fermions with effective mass $`M_N`$ given by Eq. (13), so the nucleon energy is $`E_N^{(g)}=\sqrt{M_N^2+k^2}`$. The total energy density at nuclear density $`\rho _B`$ is thus $$_g=\frac{\gamma }{(2\pi )^3}_0^{k_F}𝑑\stackrel{}{k}\sqrt{M_N^2+k^2}+\frac{1}{2}m_s^2\varphi _0^2,$$ (14) where $`\gamma =4`$ is the spin-isospin degeneracy of the nucleons. The Fermi momentum of the nucleons is related to the baryon density through the relation $$\rho _B=\frac{\gamma }{6\pi ^2}k_F^3=\frac{3}{4\pi R^3}.$$ (15) The total energy per baryon is given by $`E_g=_g/\rho _B`$. We have used the label $`g`$ to indicate that these expressions correspond to a gas-like phase. The constant scalar meson field $`\varphi _0`$ is determined by the thermodynamic demand of minimizing $``$: $$\frac{_g}{\varphi _0}=0.$$ (16) Our mean field equations are similar to those of quantum hadrodynamics, the difference here being that the nucleon now has structure and thus the meson field couples to the nucleon through its quarks. Let us review the treatment of dense matter in (I). We started with the so-called Wigner-Seitz approximation, which is often used in soliton calculations, since it is the simplest picture of dense matter available. In this approximation, each soliton is confined to a unit cell, which we can view from two different perspectives. If we choose periodic conditions at the cell boundary, we have the usual Bloch approach to the solid state: the quarks play the role of the electrons and the scalar glueball field takes the role of the ions in the usual crystal formulation. If instead we want to model the liquid state, we need to somehow introduce some disorder into the system. From this view, then, we take the Wigner-Seitz approximation as a sort of averaging over the rest of the system: for the purposes of constructing an individual nucleon, we ignore the motion of the nucleons, instead adding “by hand” the kinetic energy of a free gas to the system. This is justified to the extent that each nucleon’s motion describes slow degrees of freedom, whereas the constituents are fast degrees of freedom that react essentially instantaneously to changes in the relative arrangement of the nucleons — that is, if the Born-Oppenheimer approximation is valid. Thus in (I) we calculated the equation of state of nuclear matter within the second scheme. The mass of the nucleon was calculated as a function of the radius of the (spherical) Wigner-Seitz cell, for which it was then necessary to subtract away spurious center-of-mass motion. This was done using approximate relations discussed in detail in . Note that within the first scheme, the quarks are not assumed to be in a state of good momentum, and this kinetic energy is not spurious. ## 3 Significant Structure Theory In (I) we used the Wigner-Seitz approximation to determine the effective nucleon mass in nuclear matter, but then gave the nucleons the kinetic energy of a Fermi gas. In this approximation, the quarks feel the nuclear medium, but the nucleons themselves do not. That is, we assume the quarks adjust instantly to the medium, forming 3$`q`$ collective states that move relatively slowly and essentially freely through the medium. The interactions between nucleons occur only indirectly through the effective mass and the mesonic mean field. Clearly, such an approximation cannot be accurate at high densities, when the finite size of the nucleon becomes important, for nucleons will not then move freely. Instead, we need to model the liquid state, where any individual nucleon will range about the system over long time scales, but will be localized on shorter time scales. One would still like to approach the problem using the Wigner-Seitz approximation, insisting now that the nucleon also feels the medium and does not leave the cell. Clearly, this is a very restrictive assumption, and physical chemists long ago understood that this corresponds more closely to the solid than the liquid state. Nevertheless, this will provide a starting point for the model of the liquid state we shall adopt in the following, so let us pursue this approach further. Ideally, we would like to allow our nucleon to rattle about in its Wigner-Seitz cell in order to extract a potential. This would entail dropping the assumptions of spherical symmetry for the bag and the use of only $`s`$-wave quark states. Instead, we shall attempt to model the nucleon’s motion at high density as follows. First, we assume that the motion is harmonic — that is, the center of mass $`𝐑_{cm}`$ of the nucleon is never far from the center of the WS cell $`𝐑_0=0`$. Now, the average of a harmonic potential in the ground state is $$V=\frac{1}{2}M_N\omega _N^2𝐑_{cm}^2=\frac{3}{4}\omega _N,$$ (17) from which we find the natural frequency $`\omega _N`$ as a function of the effective nucleon mass $`M_N`$ and the mean square of the center of mass coordinate. Next, we identify $`𝐑_{cm}`$ as the center of the quark distribution in the cell, ignoring thereby any motion of the soliton bag $`\sigma `$. (Viewing Fig. (3) of (I), we see that the soliton bag begins to be “squeezed” by the WS boundary at $`R1.5`$fm, so that at least for densities higher than this we might argue that the bag is essentially fixed.) Taking $`𝐑_{cm}`$ to be the center of mass coordinate of three quarks, we have $$𝐑_{cm}^2=\left[\frac{1}{3}(𝐫_1+𝐫_2+𝐫_3)\right]^2=\frac{1}{3}r_q^2_{WS}.$$ (18) Now we identify the last average with the mean square charge radius of the nucleon $`r^2_{WS}`$ $`=`$ $`{\displaystyle \frac{_0^Rd^3rr^2\rho _q(r)}{_0^Rd^3r\rho _q(r)}}`$ $`=`$ $`{\displaystyle \frac{1}{\frac{4}{3}\pi \overline{k}^3}}{\displaystyle _0^{\overline{k}}}d^3k{\displaystyle \frac{_0^Rd^3rr^2\left[u_k^2(r)+v_k^2(r)\right]}{_0^Rd^3r\left[u_k^2(r)+v_k^2(r)\right]}}.`$ This is clearly only a rough estimate of the center of mass motion of the nucleon. The nucleon energy in the solid is now $$E_N^{(s)}=M_N+\frac{3}{2}\omega _N,\mathrm{with}\omega _N=\frac{3}{2M_NR_{cm}^2}.$$ (20) This approximation corresponds to subtracting away spurious kinetic energy from the soliton energy Eq. (12), namely, $`E_{sp}=E_NE_N^{(s)}`$ $`=`$ $`\sqrt{M_N^2+P_{cm}^2_{WS}}M_N{\displaystyle \frac{3}{2}}\omega `$ $``$ $`{\displaystyle \frac{1}{2M_N}}\left(P_{cm}^2_{WS}{\displaystyle \frac{9}{2R_{cm}^2_{WS}}}\right).`$ Approximating $`R_{cm}^2`$ by $`\frac{1}{3}r^2_{WS}`$ can only be valid at high density. At low density, this surely breaks down for then the quarks cannot reach the WS boundary unless the bag itself is allowed to move. Moreover, the present approximation corresponds to treating the soliton matter as an Einstein solid (we have implicitly averaged over the Bloch momenta $`𝐊`$ corresponding to the lattice of nucleons: a more accurate treatment of the solid would put the three quarks in a state of good $`𝐊=𝐤_1+𝐤_2+𝐤_3`$ and find a frequency $`\omega _N(𝐊)`$ that is a function of the Bloch momentum). The total energy density in this “solid” phase is $$_s=\rho _B\left(M_N+\frac{3}{2}\omega _N\right)+\frac{1}{2}m_s^2\varphi _0^2.$$ (22) The constant scalar meson field $`\varphi _0`$ is again determined by the thermodynamic demand of minimizing $``$. Both $`M_N`$ and $`\omega `$ depend implicitly upon $`\varphi _0`$, so that this equation must be solved iteratively in conjunction with those for $`\sigma `$ and $`\psi `$. This gives the equation of state for solid nuclear matter, which is not of much interest in itself. However, this is useful for building a model of the liquid state at high density based upon significant structure theory . The essential idea is to isolate those configuations that make the significant contribution to the partition function. Based upon experimental observations of molecular liquids, the liquid state is viewed as a close-packed lattice with holes present that destroy any long-range order. The volume of the system increases by increasing the number of holes, with the volume of each hole equal to the average volume occupied by a close-packed molecule, since this balances the competing demands for greater entropy and lower energy. A molecule neighboring a hole can move into the vacancy, creating a new hole at the site it left. Each hole thus replaces three vibrational with three translational degrees of freedom. Thus the liquid is represented as a combination of molecules with solid-like properties (those next to filled sites) and gas-like properties (those next to vacant sites). Now consider this model of the liquid state applied to dense solitonic matter. (Such an application has been studied previously for dense skyrmion matter in Ref. .) First, we note that our assumptions of a spherical Wigner-Seitz cell, which reflects a sort of average over nearest neighbor positions, and the Einstein approximation that $`\omega _N`$ is independent of the Bloch momentum $`𝐊`$, which ignores long-range correlated vibrations of the nucleons — approximations that would be rather severe if we truly wished to model a solid — are instead appropriate for the present application. We need only add the holes to ensure the disorder corresponding to a liquid state. So let $`V_l`$ be the total volume of the liquid and $`v`$ be the volume of each cell (occupied or unoccupied). If there are $`N`$ nucleons and $`N_h`$ holes, then $`V_s=Nv`$ is the total volume of the occupied cells and $`V_g=V_lV_s=N_hv`$ is the total volume of the holes. On average, a nucleon will encounter a neighbor on a fraction $`\frac{N}{N+N_h}=\frac{V_s}{V_l}`$ of its trips and a hole on a fraction $`1\frac{V_s}{V_l}=\frac{V_g}{V_l}`$ of its trips. Thus there are $`3N\frac{V_s}{V_l}`$ solid-like and $`3N\frac{V_g}{V_l}`$ gas-like degrees of freedom, and the liquid partition function is $$Z_l=Z_s^{N\frac{V_s}{V_l}}Z_g^{N(1\frac{V_s}{V_l})}.$$ (23) This results in the following nucleon energy in the liquid state: $$E_N^{(l)}=\frac{V_s}{V_l}E_N^{(s)}(v)+\frac{V_lV_s}{V_l}\stackrel{~}{E}_N^{(g)}(n_g),$$ (24) where $`n_g=\frac{N_g}{V_g}=\frac{N}{V_l}`$ is the “density” of the gas-like part of the system, which from the last equality is equal to the baryon density $`\rho _B`$. The average solid-like nucleon energy $`E_N^{(s)}(v)`$ is given by Eq. (20), with $`M_N`$ and $`\omega _N`$ depending upon $`v`$ through the Wigner-Seitz radius $`R=(3v/4\pi )^{1/3}`$. The average gas-like nucleon energy is instead $$\stackrel{~}{E}_N^{(g)}(n_g)=\frac{3\gamma }{4\pi k_g^3}_0^{k_g}d^3k\sqrt{M_N^2(v_l)+k^2},$$ (25) with $`k_g=(6\pi ^2n_g/\gamma )^{1/3}=(6\pi ^2\rho _B/\gamma )^{1/3}`$ and $`v_l=V_l/N=1/n_g`$. The total energy density of the system in the liquid phase is then $$_l(v)=\rho _B^2vE_N^{(s)}(v)+\rho _B(1\rho _Bv)\stackrel{~}{E}_N^{(g)}(\rho _B)+\frac{1}{2}m_s^2\varphi _0^2\frac{1}{2}m_V^2V_0^2.$$ (26) Once again, $`\varphi _0`$ is determined by minimizing $`_l`$, and so for nonzero $`g_s`$ we must solve anew the equations for $`\sigma `$ and $`\psi `$ consistent with the mean field value $`\varphi _0(v,\rho _B)`$ determined from this new liquid equation of state. In Eq. (26), the cell volume $`v`$ is to be taken as a parameter. Note that the WS cell volume is no longer $`1/\rho _B`$ when holes are present. Instead, we can define a new radius $`R_l=(3/4\pi \rho _B)^{1/3}`$ which is half the average spacing between nucleons in the liquid state. There is some question as to whether one should take the effective nucleon mass in (25) to be $`M_N(v)`$ or $`M_N(v_l)=M_N(\rho _B^1)`$. Since a single nucleon can have both gas-like and solid-like degrees of freedom, it might be more consistent to use the former choice in calculating the gas-like part of the nucleon energy. However, there seems little reason to insist that the inertial masses corresponding to the solid-like harmonic motion and the gas-like translational motion are the same. Moreover, the latter choice guarantees that our liquid EOS reduces to the Fermi gas EOS given by (14) at low density. Thus our equation of state (26) interpolates smoothly between high-density and low-density pictures of the liquid state. ## 4 Results ### 4.1 Nuclear matter In Figs. 1-3 we show the energy per baryon of $`\chi `$CD solitonic nuclear matter for three different values of the quark-meson coupling. In each graph we display the “gas” equation of state (14) and a set of significant liquid structure curves characterized by various choices of the WS cell volume $`v=4\pi R^3`$ in (26). In addition, the upper curve labelled “solid” in these figures is given by (14) with the condition $`n_g=1/v`$ — or, equivalently, $`R_l=R`$ — which implies that the number of holes is zero, so that the nuclear motion is entirely solid-like. Note that our approximations in estimating $`\omega _N`$ break down at low density. In particular, the “solid” curve is not to be trusted above $`R1.5`$fm: here, the bag is no longer squeezed by the WS boundary and its motion probably can no longer be ignored. As $`v0`$, the significant structure curve approaches the “gas” curve, which certainly underestimates the energy per nucleon. Without empirical evidence of a transition from liquid to solid nuclear matter, we are unable to fix $`v`$ directly. (Some models predict such a transition; however, it is more widely believed that the transition to the quark-gluon plasma will preclude a transition to solid nuclear matter.) Here, we can just treat $`v`$ as a parameter in choosing the EOS that best fits the empirical data at the saturation point. We find that, for the parameter sets we studied, the curves that best fit the empirical EOS for nuclear matter are given by the values $`g_s=2,R=0.4`$fm and $`g_s=3,R=0.7`$fm. The values of the Fermi momentum, binding energy, and compression modulus at the saturation point are $`k_s=1.06`$fm<sup>-1</sup>, $`E_s=22`$MeV and $`K=1082`$MeV for the former, and $`k_s=0.95`$fm<sup>-1</sup>, $`E_s=24`$MeV and $`K=671`$MeV for the the latter curve. These are to be compared with the empirical values $`k_s=1.36`$fm<sup>-1</sup>, $`E_s=16`$MeV and $`K200`$MeV. In particular, we see a significant improvement in the value of the compression modulus with respect to the calculation of (I), although it is still rather high with respect to empirical values. ### 4.2 Quark matter In the high density limit, the preferred phase in the nontopological soliton models we are studying is a uniform plasma characterized by the solution $`\sigma =0`$. That is, the soliton bags disappear, and one is left with a quark gas. The energy density of the quark plasma is $$_q=\frac{3k_F^4}{2\pi ^2}+B,$$ (27) where the Fermi momentum is related to the baryon density $`\rho _B`$ as $`k_F=(6\pi ^2\rho _B/\gamma )^{1/3}`$ — the degeneracy of the quark gas is $`n_q\gamma `$ and the quark density is $`n_q`$=3 times the baryon density. The bag constant is $`B=46.6`$$`MeV/fm^3`$ for our choice of parameters. In contrast to our assumption about the solitonic phase, the energy density of the quark gas is altered significantly by perturbative gluonic contributions, especially at higher densities. In this phase, the $`\chi `$CD model in the mean field approximation is equivalent to perturbative QCD. As shown in , for example, adding the lowest order gluonic corrections then gives $$_q=\frac{3k_F^4}{2\pi ^2}\left\{1+\frac{2\alpha _s}{3\pi }+\frac{\alpha _s^3}{3\pi ^2}\left[6.79+2\mathrm{ln}\left(\frac{2\alpha _s}{\pi }\right)\right]\right\}+B,$$ (28) where the leading-logarithm expression for the strong coupling constant is $$\alpha _s(k_F)=\frac{6\pi }{29\mathrm{ln}(k_F/\mathrm{\Lambda })}.$$ (29) We take the QCD scale parameter to be $`\mathrm{\Lambda }180`$-$`200`$MeV. In Fig. 4 we show the two nuclear matter curves selected above along with the quark-gluon plasma EOS given here. In addition, we show an “empirical” nuclear matter EOS given by $$E_B\frac{K}{18}\left(\frac{k_F^3}{k_s^3}1\right)^2+M_N^{(as)}E_s,$$ (30) with $`M_N^{(as)}E_S=1160`$MeV and $`K=200`$MeV. (The bag constant $`B`$ appearing in the quark-gluon plasma energy has been set in fitting the free soliton mass and rms radius, and thus for purposes of comparison we take the low density limit of the “empirical” curve to coincide with the free soliton mass.) Note that the $`g_s=2`$ shown here has a transition to the solid state at $`k_F=(9\pi )^{1/3}/2R=3.8`$fm<sup>-1</sup> and the $`g_s=3`$ at $`2.2`$fm<sup>-1</sup>, both well past the transition points to the quark-gluon plasma. Clearly, the saturation points of the model curves occur at densities that are too low. Nevertheless, the very fact that we find qualitative agreement with the empirical nuclear matter EOS is quite encouraging. ## 5 Conclusions and Outlook In this paper we have developed an improved modeling of the liquid state of solitonic matter based upon the significant structure model used in physical chemistry. We have applied this to the study of nuclear matter within a nontopological soliton model with explicit quark degrees of freedom. In particular, initial studies in (I) indicated that among several such similar models, the chiral chromodielectric model gave results most in line with empirical expectations. When a scalar meson is allowed to couple to the quarks, saturation can be achieved. The calculations in (I) were based upon a modeling of nuclear matter that assumes the solitons move essentially freely through the medium. This clearly will break down at and above nuclear saturation density, where the size of the individual solitons and the spacing between solitons in the medium become comparable. The significant structure model of the liquid state used in the present paper, however, is designed for densities near the transition to the solid state. We have thus developed here an equation of state that interpolates between models designed for low and high densities. It is encouraging to find that without changing the free nucleon properties we can adjust the quark-meson coupling to reproduce the saturation point of nuclear matter found in (I). Indeed, we find that the compression modulus is improved significantly when calculated using the significant liquid structure model. With the $`\chi `$CD model we are able to treat the transition to the quark-gluon plasma consistently. This is the point of developing this model: confinement is dynamical. Thus the parameters governing the nuclear and plasma phases are in principle the same. Clearly, the results we have found here, while qualitatively in agreement with empirical estimates, are not quantitatively useful. In particular, the saturation point of nuclear matter occurs at far too low a density for the parameters used in our calculation. Moreover, the mass of the free nucleon is too high. We have not tried too hard to improve our results by adjusting the parameters of the model. Probably one can find a better choice of parameters than those for which we have done the calculations here, but we must emphasize that there seems to be a limit in how well one can do. This can be seen already in studying the free nucleon. There, we are unable to find a parameter set that reproduces the nucleon mass and rms radius exactly while still giving reasonable values for the glueball and bag constant. We compromised here by fitting the nucleon rms radius well while keeping the glueball mass and bag constant in their accepted ranges. This resulted in a free nucleon mass that was too high. In trying to reproduce the empirical saturation point of nuclear matter, however, we found a general scaling phenomenon. Keeping the saturation energy roughly correct, it seems difficult to change the parameters in such a way as to get the correct density expect by a rough overall scaling of all predicted quantities. Thus getting the correct saturation density entails lowering the rms radius along with a corresponding increase in the mass of the nucleon. This is suggestive. As a matter of fact, the quark-meson model we developed here is better suited for nuclear matter than for isolated nucleons. This is because an isolated nucleon will surround itself with a pion cloud, whereas (unless pion condensation occurs) the effects of pions in nuclear matter are likely to be small if we already have scalar mesons. Thus we can consider the model used here as actually better used for describing the quark core of the nucleon. The addition of the pion will allow the quark core to shrink and lower the energy of the free nucleon with respect to that of the quarks. With the shrinking of the quark core, one can expect a corresponding decrease in the volume per nucleon at the saturation point. Thus we view the addition of pions as an essential improvement to our model. This, of course, was always obvious: we certainly cannot hope to reproduce the long-range part of the nucleon-nucleon interaction without pions. Our results indicate that the presence of pions is necessary in order to reproduce the structure of the nucleon as well. This comes as little surprise. There are of course other improvements that can be made upon our calculations, such as a better handling of the Bloch boundary conditions and quark wave functions and an improved treatment of the corrections due to spurious center of mass motion. Perturbative corrections due to gluons and mesons about the mean field should eventually be considered as well. Having said (that is, written) this about improving the model, we should not lose sight of the original goal of our work. What we wanted to do first of all was see if we could distinguish among the various nontopological soliton models on the market by studying dense matter. To this we can answer that the chiral chromodielectric model appears to be more in line with empirical expectations. Then, we wanted to develop as simple a model as possible that could give a rough qualitative fit to both free nucleon and dense nuclear matter properties, thus providing a reasonable starting point for more sophisticated models that can provide truly quantitative predictions. To this we can also answer that the chiral chromodielectric model, when modified according to the local uniform approximation by the addition of a scalar meson field, would appear to be such a model. In fact, this model not only gives a good qualitative and rough quantitative fit to the empirical equation of state, it also predicts an increase of the nuclear rms radius in the nuclear medium, a result that is in accord with the EMC effect. Moreover, the results presented here seem to indicate that with the addition of pions we would have good possibilities of obtaining a quantitatively accurate fit to single nucleon properties and to the empirical nuclear matter equation of state. This would provide a reliable estimate of the bag constant and therefore a consistent and accurate treatment of the transition from nuclear to quark-gluon matter as a true transition between two phases of a single model.
no-problem/9910/astro-ph9910464.html
ar5iv
text
# Simulations of Relativistic Jets with GENESIS ## 1 Introduction Astrophysical jets are continuous channels of plasma produced by some active galactic nuclei that are currently observed in radio frequences. The relativistic nature of the plasma has been inferred from (esentially) two observational evidences: (i) the existence of superluminal motions of some radio components and, (ii) the high flux variability (even smaller than one day for some sources). Since several years the dynamical and morphological properties of axisymmetric relativistic jets are investigated by means of relativistic hydrodynamic simulations (e.g.,). In addition, relativistic MHD simulations have been performed in 2D () and 3D (). In their 3D simulations, and , have studied mildly relativistic jets (Lorentz factor, $`W=4.56`$) propagating both along and obliquely to an ambient magnetic field. In this work we report on high-resolution 3D simulations of relativistic jets with the largest beam flow Lorentz factor performed up to now (7.09), the largest resolution (8 cells per beam radius), and covering the longest time evolution (75 normalized time units; a normalized time unit is defined as the time needed for the jet to cross a unit length). These facts together with the high performance of our hydrodynamic code allowed us to study the morphology and dynamics of 3D relativistic jets for the first time. The calculations have been performed with the high–resolution 3D relativistic hydrodynamics code GENESIS , which is an upgraded version of the code developed by . GENESIS integrates the 3D relativistic hydrodynamic equations in conservation form in Cartesian coordinates including an additional conservation equation for the beam-to-external density fraction to distinguish between beam and external medium fluids. The code is based on a method of lines which first discretizes the spatial part of the relativistic Euler equations and solves the fluxes using the Marquina’s flux formula . Then the semidiscrete system of ordinary differential equations is solved using a third order Runge-Kutta algorithm . High spatial accuracy is achieved by means of a PPM third order interpolation . The computations were performed on a Cartesian domain (X,Y,Z) of size $`15R_b\times 15R_b\times 75R_b`$ ($`120\times 120\times 600`$ computational cells), where $`R_b`$ is the beam radius. The jet is injected at $`z=0`$ in the direction of the positive $`z`$-axis through a circular nozzle defined by $`x^2+y^2R_b^2`$. Beam material is injected with a beam mass fraction $`f=1`$, and the computational domain is initially filled with an external medium ($`f=0`$). We have considered a 3D model corresponding to model C2 of , which is characterized by a beam-to-external proper rest-mass density ratio $`\eta =0.01`$, a beam Mach number $`M_b=6.0`$, and a beam flow speed $`v_b=0.99c`$ ($`c`$ is the speed of light) or a beam Lorentz factor $`W_b7.09`$. An ideal gas equation of state with an adiabatic exponent $`\gamma =5/3`$ describes both the jet matter and the ambient gas. The beam is assumed to be in pressure equilibrium with the ambient medium. The evolution of the jet was simulated up to $`T150R_b/c`$, when the head of the jet is about to leave the grid. The scaled final time $`T\mathrm{4.6\hspace{0.17em}10}^4\left(R_b/100\mathrm{pc}\right)\mathrm{yr}`$ is about two orders of magnitude smaller than the estimated ages of powerful jets. Hence, our simulations cannot describe the long term evolution of these sources. Non–axisymmetry was imposed by means of a helical velocity perturbation at the nozzle given by $$v_b^x=\zeta v_b\mathrm{cos}\left(\frac{2\pi t}{\tau }\right),v_b^y=\zeta v_b\mathrm{sin}\left(\frac{2\pi t}{\tau }\right),v_b^z=v_b\sqrt{1\zeta ^2},$$ (1) where $`\zeta `$ is the ratio of the toroidal to total velocity and $`\tau `$ the perturbation period (i.e.,$`\tau =T/n`$, $`n`$ being the number of cycles completed during the whole simulation). This velocity field causes a differential rotation of the beam. The perturbation is chosen such that it does not change the velocity modulus, (i.e., mass, momentum and energy fluxes of the beam are preserved). ## 2 Morphodynamics of 3D relativistic jets Here we consider two models: A, which has a $`1\%`$ perturbation in helical velocity ($`\zeta =0.01`$) and $`n=50`$ and B, with $`\zeta =0.05`$ and $`n=15`$. Figure 1 shows various quantities of the model A in the plane $`y=0`$ at the end of the simulation. Two values of the beam mass fraction are marked by white contour levels. The beam structure is dominated by the imposed helical pattern with amplitudes of $`0.2R_b`$ and $`1.2R_b`$ for A and B, respectively. The overall jet’s morphology is characterized by the presence of a highly turbulent, subsonic cocoon. The pressure distribution outside the beam is nearly homogeneous giving rise to a symmetric bow shock (Fig. 1b) in model A. Model B shows a very inhomogeneous pressure distribution in the cocoon. As in the classical case , the relativistic 3D simulation shows less ordered structures in the cocoon than the axisymmetric models. As seen from the beam mass fraction levels, the cocoon remains quite thin ($`2R_b`$) in A and widens ($`4R_b`$) in B. The flow field outside the beam shows that the high velocity backflow is restricted to a small region in the vicinity of the hot spot (Fig. 1e), the largest backflow velocities ($`0.5c`$) being significantly smaller than in 2D models. The flow annulus with high Lorentz factor found in axisymmetric simulations is also present, but it is reduced to a thin layer around the beam and possesses sub-relativistic speeds ($`0.25c`$) in model A and mildly relativistic ($`0.7`$) in B. The size of the backflow velocities in the cocoon do not support relativistic beaming in case of small perturbations but such possibility is open in larger ones. Within the beam the perturbation pattern is superimposed to the conical shocks at about 26 and 50 $`R_b`$. The beam of A does not exhibit the strong perturbations (deflection, twisting, flattening or even filamentation) found by other authors for 3D classical hydrodynamic jets; for 3D classical MHD jets). This can be taken as a sign of stability, although it can be argued that our simulation is not evolved far enough. For n15p01, the beam is about to be disrupted at the end of our simulation. Obviously, the beam cross section and the internal conical shock structure are correlated (Figure 1). The helical pattern propagates along the jet at nearly the beam speed which could yield to superluminal components when viewed at appropriate angles. Besides this superluminal pattern, the presence of emitting fluid elements moving at different velocities and orientations could lead to local variations of the apparent superluminal motion within the jet. This is shown in Fig. 1f, where we have computed the mean (along each line of sight, and for a viewing angle of 40 degrees) local apparent speed. The distribution of apparent motions is inhomogeneous and resembles that of the observed individual features within knots in M87 . The jet can be traced continuously up to the hot spot which propagates as a strong shock through the ambient medium. Beam material impinges on the hot spot at high Lorentz factors in A case, but the beam Lorentz factor strongly decreases for B. We could not identify a terminal Mach disk in the flow. We find flow speeds near (and in) the hot spot much larger than those inferred from the one dimensional estimate. This fact was already noticed for 2D models by and suggested by them as a plausible explanation for an excess in hot spot beaming. We find a layer of high specific internal energy (typically more than a tenfold larger than that of the gas in the beam core, see Fig. 1d) surrounding the beam like in previous axisymmetric models . The region filled by the shear layer is defined by $`0.2<f<0.95`$. It is mainly composed of forward moving beam material at a speed smaller than the beam speed (Fig. 1e). The intermediate speed of the layer material is due to shear in the beam/cocoon interface, which is also responsible for its high specific internal energy. The shear layer broadens with distance from 0.2$`R_b`$ near the nozzle to 1.1$`R_b`$ (in A) or 2.0$`R_b`$ (in B) near the head of the jet. The diffusion of vorticity caused by numerical viscosity is responsible for the formation of the boundary layer. Although being caused by numerical effects (not by the physical mechanism of turbulent shear) the properties of PPM–based difference schemes are such that they can mimic turbulent flow to a certain degree . The existence of such a boundary layer has been invoked by several authors to interpret a number of observational trends in FRI radio sources. Such a layer will produce a top-bottom asymmetry due to the light aberration , and additionally, it can be used to explain the rails in polarization found by . Other authors have found evidence for these boundary layers in FRIIs (3C353) radio sources. The jet’s propagation proceeds in two distinct phases. First it propagates according to a linear of 1D phase, and then the behavior depends on the strength of the perturbation: it accelerates to a propagation speed which is $`20`$% larger than the corresponding 1D estimate in model A or it deccelerates up to $`0.37c`$. The second result partially agrees with the one obtained by . The axial component of the momentum of the beam particles (integrated across the beam) along the axis decreases by more than a 30% within the first 60 $`R_b`$. Neglecting pressure and viscous effects, and assuming stationarity the axial momentum should be conserved, and hence the beam flow is decelerating. The momentum loss goes along with the growth of the boundary layer. In model A, although the beam material decelerates, its terminal Lorentz factor is still large enough to produce a fast jet propagation. On the other hand, in 3D, the beam is prone to strong perturbations which can affect the jet’s head structure. In particular, a simple structure like a terminal Mach shock will probably not survive when significant 3D effects develop. It will be substituted by more complex structures in that case, e.g., by a Mach shock which is no longer normal to the beam flow and which wobbles around the instantaneous flow direction. Another possibility is the generation of oblique shocks near the jet head due to off–axis oscillations of the beam. Both possibilities will cause a less efficient deceleration of the beam flow at least during some epochs. At longer time scales the growth of 3D perturbations will cause the beam to spread its momentum over a much larger area than that it had initially, which will efficiently reduce the jet advance speed.
no-problem/9910/astro-ph9910330.html
ar5iv
text
# HST/STIS ultraviolet spectroscopy of the supersoft X-ray source RX J0439.8-6809 Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. ## 1 Introduction Among the optically identified persistent supersoft X-ray sources, RX J0439.8$``$6809 (hereafter RX J0439) has the largest X-ray luminosity, but is also the faintest one in the visual light. After its discovery with ROSAT Greiner et al. (1994) and its optical identification with a faint blue star in the LMC van Teeseling et al. (1996), we are still puzzled by its exotic nature. The ROSAT X-ray spectrum of RX J0439 is consistent with an object in the LMC with a radius of a few times $`10^9`$ cm, an effective temperature of $`3\times 10^5`$ K, and a luminosity of $`10^{38}`$ erg s<sup>-1</sup>. These parameters suggest that RX J0439 is a shell-burning white dwarf or pre-white dwarf. Two scenarios have been suggested by van Teeseling et al. van Teeseling et al. (1997) that could explain the observed characteristics of RX J0439. (a) RX J0439 is a supersoft X-ray binary, and the high X-ray luminosity is powered by stable nuclear shell burning. The absence of any X-ray or optical variability in RX J0439, combined with its optical faintness, excludes the presence of a quasi-main-sequence companion. If RX J0439 belongs to the supersoft X-ray binaries, it must be a double-degenerate system containing two semi-detached white dwarfs, with an orbital period of a few minutes only. In this case, binarity is extremely hard to prove: with no or only a tiny accretion disc and a faint degenerate companion, the flux is dominated by the accreting shell-burning white dwarf. This model could explain the absence of strong emission lines which have been observed in all other known supersoft X-ray binaries (see e.g. overview by van Teeseling 1998), as well as the fact that the optical spectrum of RX J0439 is consistent with the Rayleigh-Jeans tail of the X-ray component. (b) RX J0439 is an exceptionally hot pre-white dwarf on the horizontal shell-burning track. In this case, RX J0439 would be the hottest star of this type known so far. However, to match the observed X-ray luminosity, such a pre-white dwarf would have to be rather massive, with accordingly short evolution times. ## 2 HST/STIS observations HST/STIS ultraviolet observations of RX J0439 were carried out on 1998 November 17. Due to the faintness of the object, the target acquisition was performed on a nearby bright star, and RX J0439 was positioned in the $`52\mathrm{}\times 0.5\mathrm{}`$ slit with a subsequent offset. A 2100 s exposure far-UV spectrum (FUV, 1150$``$1730 Å) and a 1700 s near-UV spectrum (NUV, 1600$``$3200 Å) were obtained with the G140L and G230L gratings, respectively. Using the MAMA detectors in the time-tagged mode, we have obtained in addition to the ultraviolet spectra two photon event files which contain an entry ($`t,x,y`$) for each photon, with $`t`$ the arrival time at a $`125\mu \mathrm{s}`$ time resolution and $`x,y`$ the detector coordinates. In order to extract light curves of RX J0439 from the two observations we proceeded as follows. Two-dimensional raw FUV and NUV detector images were obtained by summing up all photons registered in each individual pixel ($`x,y`$). FUV source+background counts were extracted within a box 35 pixel wide in cross-dispersion direction and covering $`1230\mathrm{\AA }\lambda 1720\mathrm{\AA }`$, excluding the strong geocoronal $`\mathrm{Ly}\alpha `$ emission (but including the weak O I$`\lambda `$ 1302 airglow). Background counts were extracted from two adjacent empty regions on the detector with the same wavelength coverage as the source spectrum. From the NUV photon event file, we extracted source+background and background counts covering the entire observed wavelength range in a similar way. ## 3 The ultraviolet spectrum The ultraviolet spectrum of RX J0439 is a blue continuum which contains a broad $`\mathrm{Ly}\alpha `$ absorption line and weak absorption features at $`1260`$ Å, $`1300`$ Å, and $`1335`$ Å, which we identify as interstellar absorption of Si II$`\lambda `$ 1260, O I$`\lambda `$ 1302/Si II$`\lambda `$ 1304, and C II$`\lambda `$ 1335 (Fig. 1). Some spurious structure remains in the $`\mathrm{Ly}\alpha `$ profile and around 1300 Å after the subtraction of the airglow $`\mathrm{Ly}\alpha `$ and O I emission. The observed $`\mathrm{Ly}\alpha `$ absorption is centred at $`1217`$ Å. Due to the offset target acquisition, RX J0439 might not have been perfectly centred in the aperture, which could account for the error in the wavelength zero point. The equivalent widths measured from the three metal absorption features, $`750\pm 300`$ mÅ, $`450\pm 200`$ mÅ, and $`500\pm 200`$ mÅ, respectively, are compatible with just the galactic foreground absorption (e.g. Gänsicke et al. 1998) and make it plausible that RX J0439 is located relatively far in the outskirts of the LMC van Teeseling et al. (1996). Interpreting the observed $`\mathrm{Ly}\alpha `$ profile as interstellar absorption, we have estimated the neutral hydrogen column density along the line of sight towards RX J0439. Using a pure damping profile we find $`N_{\mathrm{HI}}=(4\pm 1)\times 10^{20}`$ cm<sup>-2</sup>. This value is lower than those found for CAL 83 and RX J0513.9$``$6951 Gänsicke et al. (1998) and is consistent with the estimated galactic foreground column density of $`N_{\mathrm{HI}}=4.5\times 10^{20}`$ cm<sup>-2</sup> Dickey & Lockman (1990). As for the interstellar metal lines, there can be no significant $`\mathrm{Ly}\alpha `$ absorption by interstellar or circumstellar material in the LMC. The upper limit on the reddening derived from the G230L spectrum, $`E_{\mathrm{B}\mathrm{V}}0.1`$, is consistent with the $`N_{\mathrm{HI}}`$ column density. The derived value for $`N_{\mathrm{HI}}`$ is also consistent with the relatively low absorption column found from the ROSAT X-ray spectrum van Teeseling et al. (1996), and we will use $`N_\mathrm{H}=4\times 10^{20}`$ cm<sup>-2</sup> for the total hydrogen column towards RX J0439 throughout the rest of this paper. The absence of N V$`\lambda `$ 1240 and He II$`\lambda `$ 1640 emission, which has been observed in CAL 83, CAL 87, and RXJ 0513.9$``$6951 Gänsicke et al. (1998); Hutchings et al. (1995), reminds of the pure continuum spectra observed in the optical. However, the rather noisy blue end of the G140L spectrum of RX J0439 contains two possible emission features, which are centred at $`1178`$ Å and $`1184`$ Å and are detected at a $`23\sigma `$ level. Possible identifications are C III$`\lambda `$ 1175.7 and C IV$`\lambda `$ 1184.7, even though two reasons argue against these transitions. (1) If C IV were present in the atmosphere of RX J0439, the resonance line C IV$`\lambda `$ 1550 should be much stronger than C IV$`\lambda `$ 1184.7. (2) C III is increasingly less populated for temperatures in excess of 120 000 K, which is much lower than the temperature derived below from the overall spectrum. Such a low temperature, however, could be found on the irradiation-heated surface of an accretion disc or degenerate secondary star in the case of a (so far unproven) binary nature of RX J0439. ## 4 The absence of ultraviolet variability Figure 2 shows the background subtracted count rate of RX J0439 binned in 10 s and 60 s intervals for both gratings. Neither observation shows significant variability. To determine the upper limit to any random variability we have performed a Monte Carlo simulation using the observed errors: The G140L observation gives with a $`3\sigma `$ upper limit of $`1.0`$ count s<sup>-1</sup> (corresponding to 0.04 mag) the strongest constraint on possible ultraviolet variability. The G230L observation gives a $`3\sigma `$ upper limit of $`1.7`$ count s<sup>-1</sup> (corresponding to 0.13 mag). The upper limit of $`0.04`$ mag in the ultraviolet is even more stringent than the $`3\sigma `$ upper limit of $`0.07`$ mag in the optical (van Teeseling et al. 1997). Combined with a visual magnitude of $`V=21.6`$, the absence of any optical or ultraviolet variability excludes an irradiated companion with the size of a main-sequence star. Although the flux in a double-degenerate supersoft X-ray binary is dominated by the expanded shell-burning primary, a small quasi-sinusoidal modulation is expected from the changing aspect of the irradiated degenerate helium donor star. The stringent upper limit on the ultraviolet variability, however, implies a low orbital inclination (even in the case of a double-degenerate supersoft X-ray binary), or a rather ineffective heating of the degenerate companion, or no companion at all, i.e. a single pre-white dwarf. <sup>1</sup><sup>1</sup>1A substantial UV modulation as the result of reprocessing on the secondary is expected and observed in double-degenerates containing a neutron star and a brown dwarf (e.g. Arons & King (1993); Anderson et al. 1997). In these LMXBs, however, the neutron star contributes only little to the observed UV flux, while in RX J0439 the shell-burning white dwarf is the dominant source of UV radiation. ## 5 The absence of long-term X-ray variability In Fig. 3 we have plotted the count rate of RX J0439 from our ROSAT HRI monitoring of RX J0439. The 25 pointings cover a period of almost 3 years. The count rate is not significantly variable, and the data exclude a temperature change (at constant bolometric luminosity) larger than $`\pm \mathrm{1\hspace{0.17em}000}`$ K yr<sup>-1</sup>. Comparison of the temperature and luminosity of RX J0439 (Sect. 5) with calculations of evolutionary tracks of post-AGB stars (e.g. Blöcker 1995) shows that if RX J0439 is a pre-white dwarf on the horizontal shell-burning track, its mass must be $`0.9`$$`M_{}`$ (Fig. 4). However, these massive white dwarfs evolve so fast near the turn-over to the white dwarf cooling track that even in the short history of ROSAT observations of RX J0439 we should have seen a significant increase in the HRI count rate (note that on the horizontal shell-burning track the effective temperature increases at nearly constant bolometric luminosity). A lower mass white dwarf \[e.g. the ($`M_{\mathrm{ZAMS}}`$, $`M_\mathrm{H}`$) = (5 $`M_{}`$, 0.836 $`M_{}`$) track of Blöcker 1995\] evolves slow enough to be consistent with the available X-ray data, but has a luminosity of $`5\times 10^{37}`$$`\mathrm{erg}\mathrm{s}^1`$ at the appropriate temperatures, which is significantly lower than that of RX J0439. This implies either that the shell-burning in RX J0439 is powered by accretion or that RX J0439 is able to stay at its position in the Hertzsprung-Russell diagram by nuclear burning with a much longer lifetime than predicted by the evolutionary tracks. With respect to the last possibility it should be noted that RX J0439 is rather close to the theoretical carbon-burning main sequence. ## 6 The combined X-ray, ultraviolet and optical spectrum Our ultraviolet spectrum of RX J0439 perfectly agrees both in flux and in slope with the observed optical spectrum presented by van Teeseling et al. van Teeseling et al. (1996). The combined spectrum has a very blue slope consistent with the Rayleigh-Jeans tail of a very hot object, and it is possible to model the entire observed spectrum from X-rays to optical with a single optically thick component. This does not imply, however, that we can exclude additional flux in the ultraviolet and optical, provided that the additional flux has a very blue spectrum as well. Such additional flux is required if the supersoft X-ray component has a higher temperature (and consequentially a smaller inferred radius and bolometric luminosity) than derived with the assumption that the ultraviolet flux is the Rayleigh-Jeans tail of this X-ray component. It is striking how well the overall spectrum of RX J0439, including the absence of any detectable spectral features, matches a single absorbed blackbody with a temperature of $`\mathrm{295\hspace{0.17em}000}`$ K (Fig. 5). If we assume a distance of 50 kpc, this blackbody gives a radius $`R=5\times 10^9`$ cm and a bolometric luminosity $`L=1.6\times 10^{38}`$$`\mathrm{erg}\mathrm{s}^1`$. Because the lack of spectral features, in particular in the ROSAT X-ray spectrum, may be the result of the rather limited spectral resolution and signal-to-noise ratio, we follow van Teeseling et al. van Teeseling et al. (1996) and have fitted $`\mathrm{log}g=7`$ white dwarf model atmospheres to the combined X-ray, ultraviolet and optical spectrum (in fact a $`\chi ^2`$ fit to the ROSAT spectrum with the demand that the Rayleigh-Jeans tail matches the observed ultraviolet and optical flux). In addition to LTE spectra van Teeseling et al. (1994), we have now also used NLTE white dwarf spectra Rauch (1997) in order to investigate which ultraviolet lines might be expected to be present. Van Teeseling et al. van Teeseling et al. (1996) already argued that the atmosphere must contain a significant amount of metals. If not, there would be either an excess of soft X-ray flux or an excess of ultraviolet and optical flux. Even models with 10% solar metal abundance suffer this problem. With metal abundances within a factor of 2 of solar, both the LTE and the NLTE models give an acceptable fit to the overall spectrum with $`T_{\mathrm{eff}}\mathrm{300\hspace{0.17em}000}`$ K and $`L1\times 10^{38}`$$`\mathrm{erg}\mathrm{s}^1`$. Because all models are scaled to the same optical flux, the resulting fit parameters of the LTE and NLTE models do not differ significantly. The near-solar models predict strong Ne VIII absorption edges at 0.22 keV and 0.24 keV. For a cosmic helium abundance, the ultraviolet spectra predict a non-negligible He II$`\lambda `$ 1640 absorption line which is, however, not detected at the present signal-to-noise ratio. This implies either that the assumption of a single spectral component is incorrect, or that the absorption is filled in with emission (without extra continuum flux), or that the white dwarf atmosphere is helium-poor. Note that also in the optical no He II$`\lambda `$ 4686 could be detected and that it appears unlikely that absorption is filled in without the appearance of emission lines Metal rich and helium poor is reminiscent of the exotic and extremely hot ($`\mathrm{200\hspace{0.17em}000}`$ K) PG 1159 star H1504+65, which is the only known pre-white dwarf whose surface is free from both hydrogen and helium Werner (1991); Werner & Wolff (1999). Indeed, the overall spectrum of RX J0439 can be fitted very well (i.e. best $`\chi ^2`$ of all used models) with a single $`\mathrm{log}g=7`$ spectrum with a pure CO composition. The inferred effective temperature is $`\mathrm{310\hspace{0.17em}000}`$ K, the radius at 50 kpc is $`6\times 10^9`$ cm, and the corresponding luminosity is $`3\times 10^{38}`$$`\mathrm{erg}\mathrm{s}^1`$. At this point, we can only speculate about further common properties of the two stars. In contrast to the featureless optical continuum of RX J0439, the optical spectrum of H1504+65 contains numerous high excitation C and O lines. VLT and Chandra observations of RX J0439 are scheduled to probe for emission/absorption features which will allow a more detailed spectral modelling. ## 7 Conclusions Our single orbit of HST/STIS ultraviolet data of RX J0439 have confirmed the previous findings: RX J0439 is a very extreme object, whether as a single post-AGB star or as an accreting (double-degenerate) supersoft X-ray binary. It seems that the flux at all wavelengths is dominated by a very luminous shell-burning white dwarf which is able to maintain its position in the Hertzsprung-Russell diagram near the turnover into the white dwarf cooling track for $`>8`$ years. The combined X-ray, ultraviolet and optical spectrum is consistent with a single spectral component with $`T\mathrm{300\hspace{0.17em}000}`$ K and $`L10^{38}`$ erg s<sup>-1</sup>, and suggests a high metalicity. Interestingly, both the very good fit with a pure CO model, the absence of long-term variability, and the proximity of RX J0439 to the theoretical carbon-burning main sequence, raises the speculative (but spectacular) possibility that RX J0439 represents a completely new type of star. ###### Acknowledgements. This research was supported by the DLR under grant 50 OR 96 09 8 and 50 OR 99 03 6. We thank Falk Herwig for comments on the evolution of He-burners, Norbert Langer for interesting discussions, Howard Lanning for technical support with the HST observations, and the referee, Peter Kahabka, for helpful comments.
no-problem/9910/math9910182.html
ar5iv
text
# Indexed identity and fuzzy set theory ## 1 Introduction Fuzzy set theory appeared for the first time in 1965, in a famous paper by L. A. Zadeh . Since then a lot of fuzzy mathematics has been created and developed. Nevertheless, concepts like fuzzy set, fuzzy subset, and fuzzy equality (between two fuzzy sets) usually depend on the concept of grade of membership. This procedure does not allow us to define fuzzy equality between, e.g., two Urelemente, or even between two potatoes, up to the case that we consider a potato as a set. We present a concept of identity index which allows us to say how similar are two objects, even in the case that these two objects are not sets or fuzzy sets. Such a procedure allows us to define a sort of fuzzy membership (or grade of membership), fuzzy sets, fuzzy inclusion and fuzzy operations among fuzzy sets like union and intersection. This paper is the first one of a series dedicated to the concept of indexed identity and its applications. ## 2 Indexed Identity We use standard logical notation: $`\neg `$ is negation, $``$ is conjunction, $``$ is disjunction, $``$ is conditional, and $``$ is biconditional. ‘$``$’ and ‘$``$’ denote the existence and universal quantifiers, respectively. Next, we define an indexed variables system by means of a set-theoretical predicate, following Suppes ideas about axiomatization . By set theory we mean Zermelo-Fraenkel set theory (with or without Urelemente). But it is obvious that our system may be defined into the scope of other set theories. ###### Definition 1 An indexed variables system is an ordered pair $`X,\mathrm{\Xi }`$ that satisfies the following seven axioms: $`X`$ is a non-empty set. $`\mathrm{\Xi }=\{_r\}_{rR}`$ is a family of binary predicates defined on the elements of $`X`$, where $`R`$ is a subset of the interval of real numbers $`[0,1]`$, such that $`1R`$. When the ordered pair $`(x,y)X\times X`$ satisfies the binary predicate $`_r`$, we denote it by $`x_ry`$. If $`x_ry`$ then $`y_rx`$. $`x_1y`$ iff $`x=y`$. If $`x_ry`$ and $`rs`$ then $`\neg (x_sy)`$. $`xyr(x_ry)`$. ###### Definition 2 The distinction between two elements of $`X`$ is given by $`D(x,y)=1r`$ iff $`x_ry`$. $`D(x,y)+D(y,z)D(x,z)`$ For the sake of simplicity we can call $`X`$ as an indexed variables system or an indexed system. We call the binary relation $`_r`$ indexed identity with index $`r`$ or simply an indexed identity. Here follows an intuitive interpretation of the axioms and primitive concepts. $`X`$ is our space of variables. The sentence $`x_ry`$ corresponds to say that $`x`$ and $`y`$ do have an identity index $`r`$, where $`0r1`$. If $`r`$ is 1 (one) then $`x`$ and $`y`$ are identical objects. If $`r`$ is not 1 then $`x`$ and $`y`$ are different objects. Nevertheless, if $`r`$ is a number ‘close’ to 1, then $`x`$ and $`y`$ are very ‘similar’ objects, i.e., ‘almost identical’ objects. On the other hand, if $`r`$ is close to 0 (zero) then $`x`$ and $`y`$ are very different objects. They have ‘almost zero-identity’. This is reflected in the last axiom. According to F7 if $`x`$ and $`y`$ are very similar (index $`r`$ equal to 0.9, for example) and $`y`$ and $`z`$ are also very similar (index $`r`$ equal to 0.9, as another example), then $`D(x,y)=D(y,z)=0.1`$. So, the distinction between $`x`$ and $`z`$ should be less or equal to 0.2, i.e., they should have an identity index greater or equal to 0.8. In the particular case where $`D(x,y)=D(y,z)=0`$, then $`D(x,z)=0`$, i.e., if $`x=y`$ and $`y=z`$ then $`x=z`$. Hence, axiom F7 is a generalization of the transitivity property of usual equality. The condition, in axiom F2 that $`1R`$ ensures consistency, since $`x_1yx=y`$. It is very important to emphasize that our mathematical framework is based on usual set theory, like Zermelo-Fraenkel’s, for example. ###### Theorem 1 $`D(x,y)`$ is a distance between two points, and $`X,\mathrm{\Xi }`$ induces a metric space $`X,D`$. Proof: According to F4 and definition (2), we have $`D(x,x)=0`$. According to F4 $`D(x,y)>0`$ if $`xy`$. According to F3 $`D(x,y)=D(y,x)`$. According to F7 $`D(x,z)D(x,y)+D(y,z)`$. So, $`D(x,y)`$ is the distance between $`x`$ and $`y`$; and $`X,D`$ is a metric space.$`\mathrm{}`$ Note that $`D`$ is a metric such that $`D(x,y)[0,1]`$. ###### Theorem 2 Any metric space $`M,d`$ induces an indexed variables system $`M,\mathrm{\Xi }`$ Proof: If $`M,d`$ is a metric space, then $`d:M\times M\mathrm{}^+`$ is a metric, where $`\mathrm{}^+`$ is the set of nonnegative real numbers. If we define a function $`f:\mathrm{}^+[0,1]`$ such that $`f(x)=\frac{x}{1+x}`$, then it is easy to prove that the function $`D:M\times M[0,1]`$ given by $`D(a,b)=f(d(a,b))`$ is a metric whose images belong to the set $`[0,1)`$. That is, we can define $`D(a,b)`$ as a distinction between $`a`$ and $`b`$ in the sense that $`a_rb`$ iff $`D(a,b)=1r`$, where $`r(0,1]`$. The reader can easily verify that $`M,\mathrm{\Xi }`$ is an indexed variables system, where $`\mathrm{\Xi }=\{_r\}_r`$. $`\mathrm{}`$ ## 3 How to Calculate the Identity Index? One natural question is: how to calculate the index $`r`$? In other words, what are our criteria to say how similar $`x`$ and $`y`$ are? One possible answer is the use of a family $`\{A_i\}`$ of unary predicates defined over the elements of $`X`$. For practical purposes we could consider a finite family $`\{A_i\}_{iF}`$, where $`F`$ is a finite set with cardinality, say, 100. If $`x`$ and $`y`$ are objects that share 93 predicates of the family $`\{A_i\}_{iF}`$, then they have 93% of similarity, i.e., $`x_{0.93}y`$. If $`x`$ and $`y`$ have nothing in common, then they are totally different objects, i.e., $`x_{0.00}y`$. On the other hand, if $`x`$ and $`y`$ share all the predicates of the family $`\{A_i\}_{iF}`$, then $`x`$ and $`y`$ are identical objects, that is, they are the very same object with two different names or labels. Of course, this procedure just works if we make an adequate choice of possible predicates that objects of a given universe $`X`$ may (or not) satisfy. Such an adequate choice of predicates depends on the problem that we want to solve. It is important to say what do we mean by ‘two objects that share 93 predicates’. We say that $`x`$ and $`y`$ share one given predicate $`A_i`$ iff we have $`A_i(x)A_i(y)`$ or $`\neg A_i(x)\neg A_i(y)`$. We say that $`x`$ and $`y`$ do not share the predicate $`A_i`$ iff we have $`A_i(x)\neg A_i(y)`$ or $`\neg A_i(x)A_i(y)`$. We say that $`x`$ and $`y`$ share $`n`$ predicates iff there are $`n`$ different predicates $`A_i`$ that $`x`$ and $`y`$ share. As a simple example consider the set $`X=\{2,3,8\}`$, and the following family of predicates $`\{A_1,A_2,A_3\}`$, where $`A_1(x)`$ means that $`x`$ is an even number, $`A_2(x)`$ says that $`x`$ is an odd number and $`A_3(x)`$ corresponds to say that $`x`$ is a prime number. In this case we can calculate the index $`r`$ of indistinguishability between two elements $`x`$ and $`y`$ of $`X`$ as it follows: $$r=\frac{\text{number of predicates }A_i\text{ that }x\text{ and }y\text{ share}}{\text{total number of predicates}}$$ (1) So, we have $`2_{1/3}3`$, $`3_08`$, $`2_{2/3}8`$, $`2_12`$, $`3_13`$, and $`8_18`$, since $`A_1(2)\neg A_2(2)A_3(2)\neg A_1(3)A_2(3)A_3(3)A_1(8)\neg A_2(8)\neg A_3(8)`$. Thus $`D(2,3)=\frac{2}{3}`$, $`D(3,8)=1`$, and $`D(2,8)=\frac{1}{3}`$. It is easy to verify that $`X`$ is an indexed variables system. So, in this context, the number 2 is more similar to number 8, than to number 3. In this same context, numbers 3 and 8 have nothing in common, since their index of indistinguishability $`r`$ is zero. This example is recalled in the next Sections, in order to illustrate some definitions and theorems. As a final remark, note that definition (1) does not give any hint on how to calculate the index $`r`$. Actually, the calculation of $`r`$ depends on the particular problem that we want to solve by using the concept of indexed identity. Any generalization of equation (1) does not necessarily encompass all possible methods for calculating $`r`$. ## 4 ‘Fuzzy’ Set Theory with Indexed Identity In this Section we present an i-fuzzy set theory (‘i’ stands for indexed) based on the concept of indexed identity. We also show that this is a special case of Zadeh’s original fuzzy set theory. ###### Definition 3 If $`X`$ is an indexed variables system then an i-fuzzy set is a function $`F:XR`$, such that $`x,rF`$ iff $`yX(y_rx)`$. According to current literature , a fuzzy subset of a given $`X`$ is a function from $`X`$ into $`[0,1]`$. So: ###### Theorem 3 Every i-fuzzy set is a fuzzy set. Proof: Straightforward from definitions of fuzzy set and i-fuzzy set.$`\mathrm{}`$ This last theorem allows us to establish a relationship between fuzzy set theory (a la Zadeh) and our indexed variables system. ###### Definition 4 The set of all i-fuzzy sets $`F:XR`$ is denoted by $`(X;R)`$. ###### Definition 5 Let $`X`$ be an indexed variables system. If $`F`$ is an i-fuzzy set and $`x`$ is an element of $`X`$, then $`x_rF`$ iff $`x,rF`$. In other words, $`x_rF`$ iff $`F(x)=r`$. ###### Definition 6 If $`F`$ and $`G`$ are i-fuzzy sets, the union $`FG`$ is a function $`FG:XR`$ defined as it follows: $`x,rFG`$ iff $`r=\mathrm{max}\{F(x),G(x)\}`$. ###### Definition 7 If $`F`$ and $`G`$ are i-fuzzy sets, the intersection $`FG`$ is a function $`FG:XR`$ defined as it follows: $`x,rFG`$ iff $`r=\mathrm{min}\{F(x),G(x)\}`$. ###### Theorem 4 The union of two i-fuzzy sets is an i-fuzzy set. Proof: According to definition (6), if $`x,rFG`$ then $`r=F(x)`$ or $`r=G(x)`$. Since $`F`$ is an i-fuzzy set, then there exists $`yX`$ ($`X`$ is a given indexed variables system) such that $`y_{F(x)}x`$. Analogously, there is $`zX`$ where $`z_{G(x)}x`$. So, there is $`w`$ (which is $`y`$ or $`z`$) such that $`w_{\mathrm{max}\{F(x),G(x)\}}x`$. Hence, $`FG`$ is an i-fuzzy set.$`\mathrm{}`$ ###### Theorem 5 The intersection of two i-fuzzy sets is an i-fuzzy set. Proof: Analogous to the previous proof.$`\mathrm{}`$ ###### Definition 8 Let $`X`$ be an indexed variables system. If $`F`$ and $`G`$ are i-fuzzy sets then the distinction between $`F`$ and $`G`$ is given by $$D(F,G)=\underset{xX}{sup}|F(x)G(x)|,$$ where $`sup`$ stands for the supremum. ###### Definition 9 If $`F`$ and $`G`$ are i-fuzzy sets then $`F_rG`$ iff $`D(F,G)=1r`$. The set of predicates $`_r`$ defined on elements of $`(X;R)`$ is denoted by $`\mathrm{\Xi }`$. ###### Theorem 6 $`(X;R),\mathrm{\Xi }`$ is an indexed variables system. Proof: We have to prove that $`(X;R),\mathrm{\Xi }`$ satisfies axioms F1F7. So, we split this proof into seven parts. Hence, if $`F`$, $`G`$, and $`H`$ are i-fuzzy sets: 1. Since $`X`$ is nonempty, then $`(X;R)`$ is nonempty. This verifies axiom F1. 2. According to definition (3), $`0F(x)1`$ and $`0G(x)1`$, for all $`xX`$. So, $`0sup_{xX}|G(x)F(x)|1`$, i.e., $`0D(F,G)1`$. Since $`D(F,G)=1r`$, where $`F_rG`$ (definition (9)), then $`\mathrm{\Xi }=\{_r\}_{rR}`$ is a family of binary predicates defined on the elements of $`(X;R)`$, where $`R`$ is a subset of the interval of real numbers $`[0,1]`$, such that $`1R`$. As a matter of fact, the proof that $`1R`$ is given in step 4 of this proof. This verifies axiom F2. 3. $`sup_{xX}|F(x)G(x)|=sup_{xX}|G(x)F(x)|`$, i.e., $`D(F,G)=D(G,F)`$. So, according to definition (9), $`F_rG`$ iff $`G_rF`$. This verifies axiom F3. 4. $`F_1G`$ iff $`F=G`$, according to definitions (8) and (9). This verifies axiom F4. 5. If $`F_rG`$ then $`D(F,G)=1r`$. According to definition (8) there is no $`s`$ such that $`sr`$ and $`D(F,G)=1s`$. So, there is no $`s`$ such that $`sr`$ and $`F_sG`$. This verifies axiom F5. 6. Since $`F`$ and $`G`$ are limited functions, there is always a supremum $`sup_{xX}|G(x)F(x)|`$. So, $`xyr(F_rG)`$. This verifies axiom F6. 7. Since $`(X;R)`$ is a space of limited functions, then $`(X;R),D`$ is a metric space, where $`D`$ is given as in definition (8). This occurs because the distinction between two fuzzy sets is the well known metric of uniform convergence or sup metric. Then the triangle inequality is satisfied. This verifies axiom F7.$`\mathrm{}`$ ###### Corollary 1 $`D(F,G)`$ is a distance between two i-fuzzy sets, and $`(X;R),\mathrm{\Xi }`$ induces a metric space $`(X;R),D`$. Recalling the example given in Section III, we can define, e.g., the following i-fuzzy sets: $`F=\{8,1,2,2/3,3,0\}`$, $`G=\{3,1,2,1/3,8,0\}`$, and $`H=\{2,1,8,2/3,3,1/3\}`$. So, $`FG=\{8,1,3,1,2,2/3\}`$, $`FG=\{3,0,8,0,2,1/3\}`$, and $`D(H,FG)=2/3`$. ## 5 Indexing Predicates In this Section we give a general procedure which allows us to index any predicate, at least in principle. Since our mathematical framework is set theory, consider a set $`S`$ defined by means of the Separation Schema of Zermelo-Fraenkel set theory: $$S=\{xX;P(x)\}$$ where $`X`$ is a given set and $`P`$ is a given predicate. So, if $`X`$ is ‘the set of human beings’ and $`P`$ is the predicate ‘to be smart’, then $`S`$ corresponds to the set of smart people. The question now is: how to index predicate $`P`$? In other words: how to index the set $`S`$? The procedure that we suggest follows in the next paragraphs. ###### Definition 10 Let $`S`$ and $`X`$ be the sets given above. If $`x`$ is an element of $`X`$, the distinction between $`x`$ and $`S`$ is $$D(x,S)=\underset{yS}{inf}D(x,y)$$ Note that this last definition allows us to index the predicate $`P`$, i.e., the set $`S`$. By indexing $`P`$ we mean the definition of $`D(x,S)`$. ###### Definition 11 $`x_rS`$ iff $`D(x,S)=1r`$. ###### Example 1 If $`X`$ is ‘the set of human beings’ and $`P`$ is the predicate ‘to be smart’, then $`S`$ corresponds to the set of smart people. If we define a distinction function $`D(x,S)`$ where $`xX`$, then we are indexing the concept of ‘being smart’. If $`D(Einstein,S)=0.5`$, then $`Einstein_{0.5}S`$, i.e., $`Einstein`$ is a half-smart person. The index $`r`$ corresponds to a degree of smartness. ###### Example 2 If $`X`$ is the set of subsets of a given set $`T`$ and $`P`$ is the predicate ‘to be open’ (in the usual topological sense), then $`S`$ corresponds to the topology of a topological space. If we define a distinction function $`D(x,S)`$ where $`xX`$, then we are indexing the concept of ‘being open’. If $`D(a,S)=0.7`$, then $`a_{0.3}S`$, i.e., $`a`$ has a 0.3 degree of openess. These two examples can help us to see how powerful is this method of indexation. In the the second example we showed how to index the concept of ‘open set’ in a topological space. But we can discuss about how to index the very concept of topological space. If we want to index the predicate ‘to be a topological space’ rather than topological concepts like ‘to be open’ or ‘to be compact’, then we need: (1) a universe class $`X`$ which corresponds to the collection of ordered pairs $`T,𝒯`$ of sets; and (2) a predicate $`P`$ such that $`P(\mathrm{})`$ iff $`T𝒯`$ such that (i) $`\mathrm{}=T,𝒯`$, (ii) $`T`$ is a non-empty set, (iii) the elements of $`𝒯`$ are subsets of $`T`$, (iv) $`\mathrm{}𝒯`$, (v) $`T𝒯`$, (vi) if $`t_1`$ and $`t_2`$ are elements of $`𝒯`$ then $`t_1t_2𝒯`$, and (vii) an arbitrary union of elements of $`𝒯`$ is still an element of $`𝒯`$. Besides, we need a distinction function (which is a metric) $`D:X\times X[0,1]`$. Since in Zermelo-Fraenkel set theory there is no such a thing like the set of all ordered pairs of sets, then we cannot found our mathematical framework into usual set theory. We could consider $`X`$ as a category. In this case we should extend definition (1) to a category-theoretical predicate, which is a task for future works. Something analogous could be said about groups, vector spaces, lattices, fields, and other mathematical theories usually founded into the scope of set theory. ## 6 Conclusions The main advantages of our mathematical framework are: 1. It is a kind of fuzzy mathematics (in the intuitive sense), which fuzzifies the concept of equality rather than that one of membership (as in the original work of Zadeh). So, it offers another point of view in the process of fuzzification. 2. It allows us to use the theory of metric spaces, at least in principle, in order to derive theorems in fuzzy set theory. 3. It allows us a generalized method of ‘fuzzification’ of predicates, in the sense given in the previous Section. ## 7 Acknowledgements We akcnowledge with thanks Aurélio Sartorelli and Soraya R. T. Kudri for helpful suggestions and criticisms.
no-problem/9910/hep-ph9910445.html
ar5iv
text
# ONE–LOOP RADIATIVE CORRECTIONS TO CHARGINO PAIR PRODUCTION ## Acknowledgments I am thankful to my collaborators S.F. King and D.A. Ross for their invaluable contribution to the work presented here. I also thank H. Baer and the Physics Department of the Florida State University for their hospitality, where I was supported by the U.S. DOE contract number DE-FG02-97ER41022. ## References
no-problem/9910/astro-ph9910304.html
ar5iv
text
# Untitled Document A Speckle Experiment during the Partial Eclipse S. K. Saha, B. S. Nagabhushana, A. V. Ananth and P. Venkatakrishnan Indian Institute of Astrophysics, Bangalore 560034 Abstract An experiment for the speckle reconstruction of solar features was developed for observing the partial eclipse of the sun as viewed from Bangalore on October 24, 1995. No data could be obtained because of cloudy sky but the experimental details are described. Key Words: Solar speckle, Image Reconstruction, Lunar limb. 1. Introduction Many problems in solar physics require information about the solar surface features at the highest possible angular resolution. The earth’s atmosphere blurs the images. In the case of night-time astronomy, image reconstruction techniques have been developed (Labeyrie, 1970, Knox and Thompson, 1974, Weigelt, 1978), that take advantage of a nearby point source as a reference for the reconstructions. This is not possible for the solar features. The lunar limb provides a sharp edge as a reference object during solar eclipses. A solar speckle experiment was planned for observing the solar eclipse of Oct. 24, 1995 visible as a partial eclipse from Bangalore. The Instrumentation A Carl-Zeiss 15 cm Cassegrain-Schmidt reflector was used as the telescope for the experiment. To prevent heating of the optics, an aluminised glass plate was fixed in front of the telescope that reflected back 80 percent of the sunlight and transmitted only 20 percent. A 3 nm passband filter centered at 600 nm was placed after this, followed by another polaroid mounted on a rotatable holder as shown in Figure 1. The amount of light falling on to the camera can be adjusted by rotating the second polaroid. A pin-hole of 1 mm diameter was placed at the focal plane for isolating a small field-of-view. A microscope objective reimages the pin-hole on to the camera. The camera is a EEV CCD camera operated in the TV mode. The images can be acquired with exposure time of 20 ms using a Data Translation<sup>TM</sup> frame-grabber card DT-2861 and subsequently stored on to the hard disk of a PC/AT computer. Result No images of the partially eclipsed sun could be acquired due to unfavourable weather conditions at Bangalore on Oct. 24, 1995. The image reconstruction involves the treatment of both amplitude errors and phase errors. The 20 ms exposure time is small enough to preserve phase errors. Any of the schemes for phase reconstruction that satisfactorily reproduces the lunar limb would be valid for solar features close to the limb, i. e., within the isoplanatic patch. Also, the limb reconstruction would be valid only for phase distortions along one dimension (in a direction normal to the lunar limb). In spite of these shortcomings, the limb data would have provided additional constraints for techniques like blind iterative deconvolution. Acknowledgments The personnel of Bangalore workshop, in particular Messrs T. Periyanayagam and N. Thimmaiah, provided excellent support for fabricating the instrument. Mr. V. Gopinath of photonics division, Mr. R. M. Paulraj of mechanical design section, Mr. K. Padmanabhan of electrical division, A. S. Babu of Electronics division also helped in various ways. The enthusiastic help rendered by Messers V. Krishnakumar, K. Sankarasubramanian and R. Sreedharan (all Ph. D students) during the testing phase is also acknowledged. References Knox K.T. & Thompson B.J., 1974, Astrophys. J. 193, L45. Labeyrie A., 1970, Astron. Astrophys., 6, 85. Weigelt G., 1978, Appl. Opt. 17, 2660. Figure Caption Figure 1: Schematic layout of the instrument.
no-problem/9910/cond-mat9910221.html
ar5iv
text
# The Fermi surface of Bi2Sr2CaCu2O8 ## Abstract We study the Fermi surface of Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8</sub> (Bi2212) using angle resolved photoemission (ARPES) with a momentum resolution of $`0.01`$ of the Brillouin zone. We show that, contrary to recent suggestions, the Fermi surface is a large hole barrel centered at ($`\pi ,\pi `$), independent of the incident photon energy. The Fermi surface, the locus in momentum space of gapless electronic excitations, is a central concept in the theory of metals. Despite the fact that the optimally doped high temperature superconductors display an anomalous normal state with no well-defined quasiparticles , many angle resolved photoemission spectroscopy (ARPES) studies using photon energies in the range of 19-22 eV have consistently revealed a large hole-like Fermi surface centered at ($`\pi ,\pi `$) with a volume consistent with the Luttinger count of $`(1x)`$ electrons (where $`x`$ is the hole doping). This widely accepted picture has recently been challenged by two studies which suggest a different Fermi surface when measured at a higher photon energy (32-33 eV). These recent studies propose that the Fermi surface consists of a large electron pocket centered on ($`0,0`$) with a clear violation of the Luttinger count. To reconcile their model with previous data at 22 eV photon energy, these authors suggest the presence of “additional states” near ($`\pi ,0`$), possibly due to stripe formation. Setting aside for the moment the important question of what the true Fermi surface of Bi2212 is, the implication of a photon energy dependent Fermi surface from ARPES data is particularly worrisome, and deserves to be addressed. Here, we present extensive ARPES data taken at various photon energies and find clear evidence that the Fermi surface measured by ARPES is independent of photon energy, and consists of a single hole barrel centered at $`(\pi ,\pi )`$. Although the data of Refs. are consistent with ours, their limited sampling of the Brillouin zone and lower momentum resolution lead to a misinterpretation of the topology of the Fermi surface. This occurs because of the presence of ghost images of the Fermi surface due to diffraction of the outgoing photoelectrons by a Q vector of $`\pm (0.21\pi ,0.21\pi )`$ associated with the superlattice modulation in the BiO layers (umklapp bands). In particular, in following a Fermi contour, if the data are not dense enough in $`𝐤`$-space, or not of sufficiently high momentum resolution, one can inadvertently “jump” from the main band to one of the umklapp bands, concluding incorrectly that the topology of the Fermi surface is electron-like. This is particularly relevant at the photon energy of 33eV because of a strong suppression of the ARPES matrix elements at k points in the vicinity of ($`\pi ,0`$), a final state effect, resulting in a large umklapp/main band signal ratio near ($`0.8\pi ,0`$) where the purported electron Fermi surface crossing occurs. ARPES probes the occupied part of the electron spectrum, and for quasi-2D systems its intensity $`I(𝐤,\omega )`$ is proportional to the square of the dipole matrix element, the Fermi function $`f(\omega )`$, and the one-electron spectral function $`A(𝐤,\omega )`$ . The measured energy distribution curve (EDC) is obtained by the convolution of this intensity with experimental resolution. In another paper, we discuss in great detail the various methodologies for determining the Fermi surface from ARPES data. Here, we look at two quantities: (1) the dispersion of spectral peaks obtained from the energy distribution curves, and (2) the ARPES intensity integrated over a narrow energy range about the Fermi energy. As we will show, these methods must be treated with care because of the $`𝐤`$ dependence of the matrix elements and the presence of the umklapp bands. The ARPES experiments were performed at the Synchrotron Radiation Center, Wisconsin, using a plane grating monochromator beamline with a resolving power of 10<sup>4</sup> at 10<sup>12</sup> photons/s, combined with a SCIENTA-200 electron analyser used in angle resolved mode. A typical measurement involved the simultaneous collection and energy/momentum discrimination of electrons over a $`12^{}`$ range (cut) with an angular resolution window of $`(0.5^{},0.26^{})`$ ($`0.26^{}`$ parallel to the cut). This corresponds to a momentum resolution of (0.038,0.020)$`\pi `$, (0.029,0.015)$`\pi `$, and (0.022,0.012)$`\pi `$ at 55, 33, and 22 eV respectively. The energy resolution for all data was $``$16 meV (FWHM). The quality of the optimally doped single crystal samples cannot be emphasized enough, particularly in regards to the flatness of the surface after cleave. A change of 1$`\mu `$m in height over the width of the sample is readily detectable as a broadening of the spectral features, and therefore care was exercised in studying very flat samples with sharp x-ray diffraction rocking curves. Reference spectra were collected from polycrystalline Au (in electrical contact with the sample) and used to determine the chemical potential (zero binding energy). To begin we look at data, Fig. 1, taken on an optimally doped Bi2212 sample ($`T_c`$=90K), measured at T=40K at 33eV. The light polarization was parallel to $`\mathrm{\Gamma }`$X (we use the notation $`\mathrm{\Gamma }`$=($`0,0`$), X=($`\pi ,\pi `$), Y=($`\pi ,\pi `$) and M=($`\pi ,0`$), with $`\mathrm{\Gamma }`$Y parallel to the superlattice modulation) and EDCs were collected on a regular lattice of k points ($`\delta k_x=1^{}`$, $`\delta k_y=0.26^{}`$). We first examine spectra along the $`\mathrm{\Gamma }`$Y direction. The EDCs are shown in the middle panel of Fig. 1, and the left panel shows a two dimensional plot of the energy and momentum distribution of the photoelectrons along the $`\mathrm{\Gamma }`$Y cut. A strong main band (MB) and additional umklapp bands (UB) can be observed in this plot. Around ($`0,0`$), there is a weaker pair of higher order umklapps (UB(2) corresponding to a translation of $`\pm (0.42,0.42)\pi `$) as observed previously , which confirms the diffraction origin of the umklapp bands. Along this cut, we also see the ($`\pi ,\pi `$) translation of the main band, the so-called shadow band (SB) , which is probably associated with the two formula units per base orthorhombic unit cell. Fig. 1c shows the integrated intensity within a $`\pm `$100 meV window about the chemical potential. We note the very rapid suppression of intensity beyond $`0.8\mathrm{\Gamma }`$M , which does not occur at 22eV. This is what led the authors of Refs. to suggest the existence of an electron-like Fermi surface with a crossing at this point. As first discussed in an earlier paper, and addressed in greater detail here, we will demonstrate that instead, this crossing is due to one of the umklapp bands. This umklapp crossing is more obvious at 33 eV, since, unlike at 22 eV, the main band intensity is suppressed by matrix element effects. To examine this issue more closely, we measured another optimally doped sample ($`T_c`$=90K, T=40K) with 33eV photons polarized parallel to $`\mathrm{\Gamma }`$M, shown in Fig. 2, with the same high density of k-points as for the $`\mathrm{\Gamma }`$X oriented sample. The integrated intensity at E<sub>F</sub> is similar to the $`\mathrm{\Gamma }`$X oriented sample in that there is an ‘apparent’ closed Fermi surface around ($`0,0`$), indicated by an arrow in Fig. 2b. However, on closer inspection of the ($`\pi ,0`$) region, we see what is truly occuring. Fig. 2c shows slices parallel to MY from the plot of Fig. 2b. The main band (MB), indicated by the short black bars, continues to run parallel to $`\mathrm{\Gamma }`$M, but its intensity is heavily suppressed near M. In addition, the ($`+`$) umklapp band, indicated by red bars, splits away from the main band, disperses towards M, and dies in intensity. A transposed version of this occurs beyond M with the ($``$) umklapp, indicated in blue. Similar behaviour is also seen at the M point at the top of Fig. 2b (slices not shown) and in the $`\mathrm{\Gamma }`$X oriented sample of Fig. 1c. It is easy to see how sparse data at lower resolution can easily lead one to miss the suppressed main band crossing along MY at 33eV. Fig. 2d shows a plot similar to the one in Fig. 2c, but at a resolution of (0.11,0.11)$`\pi `$ \- the same as that used in Ref. \- instead of (0.029,0.015)$`\pi `$ in Fig. 2c. Clearly, it is no longer possible to distinguish the umklapp from the main band (MB), and one might wrongly suppose that the Fermi surface curves around to cross the $`\mathrm{\Gamma }`$M line. But we emphasize again that such a supposition is only a result of sparse data, and not of any inherent differences in experimental results. It is worth noting that in the $`\mathrm{\Gamma }`$X oriented sample (Fig. 1c) we can see a weak signal corresponding to the main band MY Fermi crossing, which becomes stronger at 22eV. Therefore, if quantities based on integrated intensity are used to define the Fermi surface, one may falsely infer a crossing along $`\mathrm{\Gamma }`$M due to the ($`+`$) umklapp band, as indicated in Fig. 2d. The presence of the umklapps can also explain the origin of the asymmetry in the underlying intensity plot at M in Fig. 2b. At 33eV the ARPES signal from the main band is strongly suppressed near M due to the matrix elements, but since the umklapps are translated by $`\pm (0.21,0.21)\pi `$, we get a diagonal-like suppression of the total signal near M. This can be appreciated by looking at the dashed segments of the overlay on Fig. 2b. That is, the umklapp signal at $`𝐤`$ is comparable in intensity to the main band signal at $`𝐤\pm 𝐐`$, as expected if the umklapp is simply a diffraction of the outgoing photoelectrons by the BiO superlattice. According to this picture, when moving along $`\mathrm{\Gamma }`$M, there should be a crossover from the ($`+`$) to ($``$) umklapp, and this is in fact seen in the raw data. Fig. 3 show extensive EDCs taken in cuts parallel to MY at k-points along $`\mathrm{\Gamma }`$M for 33 eV. In most of these plots, the main band (crossing shown by a black square) is the strongest signal and the $`\pm `$ umklapps (crossings shown as black and white arrows) are a weaker signal superimposed near the $`\mathrm{\Gamma }`$M line. A constant offset has been used in the figures so that the umklapp crossings appear as a “bunching” of the spectra. Going from $`\mathrm{\Gamma }`$ to M we see, in the following order, the ($``$) umklapp (white arrow), the ($`+`$) umklapp (black arrow) which disappears at ($`\pi ,0`$), and finally the reappearance of the ($``$) umklapp (white arrow). We find that the dispersion of all the main and umklapp signals are consistent with the tight binding fit to the dispersion at 19-22eV . The difference is that at 22eV , the suppression of the signal at ($`\pi ,0`$) is weaker, and the MY crossing of the main band is clearer. In Fig. 4a, we show an intensity map from the sample of Fig. 1, but at 55 eV photon energy. From this intensity plot, one can clearly see the main Fermi surface and its two umklapp images, and the correlation of this image with a single large hole surface around $`(\pi ,\pi )`$ together with its predicted umklapp images (Fig. 4b) is striking. In conclusion, we find that the Fermi surface of Bi2212 is a single hole barrel centered at $`(\pi ,\pi )`$, a result which we find to be independent of photon energy. Rather, we have demonstrated that the unusual intensity variation observed by previous authors at 33 eV is caused by a combination of matrix element effects and the presence of umklapp bands caused by the diffraction of the photoelectrons from the BiO superlattice. This work was supported by the US National Science Foundation, grants DMR 9624048, and DMR 91-20000 through the NSF Science and Technology Center for Superconductivity, the US Dept of Energy Basic Energy Sciences, under contract W-31-109-ENG-38, the CREST of JST, and the Ministry of Education, Science and Culture of Japan. The Synchrotron Radiation Center is supported by the NSF grant DMR-9212658. M.R. was supported in part by the Indian DST through the Swarnajayanti scheme.
no-problem/9910/hep-ph9910355.html
ar5iv
text
# The Fractal Properties of the Source and BEC 11footnote 1Talk presented by O.V.Utyuzh at the 12^{𝑡⁢ℎ} Indian Summer School ”Relativistic Ion Physics” held in Prague, Czech Republic, 30 August - 3 September 1999, to be published in Czech J. Phys. ## 1 Formulation of the problem ### 1.1 Introduction Two features seen in the analysis of multiparticle spectra of secondaries produced in high energy collision processes are of particular interest: $`(i)`$ intermittent behaviour observed in analysis of factorial moments and $`(ii)`$ Bose-Einstein correlations (BEC) observed between identical particles. Whereas the former indicates a possible (multi) fractal structure of the production process (in the momentum space) the latter provides us with knowledge on the space-time aspects of production processes . It was argued that these features are compatible with each other only when: $`(i)`$ either the shape of interaction region is regular but its size fluctuates from event to event according to some power-like scaling law or $`(ii)`$ the interaction region itself is a self-similar fractal (in coordinate space) extending over a very large volume. Although there exists a vast literature on the (multi)fractality in momentum space its space-time aspects are not yet fully recognized with remaining so far the only representative investigations in this field. We shall present here numerical analysis of a particle production model possessing both the momentum and coordinate space fractalities extending therefore ideas discussed in to a more realistic scenario. ### 1.2 Cascade model used As a model we shall choose a simple self-similar cascade process of the type discussed in but developed further to meet our demands. The following point must be stressed. Every cascade model is expected to lead automatically to intermittent behaviour of momentum spectra of observed particles . Although this is true for models based on random multiplicative processes in observed variables (like energy, rapidity or azimuthal angle), this is not necessarily the case for multiplicative processes in variables which are not directly mesurable but which are, nevertheless, of great dynamical importance (like masses $`M_i`$ of intermediate objects in a cascade process considered here). In a purely mathematical case, where cascade process proceeds ad infinitum, one eventually always arrives at some fractal picture of the production process. However, both the finite masses $`\mu `$ of produced secondaries and the limited energy $`M`$ stored originally in the emitting source prevent the full development of such fractal structure . One must be therefore satisfied with only limited and indirect presence of such structure. This applies also to the analysis presented here. In our model some initial mass $`M`$ “decays” into two masses, $`MM_1+M_2`$ with $`M_{1,2}=k_{1,2}M`$ and $`k_1+k_2<1`$ (i.e., a part of $`M`$ equal to $`(1k_1k_2)M`$ is transformed to kinetic energies of the decay products $`M_{1,2}`$). The process repeats itself until $`M_{1,2}\mu `$ ($`\mu `$ being the mass of the produced particles) with successive branchings occuring sequentially and independently from each other, and with a priori different values of $`k_{1,2}`$ at each branching, but with energy-momentum conservation imposed at each step. For different choices of dimensionality $`D`$ of cascade process, $`D=1`$ (linear) or $`D=3`$ (isotropic), and for different (mostly random) choices of decay parameters $`k_{1,2}`$ at each vertex, we are covering a variety of different possible production schemes, ranging from one-dimensional strings to isotropic three-dimensional thermal-like fireballs. For our purpose of investigation of connections between BEC and space-time fractality of the source we have extended this (momentum space) cascade also to the space-time and we have added to it a kind of BEC “afterburner” along the lines advocated recently in . What concerns space-time development we model it by introducing a fictitious finite life time $`t`$ for each vertex mass $`M_i`$, distributed according to some prescribed distribution law given by $$\mathrm{\Gamma }(t)=\frac{2q}{\tau }\left[1(1q)\frac{t}{\tau }\right]^{\frac{1}{1q}}\stackrel{|q1|0}{}\mathrm{\Gamma }(t)=\frac{1}{\tau }\mathrm{exp}\left[\frac{t}{\tau }\right].$$ (1) This procedure is purely classical, i.e., intermediate masses $`M_i`$ are not treated as resonances (as was done in ) but are regarded to be stable clusters with masses given by the corresponding values of decay parameters $`k_{1,2}`$ and with velocities $`\stackrel{}{\beta }=\stackrel{}{P}_{1,2}/E_{1,2}`$ ($`(E_{1,2};\stackrel{}{P}_{1,2})`$ are the corresponding energy-momenta of decay products calculated in each vertex in the rest frame of the parent mass). The energy-momentum and charges are strictly conserved in each vertex separately. The form of $`\mathrm{\Gamma }`$ used in (1) allows to account for the possible fluctuations of the evolution parameter $`\tau `$ and finds its justification in the Tsallis statistics, to which we shall return later when discussing our results . ### 1.3 Bose-Einstein correlations Our main goal is the investigation of the BEC, in particular whether these correlations show indeed some special features which could be attributed to the branchings and to their space-time and momentum space structure. We are therefore interested in two particle correlation function $$C_2(Q=|p_ip_j|)=\frac{dN(p_i,p_j)}{dN(p_i)dN(p_j)}.$$ (2) To calculate it we have decided to use the ideas of the BEC “afterburners” advocated recently in . Such step is necessary because cascade per se do not show bosonic bunching in momenta (as is the case in models where Bose statistics is incorporated from the beginning, like ; however, we cannot follow this strategy here). Because we are interested in some possible systematics of results rather then in particular values of the “radius” $`R`$ and “coherence” $`\lambda `$ parameters characterizing source $`M`$, we have chosen the simplest, classical version of such afterburner. After generating a set of $`i=1,\mathrm{},N_l`$ particles for the $`l^{th}`$ event we choose all pairs of the same sign and endow them with the weight factors of the form $$C=1+\mathrm{cos}\left[(r_ir_j)(p_ip_j)\right]$$ (3) where $`r_i=(t_i,\stackrel{}{r}_i)`$ and $`p_i=(E_i,\stackrel{}{p}_i)`$ for a given particle. The signs are connected with charges which each cascade vertex is endowed with using simple rules: $`\left\{0\right\}\{+\}+\{\}`$, $`\{+\}\{+\}+\left\{0\right\}`$ and $`\{\}\left\{0\right\}+\{\}`$. ## 2 Results Although it is straightforward to get our cascade model in the form of the Monte Carlo code, the main features of $`D=1`$ case can be also demonstrated analytically. For example, in the limiting cases of totally symmetric cascades (where for all vertices $`k_{1,2}=k`$), in which amount of energy allocated to the production is maximal, one gets the following multiplicity of produced particles: $$N_s=\mathrm{\hspace{0.17em}2}^{L_{max}}=\left(\frac{M}{\mu }\right)^{d_F},d_F=\frac{\mathrm{ln}2}{\mathrm{ln}\frac{1}{k}}.$$ (4) It is entirely given by the length of the cascade, $`L_{max}=\mathrm{ln}(M/\mu )/\mathrm{ln}(1/k)`$, $`\mu =\sqrt{m_0^2+p_T^2}`$. The exponent $`d_F`$ is formaly nothing but a generalized (fractal) dimension of the fractal structure in phase space formed by our cascade. Notice the characteristic power-like behaviour of $`N_s(M)`$ in (4) which is normally atributed to thermal models. For example, for $`k=1/4`$ one has $`N_sM^{1/2}`$, which in thermal models would correspond to the ideal gas equation of state with velocity of sound $`c_0=1/\sqrt{3}`$ . In the opposite limiting case of maximally asymmetric cascades, $`M\mu +M_1`$ (where $`k_1=\mu /M`$ and $`k_2=k`$) in which the amount of kinetic energy allocated to the produced secondaries is maximal, the corresponding multiplicity is equal $$N_a=\mathrm{\hspace{0.17em}1}+L_{max}=\mathrm{\hspace{0.17em}1}+\frac{1}{\mathrm{ln}\frac{1}{k}}\mathrm{ln}\frac{M}{\mu }.$$ (5) The dependence on $`L_{max}`$ is now linear (i.e., dependence on the energy is logarithmic). The important feature, which turns out to be valid also in general, is the observed scaling in the ratio of the available mass of the source $`M`$ and the mass of produced secondaries: $`M/\mu `$. The $`D=3`$ case differs only in that the decay products can flow in all possible directions, which are chosen randomly from the isotropic angular distribution. To allow for some nonzero transverse momentum in $`D=1`$, we are using the transverse mass $`\mu =0.3`$ GeV. For $`D=3`$ cascade $`\mu `$ is instead put equal to the pion mass, $`\mu =0.14`$ GeV. All decays are described in the rest frame of the corresponding parent mass $`M_i`$ in a given vertex. To get the final distributions one has to perform a necessary number of Lorentz transformations to the rest frame of the initial source mass $`M`$. As an output we are getting in each run (event) a number $`N_j`$ of secondaries of mass $`\mu `$ with energy-momenta $`(E_j;\stackrel{}{P}_j)_i`$ and birth space-time coordinates $`(t_j;\stackrel{}{r}_j)_i,i=1,\mathrm{},N_j`$ (i.e., coordinates of the last branching). Results presented in Figs. 1 and 2 are obtained from $`50000`$ such events. Decay parameters $`k_{1,2}`$ were chosen randomly from a triangle distribution $`P(k)=(1k)`$ (leading to a commonly accepted energy behaviour of $`N(M)M^{0.4÷0.5}`$, as discussed above). For more detailed presentation of rapidity and multiplicity distributions and demonstration of intermittent behaviour of factorial moments see . Fig. 1 shows densities $`\rho (r)`$ of points of production and correlation function $`C_2(Q)`$ as defined in (3) for $`D=1`$ and $`3`$ dimensional cascades originated from masses $`M=10,40`$ and $`100`$ GeV. The evolution parameter is set equal $`\tau =0.2`$ fm (in we discuss also $`\tau 1/M`$ case). The decay function $`\mathrm{\Gamma }`$ is taken exponential (i.e., $`q=1`$). We observe, as expected in , a power-like behaviour of cascading source: $$\rho (r)\left(\frac{1}{r}\right)^Lr>r_0,$$ (6) but only for $`r>r_0`$, i.e., for radii larger then some (not sharply defined) radius $`r_0`$, value of which depends on all parameters used: mass $`M`$ of the source, dimensionality $`D`$ and evolution parameter $`\tau `$ of the cascade. Below $`r_0`$ the $`\rho (r)`$ is considerably bended, remaining almost flat for $`D=1`$ cascades. For the limiting case of $`M=100`$ GeV the corresponding values of parameter $`L`$ vary from $`L=1.89`$ for one $`D=1`$ cascades to $`L=2.78`$ for $`D=3`$ cascades. The shapes of $`\rho (r)`$ scale in the ratio $`M/\mu `$ in the same way as the multiplicities discussed before (the same remains also true for rapidity and multiplicity distributions, cf. ). One can summarize this part by saying that power-like behaviour sets in (albeit only approximately) only for long cascades (large values of $`M/\mu `$). It remains therefore to be checked whether (and to what extend) such conditions are indeed met in the usual hadronic processes. The corresponding BEC functions $`C_2`$ show a substantial differences between $`D=1`$ and $`D=3`$ dimensional cascades, both in their widths and shapes. Whereas the former are more exponential-like (except, perhaps, for small masses $`M`$) the latter are more gaussian-like with a noticeably tendence to flattening out at very small values of $`Q`$ observed for small masses $`M`$. Also values of intercepts, $`C_2(Q=0)`$, are noticeable lower for $`D=3`$ cascades. The length of the cascade (i.e., the radius of the production region, cf. discussion of density $`\rho `$ before) dictates the width of $`C_2(Q)`$. However, the $`M/\mu `$ scaling observed before in shapes of source functions is lost here. This is because $`C_2`$ depends on the differences of the momenta $`p=\mu \mathrm{cosh}y`$, which do not scale in $`M/\mu `$. The flattening mentioned above for $`D=3`$ cascades are the most distinctive signature of the fractal structure combined with $`D=3`$ dimensionality of the cascade. The correlations of the position-momentum type existing here as in all flow phenomena are, in the case of $`D=3`$ cascades, not necessarily vanishing for very small differences in positions or momenta between particles under consideration. The reason is that our space-time structure of the process can have in $`D=3`$ a kind of “holes”, i.e., regions in which the number of produced particles is very small. This is perhaps the most characteristic observation for fractal (i.e., cascade) processes of the type considered here. This feature seems to be more pronounced for diluted cascades corresponding to $`q<1`$ case discussed below. Fig. 2 displays the same quantities but this time calculated for $`M=40`$ GeV and for different values of parameter $`q`$ in the decay function $`\mathrm{\Gamma }`$ defined in (1). This function is written there in form of Tsallis distribution , which allows to account for a variety of possible influences caused by, for example, long-range correlations, memory effects or for the possible additional fractal structure present in the production process. They all result in a non-extensitivity of some thermodynamical variables (like entropy) with $`|1q|`$ being the measure of this non-extensitivity. In practical terms of interest here, for $`q<1`$ the tail of distribution (1) is depleted and its range is limited to $`t(0,\tau /(1q))`$ whereas for $`q>1`$ it is enhanced in respect to the standard exponential decay law (and its range is $`t(0,\mathrm{})`$) . In other words, one can account in this way for both more diluted (for $`q<1`$) and more condensed (for $`q>1`$) space-time structure of the developing cascades. Such distributions are ubiquitous in numerous phenomena and they are founded in the, so called, Tsallis non-extensive thermostatistics generalizing the conventional Boltzmann-Gibbs one (which in this notation corresponds to the $`q=1`$ case). It has found also applications in high energy and nuclear physics (cf. for references). Its effect on the cascades investigated here is, as can be seen from Fig. 2, in that it mimics (to some extent) the changes atributed in Fig. 1 to different energies (making cascade effectively shorter for $`q=0.8`$ and longer for $`q=1.2`$). The results of Fig. 2 (taken for $`M=40`$ GeV) should be then compared with those of Fig. 1 for $`M=10`$ and $`M=100`$ GeV. They demonstrate that effects of longer or shorter cascades in momentum space (as given by different $`M`$ in Fig. 1) is similar to effects of the more or less condensed cascades in the position space as given by $`q`$ here. This fact should be always kept in mind in such analysis as ours. ## 3 Conclusions We conclude that BEC are, indeed, substantially influenced by the fact that the production process is of the cascade type (both in momentum and space-time) as was anticipated in , although probably not to the extent expected (which, however, has not been quantified there). In practical applications (fitting of experimental data) there are many points which need further clarification. The most important is the fact that data are usually collected for a range of masses $`M`$ and among directly produced particles are also resonances. This will directly affect lengths of the cascades, and through them the final results for $`C_2`$. Selecting events with similar masses $`M`$ should allow to check whether in such processes $`q=1`$ or not. The importance of this finding is in that $`q<1`$ would signal a non-stochastic development of the cascade, whereas $`q>1`$ would indicate that, as has been discussed in , parameter $`\tau `$ fluctuates with relative variance $$\omega =\frac{\left(\frac{1}{\tau }\right)^2\frac{1}{\tau }^2}{\frac{1}{\tau }^2}=q1.$$ (7) Such fluctuations are changing exponential behaviour of $`\mathrm{\Gamma }`$ in (1) to a power-like distribution with enhanced tail. Acknowledgement: O.V.Utyuzh is very grateful to organizers of the $`12^{th}`$ Indian Summer School ”Relativistic Heavy Ion Physics” held in Prague, Czech Republic, $`30`$ August - $`3`$ September 1999, for financial support and warm hospitality extended to him during the conference. Figure Captions Fig. 1. Density distribution of the production points $`\rho (r)`$ (left panels) and the corresponding $`C_2(Q=|p_ip_j|)`$ (right panels) for one-dimensional (upper panels) and three-dimensional (lower panels) cascades. Each panel shows results for $`M=10,40`$ and $`100`$ GeV masses of the source. Time evolution parameter is $`\tau =0.2`$ fm and nonextensitivity parameter $`q=1`$. Fig. 2. The same as in Fig. 1 except that each panel shows results for mass of the source $`M=40`$ GeV and for three different values of the nonextensitivity parameter $`q=0.8,1.0`$ and $`1.2`$.
no-problem/9910/cond-mat9910393.html
ar5iv
text
# I Introduction ## I Introduction From the physical point of view, nanoparticles exhibit such interesting features as superparamagnetism and exponentially slow relaxation rates at low temperatures due to anisotropy barriers. However, the picture of a single-domain magnetic particle where all spins are pointing into the same direction, leading to coherent relaxation processes, ceases to be valid for very small particles where surface effects become really crucial. For instance, in a particle of radius $`4`$ nm, $`50\%`$ of atoms lie on the surface. Therefore, it is necessary to understand the effect of free boundaries first on the static and then on the dynamical properties of nanoparticles. However, one of the difficulties which is inherent to systems of round (spherical or ellipsoidal) geometries, consists in separating surface effects due to symmetry breaking of the crystal field on the boundaries and the unavoidable finite-size effects caused by using systems of finite size. In hypercubic systems, this problem is easily handled by using periodic boundary conditions, but this is not possible in other topology, and thus surface and finite-size effects are mixed together. In this article, we discuss surface and finite-size effects on the thermal and spatial behaviours of the intrinsic magnetisation of an isolated small particle. We consider two different systems: 1) A cube of simple cubic structure with either periodic or free boundary conditions. This system is treated analytically by the isotropic model of $`D`$-component spin vectors in the limit $`D\mathrm{},`$ in magnetic field. 2) The second system, which is more realistic, is the maghemite particle ($`\gamma `$-Fe<sub>2</sub>O<sub>3</sub>) of ellipsoidal (or spherical) shape with open boundaries. The appropriate model is the anisotropic classical Dirac-Heisenberg model including exchange and dipolar interactions, and taking account of bulk and surface anisotropy. On the contrary, this system can only be dealt with using numerical approaches such as the classical Monte Carlo technique . In the case of a cubic system we obtain the thermal behaviour of local magnetisations at the center of faces, edges and corners. An exact and very useful relation between the intrinsic magnetisation and the magnetisation induced by the magnetic field, valid at all temperatures and fields, was obtained in Ref.. It was shown that the positive contribution of finite-size effects to the magnetisation is lower than the negative one rendered by boundary effects, thus leading to a net decrease of the magnetisation with respect to the bulk. For the maghemite, this study has been performed in a very small and constant magnetic field; the surface shell is assumed to be of constant thickness and only the particle size is varied. So, the thermal behaviour of the intrinsic magnetisation is obtained for different particle sizes . This behaviour is compared with that of a cubic maghemite particle with periodic boundary conditions but without anisotropy. In this case the contributions of finite-size and surface effects lead to the same results as for the cube system, but the difference between them is now much larger, due to surface anisotropy. In addition, we show that the magnetisation profile is temperature dependent. ## II Cubic system: $`D\mathrm{}`$ spherical model We consider an isotropic box-shaped magnetic system of volume $`𝒩=L^3`$, with simple-cubic lattice structure, and nearest-neighbour exchange coupling, in a uniform magnetic field. For this we use the Hamiltonian of the isotropic classical $`D`$-component vector model, that is, $$=𝐡\underset{i}{}𝐬_i\frac{1}{2}\underset{i,j}{}\lambda _{ij}\underset{\alpha =1}{\overset{D}{}}s_{\alpha i}s_{\alpha j},$$ (1) where $`𝐬_i`$ is the normalized $`D`$-component vector, $`\left|𝐬_i\right|=1`$; $`𝐡𝐇/J_0`$ is the magnetic field, and $`\lambda _{ij}J_{ij}/J_0`$ the exchange coupling. We also define the reduced temperature $`\theta T/T_c^{MFA},`$ $`T_c^{MFA}=`$ $`J_0/D`$ being the Curie temperature of this model in the mean-field approximation, $`J_0`$ is the zero-momentum Fourier component of $`J_{ij}`$. In this model, the magnetisation $`𝐦`$ is directed along the field $`𝐡,`$ so that $`𝐡=h𝐞_z`$ and $`𝐦_i=m_i𝐞_z.`$ Using the diagram technique for classical spin systems in the limit $`D\mathrm{},`$ generalizing it so as to include the magnetic field and adopting a matrix formalism, one ends up with a closed system of equations for the average magnetisation component $`m_is_{zi}`$ and correlation functions $`s_{ij}Ds_{\alpha i}s_{\alpha j}`$ with $`\alpha 2`$, $$\underset{j}{}𝒟_{ij}m_j=G_ih,\underset{j}{}𝒟_{ij}s_{jl}=\theta G_i\delta _{il},$$ (2) where $`𝒟_{ij}\delta _{ij}G_i\lambda _{ij}`$ is the Dyson matrix of the problem, and $`G_i`$ is a local function to be determined from the set of constraint equations on all sites $`i=1,\mathrm{},𝒩`$ of the lattice $$s_{ii}+𝐦_i^2=1.$$ (3) Now, we define the induced average magnetisation per site by $$𝐦=\frac{1}{𝒩}\underset{i}{}𝐦_i$$ (4) which vanishes for finite-size systems in the absence of magnetic field due to the Golstone mode associated with global rotations of the magnetisation. On the other hand, it is clear that at temperatures $`\theta 1`$ the spins in the system are aligned with respect to each other and there should exist an intrinsic magnetisation. The latter is usually defined for finite-size systems as $$M=\sqrt{\left(\frac{1}{𝒩}\underset{i}{}𝐬_i\right)^2}=\sqrt{𝐦^2+\frac{1}{𝒩^2}\underset{i,j=1}{\overset{𝒩}{}}s_{ij}},$$ (5) where the second equality is valid in the limit $`D\mathrm{}.`$ Note that $`Mm`$ and that $`M`$ remains non zero for $`h=0`$; in this case in the limit $`\theta 0,`$ $`s_{ij}=1`$ for all $`i`$ and $`j,`$ and $`M1.`$ For $`\theta \mathrm{}`$ the spins become uncorrelated and $`M1/\sqrt{𝒩}.`$ In the limit of $`𝒩\mathrm{},`$ the intrinsic magnetisation $`M`$ approaches that of the bulk system. In the presence of a magnetic field, the Goldstone mode is suppressed and the magnetisation $`𝐦`$ of Eq.(4) no longer vanishes, this is why we call it the supermagnetisation, in contrast with the intrinsic magnetisation $`M`$. If the field is strong the magnitude of the supermagnetisation approaches the intrinsic magnetisation. An important exact relation was established in Ref. between $`M`$ and $`m,`$ $$m=M\frac{2𝒩Mh/\theta }{1+\sqrt{1+(2𝒩Mh/\theta )^2}}=MB(𝒩MH/T),$$ (6) where $`B(\xi )=(2\xi /D)/\left[1+\sqrt{1+(2\xi /D)^2}\right]`$ is the Langevin function for $`D1`$. Note that Eq. (6) is usually applied to superparamagnetic systems with the spontaneous bulk magnetisation $`m_\mathrm{b}(T)`$ in place of $`M(T,H)`$. However, unlike $`m_\mathrm{b}(T)`$, the intrinsic magnetisation $`M`$ of Eq. (5) is a pertinent characteristic of a finite magnetic system and depends on both field and temperature. Solving the model above consists in determining $`𝐦_i`$ and $`s_{ij}`$ as functions of $`G_i`$ from the linear equations (2), and inserting these solutions in the constraint equation (3) in order to obtain $`G_i`$. Two types of boundary conditions are considered, free boundary conditions (fbc) and periodic boundary conditions (pbc). In the case of fbc, $`𝐦_i`$ and $`G_i`$ are inhomogeneous and $`s_{ij}`$ non-trivially depends on both indices due to boundary effects. In this case the exact solution is found numerically, though some analytic calculations can be performed at low temperature and field. Whereas in the pbc case the solution becomes homogeneous and the problem greatly simplifies. Although the model with pbc is unphysical, it allows for an analytical treatment and study of finite-size effects separately from boundary effects. At low temperature, the intrinsic magnetisation in the fbc case, including only the contributions from faces, reads, $$M1\frac{\theta W}{2}\left[1\mathrm{\Delta }_𝒩+\frac{6}{5}\frac{1}{L}\right],$$ (7) where $`W`$ is the well-known Watson’s integral and $$\mathrm{\Delta }_𝒩=\frac{1}{W}\left(W\frac{1}{𝒩}\underset{𝐪\mathrm{𝟎}}{}\frac{1}{1\lambda _𝐪}\right)>0$$ (8) describes the finite-size effects, with $`\mathrm{\Delta }_𝒩1/L`$, while the last term in (7) represents the contribution from boundaries. The first term, on the other hand, is the bulk contribution which survives in the limit $`L\mathrm{}`$. In contrast with the finite-size effects, boundary effects entail a decrease of the intrinsic magnetisation. The contributions to Eq. (7) from the edges and corner are of order $`\theta /L^2`$ and $`\theta /L^3`$, respectively. Fig. 1 shows the temperature dependence of the intrinsic magnetisation $`M`$ , Eq. (5), and local magnetisations of the $`14^3`$ cubic system with free and periodic boundary conditions in zero field. For periodic boundary conditions, $`M`$ exceeds the bulk magnetisation at all temperatures. In particular, at low temperatures this agrees with the positive sign of the finite-size correction to the magnetisation, Eq. (7). The magnetisation at the center of the cube with free boundary conditions is rather close to that for the model with pbc in the whole temperature range and converges with the latter at low temperatures. Local magnetisations at the center of the faces and edges and those at the corners decrease with temperature much faster than the magnetisation at the center. This is also true for the intrinsic magnetisation $`M`$ which is the average of the local magnetisation $`M_i`$ over the volume of the system. One can see that, in the temperature range below the bulk critical temperature, $`M`$ is smaller than the bulk magnetisation. This means that the boundary effects suppressing $`M`$ are stronger than the finite-size effects which lead to the increase of the latter, and this is in agreement with the low-temperature formula of Eq. (7). ## III Maghemite particles: Monte Carlo simulations In this section, we consider the more realistic case of (ferrimagnetic) maghemite nanoparticles ($`\gamma `$-Fe<sub>2</sub>O<sub>3</sub>) of ellipsoidal (or spherical) shape with open boundaries, in a very small and uniform magnetic field. The surface shell is assumed to be of constant thickness ($`0.35`$ nm) in, and only the particle size is varied. To deal with spatial magnetisation distributions one has to consider exchange, anisotropy and magneto-static energies together. Accordingly, our model for a nanoparticle is the classical Dirac-Heisenberg Hamiltonian including exchange and dipole-dipole interactions, anisotropy, and Zeeman contributions. Denoting (without writing explicitly) the dipole-dipole interaction by $`H_{dip}`$, our model reads, $``$ $`=`$ $`{\displaystyle \underset{i,𝐧}{}}J_{\alpha \beta }{\displaystyle \underset{\alpha ,\beta }{}}𝐒_i^\alpha 𝐒_{i+𝐧}^\beta K{\displaystyle \underset{i=1}{\overset{N_t}{}}}\left(𝐒_i𝐞_i\right)^2(g\mu _B)H{\displaystyle \underset{i=1}{\overset{N_t}{}}}𝐒_i+H_{dip}`$ (9) where $`J_{\alpha \beta }`$ are the exchange couplings between nearest neighbours spanned by the unit vector $`𝐧`$; $`𝐒_i^\alpha `$ is the (classical) spin vector of the $`\alpha ^{th}`$ atom at site $`i;`$ $`H`$ is the uniform field applied to all spins in the particle, $`K>0`$ is the anisotropy constant and $`𝐞_i`$ the single-site anisotropy axis. In both cases of a spherical and ellipsoidal particle, we consider a uniaxial anisotropy in the core along our $`z`$ reference axis (major axis for the ellipsoid), and single-site anisotropy on the surface, with equal anisotropy constant $`K_s,`$ and $`𝐞_i`$ are defined so as to point outward and normal to the surface . Our method of simulation proceeds as follows: we start with a regular box of spinel structure, then we cut in a sphere or an ellipsoid that contains the total number $`N_t`$ of spins of a given particle. We distinguish between spins in the core (of number $`N_c`$) from those on the surface ($`N_s`$) of the particle according to whether or not their coordination number is equal to that of a system with periodic boundary conditions (pbc). All spins in the core and on the surface are identical but interact via different couplings; exchange interactions between the core and surface spins are taken equal to those inside the core. Our parameters are as follows: the exchange interactions are (in units of K) $`J_{AB}/k_B28.1,J_{BB}/k_B8.6,J_{AA}/k_B21.0`$. The bulk and surface anisotropies are $`k_c(K_c/k_B)8.13\times 10^3,`$ $`k_s(K_s/k_B)0.5`$, respectively, where $`k_B`$ is the Boltzmann constant. In Fig. 2, we plot the thermal variation of the core and surface contributions to the magnetisation (per site) as a function of the reduced temperature $`\tau ^{core}T/T_c^{core}`$ for $`N_t=909,3766`$ corresponding to $`N_{st}N_s/N_t=53\%,41\%`$ and diameters of circa $`4`$ and $`6`$ nm, respectively. The core and surface magnetisations are averages over all spins in the core or on the surface, respectively. For both sizes we see that the surface magnetisation $`M_{surf}`$ decreases more rapidly than the core contribution $`M_c`$ as the temperature increases, and has a positive curvature while that of $`M_c`$ is negative. Moreover, it is seen that even the (normalised) core magnetisation per site does not reach its saturation value of $`1`$ at very low temperatures, which shows that the magnetic order in the core is disturbed by the surface (see Fig.4 below). As the size decreases the maximum value of $`M_{surf}`$ decreases showing that the magnetic disorder is enhanced. In Fig. 3 we plot the core and surface magnetisations (with $`N_t=909,3766`$, and $`N_{st}=53\%,41\%`$), the magnetisation of a cube with spinel crystalline structure and pbc, and the bulk magnetisation as functions of the reduced temperature $`\tau ^{core}`$. Apart from the obvious shift to lower temperatures of the critical region due to the finite-size and surface effects, we see that, as was also shown analytically for the cube system, the finite-size effects give a positive contribution to the magnetisation with respect to the bulk, whereas the surface effects yield a negative contribution. Moreover, it is seen that for nanoparticles the contribution from the surface is much larger than that coming from finite-size effects. The difference between the two contributions appears to be enhanced by the surface anisotropy in the case of nanoparticles. In Fig. 4 we plot the spatial evolution of the orientation of the magnetic moment from the center to the border of the particle, at different temperatures. At all temperatures, the magnetisation decreases with increasing particle radius. This obviously suggests that the magnetic disorder starts at the surface and gradually propagates into the core of the particle. At high temperatures, the local magnetisation exhibits a jump of temperature-dependent height, and continues to decrease. This indicates that there is a temperature-dependent radius, smaller than the particle radius, within which the magntisation assumes relatively high values. This result agrees with that of Ref. (for spherical nanoparticles with simple cubic structure) where this radius was called the magnetic radius. The local magnetisation also depends on the direction of the radius vector, especially in an ellipsoidal particle. ## IV Conclusion Both for the cube system and the nanoparticle of the maghemite type surface effects yield a negative contribution to the intrinsic magnetisation, which is larger than the positive contribution of finite-size effects, and this results in a net decrease of the magnetisation with respect to that of the bulk system. In the first case we have been able to separate finite-size effects from surface effects by considering the same system with periodic and free boundary conditions. On the other hand, the results for a spherical or ellipsoidal nanoparticle with free boundaries have been compared to those of a cube with a spinel structure and periodic boundary conditions, but without any anisotropy. In this case, it turns out that the contributions from surface and finite-size effects have the same sign as before but the difference between them becomes larger, due to surface anisotropy. These spin models invariably predict that the surface magnetisation (per spin) of systems with free boundaries is smaller than the magnetisation of the bulk system. However, experiments on layered systems, especially of 3d elements, have shown that there is enhancement of the magnetic moment on the surface, which has been attributed to the contribution of orbital moments. It is clear that the models presented here do not account for such effect, but they can be generalised so as to include orbital as well as spin vectors.
no-problem/9910/hep-th9910171.html
ar5iv
text
# LGCR–99/08/02DTP–MSU/99-22hep-th/9910171 Classical glueballs in non-Abelian Born-Infeld theory ## I Introduction The standard Yang–Mills theory does not admit classical particle-like solutions . More precisely, this famous no-go result asserts that there exist no finite–energy non–singular solutions to the four–dimensional Yang–Mills equations which would be either static, or non-radiating time–dependent. . Non-existence of static solutions can be related to conformal invariance of the Yang–Mills theory, which implies that the stress–energy tensor is traceless : $`T_\mu ^\mu =0=T_{00}+T_{ii}`$, where $`\mu =0,\mathrm{},3,i=1,2,3`$. Given the positivity of the energy density $`T_{00}`$, this means that the sum of the principal pressures $`T_{ii}`$ is everywhere positive, i.e. the Yang–Mills matter is repulsive. This makes the mechanical equilibrium impossible . The Higgs field breaks the conformal invariance of the pure Yang–Mills theory and so in the spontaneously broken gauge theories particle-like solutions may exist. Two types of such solutions are known: magnetic monopoles and sphalerons. Topological criterion for the existence of monopoles is the non-triviality of the second homotopy group of the broken phase manifold $`\pi _2(G/H)`$ associated with the configuration of the Higgs field. Thus topologically stable monopoles exist in the $`SO(3)`$ gauge theory with a real Higgs triplet, in which case $`G/H=S^2`$, but do not exist in the $`SU(2)`$ gauge theory with a complex Higgs doublet, where the symmetry is completely broken (the Higgs broken phase manifold is $`S^3`$). However, in the theory with doublet Higgs another particle–like solution has been found by Dashen, Hasslacher and Neveu . Its existence was explained by Manton as a consequence of non–triviality of the third homotopy group $`\pi _3(S^3)`$, indicating the presence of non–contractible loops in the configuration space. This solution is the sphaleron; it sits at the top of the potential barrier separating topologically distinct Yang–Mills vacua. Because of this position, the sphaleron is necessarily unstable. Still its rôle is very important, since in presence of fermions it can mediate transitions without the conservation of fermion number. In the latter case, the manifold of the Higgs broken phase coincides with the gauge group manifold, and it is not quite clear, whether it is the topology of the Higgs field, or the topology of the Yang–Mills field itself which is crucial for the existence of this solution. This issue was clarified after the discovery of sphaleron–like solutions in the $`SU(2)`$ gauge theory coupled to gravity, without Higgs fields at all. Particle–like solutions in this theory were found numerically by Bartnik and McKinnon (BK) ; their relation to sphalerons has been explained by Gal’tsov and Volkov and Sudarsky and Wald (for a recent review, see ). This and other examples (similar solutions exist in the flat space Yang–Mills theory coupled to the dilaton) show that the topological reason for the existence of sphalerons in the theories with gauge fields is the non-triviality of third homotopy class of the Yang–Mills gauge group (note that $`\pi _3(G)=Z`$ for any simple compact Lie group $`G`$). The Higgs field in this case just plays a rôle of attractive agent balancing the repulsive Yang–Mills forces. In other words, its function is to break the scale invariance of the Yang–Mills theory rather than the gauge invariance. The same symmetry breaking may occur due to gravity or the presence of dilaton field, which do not imply a spontaneous breaking of the gauge symmetry. The superstring theory gives rise to one important modification of the standard Yang-Mills quadratic Lagrangian suggesting the action of the Born-Infeld (BI) type . Such a modification also breaks the scale invariance, so the natural question arises whether in the Born–Infeld–Yang–Mills (BIYM) theory the non-existence of classical particle-like solutions can be overruled. This is particularly intriguing since now neither gravity, nor scalar fields are involved, so one is thinking about the genuine classical glueballs. Note that a mere scale invariance breaking, being a necessary condition, by no means guarantees the existence of particle-like solutions, and a more detailed study is needed to prove or disprove this conjecture. Our investigation shows that the $`SU(2)`$ BIYM classical glueballs indeed do exist and display a remarkable similarity with the BK solutions of the Einstein–Yang–Mills (EYM) theory. Non–Abelian generalisation of the Born–Infeld action presents an ambiguity in specifying how the trace over the the matrix–valued fields is performed in order to define the Lagrangian. Here we adopt the version with the ordinary trace which leads to a simple closed form for the action. In fact, another trace prescription is favored in the superstring context, namely, the symmetrized trace , but so far the explicit Lagrangian with such trace is known only as perturbative series . For our purposes the full non-perturbative Lagrangian is needed, so we consider the ordinary trace, presenting some arguments at the end of the paper about the possibility of extension of our results to the theory with symmetrized trace. The BIYM action with the ordinary trace looks like a straightforward generalisation of the corresponding $`U(1)`$ action in the “square root” form $$S=\frac{\beta ^2}{4\pi }(1)d^4x,$$ (1) where $$=\sqrt{1+\frac{1}{2\beta ^2}F_{\mu \nu }^aF_a^{\mu \nu }\frac{1}{16\beta ^4}(F_{\mu \nu }^a\stackrel{~}{F}_a^{\mu \nu })^2}.$$ (2) Here the dimensionless gauge coupling constant (in units $`\mathrm{}=c=1`$) is set to unity, so the only parameter of the theory is the constant $`\beta `$ of dimension $`L^2`$, the “critical” field strength. It is easy to see that the BI non-linearity breaks the conformal symmetry ensuring the non-zero trace of the stress–energy tensor $$T_\mu ^\mu =^1\left[4\beta ^2(1)F_{\mu \nu }^aF_a^{\mu \nu }\right]0.$$ (3) This quantity vanishes in the limit $`\beta 0`$ when the theory reduces to the standard one. For the YM field we assume the usual monopole ansatz $$A_0^a=0,A_i^a=ϵ_{aik}\frac{n^k}{r}(1w(r)),$$ (4) where $`n^k=x^k/r,r=(x^2+y^2+z^2)^{1/2}`$, and $`w(r)`$ is the real-valued function. After the integration over the sphere in (1) one obtains a two-dimensional action from which $`\beta `$ can be eliminated by the coordinate rescaling $`\sqrt{\beta }tt,\sqrt{\beta }rr`$. As a result we find the following static action: $$S=L𝑑r,L=r^2(1),$$ (5) with $$=\sqrt{1+2\frac{w^2}{r^2}+\frac{(1w^2)^2}{r^4}},$$ (6) where prime denotes the derivative with respect to r. The corresponding equation of motion reads $$\left(\frac{w^{}}{}\right)^{}=\frac{w(w^21)}{r^2}.$$ (7) A trivial solution to the Eq.(7) $`w0`$ corresponds to the pointlike magnetic BI-monopole with the unit magnetic charge (embedded $`U(1)`$ solution). In the Born–Infeld theory it has a finite self-energy. For time-independent configurations the energy density is equal to minus the Lagrangian, so the total energy (mass) is given by the integral $$M=_0^{\mathrm{}}(1)r^2𝑑r.$$ (8) For $`w0`$ one finds $`M`$ $`=`$ $`{\displaystyle \left(\sqrt{r^2+1}r^2\right)𝑑r}`$ (9) $`=`$ $`{\displaystyle \frac{\pi ^{3/2}}{3\mathrm{\Gamma }(3/4)^2}}1.23604978.`$ (10) Let us look now for essentially non–Abelian solutions of finite mass. In order to assure the convergence of the integral (8) the quantity $`1`$ must fall down faster than $`r^3`$ as $`r\mathrm{}`$. Thus, far from the core the BI corrections have to vanish and the Eq.(7) should reduce to the ordinary YM equation. The latter is equivalent to the following two-dimensional autonomous system : $$\dot{w}=u,\dot{u}=u+(w^21)w,$$ (11) where a dot denotes the derivative with respect to $`\tau =\mathrm{ln}r`$. This dynamical system has three non-degenerate stationary points $`(u=0,w=0,\pm 1)`$, from which $`u=w=0`$ is a focus, while two others $`u=0,w=\pm 1`$ are saddle points with eigenvalues $`\lambda =1`$ and $`\lambda =2`$. The separatices along the directions $`\lambda =1`$ start at infinity and after passing through the saddle points go to the focus with the eigenvalues $`\lambda =(1\pm i\sqrt{3})/2`$. The function $`w(\tau )`$ approaching the focus as $`\tau \mathrm{}`$ is unbounded. Two other separatices, passing through saddle points along the directions specified by $`\lambda =2`$, go to infinity in both directions. Since there are no limiting circles, generic phase curves go to infinity or approach the focus, unless $`w=0`$ identically. All of them produce a divergent mass integral (8). The only trajectories remaining bound as $`\tau \mathrm{}`$ are those which go to the saddle points along the separatrices specified by $`\lambda =1`$. From this reasoning one finds that the only finite-energy configurations with non-vanishing magnetic charge are the embedded U(1) BI-monopoles. Indeed, such solutions should have asymptotically $`w=0`$, which does not correspond to bounded solutions unless $`w0`$. The remaining possibility is $`w=\pm 1,\dot{w}=0`$ asymptotically, which corresponds to zero magnetic charge. Coming back to $`r`$-variable one finds from (7) $$w=\pm 1+\frac{c}{r}+O(r^2),$$ (12) where $`c`$ is a free parameter. This gives a convergent integral (8) as $`r\mathrm{}`$. Note that two values $`w=\pm 1`$ correspond to two neighboring topologically distinct YM vacua. Now consider local solutions near the origin $`r=0`$. For convergence of the total energy (8), $`w`$ should tend to a finite limit as $`r0`$. Then using the Eq.(7) one finds that the only allowed limiting values are $`w=\pm 1`$ again. In view of the symmetry of (7) under reflection $`w\pm w`$, one can take without loss of generality $`w(0)=1`$. Then the following Taylor expansion can be checked to satisfy the Eq.(7): $$w=1br^2+\frac{b^2(44b^2+3)}{10(4b^2+1)}r^4+O(r^6),$$ (13) with $`b`$ being (the only) free parameter. As $`r0`$, the function $``$ tends to a finite value $$=_0+O(r^2),_0=1+12b^2.$$ (14) By rescaling $`r^2_0=\stackrel{~}{r}^2`$ one can cast the Eq.(7) again into the form of the dynamical system (11), so by the same reasoning the series (13) may be shown to correpond to the local solution starting as $`\stackrel{~}{\tau }\mathrm{},\stackrel{~}{\tau }=\mathrm{ln}\stackrel{~}{r}`$ from the saddle point $`u=0,w=1`$ along the separatrix $`\lambda =2`$. Another bounded $`w`$ satisfying the dynamical system (11) might start at the focal point. But then in terms of $`\stackrel{~}{r}`$ $$wC\sqrt{\stackrel{~}{r}}\mathrm{sin}\left(\frac{\sqrt{3}}{2}\mathrm{ln}\stackrel{~}{r}+\alpha \right)$$ (15) with $`\alpha =const`$, this does not satisfy the assumption $`const`$, therefore it is not a solution of the initial system (7). Thus we proved that any regular solution of the Eq.(7) belongs to the one-parameter family of local solutions (13) near the origin. It follows that the global finite energy solution starting with (13) should meet some solution from the family (12) at infinity. Since both these local solutions are non–generic, one can at best match them for some discrete values of parameters. To complete the existence proof one has to show that this discrete set of parameters is non-empty. The idea of the proof is as follows. First, rewrite the Eq.(7) in the resolved form $$\ddot{w}=\gamma \dot{w}+w(w^21),$$ (16) where the “negative friction coefficient” is $$\gamma =1+\frac{\dot{}}{}=1\frac{\left[\dot{w}+w(1w^2)\right]^2+(1w^2)^3}{r^4+(1w^2)^2}.$$ (17) It is easy to show that $`w`$ can not have local minima for $`0<w<1,w<1`$ and can not have local maxima for $`1<w<0,w>1`$. In view of (12)(13) one finds that any regular solution lies entirely within the strip $`1<w<1`$ and has at least one zero. Once $`w`$ leaves the strip, it has to diverge. The divergence occurs at some finite $`\tau =\tau _0`$ with the following leading term : $$w\pm \frac{1}{\sqrt{\tau _0\tau }}.$$ (18) The Eq.(16) may be presented in the form of the “energy equation” $$\dot{}=\gamma \dot{w}^2,=\frac{1}{2}\dot{w}^2\frac{1}{4}(1w^2)^2.$$ (19) For the ordinary quadratic Yang-Mills system $`\gamma 1`$, so the “energy” $``$ diverges soon after the solution leaves the strip $`[1,\mathrm{\hspace{0.17em}1}]`$. However, in the present case $`\gamma `$ can become negative when $`\dot{w}`$ and $`w`$ grow up, and this can stop further “acceleration” or even reverse it. One has to show that this may happen before $`w`$ leaves the strip $`[1,\mathrm{\hspace{0.17em}1}]`$. Observe that in the Eq.(16) all terms except for $`\dot{}/`$ in $`\gamma `$ (17) are invariant under rescaling $`kr\widehat{r}`$, while the $``$-term changes to $$\sqrt{1+\frac{k^4}{\widehat{r}^4}\left[2\dot{w}^2+(1w^2)^2\right]}.$$ (20) Thus, fixing the scale $`k^2=b`$, where $`b`$ is the free parameter of the local solution (13), one finds that, for sufficiently large $`b`$, the function $`\gamma `$ can be made negative in any desired region. Now, if $`b`$ is too large, the sign of the derivative $`\dot{w}`$ will be reversed, and $`w`$ will leave the strip in the positive direction. For some precisely tuned value of $`b`$ the solution will remain a monotonous function of $`\tau `$ reaching the value $`1`$ at infinity (Fig.1). This happens for $`b_1=12.7463`$. By a similar reasoning one can show that for another fine-tuned value $`b_2>b_1`$ the integral curve $`w(\tau )`$ which has a minimum in the lower part of the strip and then becomes positive will be stabilized by the friction term in the upper half of the strip and tend to $`w=1`$. This solution will have two nodes. Continuing this process we obtain the increasing sequence of parameter values $`b_n`$ for which the solutions remain entirely within the strip $`[1,\mathrm{\hspace{0.17em}1}]`$ tending asymptotically to $`(1)^n`$. The lower values $`b_n`$ found numerically are given in Tab.1. | $`n`$ | $`b`$ | $`M`$ | | --- | --- | --- | | $`1`$ | $`1.27463\times 10^1`$ | $`1.13559`$ | | 2 | $`8.87397\times 10^2`$ | $`1.21424`$ | | 3 | $`1.87079\times 10^4`$ | $`1.23281`$ | | 4 | $`1.27455\times 10^6`$ | $`1.23547`$ | | 5 | $`2.65030\times 10^7`$ | $`1.23595`$ | | 6 | $`1.80475\times 10^9`$ | $`1.23596`$ | Tab 1. Parameters $`b,M`$ for first six solutions. This picture displays a striking similarity with the one occuring for the EYM system . However, there is one important distinction. In the EYM case the sequence $`b_n`$ converges to a finite value $`b_{\mathrm{}}`$, and the limiting solution exists with an infinite number of zeros . In our case the sequence $`b_n`$ has no finite limit. The region of oscillations expands with growing $`n`$, and so does the size of the particles (see Fig. 2). Typically, the first and the last amplitude have large enough values, while in the middle zone the amplitude of oscillations becomes very small with increasing $`n`$ (i.e. an observer placed inside the core will see the unscreened magnetic charge). On the contrary, with $`n`$ increasing the mass rapidly converges to the finite value (9) corresponding to the abelian solution $`w0`$. Like in the BK case, solutions with odd and even node number $`n`$ have different physical meaning . The lowest one with $`n=1`$ is the direct analog of the sphaleron. It can be shown to have the Chern-Simons number $`Q=1/2`$, to possess a fermionic zero mode and it is expected to have one odd-parity unstable decay mode along the path from the initial to the neighboring vacuum. The potential barrier between the neighboring vacua hence has a finite height. Higher odd-$`n`$ solutions also have $`Q=1/2`$, but possess more than one decay direction leading to the neighboring vacuum; they are expected to have $`n`$ odd-parity negative modes. Solutions with even values of $`n`$ have $`Q=0`$, and correspond to the paths in the phase space returning back to the same vacuum. These may be contiuously deformed to the trivial vacuum $`w1`$ and therefore are topologically trivial. If one uses the BIYM Lagrangian defined with the symmetrized trace, the equation of motion still preserves the form (7) with another friction coefficient $`\gamma `$ and an additional function of two variables $`\dot{w}^2,(1w^2)^2`$ in front of the force term. It can be shown that the minima/maxima argument used above still holds as well as the $`\gamma `$-scaling argument. Therefore we expect that classical glueballs will persist in this version of the BIYM theory too. It can be expected that the spectrum of magnetic monopoles in the BIYM–Higgs theory is affected by sphaleronic excitations like in the case of gauge monopoles coupled to gravity (for discussion and references see ). The occurence of the limiting value of $`\beta `$ found in is likely to be a typical signal. We wish to thank G.W. Gibbons, N.S. Manton, G. Clement and M.S. Volkov for valuable comments. One of the authors (DVG) would like to thank the Laboratory of Gravitation and Cosmology of the University Paris-6 for hospitality and the CNRS for support while this work was initiated.
no-problem/9910/nucl-th9910050.html
ar5iv
text
# Nuclear Matter with Quark-Meson Coupling I: Comparison of Nontopological Soliton Models ## 1 Introduction Ever since the advent of quantum chromodynamics (QCD) it has been popular to describe the nucleon in terms of bag or soliton models. There are many versions of such models, characterized by two extremes: the MIT bag model , where the nucleon consists of just constituent quarks arbitrarily restricted in a given volume, and the Skyrme model , where there are no quarks and the nucleon is instead a topological soliton of the pion field. Other models interpolate between these two extremes, seeking to combine the obvious emphases on the structure of nucleons within the MIT bag model and on nuclear interactions via meson exchange within the Skyrme model. For example, chiral bag models surround an MIT-like bag of quarks with a Skyrme-like cloud of pions (and perhaps other mesons as well). Clearly, each such model attempts to balance between low-energy degrees of freedom — the mesons — and high-energy degrees of freedom — the quarks (and possibly gluons as well). Our aim here is to develop a model that as simply as possible, but without losing essential physics, combines quark and meson degrees of freedom. This task is made more difficult by the “Cheshire cat principle” , an extrapolation of results found from chiral bag models, which find that low-energy nucleon properties are largely insensitive to the size of the quark bag. As the bag shrinks, the meson cloud forms more and more of the nuclear structure. To select a model, we must study nuclear properties that distinguish between a bag of quarks and a cloud of mesons. An obvious testing ground for any bag or soliton model is dense baryon matter, where the structure of individual nucleons will become as important as the interactions between neighboring nucleons. Preferably, the model will have a dynamical formation of the bag or soliton, for then one can treat the transition to a quark-gluon plasma consistently. Surprisingly enough, the Skyrme model, which may be thought of as a chiral bag model with the quark bag shrunk to zero size, does predict a dynamical transition to a phase of solitons of fractional baryon number at high densities ; however, identifying this phase with a quark-gluon plasma is certainly problematic. Thus in this paper we study dense nuclear matter in a set of models known as non-topological soliton (NTS) models. In particular, we study the Friedberg-Lee (FL) model and a class of related models, the chiral chromodielectric ($`\chi `$CD) models , as well as extensions of these models (or, one might argue, approximations to these models) that explicitly include couplings to mesons. These models, based upon general arguments from QCD, are characterized by the coupling of quarks to a scalar field $`\sigma `$ that has a nonzero vacuum expectation field. This field is understood to be a composite gluon field. The interaction between the quarks and scalar field leads to a dynamical confinement mechanism, with the quarks carving a hole in the background scalar field. The structure of this bag depends precisely on how the quarks couple to the scalar field, and it is this coupling that distinguishes the various models we study here. All these models reproduce single nucleon properties reasonably well — indeed, chiral bag models show that single nucleon properties are relatively insensitive to the structure of the quark bag. We must look at high densities, where the bags begin to overlap, to see the differences between models with different quark-$`\sigma `$ couplings. In the last few years there also has been considerable interest in the application of the quark-meson coupling (QMC) model to study the nuclear matter equation of state, as well as medium effects on the nucleon structure and nucleon-meson coupling, motivated by the apparent success of the Walecka QHD models in describing the properties of nuclear matter. The QMC model, initially suggested by Guichon , consists of non-overlapping nucleon (MIT) bags that interact through the exchange of scalar and vector mesons in the mean-field approximation (MFA). This model has been generalized to include Fermi motion and center of mass corrections and was applied to nuclear matter (see and references therein) and also to finite nuclei . Of course, the assumption that the nucleons can be regarded as non-overlapping bags is only valid at low density, where these models seem to capture the essential physics. Already at nuclear saturation density, however, the internucleon separation is comparable to the nucleon radius, and at higher densities the assumption that the bags do not overlap clearly breaks down. This assumption becomes even more questionable when a modification of the bag constant in nuclear matter is taken into account. Jin and Jennings and Müller and Jennings have recently shown that the introduction of a density-dependent bag constant can reproduce the EMC effect and reconcile the QMC results with those of the Walecka QHD-I model . As a result the bag radius grows with increasing density and the overlapping of the bags starts just above nuclear saturation density. To take into account the effects from overlapping nucleon bags — that is, to study a nuclear liquid rather a nuclear gas — it is clearly of interest to introduce some dynamics in the confining mechanism. Thus we are led to replace the MIT bag model by a NTS model. Although the study of nuclear matter properties was begun long ago (see and references therein) and was carried out with different soliton models and with different levels of sophistication (see and references therein), none of these calculations included the effect of background meson fields on nuclear matter. Now, in principle, the non-topological soliton models — in particular, the $`\chi `$CD model, which in its full form includes perturbative gluon exchange — contain sea quarks and meson exchange explicitly. However, in practice it is very difficult to deal with anything besides the nuclear constituent quarks. In the interest of simplicity (and at the sacrifice of consistency), we extend our NTS models to include explicitly meson degrees of freedom. (Such an extension has been referred to as the local uniform approximation to $`\chi `$CD in .) In the following, then, we shall study a system of non-topological solitons interacting via the exchange of scalar and vector mesons within the MFA. In modeling nuclear matter each soliton is centered in a spherical Wigner-Seitz (WS) cell. It has been argued that the choice of a spherical WS cell is more appropriate for a fluid phase than a crystal as it represents an angular average over neighbouring sites. We consider two further approximations: in one, the WS calculation is simply used to provide an effective nucleon mass and the kinetic energy is then taken to be that of a Fermi gas, thus providing the correct low-density limit. A second approximation uses a Bloch-like boundary condition to calculate the band structure of the quark states. Neither of these approximations is quite satisfactory, and the second paper of this series is devoted to improving the modeling of a liquid of solitons. Nevertheless, we can still hope that the major qualitive features of dense nuclear matter will be reproduced even given the two approximations used here. Indeed, we find some encouraging results such as nuclear saturation and an increase of the proton rms with nuclear density. These effects are shown to derive from the background meson fields, which have a considerable effect on the formation of energy bands in nuclear matter. In this paper we limit our attention to Friedberg-Lee soliton type models extended to include the meson fields. The non-topological soliton models are presented in Sec. 2. The Wigner-Seitz approximation is then discussed in Sec. 3. In Sec. 4 we study in some detail the trivial solution of the model. Using these solutions we show that the high density limit of the Friedberg-Lee type models depends on the leading power of the quark-$`\sigma `$ coupling vertex. The numerical results are presented in Sec. 5 ## 2 Soliton models with quark-meson coupling Our starting point for studying the high density behavior of soliton matter is the Friedberg-Lee non-topological soliton model, and we also consider related models like the chiral chromodielectric model of Fai, Perry and Wilets . In their simplest versions, these models include only constituent quarks and a single scalar field $`\sigma `$ that couples to the quarks. For the light quarks we shall assume $`m_u=m_d=0`$. Extending these models in the spirit of the quark-meson coupling model, we introduce in addition two meson fields: namely, a scalar meson $`\varphi `$ and a vector meson $`V_\mu `$, which play important roles in quantum hadrodynamics. We assume these mesons couple linearly to the quarks. There is some freedom in the structure of the quark-meson vertex, in regard to its dependence on the soliton field $`\sigma `$. Using the Nielsen-Patkos Lagrangian , Banerjee and Tjon have recently argued that in NTS models the quark-meson coupling should also depend on the scalar soliton field. Similar conclusions are reached by Krein, et al. . However, we have found that this causes unwanted behavior within the mean field and Wigner-Seitz approximations used here (see Sec. 4), and so we report calculations that use $`\sigma `$-independent quark-meson couplings. As we shall see, this choice has the advantage of reproducing the Quantum Hadrodynamics equation of state at low densities. Thus we take the Lagrangian density to have the form $``$ $`=`$ $`\overline{\psi }\left[i\gamma ^\mu _\mu m_f(g(\sigma )+g_s\varphi g_v\gamma ^\mu V_\mu )\right]\psi +{\displaystyle \frac{1}{2}}_\mu \sigma ^\mu \sigma U(\sigma )`$ (1) $`+{\displaystyle \frac{1}{2}}_\mu \varphi ^\mu \varphi {\displaystyle \frac{1}{2}}m_s^2\varphi ^2{\displaystyle \frac{1}{4}}F_{\mu \nu }F^{\mu \nu }+{\displaystyle \frac{1}{2}}m_v^2V_\mu V^\mu `$ where the $`\sigma `$ field self interaction is assumed to be $$U(\sigma )=\frac{a}{2!}\sigma ^2+\frac{b}{3!}\sigma ^3+\frac{c}{4!}\sigma ^4+B.$$ (2) The constants $`a`$, $`b`$ and $`c`$ are fixed so that $`U(\sigma )`$ has a local minimum at $`\sigma =0`$ (inflection point if $`a`$=0) and a global minimum at $`\sigma =\sigma _v`$, the vacuum value. The mass of the glueball excitation associated with the $`\sigma `$ field is given by $`m_{GB}=\sqrt{U^{\prime \prime }(\sigma _v)}`$. The quark-$`\sigma `$ coupling is taken to be $$g(\sigma )=\{\begin{array}{cc}g_\sigma \sigma & \text{for the FL model }\hfill \\ g_\sigma \sigma _v\left[\frac{1}{\kappa (\sigma )}1\right]& \text{for the }\chi \text{CDM }\hfill \end{array}$$ (3) where the chromodielectric function $`\kappa (\sigma )`$ has the form $$\kappa (\sigma )=1+\theta (x)x^{n_\kappa }[n_\kappa x(n_\kappa +1)];x=\sigma /\sigma _v,$$ (4) In the following we will take $`n_\kappa =3`$. In perturbative calculations that include gluons the dielectric function $`\kappa (\sigma )`$ is regularized in order to handle infinities in the one gluon exchange diagrams associated with a vanishing dielectric constant . Such a regulation is also useful for our numerical work. Koepf et. al. use the prescription $$\kappa (\sigma )\kappa (\sigma )(1\kappa _v)+\kappa _v$$ (5) where $`\kappa _v`$ is a constant often simply fixed at $`\kappa _v=0.1`$. However, this regulation forces $`g^{}(\sigma _v)=0`$, and we prefer instead to regulate as follows: $$\kappa (\sigma )=1+\theta (x)x^{n_\kappa }[n_\kappa x(n_\kappa +1\kappa _v)].$$ (6) We find that properties of isolated solitons are independent of $`\kappa _v`$ for values even as large as .1, but results become sensitive to $`\kappa _v`$ at high densities. Here, we shall treat $`\kappa _v`$ as an additional parameter. The Euler-Lagrange equations corresponding to (1) are given by $$\left[\gamma ^\mu \left(i_\mu +g_vV_\mu \right)\left(m_f+g(\sigma )g_s\varphi \right)\right]\psi =0$$ (7) $$_\mu ^\mu \sigma +U^{}(\sigma )+g^{}(\sigma )\overline{\psi }\psi =0$$ (8) $$_\mu ^\mu \varphi +m_s^2\varphi g_s\overline{\psi }\psi =0$$ (9) $$_\mu F^{\mu \nu }m_v^2V^\nu +g_v\overline{\psi }\gamma ^\nu \psi =0$$ (10) where $`U^{}(\sigma )=\frac{dU(\sigma )}{d\sigma }`$ and $`g^{}(\sigma )=\frac{dg(\sigma )}{d\sigma }`$ . We solve Eqs. (7-10) in the mean field approximation: we replace the soliton field $`\sigma `$ by a c-number $`\sigma \sigma (\stackrel{}{r})`$ and the meson fields by their expectation values in the nuclear medium $`\varphi <\varphi >=\varphi _0`$ and $`V_\mu <V_\mu >=\delta _{\mu 0}V_0`$, with $`\varphi _0`$ and $`V_0`$ constants. The approximation that the scalar and vector mesons fields can be regarded as constants while the soliton field is to depend on spatial coordinates stems from the long range nature of the light mesons and the short range nature of the soliton field due to the large glueball mass. In essence, the mesons are fast degrees of freedom and the glueball is slow, and we use a Born-Oppenheimer approximation. The resulting equations for the quark and the scalar soliton field are, $$\left[i\stackrel{}{\alpha }\stackrel{}{}+g_vV_0+\beta \left(m_f+g(\sigma )g_s\varphi _0\right)\right]\psi _k=ϵ_k\psi _k$$ (11) $$^2\sigma +U^{}(\sigma )+g^{}(\sigma )\underset{k(valance)}{}\overline{\psi _k}\psi _k=0.$$ (12) We consider only valence quarks in our calculations. ## 3 The Wigner-Seitz approximation In order to use Eqs. (11-12) for the study of high density nuclear matter, it is sufficient to concentrate on a unit cell and solve these equations with the appropriate boundary conditions. In this paper we shall assume that each unit cell contains a single nucleon, that is, our unit cell coincides with a Wigner-Seitz cell. The single nucleon energy $`E_N`$ is a sum of two terms, the energy of the $`n_q=3`$ quarks and the energy carried by the scalar soliton field $`\sigma `$: $$E_N=n_qϵ_q+_{WScell}𝑑\stackrel{}{r}\left[\frac{1}{2}(\stackrel{}{}\sigma )^2+U(\sigma )\right].$$ (13) The quark energy $`ϵ_q`$ should be regarded as the eigenvalue of Eq. (11) for isolated bags, or as the average energy in the band for dense nuclear matter. It should be noted that $`E_N`$ cannot be identified with the nucleon mass as it contains spurious center of mass motion . In order to correct the center of mass motion in the Wigner-Seitz cell the nucleon mass at rest is taken to be $$M_N=\sqrt{E_N^2<P_{cm}^2>_{WS}},$$ (14) where $`<P_{cm}^2>_{WS}=n_q<p_q^2>_{WS}+m_{GB}<(\stackrel{}{}\sigma )^2>_{WS}`$. The notation $`<>_{WS}`$ stands for an average over the Wigner-Seitz cell. Thus $`<p_q^2>_{WS}`$ is the expectation value of the quark momentum squared and $`m_{GB}<(\stackrel{}{}\sigma )^2>_{WS}`$ is the scalar soliton momentum squared. The latter average is obtained using a coherent state approximation , which is essentially a single mode correction to the classical soliton mass. We find this latter correction to be small with respect to the total mass of the soliton and ignore it henceforth (it is not clear how valid this single-mode approximation is: at higher density we find that this correction exceeds the energy of the $`\sigma `$ field, with both going to zero). At low density the band width vanishes and the quarks are confined in separate bags. Then we expect the following approximation to be most accurate: we assume the individual nucleons move around as a gas of fermions with effective mass $`M_N`$ given by Eq. (14), and so we get the following estimate for the total energy density at nuclear density $`\rho _B`$: $$=\frac{\gamma }{2\pi ^2}_0^{k_F}𝑑kk^2\sqrt{M_N^2+k^2}+\frac{1}{2}m_s^2\varphi _0^2\frac{1}{2}m_v^2V_0^2,$$ (15) where $`\gamma =4`$ is the spin-isospin degeneracy of the nucleons. The Fermi momentum of the nucleons is related to the baryon density through the relation $$\rho _B=\frac{\gamma }{6\pi ^2}k_F^3.$$ (16) The total energy per baryon is given by $`E_B=/\rho _B`$. The constant scalar meson field $`\varphi _0`$ is determined by the thermodynamic demand of minimizing $``$, which gives $$\varphi _0=\frac{\gamma }{4\pi ^2m_s^2}_0^{k_F}𝑑kk^2\frac{\frac{d}{d\varphi _0}(E_N^2<P_{cm}^2>)}{\sqrt{E_N^2<P_{cm}^2>_{WS}+k^2}}.$$ (17) The vector meson field is determined by averaging the Euler-Lagrange equation, Eq. (10), on a Wigner-Seitz cell yielding $$V_0=\frac{g_v}{m_v^2}<\psi ^{}\psi >=\frac{g_v}{m_v^2}\underset{k(valance)}{}<\psi _k^{}\psi _k>_{WS}\rho _B=\frac{n_qg_v}{m_v^2}\rho _B.$$ (18) These equations are similar to those of quantum hadrodynamics, the difference here being that the nucleon now has structure and thus the meson fields couple to the nucleon through its quarks. At low density the nucleon mass approaches its free value, and our mean field equations reduce to those of quantum hadrodynamics. The approximation used here for the nuclear system has often been adopted in modeling soliton matter, and is generally known as the Wigner-Seitz approximation. Each soliton is enclosed in a sphere of radius $`R`$ such that $`\frac{4\pi }{3}R^3=1/\rho _B`$. On a periodic lattice the quark functions should satisfy Bloch’s theorem $`\psi (\stackrel{}{r}+\stackrel{}{a})=e^{i\stackrel{}{k}\stackrel{}{a}}\psi (\stackrel{}{r})`$ for the lattice vectors $`\stackrel{}{a}`$. Concentrating on a single cell the Bloch theorem gives boundary conditions for the quark spinors in that cell. Although one can solve these boundary condition in a self consistent manner , we shall make the simplifying assumption of identifying the bottom of the lowest band by the demand that the derivative of the upper component of the Dirac function disappears at $`R`$, and the top of that band by the demand that the value of the upper component is zero at $`R`$ . The quark spinor in the lowest band is assumed to be an s-state $$\psi _k=\left(\begin{array}{c}u_k(r)\\ i\sigma \widehat{r}v_k(r)\end{array}\right)\chi ,$$ (19) and the resulting Euler-Lagrange equations for the spinor components are $$\frac{du_k}{dr}+\left[m_f+g(\sigma )g_s\varphi _0+(ϵ_k+g_vV_0)\right]v_k=0$$ (20) $$\frac{dv_k}{dr}+\frac{2v_k}{r}+\left[m_f+g(\sigma )g_s\varphi _0(ϵ_k+g_vV_0)\right]u_k=0.$$ (21) The corresponding equation, (12), for the soliton field assumes the form $$^2\sigma +U^{}(\sigma )+g^{}(\sigma )\rho _s(r)=0.$$ (22) The quark density $`\rho _q`$ and the quark scalar density $`\rho _s`$ are given by $$\rho _q(r)=\frac{n_q}{4\pi \overline{k}^3/3}_0^{\overline{k}}d^3k\left[u_k^2(r)+v_k^2(r)\right],$$ (23) $$\rho _s(r)=\frac{n_q}{4\pi \overline{k}^3/3}_0^{\overline{k}}d^3k\left[u_k^2(r)v_k^2(r)\right],$$ (24) where the band is filled up to $`\overline{k}`$. The quark functions are normalized so that there are three quarks in the Wigner-Seitz cell. The boundary conditions for the soliton field are $`\sigma ^{}(0)=\sigma ^{}(R)=0`$. The boundary conditions for the quark functions at the origin are given by $`u(0)=u_0`$ and $`v(0)=0`$, where $`u_0`$ is determined by the normalization condition $$_0^R4\pi r^2𝑑r(u(r)^2+v(r)^2)=1.$$ (25) The boundary conditions at $`r=R`$ are given by $$u_b^{}(R)=0v_b(R)=0$$ (26) for the bottom of the lowest band, and $$u_t(R)=0$$ (27) for the top of the band. Using these equations we can solve for the corresponding $`ϵ_b`$ and $`ϵ_t`$. We assume the tight-binding dispersion relation $$ϵ_k=ϵ_b+(ϵ_tϵ_b)\mathrm{sin}^2\left(\frac{\pi k}{2k_t}\right),$$ (28) and that the band is filled right to the top $`k_t`$. The assumption of such a dilute filling has been made previously, and we do not discuss it further. The quark functions corresponding to the energy $`ϵ_k`$ can be simply obtained by integrating Eqs. (20) and (21) for each intermediate value $`ϵ_k`$. Substituting the dispersion relation into Eq. (13), the nucleon energy is given by $$E_N=\frac{3n_q}{k_t^3}_0^{k_t}𝑑kk^2ϵ_k+_0^R4\pi r^2𝑑r\left[\frac{1}{2}\sigma ^{}(r)^2+U(\sigma )\right],$$ (29) which is then used in (14) to determine the equation of state (15). At higher density, where the bags begin to overlap, there is no reason to assume each quark is tightly bound to a single bag nor to impose that 3$`q`$ groups move collectively with a well-defined momentum. Our equation of state cannot then be considered a good approximation at higher densities, and we must then find a different approximation for handling the kinetic energy. This is the subject of the next paper in this series. For now we are content with studying low density behavior, with a special interest in whether our approximations can still produce saturation at the expected density. In fact, as one approaches the empirical nuclear saturation density, the different models begin to distinguish themselves. This is discussed in detail in the next section. ## 4 The Trivial solution The NTS models we are considering have a uniform plasma phase that is preferred at high densities. This corresponds to the solution $`\sigma =0`$, so that the soliton bags “dissolve” and the quarks are free. For the original FL model, this solution is favored at unreasonably low densities. Moreover, in the WS calculations , for cell radii below $``$0.8-0.9 fm only a trivial constant solution can be found. For the more sophisticated analysis reported in , which includes higher partial waves in the quark wave functions and direct computation of the bands, the calculation breaks down already at a cell radius $``$1.4 fm. We view this as a fault of the model and not of the WS approximation, for regardless of whether our restricting the soliton to a single spherical cell represents a dense system well or not, our model of the nucleon should nevertheless allow the soliton to be squeezed to volumes well below nuclear density before it breaks apart. (The saturation density of nuclear matter is $`\rho _0`$=0.17fm<sup>-3</sup>, which corresponds to a WS cell radius $`R_0`$=1.12fm.) This leads us to look for improvements upon the original FL model, and the two particular extensions studied in this paper are generalizations of the quark-glueball coupling $`g(\sigma )`$ and the addition of explicit meson degrees of freedom. Before proceeding to the numerical results, let us look at how these extensions affect the trivial solution. The trivial solution to the WS equations (20-22) is $`u_k(r)=u_0`$, $`v_k(r)=0`$ and $`\sigma (r)=\sigma _0`$, where the constants $`u_0`$ and $`\sigma _0`$ are independent of both $`r`$ and $`k`$. Strictly, for $`k>0`$ this solution does not satisfy our boundary conditions; however, we find that below a given cell volume the numerical solution develops a singularity at the boundary in order to reproduce this preferred solution. Moreover, the more accurate self-consistent calculations of for the FL model (which do not assume just $`s`$-wave states in the quark wave function) show that the width of the band also narrows sharply at the onset of the trivial solution, indicating that the $`k`$-dependence is not important. Thus in order to find the trivial solution that characterizes the breaking down of the WS calculation, we may assume that all the quarks are at the bottom of the band. This corresponds to squeezing an isolated soliton, and we insist that our model favors a nontrivial solution until the cell radius gets well below that corresponding to nuclear density. In the present section we shall consider altering the quark-meson couplings in our models as suggested in Lagrangian , namely: $$g_sg_sg(\sigma ),g_vg_vg(\sigma ).$$ (30) This will allow us to investigate whether the coupling to meson fields can help cure the breaking down of the FL model at too low density. For the trivial solution, then, the normalization condition (25) gives $$u_0=\left(\frac{3}{4\pi R^3}\right)^{1/2}=\rho _B^{1/2}.$$ (31) From Eq. (21) we find the quark eigenenergy is $$ϵ=m_f+g(\sigma _0)[1g_s\varphi _0+g_vV_0],$$ (32) where $`\sigma _0`$ obeys Eq. (22), modified to account for the change in $`q`$-$`\sigma `$ coupling, which leads to $$U^{}(\sigma _0)=3\rho _Bg^{}(\sigma _0)[1g_s\varphi _0+g_vV_0].$$ (33) Solving for the mean field values of $`\varphi _0`$ and $`V_0`$, we have $$\varphi _0=\frac{3g_s}{m_s^2}\rho _Bg(\sigma _0)\mathrm{and}V_0=\frac{3g_v}{m_v^2}\rho _Bg(\sigma _0),$$ (34) yielding finally (for $`m_f`$=0) $$ϵ=g(\sigma _0)\left\{13\rho _Bg(\sigma _0)\left[\frac{g_s^2}{m_s^2}\frac{g_v^2}{m_v^2}\right]\right\}$$ (35) and the total energy density $$=3\rho _Bg(\sigma _0)\left\{1\frac{3}{2}\rho _Bg(\sigma _0)\left[\frac{g_s^2}{m_s^2}\frac{g_v^2}{m_v^2}\right]\right\}+U(\sigma _0).$$ (36) This is clearly a spurious solution corresponding to putting all the quarks in the lowest level. One needs to add the kinetic energy correctly to get the true energy of the quark plasma. We would like to select a model for which the trivial solution is found only at densities much higher than nuclear density, and we would like the quark mass $`ϵ`$ to be zero in the preferred phase. This latter condition implies $`\sigma _0`$=0. However, for the FL model $`g(\sigma )=g_\sigma \sigma `$, and therefore $`\sigma _0`$=0 is not a solution of Eq. (33). Using Eqs. (34) and substituting for $`U^{},g^{}`$ and $`g`$, for the FL model Eq. (33) becomes $$a\sigma _0+\frac{b}{2}\sigma _0^2+\frac{c}{6}\sigma _0^3=3\rho _Bg_\sigma \left[13\rho _Bg_\sigma \left(\frac{g_s^2}{m_s^2}\frac{g_v^2}{m_v^2}\right)\sigma _0\right].$$ (37) If we turn off the meson fields $`g_s,g_v0`$, we see that the solution to (37) goes as $`\rho _B^{1/3}`$ at large densities. Not only does this blow up as $`\rho _B\mathrm{}`$, but it gives the quarks an unphysical negative mass $`ϵg_\sigma \rho _B^{1/3}`$. On the other hand, if we turn on the meson fields and make the usual choice of parameters so that $`g_s^2/m_s^2>g_v^2/m_v^2`$ (which is necessary for saturation), there is now a positive high density solution that goes as $`\rho _B^1`$. However, there is still an unphysical negative solution, now diverging as $`\rho _B`$. Thus we do not expect the inclusion of the meson fields to cure the problem of the FL model. Now let us consider a modified FL model in which we take the quark-glueball coupling to be $`g(\sigma )=g_\sigma \sigma ^2`$. We shall call this the FL<sup>2</sup> model. Then instead of Eq. (37) we have $$a\sigma _0+\frac{b}{2}\sigma _0^2+\frac{c}{6}\sigma _0^3=6\rho _Bg_\sigma \sigma _0\left[13\rho _Bg_\sigma \left(\frac{g_s^2}{m_s^2}\frac{g_v^2}{m_v^2}\right)\sigma _0^2\right].$$ (38) For this model the trivial solution $`\sigma _0=0`$ exists in the Wigner-Seitz approximation. There are also nonzero solutions, however. With the meson fields switched off there are two nontrivial solutions $$\sigma _0=\frac{3b}{2c}\left\{1\pm \sqrt{1\frac{8c}{3b^2}(a+6\rho _Bg_\sigma )}\right\},$$ (39) which exist only at densities $`\rho _B<a(3b^2/8ac1)/6g_\sigma `$. For the parameter choice used here, this corresponds to cell radii $`R>1.67`$ fm. These solutions give the quarks a positive mass and do not necessarily signal any problems with the model. For nonzero $`q`$-$`\sigma `$ coupling, the solutions exist also at high density, behaving as $`\sigma _0\pm \rho _B^{1/2}`$. The quark effective mass (35) vanishes in this limit. Thus the FL<sup>2</sup> does not seem to have the problems of the FL model when using the WS approximation. Finally, let us consider the $`\chi `$CD models. From Eqs. (3) and (4) we see that for small $`\sigma `$ the coupling $`g(\sigma )`$ is of leading order $`n_\kappa `$ in $`\sigma `$. This means that, regardless of whether the mesons are present, for the $`\chi `$CD model with $`n_\kappa 2`$ there will be a solution to Eq. (33) such that $`\sigma _0=0`$. This corresponds to the desired free massless quark plasma phase, with vanishing meson field averages, since then $`g(0)=g^{}(0)=0`$. Furthermore, one can show that as $`\rho _B\mathrm{}`$ the only other solution has $`\sigma _0\rho _B^1`$ and gives the quarks a positive mass that also vanishes in the high density limit. Thus we expect the $`\chi `$CD models to provide a more reasonable description of dense nuclear matter than does the FL model, for there is no unphysical trivial solution to the WS equations that will cause problems. Indeed, the type of behavior predicted by our analysis of the trivial solution is reflected in our Wigner-Seitz calculations. As we show in Fig. 1, the $`\chi `$CD model exhibits Wigner-Seitz solutions down to very small cell radii (we never reached a breakdown point, taking $`R`$ as low as $`0.05`$fm), whereas the FL model breaks down at the expected point $`R0.8`$fm. The FL<sup>2</sup> model can be taken down to lower cell radii than the FL, breaking down at $`R.25`$fm, well above the transition to the uniform plasma phase. However, the behavior of $`ϵ_b`$ below $`R=1`$fm for the FL<sup>2</sup> model does not seem desirable. This could be cured by allowing the quark-meson coupling to depend on $`\sigma `$, but we find then non-smooth transitions to new non-trivial solutions in all the NTS models discussed here. We find more reasonable results if we keep the quark-meson coupling independent of $`\sigma `$, and thus the $`\chi `$CD seems the preferable model. We shall present results only for this latter model in the next section. ## 5 Results We have performed the calculations using a straightforward numerical integration of the equations of motion for the $`q`$ and $`\sigma `$ fields. The mean field $`\varphi _0`$ is found by directly minimizing the free energy (15). As a check, we have also performed some calculations with a relaxation routine, but this is far more time consuming. The numerical calculations with the $`\chi `$CD model were carried out using the following set of parameters $$a=50\text{fm}\text{-2}\text{ },b=1300\text{fm}\text{-1}\text{ },c=10^4,g_\sigma =2,\kappa _v=0.1.$$ (40) The parameters of the potential $`U(\sigma )`$ are chosen to give a reasonable bag constant $`B=46.6`$MeV/fm<sup>3</sup> and glueball mass $`m_{GB}=1.82`$GeV. The value of $`\sigma _v`$ is calculated to be $`.285`$fm<sup>-1</sup>, the single soliton energy is $`1391`$ MeV and the nucleon mass is $`1176`$ MeV. The nucleon rms radius is $`0.876`$ fm. The parameters were chosen to fit the rms radius and nuclear matter properties, resulting in a somewhat high value of the nucleon mass. We did not spend much effort in fine-tuning the parameters, however. As the short range repulsion provided by the vector field in QHD is provided by the Wigner-Seitz boundary conditions in the liquid soliton model, in what follows we shall take the coupling constant between the vector meson and the quarks to be zero, $`g_v=0`$. On the other hand we shall treat $`g_s`$ as a free parameter in order to study its effect on the soliton matter. For completeness, we also list the parameter sets used for the other models studied in the previous section. For the FL model, we take the parameters as in Birse et. al. . (This set was also used in Ref. .) The full set of parameters for the soliton model is, $$a=0.0,b=700.43\text{fm}\text{-1}\text{ },c=10^4,g_\sigma =10.98.$$ (41) These parameters correspond to a single soliton energy $`1260`$ MeV, and nucleon mass $`902`$ MeV after removing the center of mass motion. For the FL2 model we use the same potential parameters and a coupling $`g_\sigma =60`$fm<sup>-1</sup>. We proceed to our presentation of the results for the $`\chi `$CD model. To our knowledge, this is the first application of this particular model to the study of dense matter. Previous work has dealt with few nucleon systems . In Figs. 2 and 3 we show the fields $`u(r)`$ and $`\sigma (r)`$ for several values of the cell radius $`R`$. As can be seen from its behavior at the cell boundary, the quark field $`u_k(r)`$ displayed in Fig. 2 is that corresponding to the bottom of the band $`k=0`$. Fig. 3 shows that the depth of the bag decreases only slighty as the cell radius is lowered, indicating that the quarks remain tightly bound. (Note that for the FL model, however, the bag depth actually increases with smaller $`R`$ .) That the quarks are tightly bound in the bag can be seen clearly in Fig. 4, where we display the quark density $`\rho _q(r)`$ given by Eq. (23). There is only a small overlap with quarks from neighboring cells even at radii below 1 fm. In Figs. 1-4, the quark-meson coupling has been set to zero. In Fig. 5 the top and bottom of the lowest energy band is shown as a function of $`R`$ for several values of the quark-meson coupling $`g_s`$. The point at which the band begins to form, $`R1.6`$fm, is not very sensitive to the value of $`g_s`$. On the other hand, the structure of the band depends strongly on the coupling to the scalar meson, becoming wider as $`g_s`$ is increased. In Fig. 6 we present the energy of the soliton $`E_N`$ given by Eq. (29), and in Fig. 7 we show the total energy per baryon $`E_B`$ of the system, derived from Eq. (15). As expected, the introduction of the scalar meson to the the soliton matter results in attraction and saturation. The empirical nuclear saturation density corresponds to $`R_0=1.12`$ fm, whereas from Fig. 7 we see, for example, that our NTS nuclear matter energy per baryon has a minimum at $`R1.35`$fm for $`g_s=1`$. The saturation energy is $`E_BM_N^{(as)}20`$ MeV, as compared to the empirical value $`E_{sat}=16`$ MeV. The compression modulus of nuclear matter is $`K=R^2d^2E_B/dR^2`$ at the equilibrium point: for the present model we find $`K1170`$MeV. This is to be compared to empirical estimates that usually lie in the range $`100`$-$`500`$MeV, with $`K=200`$MeV the generally accepted value. These results should be regarded as only order of magnitude, as at higher density we are surely underestimating the kinetic energy of the system. We shall develop a more reasonable model of the liquid state, which leads to a more quantitatively accurate EOS for solitonic nuclear matter, in the next paper of this series. The solution for the $`\chi `$CD model is stable down to very low values of $`R`$ (at least as low as $`0.05`$ fm), and it is apparent that with proper calibration this model can be used as an excellent starting point for the study of nuclear matter. Another important feature of this model is the increase of the nucleon rms with density, Fig. 8. This unexpected outcome of the model is in accord with the EMC effect. Checking the sensitivity of this effect to model parameters, we found that the increase in the rms is insensitive to recoil corrections, but might disappear with different choices of the soliton parameters. ## 6 Discussion In this paper we have investigated a class of non-topological soliton models that generalize the Friedberg-Lee model. We have studied nuclear matter in the Wigner-Seitz approximation, where neighboring bags begin to overlap at higher density, as a means of distinguishing between the various models. The models differ in the precise form of the coupling between the quarks and the scalar gluon field, and in the presence and form of couplings to explicit meson fields. Of the models studied, we have found the chiral chromodielectric model to exhibit behavior best in line with phenomenological expectations. For this model, there is no breakdown in the Wigner-Seitz calculation as the cell radius is decreased, thus overcoming a serious shortcoming of the original FL model. We consider it a computational necessity to consider only constituent quarks and ignore colored gluons and sea quarks, and thus these restrictions along with the addition of explicit meson fields may be seen as an approximation to the original theory. The original $`\chi `$CD model is closely based upon QCD, exhibiting absolute confinement, and it is therefore satisfying to see that this model is selected by our studies of nuclear matter. We include mesons along the lines of quantum hadrodynamics, and we use the mean field approximation. In our final calculations we have considered only the scalar meson, but it is straightforward to include also the vector meson, which should be done for fine-tuning the parameters. We note that we found it best to use a quark-meson coupling that is not modulated by the quark-glueball coupling, in contrast to that argued for in Refs. . Indeed, when using a $`q\varphi `$ coupling proportional to $`g(\sigma )`$, we find jumps to qualitatively different solutions as density is increased. The resulting curve of energy as a function of cell radius has several bumps, which is clearly an undesirable feature. However, this should not necessarily be taken as an argument against models that employ quark-meson couplings that depend on $`\sigma `$, but rather merely a statement that when using the mean field approximation it is more consistent to use a $`\sigma `$-independent $`q`$-$`\varphi `$ coupling $`g_s`$. Indeed, one can view this coupling as a “mean field approximation” to a more fundamental coupling — namely, $`g_s\stackrel{~}{g}_sg(\sigma )`$. In the end, then, we have used a $`\chi `$CD model with $`\sigma `$-independent coupling between quarks and the scalar meson. We find that the inclusion of the scalar meson provides a clear saturation point for nuclear matter and that a rough fit to both empirical nuclear matter and single nucleon properties can be obtained. One of the most interesting features found is an increase in the nucleon rms radius at intermediate densities, in line with the EMC effect. This is dependent on the presence of the scalar meson. There are clearly several refinements to our model and our calculations that must be considered before fine-tuning the parameters. Among these are the inclusion of other mesons in the model, the addition of perturbative gluonic effects, and an improved calculation of the quark wave function along the lines of Ref. . One can also improve our handling of the spurious CM motion. Foremost, however, we need to improve our modeling of the liquid state. In this paper we have used a low-density approximation in modeling nuclear matter with solitons. This consists in using a Wigner-Seitz approximation to calculate an effective nucleon mass. Then the kinetic energy is added to the system by taking the motion of the nucleons to be that of a Fermi gas. The assumptions of this picture include: the nuclear medium restricts any given nuclear bag to a spherical cell; any given nucleon moves slowly, so that it can be constructed at rest and then boosted; the quarks remain tightly bound inside the bag; the nucleons move independently within the medium. Only the first of these assumptions can remain valid at high densities, where the nuclear medium will form a liquid in which each nucleon’s motion is localized on short time scales and quarks need not remain tightly bound inside the bag. Since we are ultimately interested in studying the transition to a uniform quark plasma within our non-topological soliton model, we must do a better job of modeling the liquid state at high densities. This is the subject of our next paper. ## Acknowledgments We thank J.A. Tjon for helpful discussions.
no-problem/9910/astro-ph9910320.html
ar5iv
text
# Population synthesis of old neutron stars in the Galaxy ## 1. Introduction Isolated neutron stars (NSs) are expected to be as many as $`10^8`$$`10^9`$, a non–negligible fraction of the total stellar content of the Galaxy. The number of observed radio pulsars is now $``$ 1,000. Since the pulsar lifetime is $`10^7`$ yr, this implies that the bulk of the NS population, mainly formed of old objects, remains undetected as yet. Despite intensive searches at all wavelengths, only a few (putative) isolated NSs which are not radio pulsars (or soft $`\gamma `$ repeaters) have been recently discovered in the X–rays with ROSAT (Walter, Wolk & Neuhauser 1996; Haberl et al. 1998; Neuhauser & Trumper 1999). The extreme X–ray to optical flux ratio ($`>10^3`$) makes the NS option rather robust, but the exact nature of their emission is still controversial. Up to now, two main possibilities have been suggested, either relatively young NSs radiating away their residual internal energy or much aged NSs accreting the interstellar medium (ISM). Both options have advantages and drawbacks. Standard cooling atmosphere models fail to predict in a natural way the spectrum of the best studied object, RX J1856-3754 (see Walter et al. this volume). Accretion models require instead a very low NS velocity relative to the ISM ($`v<20`$ km s<sup>-1</sup>) in order to produce the luminosities inferred from ROSAT data (see Walter et al. this volume). We feel that a more thorough analysis of the statistical properties of NSs can be useful in providing indirect evidence in favor or against the accretion scenario. As discussed by Lipunov (1992), isolated NSs can be classified into four main types: Ejectors, Propellers, Accretors and Georotators. In Ejectors the relativistic outflowing momentum flux is always larger than the ram pressure of the surrounding material so they never accrete and are either active or dead pulsars, still spun down by dipole losses. In Propellers the incoming matter can penetrate down to the Alfven radius, $`R_A`$, but no further because of the centrifugal barrier, and stationary inflow can not occur, but the piling up of the material at the Alfven radius may give rise to (supposedly short) episodes of accretion (Treves, Colpi & Lipunov 1993; Popov 1994). Steady accretion is also impossible in Georotators where (similarly to the Earth) the Alfven radius exceeds the accretion radius, so that magnetic pressure dominates everywhere over the gravitational pull. It is the combination of the star period, magnetic field and velocity that decides which type a given isolated NS belongs to and, since both $`P`$, $`B`$ and $`V`$ change during the star evolution, a NS can go through different stages in its lifetime. While the dynamical evolution of NSs in the Galactic potential was studied by several authors (sse e.g. Madau & Blaes 1994; Zane et al. 1995), little attention was paid to the NSs magneto–rotational evolution. Recently, this issue was discussed in some detail by Livio, Xu & Frank (1998) and Colpi et al. (1998). Goal of this investigation is to consider these two issues simultaneously, coupling the dynamical and the magneto–rotational evolution for the isolated NS population. The possibility that the low–velocity tail is underpopulated with respect to what was previously assumed should be seriously taken into account. It is our aim to revise the estimates on the number of old accreting neutron stars in the Galaxy in the light of these new data, in the attempt to reconcile theoretical predictions with present ROSAT limits (Neühauser & Trümper 1999). ## 2. The Model In this section we summarize the main hypothesis introduced to track the evolution of single stars and describe shortly the technique used to explore their statistical properties, referring to Popov & Prokhorov (1998) for details on spatial evolution calculations and to Konenkov & Popov (1997) and Lipunov & Popov (1995) for details of magneto-rotational evolution. ### 2.1. Dynamical evolution The dynamical evolution of each single star in the Galactic potential (taken in the form proposed by Miyamoto & Nagai 1975) is followed solving its equations of motion. The period evolution depends on both the star velocity and the local density of the interstellar medium, any attempt to investigate the statistical properties of the NS population should incorporate a detailed model of the ISM geography. Unfortunately the distribution of molecular and atomic hydrogen in the Galaxy is highly inhomogeneous. Here we use the analytical distributions from Bochkarev (1992) and Zane et al. (1995) for the hydrogen density $`n(R,Z)`$. Within a region of $`140`$ pc around the Sun, the ISM is underdense, and we take $`n=0.07\mathrm{cm}^3`$. In our model we assume that the NS birthrate is constant in time and proportional in magnitude to the square of the local gas density. Neutron stars at birth have a circular velocity determined by the Galactic potential. Superposed to this ordered motion a kick velocity is imparted in a random direction. We use here an isotropic Gaussian distribution (relative to the local circular speed) with dispersion $`\sigma _V`$, simply as a mean to model the true pulsar distribution at birth (see e.g. Cordes & Chernoff 1998). The mean velocity $`V=(8/\pi )^{1/2}\sigma _V`$ is varied in the interval 0–550 km s<sup>-1</sup>. ### 2.2. Accretion physics and period evolution The accretion rate was calculated according to the Bondi formula $$\dot{M}=\frac{2\pi (GM)^2m_pn(R,Z)}{(V^2+V_s^2)^{3/2}}10^{11}nv_{10}^3\mathrm{g}\mathrm{s}^1$$ (1) where $`m_p`$ is proton mass, the sound speed $`V_s`$ is always 10 km$`\mathrm{s}^1`$ and $`v_{10}=(V^2+V_s^2)^{1/2}`$ in units of 10 km$`\mathrm{s}^1`$. $`M`$ and $`R`$ denote the NS mass and radius, which we take equal to $`1.4M_{}`$ and 10 km, respectively, for all stars. All neutron stars are assumed to be born with a period $`P(0)=`$ 0.02 s, and a magnetic moment either $`\mu _{30}=1`$ or $`\mu _{30}=0.5`$, where $`\mu _{30}=\mu /10^{30}\mathrm{G}\mathrm{cm}^3`$. In the ejector phase the energy losses are due to magnetic dipole radiation. When the gravitational energy density of the incoming interstellar gas exceeds the outward momentum flux at the accretion radius, $`R_{ac}2GM/v^2`$, matter starts to fall in. This happens when the period reaches the critical value $$P_E(EP)10\mu _{30}^{1/2}n^{1/4}v_{10}^{1/2}\mathrm{s}.$$ (2) When $`P>P_E(EP)`$ the NS is in the propeller phase, rotational energy is lost and the period keeps increasing at a rate taken from Shakura (1975). As the star moves through the inhomogeneous ISM a transition from the propeller back to the ejector phase may occur if the period attains the critical value $$P_E(PE)3\mu _{30}^{4/5}v_{10}^{6/7}n^{2/7}\mathrm{s}.$$ (3) Note that the transitions $`PE`$ and $`EP`$ are not symmetric as first discussed by Shvartsman in the early ’70s. Accretion onto the star surface occurs when the corotation radius $`R_{co}=(GMP^2/4\pi ^2)^{1/3}`$ becomes larger than the Alfven radius (and $`R_A<R_{ac}`$, see below). This implies that braking torques have increased the period up to $$P_A(PA)420\mu _{30}^{6/7}n^{3/7}v_{10}^{9/7}\mathrm{s}.$$ (4) As soon as the NS enters the accretor phase, torques produced by stochastic angular momentum exchanges in the ISM slow down the star rotation at the equilibrium period $$P_{eq}=2.6\times 10^3v_{(t)10}^{2/3}\mu _{30}^{2/3}n^{2/3}v_{10}^{13/3}\mathrm{s}$$ (5) where $`v_{(t)}`$ the turbulent velocity of the ISM (Lipunov & Popov 1995; Konenkov & Popov 1997). At the very low accretion rates expected for fast, isolated NSs, it could be that the Alfven radius is larger than the accretion radius. The condition $`R_A<R_{ac}`$ translates into a limit for the star velocity $$v<410n^{1/10}\mu _{30}^{1/5}\mathrm{km}\mathrm{s}^1.$$ (6) ## 3. Results and discussion ### 3.1. The NS census for a non–decaying field We consider two representative values for the (costant) magnetic dipole moment, $`\mu _{30}=0.5`$ and $`\mu _{30}=1`$. The present fraction of NSs in the Ejector and Accretor stages as a function of the mean kick velocity is shown in figure 1. Here, and in the following the total number of Galactic NSs was assumed to be $`10^9`$. A total number $`10^9`$ appears to be consistent with the nucleosynthesis and chemical evolution of the Galaxy, while $`10^8`$ is derived from radio pulsars observations. It is uncertain if all NSs experience an active radio pulsar phase, due to low initial magnetic fields or long periods, or to the fall–back in the aftermath of the supernova explosion. There is a serious possibility that the total number of NSs derived from radio pulsar statistics is only a lower limit. In order to compare the expected number of accreting ONSs with the ROSAT All Sky Survey (RASS) results, we evaluated the number of those ONSs, within 140 pc from the Sun, producing an unabsorbed flux of $`10^{13}`$ erg cm<sup>-2</sup> s<sup>-1</sup> or higher at energies $`100`$ eV. The results are illustrated in figure 2. The main point is that for mean velocities below 200 km s<sup>-1</sup> the number of ONSs with a flux above the RASS detection limit would exceed 10. Most recent analysis on the number of isolated NSs in the RASS (Neühauser & Trümper 1999) indicate that the upper limit is below 10. An important aspect is that our results exclude the possible presence of a consistent low–velocity population at birth, which exceeds that contained in the gaussian with $`V>200`$ km s$`^1.`$ ### 3.2. The NS census for a decaying field We refer here only to a very simplified picture of the field decay in which $`B(t)=B(0)\mathrm{exp}(t/t_d)`$. Calculations have been performed for $`t_d=1.1\times 10^9`$ yr, $`t_d=2.2\times 10^9`$ yr and $`\mu _{30}(0)=1`$. Results are shown in figure 3. For some values of $`t_d`$ and bottom field most of NS can stay at the Ejector stage, and the number of accretors and Propellers would not be increased. We show this analytical estimates graphically in figure 4, where the Ejector time, $`T_E`$, is plotted vs. bottom magnetic momentum for constant velocity and ISM density ($`n=1`$ $`\mathrm{cm}^3`$, $`v=10`$ km/s), different $`t_d`$ and two values of the initial magnetic momentum, $`10^{30}`$ and $`10^{31}`$ G cm<sup>3</sup> (see Popov & Prokhorov 1999). Summarizing, we can conclude that, although both the initial distribution and the subsequent evolution of the magnetic field strongly influences the NS census and should be accounted for, the lower bound on the average kick derived from ROSAT surveys is not very sensitive to $`B`$, at least for not too extreme values of $`t_d`$ and $`\mu (0)`$, within this model. ## 4. Conclusions In this paper we have investigated how the present distribution of neutron stars in the different stages (Ejector, Propeller, Accretor and Georotator) depends on the star mean velocity at birth. On the basis of a total of $`10^9`$ NSs, the fraction of Accretors was used to estimate the number of sources within 140 pc from the Sun which should have been detected by ROSAT. Most recent analysis of ROSAT data indicate that no more than $`10`$ non–optically identified sources can be accreting ONSs. This implies that the average velocity of the NS population at birth has to exceed $`200\mathrm{km}\mathrm{s}^1`$, a figure which is consistent with those derived from radio pulsars statistics. We have found that this lower limit on the mean kick velocity is substantially the same either for a constant or a decaying $`B`$–field, unless the decay timescale is shorter than $`10^9`$ yr. Since observable accretion–powered ONSs are slow objects, our results exclude also the possibility that the present velocity distribution of NSs is richer in low–velocity objects with respect to a Maxwellian. The paucity of accreting ONSs seem therefore to lend further support in favor of neutron stars as very fast objects. #### Acknowledgments. Work partially supported by the European Commission under contract ERBFMRX-CT98-0195. The work of S.P., V.L and M.P. was supported by grants RFBR 98-02-16801 and INTAS 96-0315. S.P. and V.L. gratefully acknowledge the University of Milan and of Insubria (Como) for support during their visits. S.P. also acknowledge organizers of the IAU 195. ## References Bochkarev, N.G. 1992, Basics of the ISM Physics (Moscow University Press) Colpi, M., Turolla, R., Zane, S., & Treves, A. 1998, ApJ 501, 252 Cordes, J.M.,& Chernoff, D.F. 1998, ApJ 505, 315 Haberl, F., Motch, C., & Pietsch, W. 1998, Astron. Nachr. 319, 97 Konenkov, D.Yu., & Popov, S.B. 1997, PAZh, 23, 569 Lipunov, V.M. 1992, Astrophysics of Neutron Stars (Springer & Verlag) Lipunov, V.M., & Popov, S.B. 1995, AZh, 71, 711 Livio, M., Xu, C., & Frank, J. 1998 ApJ, 492, 298 Madau, P., & Blaes, O. 1994, ApJ, 423, 748 Miyamoto, M., & Nagai, R. 1975, Pub. Astr. Soc. Japan, 27, 533 Neühauser, R., & Trümper, J.E. 1999, A&A, 343, 151 Popov, S.B. 1994, Astron. Circ., N1556, 1 Popov, S.B., & Prokhorov, M.E. 1998, A&A, 331, 535 Popov, S.B., & Prokhorov, M.E. 1999, astro-ph/9908212 Shakura, N.I. 1975, PAZh, 1, 23 Treves, A., Colpi, M., & Lipunov, V.M. 1993, A&A, 269, 319 Walter, F.M., Wolk, S.J., & Neühauser, R. 1996, Nature, 379, 233 Zane, S., Turolla, R., Zampieri, L., Colpi, M., & Treves, A. 1995, ApJ, 451, 739
no-problem/9910/astro-ph9910482.html
ar5iv
text
# Limits on the Density of Compact Objects from High Redshift Supernovae ## 1 Introduction Current cosmological data indicates that the density of matter in the Universe is around $`\mathrm{\Omega }_\mathrm{m}0.3`$ (in units of critical density). Of this, big bang nuclesynthesis requires 15–20% be in the form of baryons, with the rest in some other more exotic form. Even among the baryons only $`\mathrm{\Omega }_{}0.01`$ has been accounted for directly. The rest could be in warm gas (Cen & Ostriker 1999), or in compact objects such as brown dwarfs, white dwarfs, or neutron stars. The nonbaryonic dark matter could be composed of a smooth microscopic component, such as axions, or alternatively it may be composed of compact objects, such as primordial black holes (Crawford and Schramm 1982). Although compact objects have been searched for in our own halo using microlensing studies (see Sutherland 1999 for a review), results are as of yet inconclusive. While the most straightforward interpretation is that a large fraction of the halo (up to 100%) is composed of compact objects of roughly 0.5M, other scenarios using existing stellar populations remain viable. The bottom line is that, at present, the form of the bulk of the dark matter remains unknown. In recent years there have been tremendous advances in our ability to observe and characterize type Ia supernovae. In addition to dramatically increasing the number of SNe observed at high redshift, the intrinsic peak brightness of these supernovae is now thought to be known to within about 15%. These supernovae are thus excellent standard candles, and by measuring the spread in their observed brightnesses it may be possible to determine the nature of the lensing, and thereby infer the distribution of the lensing matter. It has long been recognized that the lensing of SNe can be used to search for the presence of compact objects in the Universe (Linder, Schneider & Wagoner 1988; Rauch 1991). Since the amount of matter near or in the beam determines the amount of magnification of the image, the magnification distribution from many SNe can probe for the presence of compact objects in the Universe. If the Universe consists of compact objects, then on very small scales most of the light beams do not intersect any matter along the line of sight, resulting in a dimming of the image with respect to the filled-beam (standard Robertson-Walker) result. On occasion a beam comes very near a compact object, resulting in a tremendous brightening of the ensuing image. In such a Universe the magnification distribution will be sharply peaked at the empty beam value, and will possess a long tail towards large magnifications. The lensing is sensitive to objects with Einstein rings larger than the linear extent of the SNe (roughly $`10^{15}`$cm at their maximum, which gives a lower limit on the mass of the lenses of $`10^2`$M). Lensing due to compact objects is not, however, the only way to modify the flux of a SN. Even smooth microscopic matter, such as the lightest SUSY particle or axions, are expected to clump on large scales. The effect on the magnification distribution depends on the clumpiness of the Universe. If the clumping of matter in the Universe is very nonlinear then all of the matter will reside in dense halos, and the filaments connecting them, and there will be large empty voids extending tens or even hundreds of megaparsecs in diameter. There will thus be a large probability that a given line of sight will be completely devoid of matter, and so will give a large demagnification (as compared to the pure Robertson-Walker result). A simple way to estimate the importance of this effect is to compare the rms scatter in magnification to the demagnification of an empty beam relative to the mean (given by the filled beam value). Both can be calculated analytically, the former as an integral over the nonlinear power spectrum, and the latter as an integral over the combination of distances. The result depends on the redshift of interest, but for most realistic models the rms is smaller than the empty beam value at $`z\mathrm{¿}\mathrm{}1`$. This means that, at least qualitatively, it should be possible to distinguish between compact objects and smoothly distributed matter at such redshifts. In this letter we extend previous work in several aspects. First, we provide the formalism to investigate models with a combination of both compact objects and smooth dark matter. Although Rauch (1991) and Metcalf and Silk (1999) have explored the use of lensing of supernovae to detect compact objects, they concern themselves with distinguishing between two extreme cases: either all or none of the matter in compact objects. Our formalism allows us to address the more general question of how well the fraction of dark matter in compact objects can be measured with any given SN survey. Both baryonic and dark matter are dynamically significant and could contribute to the lensing signal. For example, we can imagine four simplistic scenarios, in which the baryonic and dark matter are each in one of two states: smoothly distributed or clumped into compact objects (with masses above $`10^2`$M). To distinguish among these cases we need to be able to differentiate between 0%, 20%, 80%, and 100% of the matter in compact objects. Second, we use realistic cosmological N-body simulations (Jain, Seljak and White 1999) to provide the distribution of magnification for the smooth microscopic component. Previous works (Metcalf and Silk 1999; Holz and Wald 1998) make the simplifying assumption of an uncorrelated distribution of halos to determine this distribution. As shown in Jain et al. (1999; JSW99), there are differences in the probability distribution function (pdf) of magnification for models with different shapes of the power spectrum and/or different values of cosmological parameters. For example, open models exhibit large voids up to $`z1`$, and their pdf’s extend almost to the empty beam limit. On the other hand, flat $`\mathrm{\Omega }_\mathrm{m}`$ models have less power (being normalized to the same cluster abundance), and are more linear at higher $`z`$, resulting in a more Gaussian pdf. These differences become particularly important when we consider models in which only a fraction of the total matter is in compact objects. Magnification distributions in such cases differ only weakly from the smooth matter case, and an accurate description of the pdf is of particular importance. Finally, we also discuss the cross-correlation of SN magnification with convergence reconstructed from shear or magnification in the same field. As will be presented in the discussion section, this can be used as a further probe of compact objects. ## 2 Magnification probability distribution We wish to derive the magnification probability distribution function (pdf) in a Universe with both compact objects and smooth dark matter. We begin with the magnification pdf for smoothly distributed matter, $`p_{\mathrm{LSS}}(\mu ,z)`$, since this background distribution is present in all cases. For convenience we define the magnification, $`\mu `$, to be zero at the empty beam value. We use the pdf’s computed from N-body simulations in JSW99, obtained by counting the number of pixels in a map that fall within a given magnification bin. We explicitly include the dependence of the pdf on the redshift of the source. The two main trends with increasing $`z`$ are an increase in the rms magnification, and an increasing Gaussianity of the pdf (see figure 15 in JSW99). As we increase $`z`$ we are superimposing more independent regions along the line of sight, and the resulting pdf approaches a Gaussian by the central limit theorem. A possible source of concern is that the resolution limitations of the N-body simulations might corrupt the derived pdf’s in ways which crucially impact our results. In principle we would like to resolve all scales down to the scale of the SN emission region ($`10^{15}`$cm in linear size). Fortunately such high resolution is unnecessary, as there is very little power in the matter correlation function on such small scales. The contribution to the second moment of the magnification peaks at an angular scale of $`\theta >3^{}`$ (see figure 8 in JSW99); scales smaller than this do not significantly change the value of the second moment. These smaller scales, however, are relevant for the high magnification tail of the distribution, and even the largest N-body simulations are resolution limited in the centers of halos. As high magnification events are very rare in the small SN samples being considered, limitations in the resolution of the high magnification tail of the distribution are not of great concern. The simulations are very robust around the peak of the magnification pdf, with lower resolution PM simulations giving results in good agreement with higher resolution $`P^3M`$ simulations (figure 20 in JSW99). As these $`P^3M`$ simulations converge for the 2nd moment we may conclude that, aside from the high-magnification tail, the pdf’s obtained from these simulations do not suffer from the limitations of finite numerical resolution. This is also confirmed by smoothing the map by a factor of two, and comparing the pdf’s before and after the smoothing. The resulting pdf’s are very similar for all models, indicating that small scale power has little effect on the region of most interest, near the peak of the pdf. The pdf’s are shown in figure 1, plotted against deviations from the mean magnification, $`\delta \mu =\mu \overline{\mu }`$ ($`\delta \mu =0`$ corresponds to the mean (filled beam) value). The mean magnification, $`\overline{\mu }`$, is given by the difference between the empty beam and the mean (filled) beam values, which for SN at $`z=1`$ is $`\overline{\mu }=0.24`$ for $`\mathrm{\Omega }_\mathrm{m}=1`$, $`\mathrm{\Omega }_\lambda =0`$ models such as standard CDM model (with $`\mathrm{\Omega }_\mathrm{m}h=0.5`$) or $`\tau `$CDM model (with $`\mathrm{\Omega }_\mathrm{m}h=0.21`$), $`\overline{\mu }=0.13`$ for $`\lambda `$CDM model with $`\mathrm{\Omega }_\mathrm{m}=0.3`$, $`\mathrm{\Omega }_\lambda =0.7`$, and $`\overline{\mu }=0.09`$ for open (OCDM) model with $`\mathrm{\Omega }_\mathrm{m}=0.3`$, $`\mathrm{\Omega }_\lambda =0`$. In contrast, in a Universe filled with a uniform comoving density of compact objects the pdf depends on a single parameter, the mean magnification $`\overline{\mu }`$, or equivalently, the mean convergence $`\sigma `$. The two are related via $`\overline{\mu }=(1\sigma )^21`$ ($`\overline{\mu }2\sigma `$ if $`\sigma 1`$). The pdf rises sharply from $`\mu =0`$, and drops off as $`\mu ^3`$ for high $`\mu `$ (Paczyński 1986). Based on Monte-Carlo simulations, Rauch (1991) gives a fitting formula for the pdf: $$p_\mathrm{C}(\mu ,\sigma )=2\sigma _{\mathrm{eff}}\left[\frac{1e^{b\mu }}{(1+\mu )^21}\right]^{3/2},$$ (1) where $`b=247e^{22.3\sigma }`$ and $`\sigma _{\mathrm{eff}}`$ is chosen so that the pdf integrates to unity. Note that this expression is only valid for $`\sigma <0.1`$, and can only be used for SN with $`z<1`$–2, depending on the cosmology. To combine the two distributions we consider a model where a fraction $`\alpha `$ of the matter is in compact objects, and where these compact objects trace the underlying matter distribution. Suppose a given line of sight has magnification $`\mu `$ in the absence of compact objects. In the presence of compact objects the mean magnification along this line of sight remains unchanged. Since the smooth component contributes a SN magnification of $`(1\alpha )\mu `$, the effect of compact objects is described with a pdf that gives a mean magnification of $`\alpha \mu `$. The combined pdf, $`p(\mu )`$, is given by integrating over the whole distribution, $$p(\mu ;\alpha ,z)=_0^{\frac{\mu }{1\alpha }}p_{\mathrm{LSS}}(\mu ^{},z)p_\mathrm{C}[\mu \mu ^{}(1\alpha ),\alpha \mu ^{}/2]𝑑\mu ^{}.$$ (2) The middle panel in figure 1 shows the magnification pdf for a range of values of $`\alpha `$, for a cosmological model with $`\mathrm{\Omega }_\mathrm{m}=0.3`$, $`\mathrm{\Omega }_\lambda =0.7`$, and $`\sigma _8=0.9`$. The larger the value of $`\alpha `$, the closer the peak of the distribution to the empty beam value. As $`\alpha `$ increases from zero the pdf becomes wider, as the compact objects increase the large magnification tail. As $`\alpha `$ increases beyond $`\alpha =0.2`$, however, the distribution begins to narrow, since more and more lines of sight are empty and thus closer to the empty beam value. Note in particular the similarity between the $`\alpha =0.2`$ pdf and the $`\tau `$CDM model with $`\alpha =0`$ (upper panel in figure 1). We need to further convolve these distributions with the measurement noise and the scatter in the intrinsic SN luminosities. Current estimates for these is 0.07 magnitudes for rms observational noise and 0.12 magnitudes for intrinsic scatter. The two combined give an additional rms scatter in magnification of 0.14 (Hamuy et al. 1996). To model this noise we convolve all the pdf’s with a Gaussian of width $`0.14`$. The resulting pdf’s are shown in the bottom panel of figure 1. The distinction between the different values of $`\alpha `$, although small, is still apparent. In the next section we calculate how many SNe are required to distinguish between the different curves in the bottom panel, and thereby measure $`\alpha `$. ## 3 Maximum Likelihood Analysis We assume that the SNe are independent events. The likelihood function for the combined sample is then just the product of the individual SN likelihood functions, $$L(\alpha )=\mathrm{\Pi }_ip(\mu _i;\alpha ,z_i).$$ (3) Redshift information could serve as an important confirmation that the effects observed are due to gravitational lensing, since the shapes and positions of the pdf’s evolve with redshift in a known manner. This redshift dependence does not, however, significantly increase the statistical power of the determination, compared to a sample where all of the SNe are at the mean redshift of the sample. To simplify the analysis we therefore assume that all of the SN are at a fixed redshift, z=1, and drop the redshift dependence of $`p`$. Taking the log and ensemble averaging we find $`\mathrm{ln}L(\alpha )`$ $`=`$ $`{\displaystyle \underset{i}{}}\mathrm{ln}p(\mu _i;\alpha )`$ (4) $`=`$ $`N_{\mathrm{SN}}{\displaystyle p(\mu ;\alpha _0)\mathrm{ln}p(\mu ;\alpha )𝑑\mu },`$ where $`N_{\mathrm{SN}}`$ is the number of observed SNe and $`\alpha _0`$ is the assumed true value of $`\alpha `$. An estimate of the unknown parameter $`\alpha `$ is given by maximizing the log-likelihood function. The ensemble average of this gives the solution $`\alpha =\alpha _0`$ (i.e. the estimator is asymptotically unbiased). The error on the determination of the parameter $`\alpha `$ is given by the curvature of the negative log-likelihood function around its maximum. The ensemble average of this minimum variance is $`\sigma _\alpha ^2`$ $`=`$ $`\left[{\displaystyle \frac{\mathrm{ln}L(\alpha )}{\alpha }}\right]^2`$ (5) $`=`$ $`N_{\mathrm{SN}}{\displaystyle p(\mu ;\alpha )\left[\frac{\mathrm{ln}p(\mu ;\alpha )}{\alpha }\right]^2𝑑\mu },`$ which is to be evaluated at $`\alpha =\alpha _0`$. According to the Cramér-Rao theorem, $`\sigma _\alpha `$ gives the smallest attainable error on $`\alpha `$ for an unbiased estimator. As expected, the error decreases as the inverse square root of the number of SNe. Equation 5 determines the number of SNe required to achieve a given level of confidence in the measurement of $`\alpha `$, given a true value $`\alpha =\alpha _0`$. The case of $`\alpha _0=0`$ gives the sensitivity in the case of a Universe with a small fraction of matter in compact objects. In this case, for $`\lambda `$CDM model we find $`\sigma _\alpha 1.4N_{\mathrm{SN}}^{1/2}`$. This includes information from the full pdf, so any large differences between the models in the tail of the pdf would be statistically significant. Although this tail is susceptible to systematic effects generated by a lack of knowledge of the pdf, it will not be probed by small numbers of SNe. To exclude the tails we redid the integral in equation 5 including only the information within $`\pm 2\sigma _\mu `$ around the mean. The resulting error increases to $`\sigma _\alpha 1.8N_{\mathrm{SN}}^{1/2}`$. This means that we need on the order of 100 SNe to determine $`\alpha `$ to 20%, and around 1000 to determine $`\alpha `$ to 5%, all with one-sigma errors and assuming $`\alpha _0=0`$. The variance gradually increases with $`\alpha _0`$, and at $`\alpha _0=1`$ the required number of SN increases by a factor of 2. The variance in $`\alpha `$ scales roughly linearly with the rms noise variance $`\sigma _\mu `$, so if the scatter in SN is larger the corresponding error in $`\alpha `$ increases. Variance also scales roughly inversely with the separation between mean and empty beam $`\overline{\mu }`$. For an open Universe, $`\overline{\mu }`$ is $`7/10`$ the value in the flat ($`\mathrm{\Omega }_\mathrm{m}=0.3`$) Universe, so one needs roughly twice the number of SNe to reach the same sensitivity. For a flat Universe with $`\mathrm{\Omega }_\mathrm{m}=1`$ and $`\overline{\mu }=0.24`$ the statistical significance increases, and one needs one quarter the SNe for the same sensitivity. One can also test for the systematic effects introduced by using an incorrect model for the smooth component. To do this we did an analysis of a $`\lambda `$CDM Universe, “mistakenly” assuming it was an OCDM one. This results in a bias on the parameter $`\alpha `$ which can be as high as 20%, so that a precision test with this method is only possible once the parameters of the cosmological model are known precisely. ## 4 Discussion Results obtained in the previous section indicate that high-$`z`$ SNe can enable an accurate determination of the fraction of matter in compact objects. If the estimates of the errors used here are reasonable, around 100 SNe at $`z=1`$ are required to make a 20% determination of the fraction of dark matter in compact objects. Although such numbers of SNe are not currently available, they can be expected in the near future as the high-redshift supernovae surveys continue their observations.<sup>1</sup><sup>1</sup>1In addition, SNAPSAT (Perlmutter 1999), a proposed satellite dedicated to observing SNe, would easily meet the requirements, with $`2000`$ high-$`z`$ SNe per year. Our results are in broad agreement with the results of Metcalf and Silk (1999), who find that about 50-100 SN are required to distinguish $`\alpha =0`$ from $`\alpha =1`$ with $`90\%`$ confidence. An important limitation will be systematic effects caused by uncertainties in the magnification distributions. For the smooth component these can be obtained using high resolution N-body simulations. We have argued that our results are reliable in the regime of application and can in any case be verified with higher resolution simulations in the future. Lack of knowledge of the underlying cosmological model may also introduce systematic effects, but again this can be expected to be better determined in the future. Another important systematic error is our ignorance of the intrinsic dispersion in brightness of the observed SNe. Our simple assumption of a Gaussian noise profile in the measurement of the peak luminosities of SN Ia’s is certain to break down at some level. This noise estimate is based upon phenomenological observations of SNe, and needs to be borne out both by further observations (especially at low redshift, where lensing effects are negligible and independent calibrations are available) and by theoretical models. Deviations from Gaussian noise are most likely to be important in the tail of the magnification pdf’s, which can be excluded in the analysis. The lack of a precise knowledge of the intrinsic peak values of the SNe is less likely to cause a shift in the peak of the magnification pdf’s, which is the main signature of the lensing effect. A further source of concern is redshift evolution of the intrinsic properties of the SNe. It is possible that both the mean and the higher moments of the distribution of peak luminosities of type Ia SNe varies with redshift, and this could pose significant challenges to an accurate measurement of lensing effects. Improvements in the determination of such effects will occur as the size of the data sets at both low and high redshifts are increased, and direct comparisons of observations are available. An important consistency check will be to demonstrate that the shift of the peak of the distribution as a function of redshift is consistent with theoretical expectations (under the assumed value of $`\alpha `$). A complimentary method to obtain the pdf due to the background smooth matter component is to use weak lensing observations. This would provide the pdf directly from data, avoiding entirely the need for cosmological simulations. As discussed in §2 most of the power is on scales larger than 3’, so the pdf will almost converge to the correct one even if one smoothes the beam at this scale. Weak lensing surveys can provide maps of the magnifications by reconstructing the projected mass density from the shear extracted using galaxy ellipticities. The distribution of magnifications gives a pdf convolved with the random noise from galaxy ellipticities. Given an rms ellipticity of $`0.4`$, we find that about 50 galaxies per square arcminute are required to give an rms noise comparable to the noise in the SNe. This number is similar to the density of galaxies at deep ($`m_I26`$) exposures. Such galaxies have a mean redshift ($`z1`$) comparable to the SNe under discussion. We can therefore choose the size of a patch such that its rms noise agrees with the noise in the SN data. Such a pdf can then be directly compared to that from a SN sample (provided the differences in the redshift distribution are not too large), and any differences between the weak lensing and SN pdf’s would indicate the presence of compact objects or some other source of small scale fluctuation in magnification. Note that one cannot test for compact objects by simply using cross-correlation of the two magnifications, as compact objects do not change the mean magnification in a given line of sight, and thus the cross-correlation coefficient remains unchanged. The increased scatter, however, could provide the desired signature. In conclusion, the magnification distribution from several hundred high redshift type Ia SNe has the statistical power to make a 10-20% determination of the fraction of the dark matter in compact objects. It remains to be seen whether this statistical power can be exploited at its maximum, or whether systematic effects will prove to be too daunting. ###### Acknowledgements. US ackowledges the support of NASA grant NAG5-8084 and B. Jain and S. White for collaboration on JSW99.
no-problem/9910/physics9910044.html
ar5iv
text
# Test of CsI(Tℓ) crystals for the Dark Matter Search ## 1 Introduction Several evidences from a variety of sources indicate that the universe contains a large amount of dark matter . The most strong evidence for the existence of dark matter comes from the galactic dynamics. There is simply not enough luminous matter observed in spiral galaxies to account for the observed rotational curves . Among several dark matter candidates, one of the most prominent candidate is the weakly-interacting massive particles(WIMP). The leading WIMP candidate is perhaps the neutralino, the lightest super-symmetric particles such as photinos, Higgsinos and Z-inos . These particles typically have masses between 10 GeV and a few TeV and couple to ordinary matter only with weak interactions. The elastic scattering of WIMP with target nuclei could be detected by measuring the recoil energy of the nucleus, which is up to several tens of keV . Recently, a great deal of attention has been drawn to crystal detectors since the detection technique is already developed and radioactive background from the crystal is under control. Especially, the most stringent limit for the direct detection of WIMP has been established using the NaI(Tl) crystal detector . They achieved as low threshold as 6 keV and relatively good separation between the recoiling events and the ionizing events by background $`\gamma `$’s using the difference of the scintillation decay time. Recently, positive signal of annual modulation has been reported by DAMA group . Looking at the similar sensitivity region with other experiments which involves different systematics is absolutely necessary to confirm their results. It has been noted by several authors that CsI(T$`\mathrm{}`$) crystal may give better performance for the separation between recoiling events and the ionizing events by background $`\gamma `$ . Although the light yield of CsI(T$`\mathrm{}`$) crystal is slightly lower than NaI(Tl) crystal, better particle separation can be more advantageous for WIMP search. Also CsI(T$`\mathrm{}`$) has much less hygroscopicity than NaI(Tl), and has higher density (see Table I). The spin-independent cross section of WIMP is larger for CsI(T$`\mathrm{}`$) than NaI(Tl) because CsI(T$`\mathrm{}`$) has a compound with two similar heavy mass nuclei while spin-dependent cross section will be comparable. Moreover hundreds of tons of CsI(T$`\mathrm{}`$) crystals are already being used for several detectors in high energy experiment . Thus fabricating large amount of crystals is quite feasible. In this report, we have studied the characteristics of CsI(T$`\mathrm{}`$) crystal for the possibility of dark matter search experiment . ## 2 Experimental Setup We prepared a 3cm$`\times `$3cm$`\times `$3cm CsI(T$`\mathrm{}`$) crystal with all surfaces polished. Photo-multiplier tubes of 2 inch diameter(Hamamtsu H1161) are directly attached on two opposite end surfaces. The cathode planes of PMT cover all the area of the crystal surfaces attached. The other sides are wrapped with 1.5 $`\mu `$m thick aluminized foil window or Teflon tapes followed by black tapes. It is necessary to use only very thin foil for the side where X-ray sources are attached that low energy X-rays are not blocked. For the alpha source, additional aluminum foil is located between the aluminized foil and the source to reduce the $`\alpha `$ energy. Signals from both PMTs are then amplified using a home-made AMP($`\times `$8) with low noise and high slew rate. Another signals are amplified with ORTEC AMP($`\times `$200) to make the trigger logic. Discriminator thresholds are set at the level of single photoelectron signal. By using LED, we confirmed that the single photoelectron signal is well above the electronic noise. In order to suppress the accidental triggers from dark currents, we delay the signal by 100 ns and then formed a self coincidence for each PMT signal, which require that at least two photoelectrons occur within 200 ns. Then coincidence of both PMT signals are made for the final trigger decision. In this way the trigger caused by the accidental noises are suppressed by a great amount. With this condition the effective threshold is four photoelectrons, which roughly corresponds to 40 photons produced. Using the widely accepted light yield of CsI(T$`\mathrm{}`$), $``$50,000 photons/MeV, our threshold can be interpreted as 2 keV. The crystal and PMTs are located inside the of 5 cm thick lead blocks in order to stop the environmental background. A digital oscilloscope is used for the data taking with GPIB interface to a PC with LINUX system. We developed DAQ system with GPIB and CAMAC interface based on the ROOT package and entire analysis was performed with the ROOT package too. The schematics of the experimental setup and the trigger elements are shown in Figure 1 a) and b). The digital oscilloscope we used for our experiment samples the signal at 1 Gs/sec with 8 bit pulse height information and two channels are read out simultaneously. Full pulse shape informations are saved for the further analysis. ## 3 Calibration We have performed measurements of X-rays, $`\gamma `$-rays, and alpha particles using various radioactive sources with the setup described in the previous section. The energy spectra of X-rays and $`\gamma `$ rays from the <sup>57</sup>Co source is given in Fig. 2. The highest peak is from the gamma ray of 122 keV. Shown in left side of broad distribution of pulses are the Compton edge. The energy resolution at 122 keV is about 7%. Also, the X-ray peak at 6.4 and 14.4 keV are clearly seen with energy resolution of 30 and 20%, respectively. This resolution is not much worse than that of NaI(Tl) crystal . Many calibration sources such as <sup>57</sup>Co, <sup>109</sup>Cd, <sup>137</sup>Cs, <sup>54</sup>Mn and <sup>60</sup>Co are used for the determination of linearity and resolution. Fig. 3 shows the energy resolution of CsI(T$`\mathrm{}`$) crystal with PMT on each side. The best fit of the resolution with following the parameterization is $$\frac{\sigma }{\mathrm{E}(\mathrm{MeV})}=\frac{0.03}{\sqrt{\mathrm{E}(\mathrm{MeV})}}0.01,$$ (1) and it becomes $$\frac{\sigma }{\mathrm{E}(\mathrm{MeV})}=\frac{0.02}{\sqrt{\mathrm{E}(\mathrm{MeV})}}0.01$$ (2) , when we add PMT signals from both sides. The pulse shape is quite linear at high energy as shown in Fig. 4 but there is some deviation at low energy as shown in Fig. 5. The pulse height of the 662 keV $`\gamma `$-ray line from <sup>137</sup>Cs is defined as unity for the linearity plot. It turns out that the variation in the response function near the L-, K-shell of Cs and I causes nonlinearity at X-ray region within 30% . This is because photoelectrons ejected by incident gamma rays just above the K-shell energy have very little kinetic energy so that the response drops. Just below this energy, however, K-shell ionization is not possible and L-shell ionization takes place. Since the binding energy is lower, the photoelectrons ejected at this point are more energetic which causes a rise in the response. The pulse shape is linear within 10 % up to low energy X-ray region if these effects are corrected. ## 4 Pulse Shape Analysis In many scintillating crystals, electrons and holes produced by ionization are captured to form certain meta-stable states and produce slow timing component. On the other hand, a larger stopping power from recoiling nucleus produces a higher density of free electrons and holes which favors their recombination into loosely bound systems and results in fast timing component. By using this characteristic, we may be able to separate X-ray backgrounds from the high ionization loss produced by WIMP. To demonstrate this difference, we measured signals produced by alpha particles using <sup>241</sup>Am source. Kinetic energy of the alpha particle is 5.5 MeV and the incident energy was controlled by the thickness of thin aluminum foil in front of the crystal. Although alpha particle at this energy stops in the crystal, the visible energy seen by the PMT is about 75% of the energy. This is due to the quenching factor for alpha particles and agrees with what were observed by the other experiments . We show the two dimensional histogram of mean time vs. integrated charge in Fig. 6. The mean time is the pulse height weighted time average, defined as $$<t>=\frac{t_i\times q_i}{q_i},$$ (3) where $`q_i`$ is the amplitude of the pulse at the channel time $`t_i`$ up to 4 $`\mu `$s. It is practically the same as the decay time of the crystal. Two clear bands in the Fig. 6 indicate that we can make good separation between the alpha particle and X-ray. The low energy of X-ray from the <sup>241</sup>Am source is 60 keV. In Fig. 7, we projected signals near 60 keV region to the mean time axis and it shows that the decay time for alpha particles is $``$700 ns while for X-rays $``$1100 ns. Two peaks are well separated by more than 3 sigma in this energy region. ## 5 Conclusion We demonstrated that CsI(T$`\mathrm{}`$) crystal can be used to measure low energy gamma rays down to few keV. Linearity within 10% and good energy resolution have been obtained down to 6 keV X-ray region. In addition, a good separation of alpha particles from gamma rays has been achieved by using mean time difference. If recoiled ions in the crystal behave similar to alpha particles, the mean time difference would be very useful to differentiate WIMP signals from backgrounds. The background study and neutron response on CsI(T$`\mathrm{}`$) study are underway. If this study is successful, a pilot experiment with a large amount crystals will be launched in near future. This work is supported by Korean Ministry of Education under the grant number BSRI 1998-015-D00078. Y.D. Kim wishes to acknowledge the financial support of the Korean Research Foundation made in the program year of 1998.
no-problem/9910/astro-ph9910360.html
ar5iv
text
# The velocity structure of LMC Carbon stars: young disk, old disk, and perhaps a seperate population ## 1. Introduction Carbon stars are an important tracer of the kinematics of the disk of the Large Magellanic Cloud (LMC). Kunkel et al. (kunkel (1997)) have analyzed the velocities of carbon stars in the outer LMC disk. Hardy, Schommer, & Suntzeff (ssh (1999)) have measured the radial velocities of 551 carbon stars in the inner $`70\mathrm{deg}^2`$ of the LMC and fit these velocities to a disk model. Here we focus on the residuals to this disk solution in order to isolate different kinematic components of the LMC carbon star population. The Milky Way disk has a multi-components structure that may be describable as a kinematically cold thin disk and a hotter thick disk as originally advocated by Gilmore & Reid (1983), or may comprise a continuum of structures of increasing thickness (Norris & Ryan 1991). The LMC provides a unique laboratory to study the kinematic substructure of a quite different galaxy that also has disk kinematics. By studying this substructure, we can eventually learn about the relations between formation history, disk heating, and enrichment in a non-Milky Way setting. Schommer et al. (1992) and Hughes, Wood, & Reid (1991) have presented data suggested that older components of the LMC disk are kinematically hotter than younger components, although these studies are limited by poor statistics. In this paper, we analyze a much larger radial-velocity sample to extract a substantially more detailed picture of the LMC’s disk substructure. From both conceptual and practical standpoints, the analysis of the Carbon star radial velocities is best divided into two steps. In the first step Hardy et al. (ssh (1999)) fit for the global properties of the disk including its projected rotation curve and its transverse velocity. Here we apply the second step and examine the residuals to that fits in order to extract information about the kinematic structure of the disk. This study yields unambiguous evidence that the LMC disk, like the Milky Way disk, has a multi-component structure. We go on to show that, just as with the Milky Way, the colder disk component is more metal rich than the hotter one. In addition to determining the structure of the LMC disk, we also search for a non-disk component. One of the motivations for this research is the microlensing conundrum. At present $`20`$ microlensing events towards the Magellanic clouds have been analyzed (Alcock et al. macho6yr (1997); Lasserre et al. eros2lmc (2000)). If these microlensing events are due to halo objects, or Machos, then the detected Machos make up $`1030`$% of the mass of the halo. All obvious astrophysical candidates for halo microlensing have severe problems (e.g. Graff, Freese, Walker & Pinsonneault gfwp (1999)) An alternative hypothesis is that the microlensing events are due to lenses within the LMC (Wu wu (1994), Sahu sahu (1994)). However, if these lenses are virialized, they must have a large velocity dispersion (Gould gouldvir (1995)). In that case, we should see this population in the carbon star velocities, unless the carbon stars do not trace the lens population (Aubourg et al. 1999). Another possibility is that the observed microlensing is due to an unvirialized foreground or background population of lenses, such as a tidal streamer (Zhao zhao (1998); Zaritsky & Lin (1997); Zaritsky et al. zsthl99 (1999)). In this case, we would expect the velocities of the lenses to be different from those of LMC stars. Again, we should see this population in the carbon star velocities, unless the carbon stars do not trace the lens population or unless, by coincidence, the lens population has the same radial velocity as the main LMC population. We find that the data provide evidence at the $`2\sigma `$ level for additional velocity structure that could be due to an unvirialized foreground or background population. While this detection cannot be regarded as compelling, the problem of explaining the observed microlensing events by other routes has proven so difficult that this proposed solution should be given serious consideration: our marginal detection should be checked by a much larger radial-velocity study. ## 2. The Data Hardy et al. (ssh (1999)) obtained radial velocities $`v`$ for 551 carbon stars in 35 fields, each about $`0.25\mathrm{deg}^2`$ scattered more or less uniformly over the inner $`70\mathrm{deg}^2`$ of the LMC. The measurement errors are typically $`1\mathrm{km}\mathrm{s}^1`$. Hardy et al. (ssh (1999)) fit these velocities to a planar, inclined disk with a circular velocity that is allowed to vary in 5 bins. Table 1 shows a summary of the parameters for the solution used in this paper; see Hardy et al. (ssh (1999)) and Schommer et al. (sosh (1992)) for details and descriptions of the rotation curve parameters and other possible fits. The fit adopted here is basically a solid body rotation model (constant dV/dr) out to 3.5 degrees, a flat rotation curve beyond that (3.5-5.5), a slightly twisting line of nodes ($`\mathrm{\Theta }`$ in Table 1), an overall dispersion around the fit ($`\sigma `$) which is characteristic of an intermediate to old disk population, and an orbital transverse motion consistent with the proper motion measures of the LMC (e.g., Kroupa & Bastian 1997). The solution simultaneously fits for the transverse velocity of the LMC $`𝐯_{}`$ since this gives rise to a gradient in radial velocities across the face of the LMC with respect to angular position, $`v=𝐯_{}`$. In this paper we primarily use the residuals to this fit, $`\mathrm{\Delta }v`$, (§ 3 and § 4.1) but also make use of the heliocentric radial velocities, $`v`$ (§ 4.2). | Table 1. Rotation Curve Parameters | | | | --- | --- | --- | | V<sub>sys</sub> | dV/dr | V<sub>circ</sub> | | 50 km/s | 21.5 km/s/kpc | 75 km/s | | $`<\mathrm{\Theta }(PA)>`$ | $`\sigma `$ | V<sub>tr</sub> | | –20 | 18-22 km/s | 250 km/sec | ## 3. Detection of two populations A histogram of the residuals $`\mathrm{\Delta }v`$ is shown in Figure 1. We attempt to represent these residuals as various sums of Gaussians of the form $$P(\mathrm{\Delta }v)=\underset{i=1}{\overset{n}{}}\frac{N_i}{\sqrt{2\pi }\sigma _i}\mathrm{exp}\left[\frac{(\mathrm{\Delta }v\overline{\mathrm{\Delta }v}_i)^2}{2\sigma _i^2}\right],$$ (1) subject to the constraint $`_iN_i=551`$. Here $`n`$ is the number of Gaussian components, and for each component $`i`$, $`N_i`$ is the number of stars, $`\overline{\mathrm{\Delta }v}_i`$ is the mean residual velocity, and $`\sigma _i`$ is the dispersion. We fit the velocity residuals to these functional forms by adjusting the parameters to maximize the log likelihood estimator, $$\mathrm{ln}L=\underset{k=1}{\overset{551}{}}\mathrm{ln}[P(\mathrm{\Delta }v_k)].$$ (2) This is equivalent to a $`\chi ^2`$ minimization measurement in the Poisson limit of infinitely small bin size. Probabilities can be inferred from the log likelihood estimator by comparing likelihoods to the solution with maximum likelihood and using the relation $$\mathrm{\Delta }\chi ^2=2\mathrm{\Delta }\mathrm{ln}L.$$ (3) Figure 1 shows fits to the (unbinned) residuals using a single Gaussian (with two free parameters) and a double Gaussian. In the latter fit, we impose the physically plausible additional constraint $`\overline{\mathrm{\Delta }v}_1=\overline{\mathrm{\Delta }v}_2`$, so there are a total of 4 free parameters. The double Gaussian solution has 20% of the stars in a thin disk population with a velocity dispersion of $`8\mathrm{km}\mathrm{s}^1`$, and the remaining 80% of the stars in a thicker disk population with a velocity dispersion of $`22\mathrm{km}\mathrm{s}^1`$. The improvement is $`\mathrm{\Delta }\chi ^2=20`$ for the addition of two degrees of freedom, i.e. a statistical significance of $`1\mathrm{exp}(\mathrm{\Delta }\chi ^2/2)110^{4.3}`$. Thus, LMC carbon stars are better represented as two populations than one. However, this does not prove that we have detected two distinct populations. It could also be that there are a continuum of populations with a range of dispersions from below 8 to above $`22\mathrm{km}\mathrm{s}^1`$. Nevertheless, for clarity of discussion, we will refer to two discrete populations. ### 3.1. Metallicity of the two populations Costa & Frogel (frogel (1996)) (CF) published $`RI`$ photometry of 888 LMC carbon stars and 204 with infrared $`(JHK)`$ photometry. Within this sample, 103 of the stars that have infrared photometry had velocities measured by Hardy et al. (ssh (1999)). CF showed that the infrared colors differ between samples of carbon stars from the Milky Way, the LMC, and the SMC. The carbon stars in the three galaxies can be fit by $$(JH)_0=0.62(HK)_0+\zeta $$ (4) with $`\zeta \{0.72,0.67,0.60\}`$ respectively for the Galaxy, the LMC, and the SMC. Cohen et al. (cfpe (1981)) suggested that this shift in colors is due to a metallicity related blanketing effect, in which case $`\zeta `$ can be used as a metallicity indicator. As can be seen in Figure 5 of CF, there is substantial scatter in the color-color relations compared to the differences among the three galaxies. Thus, this metallicity indicator cannot reliably determine the metallicity of an individual carbon star: it should be used only as a statistical estimator for stellar populations. Even though the metallicities of carbon stars in the three galaxies are unknown, if we assume that $`[\mathrm{Fe}/\mathrm{H}]\{0,0.4,0.8\}`$ for the three galaxies, we can make a rough calibration of this metallicity indicator: $$\delta [\mathrm{Fe}/\mathrm{H}]6.7\delta \zeta .$$ (5) This relation should be taken only as rough estimate. However, one can be more confident of the relative order of the metallicities of carbon stars in the three galaxies, and hence $`\zeta `$ can robustly distinguish between a high-metallicity population and a low-metallicity population. We find that the metallicity indicator $`\zeta `$ is different for high velocity-residual stars than for low velocity stars. Specifically, for stars with $`|\mathrm{\Delta }v|<10`$ km/sec, we find $`\zeta =0.678\pm 0.007`$ while for $`|\mathrm{\Delta }v|>10`$ km/sec, we have $`\zeta =0.662\pm 0.005`$. These two values of $`\zeta `$ are different at the 93% confidence level. However, since most of the “low velocity” stars chosen this way are actually from the more numerous thick-disk velocity sample, dividing up the sample in this way is not the best way to measure the metallicity difference. To isolate the thin and thick disks, we modify equation (1) to read $$P(\mathrm{\Delta }v)=\underset{i=1}{\overset{n}{}}\frac{N_i}{\sqrt{2\pi }\sigma _i}\mathrm{exp}\left[\frac{(\mathrm{\Delta }v\overline{\mathrm{\Delta }v_i})^2}{2\sigma _i^2}\right]\mathrm{exp}\left[\frac{(\zeta \overline{\zeta }_i)^2}{2\sigma _\zeta ^2}\right],$$ (6) where $`\overline{\zeta }_i`$ is the mean value of $`\zeta `$ for each population and $`\sigma _\zeta =0.044`$ is the observed dispersion of $`\zeta `$ in the sample for the 103 stars with velocities and infrared data. Note that for stars without infrared data, the last term is simply set to unity. We then find $`\overline{\zeta }_1=0.663\pm 0.04`$, $`\overline{\zeta }_2=0.700\pm 0.16`$, and $`\overline{\zeta }_2\overline{\zeta }_1=0.037\pm 0.017`$, i.e. a $`2\sigma `$ difference, which corresponds to $`\mathrm{\Delta }[\mathrm{Fe}/\mathrm{H}]0.25`$. Given the combination of different velocities and different metallicities, we claim that we have detected either two different disks within the LMC representing different ages of stellar populations or a continuous distribution of disk populations with a range of ages. In either case, the younger populations have higher metallicity and lower velocity dispersion. ### 3.2. No virialized lenses Gould (gouldvir (1995)) showed that for microlensing within a virialized disk, the microlensing optical depth is $$\tau =2\frac{v^2}{c^2}\mathrm{sec}^2i$$ (7) where $`i`$ is the angle of inclination of the disk with respect to the line of sight, $`3040^{}`$ in the case of the LMC. In the case of the carbon stars, the total velocity dispersion is $`21\mathrm{km}\mathrm{s}^1`$ and thus the optical depth due a virialised stellar population traced by the carbon stars is $`\begin{array}{c}<\hfill \\ \hfill \end{array}2\times 10^8`$, much smaller than that measured by the MACHO experiment (Alcock et al. macho6yr (1997)) of $`1.2_{0.3}^{+0.4}\times 10^7`$. Thus, the virialized population traced by carbon stars cannot account for microlensing. However, a virialized population too old to be traced by carbon stars would not be seen in our data (Aubourg et al. 1999). ### 3.3. Conclusion We have explicitly assumed that hotter, more metal poor population is older than the younger, metal rich population in analogy with the Milky Way, even though the LMC may have a different disk heating mechanism than the Milky Way. The age-velocity dispersion relation has been confirmed previously by Hugues, Wood & Reid (hughes (1991)) and Schommer et al. (sosh (1992)). Since we detect a metallicity difference based on our infrared colors within this population, we also determine that some noticeable metal enrichment occured during the Carbon star formation epoch. The velocity dispersion of the thick disk component, $`22\mathrm{km}\mathrm{s}^1`$, is much higher than the thin disk, and is close to the velocity dispersion of the oldest objects measured in the LMC, $`30\mathrm{k}\mathrm{m}\mathrm{s}^1`$ (Hughes, Wood & Reid 1991, Schommer et al. 1992). Thus, we can show that the bulk of disk heating occurred during the Carbon star formation epoch. ## 4. Search for a Kinematically Distinct Population The analysis of Gould (gouldvir (1995)) only applies to virialized populations. It is still possible that an unvirialized population of stars could be causing microlensing. Such a population might be a streamer of stellar material pulled out by tidal interactions between the LMC and the Milky Way, or between the LMC and the SMC (Zhao zhao (1998)). Zaritsky & Lin (zl97 (1997)) claimed that they may have seen such a streamer in LMC clump giants. This paper caused numerous counter-arguments which are summarized and debated in (Zaritsky et al. zsthl99 (1999)). Ibata, Lewis & Beaulieu (ilb (1998)) examined the velocities of 40 clump giants in the LMC of which 24 were candidate foreground stars according to the criteria of Zaritsky & Lin (zl97 (1997)). Ibata et al. (1998) found no difference in the mean velocities of the candidate foreground stars and the other clump stars and concluded that these stars did not form a separate kinematic population from the LMC. Zaritsky et al. (zsthl99 (1999)) confirmed the results of Ibata et al. (1998) using a much larger sample of 190 candidate foreground clump stars. However, the carbon-star sample that we analyze here is potentially more sensitive to the presence of tidal streamers than either of these two clump-star samples, in part because it is larger (551 stars) and in part because the velocity errors are much smaller ($`1\mathrm{km}\mathrm{s}^1`$). ### 4.1. Search for third population in disk-fit residuals We search the data for a non-virialized, kinematically distinct population (KDP) in two different ways. First, we fit the residuals to the disk solution to the sum of three Gaussians, two representing the LMC, and one for the KDP. That is, we apply equation (6) with $`n=3`$. We find a solution which is somewhat better than the two Gaussian fit, $`\mathrm{\Delta }\chi ^2=8`$ for a change of 4 degrees of freedom. The off-center KDP peak is found to be moving towards us at $`27\mathrm{km}\mathrm{s}^1`$ relative to the bulk of the LMC and to contain 63 stars, about 10% of the total. Thus, the data suggest that there may be a KDP, but at a statistically weak level of confidence. A Monte Carlo simulation was performed to verify the statistical confidence (details of which are described in § 4.3) which showed that this third bump is only present at the 75% confidence level. The fit to the third bump is shown in Fig 2. ### 4.2. Search for a third population in velocities In the model considered in the previous section, the KDP stars have a common motion relative to the LMC. Possibly, the KDP stars are moving steadily away from the LMC disk, or are not associated with the LMC disk. The KDP should be seen in the original heliocentric radial velocities $`v`$ better than it is seen in the disk-fit residuals $`\mathrm{\Delta }v`$. We therefore fit the data to a functions of the form $`P(v,\mathrm{\Delta }v)=`$ $`{\displaystyle \underset{i=1}{\overset{2}{}}}{\displaystyle \frac{N_i}{\sqrt{2\pi }\sigma _i}}\mathrm{exp}\left[{\displaystyle \frac{(\mathrm{\Delta }v\overline{\mathrm{\Delta }v_i})^2}{2\sigma _i^2}}\right]\mathrm{exp}\left[{\displaystyle \frac{(\zeta \overline{\zeta }_i)^2}{2\sigma _\zeta ^2}}\right]`$ (8) $`+{\displaystyle \frac{N_{\mathrm{KDP}}}{\sqrt{2\pi }\sigma _{\mathrm{KDP}}}}\mathrm{exp}\left[{\displaystyle \frac{(v(\overline{v}_{\mathrm{KDP}}+A_x\theta _x+A_y\theta _y)^2)}{2\sigma _{\mathrm{KDP}}^2}}\right]`$ $`\mathrm{exp}\left[{\displaystyle \frac{(\zeta \overline{\zeta }_{\mathrm{KDP}})^2}{2\sigma _\zeta ^2}}\right],`$ where $`(\theta _x,\theta _y)`$ is its angular position on the sky, and $`A_x`$ and $`A_y`$ are planar coefficients for the heliocentric velocity distribution of the KDP. This equation is similar to equation (6) but we have replaced the $`\mathrm{\Delta }v`$ in the KDP terms by $`v`$, i.e., we fit to the heliocentric rather than the residual velocities. The origen of our x-y coordinate system is at $`\alpha =5^h21^m,\delta =69^{}17^{}`$, with X increasing to the east and Y to the north. Initially, we set $`A_x=A_y=0`$, so that there are same number of degrees of freedom as in the three-Gaussian fit to the residuals. We find no solution here that has a lower $`\chi ^2`$ than the two-Gaussian solution, implying that there is no evidence for the existence of a third population having a common heliocentric velocity outside the LMC disk. We therefore repeat the search, but allow $`A_x`$ and $`A_y`$ to vary as free parameters. We find that the likelihood is then maximized at very low values of the velocity dispersion $`\sigma _{\mathrm{KDP}}\begin{array}{c}<\hfill \\ \hfill \end{array}1\mathrm{km}\mathrm{s}^1`$. We reject these soultions as unphysical, and note that our fitting routines may have been falsely attracted to them as results of inevitable Poisson noise. We then find a solution with 39 stars in the KDP with $`\overline{v}_{KDP}=16.4\mathrm{km}\mathrm{s}^1`$, $`A_x=2.6\mathrm{km}\mathrm{s}^1\mathrm{deg}^1`$, $`A_y=4.9\mathrm{km}\mathrm{s}^1\mathrm{deg}^1`$, $`\sigma _{\mathrm{KDP}}=5\mathrm{km}\mathrm{s}^1`$, and $`\zeta _{\mathrm{KDP}}=0.673`$. Relative to the two-Gaussian solution, this KDP solution has $`\mathrm{\Delta }\chi ^2=16`$ for 6 additional parameters. Figure 3 shows the residuals of the LMC stars with respect to the KDP. The KDP is shown as the strong peak of points around residual 0. Other small peaks are due to the clumped distribution of our stars in angle, and are not significant. There are not enough stars in the KDP to significantly determine if the KDP covers the entire face of the LMC or has a patchy distribution. ### 4.3. Monte Carlo While the probability that any randomly chosen plane will come within $`5\mathrm{km}\mathrm{s}^1`$ of a significant fraction of our sample stars is small (and well represented by the $`\chi ^2`$ test), there are a large number of independent planes that can be compared to the data. To obtain a more accurate assessment of the statistical significance of this detection, we perform a set of Monte Carlo simulations. In each simulation, we draw velocities randomly from the two-Gaussian distribution of disk residuals found in § 3. We then search for a KDP in the resulting heliocentric velocities in the same way we did for the actual data in § 4.1 and § 4.2. In order to make the simulations tractable, we ignore metallicity information. This simplification is justified by the fact that the metallicity of the KDP measured in § 4.2 is not significantly different from the “young disk” component. If metallicity is ignored then the external-plane solution shows an improvement of $`\mathrm{\Delta }\chi ^2=14`$ for 5 additional parameters, which is formally significant at the 98% level. However, we find that out of 407 simulations, there is $`\mathrm{\Delta }\chi ^214`$ in 26 cases. Hence, our detection is significant only at the 94% level, roughly equivalent to $`2\sigma `$. ### 4.4. Evidence of the KDP in Other LMC components Given the intriguing signal we see in the C star velocities, but also the marginal level of significance, it is worth exploring other possible signs of the KDP. One such tracer is the 21cm gas emission, mapped, e.g., by Luks & Rohlfs (lh (1992)), and Kim et al. (kim (1998)). Luks & Rohlfs note that a lower velocity component (“L-component”) contains about 19% of the HI gas in the LMC, is separated from the main velocity component by $``$30 km/s. Although Kim et al. (kim (1998)) do not specifically comment on such a component in their paper based on higher spatial resolution HI imaging, a similar signal seems evident in their position-velocity maps (e.g., Figs. 7a and 7b in their paper) at RA 05:37 - 05:47 and DEC -30 to -120 arcmin. The standard interpretation of this substructure in gas is that it is due to hydrodynamic effects on gas within the LMC disk. However, the correlation of the gas velocity “L-component” with the stellar KDP suggests that the gas may be outside the LMC disk. An intriguing but somewhat more ambiguous signature may be evident in the CH star velocities of Cowley & Hartwick (ch (1991)). Velocities for a sample of $``$80 CH stars show a low velocity asymmetric tail, consistent with a component at $``$20 km/sec lower systematic velocity. Cowley & Hartwick (1991) even suggest that one explanation of this population is that it is a result of an earlier violent tidal encounter between the LMC-SMC system and the Milky Way. The small sample statistics and asymmetric spatial distribution of these stars make a more detailed exploration difficult. ## 5. Microlensing Interpretation We may have detected a kinematically distinct population of carbon stars in the direction of the LMC. If real, this population could be either a structure within the LMC disk or tidal debris that is well separated from the disk and hence either in front of or behind the LMC. If it is well separated from the LMC, then it would give rise to microlensing: either it would be in front of the LMC and so would act as lenses, or it would be behind the LMC and would act as sources. The microlensing optical depth due to a thin sheet of stellar matter with density $`\mathrm{\Sigma }_1`$ and the LMC with density $`\mathrm{\Sigma }_2`$ separated by a distance $`D`$ which is small compared to the distance from the Sun to the LMC is: $$\tau _{\mathrm{KDP}}=\frac{4\pi G}{c^2}D\frac{\mathrm{\Sigma }_1\mathrm{\Sigma }_2}{\mathrm{\Sigma }_1+\mathrm{\Sigma }_2}.$$ (9) The distance between the two sheets, $`D`$, cannot be determined from velocity data alone. However, since the two sheets must have similar velocities, the tidal tail cannot be a random interloper in the halo, but must be somehow related to the LMC. Lacking further information, we make the somewhat ad hoc assumption that the material in the tidal tail has been moving away from the LMC at a constant velocity of $`30\mathrm{km}\mathrm{s}^1`$ since close tidal encounter between the LMC and the SMC, 200 Myr ago (Gardiner & Noguchi gn (1996)). In that case, we have $$Dv_{\mathrm{KDP}}\times 200\mathrm{Myr}5\mathrm{kpc}.$$ (10) In fact, it is likely that the foreground object has had its velocity substantially changed by gravitational interaction with the LMC, and to a lesser extent, the SMC and the Milky Way, so this calculation only indicates that the object could have moved several kpc from the LMC in the past 200 Myr. All the results of this section will hold if the object is several kpc either in front of or behind the LMC. The total surface mass density, $`\mathrm{\Sigma }_1+\mathrm{\Sigma }_2`$, can be estimated from the observed surface brightness of the LMC, which is $`R21.2`$ mag arcsec<sup>-2</sup> (De Vaucouleurs dvc (1957)) near the center. If we assume a mass to light ratio of 3 (in solar units), this corresponds to a total surface mass density of 300 $`M_{}`$ pc<sup>-2</sup>. It is possible that the surface densities of the disk and KDP populations are not traced by carbon stars. Still, lacking further information, we estimate the optical depth by setting $`\mathrm{\Sigma }_1/\mathrm{\Sigma }_2=39/(55139)`$ according to the solution of § 4.2, we obtain $$\tau =6\times 10^8\frac{D}{5\mathrm{kpc}}.$$ (11) This optical depth is substantially larger than the optical depth due to a virialized disk population traced by the carbon stars ($`\begin{array}{c}<\hfill \\ \hfill \end{array}2\times 10^8`$). It is consistent with the value observed by the Macho collaboration (Alcock et al. 2000). There could be more tidal material which we have not found in this search because its velocity is by chance too close to the velocity of the LMC, and which would raise the optical depth. If $`D`$ were greater than 5 kpc, then $`\tau _{\mathrm{KDP}}`$ would rise proportionately. The transverse motion of such a population with respect to the LMC is probably 70 $`\mathrm{km}\mathrm{s}^1`$, the circular orbital velocity of the LMC. To calculate the typical transverse velocity in a microlensing event, this velocity should be added in quadrature to all the other sources of transverse velocity. The stars in the LMC are orbiting about the LMC center with a transverse motion of 70 $`\mathrm{km}\mathrm{s}^1`$ at 4 kpc (Kunkel et al. kunkel (1997); Hardy et al. ssh (1999)). The LMC system has a transverse velocity with respect to the Sun of some 250 $`\mathrm{km}\mathrm{s}^1`$ (Hardy et al. ssh (1999)) which will translate to a projected transverse motion of 25 $`\mathrm{km}\mathrm{s}^1`$ (at 5 kpc from the LMC). Adding these velocities in quadrature, the derived typical transverse velocity of a microlensing event is 100 $`\mathrm{km}\mathrm{s}^1`$, in which case the typical mass of a lens is $$M0.13M_{}\left(\frac{D}{5\mathrm{kpc}}\right)^1.$$ (12) This is significantly below the mean mass of stars in the neighborhood of the Sun (e.g. Gould, Bahcall, & Flynn 1997), but the LMC may have a different mass function. However, it is important to recognize that if $`D`$ is made larger so as to account for more of the optical depth, then the mean mass is driven lower $$M0.075M_{}\left(\frac{\tau }{1\times 10^7}\right)^1.$$ (13) ## 6. Conclusion We report two primary new results, one with high statistical confidence, one which is shakier, but perhaps more interesting if true. We show that Carbon stars in the LMC are divided into a hot and cold population, with a clear difference in metallicity between the two populations. Thus, we show that the epoch of LMC disk heating had to occur during the Carbon Star formation epoch. We also show with less confidence the existence of a third population, outside the LMC. If this population is real, it suggests that some fraction of the Carbon stars in the LMC are not in the disk, and thus could explain microlensing events. Although at present the statistical significance of this detection is not enviable, this result is still the best extant solution to the microlensing conundrum. The microlensing conundrum poses such a difficult problem that several extreme explanations have been proposed including mirror matter and cosmological populations of population III white dwarfs (Graff, Freese, Walker & Pinnsoneault 1999, and references therin). The kinematically distinct population is unique amongst these explanations in that it in not only allowed by the data, but even supported by the data at the 95% confidence level, and requires no modifications to the standard models of Particle Physics or Cosmology. We thus present it as the strongest explanation of LMC microlensing. Work at Ohio State was supported in part by grant AST 97-27520 from the NSF. ## Figure Captions
no-problem/9910/astro-ph9910451.html
ar5iv
text
# Gravitational lensing studies with the 4-m International Liquid Mirror Telescope (ILMT) ## 1 What is a Liquid Mirror Telescope? A Liquid Mirror Telescope (hereafter LMT) consists of a container, filled with mercury, which spins around a vertical axis at a constant speed. Thus, the surface of the reflecting liquid takes the shape of a paraboloid which can be used as the primary mirror of a telescope. By placing a CCD detector at the prime focus of the mirror, one obtains a telescope suitable for astronomical observations. Because an LMT cannot be tilted and hence cannot track like conventional telescopes do, the time delay integration (TDI) technique (also known as drift scan) is used to collect the light of the objects during their transit along the CCD detector. A semi-classical corrector is added in front of the CCD detector in order to provide a larger field of view and to remove the TDI distortion. This distortion arises because the images in the focal plane move at different speeds on distinct curved trajectories while the TDI technique moves the pixels on the CCD at a constant speed along a straight line. ## 2 The 4 m ILMT The 4 m International Liquid Mirror Telescope (ILMT) will be installed in the Atacama desert in Chile and will be fully dedicated to a zenithal direct imaging survey in two broad spectral bands (B and R). The possible construction of an array of several ($``$ 2) liquid mirrors, working at different wavelengths, is also being considered. It should allow one to reach limiting magnitudes B = 23.5 and R = 23 in a single scan of a $`4096\times 4096`$ pixels CCD or an equivalent mosaic of four $`2048\times 2048`$ pixels CCDs. The telescope field of view is about 30 x 30 arcminutes and the telescope will be operated during no less than 4-5 years. Thus, very precise photometric and astrometric data will be obtained in the drift scan mode night after night, during several consecutive months each year, for all objects contained in a strip of sky of approximately 140 square degrees, at constant declination. Due to its location in the Atacama desert, both low and high galactic latitude regions will be studied. The low galactic latitudes are propitious to microlensing effects and the high galactic latitudes to observations of macrolensing by galaxies as well as to strong and weak lensing effects induced by galaxy clusters. ## 3 Gravitational lensing studies Numerical simulations were carried out to estimate the gravitational lensing effects we can expect from a survey made with a 4 m LMT. ### 3.1 Microlensing in the Galaxy We used a galactic model with 3 components: the halo, the disk and the Galactic bulge. About 50 (resp. 10, 3) microlensing events due to the bulge (resp. the disk, the halo) are expected after one year of ILMT observations at an observing latitude of $`29^{}`$ assuming that the Galaxy is entirely made of 1 M dark compact objects. ### 3.2 Macrolensing Considering the quasar number counts relation and the optical depth of cosmologically distributed “singular isothermal spherical” galaxies, we expect to detect approximately 50 new multiply imaged quasars. ### 3.3 Weak lensing We used a model similar to that described by Nemiroff and Dekel (1989, ApJ, 344, 51) and conclude that in a survey of $`100^^2`$ with a limiting brightness $`\text{B}_{\text{lim}}`$ of 26.5 mag/arcsec<sup>2</sup> or fainter, one can expect at least 50 luminous arcs (axial ratio $`A5`$, angular extent $`\theta 10^{}`$). ### 3.4 Monitoring and determination of H<sub>0</sub> The daily monitoring of the 50 new lensed quasars will significantly contribute to a statistical and independent determination of the Hubble constant and to a better understanding of the QSO source structure and of the distribution of dark matter in the Universe, through the analysis of microlensing effects. Acknowledgments: It is a pleasure to thank Martin Cohen for his kind help and his calculations of star counts with his SKY program. We also thank Annie Robin and her team for making their galactic model available through the WEB.
no-problem/9910/cond-mat9910472.html
ar5iv
text
# The triangular Ising antiferromagnet in a staggered field ## I Introduction The triangular Ising antiferromagnet (TIAFM), described by the Hamiltonian $`H=J{\displaystyle \underset{i,j}{}}s_is_jJ>0,`$ (1) where $`s_i=\pm 1`$ and $`i,j`$ denotes nearest-neighbor sites on a triangular lattice, provides an interesting example of a frustrated system without disorder. Unlike the nearest-neighbor Ising antiferromagnet on a square lattice, this model does not have a finite-temperature phase transition. It has an exponentially large number of degenerate ground states, which implies that the zero-temperature entropy per spin is finite. The zero-field partition function can be computed exactly, leading to the result $`S(T=0)=0.3383\mathrm{}`$ for the zero-temperature entropy per spin. At zero temperature, the system is critical and the two-spin correlation function decays as a power law, $`c(r)cos(2\pi r/3)/r^{1/2}`$, along the three principal directions . The ground states of the TIAFM can be mapped exactly to dimer coverings on the dual lattice which is hexagonal . Using this mapping, it is possible to classify the ground states into sectors specified by the number of “strings” which represent the difference between two dimer coverings. The exponential degeneracy of the ground state of the TIAFM can be removed in various ways, e.g., by choosing different coupling constants along the three principal directions, or by introducing a uniform field. Both these cases have been extensively studied. For anisotropic couplings, the problem is exactly solvable and one finds a usual Ising-like second-order phase transition except in some special cases for which the transition temperature goes to zero. In the case of a uniform field, simulations and renormalization-group arguments indicate that there is a second-order transition belonging to the $`3`$-state Potts model universality class. A particularly interesting special case is the limit in which the system is restricted to remain within the manifold of the TIAFM ground states. This can be achieved by making the coupling constant $`J`$ infinitely large. One then considers the effects of degeneracy breaking terms. In this limit, the nature of the transition changes. In the case of anisotropic couplings, the transition changes from Ising-like to Kastelyn-type ($`K`$-type) . Below $`T_c`$, the system freezes into the ground state and the specific heat vanishes identically. As $`T_c`$ is approached from the high-temperature side, the specific heat shows a $`(TT_c)^{1/2}`$ singularity. In the case of a uniform applied field, the transition is believed to be of Kosterlitz-Thouless type . This case is treated by first mapping the problem to a solid-on-solid model and then using renormalization-group arguments. In this paper, we study the behaviour of the TIAFM in the presence of a staggered field chosen to be conjugate to one of the ground states. Our work is motivated in part by similar studies on glassy systems with exponentially large number of metastable states. These studies consider the thermodynamic behavior of such systems in the presence of a field conjugate to a typical configuration of an identical replica of the system. As the strength of the field is increased from zero, the system is found to undergo a first-order transition in which the overlap with the selected configuration changes discontinuously. This transition is driven by the competition between the energy associated with the field term and the configurational entropy arising from the presence of an exponentially large number of metastable states. Like these glassy systems, the TIAFM has frustration and an exponentially large number of ground states. Thus it is of interest to investigate whether a similar behaviour is present in the TIAFM which is a simpler model with no externally imposed quenched disorder. Besides, the question of whether a phase transition can occur in the TIAFM in the presence of an ordering field is interesting by itself. For systems with a finite number of ground states, such as the purely ferromagnetic Ising model and the Ising antiferromagnet on a bipartite lattice, it can be proved that no phase transition can occur in the presence of ordering fields . However, no such general proof exists for systems with an exponentially large number of ground states, and the question of whether a competition between the energy associated with the ordering field and the extensive ground-state entropy can drive a phase transition in such systems remains open. The staggered field considered by us is conjugate to a ground state with alternate rows of up and down spins. In the lattice gas picture of the Ising model, this corresponds to an applied potential which is periodic in the direction transverse to the rows. In the presence of the field, there are a large number of low-lying energy states and this suggests the possibility of an interesting phase transition as the temperature is varied. We consider the case where the coupling constant $`J`$ is finite, as well as the limit $`J\mathrm{}`$. In the latter limit, one considers only the set of states which are ground states of the TIAFM Hamiltonian of Eq. (1). In this limit, we show that the problem of evaluating the partition function reduces to calculating the largest eigenvalue of a one-dimensional fermion Hamiltonian with long-range coulombic interactions. We have not been able to solve this problem but have obtained a finite lower bound for the transition temperature. The transition appears to be $`K`$-type. For finite $`J`$, we have studied the equilibrium behavior of the system by Monte Carlo (MC) simulations using three different kinds of dynamics: (1) single-spin-flip Metropolis dynamics, (2) cluster dynamics and (3) “string” dynamics in which all the spins on a line are allowed to flip simultaneously. We find that in all three cases, equilibration times at low fields and low temperatures increase rapidly with system size. The last dynamics is found to be the most efficient one for equilibrating the system in this regime. Finite-size scaling analysis of the data for small fields suggests the existence of a characteristic temperature near which the correlation length becomes very large. However, because of the long equilibration times, we have not been able to study large enough systems to be able to answer conclusively the question of whether this corresponds to a true phase transition. One surprising finding of our study concerns zero-temperature quenches of the system, starting from random initial configurations. We show that the system almost always reaches the ground state in such quenches. On the other hand, a slow cooling of the system leads to a metastable state. This is contrary to what happens in usual glassy systems where a fast quench usually leads to the system getting stuck in a higher energy state, while a slow cooling leads to the ground state with a high probability The paper is organized as follows. In section II, we consider the TIAFM in zero field and describe the mapping from the ground states to dimer coverings and the subsequent classification of the ground states into sectors. Many of the results in this section are well-known, but we have included them for the sake of completeness. Also our description is somewhat different from the existing ones. In section III, we consider the TIAFM with an applied staggered field in the limit $`J\mathrm{}`$. The mapping of this system to a one-dimensional fermion model is described and a finite lower bound for the transition temperature is derived. In section IV, we present our numerical results for the equilibrium properties at finite $`J`$. These results are obtained from exact numerical evaluation of averages using transfer matrices and also through MC simulations. We also discuss the dynamic behaviour of the system under different MC procedures. Section V contains a summary of our main results and a few concluding remarks. ## II Mapping of TIAFM ground states to dimer coverings and classification into string sectors The frustration of the TIAFM arises from the fact that it is impossible to satisfy all three bonds of any elementary plaquette of the triangular lattice. At most we can have two bonds satisfied. The lowest energy configuration of the system is one in which every elementary triangle is maximally satisfied. This condition can be satisfied for a large number of configurations and for future reference we shall denote the set of all such states by $`𝒢`$. We now show the correspondence between the ground states and dimer coverings on the dual lattice. The dual lattice is formed by taking the centers of all the triangles. Consider any two triangles which share a bond. If the bond is not satisfied, we place a dimer connecting the centers of the two triangles. The fact that every triangle has one and only one unsatisfied bond implies that every point of the dual lattice forms the end-point of one and only one dimer. Hence we obtain a dimer covering. This mapping is not unique, since flipping all spins in any given spin configuration leads to the same dimer covering. FIG. 1.: A ground state configuration and the corresponding dimer covering for a $`6\times 6`$ lattice. Periodic boundary conditions are applied in the horizontal and vertical directions. The crosses correspond to repeated points. In Fig. 1 we show a ground-state configuration and the corresponding dimer covering. Another dimer covering which corresponds to a ground state with alternate rows of up and down spins is shown in Fig. 2. We shall call this the standard configuration. It is important to choose the boundary conditions in a convenient manner and we follow the convention used in Fig. 1 with periodicity in the $`x`$ and $`y`$ directions. A useful classification of the ground states is obtained by superposing the standard dimer configuration with any other dimer configuration. This results in string configurations as shown, for example, in Fig. 3 which is obtained by superposing the standard configuration of Fig. 2 with the configuration of Fig. 1. Clearly there is a one-to-one correspondence between string and dimer configurations. It is easy to prove the following points: (i) the number of strings passing though every row is conserved; (ii) the strings do not intersect; (iii) the number of strings can be any even number from $`0`$ to $`L`$, where $`L`$ is the number of spins in a row; (iv) the periodic boundary conditions mean that the strings have to match at the boundaries and form closed loops. We classify the ground states into different sectors, with each sector specified by the number of strings. The number of states in each sector can be counted exactly using transfer matrices. Let us label the bonds on successive rows of the lattice in the manner shown in Fig. 4. The position of the strings on each row is specified by the set of numbers $`\{b_1,b_2,\mathrm{}b_n\}`$, where $`b_k`$ gives the position of the $`k`$th string. Note that $`\{b_k\}`$ give the positions of the satisfied bonds in a row. In a sector with $`n`$ strings we consider the $`{}_{}{}^{L}C_{n}^{}\times {}_{}{}^{L}C_{n}^{}`$ matrix which has non-vanishing entries equal to one if the two states can be connected by string configurations. We need two different transfer matrices, namely $`T^{(1)}`$, which transfers from odd numbered rows to even numbered ones and $`T^{(2)}`$, which transfers from even to odd ones. The total number of states in any given sector is then given by: FIG. 2.: The standard configuration of dimers. FIG. 3.: A configuration of strings obtained by superposing the dimer configurations in Fig. 1 and Fig. 2. FIG. 4.: Labelling of successive rows on a $`6\times 6`$ lattice. $`𝒩(n)=Tr(T^{(1)}T^{(2)})^{L/2},`$ (2) where we choose, for convenience, the length of the lattice, $`L`$, to be even. As an example let us consider the transfer matrix in the two string sector. This is given by $`T_{(l_1,l_2)(l_3,l_4)}^{(1)}=\delta _{l_1,l_3}\delta _{l_2,l_4}+\delta _{l_1,l_31}\delta _{l_2,l_4}+\delta _{l_1,l_3}\delta _{l_2,l_41}`$ (3) $`+\delta _{l_1,l_31}\delta _{l_2,l_41}\mathrm{for}\mathrm{l}_2\mathrm{l}_1+1,`$ (4) $`T_{(l_1,l_1+1)(l_3,l_4)}^{(1)}=\delta _{l_1,l_3}\delta _{l_1+1,l_4}+\delta _{l_1,l_3}\delta _{l_1+1,l_41}`$ (5) $`+\delta _{l_1,l_31}\delta _{l_1+1,l_41}.`$ (6) The matrix is diagonalized by the antisymmetrized plane-wave eigenstates $`a_{l_1,l_2}=e^{i(q_1l_1+q_2l_2)}e^{i(q_1l_2+q_2l_1)},q_1<q_2.`$ (7) The periodic boundary condition leads to the following values for the wave vectors: $`q_i=(2n_i+1)\pi /L`$, with $`n_i=0,1,2,\mathrm{}L1`$. The eigenvalues are given by $`\lambda _{\overline{q}}^{(1)}=(1+e^{iq_1})(1+e^{iq_2}).`$ (8) The matrix $`T^{(2)}`$ has the same set of eigenvectors while the eigenvalues are given by $`\lambda _{\overline{q}}^{(2)}=(1+e^{iq_1})(1+e^{iq_2}).`$ (9) The results for the two-string sector can be generalized to any of the other sectors. The transfer matrices $`T^{(1)}`$ and $`T^{(2)}`$ in any sector are diagonalized by antisymmetrized plane wave states. This just reflects the fact that the strings can be thought of as the world lines of non-interacting fermions. The eigenvalues in the $`n`$-string sector are: $`\lambda _{\overline{q}}^{(1)}`$ $`=`$ $`{\displaystyle \underset{k=1}{\overset{n}{}}}(1+e^{iq_k}),`$ (10) $`\lambda _{\overline{q}}^{(2)}`$ $`=`$ $`{\displaystyle \underset{k=1}{\overset{n}{}}}(1+e^{iq_k}),`$ (11) with $`q_k`$s as before. The number of states in the $`n`$-string sector is thus given by: $`𝒩(n)`$ $`=`$ $`Tr(T^{(1)}T^{(2)})^{L/2}`$ (12) $`=`$ $`{\displaystyle \underset{q_1<q_2\mathrm{}.q_n}{}}[{\displaystyle \underset{k=1}{\overset{n}{}}}(1+e^{iq_k})(1+e^{iq_k})]^{L/2}`$ (13) In the large $`L`$ limit, only the dominant term in the above sum contributes and we finally obtain: $`𝒩(p)`$ $`=`$ $`e^{L^2\alpha (p)},`$ (14) $`\alpha (p)`$ $`=`$ $`p\mathrm{ln}2+{\displaystyle \frac{2}{\pi }}{\displaystyle _0^{\pi p/2}}𝑑x\mathrm{ln}(\mathrm{cos}(x)),`$ (15) where $`p=n/L`$ is the fraction of strings (“string density”). Thus every sector with non-zero $`p`$ has an exponentially large number of states. We note that the function $`\alpha (p)`$ is peaked at $`p=2/3`$ and the entropy of this sector, $`S=\alpha (2/3)`$, reproduces the well-known result of Wannier for the zero-temperature entropy of the TIAFM. Thus we have rederived Wannier’s result and also shown that most of the states are in the sector with string density equal to $`2/3`$. ## III Splitting of levels in the presence of a staggered field: The $`J\mathrm{}`$ limit In the presence of a staggered field $`h`$ that is conjugate to one of the ground states of the TIAFM, the macroscopic degeneracy of the ground state is lifted. The field we consider is conjugate to the state corresponding to the standard dimer configuration (Fig. 2). There are two such spin configurations and we choose the one which has all up spins on the first row. Note that in the presence of the field, any two states related by the flipping of all the spins have the same string representation but different energies. To remove this ambiguity, we use an additional label for the string states, which we take as the sign of the first spin in the first row. The spin configuration on any row is then fully specified by the set $`(s,b_1,b_2,\mathrm{}b_n)`$. Let us now look at the effect of the field in splitting the energy levels in each sector. In the zero-string sector there are two states, one corresponding to the ground state and the other, obtained by flipping all spins, to the highest energy state. The lowest energy states in the two-string sector can be generated by starting with the ground-state spin configuration and flipping a line FIG. 5.: A configuration of two strings which corresponds to the lowest energy state in this sector. This configuration is obtained by starting with the ground state and flipping a line of spins (the circled ones). The strings are closely packed and all the spins in the region between them point opposite to the local applied fields. FIG. 6.: A higher energy configuration in the two-string sector. It can be seen that the strings divide the lattice into two domains with the spins in one domain being along the applied field and opposite to it (the circled spins) in the other domain. of spins as shown in Fig. 5. Fig. 6 shows a higher energy two-string state. Note that the strings separate the lattice into two domains, one in which all the spins point along the staggered field directions and another in which they point opposite to the field. This is in general true for any $`n`$-string state where the strings divide the lattice into $`n`$ domains, with spins in alternate domains pointing along and opposite to the staggered fields. The lowest energy configuration in any sector is clearly the state with alternate pairs of strings tightly packed. For the sector in which the string density is $`p`$, the lowest energy per spin is $`e_g(p)=(1p)h,`$ (16) where for the case $`J\mathrm{}`$ being considered here, we have subtracted the infinite constant energy term $`J`$. Because of the conservation of the number of strings across rows, the transfer matrix is block diagonal, each block corresponding to a fixed string sector. In the zero field case the strings are noninteracting and the problem reduced essentially to that of free fermions on a line. In the present case, however, the energy increases when the separation between two strings is increased. In fact it is easy to see that this case reduces to a one-dimensional fermion problem in which every alternate pair of fermions interact with each other via an attractive linear potential. It is then no longer simple to diagonalize the transfer matrix. However, through the following argument we prove the existence of a phase transition and obtain a lower bound for the transition temperature. At zero temperature, the system will be in the ground state in the zero-string sector. As the temperature is increased, the entropic factor associated with the other sectors becomes important and can cause either a gradual or a sharp transition to other sectors. To determine which of the two possibilities actually occurs, we consider the simpler case where the strings do not interact and all configurations belonging to the sector with string density $`p`$ have the same energy $`Ne_g(p)`$ where $`N=L^2`$ is the total number of spins. Since all the states in this sector have energies greater than or equal to $`Ne_g(p)`$ in the interacting model, a sharp transition in the non-interacting case implies a sharp transition in the interacting model. In particular, if the non-interacting model exhibits a transition at temperature $`T_c`$, so that it is frozen in the ground state in the zero-string sector for $`TT_c`$, then the interacting model must also be in the ground state for all $`TT_c`$. In other words, the transition temperature of the non-interacting model provides a lower bound to the transition temperature of the interacting model. The partition function of the non-interacting model may be written as $`Z`$ $`=`$ $`{\displaystyle \underset{p}{}}e^{N\alpha (p)\beta Ne_g(p)}`$ (17) $`=`$ $`e^{N(\alpha (p_m)\beta e_g(p_m))},`$ (18) where $`\beta =1/T`$ and $`p_m`$ is the value of $`p`$ corresponding to the minimum of the function $`f(p)=\alpha (p)+\beta e_g(p)`$. Using Eq. (15) and Eq. (16), we get $`p_m`$ $`=`$ $`0,T<T_c,`$ (19) $`p_m`$ $`=`$ $`{\displaystyle \frac{2}{\pi }}\mathrm{cos}^1({\displaystyle \frac{e^{h/T}}{2}}),T>T_c,`$ (20) with $`T_c=h/\mathrm{ln}(2)`$. Thus, there exists a sharp transition at a finite temperature $`T_c`$, the number of strings being identically zero below this temperature. In Fig. 7, we show the dimensionless free energy function $`f(p)`$ at two different temperatures, one above and one below $`T_c`$. It can be seen that for $`T<T_c`$, the function $`f(p)`$ has its lowest value at $`p=0`$. The minimum of $`f(p)`$ moves FIG. 7.: The dimensionless free energy $`f(p)`$ of the non-interacting model, plotted as a function of the string density $`p`$ at two different temperatures, $`T_1=2.5h`$ which is above $`T_c`$, and $`T_2=h`$, which is below $`T_c`$. continuously away from $`p=0`$ as the temperature is increased above $`T_c`$, approaching $`p=2/3`$ in the $`T\mathrm{}`$ limit. In Fig. 8, we have plotted $`p_m`$, the equilibrium value of the string density obtained from Eq. (20), as a function of $`T/h`$. It is easy to see from Eq. (20) that $`p_m`$ grows as $`(TT_c)^{1/2}`$ as $`T`$ is increased above $`T_c`$. Since the internal energy is proportional to $`p_m`$ in the non-interacting model, the specific heat vanishes identically for $`T<T_c`$ and diverges as $`(TT_c)^{1/2}`$ for $`T`$ approaching $`T_c`$ from above. Thus we get a $`K`$-type transition which is expected because of the equivalence of our system to dimer models. While this proves the existence of a transition in the interacting model too, it is not clear whether the nature of the transition is the same. It is quite possible that the long-range interactions FIG. 8.: The equilibrium string density $`p_m`$ plotted as a function of the temperature $`T`$ (measured in units of $`h`$) for the non-interacting string model. between the strings would result in a transition in a different universality class. This issue is addressed in the next section. It is interesting to compare our model with the model with anisotropic couplings studied by Blöte and Hilhorst . Consider the case when the horizontal couplings have strength $`(J\mathrm{\Delta })`$ and the remaining two are of strength $`J`$. In the limit $`J\mathrm{}`$, we need to consider only the states within $`𝒢`$. In this case too, the ground state lies in the zero-string sector but is two-fold degenerate since the up-down symmetry is retained. The excitations are again in the form of strings but are non-interacting and so equivalent to the excitations in the simplified model considered by us. In fact the expression for the free energy in Eq. (18) follows directly from Eq. (2) in Ref. if we make the identification $`h=2\mathrm{\Delta }`$. ## IV MC simulations and transfer matrix calculations for finite $`J`$ For finite $`J`$, we have carried out MC simulations to determine whether the phase transition persists and its nature if it does. A problem with the simulations is that equilibration times are very long for small values of $`h/J`$ and $`T/J`$. We have tried to overcome this problem by performing simulations with three kinds of dynamics. However, even with the fastest dynamics, we have been able to obtain reliable data only for relatively small system sizes $`(L18)`$. We have also carried out exact numerical evaluation of averages using transfer matrices for small samples. The results obtained from these numerical calculations are described below. ### A Single-spin-flip Metropolis dynamics In Fig. 9 we show the results of a MC simulation using the standard single-spin-flip Metropolis dynamics . We have plotted the staggered magnetization $`m`$ as a function of temperature $`T`$ for a heating run and a cooling run on a $`6\times 6`$ system. The staggered field and the coupling constant are set to $`h=0.05`$ and $`J=1.0`$, respectively (Unless otherwise stated, all the numerical results reported in this section are for $`J=1.0`$). The data shown were obtained by averaging over $`10^6`$ MC steps per spin (MCS). The heating run was started from the ground state in the zero-string sector and the cooling run started from a random spin configuration. It is clear from the data that even for this small system, equilibration is not obtained for temperatures lower than about 0.3. We also examined the states obtained by starting the system FIG. 9.: Results for the staggered magnetization $`m`$, obtained from single-spin-flip MC heating and cooling runs for a $`6\times 6`$ system with $`h=0.05`$ and $`J=1`$. Also shown are the results of exact numerical evaluation of the staggered magnetization for $`J=1`$, and the staggered magnetization in the $`p=2/3`$ sector for $`J\mathrm{}`$. in a random configuration and then quenching it instantaneously to zero temperature. We find that the system then goes to the lowest-energy state in one of the many sectors. For example, in the simulation corresponding to Fig.9, the system reached the zero-string ground state. On heating, the system continues to be in the zero-string sector until at some temperature value it jumps to the high-temperature phase. On the other hand, a slow cooling from the high temperature phase leads to the lowest energy state in the $`p=(2/3)`$ sector and the true zero-string ground state is not reached. These results can be understood as follows. As discussed in the preceding section, the ground state lies in the zero-string sector, and the excitations within $`𝒢`$ from the ground state correspond to the formation of an even number of strings. The single-spin-flip dynamics is reasonably efficient in exploring the states within a sector with a fixed number of strings. However, at low temperatures, it is extremely ineffective in changing the number of strings. In fact, even with zero external field, the single-spin-flip dynamics at zero temperature is non-ergodic and only samples states within a given sector. At finite temperatures, the only way to change the number of strings is through moves which take the system out of $`𝒢`$. These moves cost energy of order $`J`$. At low temperatures, the probability of acceptance of such moves becomes extremely small. Thus in Fig. 9, during the heating run, the system starts from the ground state in the zero-string sector and stays stuck in it till the temperature is sufficiently high. At high temperatures, the $`p=2/3`$ sector is most probable (note that at very high temperatures, the string picture is no longer valid) and during the cooling run, the system starts from this sector and again stays stuck in this sector since the dynamics cannot reduce the number of strings. Thus the cooling curve basically shows equilibrium properties within the $`p=2/3`$ sector. We have verified the above picture by an exact numerical evaluation of the staggered magnetization for a $`6\times 6`$ system. This is done by numerically computing the two sums that occur in the expression $`m={\displaystyle \frac{1}{N}}M={\displaystyle \frac{1}{N}}{\displaystyle \frac{Tr[M(V^{(1)}V^{(2)})^{L/2}]}{Tr[(V^{(1)}V^{(2)})^{L/2}]}},`$ (21) where $`V^{(1),(2)}`$ are the usual row-to-row transfer matrices and $`M`$ is a diagonal matrix corresponding to the staggered magnetization. Similarly one can compute the staggered susceptibility $`\chi `$ defined as $`\chi ={\displaystyle \frac{1}{N}}[M^2M^2].`$ (22) This exact evaluation can, however, be done only for small systems since this procedure involves using very large matrices. For finite $`J`$, we have been able to do this calculation only for $`L6`$. For $`J\mathrm{}`$, the transfer matrices become block diagonal, and this means that one can perform separately the computations in each block which are of smaller size. In this case, we have been able to go up to system size $`L=12`$. Note that in this limit, we can also compute the thermodynamic properties in each sector. In Fig. 9, we have plotted the exact results for $`m`$ obtained from the full partition function with $`J=1`$, as well as the results for $`m`$ in the $`p=2/3`$ sector for infinite $`J`$. It is readily seen that our picture of the system getting stuck in the $`p=2/3`$ sector during the cooling run is correct. The counter-intuitive results of the quenching process can also be understood using the above picture. After the quench, domains of spins pointing in and opposite to the direction of the staggered field begin forming. Only spins on the boundaries of the domains can flip, leading to motion of the domain walls. This motion is biased, favouring the growth of the domains aligned with the staggered field. Now we recall that any non-zero string configuration will have domains of misaligned spins spanning the entire lattice. Clearly it is extremely unlikely that the biased domain growth process will lead to such configurations. We have checked in our simulations that as the system size is increased, the probability of the quench leading to the zero-string sector approaches unity. To further clarify this process, we show in Fig. 10 different stages in the evolution of a $`24\times 24`$ system following a zero-temperature quench from a random initial state. The field is set at the value $`h=0.05`$. It can be seen that the domains of misaligned spins rapidly vanish. On the other hand, in Fig. 11 we show a $`T=0.4`$ equilibrium spin configuration and the result of quenching it to $`T=0`$. In this case the system gets stuck in the $`p=2/3`$ sector. FIG. 10.: Three stages in the evolution of the system, following a zero-temperature quench from a random initial state. The first is the initial configuration and the other two are configurations obtained after $`2`$ and $`4`$ MC sweeps. The dark and bright regions indicate spins pointing along and opposite to the direction of the staggered fields, respectively. ### B String dynamics To speed up the dynamics, it is necessary to be able to efficiently change the number of strings. A straightforward way of doing this is to introduce moves which FIG. 11.: A configuration which is at equilibrium at $`T=0.4`$ and the configuration resulting from quenching it to $`T=0`$. It can be seen that the final configuration consists of tightly bound strings and is the lowest energy state in the $`p=2/3`$ sector. attempt to flip an entire vertical line of spins. Such moves are accepted or rejected according to the usual Metropolis rules. Combining these moves with the single-spin-flip ones makes the dynamics ergodic at zero-temperature in the absence of the field. In Fig. 12, we show the results of simulations with the string dynamics, again for a $`6\times 6`$ system. The values of $`J`$ and $`h`$ are the same as those for the data shown in Fig. 9, and the averaging is over the same number of MCS. The excellent agreement with the exact results shows that equilibration times have been greatly reduced. We have also shown in Fig. 12 simulation results for a $`12\times 12`$ system. Again there is very good agreement with the exact results, which, as noted above, were obtained by setting $`J\mathrm{}`$. To determine the existence of a phase transition, FIG. 12.: Staggered magnetization $`m`$ versus temperature $`T`$ for $`h=0.05`$, $`J=1`$. The data for system sizes $`L=6`$, $`12`$ and $`18`$ were obtained from MC simulations using string dynamics. Exact transfer-matrix results for $`L=6`$ and for $`L=12`$ ($`J\mathrm{}`$) are also shown. we have performed simulations with the above dynamics and studied the dependence of the staggered susceptibility $`\chi `$ on the system size for different values of the field. The results are summarized in Figs. 13, 14 and 15. The data in Fig. 13 correspond to a low field value, $`h=0.05`$. The number of MCS used for computing the averages is $`10^6`$, $`10^7`$ and $`4\times 10^8`$ for the three system sizes, $`L=6`$, $`12`$ and $`18`$, respectively. For system sizes $`L=6`$ and $`L=12`$, we also show the exact transfer-matrix results. Even though the $`L=12`$ transfer matrix results are for $`J\mathrm{}`$, we find very good agreement with the simulation data. This is because excitations out of $`𝒢`$, which involve energies of order $`J`$, are very much suppressed at the low temperatures considered. The $`L=18`$ MC data are not as smooth as the data for smaller sample sizes, indicating that the errors in the calculation of averages are significant in spite of averaging over a very large number of MCS. Thus, even with the string dynamics, we have not been able to attain equilibration for systems with $`L>18`$. FIG. 13.: Staggered susceptibility $`\chi `$ versus temperature $`T`$ for $`h=0.05`$, $`J=1`$. The data for system sizes $`L=6`$, $`12`$ and $`18`$ were obtained from MC simulations using string dynamics. Exact transfer-matrix results for $`L=6`$ and for $`L=12`$ ($`J\mathrm{}`$) are also shown. The close agreement between the MC results for $`J=1`$ and the exact transfer-matrix results for $`J\mathrm{}`$ indicates that the MC results for the system sizes considered are representative of the $`J\mathrm{}`$ limit. In section III, we established the existence of a finite-temperature phase transition in this limit. Our MC results indicate that this transition occurs near $`T2.5h`$, which is substantially higher than the lower bound, $`h/\mathrm{ln}(2)`$, derived in section III. To determine whether this transition is K-type, we have examined the dependence of $`\chi _p`$, the peak value of the staggered susceptibility $`\chi `$, on the system size $`L`$. In the $`J\mathrm{}`$ limit, the staggered susceptibility is proportional to the specific heat which diverges as $`(TT_c)^{1/2}`$ in a K-type transition. This implies that the susceptibility exponent $`\gamma =1/2`$, and the correlation length exponent $`\nu `$ is equal to $`3/4`$. According to standard finite-size scaling , $`\chi _p`$ then should be proportional to $`L^{\gamma /\nu }=L^{2/3}`$. As shown in Fig. 16, our numerical data are in good agreement with this expectation. We, therefore, conclude that our model undergoes a K-type transition in the $`J\mathrm{}`$ limit. In Fig. 14, we show simulation results for an intermediate field value, $`h=0.25`$. In this case, for system sizes $`L=6`$, $`12`$, $`18`$ and $`24`$, equilibrium values were obtained by averaging over $`2\times 10^6`$, $`5\times 10^6`$, $`2\times 10^7`$ and $`5\times 10^7`$ MCS, respectively. As in the $`h=0.05`$ case, the FIG. 14.: Staggered susceptibility $`\chi `$ versus temperature $`T`$ for $`h=0.25`$, $`J=1`$. The data for system sizes $`L=6`$, $`12`$, $`18`$ and $`24`$ were obtained from MC simulations using string dynamics. Exact transfer-matrix results for $`L=6`$ are also shown. FIG. 15.: Staggered susceptibility $`\chi `$ versus temperature $`T`$ for $`h=0.4`$, $`J=1`$. The data for system sizes $`L=6`$, $`12`$, $`18`$ and $`24`$ were obtained from MC simulations using string dynamics. Exact transfer-matrix results for $`L=6`$ are also shown. FIG. 16.: The susceptibility maximum $`\chi _p`$ plotted against the system size $`L`$ for three different values ($`0.05`$, $`0.25`$ and $`0.4`$) of the staggered field $`h`$. The solid lines correspond to the power-law form $`\chi _pL^{2/3}`$. peak of $`\chi `$ occurs near $`T2.5h`$, and the peak value of $`\chi `$ increases as $`L`$ is increased. Finally, in Fig. 15, we have shown the results for a high field value, $`h=0.4`$. In this case, equilibration times are quite small and we can simulate relatively large systems without any difficulty. All the MC data shown in Fig. 15 were obtained with averaging over only $`2\times 10^5`$ MCS. We find that in this case, the staggered susceptibility saturates for $`L12`$, and clearly there is no phase transition. In Fig. 16, we have plotted $`\chi _p`$, the value of the staggered susceptibility at the peak, against the system size $`L`$ for the three different fields. As noted above, we get $`\chi _pL^{2/3}`$ for $`h=0.05`$. For $`h=0.25`$, the values of $`\chi _p`$ for $`L=6`$ and $`L=12`$ are consistent with this power-law form, but the data for higher values of $`L`$ show deviations from this form and signs of saturation. Finally, for $`h=0.4`$, the peak value of $`\chi `$ clearly saturates for $`L12`$. Taken at face value, these results would imply that for $`J=1`$, there is a K-type transition for $`h=0.05`$, but no transition for $`h=0.25`$ and $`h=0.4`$. In other words, there is a phase transition for small $`h`$, which disappears beyond a critical value of the field. This naive interpretation of the data is questionable because a line of continuous phase transitions in the h-T plane is very unlikely to end abruptly at some point . A more plausible interpretation is that the system with finite $`J`$ does not exhibit a true phase transition for any value of the staggered field – the signature of a phase transition found in the scaling behavior of the data for small $`h`$ is a remnant of the transition in the $`J\mathrm{}`$ limit. The behavior of a system with finite $`J`$ would differ from that in the $`J\mathrm{}`$ limit only if the values of the parameters $`J`$, $`T`$ and $`L`$ are such that excitations out of the manifold $`𝒢`$ are not strongly suppressed. Since the typical value of the local field in a configuration in $`𝒢`$ is $`2J`$, the typical energy cost associated with a single-spin-flip excitation out of this manifold is $`4J`$. Since this excitation can occur at any site of the lattice, the free energy cost of such an excitation is approximately given by $`\delta F4J2T\mathrm{ln}L`$. Such excitations are likely to occur if $`\delta F0`$, which corresponds to $`LL_c=e^{2J/T}`$. The values of $`L_c`$ at temperatures near the peak of $`\chi `$ are $`10^7,28`$ and $`7.4`$ for $`h=0.05,0.25`$ and $`0.4`$, respectively. In view of the very large value of $`L_c`$ for $`h=0.05`$, it is not surprising that the MC results for $`m`$ and $`\chi `$ for $`h=0.05`$, $`J=1.0`$, and $`L18`$ are essentially identical to the results for the same value of $`h`$ in the $`J\mathrm{}`$ limit. The power-law scaling of the data for $`\chi _p`$ at $`h=0.05`$ can then be attributed to the occurrence of a phase transition in the $`J\mathrm{}`$ limit. The observation that for $`h=0.25`$, the numerical data for $`\chi _p`$ show deviations from power-law scaling with $`L`$ and signs of saturation for $`L24`$ is also consistent with this interpretation. The small value of $`L_c`$ for $`h=0.4`$ implies that the effects of $`J`$ being finite should be evident even in the small samples we consider. The fact that the data for $`h=0.4`$ clearly indicate the absence of any phase transition is, thus, consistent with the interpretation that there is no phase transition for finite $`J`$. While the scenario described above is consistent with all our numerical data, we can not be absolutely sure that it is correct – data for much larger systems would be needed for a conclusive answer to the question of whether a phase transition occurs for finite $`J`$. We note that even if our interpretation is correct, the behavior of finite samples with finite $`J`$ would look very similar to that near a true phase transition if $`h/J`$ is small. In such cases, the value of $`\chi _p`$ will continue to grow with $`L`$ as a power law until $`L`$ becomes comparable to $`L_c`$, at which point $`\chi _p`$ will saturate. Since $`L_c`$ depends exponentially on $`J/h`$, it would be very large for $`h/J1`$. ### C Cluster dynamics We have also performed simulations using a cluster method. We briefly report our results here. This method was introduced by Kandel et al. for the study of frustrated systems. Recently Zhang and Cheng have applied this algorithm to the zero-field TIAFM. We have modified this algorithm to take into account the presence of the staggered field. The cluster algorithm is usually implemented in two steps. In the first step, one performs a “freeze-delete” operation on the bonds using a fixed set of rules , which results in the formation of independent clusters. The second step consists in flipping these clusters. In our modified algorithm, the first step is unchanged. The freeze-delete operations are exactly as in Ref. and are effected without considering the energy associated with the staggered field. In the second step, we calculate the staggered-field energy of every cluster and then flip it using heat-bath rules. It can be proved that this procedure satisfies the detailed balance condition. The cluster dynamics performs better than the single-spin-flip dynamics and we have been able to obtain equilibrium averages for a $`L=6`$ system ($`J=1`$, $`h=0.05`$) with $`10^6`$ MCS. However, for bigger system sizes ($`L12`$), we have not been able to achieve equilibration even with runs over $`10^8`$ MCS. Thus this dynamics is much slower than the string dynamics. This is due to the following reason. While the cluster dynamics does allow the number of strings to change, the clusters formed at low temperatures are quite large and the probability of flipping them becomes very small. In order to obtain quantitative comparisons of the three different dynamics, we have studied the autocorrelation function, $`C(\tau )={\displaystyle \frac{M(\tau )M(0)M^2}{M^2M^2}},`$ (23) where $`M`$ is the total staggered magnetization and $`\tau `$ is the “time” measured in units of MCS. In Figs. 17 and 18, we plot the results for $`C(\tau )`$ obtained from simulations using different dynamics at two different temperatures. The data correspond to a $`L=6`$ lattice and the averaging was carried out over $`10^7`$ MCS in all the cases. We note that the single-spin-flip dynamics leads to a two-step relaxation – a fast one corresponding to equilibration within a sector and a slower one in which different sectors are sampled. The results shown in these figures also demonstrate the superiority of the string dynamics over the other two methods at both high and low temperatures. FIG. 17.: Autocorrelation function $`C(\tau )`$ of the staggered magnetization, obtained from the three different dynamics at a comparatively high temperature, $`T=0.4`$. The data are for a $`6\times 6`$ sample with $`J=1`$, $`h=0.05`$. The “time” $`\tau `$ is measured in units of MC steps per spin. FIG. 18.: Autocorrelation function $`C(\tau )`$ of the staggered magnetization, obtained from string and cluster dynamics at a low temperature, $`T=0.125`$. The data are for a $`6\times 6`$ sample with $`J=1`$, $`h=0.05`$. The “time” $`\tau `$ is measured in units of MC steps per spin. ## V Summary and Discussion In summary, we have studied the equilibrium properties of a triangular Ising antiferromagnet in the presence of an ordering field which is conjugate to one of the degenerate ground states. We have addressed the question of whether a phase transition can occur in this system. Using a mapping of the TIAFM ground states to dimer coverings, we find that it is possible to obtain a very detailed description of the low-lying energy states. In the limiting case of the coupling constant $`J\mathrm{}`$, we show that the problem reduces to that of a set of non-intersecting strings with long-range interactions. For this limiting case, we prove existence of a transition which appears to be $`K`$-type. For finite $`J`$, we have studied the system using exact numerical evaluation of the staggered magnetization and susceptibility by transfer matrix methods, and also by MC simulations using three different dynamics. We find that the dimer description also helps in understanding the dynamics and in finding methods of improving the efficiency of the MC simulation. A single-spin-flip dynamics is very inefficient in sampling different string sectors and at low temperatures, the system stays stuck within a sector and shows thermodynamic behaviour corresponding to that sector. A cluster dynamics method improves over the single-spin-flip dynamics, but is still very slow at low temperatures. We have developed a dynamics which allows moves that add or remove pairs of strings. As expected, this greatly reduces equilibration times. However, even with this increased efficiency, we have not been able to equilibrate systems with $`L>18`$ in the interesting region of low field values ($`h/J<<1`$). Hence our results on possible phase transitions for finite $`J`$ are inconclusive, although there are indications that a true phase transition does not occur for finite $`J`$. We close with a few comments on possible connections of the system studied here with supercooled liquids near the structural glass transition. The phase transition we found in our model in the $`J\mathrm{}`$ limit is similar in nature to the Gibbs-Di Marzio scenario for the structural glass transition. In the Gibbs-Di Marzio picture, the structural glass transition is supposed to be driven by an “entropy crisis” resulting from a vanishing of the configurational entropy as the transition is approached from the high-temperature side. A similar vanishing of the entropy occurs at the phase transition in our model. It is interesting to note in this context that a “compressible” TIAFM model in which the ground-state degeneracy is lifted by a coupling of the spins with lattice degrees of freedom has been proposed as a simple spin model of glassy behavior. In view of these similarities with the structural glass problem, a detailed study of the dynamic behavior of our model would be very interesting. ## VI Acknowledgements We thank Chinmay Das, Rahul Pandit and B. Sriram Shastry for helpful discussions.
no-problem/9910/astro-ph9910355.html
ar5iv
text
# The Ionized Gas Kinematics of the LMC-type galaxy NGC 1427A in the Fornax Cluster1footnote 11footnote 1Based on data collected at Las Campanas Observatory, Chile, run by the Carnegie Institution of Washington ## 1 Introduction Interactions between galaxies and their environments are thought to be important mechanisms driving galaxy evolution. For example, they have been invoked to explain the excess of blue galaxies in high redshift clusters relative to present-day clusters, the so-called Butcher-Oemler effect (Butcher & Oemler (1978); Gunn (1989); Evrard (1991)). Clusters of galaxies are ideal places to study these interactions, due to their great concentration of galaxies of various morphologies, sizes and luminosities, and huge masses of gas, in a comparatively small volume of space. Among the various kinds of interactions that could be experienced by a cluster galaxy we have: tidal forces from another galaxy or from the cluster as a whole (Byrd & Valtonen (1990); Henriksen & Byrd (1996)), the ram pressure from the passage through the intracluster medium (ICM) (Gunn & Gott (1972); Giovanell & Haynes (1985); Evrard (1991); Phookun & Mundy (1995)), high-speed encounters between galaxies (Moore et al. (1996)), collisions and mergers (Lynds & Toomre (1976); Theys & Spiegel (1977); Barnes & Hernquist (1991)), and the combined action of two or more of these mechanisms (Patterson & Thuan (1992); Lee, Kim, & Geisler 1997). The Fornax cluster is a relatively poor galaxy cluster dominated by early-type galaxies. Compared to Virgo, the center of Fornax is two times denser in number of galaxies, but Virgo as a whole is almost four times richer (Ferguson & Sandage (1988); Hilker (1998)). The hot ICM of Fornax shines in X-rays, as detected by ROSAT and ASCA (Jones et al. (1997); Rangarajan et al. (1995); Ikebe et al. (1996)), and this hot gas extends at least 200 kpc from the center of the cluster. Two giant ellipticals, NGC 1399 (a cD galaxy with an extended halo of about 400 kpc in diameter, and an extraordinarily large population of globular clusters, see Hilker (1998); Grillmair et al. (1999)) and NGC 1404, lie at the center of the cluster. Fornax may be composed of two subclusters in the process of merging, evidenced by the big relative radial velocity between NGC 1399 and NGC 1404 of about 500 km/s (Bureau, Mould, & Staveley-Smith 1996). However, these galaxies are close in space. Distance determinations based on surface brightness fluctuations (Jensen, Tonry, & Luppino 1998) and globular cluster luminosity functions (Richtler et al. (1992); Grillmair et al. (1999)) put them at roughly the same distance. Moreover, the X-ray observations with ROSAT show that the hot corona associated with NGC 1404 is distorted and probably being stripped, indicating an infall of this galaxy towards NGC 1399 and the cluster center (Jones et al. (1997)). NGC 1427A is the brightest irregular (Irr) galaxy in the Fornax cluster, and very similar to the LMC in its morphology and colors (Hilker et al. (1997)). The great majority of the high surface brightness regions that dominate the light of NGC 1427A are aligned along the south-western edge of the galaxy, in a kind of distorted ring (see Fig.1 and Fig.5). Several arguments point towards explaining the appearance of this galaxy in the context of an interaction with its environment. The resemblance to the so-called ring galaxies led Cellone and Forte (1997) to suggest that NGC 1427A is the result of an encounter with a smaller intruder, giving also a candidate for this intruder (the North Object, see Fig. 1 and Fig. 5). NGC 1427A is also very close to the center of the cluster, with a projected distance of 121 kpc to NGC 1399 and 83 kpc to NGC 1404 <sup>2</sup><sup>2</sup>2Throughout this paper we assume a distance to the Fornax cluster of 18.2 Mpc, from Kohle et al. (1996), recalibrated as in Della Valle et al. (1998) using the new distances to Galactic globular clusters from Hipparcos (Gratton et al. (1997))., so tidal forces might be important in the enhancement of the star formation in the galaxy. Finally, NGC 1427A is crossing the ICM of Fornax at a supersonic speed (see Section 4), so the ram pressure exerted by the intracluster gas could also be the cause of the peculiar distribution of star forming regions in the galaxy. Gavazzi et al. (1995) studied three galaxies in the cluster Abell 1367 which, like NGC 1427A, have their bright H ii regions distributed along one edge of their perimeters, which they attribute to the increase of the external pressure as the galaxies cross the ICM. In this paper we present the kinematics of the ionized gas (H ii regions) of NGC 1427A and discuss the obtained velocity field in the context of a normal Irr galaxy versus an interacting galaxy. In Section 2 we describe the observations, reduction of the data and the error analysis. In Section 3 we model the kinematics of the galaxy and analize the results. Section 4 contains the discussion of the possible scenarios for the history of NGC 1427A in the light of our results, and in Section 5 we give our conclusions. ## 2 Observations, Data Reduction and Error Analysis Long-slit spectra of NGC 1427A were obtained during two runs with the 2.5m DuPont telescope at Las Campanas Observatory, Chile, during 1997 February 3-4 and August 9-14. The telescope was equipped with the Modular Spectrograph. The grating used had 600 grooves/mm, and as the detector we used a 2048$`\times `$2048 SITe chip, with a pixel size of 15 $`\mu `$m. This setup gives a dispersion of 1.27 $`\AA `$/pix and a spatial sampling of 0.3625 arcsec/pix. On the February run the measured seeing was about 1 arcsec during the entire night, which corresponds to a linear scale of 88 parsec at the adopted distance to Fornax. For the August run, due to the presence of some clouds, we binned the spatial direction by a factor of 2 in order to get more light, obtaining 0.725 arcsec/pix. The seeing was 1.4 arcsec, resulting in a spatial resolution of 123 parsec. Integration times were of 45 minutes at the slit positions where three 15-minute frames were obtained, and 15 minutes otherwise. The instrumental resolution was derived by measuring the FWHM of several unblended lamp lines after calibrations. For the February run we obtained a mean FWHM of 2.98 $`\AA `$, corresponding to a standard deviation of the Gaussian $`\sigma `$ = 1.27 $`\AA `$ (i.e., 58 km/s at H$`\alpha `$), and a mean FWHM of 4.8 $`\AA `$ for the August run, corresponding to $`\sigma `$ = 2.05 $`\AA `$ (i.e., 93 km/s at H$`\alpha `$). The wavelength range is 4700 $`\AA `$ \- 6850 $`\AA `$ for the February run and 4800 $`\AA `$ \- 6960 $`\AA `$ for the August run. This range includes several emission lines of the ionized gas, namely, H$`\beta `$, \[OIII\], HeI, \[NII\], H$`\alpha `$, and \[SII\] (see Fig. 2). The slit was aligned in order to cover the majority of the bright H ii regions of the galaxy. The positions of the slits are shown in Fig. 1 and were derived by matching coordinate information obtained on the guider screen during the observations with an H$`\alpha `$ image of the galaxy. The coincidence between the spatial profiles along the slits and their inferred positions on the galaxy was almost perfect. The images show the strong emission lines of the H ii regions, but very weak emission coming from the regions between them, so we are mostly restricted to work with the brightest regions of recent star formation. In the majority of the cases three frames were obtained on each position in order to deal with cosmic rays. The data reduction was done using the IRAF<sup>3</sup><sup>3</sup>3IRAF is distributed by NOAO, which is operated by the Association of Universities for Research in Astronomy Inc., under contract with the National Science Foundation. software package. All the images were bias subtracted, flat-fielded using normalized continuum lamps, and then the frames for each slit position were combined to produce the final images. Because some of the H ii regions we observed are very faint, the extraction of their spectra was done with great care. First, we extracted the spectrum of a standard star with a very strong flux and used this image as a reference for the tracing of the spectra of the fainter H ii regions. Finally, for the background subtraction we used samples of sky as close as possible to the H ii regions, fitting the level of this background (night sky plus the background light of NGC 1427A) with a low-order polynomial. The wavelength calibration was done using He-Ne lamps taken just before or just after each exposure. We identified 23 good lines, with which we constructed dispersion solutions with a fifth-order Legendre polynomial, always obtaining a residual RMS of less than 0.1 $`\AA `$. To measure the radial velocities we first fitted the continuum and then subtracted it from every spectrum, so we are finally left with just the emission lines. By far the strongest line in all our spectra is H$`\alpha `$. This, along with the fact that there were far more comparison lines, evenly distributed, on the red side than the blue side of the wavelength range, making the dispersion solution better on the red side, led us to use just the H$`\alpha `$ emission to measure the velocities. Once the continuum was subtracted, the velocities were measured by fitting a Gaussian profile to the line to obtain the center, and with this center we obtained the radial velocity by using the standard Doppler formula. Finally we applied the heliocentric correction to all the H$`\alpha `$ velocities. To estimate the errors in our velocities we extracted various sky spectra from each of the final images, selected ten to twelve night emission lines at various signal-to-noise ratios (defined as S/N = $`f/(f+n(ron)^2+n(sky))^{1/2}`$, where $`f`$ is the flux in electrons contained in the emission line after the continuum was subtracted, $`sky`$ is the continuum level at the emission line in electrons per pixel, $`n`$ is the width of the line at zero intensity in pixels, and $`ron`$ is the readout noise in electrons/ADU), and measured the centers of all these lines following the same procedure as for the H$`\alpha `$ velocities. Then we plotted the difference between each measurement and the average of all the measurements of the same line (which we call the residual) versus S/N. In total we had approximately 1300 data points, which we grouped in bins and plotted. The results are shown in Fig. 3a. To assign the error to a velocity, we measure the S/N of the corresponding H$`\alpha `$ line and interpolate using the diagonal rational function (Press et al. (1992)) at that S/N. The data are given in Table 1, where the coordinates refer to the axes shown in Fig. 1 and Fig. 5, and the origin is not included in the images. We performed another method of error estimation by means of a Monte Carlo simulation. We constructed an artificial spectrum consisting of one perfectly gaussian emission line (i.e., we exactly know its center, width and amplitude) placed in the wavelength region where we observe the H$`\alpha `$ line in our spectra. Then, using IRAF routines, Poisson noise was added randomly to the ‘perfect’ spectrum (using the same gain, 0.8 electrons/ADU, and read-out noise, 3 electrons, as the chip used during the observations), creating one thousand ‘noisy’ spectra. Next, we added to these artificial spectra real sky randomly extracted from the regions of NGC 1427A where no H ii regions are present. Finally, we applied to these semi-artificial spectra the same measuring process as for the H$`\alpha `$ velocities. A histogram of all the measurements fits well with a Gaussian curve (which tells us that the measurement errors are normally distributed, an important point when discussing modeling of the velocity field, see Section 3), whose standard deviation we took as the error estimated for a representative S/N of all the artificial spectra. We automatized this whole procedure and repeated it for many different S/N ratios, obtaining results quite similar to those obtained with the analysis of the skylines. We chose then to adopt the night skylines method as our error estimation. ## 3 Kinematic models. In Fig. 4 we show the measured H$`\alpha `$ heliocentric velocities of the 29 positions over the galaxy for which we measured reliable data. The velocities are plotted as a function of the distance to the axis of rotation, whose position was obtained as we will explain later in this section. On local scales, the data show a state of complex kinematics, with very close points whose velocities do not overlap within their error bars. It is not uncommon for these clumpy Irr galaxies to show disordered patterns in their velocity fields (e.g. Hunter & Gallagher (1986)), but the large scatter of velocities observed in NGC 1427A and the particular characteristics of its environment make us suspicious about treating it as a normal Irr. On a global scale, one can see that there is a rotation present, with an amplitude of about 150 km/s from one side of the galaxy to the other. Most Irr galaxies, unlike spirals (which usually show amplitudes in the rotation speeds of 400 km/s from end to end), are slow rotators, showing near rigid-body behaviour extending over most of their optical dimensions (e.g. Gallagher & Hunter (1984); Hunter & Gallagher (1986)). In NGC 1427A it is clear that the velocity rises from east to west following a roughly linear trend (corresponding to solid-body rotation), $`but`$ $`with`$ $`a`$ $`large`$ $`scatter`$ $`between`$ $`the`$ $`data`$ $`points`$ $`and`$ $`the`$ $`fitted`$ $`line`$ (see Fig.4). It is clear that any smooth, conventional model of rotation curve will not be capable to follow such a large scatter. However, trying to adjust some simple models to the data will uncover overall characteristics and give some insights about the nature of the velocity field of the galaxy. As a first approximation, we tried to fit a rigid-body rotation model, $`v_{l.o.s.}=v_0+(\omega \times 𝐫)(𝐳)=\mathrm{v}_0+\omega _\mathrm{y}\mathrm{x}\omega _\mathrm{x}\mathrm{y},`$ where $`v_0`$ is the recession velocity of the (arbitrary) origin of the x-y coordinates on the plane of the sky (see Table 1), $`(𝐳)`$ is a unit vector along the line of sight, and $`\omega _x`$ and $`\omega _y`$ are the components of the angular velocity vector $`\omega `$ projected on the plane of the sky ($`X`$ being the E-W direction and $`Y`$ the N-S one, see Fig. 4). This model does not yield any information concerning the center of rotation nor an inclination of the disk of the galaxy. A linear least-squares fit gives a best-fit model with $`\chi ^2`$=134, and a reduced $`\chi ^2`$, or $`\chi ^2`$ per degree of freedom, of $`\chi ^2/(NM)`$=5.2, where $`N`$=29 is the number of data points, and $`M`$=3 the number of parameters to adjust. This is a large value for the merit function that is not acceptable in order to adopt the model as a good one. However, the results are still valid as a first approximation to the magnitude and direction of the rotation. The best-fit model parameters obtained were 1.29$`\pm `$0.05 and -12.8$`\pm `$0.1 km/s/kpc for $`\omega _x`$ and $`\omega _y`$ respectively, and one can see that, as expected from simple inspection, the rotation projected on the sky is almost entirely around the N-S axis. These values for the components of the angular velocity vector imply an axis of rotation on the plane of the sky whose direction is inclined $`6^{}`$ counter-clockwise from the vertical direction. The shallow velocity gradient of about 13 km/s/kpc is in agreement with what is observed in most Irrs, having $``$ 5-20 km/s/kpc (Gallagher & Hunter (1984)). Next, we used a model after de Zeeuw and Lynden-Bell (1988), which assumes that the gas lies in a flat disk following circular orbits. The model represents a family of rotation curves, parametrized by $$v_{rot}(r^{})=\frac{Vr^{}}{(r^2+r_0^2)^{p/2}}.$$ Here, $`V`$, $`r_0`$, and $`p`$ are constant parameters, and $`r^{}`$ is the distance from each point to the center of rotation measured on the plane of the galaxy. Note that the solid-body ($`p`$=0), flat ($`p`$=1), Keplerian ($`p`$=3/2), and other models of rotation curves are special cases of this family. To allow for an arbitrary inclination of the disk of the galaxy with respect to the sky, we did the following. First, using the center of rotation $`(x_0,y_0)`$ as origin of coordinates, we rotated the $`X`$ and $`Y`$ axes (i.e., the plane of the sky) by an angle $`\beta `$ around the line of sight $`Z`$, obtaining the system $`X^{\prime \prime }Y^{\prime \prime }Z^{\prime \prime }`$, with $`Z^{\prime \prime }=Z`$. After the fitting, this angle will tell us about the direction of the axis of rotation projected onto the sky. Then we made a second rotation, now around the $`X^{\prime \prime }`$ axis, tilting the $`X^{\prime \prime }Y^{\prime \prime }`$ plane by an angle $`\alpha `$, obtaining the system $`X^{}Y^{}Z^{}`$. The disk lies in the $`X^{}Y^{}`$ plane, and the $`Z^{}`$ axis is parallel to the angular momentum vector of the rotating disk. The angle $`\alpha `$, then, sets the inclination of the galaxy with respect to the plane of the sky. In order to fit this model to our data, we project the velocity of rotation along the line of sight (the $`Z`$ direction), so the equation to fit is $$v_{l.o.s.}=v_0+v_{rot}(r^{})\mathrm{sin}\alpha \mathrm{cos}\theta ^{}.$$ Here $`v_0`$ is the systemic velocity of the galaxy, and $`\theta ^{}`$ is the angle between the position vector $`𝐫^{}`$ and the $`X^{}`$ axis. This equation depends on eight parameters $`(v_0,\beta ,\alpha ,V,x_0,y_0,r_0,p)`$, the majority of them in a nonlinear way. We developed a code that, using the Levenberg-Marquardt method of nonlinear fitting (Press et al. (1992)), returns the values for the parameters that minimize the $`\chi ^2`$ merit function. In terms of the final value of $`\chi ^2`$, the best-fit de Zeeuw $`\&`$ Lynden-Bell model resulted closer to the data than the pure rigid-body rotation, but it is still not a good fit. A careful inspection of each step during the process of iteration to the best-fit model shows that the parameters $`v_0`$, $`\beta `$, $`\alpha `$, $`x_0`$, and $`y_0`$, quickly converge to their final, best-fit values. The best-fit value for the systemic velocity $`v_0`$ is 2039 km/s, in reasonable agreement with the H i systemic velocity measured by Bureau et al. (1996). The best-fit center of rotation, shown as a cross in Fig. 5, is located approximately 12 arcsec to the west of the midpoint between the optical edges of the galaxy. We obtained an angle $`\beta =10^{}`$, counter-clockwise from the N-S direction (see Fig. 5), close to the inclination found using the solid-body model. For the inclination of the disk with respect to the sky, $`\alpha `$, the best-fit value was $`80^{}`$, which would correspond to a disk seen almost edge-on (see Section 4 for the implications of this high inclination). However unexpected, this value of $`\alpha `$ is reached quickly by the algorithm. It does not agree with the inclination reported by Bureau et al. (1996) of $`\alpha =48^{}`$, derived using the photometric axial ratio, a rather arbitrary criterion for a galaxy like NGC 1427A. Having arrived at this point of the fitting procedure, the merit function reaches a flat valley in parameter space, with $`\chi ^2`$ 80, and $`\chi ^2`$ per degree of freedom of $``$ 3.8<sup>4</sup><sup>4</sup>4For comparison, if we fix the value of $`\alpha =48^{}`$ and run the fitting program, we obtain a minimum $`\chi ^2`$=122.. The parameters $`p`$, $`V`$, and $`r_0`$ are degenerate, in the sense that there is no unique set that gives a global minimum of $`\chi ^2`$. It is clear from the expression for $`v_{rot}`$ that, as $`p`$ increases, $`V`$ also has to rise in order to keep $`v_{rot}`$ constant. This is indeed what the fitting algorithm shows. Setting $`p=1`$, we find $`V=75`$ (in km/s only for this value of $`p`$, and the sign indicating the direction of the spin), $`r_0`$=2.7 kpc, and $`\chi ^2`$=83. For $`p=1.2`$, $`V=220`$, and $`r_0=2.9`$ kpc, we have $`\chi ^2=81`$. And finally, for $`p=1.5`$, $`V=1175`$, and $`r_0=3.7`$ kpc, the $`\chi ^2`$=79, almost negligibly better than the model with $`p=1`$. For values of $`p<`$1 the obtained $`\chi ^2`$’s begin to rise quickly. As one can see, it is not possible to distinguish between models with $`1p3/2`$, because the data are scarce at distances from the center where the models begin to differ from each other. This is shown in Fig. 6, where the data have been projected onto the plane of the disk (dividing the corresponding velocities by $`\mathrm{sin}\alpha \mathrm{cos}\theta ^{}`$), and the corresponding error bars rescaled (note that we didn’t take the error in the scaling factor into account). Data points marked as triangles lie very close to the axis of rotation, making the factor $`\mathrm{cos}\theta ^{}`$ very small and uncertain in its sign. This is reflected by meaningless values of $`v_{rot}`$, possibly with the wrong sign, as well as very large (but still underestimated) error bars. We explored the possibility that the models fail to explain the data due to problems with our error estimation. Of course, if we multiply the errors in the measured velocities by a factor of, say, 1.5, then the resulting value of $`\chi ^2`$ would be acceptable. But we are quite confident that our quoted errors are not underestimated, having obtained them by means of two different methods, one of them based on the actual set of data. Also, one could obtain a bad fit, even with the correct model, if the quoted errors were not normally (Gaussian) distributed. The reason for this is that the minimization of the merit function $`\chi ^2`$ $`assumes`$ that the errors are normally distributed (Press et al. (1992)). We tested this possibility by building (for both methods of error estimation) the distribution of the errors and fitting a Gaussian function to them. In both cases the agreement was very good, as seen in Fig. 3b for the method using skylines, so we reject this possibility. All the previous attempts to fit a model to the data and the above discussion about the distribution of the errors was regarding the estimated errors in the velocities. So far we have assumed that we know $`exactly`$ the positions $`(x,y)`$ of the regions whose spectra we have. So, in order to quantify the changes in the fitting results when some uncertainty in the coordinates is introduced, we performed the following exercise. We took the coordinates $`(x,y)`$ of all the data points and changed randomly their values around the original ones, after which we adjusted the rigid-body model to the “new” data set. We estimate a “real” uncertainty in the coordinates to be no more than 2 arcsec in the worst of the cases, so the changes introduced in the coordinates were randomly distributed between plus or minus 2 arcsec. Repeating the procedure two or three thousand times, always in a random way, we found that $`\omega _x`$, $`\omega _y`$, and $`\chi ^2`$ never change by a large amount. Doubling the uncertainty to 4 arcsec does not make any difference, so we conclude that there is no need to worry about uncertainties in the coordinates. Finally, since we are interested in relative velocities, eventual systematic errors should not affect our results as long as they affect all velocities in the same way. ## 4 Discussion ### 4.1 Kinematics We have presented the velocities of the ionized gas from many of the brightest H ii regions in NGC 1427A, and modeled them to derive the basic properties of its dynamics. Using two different models for the kinematics we found the major axis of rotation, with both solutions in reasonable agreement. The simplest model, a global rigid-body rotation plus a random component on small scales (responsible for the poor fit), seems to be a good approximation to the data (Fig. 4), and is in concordance with what is observed in most Irr’s. The radial velocities of points in the North Object match well with this model (see Fig. 4), which suggests that it is part of the galaxy, as the rest of the H ii regions. However, if we want information about the center of rotation and the inclination of the galaxy, we need a more elaborate model. Our solution using the de Zeeuw $`\&`$ Lynden-Bell model is better than the solid-body one in terms of the merit function $`\chi ^2`$ but, here again, the random component dominates the appearance of the rotation curve (Fig. 6). The puzzling feature of this solution is the remarkably high inclination ($`80^{}`$) returned by the fit, which does not depend on whether we use the points in the North Object in the fitting procedure. Assuming that the North Object is part of NGC 1427A and that it lies in the same disk as the rest of the H ii regions, this inclination would place it at a distance of about 30 kpc from the fitted (and optical) center of NGC 1427A, which is difficult to believe. If we lower the angle of inclination until the one derived using the photometric axial ratio (Bureau et al. (1996)) this problem is softened, with the North Object at 8.2 kpc, but with a $`\chi ^2`$ 50% higher than before. So, we are inclined to place the North Object outside the disk of NGC 1427A. We estimated the probability of the chance coincidence that the North Object being an independent cluster member with its velocity in the same range as those of the H ii regions of NGC 1427A (1950-2100 km/s). Assuming for the cluster galaxies a Gaussian radial velocity distribution (which is the case when the three dimensional distribution is Maxwellian) centered at NGC 1399 (1430 km/s) and with a dispersion of 325 km/s (Bureau et al. (1996)), we obtain a probability of 3.5% of a chance coincidence. This low probability, the North Object lying outside the plane of the galaxy, and the coincidence in the radial velocities would indicate that it is a separate object but gravitationally bound to NGC 1427A, probably a small satellite orbiting the galaxy. Based on the previous results, we estimated the dynamical mass and other related quantities for NGC 1427A. Taking the angular velocity obtained from the solid-body fit and assuming a spherical mass distribution, the total mass inside a radius of 6.2 kpc (the size of the major axis at the 24.7 mag/arcsec<sup>2</sup> isophote in V) is $`M_{dyn}`$ (9 $`\pm `$ 3)x$`10^9M_{\mathrm{}}`$ <sup>5</sup><sup>5</sup>5The uncertainty in the total mass is almost entirely due to the uncertainty in the size of NGC 1427A, which is a combination of uncertainties in the angular size of the galaxy and the distance to Fornax.. This is a lower limit for the total mass inside this radius because of the unknown component of the angular velocity along the line of sight. However, if the inclination of the disk is really as high as $`80^{}`$, then this unknown component will not be very relevant, and the quoted value for $`M_{dyn}`$ will be close to the actual one. The mass in the form of neutral hydrogen can be obtained from the integrated H i flux (Bureau et al. (1996)) and the adopted distance to Fornax, using the formula of Roberts (1975). With this, the H i mass turns out to be $`M_{H\mathrm{i}}`$ = (1.8 $`\pm `$ 0.3)x$`10^9M_{\mathrm{}}`$, so the fraction of the total mass in the form of neutral hydrogen is approximately 0.2, twice the value for the LMC (based on the total mass from Kunkel et al. (1997) and the H i flux from Huchtmeier & Richter (1988)). Finally, from the magnitudes given by Hilker et al. (1997), the mass-to-light ratios for NGC 1427A are $`M/L_B`$ 3.9, and $`M/L_V`$ 4.8, in units of solar masses per solar luminosities in the corresponding band. As a comparison, the LMC has a mass-to-light ratio of $`2.9M_{\mathrm{}}/L_{\mathrm{}}`$ in the B band (from the magnitudes given by de Vaucouleurs et al. (1991) and the total mass of Kunkel et al. (1997)). The values obtained for the total mass of NGC 1427A, the fraction of H i in it, as well as the mass-to-light ratio, all are in good agreement with typical values for the latest galaxy types, as summarized by Roberts & Haynes (1994). The random behaviour on small scales is not difficult to understand. Since the aperture sizes of our spectra vary between 4 and 12 arcsec, corresponding to spatial extensions in the range 0.3 - 1 kpc on the galaxy, our velocities are actually averages taken over structures and regions of various sizes. On these scales it is very common to find in these galaxies structures such as shells and supershells, large-scale filaments of ionized gas, as well as a non-negligible component of diffuse ionized gas (Hunter & Gallagher (1986); Martin (1997), 1998). All these structures reflect the strong impact that massive stars have on their surroundings, injecting large amounts of energy via stellar winds and supernova shocks. Hints of two supershells can be seen in the optical images of NGC 1427A, with diameters of 0.7 and 1.1 kpc (see Fig. 7), apparently emerging from the largest of the high surface brightness features. These structures seem to be primarily photoionized, despite their location very far from the nearest star associations (Hunter & Gallagher (1997); Martin (1998)), and show expansion velocities between 20 and 60 km/s, sometimes going up to 100 km/s (see Fig. 3 in Martin (1998)). The filled circles in the rotation curve of Fig. 4 correspond to the brightest H ii regions seen in Fig. 5, and one can see that they are closer to the solid-body line than the blank circles, which correspond to diffuse ionized gas some distance away from the bright H ii regions. This diffuse gas should be more subject to the effects of expanding shells and filaments, and this could be the reason why they depart from the overall rotation. The largest discrepancies in our data are between 40 and 70 km/s, so it is very likely that some of them are due to the strong influence of very massive stars on the ISM. Furthermore, part of the diffuse gas may not be in the disk of the galaxy, but instead it could have been transported into the halo by some mechanism (see, e.g., Dahlem, Dettmar, & Hummel (1994) for ionized gas away from the disk in NGC 891, and also Bomans, Chu, & Hopp (1997) for gas outflows from intense star forming regions in NGC 4449), where it would not necessarily corotate with the disk. Nevertheless, considering only the bright H ii regions does not improve the fits (the dispersion is smaller, but so are the error bars). Therefore, some physical mechanism (winds, turbulence, …) must still be involved to explain the $``$ 10-15 km/s discrepancies. ### 4.2 Interaction with the cluster environment Hilker et al (1997) and Cellone $`\&`$ Forte (1997) already suggested, based on morphological reasons and colours of the H ii regions, that the appearance of NGC 1427A is due to an interaction with the Fornax Cluster environment. This possibility is very likely given the location of NGC 1427A near the center of the cluster. Based on the obvious alignment of the bright giant H ii regions along a half ring at the south western part of the galaxy and the colors of the only two bright knots at the extreme north (“the North Object”), Cellone $`\&`$ Forte suggested that this could be the encounter between two different objects, the North Object being one of the many dwarf ellipticals that populate the center of the Fornax Cluster. As we said before, assuming a solid-body rotation, the velocities for the North Object fall well into the general kinematical pattern, which would indicate, with high probability, that it is just another part of NGC 1427A, not an intruder galaxy. On the other hand, if we take the de Zeeuw & Lynden-Bell model as the valid one, then we would have to accept that the North Object is not in the same disk as the rest of the H ii regions, possibly being a satellite galaxy of NGC 1427A. The proximity of the two giant ellipticals of the cluster, NGC 1399 and NGC 1404, suggests that NGC 1427A might be experiencing strong tidal forces. Tidal interaction is also a proposed mechanism for triggering star formation, but it seems unlikely that this could produce the ring-like pattern of star forming regions along one edge of the galaxy. Tides are known to produce thin low surface brightness filaments that stretch out from interacting galaxies (Gregg & West (1998)). A search for tails at this low surface brightness would be possible with the use of wide-field imaging plus relatively large pixel sizes (in order to collect more light at the expense of resolution). We argue here that the most likely scenario to explain the morphological and kinematical features of NGC 1427A is its passage through the hot ICM of Fornax. When a galaxy crosses the ICM of a cluster at a supersonic speed, a shock front will appear before the galaxy. This will abruptly raise the temperature and density of the ICM gas that goes through it, and so, behind the shock, the galaxy will be exposed to the action of a high thermal pressure plus the ram pressure that the shocked intracluster gas exerts upon it. Given the small sound speeds in the interstellar gas, it is very likely that another shock will form, now inside the galaxy. If the shocked interstellar gas has a cooling time<sup>6</sup><sup>6</sup>6The cooling time is estimated by $`t_{cool}(3/2)kT/n\mathrm{\Lambda }`$, where $`\mathrm{\Lambda }`$ is the volume emissivity of the gas divided by the electron density and proton density (the cooling coefficient). We adopt the cooling curve of Gehrels and Williams (1993). much shorter than the time needed by the shock wave to cross the medium, it will cool very rapidly, with the subsequent condensation that pressure equilibrium requires. In this way, dense shells of cold material follow immediately behind this $`\mathrm{`}isothermal`$ $`shock`$’ (also called a radiative shock). Molecular clouds are formed when the column density of these cold clouds exceeds the threshold at which UV dissociation is truncated (Franco & Cox (1986)), and when parts of these dense shells are fragmented and become gravitationally unstable (see Elmegreen & Elmegreen (1978)) new stars are formed. This is how regions of active star formation may align around the edges of gas-rich cluster galaxies, as in the galaxies observed by Gavazzi et al. (1995). NGC 1427A is at a projected distance of 120 kpc from NGC 1399 and moving at a relative radial velocity $`V_r`$ 600 km/s (Bureau et al. (1996)), so it will be in contact with the densest parts of the ICM during $`t_{ICM}`$ 2x$`10^8`$ years, a time long enough to allow shocks propagate into the ISM and trigger new star formation. Note that, since NGC 1427A is a gas-rich galaxy, it is probably crossing the Fornax ICM for the first time. The X-ray emitting plasma in Fornax has a temperature of 1.3x$`10^7`$ K (Rangarajan et al. (1995)), and a density of $`10^3`$ cm<sup>-3</sup> at the distance of NGC 1427A (Ikebe et al. (1996)). The adiabatic sound speed in a completely ionized medium with temperature $`T`$ is $`c_s`$ $``$ 0.15 $`T^{1/2}`$ km/s <sup>7</sup><sup>7</sup>7We assume a gas with primordial abundances (90% hydrogen and 10% helium in number) so, for complete ionization the mean molecular weight is 0.59, and with just singly ionized helium it would be 0.61. , which for the ICM in Fornax gives $`c_{ICM}`$ 500 km/s. If we assume that this hot intracluster gas moves with NGC 1399 (around which it appears to be centered, see Fig.1 in Jones et al. (1997)), then the passage of NGC 1427A across the ICM is supersonic, with an approximate Mach number $`M`$ 1.2 (a lower bound, since we only know one component of the relative velocity). A weak adiabatic shock will be leading the way of NGC 1427A through the ICM, slightly raising the temperature and density of the gas that crosses it. The ISM in gas-rich galaxies is extremely complex, with the thermodynamic properties of the different phases varying rapidly from place to place and also in time (see, e.g., Kulkarni & Heiles (1988) for a discussion of the Milky Way’s ISM; and also McKee & Ostriker (1977)). We will discuss the situation for two representative states of the ISM: a hypothetical hot ionized halo, and a warm neutral hydrogen disk. In order to keep the halo in hydrostatic equilibrium in the galaxy’s potential well as revealed by its rotation curve, the required temperature of this hypothetical gas is $``$ 2x10<sup>5</sup> K. At this temperature the sound speed is $`c_{halo}`$ 70 km/s. Taking the observed mean value for the pressure of the Milky Way’s ISM of $`<P_{ISM}>`$ $`3000`$ cm<sup>-3</sup>K (Kulkarni & Heiles (1988)), we would have a halo density of 1.5x$`10^2`$ cm<sup>-3</sup>. Note that, for these conditions, the cooling time is $``$ 6x$`10^5`$ years, so constant energy input is required to keep the gas at this temperature. Assuming that most of the incident momentum from the ICM is transferred to the galaxy, we obtain $`v_{ISM}v_{halo}(\rho _{ICM}/\rho _{halo})^{1/2}v_{ICM}`$ 150 km/s. Then, there would be a shock with $`M`$ 2. Applying the Rankine-Hugoniot jump conditions (Landau & Lifshitz (1979)) we obtain behind this shock a temperature of $``$ 4x$`10^5`$ K and a density of $``$ 3.5x$`10^2`$ cm<sup>-3</sup>. With these values, the cooling time for the shocked gas in the halo would be slightly $`larger`$ than the cooling time before the shock appeared. For our H i phase, we may take an original temperature of $`10^4`$ K <sup>8</sup><sup>8</sup>8This temperature is at the higher end of the observed range for this gas phase in the Galaxy, but we adopt it because at lower temperatures the cooling function is uncertain due to the varying degree of ionization. However, the conclusions will be the same as long as, below 10000 K, the slope of the cooling curve remains positive.. Then the sound speed in this medium will be $`c_{H\mathrm{i}}`$ 10 km/s (here the mean molecular weight is 1.23 if everything is neutral), and using $`<P_{ISM}>`$ the density would be 0.3 cm<sup>-3</sup>. Again, the cooling time is short, so constant energy input is required. With these values, the velocity of the shock within the H i medium turns out to be $`v_{ISM}v_{H\mathrm{i}}`$ 30 km/s, and now we have a shock with $`M`$ 3. Using the jump conditions we have that behind the shock the temperature of the H i is 4x$`10^4`$ K and its density 1 cm<sup>-3</sup>. The cooling time of the shocked H i would be $`t_{coolH\mathrm{i}}`$ 3000 years, more than $`20`$ $`times`$ $`shorter`$ than the cooling time for the unperturbed H i. The reason for this is that in the range of temperatures for the H i phase the cooling function has a positive slope, while in the range of temperatures of the halo gas this slope in negative (see Fig.1 in Gehrels & Williams (1993)). The halo shock takes $``$ 4x10<sup>7</sup> years to fully cross a spherical halo with a radius of 6 kpc, while the shock in the neutral phase would need 3x10<sup>7</sup> years to move just 1 kpc. In both media, the cooling time is much shorter than the shock crossing time, so we could regard them as isothermal shocks. However, in the halo (where the cooling time scales before and after the shock are of the same order), this consideration does not apply if the agents that originally kept the gas at its equilibrium temperature are still present regardless of the shock, so the gas would be unable to cool. If this is not the case, all the halo gas accumulated behind the shock will cool and eventually be detected as H i. In the H i disk, the swept up gas behind the shock will surely cool rapidly, form molecular clouds, and trigger bursts of new star formation. The rotation rate of $``$ 13 km/s/kpc (a lower bound, since we do not know the component of the angular velocity along the line of sight) corresponds to a rotation period of $`T`$ 4.5x$`10^8`$ years, comparable to the crossing time, $`t_{ICM}`$, and much longer than the lifetimes of normal H ii regions, $`t_{H\mathrm{ii}}`$ 10-15 Myr (given by the lifetimes of the very massive stars whose ionizing fluxes generated them in the first place). Thus, it is not surprising that these star forming complexes are only found along one side of the galaxy, which would have to be the side directly exposed to the shocked ICM. This explains the bow-shock appearance of the south-western edge of NGC 1427A, since the H ii regions formed at the interacting side do not last long enough to reach the other side, following the rotation of the galaxy. The same scenario was proposed by de Boer et al (1998) for the interaction between the LMC and the hot Milky Way halo, giving as evidence for it the existence of a gradient in the ages of the peripheral young star clusters of the LMC in the direction expected from the relative motion between both galaxies. To obtain this kind of evidence is obviously not possible in the case of NGC 1427A because we can not resolve the young star clusters behind the H ii regions at this distance. ## 5 Conclusions We have obtained the ionized gas kinematics of NGC 1427A by means of long slit spectroscopy of the brightest H ii regions. The velocity field follows, on average, solid body rotation over the whole optical dimensions. Looking closer, however, there are large discrepancies in some data points, most of them associated with the diffuse component of the ionized gas in regions far away from the center of rotation. We modeled the kinematics using two models of rotation, both assuming circular orbits in a flat disk. There is agreement between both models regarding the inclination of the axis of rotation, which is near the N-S direction. The rigid-body fit gives an angular velocity of 13 km/s/kpc, which is consistent with what is observed in this type of galaxies. The de Zeeuw and Lynden-Bell model fits the data better than the simpler solid-body but yields an unexpectedly high inclination ($`80^{}`$) of the disk of the galaxy. Both models give large values for the merit function $`\chi ^2`$ because the set of velocities shows a random component that is important on small scales. This behaviour alone does not provide evidence for an interaction with the cluster environment, and may be explained by the impact that massive stars has on the ISM in Irr galaxies. We reject the scenario in which NGC 1427A is the result of a collision with a smaller member of the cluster, because the only candidate intruder, the North Object, has a radial velocity which is nicely coincident with the general velocity pattern. However, if the inclination of the disk derived from the de Zeeuw and Lynden-Bell model is adopted, we can not place the North Object in the same disk as the rest of the H ii regions. Instead, it would turn out to be a small satellite of NGC 1427A. Several properties of NGC 1427A and its environment strongly suggest that this galaxy is interacting with the hot gas that pervades the cluster center, and we are inclined to favor this scenario. We have given quantitative estimates (although some of the numbers we used are just reasonable guesses) in order to show how the bow-shock alignment of the recent star formation in NGC 1427A is very likely due to the ram pressure from the ICM of Fornax as the galaxy crosses it. Further evidence for this scenario will have to wait for more detailed kinematics, such as interferometric Fabry-Perot imaging and good resolution stellar spectra. Then it will be possible to compare the kinematics of the gas component with that of the stars, which may be very different in the ram pressure scenario. Also, high resolution mapping in H i should show signs of this interaction, such as stripped gas and sudden truncation and asymmetries in the distribution of the neutral gas, as observed in the Virgo Cluster (Cayatte et al. (1990)) and even in groups of galaxies (Davis et al. (1997)). $`Acknowledgements`$ We thank Bill Kunkel for allowing us to use part of his observing time, and without him this work wouldn’t have started. We also thank Guillermo Tenorio-Tagle for invaluable discussions and insights; María Teresa Ruiz and Michael Hilker for their interest and help in the continuation of this work; and Roberto Terlevich for useful comments. We thank to Fondecyt Chile for support through “Proyecto FONDECyT 8970009”.
no-problem/9910/astro-ph9910285.html
ar5iv
text
# Extended High-Ionization Nuclear Emission-Line Region in the Seyfert Galaxy NGC 4051 ## 1. INTRODUCTION It is known that Seyfert galaxies often show very high-ionization emission lines such as \[Fe VII\] $`\lambda 6087`$, \[Fe X\] $`\lambda 6374`$, \[Fe XI\] $`\lambda 7892`$ and \[Fe XIV\] $`\lambda 5303`$ (Oke & Sargent 1968; Grandi 1978; Penston et al. 1984; De Robertis & Osterbrock 1986). Because the ionization potentials of these lines are higher than 100 eV, much attention has been paid to the high-ionization nuclear emission-line region \[HINER; Murayama, Taniguchi, & Iwasawa 1998 (hereafter MTI98); see also Binette 1985\]. The possible mechanisms of radiating such high-ionization emission lines are the following three processes: (1) collisional ionization in the gas with temperatures of T<sub>e</sub> $``$ 10<sup>6</sup> K (Oke & Sargent 1968; Nussbaumer & Osterbrock 1970); (2) photoionization by the central nonthermal continuum emission \[Osterbrock 1969; Nussbaumer & Osterbrock 1970; Grandi 1978; Korista & Ferland 1989; Ferguson, Korista, & Ferland 1997b; Murayama & Taniguchi 1998a, 1998b (hereafter MT98a and MT98b, respectively)\]; and (3) a combination of shocks and photoionization (Viegas-Aldrovandi & Contini 1989). Recently, in context of the locally-optimally emitting cloud models (LOC models; Ferguson et al. 1997a), Ferguson, Korista, & Ferland (1997b) showed that the high-ionization emission lines can be radiated in conditions of widely range of gas densities. More recently MT98a have found that type 1 Seyfert nuclei (S1s) have excess \[Fe VII\] $`\lambda `$6087 emission with respect to type 2s (S2s). Given the current unified model of AGN (Antonucci & Miller 1985; see for a review Antonucci 1993), the finding of MT98a implies that the HINER traced by the \[Fe VII\] $`\lambda `$6087 emission resides in the inner wall of such dusty tori. Since the covering factor of the torus is usually large (e.g., $`0.9`$), and the electron density in the tori (e.g., $`10^{7\text{}8}`$ cm<sup>-3</sup>) is considered to be significantly higher than that (e.g., $`10^{3\text{}4}`$ cm<sup>-3</sup>) of the narrow-line region (NLR), the contribution from the torus dominates the emission of the higher-ionization lines (Pier & Voit 1995). Taking this HINER component into account, MT98b have constructed new dual-component (i.e., a typical NLR with a HINER torus) photoionization models and explained the observations consistently. On the other hand, it is also known that some Seyfert nuclei have an extended HINER whose size amounts up to $``$ 1 kpc (Golev et al. 1995; MTI98). The presence of such extended HINERs can be explained as the result of very low-density conditions in the interstellar medium ($`n_\mathrm{H}1`$ cm<sup>-3</sup>) makes it possible to achieve higher ionization conditions (Korista & Ferland 1989). Thus MT98a suggested a three-component model for the spatial distribution of HINER in terms of photoionization. That is: (1) the inner wall of the dusty torus with electron densities of $`n_\mathrm{e}10^{6\text{}7}`$ cm<sup>-3</sup>; the torus HINER \[Pier & Voit 1995; Murayama & Taniguchi 1998b\], (2) the innermost part of the NLRs; the NLR HINER ($`n_\mathrm{e}10^{3\text{}4}`$ cm<sup>-3</sup>) at a distance from $``$10 to $``$100 pc, and (3) the extended ionized region ($`n_\mathrm{e}10^{0\text{}1}`$ cm<sup>-3</sup>) at a distance $``$1 kpc; the extended HINER (Korista & Ferland 1989; MTI98). Perhaps the relative contribution to the HINER emission from the above three components may be different from galaxy to galaxy. In particular, extended HINERs have been found only in NGC 3516 (Golev et al. 1995) and Tololo 0109–383 (MTI 98) and thus it is important to investigate how common the extended HINER in Seyfert galaxies. In this paper, we report on the discovery of an extended HINER in the nearby Seyfert galaxy NGC 4051. This observation was made during the course of our long-slit optical spectroscopy program for a sample of nearby Seyfert galaxies at the Okayama Astrophysical Observatory. Throughout this paper, we use a distance toward NGC 4051 of 9.7 Mpc, which is estimated using a value of H<sub>0</sub> = 75 km s<sup>-1</sup> Mpc<sup>-1</sup> and its recession velocity of 726 km s<sup>-1</sup> (Ulvestad & Wilson 1984). Therefore, 1″ corresponds to 47 pc at this distance. ## 2. OBSERVATIONS The spectroscopic observations were made at Okayama Astrophysical Observatory, National Astronomical Observatory of Japan on 1992 June 5. The New Cassegrain Spectrograph was attached to the Cassegrain focus of the 188 cm reflector. A 512 $`\times `$ 512 CCD with pixel size of 24 $`\times `$ 24 $`\mu `$m was used, giving a spatial resolution of 1$`\stackrel{}{\mathrm{.}}`$46 pixel<sup>-1</sup> by 1 $`\times `$ 2 binning. A 1$`\stackrel{}{\mathrm{.}}`$8 slit with a length of 300″ was used with a grating of 150 groove mm<sup>-1</sup> blazed at 5000 Å. The position angle was set to 90°. The wavelength coverage was set to 4500 – 7000 Å. We took three spectra; (1) the central region, (2) 2″ north of the central region, and (3) 2″ south of the central region. Each exposure time was 1200 seconds, respectively. The slit positions for NGC 4051 are displayed in Figure 1. The data was reduced with the use of IRAF. The reduction was made with a standard procedure; bias subtraction and flat fielding were made with the data of the dome flats. The flux scale was calibrated by using a standard star (BD+33 2642). The nuclear spectrum was extracted with 2$`\stackrel{}{\mathrm{.}}`$92 aperture. The seeing size derived from the spatial profile of the standard star was about 2$`\stackrel{}{\mathrm{.}}`$3 (FWHM) during the observations. ## 3. OBSERVATIONAL RESULTS ### 3.1. Emission-Line Properties of the Nuclear Spectrum The spectrum of the nuclear region (the central 2$`\stackrel{}{\mathrm{.}}`$9 $`\times `$ 1$`\stackrel{}{\mathrm{.}}`$8 region) is shown in Figure 2. In order to estimate emission-line fluxes, we made multicomponent Gaussian fitting for the spectrum using the SNG (the SpectroNebularGraph; Kosugi et al. 1995) package. The identified emission lines of the nuclear region are summarized in Table 1. The \[Fe X\] $`\lambda `$6374 emission line is blended with \[O I\] $`\lambda `$6364. Assuming the theoretical ratio of \[O I\] $`\lambda `$6300/\[O I\] $`\lambda `$6364 = 3 (Osterbrock 1989), we measured the \[Fe X\] $`\lambda `$6374 flux. The reddening was estimated by using the Balmer decrement (i.e., the ratio of narrow components of H$`\alpha `$ and H$`\beta `$). If the case B would be assumed, an intrinsic value of H$`\alpha `$/H$`\beta `$ ratio was 2.87 for $`T`$ = 10<sup>4</sup> K (Osterbrock 1989). However, Veilleux & Osterbrock (1987) mentioned that the harder photoionizing spectrum of AGNs results in a large transition zone or partly ionized region in which collisional excitation becomes important (Ferland & Netzer 1983; Halpern and Steiner 1983). The main effect of the collisional excitation is to enhance H$`\alpha `$. Therefore we adopt H$`\alpha `$/H$`\beta =`$ 3.1 for the intrinsic ratio, and accordingly we obtain $`A_V`$ = 1.00 mag. This value is almost consistent with the previous estimation ($`A_V`$ = 1.11 mag: Erkens et al. 1997). In our observation, \[Fe X\] $`\lambda `$6374 (ionized potential 233.6 eV) is stronger than \[Fe VII\] $`\lambda `$6087 (99.1 eV). This observational result is inconsistent with the predictions of simple one-zone photoionization models (see section 4.). In Table 2, we give a comparison between our observational data and the previous ones (Anderson 1970; Grandi 1978; Yee 1980; Penston et al. 1984; Veilleux 1988; Erkens, Appenzeller, & Wagner 1997). Although Erkens et al. (1997) gave \[Fe VII\] $`\lambda `$6087/\[Fe X\] $`\lambda `$6375 = 0.966 in their paper, they newly reduced their data and found the true observed line ratio is 0.500 (Wagner & Appenzeller 1999, private communication). Though \[Fe VII\] $`\lambda `$6087/\[Fe X\] $`\lambda `$6374 in Veilleux (1988) is significantly larger than that in ours, our ratio is consistent with those in Penston et al. (1984) and Erkens et al. (1997). Although we do not understand fully the significant difference between Veilleux (1988) and the other observations, it may be due partly to the difference of slit width or aperture size among the observations. NGC 4051 is one of the well-known Seyfert galaxies (Seyfert 1943). It has been mostly classified as a type 1 Seyfert (Adams 1977), while Boller, Brandt, & Fink (1996) and Komossa & Fink (1997) pointed out that the observational properties of NGC4051 are similar to those of narrow-line Seyfert 1 galaxies (NLS1; Osterbrock & Dahari 1983; Osterbrock & Pogge 1985). Though our observational data show the broad component of H$`\alpha `$ clearly, the broad component of H$`\beta `$ is not detected. The results of deconvolution for H$`\alpha `$ and H$`\beta `$ are shown in Figure 3 and 4, respectively. ### 3.2. Spatial Distribution of the Emission-Line Region In Table 3a – 3d, we give the emission-line properties of the off-nuclear regions; west (2$`\stackrel{}{\mathrm{.}}`$9 west), southeast (1$`\stackrel{}{\mathrm{.}}`$5 south 2″west), southwest (1$`\stackrel{}{\mathrm{.}}`$5 south 2″east), and east (2$`\stackrel{}{\mathrm{.}}`$9 east). Since the flux of \[O I\] $`\lambda `$6300 in these areas is not measured because of the insufficient S/N, we do not subtract the flux of \[O I\] $`\lambda `$6364 from that of \[Fe X\] $`\lambda `$6374. Though we measured the fluxes of emission lines of northeast, those data are not tabulated because we could not detect the H$`\beta `$ unambiguously. The S/N of the northwest position is so poor that we did not measure the fluxes of emission lines. Figure 5 shows that the HINER traced by \[Fe X\] $`\lambda `$6374 is extended westward up to 3″ ($``$ 150 pc). This is more extended than the NLR traced by \[O I\] $`\lambda `$6300. Since, as shown in Figure 6, there is no strong line of sky emission at the observed wavelength of \[Fe X\], the extended \[Fe X\] appears to be real. Figure 5 also shows that the HINER may be extended southwestward. However this may be due to the contamination from the nuclear region, suggested by the relatively broad width of H$`\beta `$ at the southwest position. Following Veilleux & Osterbrock (1987), we investigate the excitation conditions of the emission-line region in each position. As shown in Figure 7, we find that the regions where \[Fe X\] is absent exhibit AGN-like excitations, whereas the regions where \[Fe X\] is found show H II region-like excitations (except for the southeast region where the line ratios show H II region-like excitation though \[Fe X\] is not detected). It is unlikely that the \[Fe X\] emission arises from H II regions. Therefore the observed H II region-like excitations are due not to photoionization by massive stars but to some additional mechanism. We will discuss this complex property in section 5. ### 3.3. A Summary of the Observational Results As noted in section 1, there are three kinds of the HINER; 1) the torus HINER, 2) the NLR HINER, and 3) the extended HINER (see MT98a). Our detection of the extended \[Fe X\] emitting region tells us that the extended HINER exists at least in NGC 4051. Here we estimate how strong the contribution from the torus HINER using the dual component photoionization model of MT98b. According to the diagnostic diagram of their model (Figure 2 in MT98b), we find that the torus HINER may contribute to the total intensity of the HINER emission less than 3%. On the other hand, if \[Fe X\] in the nuclear region was mainly attributed to the NLR HINER, this line would have a larger FWHM in the nuclear region than in off-nuclear regions because the flux contribution of the NLR HINER may be negligibly small in the off-nuclear regions. Since we could not find the difference of FWHM of \[Fe X\] between the nuclear region and west, the NLR HINER is not a dominant source in NGC 4051. It is therefore suggested that the majority of the HINER emission in NGC 4051 arises from low-density ISM within a radius of $``$ 150 pc. ## 4. PHOTOIONIZATION MODEL In order to understand the nuclear environment of NGC 4051, we use photoionization models and compare the predictions of the models with the observed emission-line ratios of the nuclear region of NGC 4051. The simplest model for the NLRs of Seyfert galaxies is a so-called one-zone model, which assumes optically thick clouds of single density and single distance from the source of the ionization radiation (e.g. Ferland & Netzer 1983, Stasinska 1984). However, these models have been known to predict too weak high-ionization emission lines such as \[Fe VII\] $`\lambda 6087`$ and \[Fe X\] $`\lambda 6374`$, and moreover, predict less intense \[Fe X\] $`\lambda `$6374 than \[Fe VII\] $`\lambda `$6087. Because these model predictions appear inconsistent with observations, one-zone models are not suitable to investigate the environment of the nuclear region of NGC 4051. Hence, it is better to use more realistic models, for example, the optically-thin, multi-cloud model (Ferland & Osterbrock 1986) or the LOC models (Ferguson et al. 1997a). The predicted emission-line flux calculated with these models are shown in Table 4. The former model predicts \[Fe VII\] $`\lambda `$6087/\[Fe X\] $`\lambda `$6374 $``$ 10, that is inconsistent with the result of observations of NGC 4051. The LOC model also predicts \[Fe X\] $`<`$ \[Fe VII\]. Taking these points into accounts, we construct a two-component photoionization model below. ### 4.1. A Two-Component Photoionization Model We construct a two-component system for the nuclear emission-line region of NGC 4051. One component is optically thick, ionization-bounded clouds (IB clouds). This component emits low-ionization emission lines mainly, like clouds in typical NLRs. The other is optically thin, low-density, matter-bounded clouds (MB clouds; Viegas-Aldrovandi 1988; Viegas-Aldrovandi & Gruenwald 1988; Binette, Wilson, & Storchi-Bergmann 1996; Wilson, Binette, & Storchi-Bergmann 1997; Binette et al. 1997) which radiates high-ionization emission lines selectively. These MB clouds are expected to emit more intense \[Fe X\] $`\lambda `$6374 than \[Fe VII\] $`\lambda `$6087 because their densities are assumed to be low enough to achieve very high-ionization conditions (section 4.2). Ionization and thermal equilibrium calculations have been performed with the photoionization code CLOUDY (version 90.04; Ferland 1996) to calculate the emission from plane-parallel, fixed hydrogen density clouds. Taking into account many lines of evidence in favor of a nitrogen overabundance (Storchi-Bergmann & Pastoriza 1990; Storchi-Bergmann 1991; Storchi-Bergmann et al. 1998), we adopt twice the solar nitrogen abundance. Namely, all elements have solar values except for nitrogen. The detection of strong \[Fe X\] suggests that most of iron remains in gas phase although the depletion of iron would be more serious than that of other elements (e.g., Phillips, Gondhalekar, & Pettini 1982). Therefore internal dust grains in the NLR are not taken into account in our calculations. The shape of the ionizing continuum from the central engine is $$f_\nu =\nu ^{\alpha _{\mathrm{uv}}}\mathrm{exp}(\frac{h\nu }{kT_{\mathrm{BB}}})\mathrm{exp}(\frac{kT_{\mathrm{IR}}}{h\nu })+a\nu ^{\alpha _\mathrm{x}}.$$ (1) We adopt the following parameters; (1) $`kT_{\mathrm{IR}}`$ is the infrared cutoff of the so-called big blue bump component and we adopt $`kT_{\mathrm{IR}}`$ = 0.01 Ryd; (2) $`T_{\mathrm{BB}}`$ is the temperature which parameterize the big blue bump continuum, and we adopt a typical value, $`1.5\times 10^5`$ K; (3) $`\alpha _{\mathrm{uv}}`$ is the slope of the low energy big blue bump component. We adopt $`\alpha _{\mathrm{uv}}=0.5`$. Note that the photoionization is not sensitive to this parameter. And, (4) $`\alpha _\mathrm{x}`$ is the slope of the X-Ray component, and we adopt $`\alpha _\mathrm{x}=1.0`$. This power law component is not extrapolated below 1.36 eV or above 100 keV. Below 1.36 eV, this term is set to zero while above 100 keV, the continuum is assumed to fall off as $`\nu ^3`$. Finally, (5) the UV to X-Ray spectral slope, $`\alpha _{\mathrm{ox}}`$, is defined as $$\alpha _{\mathrm{ox}}\frac{\mathrm{log}[F_\nu (2\mathrm{k}\mathrm{e}\mathrm{V})/F_\nu (2500\mathrm{\AA })]}{\mathrm{log}[\nu (2\mathrm{k}\mathrm{e}\mathrm{V})/\nu (2500\mathrm{\AA })]},$$ (2) which is a free parameter related to the parameter a in equation (1). We adopt $`\alpha _{\mathrm{ox}}=1.4`$. The observational values of these parameters for NGC 4051 are summarized in Table 5. The calculations for IB clouds are proceeded until the electron temperature drop below 3000 K, since the gas with lower temperature than 3000 K is not thought to contribute significantly to the emission lines. The calculations for MB clouds are proceeded till the column density of the MB clouds reach to a value given as a free parameter. ### 4.2. Results First, we discuss the physical conditions of the IB clouds. Assuming that the low-ionization forbidden lines are radiated mainly from the IB clouds, we estimate the hydrogen density of the IB cloud $`n_{\mathrm{IB}}`$ = 10<sup>2.9</sup> cm<sup>-3</sup>, which derived from the observed \[S II\] doublet ratio, \[S II\] $`\lambda `$6716/\[S II\] $`\lambda `$6731 = 0.934 (see Osterbrock 1989). Similarly, assuming that the MB clouds contribute to the flux of the low-ionization lines very little, we search an ionization parameter for the IB clouds, $`U_{\mathrm{IB}}=Q(\mathrm{H})/(4\pi R^2N_{\mathrm{H},\mathrm{IB}}c)`$ (the ratio of the ionizing photon density to the Hydrogen density) using CLOUDY. Emission-line ratios of \[O I\] $`\lambda `$6300/\[O III\] $`\lambda `$5007, \[N II\] $`\lambda `$6583/\[O III\] $`\lambda `$5007 and \[S II\] $`\lambda \lambda `$6717,6731/\[O III\] $`\lambda `$5007 are calculated for various values of $`U_{\mathrm{IB}}`$, and compared with the observed values. As shown in Figure 8, the comparisons for the individual line ratios do not give a certain value of $`U_{\mathrm{IB}}`$. Therefore, we use the \[S II\] $`\lambda \lambda `$6717,6731/\[O III\] $`\lambda `$5007 ratio to determine the hydrogen density of the IB clouds, and then we derive $`U_{\mathrm{IB}}`$ = 10<sup>-2.9</sup> accordingly. Second, we estimate most probable values of the parameters for MB clouds. When $`U_{\mathrm{MB}}`$ 10<sup>-0.4</sup>, the calculated flux of \[Fe X\] $`\lambda `$6374 is smaller than that of \[Fe XI\] $`\lambda `$7892; and when $`U_{\mathrm{MB}}`$ 10<sup>-0.6</sup>, \[O III\] $`\lambda `$5007 begins to emit from MB clouds. Because these conditions are not suitable to explain the observations, we adopt $`U_{\mathrm{MB}}=10^{0.5}`$. When the hydrogen column density of MB clouds $`N_{\mathrm{MB}}>10^{21}`$ cm<sup>-2</sup>, \[O III\] $`\lambda `$5007 also begins to radiate from the MB clouds (see Figure 9). Therefore we examine two cases for $`N_{\mathrm{MB}}=10^{20.5}`$ cm<sup>-2</sup> and for 10<sup>21.0</sup> cm<sup>-2</sup>. Assuming the size of HINER $`D_{\mathrm{HINER}}=150\mathrm{p}\mathrm{c}=4.63\times 10^{20}`$ cm, we obtain $`n_{\mathrm{MB}}10^{0.17}`$ cm<sup>-3</sup> for $`N_{\mathrm{MB}}=10^{20.5}`$ cm<sup>-2</sup> and $`n_{\mathrm{MB}}10^{0.33}`$ cm<sup>-3</sup> for $`N_{\mathrm{MB}}=10^{21}`$ cm<sup>-2</sup> because $`n_{\mathrm{MB}}N_{\mathrm{MB}}/D_{\mathrm{HINER}}`$. Since the former density is too low to produce sufficiently strong emission, we adopt the latter case, that is, $`n_{\mathrm{MB}}=10^{0.33}`$ cm<sup>-3</sup> and $`N_{\mathrm{MB}}=10^{21}`$ cm<sup>-2</sup>. In Table 6, we give the emission line fluxes normalized by H$`\beta `$ (narrow component) for the IB and MB clouds described above. It seems reasonable that the nuclear emission-line region of NGC 4051 is a mixture of both IB and MB clouds. In order to reproduce the observed \[Fe X\]/H$`\beta `$ ratio, we find that the relative contribution of the MB clouds is 5.3% in the H$`\beta `$ luminosity. We compare the total calculated line ratios with the observed values in Table 7. We find that \[O III\], \[N II\] and \[S II\] are two or three times stronger than the observational values although high-ionization lines are consistent with the observation. This discrepancy can be reconciled if there is another emission component which radiates hydrogen recombination lines mainly. Hereafter we call this “contamination component”. Possible contamination sources are either BLR or nuclear star forming regions or both (section 5). Note that this contamination component must not destroy the line ratios such as $`\lambda \lambda `$6717,6731/\[O III\] $`\lambda `$5007, \[S II\] $`\lambda `$6716/\[S II\] $`\lambda `$6731 and so on. Flux contributions of the IB clouds, the MB clouds, and the contamination component to Balmer lines are treated as free parameters a and b; a is the H$`\beta `$ flux ratio between the MB clouds and the IB clouds and b is that between the contamination component and the IB clouds. We can find a probable set of the line ratios which is consistent with the observation. Here we assume a ratio of H$`\alpha `$/H$`\beta `$ for the contamination component to be 3.1. Because \[Fe X\] is assumed to emit from the MB clouds, we obtain a relation: $$(\frac{[\mathrm{Fe}\mathrm{X}]}{\mathrm{H}\beta })_{\mathrm{obs}}=\frac{a\times (\frac{[\mathrm{Fe}\mathrm{X}]}{\mathrm{H}\beta })_{\mathrm{MB}}}{1+a+b}.$$ (3) Since we can regard \[O III\] $`\lambda `$5007 as a representative low-ionization emission line, we obtain another relation: $$(\frac{[\mathrm{O}\mathrm{III}]}{\mathrm{H}\beta })_{\mathrm{obs}}=\frac{(\frac{[\mathrm{O}\mathrm{III}]}{\mathrm{H}\beta })_{\mathrm{IB}}}{1+a+b}.$$ (4) In Table 6, we give (\[Fe X\]/$`\mathrm{H}\beta `$)<sub>obs</sub>, (\[Fe X\]/$`\mathrm{H}\beta `$)<sub>MB</sub>, (\[O III\]/$`\mathrm{H}\beta `$)<sub>obs</sub>, and (\[O III\]/$`\mathrm{H}\beta `$)<sub>IB</sub>. Using these relations, we find a = 0.161 and b = 1.868. These results mean that the contributions of the IB clouds, the MB clouds and the contamination component are 33.0%, 5.3% and 61.7% in the H$`\beta `$ luminosity, respectively. We give a summary of the set of the line ratios in Table 8. Though we do not observe \[Fe XI\] $`\lambda `$7892, previous observations (Penston et al. 1984 and Erkens et al. 1997) show \[Fe XI\] $`\lambda `$7892/\[Fe X\] $`\lambda `$6374 = 0.324 or 0.514. Our calculated \[Fe XI\] $`\lambda `$7892/\[Fe X\] $`\lambda `$6374 is 0.674, and this is not so different from the previous observations. On the other hand, the observed \[O I\] $`\lambda `$6300 is four times stronger than the model value. One reason for this may be that \[O I\] arises partly from other regions that we do not take into account in our model. Finally, in Table 9, we give a summary of the three emission components adopted for the nuclear emission-line region of NGC 4051. These parameters are determined uniquely in the process described above. However, there may be other models which explain the observed line ratios of NGC 4051. Recently, Contini & Viegas (1999) proposed a multi-cloud model in which the existence of shocks is introduced for NGC 4051. Their model explains the optical line ratios and the continuum SED, although they did not mention the spatial extension of ionized regions. In order to discriminate which model is more plausible, further detailed observations will be necessary. ## 5. DISCUSSION As we have shown in previous sections, the observed emission-line ratios of the nuclear region of NGC4051 are consistently understood by introducing the three emission components; 1) the ionization-bounded clouds, 2) matter-bounded clouds, and 3) the contamination component to the Balmer emission lines. Although our three-component model appears consistent with the observation, our result implies that the majority of Balmer emission ($``$ 60%) arises from the contamination component. Now we consider the problem; What is the contamination component ? First, we consider this problem for the nuclear region. Possible candidates of the contamination components are either the BLR or nuclear star-forming regions or both. If NGC 4051 belongs to a class of NLS1s (Boller et al. 1996; Komossa & Fink 1997), it seems hard to measure the contribution from the BLR to the H$`\beta `$ emission because of the narrow with of the broad line if present. It is also possible to consider that NGC 4051 experiences a burst of massive star formation in its nuclear region because there is a lot of cold molecular gas as well as circumnuclear star-forming regions in NGC 4051 (Kohno 1997; Vila-Vilaró, Taniguchi, & Nakai 1998). Peterson, Crenshaw, & Meyers (1985) reported that H$`\beta `$ of NGC 4051 exhibits time variability (enhanced by 85% in H$`\beta `$ flux) on time scales shorter than $``$2 years, that is, H$`\beta `$ of NGC4051 contains the broad component more or less. Although we have no way to evaluate the contribution of this BLR contamination to the total flux quantitatively, it is possible that all of the contamination component is contributed from the BLR. In addition, the nuclear star-formation may contribute to the contamination. Kohno (1997) discussed the gravitational instability of the nuclear molecular gas of some Seyfert galaxies using the Toomre’s Q-value. The Toomre’s Q parameter characterizes the criterion for local stability in thin isothermal disks and is expressed as Q = $`\mathrm{\Sigma }_{\mathrm{crit}}/\mathrm{\Sigma }_{\mathrm{gas}}`$, where $`\mathrm{\Sigma }_{\mathrm{crit}}`$ is the critical surface density. He gave Q = 0.90 for the nuclear region of NGC4051. This means that the molecular gas in the nuclear region of NGC4051 is thought to be gravitationally unstable. In any case, about 60% of observed H$`\beta `$ is not originated from the NLR in the nuclear region of NGC 4051. This means that the line ratios of the nuclear region suffer seriously from the contamination. In Figure 10, we replot the excitation diagnostic diagram using the line ratios, from which the contamination component is subtracted. This diagram shows that the contamination-subtracted line ratios of nuclear region show the typical AGN-like excitation condition. Therefore we conclude that the unusual excitation condition is due to the contamination component. High-spatial resolution optical spectroscopy or X-ray imaging observations will be helpful in investigating whether or not the star-formation activity dominates the flux of H$`\beta `$. Second, we consider the off-nuclear regions. As shown in Figure 7, the three off-nuclear regions (west, southwest, and southeast) also show H II region-like excitations. Since a typical size of the BLR is $``$ 0.01 pc (e.g. Peterson 1997), it is likely that these excitation conditions are thought to be due to circumnuclear star-forming regions. We would like to thank the staff of Okayama Astrophysical Observatory. We wish to thank Immo Appenzeller and Stefan Wagner for the use of their spectroscopic data of NGC4051 and for useful advice. We thank Kotaro Kohno for the use of the data of his radio observations of NGC4051. We also thank Youichi Ohyama, Naohisa Anabuki and Shingo Nishiura for much discussion and comments. T.M. is supported by a Research Fellowship from the Japan Society for the Promotion of Science for Young Scientists. This work was financially supported in part by Grant-in-Aids for the Scientific Research (Nos. 10044052, and 10304013) of the Japanese Ministry of Education, Culture, Sports, and Science.
no-problem/9910/cond-mat9910085.html
ar5iv
text
# 1 Introduction ## 1 Introduction The segregation of granular materials is an effect of eminent importance for industrial operations and has been a subject of research for decades. The behavior of powders within the industrial environment, e.g. silos, hoppers, conveyor belts or chutes, displays interesting effects - one of them is size segregation. Segregation or the mixing properties of granular media are not yet completely understood and thus cannot be controlled under all circumstances. For a review of experimental techniques, theoretical approaches, and numerical simulations see Refs. and the references therein. A lot of effort has been invested in the understanding of size-segregation (see this proceedings for a recent overview of the state of the art). It turns out that segregation can be driven by geometric effects, shear, percolation and also by a convective motion of the small particles in the system . Segregation due to convection, in rather dilute, more dynamic systems appears to be orders of magnitude faster than segregation due to purely geometrical effects in dense, quasi-static situations . However, there are still a lot of open questions which are subject to current research on model granular media . Most of the segregation phenomena are obtained in the presence of gradients in density or temperature. Here we isolate the latter case, i.e. we examine the segregation of two species of grains in the presence of local heat reservoirs, but in the absence of external forces like e.g. gravity. ## 2 The inelastic hard sphere model In this study, we use the standard interaction model for the instantaneous collisions of particles with radii $`a_k`$, and mass $`m_k`$, with the subscripts $`k=S`$ or $`L`$, for small and large particles, respectively. This model accounts for dissipation, using the restitution coefficient $`r`$, and is introduced and discussed in more detail in Refs. . The post-collisional velocities $`𝐯^{}`$ are given in terms of the pre-collisional velocities $`𝐯`$ by $`𝐯_{1,2}^{}`$ $`=`$ $`𝐯_{1,2}{\displaystyle \frac{(1+r)m_{\mathrm{red}}}{m_{1,2}}}𝐯_n,`$ (1) with $`𝐯_n\left[(𝐯_1𝐯_2)𝐯_n\right]\widehat{}n`$, the normal component of $`𝐯_1𝐯_2`$, parallel to $`\widehat{}n`$, the unit vector pointing along the line connecting the centers of the colliding particles. The reduced mass is here $`m_{\mathrm{red}}=m_1m_2/(m_1+m_2)`$. If two particles collide, their velocities are changed according to Eq. (1). If a particle $`i`$ crosses a line of fixed temperature $`T_j`$, its velocity is changed in magnitude, but not in direction, according to the rule $$𝐯_i^{}=\pm v^{T_j}\frac{𝐯_i}{|𝐯_i|},$$ (2) with the random thermal velocity $`\pm v^{T_j}`$ drawn from a Maxwellian velocity distribution, and with random sign. (If the ‘$`+`$’ sign is used in Eq. (2), a large net mass flux occurs when a cluster of particles crosses a line of fixed temperature. If the ‘$``$’ sign is used the two subsystems have conserved particle numbers, but are still coupled via collisions across the boundaries.) In 2D one has $`v^{T_j}=\sqrt{v_x^2+v_y^2}`$, where $`v_x`$ and $`v_y`$ are the components of the thermal velocity vector, each distributed according to a Gaussian distribution. Eq. (2) is also applied to particles after a collision, if the particle’s center of mass is closer than $`a_L`$ to one of the heat reservoirs $`j`$. In order to obtain the two random velocities $`v_x`$ and $`v_y`$, two random numbers $`r_1`$ and $`r_2`$, homogeneously distributed in the interval $`[0:1]`$ are used. With the desired typical thermal velocity $`\overline{v}=\sqrt{2T_j/m_i}`$, one has $$v_x=\sqrt{\overline{v}^2\mathrm{ln}r_2}\mathrm{cos}(2\pi r_1),\mathrm{and}$$ (3) $$v_y=\sqrt{\overline{v}^2\mathrm{ln}r_2}\mathrm{sin}(2\pi r_1),$$ (4) using the method by Box and Muller, as described in Ref. . If the velocity of the particle would be set to $`𝐯^{}=(v_x,v_y)`$, artificial peaks in the density at the positions $`Z_1`$ and $`Z_2`$ would be observed. This artefact was the reason to choose the above described way of thermal coupling. Note, that our coupling to a reservoir does not guarantee a certain temperature in a small volume around the reservoir . It rather adjusts the velocity of all particles that came along the reservoir and touched it with their center of mass. Those particles which approach the reservoir usually have a lower temperature and thus reduce the mean. As $`r`$ decreases, the reduction increases. Our choice of thermal coupling is arbitrary, however, the discussion of thermostating is far from the scope of the present paper – therefore, we restrict ourselves to the method introduced here. Our method of imposing a temperature gradient does not have any simple physical analog, but it does allow us to isolate the effects of the temperature gradient. More physically feasible energy sources, such as vibrating walls perturb the motion more than the method used here. ## 3 The event-driven simulation method For the simulation of the hard spheres, we use the event-driven algorithm, originally introduced by Lubachevsky , and applied to the simulation of granular media e.g. in Refs. . In these simulations, the particles follow an undisturbed translational motion until an event occurs. An event is either the collision of two particles, the crossing of one particle with the boundary of a cell (in the linked-cell structure which is used for algorithmic optimization only), or the crossing of one of the lines with fixed temperature. A particle-particle collision is treated as described in the previous section, a cell-boundary crossing has no effect on the particle-motion, and the crossing of a fixed temperature location leads to a change of velocity according to Eq. (2). Only the particle(s) involved in the last event is (are) treated and their next event is computed. In the next step (which does not imply a fixed step in time), the next event out of all possible events is treated. ## 4 Boundary conditions and system parameters The simulations presented here (typical situations are displayed in Fig. 1) were performed with $`N=1296`$ particles in a two-dimensional (2D) box of size $`l=l_x=l_y=0.04`$ m. The box has periodic boundaries, i.e. a particle that leaves the volume at the bottom (left), immediately enters it at the top (right) and vice-versa. Two particle types ($`S`$, $`L`$) are used with $`a_S=a=2.5\times 10^4`$ m and $`a_L=2a=5\times 10^4`$ m. In the system with dimensionless size $`L=l/a=80`$, $`N_S=1176`$ particles of the small species and $`N_L=120`$ particles of the large species coexist. Note that we used a rather arbitrary choice for the calculation of the particle mass (see above) so that $`m_L=8m_S`$ for the size ratio used here. The fraction of the area which is occupied by the particles, i.e. the total volume fraction, is $`\nu =\pi (N_Sa_S^2+N_La_L^2)/l^20.2`$. The hot and cold temperature reservoirs are situated at the vertical positions $`Z_2=z_2/a=15`$ and $`Z_1=z_1/a=55`$, respectively, and reach over the whole width of the system. The temperatures of the reservoirs are $`T_1`$ and $`T_2`$ – but since no external body force like gravity is involved the behavior of the system does not depend on the absolute value of $`T_1`$ or $`T_2`$; only the ratio $`T_2/T_1`$ is important, and it is varied in the range from $`T_2/T_1=1`$ to 9 in the following. In Fig. 1 snapshots of simulations with $`T_2/T_1=1`$ and different values of $`r`$ are plotted. For small $`r`$, clustering is observed and the averaging over horizontal slices (data are presented in Fig. 2) becomes questionable. ## 5 Results A quantitative measure of segregation is the partial density of the different species. We present $`\rho _k=n_k/N_k`$, the particle number density $`n_k(z)`$ weighed by the number of particles $`N_k`$ of species $`k`$ in Fig. 2. The strength of segregation increases with decreasing $`r`$, only for small $`r=0.60`$, segregation is rather weak due to clustering. Fluctuations in density are correlated to the heat reservoirs: Close to a “hot” region in the vicinity of a energy source, the density is lower than in the “cold” regions in between. Note that the large particles segregate, while the small particles make up a “background fluid” with a comparatively small density variation, if dissipation is weak. Thus, all particles prefer regions of low temperature, but the large ones are attracted by the cold regions more strongly. Now, the restitution coefficient is fixed to $`r=0.99`$ and the temperature ratio $`T_2/T_1`$ is modified. In Fig. 3 the situation is presented for different ratios $`T_2/T_1=1`$, 2, 3, 5, and 9. The large particles segregate from the “background fluid” made up by the small particles and the quality of segregation increases with the magnitude of the temperature gradient. For $`T_2>T_1`$ the large (and heavier) particles can be found close to the colder heat reservoir, as different to the situation discussed above. When heavier particles have the same temperature as light ones, their mean velocity is reduced, so that they can not diffuse away from the cold heat reservoir. ## 6 Summary and Conclusion In summary, we presented simulations of different size particles in the presence of a temperature gradient. If the temperature gradient is due to the dissipative nature of the material, the large particles move towards the cold regions, as far away from an energy source as possible. If the gradient in temperature is externally imposed, most of the large particles prefer to move towards the colder heat reservoir. When dissipation is strong enough, clustering is observed in the locally driven system. Further possible studies involve the prediction of the density and temperature profiles with kinetic theory – first for the monodisperse and later also for the polydisperse case. ## Acknowledgements We gratefully acknowledge the support of IUTAM, the National Science Foundation, the Department of Energy, the Office of Basic Energy Sciences, the Geosciences Research Program, the Deutsche Forschungsgemeinschaft (DFG), and the Alexander-von-Humboldt foundation.
no-problem/9910/astro-ph9910508.html
ar5iv
text
# RXTE Monitoring of LMC X-3: Recurrent Hard States ## Introduction Long-term X-ray variability on timescales of months to years is seen in many galactic black hole candidates. In analogy to the 35 d cycle of Her X-1, this variability has been identified with the precession of a warped accretion disk in some objects. Possible driving mechanisms include radiation pressure (Maloney, Begelman & Nowak, 1998; Wijers & Pringle, 1999) or accretion disk winds (Schandl, 1996). In this paper we present a spectral and temporal analysis of the long term variability of the canonical soft state black hole candidate LMC X-3. Together with LMC X-1, this source is the only persistent black hole candidate which so far has only been observed in the soft state. While LMC X-1 does not exhibit any long term variability, LMC X-3 was known to be variable on a $``$100 d timescale (Cowley et al., 1991, 1994). Detailed results of our campaign are presented elsewhere (Wilms et al., 1999b). ## Long Term Variability Our analysis of the long-term RXTE All Sky Monitor light curve (Fig. 1, top) indicates a complex long term behavior. Analysis with the Lomb (1976)-Scargle (1982) Periodogram indicates that the variation is dominated by epochs of low luminosity, which are recurring on the $``$100 d timescale found previously (Cowley et al., 1994). In addition, a long term periodicity is apparent in the data. Contrary to the 100 d timescale, the long term periodicity is not stable: Depending on what time interval of the ASM light curve is studied, the long term period varies between 200 and 300 d. This periodicity is caused by the times of average to high luminosity seen in the light curve and manifests itself by a broad peak at $`250`$ d in the Lomb Scargle PSD. ## Spectral Variability We have analyzed the RXTE data using the newest RXTE ftools, as well as XSPEC, Version 10.00ab. The spectral model used for the data analysis was the standard multi-temperature disk blackbody (Mitsuda et al., 1984; Makishima et al., 1986), plus a power-law component. Adding a Gaussian iron line resulted in upper limits for the line equivalent width only. Typical reduced $`\chi ^2`$ values were $`\chi _{\mathrm{red}}^2<2.5`$ for 41 degrees of freedom, with the residuals being fully consistent with the uncertainty of the detector calibration (Wilms et al., 1999a, b). In Fig. 1 we present the variation of the spectral parameters found during the analysis as a function of time. During episodes of high ASM flux, the source behaves like any other source in the classical soft state: The accretion disk temperature, $`kT_{\mathrm{in}}`$ varies freely to accommodate the variable luminosity of the source, while the the normalization of the multi-temperature disk black body is constant. At the same time, the photon index $`\mathrm{\Gamma }`$ varies independently of $`kT_{\mathrm{in}}`$. See, e.g., Tanaka & Lewin (1995) for similar examples in other soft state black hole candidates. On the other hand, for times of low ASM count rate, the disk temperature decreases to $`kT_{\mathrm{in}}1`$ keV from its usual value of $`1`$ keV, while at the same time the photon index changes dramatically from $`4`$ to $`1.7`$. We interpret these changes as evidence for transitions to the hard state in LMC X-3. ## Hard States in LMC X-3 Fig. 2 displays the spectral evolution of LMC X-3 from 1998 June through August. In Obs28 the source had the lowest flux of all monitoring observations. No evidence for a soft spectral component is present in the data, the spectrum is consistent with a pure power-law spectrum with photon-index 1.8. After Obs28, the soft component slowly emerged until the standard soft state spectrum is reached. ## Discussion and Conclusions We have presented the results from the first two years of our RXTE campaign on LMC X-3. This is the first campaign where a systematic study of a soft state black hole candidate with monthly coverage was possible (earlier campaigns, such as those of Cowley et al., 1991; Ebisawa et al., 1993, suffered from the inflexible pointing constraints of the earlier satellites). We have found that the long-term luminosity variations are due to changes in the spectral shape of the source, for large luminosity these changes are due to a variation of the characteristic disk temperature, $`kT_{\mathrm{in}}`$, while for small luminosity the source undergoes a spectral hardening. We have presented the first clear case for a soft to hard state transition in this canonical soft state black hole candidate. Our results are a challenge to models in which the long term variability of sources such as LMC X-3 is explained in the context of warped accretion disk models. In these models, no clear spectral evolution with source intensity is expected, with the exception of possible changes in $`N_\mathrm{H}`$ due to covering effects. In black hole candidates such as Cyg X-1, the hard- to soft-state transitions are attributed to changes in the accretion disk geometry, e.g., the (non-) existence of a hot and Comptonizing electron cloud in the center of the source. These changes are typically attributed to a varying mass accretion rate, $`\dot{M}`$. Our result makes such a geometry also probable for LMC X-3. A possible cause for the quasi-periodicity of the soft to hard transitions, therefore, might be periodic changes in $`\dot{M}`$. ### Acknowledgements We thank the RXTE schedulers for their patience in scheduling a total of up to now 40 observations on LMC X-3 and 32 observations on LMC X-1 during the course of the past three years. We also thank those who are responsible for the almost non-existent observing constraints of RXTE for providing us with the ability to perform campaigns such as this one. The attendance of JW at the Compton symposium was made possible by a travel grant from the Deutsche Forschungsgemeinschaft.
no-problem/9910/hep-ex9910030.html
ar5iv
text
# Search for the Weak Decay of a Lightly Bound 𝐻⁰ Dibaryon ## Abstract We present results of a search for a neutral, six-quark, dibaryon state called the $`H^0`$, a state predicted to exist in several theoretical models. Observation of such a state would signal the discovery of a new form of hadronic matter. Analyzing data collected by experiment E799-II, using the KTeV detector at Fermilab, we searched for the decay $`H^0\mathrm{\Lambda }p\pi ^{}`$ and found no candidate events. We exclude the region of lightly bound mass states just below the $`\mathrm{\Lambda }\mathrm{\Lambda }`$ mass threshold, $`2.194\mathrm{GeV}/\mathrm{c}^2<M_H<2.231`$ $`\mathrm{GeV}/\mathrm{c}^2`$, with lifetimes from $``$$`5\times 10^{10}`$ sec to $``$$`1\times 10^3`$ sec. In 1977, Jaffe proposed the existence of a metastable dibaryon, the $`H^0`$($`hexa`$-quark), a bound six-quark state ($`B=2`$, $`S=2`$), described as $`H^0=|uuddss`$. If it exists, this hadron would be a new form of matter. The observation of a bound dibaryon would enhance the understanding of strong interactions and would aid in the search for additional exotic-multiquark states . The two-flavor six-quark state is unbound , a result of the Pauli exclusion principle. The Pauli exclusion principle can be circumvented through the addition of strangeness as an extra degree of freedom. Jaffe estimated that the color-hyperfine interaction between the six quarks of a $`|uuddss`$ state would be strong enough to cause the $`H^0`$ to be a bound state. Different theoretical models have produced a multitude of predictions for $`M_H`$, covering a broad mass range from deeply bound states to unbound states . Most of the predictions, however, are clustered in the range of 2.1 $`\mathrm{GeV}/\mathrm{c}^2`$ up to a few $`\mathrm{MeV}/\mathrm{c}^2`$ above the $`M_{\mathrm{\Lambda }\mathrm{\Lambda }}`$ threshold of 2.231 $`\mathrm{GeV}/\mathrm{c}^2`$. If $`M_H`$ is between the $`M_{\mathrm{\Lambda }n}`$ (2.055 $`\mathrm{GeV}/\mathrm{c}^2`$) and $`M_{\mathrm{\Lambda }\mathrm{\Lambda }}`$ thresholds, it is expected to be a metastable state and undergo a $`\mathrm{\Delta }S=1`$ weak decay. Its lifetime is estimated to be less than $``$$`2\times 10^7`$ sec , while baryonic $`\mathrm{\Delta }S=1`$ weak decays suggest a lower limit on the lifetime of $``$$`1\times 10^{10}`$ sec. Using a variety of techniques, experimentalists have been trying for years to detect the $`H^0`$, without conclusive results . In recent years, production models based on empirical data with few assumptions built into them have allowed experimentalists to gauge the sensitivity of their results. In particular, the combined results from three recent experiments rule out the mass range for $`\mathrm{\Delta }S=1`$ transitions below 2.21 $`\mathrm{GeV}/\mathrm{c}^2`$. The analysis presented here covers the mass range of lightly bound $`H^0`$’s between the $`M_{\mathrm{\Lambda }p\pi ^{}}`$ and $`M_{\mathrm{\Lambda }\mathrm{\Lambda }}`$ thresholds, 2.194 $`\mathrm{GeV}/\mathrm{c}^2`$and 2.231 $`\mathrm{GeV}/\mathrm{c}^2`$, respectively. In addition, this search is sensitive to a large range of lifetimes, from $``$$`5\times 10^{10}\mathrm{sec}`$ to $``$$`1\times 10^3`$ sec, completely covering the range of lifetimes proposed in reference yet to be probed. It is expected that an $`H^0`$ can be produced in $`pN`$ collisions through hyperon production; where two strange quarks are produced, followed by the coalescence of a hyperon and a baryon to form a bound six-quark state. Currently, the only model for $`H^0`$ production at Tevatron beam energies is one proposed by Rotondo . His model is based on production of the doubly strange $`\mathrm{\Xi }^0`$, followed by the coalescence of the $`\mathrm{\Xi }^0`$ with a $`n`$, predicting a total cross-section of 1.2 $`\mu `$b. Our search for the $`H^0`$ is the first search to normalize to the doubly strange $`\mathrm{\Xi }^0`$, removing the strangeness production portion from the $`H^0`$ production process, making this analysis a sensitive probe of hypernuclear coalescence. The $`H^0`$ production process at the Tevatron, through $`pN`$ collisions, complements other current experimental efforts that search for $`H^0`$’s produced in heavy ion collisions. The KTeV beam line and detector at Fermilab were designed for high precision studies of direct CP violation in the neutral kaon system (E832) and in rare $`K_L`$ decays (E799-II). To reduce backgrounds from long lived neutral states, the apparatus was situated far from the production target. A clean neutral beam, powerful particle identification, and very good resolution for both charged particles and photons made it a good facility to search for and fully reconstruct both the signal mode, $`H^0\mathrm{\Lambda }p\pi ^{}`$, and the normalization mode, $`\mathrm{\Xi }^0\mathrm{\Lambda }\pi _D^0`$, where $`\pi _D^0`$ refers to the Dalitz decay of the $`\pi ^0`$ to $`e^+e^{}\gamma `$. For both modes, the $`\mathrm{\Lambda }`$’s decay downstream of the parent particle’s vertex to a $`p\pi ^{}`$. The data presented here were collected during two months of E799-II data-taking in 1997. The KTeV detector and the trigger configuration used to select events with four charged particles have been described elsewhere . This article highlights aspects of the detector directly relevant to this analysis. A neutral beam, composed primarily of kaons and neutrons was produced by focusing an 800 GeV/$`c`$ proton beam at a vertical angle of 4.8 mrad on a 1.1 interaction length (30 cm) BeO target. Photons produced in the target were converted in a 7.6 cm lead absorber located downstream of the target. Charged particles were removed further downstream with magnetic sweeping. Collimators, followed by sweeping magnets, defined two 0.25 $`\mu `$sr neutral beams that entered the KTeV apparatus (Fig. 1) 94 m downstream from the target. The 65 m vacuum ($``$$`10^6`$ Torr) decay region extended to the first drift chamber. The momenta of the charged particles were measured with a charged particle spectrometer, consisting of four planar drift chambers, two upstream and two downstream of a dipole analyzing magnet. The energies of the particles were measured with a high resolution CsI electromagnetic calorimeter. To distinguish electrons from hadrons, the energy (E) measured by the calorimeter was compared to the momentum (p) measured by the charged spectrometer. Electrons were identified by $`0.9<\mathrm{E}/\mathrm{p}<1.1`$, while pions and protons were identified by $`\mathrm{E}/\mathrm{p}<0.9`$. Offline, events were required to have four reconstructed charged particles. We searched for long lived $`H^0`$’s which were produced at the target and decayed in the vacuum decay region. A characteristic feature of the topology of both the signal and normalization modes is that the parent particle’s true decay vertex is defined by a charged track vertex; the $`p\pi ^{}`$ vertex for the $`H^0`$ and the $`e^+`$$`e^{}`$ vertex for the $`\mathrm{\Xi }^0`$. The subsequent $`\mathrm{\Lambda }`$’s decay downstream of the parent particle’s vertex to $`p\pi ^{}`$. Events were required to have at least four reconstructed tracks, two tracks associated with positive particles and two with negative particles. In the case of the $`H^0`$, having identified the $`p`$ and the $`\pi ^{}`$ by their $`E/p`$ and their charge, there remains a two-fold ambiguity in combining the $`p`$’s with the $`\pi ^{}`$’s to form a vertex. To resolve that ambiguity, each pair of positive and negative tracks was combined to form a vertex at the location of closest approach between the two tracks. The distance of closest approach (DOCA) and the resultant momentum vector of the combined charged tracks for both the upstream $`p\pi ^{}`$ vertex and the downstream $`\mathrm{\Lambda }`$ vertex were calculated. The $`H^0`$ vertex was determined by calculating the DOCA of the downstream $`\mathrm{\Lambda }`$ and the upstream $`p\pi ^{}`$. The DOCA’s of the upstream, downstream and $`H^0`$ vertices were summed in quadrature, and the permutation that gave the minimum quadrature sum was selected. Downstream $`\mathrm{\Lambda }`$’s were identified by requiring the ratio of the lab momenta of the $`p`$ to $`\pi ^{}`$ to be greater than 3, which accepted 99.8% of the simulated signal events. Because the $`\mathrm{\Lambda }`$ decays to two particles, the transverse momentum ($`P_T`$) distribution of the decay products relative to the direction of the $`\mathrm{\Lambda }`$ exhibits a Jacobian peak at a maximum of 0.1 GeV/c. To enhance the selection of $`\mathrm{\Lambda }`$ decays relative to background three body $`K_L`$ decays, where the $`P_T`$ distribution is peaked at 0, we required the $`P_T`$ of the $`p`$ and the $`\pi ^{}`$ to be between 0.07 GeV/c and 0.11 GeV/c, accepting $``$60% of the simulated signal events. To select $`\mathrm{\Lambda }`$’s further, we required the mass of the reconstructed $`p\pi ^{}`$ to fall within $`\pm 5`$ $`\mathrm{MeV}/\mathrm{c}^2`$ of $`M_\mathrm{\Lambda }`$, where the $`M_\mathrm{\Lambda }`$ resolution is $``$$`\mathrm{MeV}/\mathrm{c}^2`$. The charged portion of the upstream vertex is made up of a $`p`$ and $`\pi ^{}`$ and has kinematics similar to a $`\mathrm{\Lambda }`$ decay. Thus, the same constraint used for the $`\mathrm{\Lambda }`$ was applied, requiring the ratio of the lab momenta of the $`p`$ to $`\pi ^{}`$ to be greater than 3. Interactions in the collimator, sweeping magnets, and vacuum window produced background events with multiple vertices. Events where at least one decaying particle was short-lived were removed by requiring the reconstructed $`H^0`$ and $`\mathrm{\Lambda }`$ vertices to be between 100 m and 155 m from the target, $``$5 m away from those apparatus elements. The signal region for $`H^0`$ candidates was defined by requiring $`M_H`$ to be between 2.190 $`\mathrm{GeV}/\mathrm{c}^2`$ and 2.235 $`\mathrm{GeV}/\mathrm{c}^2`$ to account for resolution effects in measuring $`M_H`$; the upper and lower limits on $`M_H`$ are 4 $`\mathrm{MeV}/\mathrm{c}^2`$ (more than twice our estimated $`M_H`$ resolution) above the $`M_{\mathrm{\Lambda }\mathrm{\Lambda }}`$ threshold of 2.231 $`\mathrm{GeV}/\mathrm{c}^2`$ and below the $`M_{\mathrm{\Lambda }p\pi ^{}}`$ threshold of 2.194 $`\mathrm{GeV}/\mathrm{c}^2`$, respectively. In addition, the transverse momentum of the reconstructed $`H^0`$ ($`P_T(H^0)`$), measured relative to a vector connecting the $`H^0`$ decay vertex and the target was required to be less than 0.015 GeV/c (see Fig. 2). The cut on $`P_T(H^0)`$ accepted 90% of the remaining simulated signal events. None of the events passed all the selection criteria. To quantify the measurement sensitivity, we normalized to $`\mathrm{\Xi }^0`$ production, using data taken with the same trigger configuration as that for the $`H^0`$ analysis, reconstructing $`\mathrm{\Xi }^0\mathrm{\Lambda }\pi _D^0`$ decays. Except for the additional photon coming from the $`\pi _D^0`$, the normalization mode’s decay topology is similar to that of the $`H^0`$’s. Applying a series of cuts similar to those used for the $`H^0`$ analysis yields 17 160 $`\mathrm{\Xi }^0`$ events, with negligible background. The cleanliness of the normalization mode’s signal is demonstrated in Fig. 3 which shows the $`\mathrm{\Xi }^0`$ invariant mass peak. The accepted $`\mathrm{\Xi }^0`$’s have mean momentum of $`270`$ GeV/c. Distributions of variables from simulated decays, such as the $`\mathrm{\Xi }^0`$’s momentum and the location of the $`\mathrm{\Xi }^0`$’s decay vertex are consistant with the same for data. The $`\mathrm{\Xi }^0`$ and $`H^0`$ are expected to have different absorption lengths in the BeO target and the Pb absorber, leading to a difference in the transmission probability ($`T`$) for the two particles. We estimate the $`\mathrm{\Xi }^0`$-nucleon ($`\sigma _{\mathrm{\Xi }N}`$) and $`H^0`$-nucleon ($`\sigma _{HN}`$) cross-sections and thus the $`T`$’s based on the assumption of isospin invariance. In addition, we utilize measured $`np`$, $`\mathrm{\Lambda }p`$ and deuteron-proton ($`dp`$) cross-sections , 40 mb, 35 mb and 75 mb, respectively, to account for the effect of replacing down quarks with strange quarks; we assume that the scale factor $`S=\sigma _{\mathrm{\Lambda }p}/\sigma _{np}`$ can be used to correct for the substitution of a single strange quark for a down quark and $`S^2`$ for a double substitution. We then estimate $`\sigma _{\mathrm{\Xi }N}`$ to be $`\sigma _{\mathrm{\Lambda }p}S=(31\pm 4)`$ mb and $`\sigma _{HN}`$ to be $`\sigma _{dp}S^2=(57\pm 18)`$ mb, where the assigned errors are taken to be equal to the magnitude of the correction itself. The measured absorption lengths for nucleons in BeO and Pb are scaled by the factors $`\sigma _{np}/\sigma _{HN}`$ and $`\sigma _{np}/\sigma _{\mathrm{\Xi }N}`$. The estimated $`T`$’s in the target are $`T_\mathrm{\Xi }^{Be0}=0.623\pm 0.037`$ and $`T_H^{Be0}=0.44\pm 0.12`$. In the lead absorber, the $`T`$’s are estimated to be $`T_\mathrm{\Xi }^{Pb}=0.562\pm 0.043`$ and $`T_H^{Pb}=0.35\pm 0.14`$. As no signal events passed all the selection criteria, the final result is presented as a 90% C.L. upper limit on the inclusive $`H^0`$ production cross-section over the solid angle defined by the collimators, expressed in terms of the inclusive $`\mathrm{\Xi }^0`$ production cross-section $`{\displaystyle \frac{d\sigma _H}{d\mathrm{\Omega }}}`$ $`<`$ $`{\displaystyle \frac{\xi }{N_\mathrm{\Xi }}}{\displaystyle \frac{T_\mathrm{\Xi }^{Be0}T_\mathrm{\Xi }^{Pb}}{T_H^{Be0}T_H^{Pb}}}{\displaystyle \frac{A_\mathrm{\Xi }}{A_H}}{\displaystyle \frac{B(\mathrm{\Xi }^0\mathrm{\Lambda }\pi _D^0)}{B(H^0\mathrm{\Lambda }p\pi ^{})}}{\displaystyle \frac{d\sigma _\mathrm{\Xi }}{d\mathrm{\Omega }}},`$ (1) where $`\xi `$ is the factor which multiplies the single event sensitivity (SES) to give the 90% C.L. upper limit, $`N_\mathrm{\Xi }`$ is the number of reconstructed $`\mathrm{\Xi }^0\mathrm{\Lambda }\pi _D^0`$ decays, the various $`T`$ factors are the transmission probabilities described previously, $`A_\mathrm{\Xi }`$ and $`A_H`$ are the acceptances for $`\mathrm{\Xi }^0`$ and $`H^0`$ decays, respectively, and $`B(\mathrm{\Xi }^0\mathrm{\Lambda }\pi _D^0)`$ and $`B(H^0\mathrm{\Lambda }p\pi ^{})`$ are the respective branching ratios. Our estimate of the SES suffers from a large relative uncertainty of $``$$`50\%`$, predominantly due to the uncertainty in determining the transmission factors. The uncertainty in the SES gives rise to a factor of $`\xi =3.06`$ in the determination of the 90% C.L. upper limit . The acceptances were determined from a detailed detector simulation. Because the trigger was the same for both the signal and normalization modes and because both the signal and normalization modes consist of four-track events with largely similar topologies, trigger and acceptance inefficiencies mostly cancel. The $`\mathrm{\Xi }^0`$ flux was measured using two separate triggers, each composed of different trigger elements. The discrepancy between the two flux measurements was converted into a systematic uncertainty in determining $`A_\mathrm{\Xi }`$. Other systematic uncertainties were negligible relative to this uncertainty. $`A_\mathrm{\Xi }`$ was determined to be $`(6.93\pm 0.94)\times 10^6`$. To determine $`A_H`$, the detector simulation included the $`H^0`$ production spectrum proposed in Rotondo’s phenomenological model . The dominant experimental uncertainty in $`A_H`$ comes from the simulation of proton showers in the calorimeter, where the relative uncertainty was determined to be 5.3%. For example, taking $`M_H`$ in the middle of the mass range we are sensitive to, $`M_H=2.21\mathrm{GeV}/\mathrm{c}^2`$, and the lifetime corresponding to the lifetime given in reference for this mass, $`\tau _H=5.28\times 10^9`$ sec, $`A_H=5.64\times 10^3`$. As a cross-check of Rotondo’s model, which incorporates a $`\mathrm{\Xi }^0`$ production spectrum, we applied our measured $`\mathrm{\Xi }^0`$ production spectrum in the detector simulation, replacing $`M_\mathrm{\Xi }`$ and $`\tau _\mathrm{\Xi }`$ with $`M_H`$ and $`\tau _H`$, respectively. This lowered $`A_H`$ by $``$15%. The 90% C.L. upper limit on the product of the $`H^0`$ branching ratio and the production cross-section, taking into account all the uncertainties, is $`B(H^0\mathrm{\Lambda }p\pi ^{}){\displaystyle \frac{d\sigma _H}{d\mathrm{\Omega }}}`$ $`<`$ $`5.87\times 10^9{\displaystyle \frac{d\sigma _\mathrm{\Xi }}{d\mathrm{\Omega }}}.`$ (2) In Fig. 4, we plot the 90% C.L. upper limit on the ratio $`(B(H^0\mathrm{\Lambda }p\pi ^{})d\sigma _H/d\mathrm{\Omega })/(d\sigma _\mathrm{\Xi }/d\mathrm{\Omega })`$, studying the effect on $`A_H`$ of varying $`\tau _H`$ over a large range of values. For short lifetimes, the $`H^0`$’s decay before reaching the decay region, while for long lived states, only a few decay while passing through the detector. Both effects lower our sensitivity to $`H^0`$ decays. Varying $`M_H`$ across the full range of masses to which we are sensitive leads to a relative shift of approximately $`\pm 60\%`$ from the central value of the curve plotted in Fig. 4. Included in the figure is a line at $`\tau _\mathrm{\Lambda }/2=1.316\times 10^{10}`$ sec, the expected lifetime of system made up of two lightly bound $`\mathrm{\Lambda }`$’s, which might be a lower bound on $`\tau _H`$. To interpret the sensitivity of this result relative to the theoretical production model, we integrate the theoretical predictions for both $`d\sigma _H/d\mathrm{\Omega }`$ and $`d\sigma _\mathrm{\Xi }/d\mathrm{\Omega }`$ over the solid angle covered by the collimators. The right ordinate axis of Fig. 4 shows the sensitivity of this measurement. Thus our result rules out lightly bound $`H^0`$’s, between the $`M_{\mathrm{\Lambda }p\pi ^{}}`$ and $`M_{\mathrm{\Lambda }\mathrm{\Lambda }}`$ thresholds of 2.194 $`\mathrm{GeV}/\mathrm{c}^2`$ and 2.231 $`\mathrm{GeV}/\mathrm{c}^2`$, respectively, over a large range of lifetimes, from $``$$`5\times 10^{10}`$ sec up to $``$$`1\times 10^3`$ sec. A model proposed in reference associates $`M_H`$ with both $`\tau _H`$ and $`B(H^0\mathrm{\Lambda }p\pi ^{})`$. For example, for $`M_H=2.21\mathrm{GeV}/\mathrm{c}^2`$ they predict $`\tau _H=5.28\times 10^9`$ sec and a branching ratio of $`5.4\times 10^2`$. To test this model, we vary $`M_H`$ between the $`M_{\mathrm{\Lambda }p\pi ^{}}`$ and $`M_{\mathrm{\Lambda }\mathrm{\Lambda }}`$ thresholds, determining the dependence of the production cross-section on the mass, lifetime and branching ratio. Figure 5 is a plot of $`(d\sigma _H/d\mathrm{\Omega })/(d\sigma _\mathrm{\Xi }/d\mathrm{\Omega })`$ versus $`M_H`$. In this figure, the factor influencing the sensitivity the most is the $`H^0`$ branching ratio which decreases from a maximum of 14% at the $`M_{\mathrm{\Lambda }\mathrm{\Lambda }}`$ threshold down to zero at the $`M_{\mathrm{\Lambda }p\pi ^{}}`$ threshold. The right ordinate axis of Fig. 5 shows the sensitivity of this measurement, based on Rotondo’s model. Assuming Rotondo’s production model, this result clearly rules out a long lived $`H^0`$ state, as proposed in reference , for $`M_H`$ between the $`M_{\mathrm{\Lambda }p\pi ^{}}`$ and $`M_{\mathrm{\Lambda }\mathrm{\Lambda }}`$ thresholds. To conclude, our result rules out a lightly bound $`H^0`$ dibaryon over a range of mass below the $`M_{\mathrm{\Lambda }\mathrm{\Lambda }}`$ threshold not ruled out by previous experiments and for a wide range of lifetimes, placing stringent limits on the $`H^0`$ production process. This result, in conjunction with the result from experiment BNL E888 , completely rules out the model proposed in reference for all $`\mathrm{\Delta }S=1`$ transitions. We thank D. Ashery, F.S. Rotondo and A. Schwartz for their insightful comments. We gratefully acknowledge the support and effort of the Fermilab staff and the technical staffs of the participating institutions for their vital contributions. This work was supported in part by the U.S. Department of Energy, The National Science Foundation and The Ministry of Education and Science of Japan.
no-problem/9910/astro-ph9910389.html
ar5iv
text
# Reconstructing Galaxy Spectral Energy Distributions from Broadband Photometry ## 1 Introduction With the application of photometric-redshifts to wide-angle, multicolor photometric surveys the study of galaxy evolution has moved from expressing the evolution as a function of observable parameters (e.g. magnitudes and colors) to one where we can describe the evolution of galaxies in terms of their physical attributes (i.e., their redshift, luminosity and spectral type). Over the last several years with new multicolor surveys coming on-line these techniques have become increasingly popular enabling large, well defined statistical approaches to galaxy evolution. In the astronomical literature there are a number of different approaches to estimating the redshifts of galaxies from their broadband photometry. While these techniques differ in their algorithmic details they share the same underlying goal. We wish to model the change in galaxy color as a function of redshift (and galaxy type) and use these models to estimate galaxy redshifts, in a statistical sense. For simplicity we divide the differing techniques into two classes, those which use spectral energy distributions (whether derived from models or empirically from observations of local galaxies) as spectral templates and those which derive a direct empirical correlation between color and redshift using a training set of galaxies. Template based photometric-redshifts are constructed by comparing the observed colors of galaxies to a set of galaxy spectral energy distributions. Their strength is that they are simple to implement and can be applied over a wide range in redshift. Their main limitation is that we must know the underlying spectral energy distributions of galaxies within our sample. Comparisons of the colors of galaxies with spectral synthesis models (Bruzual and Charlot 1993) have shown that the modeling of the ultraviolet part of galaxy spectra is highly uncertain (whether this is due to uncertainties in the modeling of the stars or due to the effect of dust is unclear). Consequently, photometric-redshift estimates are most accurate when we apply empirical spectral energy distributions derived from observations of local galaxies (e.g. Sawicki et al 1997). These empirical relations are however constructed from only a handful of galaxies that have been observed in detail and there is no guarantee that they represent the full distribution of galaxy types (particularly when we include the effects of evolution with redshift). The second approach is to derive a direct correlation between the observed colors of galaxies and their redshifts using a training set that contains spectroscopic and photometric data. The strength of this technique is that the relation is purely empirical; the data themselves define the correlation between color and redshift. The effects of dust and galaxy evolution that are present in the training set are, therefore, implicit within the derived correlation. Their weakness is that the correlations cannot be extrapolated to redshifts beyond the limits of the training set and that a sample of galaxies with redshifts (and selected to have a broad color distribution) must be present before the photometric-redshift relation can be derived. Clearly if we can combine these two approaches we may derive the optimal approach for estimating galaxy redshifts. If we can use a training set of galaxies to define the underlying spectral energy distributions (which will include the effects of evolution and dust) then we can apply these empirical template spectra over a wide range in redshift. In this paper we describe a fundamentally new approach to photometric-redshifts that extends our previous work on estimating galaxy redshifts from broadband colors such that we construct spectral energy distributions directly from the broadband data. In Section 2 we outline the physical and mathematical basis of this new approach. In Section 3 we apply these techniques to a sample of galaxies with simulated colors, showing that we can recover the underlying spectral energy distributions that describe these galaxies. Section 4 applies the optimization procedure to the multicolor photometric data of the Hubble Deep Field (Williams et al 1996) and shows that this technique can be used to improve the accuracy of template-based photometric redshift relations. Finally in Section 6 we describe the application of these techniques to analysis and the modeling of galaxy spectral energy distributions. ## 2 Building Spectral Templates from Broadband Photometry In an earlier work (Connolly et al 1995a) we described how to model the empirical correlation between the colors of galaxies and their redshifts by fitting a multi-dimensional polynomial relation. This technique proved successful for estimating the redshifts of galaxies in the $`0<z<1`$ regime but was built on a very general but somewhat unphysical basis that the color-redshift relation can be described by a low order polynomial. Ideally we want the underlying basis on which we define the photometric-redshift relation to be physically motivated. If we can construct a set of low resolution spectral energy distributions directly from a set of galaxies with multicolor photometry then we can achieve this goal (an empirical, physical basis). If we consider a galaxy, at a redshift $`z`$, observed through a series of broadband filters then the restframe fluxes observed through each filter, $`f_k`$ can be described by, $$f_k=R_k(\lambda )S(\lambda /[1+z])𝑑\lambda $$ (1) where $`R_k(\lambda )`$ is the response function of the $`k`$th filter and $`S(\lambda /[1+z])`$ is the spectral energy distribution of a galaxy blueshifted to the galaxy’s restframe. The response function, $`R_k(\lambda )`$, includes not only the filter transmission but also the instrumental and observational effects such as the CCD quantum efficiency and the change in effective shape of the filter due to absorption by the atmosphere. The spectral energy distribution, $`S(\lambda /[1+z])`$, is the true underlying spectrum that includes the stellar composition of the galaxy together with the effect of intragalactic extinction. For high redshift objects the effects of the IGM should be built in the above equation. We can see that $`f_k`$ is nothing more than a convolution of the input spectrum with the filter response function. Thus from a photometric catalog of galaxies with identical restframe spectral energy distributions, given the filter response functions, we can deconvolve the underlying spectral energy distribution. More exactly we can recover a low resolution slice of the spectrum whose limits are defined by the wavelength range over which the filters extend and the redshifts of the galaxies in the catalog. If we observe just one object there are, of course, a wide range of spectra that could give the exact flux values passing through the broadband filters. The question then arises, if instead we have an ensemble of $`N`$ galaxies over a range of redshift, with accurate redshifts and $`K`$ multicolor photometric observations per galaxy, can we invert this relation to recover the underlying spectral energy distributions (even in a statistical sense)? In principle, for a sample of galaxies spread over a range in redshift, we have $`N\times K`$ measurements of the underlying spectral energy distributions. For a series of optical and near-infrared filters the resolution of our reconstructed spectra would be proportional to the rest wavelength range sampled by the filters divided by $`N\times K`$. The advantage of this technique is that it is numerically straightforward to calculate. The deconvolution algorithms do not require a large amount of computational power. Its main weakness is that in the real world we cannot construct a large catalog of galaxies with identical spectral energy distributions on which to apply the deconvolution algorithm. We can circumvent this, however, by noting that galaxy spectra can be described by a small number of orthogonal spectral components or eigenspectra (Connolly et al 1995b). Each galaxy spectrum can be written as, $$S(\lambda )=\underset{j=1}{\overset{J}{}}a_jE_j(\lambda )$$ (2) where $`E_j(\lambda )`$ are the $`J`$ eigenspectra and $`a_j`$ are the expansion coefficients. Each galaxy spectral type can then be described by a linear combination of the eigenspectra (i.e. only the expansion coefficients, $`a_j`$, differ as a function of spectral type). From observations of local star forming and quiescent galaxies it has been shown that the number of components (or eigenspectra) required to reconstruct the continuum shape of a galaxy spectrum (to an accuracy of better than 1%) is small, typically 2–4 (Connolly et al 1995b). Utilizing the eigenspectra formalism simplifies the deconvolution problem in two ways. By restricting the number of components we need to reconstruct from the multicolor data to 2–4 the inversion process is much simplified. Secondly, the fact that any galaxy spectrum can be described by this small number of components enables us to use all of the available data (i.e. we do not have to restrict our analysis to a single class of galaxies with similar spectral types). ### 2.1 Template spectrum estimation The goal of the deconvolution is to provide a set of $`J`$ eigenspectra, $`E_j(\lambda )`$, that best represent the observed colors of the galaxy training set. We can parameterize the eigenspectra as a linear combination of basis functions such that, $$E_j(\lambda )=\underset{l=1}{\overset{L}{}}b_{jl}B_l(\lambda ),$$ (3) where $`B_l(\lambda )`$ are a set of $`L`$ basis function vectors (e.g. Legendre polynomials) and $`b_{jl}`$ are their relative expansion coefficients. In this paper we choose to parameterize the eigenspectra in terms of a linear combination of Legendre polynomials. The choice of the parameterization is, however, completely arbitrary. There are many ways we could describe the galaxy eigenspectra. In the simplest case the coefficients $`b_{jl}`$ could be just the flux values of the eigenspectra measured at a fixed set of wavelengths (i.e. the the basis functions would be delta functions centered at these wavelengths). We would then be reconstruct the eigenspectra directly rather than expressing it in terms of a linear combination of functions (Budavari et al 1999). We also note that, while we use the term eigenspectra, the above procedure does not guarantee (nor is it necessary) that the eigenspectra be orthogonal. From Equations 1, 2 and 3 we can now estimate that color of a galaxy in terms of the linear combination of eigenspectra or basis functions. For the $`i`$th galaxy in a ensemble of photometric observations the estimated flux through the $`k`$th filter is given by, $`f_{ik}^e`$ $`=`$ $`{\displaystyle _0^{\mathrm{}}}R_k(\lambda ){\displaystyle \underset{j=1}{\overset{J}{}}}a_{ij}E_j(\lambda /[1+z_i])d\lambda `$ (4) $`=`$ $`{\displaystyle _0^{\mathrm{}}}R_k(\lambda ){\displaystyle \underset{j=1}{\overset{J}{}}}{\displaystyle \underset{l=1}{\overset{L}{}}}a_{ij}b_{jl}B_l(\lambda /[1+z_i])d\lambda `$ (5) We can now define a $`\chi ^2`$ or cost function that describes the distance between the observed flux values, $`f_{ik}^m`$, measured for particular galaxy and those predicted by the eigenspectra, $`f_{ik}^e`$. We write the form of the cost function as a $`\chi ^2`$, weighted by the measured flux errors $`\sigma _{ik}`$ but other distances can be also used, dependent on how one would like to weight the different observations. $$\chi ^2=\underset{i=1}{\overset{N}{}}\underset{k=1}{\overset{K}{}}\frac{[f_{ik}^ef_{ik}^m]^2}{\sigma _{ik}^2}.$$ (6) The cost function depends on parameters $`a_{ij}`$ and $`b_{jl}`$. The minimum of this cost function determines the set of optimal parameters (in other words eigenspectra and expansion coefficients) that give the best estimation of fluxes in this framework. Of course, the larger the catalog of galaxies with multicolor observations the more non-redundant parameters we can optimize for and, consequently, the finer the resolution in the eigenspectra. By carefully choosing how we generate the eigenspectra we can have a cost function whose minimum can be found almost analytically. The variable parameters of the cost function will be $`a_{ij}`$ and $`b_{jl}`$. Since Equation 5 is linear in both of them they will appear in a quadratic form in the $`\chi ^2`$ cost function. At the $`\chi ^2`$ minimum all of the derivatives of the cost function should be zero. If we consider the values $`a_{ij}`$ as constants, the equations with the derivatives in $`b_{jl}`$ will give a set of $`J\times L`$ linear equations. In similar way keeping $`b_{jl}`$ constant we will have $`N\times J`$ linear equations for $`a_{ij}`$, or to be more exact it breaks up into $`J`$ sets of $`N`$ linear equations with $`N`$ unknowns in each. Each of these set of linear equations can be solved independently. Therefore, by iteratively solving the two sets of linear equations (holding one set of coefficients constant while solving for the other coefficients) one can minimize the cost function in a efficient manner. In fact one could generate the eigenspectra in many other ways, for example, using a $`tanh()`$ function to mimic the 4000 Å break and a power law function to represent the star formation at the ultraviolet end of the spectrum. Parameters could occur in the cost function in more complex forms than showed above. If the variable parameters are not in a quadratic form in the cost function nonlinear optimization methods (such as different types of gradient descent methods or simulated annealing if local minima cause problems) could be used instead of solving linear equations. This would be computationally harder but the shape of the eigenfunctions would not be restricted to a particular basis. The limitation on the number of parameters and, therefore, the number of Legendre polynomials, is defined by the set of galaxies with multicolor photometric observations. As described previously, if we have $`N`$ galaxies with observations in $`K`$ passbands then we have $`N\times K`$ independent measurements. If we want to describe the distribution of galaxy types by $`J`$ eigenspectra (in our case 3 eigenspectra) then we have $`J\times N`$ constraints. This means that the number of degrees of freedom in the system and, therefore, the maximum number of parameters (or polynomials) we can solve for, $`X`$, is, $$X\frac{N(KJ)}{J}$$ (7) As we noted earlier the parameterization we choose to describe the eigenspectra by is completely arbitrary. A second, and more physical, way to visualize how the number of observations (number of galaxies and passbands ) relates to how well we can recover the underlying eigenspectra is to think of the parameters $`X`$ as the number of wavelengths at which we can sample a galaxy spectrum. This involves using the values $`E_j(\lambda _l)`$ of a discretely sampled low resolution eigenspectrum as optimization parameters. The basis functions will, therefore, have a constant value 0 except at a selected $`\lambda _l`$ where the value is 1. In such a way we have a direct, low resolution realization of the underlying eigenspectra that describe the observed galaxy population. It is then clear that, given the wavelength interval we wish to reconstruct, $`X`$ defines the maximum resolution of the resultant eigenspectra. The number of observations and the passbands are given by the measurements but we have some freedom to chose the number of eigenspectra and the resolution. For example if we want to use the templates for photometric redshift estimation we have to have enough resolution to reconstruct the broadband features of the spectrum. So we need resolution at least the width of the (blueshifted) filters. On the other hand to represent all different spectral types even rare ones we would like to have large number of templates. If the number of observations is not enough to satisfy both of these requirements one has to chose between representing equally the spectra of all of the objects but with poor quality or to represent the typical ones with better resolution and have a few outliers. Comparing the final value of the cost function or the quality of photometric redshift estimation can help to set these parameters optimally. ## 3 Application to Simulated Data To demonstrate the validity of this technique and to determine the accuracy to which we can reconstruct galaxy spectral energy distributions from broadband photometry we initially apply the algorithm to a set of simulated data. From the Bruzual and Charlot spectral synthesis models (BC96, Bruzual and Charlot 1995) we construct a set of galaxy spectra using a simple stellar population ranging in age from 0 yrs to 20 Gyrs (with solar metalicity and a single burst of star formation). In total the sample contains 222 spectra covering the spectral range 200 Å to 2.2 $`\mathrm{\mu m}`$. From these spectra we apply a Principal Component Analysis or Karhunen-Loève transform (Karhunen 1947, Loève 1948, Connolly et al 1995b) to construct a series of orthogonal eigenspectra. The first two of these eigenspectra are shown in Figure 1a and Figure 1b respectively. Using the first two eigenspectra we simulate the colors of galaxies as a function of redshift. The expansion coefficients $`a_i`$, or mixing angles, are designed to produce a set of galaxy colors that match the color distribution of galaxies within the local Universe. In total, we construct a sample of 616 galaxies with U, B, V, I, J, H and K photometry covering the redshift range $`0<z<1.35`$. The upper limit on the redshift range is imposed for two reasons, to match the redshift distribution of those galaxies in the Hubble Deep Field with $`V_{606}<24`$, and secondly to avoid the added complication of including the attenuation due to the intergalactic medium (e.g. Madau et al 1996). #### 3.0.1 Reconstructing Bruzual and Charlot Spectra Given these simulated data (colors and redshift) we use the algorithms described in Section 2 to reconstruct the underlying eigenspectra. The redshift distribution of the input catalog of galaxies defines the wavelength range over which the eigenspectra can be reconstructed (the upper and lower bounds being defined by the restframe upper and lower filter cutoffs of the K and U passbands respectively). For the redshift range of the Bruzual and Charlot data we can recover the spectral energy distribution over a wavelength interval of approximately $`950<\lambda <22000`$ Å. Initially we sample the eigenspectra that we wish to recover in 20 bins (sampled linearly in wavelength), with a spectral resolution of approximately 1000 Å. Each of the two approaches (i.e. using Legendre polynomials and evenly sampled random spectra) outlined in Section 2 were applied to the data and were found to give identical results. In the section below we will, therefore, only discuss the optimization of the eigenspectra using Legendre polynomials. A subsequent paper (Budavari et al. 1999) will discuss in more detail techniques that can be applied directly to the spectra themselves. For basis functions, $`B_l(\lambda )`$, we used the first $`20`$ Legendre polynomials. The optimization procedure was started with the Legendre polynomials, $`B_l(\lambda )`$, specified by a series of random numbers. The optimization was found to converge rapidly for the 616 galaxies within our simulated data. After 50 iterations, a few minutes of workstation cpu time, the cost function was found to be stable (varying by less than 0.1% from one iteration to the next). The rapid convergence of the relative error as a function of iteration is shown in Figure 2. Comparison between the eigenspectra input into the simulations and the spectra reconstructed by our optimization technique is not straightforward. While the spectral templates that the reconstruction technique derives should occupy the same subspace as the original eigenspectra there are many non-unique ways to achieve this (i.e. the output spectra can be a rotated subset of the input spectra). To transform the input and output spectral templates into a common form, for each input eigentemplate, we calculate the linear combination of the output eigentemplates that gives the closest representation of input eigenspectrum (i.e. we project the original eigenspectra onto the reconstructed eigenbasis). This produces a set of eigenspectra that can then be directly compared with the BC96 input basis. In Figure 1 we show a comparison between the BC96 eigenspectra and those we derive from the optimization technique. Clearly there is an almost perfect one-to-one match between the two with the rms scatter less than 7% both for the first and second eigenspectra. The analysis we show in Figure 1 is the ideal case (where the noise in the observations is negligible). As deep photometric and spectroscopic surveys tend to push analyses of the data to the limit of the survey we need to investigate the effect of photometric uncertainties on the reconstruction of the underlying spectral energy distributions. For simplicity we assume a constant photometric error across all passbands (i.e. we do not allow for the lower signal-to-noise that are prevalent in ultraviolet observations of intermediate redshift galaxies). Figure 3 shows the effect of increasing the photometric uncertainty on the reconstruction of the first eigenspectrum. The solid line is the reconstructed eigenspectrum with no noise added, the triangles, squares, and circles show the effect of adding 5%, 10% and 20% flux errors respectively. As we see even with very low signal-to-noise data the eigenspectra can be reproduced to a very high accuracy. The large number of galaxies present within the sample means that each spectral interval (i.e. the 1000 Å spectral bins) is sampled by multiple galaxies. The coaddition of these multiple realizations increases the signal-to-noise of the reconstructed spectrum (relative to the input data). We note that for a flux error in excess of 5% the long wavelength end of the eigenspectrum becomes significantly more noisy than the remaining spectral regions. This arises because the longest rest wavelengths are only sampled by the lowest redshift galaxies. Therefore, the reconstructed spectrum will have a larger uncertainty where the spectral values are constrained by only a small number of data points. For decreasing signal-to-noise the longest wavelength spectral regions will be the most susceptible to the effect of the noise and can, therefore, be used as an indicator of when photometric uncertainties become significant within an analysis. #### 3.0.2 Photometric redshifts from empirical eigentemplates Having reconstructed the eigenspectra that describe the distribution of galaxy colors we utilize these spectra to derive a photometric redshift relation. We note that the reconstruction technique does not involve minimizing the difference between the spectroscopic redshift of a galaxy and its photometric redshift rather it minimizes the differences in the observed and estimated colors. This means that the dispersion about the photometric redshift relation is an accurate measure of how well the reconstructed spectra match the simulated data. We apply the standard template-based photometric redshift relation, adapted to utilize eigenspectra (e.g. Benitez 1999). For the range of redshifts we wish to consider (in the case of our simulations $`0<z<1.35`$) we define a redshift dependent $`\chi ^2(z)`$, $$\chi ^2(z)=\underset{i=1}{\overset{N}{}}\underset{k=1}{\overset{K}{}}\frac{(f_{ik}^m_ja_jE_{jk}(z))^2}{\sigma _{ik}^2}$$ (8) where $`f_{ik}^m`$ is the color of galaxy $`i`$ observed through the $`k`$th filter, $`\sigma _{ik}`$ is the flux error, $`a_j`$ are the expansion coefficients of the eigensystem (which we also solve for) and $`E_{jk}`$ is the color of the $`j`$th eigenspectrum observed through the $`k`$th filter. Minimizing this relation gives the estimated redshift of the galaxy (and is a simple and fast 1 dimensional problem). While the deconvolved eigentemplates are found to be marginally susceptible to the effects of photometric uncertainties within the data we find that the dispersion in the resulting photometric redshift relation derived from these spectral templates is sensitive to the photometric noise. In Figure 4 we show the dispersion in the photometric redshift relation for a set of simulated data with photometric uncertainties of 0%, 5%, 10% and 20%. For each sample the eigenspectra were derived directly from the data themselves (i.e. including the photometric errors). These eigenspectra were then used to derive the photometric-redshift relation. The top left panel shows the photometric-redshift relation in the presence of no noise ($`\sigma _z=0.018`$), the top right the effect of 5% flux error ($`\sigma _z=0.049`$), the bottom left the effect of 10% flux errors ($`\sigma _z=0.104`$), and the bottom right the effect of 20% flux errors ($`\sigma _z=0.258`$). The correlation between the dispersion about the photometric-redshift relation and the signal-to-noise of the data is not, however, due to errors present within the reconstructed eigenspectra. We demonstrate this in Figure 5. We derive the eigenspectra for a sample of galaxies with 20% flux errors (i.e. relatively low signal-to-noise). We then use these eigenspectra in a photometric-redshift analysis of a set of data with no flux errors. The resultant relation has a dispersion of 0.051 (substantially smaller than the 0.25 derived from the simulations with 20% flux errors). The limitation on the accuracy of a photometric redshift is, therefore, almost entirely due to the signal-to-noise of the data set to which we wish to apply the relation. ## 4 Deriving Spectral Energy Distributions from Broadband Photometry #### 4.0.1 Templates from Multicolor Photometric Observations The most natural data set on which to apply our deconvolution techniques are the Hubble Deep Field observations (HDF; Williams et al 1996). These data comprise a single WFPC2 field observed in four ultraviolet/optical passbands (F300W, F450W, F606W and F814W). The HDF has been the target of a substantive effort to obtain deep spectroscopic redshifts and follow up near-infrared and longer wavelength imaging for the central and flanking fields. It represents one of the densest regions of the sky in terms of published multicolor photometry and spectroscopy. In total there are over 110 galaxies within the HDF with high signal-to-noise optical and near-infrared colors and redshifts. ### 4.1 Template estimation To enable a direct comparison between the accuracy of photometric-redshifts based on our template estimation algorithm and those derived by others from standard techniques we use the photometric catalog of Fernandez-Soto et al (1999). These data are based on the optical WFPC2 colors and followup ground-based near-infrared imaging (J, H and K) of the HDF (Dickinson et al 1999). From this catalog we extract the 74 galaxies with spectroscopic redshifts, $`z<1.35`$, and with broadband optical and near infrared magnitudes (a total of 7 passbands). The upper redshift limit is imposed on our galaxy selection to remove the effect of the IGM on the observed colors of the galaxies (which comes into effect at $`z>2`$). An other reason to limit most of our study to this redshift range is that the number of galaxies with known redshift and reliable photometry is small. Using higher redshifts would force us to extend the wavelength range of the reconstructed templates on the low wavelength end. This part of the templates would be estimated based on the information from a small number of high redshift galaxies (usually with larger photometric errors) resulting larger errors in the shape of the templates. For the deconvolution we assume the filter curves and CCD quantum efficiency curves that are publicly available at the STScI and National Optical Astronomical Observatories websites. We note in passing that the resolution of the quantum efficiency curve of the infrared camera (IRIM) used for the near infrared observations is poorly known (i.e. the resolution is poor) and that a better representation of this curve may improve on the accuracy of the results we present below. Using these data we apply the optimization algorithm described in Section 2. The redshift range of the data coupled with the filter set enables us to reconstruct the spectral energy distributions over a wavelength range of 976–21946 Å. Within this interval we reconstruct 3 eigenspectra described by 20 Legendre polynomials (sampled at a resolution of 450 Å). The resultant spectral energy distributions, after approximately 50 iterations, are shown in Figure 6. The solid line shows the first eigenspectrum and the dashed and dotted lines the second and third eigenspectra respectively. The relative flux error (the deviation between the colors we would estimate based on the three eigenspectra and those measured for each galaxy) is less than 9%. It is quite remarkable that with only 3 eigenspectra and a poor spectral resolution we could reconstruct the fluxes with errors that are comparable with the errors in the observations. The limit on the resolution of the reconstruction is the number of galaxies with redshifts and multicolor photometry. For the 74 galaxies within the HDF sample, observed in 7 passbands, there are a total of 296 degrees of freedom (for 3 eigenspectra - see Equation 7). The 20 Legendre polynomials used in each of the 3 eigenspectra add up to a total of 60 parameters that we must solve for. The current solution is, therefore, well constrained. With a larger sample of galaxies (as we describe in Section 2.1) we could improve on the spectral resolution. Even given the poor resolution of the reconstructed spectra ($`450`$ Å) there are a number of noticeable spectral features present within the reconstructed data. The first eigencomponent (essentially the mean of the galaxy distribution) has the spectral shape of a Sbc or Scd galaxy (cf. the Coleman Wu and Weedman galaxy spectral energy distributions). At the redshifts we probe ($`\overline{z}=0.8`$) this is consistent with the median galaxy type (Lilly et al 1995). In the first eigenspectrum there is also clear evidence for a break in the galaxy spectrum at 4000 Å (due to Balmer series absorption). The second and third eigenspectra appear to be dominated by emission in the ultraviolet part of the spectrum with a strong upturn at $`\lambda <4000`$ Å, consistent with a star forming population. In fact the distribution of the spectral continuum shape of the eigenspectra are consistent with those we derive from both the BC96 models and from observations of local galaxies. For wavelengths greater than 1.4 $`\mu `$m the reconstructed spectra suffer from a ringing in the reconstruction. As we noted previously, because the information used in reconstructing this spectral region is derived only from the lowest redshift galaxies within our sample then the long wavelength regions will have lower signal-to-noise. Consequently these long wavelength regions are more susceptible to the limited number of galaxies used in the reconstruction (i.e. the first signs of the error show up here). We note, however, that when calculating magnitudes from these spectral templates the convolution with the filters partially averages out these fluctuations. ## 5 Photometric-Redshifts from Empirical Spectral Energy Distributions While a comparison between how well we can reconstruct the observed colors of galaxies using the empirical spectral templates is a measure of the “goodness-of-fit” the goal of this technique is to improve the accuracy of the photometric redshift relation. We, therefore, compare the photometric redshift-relation we derive using these empirical eigentemplates with those relations given in the literature (Sawicki et al 1996, Gwyn and Hartwick 1996, Fernandez-Soto et al 1999). Principally we concentrate on the redshift estimators of Fernandez-Soto et al (1999), for which we have an identical set of photometric observations. Using the eigentemplates as spectral energy distributions we use the standard template fitting method to derive a sample of photometric redshifts. A comparison between the estimated and spectroscopic redshifts, for the redshift range $`0<z<1.35`$, is given in Figure 7. The solid line represents a one-to-one comparison between the photometric and spectroscopic redshifts. About this line the rms dispersion within the relation is $`\sigma _z=0.077`$ using three eigencomponents. This compares favorably with the results of Fern ndez-Soto et al. (1999) who achieve a dispersion of $`\sigma _z=0.095`$ using the four Coleman, Wu and Weedman (1980) (CWW) spectral energy distributions and also with the polynomial fitting techniques of Connolly et al. (1995) who determine an $`\sigma _z=0.14`$ and $`\sigma _z=0.062`$ with first and second order polynomial fits respectively. Although the number of galaxies with $`z>1.5`$ is small to get reliable templates for the high redshift galaxies we made a test to demonstrate that the method does not brake over this redshift limit (Figure 8.). The higher redshift range increase the number of observations to $`102`$ from the original $`74`$ but the wavelength range needed for the templates also extends. Since higher redshifts strech the restframe spectra more, to get the same results the resolution of templates should increase also. The number of objects with $`z>1.5`$ is small, so these requirements can be satisfied only if we reduce the number of eigenspectra from $`3`$ to $`2`$. The comparison of the spectroscopic and photometric redshifts can be seen on Figure 8. There is one extreme outlier and there seems to be some systematic underestimation of redshift for high $`z`$ objects. Despite of these errors the redshift estimation is better than the one with the CWW templates. The rms error is $`\sigma _z=0.34`$ with our estimated and $`\sigma _z=0.40`$ with the CWW templates. The few outliers are responsible for most of the error. If we remove the one (three) extreme outlier(s) with $`\mathrm{\Delta }z>1`$ we got $`\sigma _z=0.17`$ ($`\sigma _z=0.22`$) rms dispersion with the estimated (CWW) templates, respectively. ## 6 Discussion and Applications The technique we described above incorporates the strengths of the empirical and template based photometric redshift techniques. It does this by utilizing a training set of galaxies (with colors and redshifts) to derive a set of spectral energy distributions (as opposed to defining a general but somewhat arbitrary set of polynomial coefficients). The result of this optimization procedure is to produce galaxy spectra that have been optimized to match the color distribution of galaxies within a given sample. It is important to note that the spectra are not optimized to produce the spectroscopic redshifts of the training set and, therefore, a comparison between the observed (spectroscopic) and predicted (photometric) redshifts is a statistically fair comparison. As we have defined a physical basis for the photometric-redshift relation (as opposed to the general polynomial relation) we can apply the spectral energy distributions over a wide range in redshift and are not restricted to just the redshift range over which we derive the relation (which was a fundamental limitation on our earlier approach). As we have shown in Figure 7 applying this technique we reduce the scatter about the photometric-redshift relation to $`\sigma _z=0.077`$. This compares favorably with the results obtained by Fernandez-Soto et al (1999) who obtain a dispersion of $`\sigma _z=0.095`$ for galaxies with $`z<1.35`$. The reason for the decrease in the dispersion about the photometric-redshift relation is the improvement in the spectrophotometric templates used for estimating the galaxy redshifts. The standard Coleman, Wu and Weedman (1980) templates, while producing a remarkably good fit, are based on the spectra of approximately 12 local galaxies. The optimization technique we employ utilizes the colors of 74 galaxies (distributed over a range of redshifts) and is, therefore, more likely to sample the full distribution of galaxy types. The limitation of our current application is simply the number of galaxies with accurate multicolor photometry in the Hubble Deep Field. The optimization technique is, however, general enough that we can incorporate any multicolor photometric and spectroscopic survey into the analysis. With the new generation of multicolor redshift surveys nearing completion we expect this approach to substantially improve on standard template-based photometric-redshift relations over the next few years. It should also be noted that as the result of this analysis is also a set of spectral energy distributions (or more exactly a statistical representation of the spectra) the eigenspectra can be applied to new multicolor surveys without requiring that they be transformed to the same photometric system. As the spectral energy distributions we reconstruct are derived directly from observations they include the effects of dust and galaxy evolution. It is reasonable to expect that these spectra will be a better representation of galaxies over a wide range in redshift than standard local galaxy templates (as noted above). The derived eigentemplates can therefore be used to construct a set of K-corrections optimized for galaxies over a wide range in redshift. Ultimately, with large photometric samples, we can take this analysis one step further. Comparing the predictions of spectral synthesis models with those derived from the multicolor photometry will show how star formation and reddening by dust couple to produce the observed colors of galaxies. It should, therefore, be possible to identify where the models of star formation and the observed spectral properties of galaxies deviate and thereby improve on the spectral synthesis codes. Finally we note that the analysis we have currently undertaken requires the use of multicolor photometry on galaxies of known spectroscopic redshift. On can extend this analysis to the case of data with only multicolor observations (i.e. no spectroscopic redshifts). We could optimize the estimated fluxes not only for the expansion coefficients and for the shape of the eigenspectra as we described above but also for the redshifts. This would naturally increase the sample of galaxies available for the optimization procedure (enabling much finer resolution for the resultant eigenspectra). It would also become a correspondingly larger computational problem. Another possibility would be to use photometric rather than spectroscopic redshifts. ## 7 Conclusions We have presented a new technique that can reconstruct the continuum spectra of galaxies directly from a set of multicolor photometric observations and spectroscopic redshifts. Using simulated multicolor data we show that we can recover the underlying spectral energy distribution even in the presence of substantial amounts of noise. Applying this approach to existing optical and near-infrared photometric data from the Hubble Deep Field we derive a set of spectral energy distributions that describe the observed galaxy colors. The main spectral features present in the spectral energy distributions of galaxies could be clearly seen within the reconstructed low resolution eigenspectra. The utility of this approach is demonstrated by using the empirical spectral energy distributions in a template-based photometric-redshift relation. The photometric redshift estimation based on the resultant template spectra gives redshift errors significantly better than standard template techniques. The current limitation on the accuracy of our technique is simply the number of high signal-to-noise multicolor photometric data currently available. Given the new photometric and spectroscopic surveys underway or nearing completion we anticipate a significant improvement in the resolution and accuracy of the derived spectral energy distributions. We would like to thank Daniel Eisenstein, Jim Annis and David Hogg for useful discussions about our reconstruction technique. IC acknowledges partial support from the MTA-NSF grant no. 124 and the Hungarian National Scientific Research Foundation (OTKA) grant no. T030836, AJC acknowledges support from an LTSA grant (NAG57934). AS acknowledges support from NSF (AST9802980) and a NASA LTSA (NAG53503).
no-problem/9910/astro-ph9910068.html
ar5iv
text
# Excitation of Alfvén Waves and Pulsar Radio Emission ## 1 Introduction Interpretations of various observational data tend to place the location of radio emission generation at a distance $`r10100R_{NS}`$ (e.g., Phillips 1992), though there plenty of claims to the contrary (Kijak et al. 1999, Smirnova et al. 1996, Gwinn et al. 1997). As was pointed out by Kunzl et al. (1998) and Melrose & Gedalin (1999), even with the most conservative estimates of the efficiency of plasma production in the polar caps, the plasma frequency at those heights is much larger that the observed frequency. This argues against radio emission mechanisms that generate Langmiur waves with a frequency near the local plasma frequency (Asseo et al. 1990, Whetheral 1997). von Hoensbroech et al. (1998) argued that this implies a strong underdense production of particles, but the theoretical foundations of such an assumption are weak. Alternatively, Melrose & Gedalin (1999) argued that in order to restrict emission to small altitudes the emission should be generated at frequencies much smaller than the plasma frequency, preferably on Alfvén waves. They considered excitation of oblique Alfvén waves by the Cherenkov resonance with plasma and found that it mostly produces waves with $`\omega \omega _p`$ and thus cannot resolve the problem. Though we agree with their conclusion that Cherenkov excitation of Alfvén waves is insignificant in the pulsar magnetosphere, we disagree on the reasons why. First, we do not agree with the conclusion of Melrose & Gedalin (1999) that Cherenkov resonance of the beam particles with the Alfvén waves occurs outside the light cylinder for the conventional beam energies. It actually occurs for radii $`r50R_{NS}`$ (see section 2). Secondly, they assumed that Cherenkov excitation of Alfvén waves occurs in a kinetic regime, while it was shown in Lyutikov (1999) that Cherenkov excitation of Alfvén waves occurs in a hydrodynamic regime. On the other hand, some authors postulated excitation of the Alfvén mode and, using the fact that Alfvén is guided along the curved magnetic field, were able to explain some observational data like dependence of the mean profile on the frequency (e.g. Barnard & Arons 1986, McKinnon 1994, Gallant 1998, Gwinn et al. 1999). The above arguments stimulated us to reconsider the possibility of excitation of Alfvén waves at the low altitudes in the magnetosphere, including the excitation of waves at anomalous cyclotron resonance. A major theoretical problem of the theories that produce radio emission on Alfvén waves is that Alfvén waves cannot escape from the plasma and have to be converted into escaping radiation (preferably X mode). In section 5 we review various possibilities of Alfvén waves conversion into escaping modes. ## 2 Waves and resonances The open field lines of the pulsar magnetosphere are populated by the dense one-dimensional flow of electron-positron pair plasma penetrated by highly energetic primary beam with the density equal to the Goldreich-Julian density $`n_{GJ}=𝛀.𝐁/(2\pi ec)`$, Lorentz factor $`\gamma _b10^6`$ (Arons 1983, Daugherty & Harding 1996). The density of the pair plasma is $`n\lambda _Mn_{GJ}=10^310^5n_{GJ}`$, where $`\lambda _M`$ is the multiplicity factor which gives the number of pairs produced by each primary particle; its Lorentz factor is $`\gamma _p\gamma _b/\lambda _M=1010^3`$. The distribution function of the bulk plasma also has a high energy tail up to to the Lorentz factor $`\gamma _t10^5`$. Though the pulsar plasma is thought to be relativistically hot, we restrict ourselves to cold plasma case which simplifies consideration considerably. Except for the Landau damping of the waves with slow phase velocity, thermal effects has only marginal importance for the wave-particles interaction and growth rates of instabilities (Lyutikov 1999a). In what follows we consider wave excitation in the plasma frame and then use the transformation rules of the Lorentz factors and frequencies from the plasma frame to pulsar frame: $`\gamma ^{}=2\gamma _p\gamma `$, $`\omega _p^{}=\sqrt{\gamma _p}\omega _p`$, $`\omega _B^{}=\omega _B`$, where primes denote quantities in the pulsar frame and $`\gamma _p`$ is the Lorentz factor of the relative motion of the plasma frame to the pulsar frame. In the strongly magnetized pair plasma, in the small frequency limit $`\omega \omega _p`$, there are two modes - transverse extraordinary (X) mode, with the electric vector perpendicular to the k-B plane, and quasitransverse Alfvén wave with the electric vector in the k-B plane. In the plasma frame the dispersion relations of the extraordinary and Alfvén modes are (e.g. Lyutikov et al. 1999a). $`\omega _A=kc\mathrm{cos}\theta \left(1{\displaystyle \frac{\omega _{p}^{}{}_{}{}^{2}}{\omega _{B}^{}{}_{}{}^{2}}}{\displaystyle \frac{k^2c^2\mathrm{sin}^2\theta }{4\omega _{p}^{}{}_{}{}^{2}}}\right),\text{ for }\omega \omega _p`$ (1) $`\omega _X=kc\left(1{\displaystyle \frac{\omega _p^2}{\omega _B^2}}\right)`$ (2) The plasma normal modes can be excited due to interaction with particles at the Cherenkov $$\omega k_{}v_{}=0$$ (3) and anomalous cyclotron resonances $$\omega k_{}v_{}+\omega _B/\gamma _{res}=0$$ (4) where $`\omega `$ is a frequency of the wave, $`k_{}`$ and $`v_{}`$ are projections of the wave vector and velocity along the direction of the magnetic field, $`\omega _B`$ is a (nonrelativistic) cyclotron frequency and $`\gamma _{res}`$ is a Lorentz factor of the particles. Also note plus sign in front of the cyclotron term in Eq. (4). Two regimes of excitaion are possible: kinetic and hydrodynamic. <sup>1</sup><sup>1</sup>1 In a hydrodynamic type instability all the particles of the beam resonate with one normal wave in the plasma, while in a kinetic regime only small fraction of the beam particles resonates with a given wave. The X mode can not be excited by the Cherenkov-type interaction since it has has $`𝐄𝐁,𝐯`$. At radii larger than $`50R_{NS}`$ (see Eq. 5) the phase velocity of the of the X mode becomes smaller than the velocity of the primary beam with $`\gamma _b10^6`$, so it can in principle be excited by cyclotron resonance, but near the surface of the neutron star the growth rate of the cyclotron excitation of the X mode is extremely small and the frequencies do not correspond to observed ones (Lyutikov 1999a). The cyclotron excitation of X mode may occur in the outer region of pulsar magnetosphere and is considered as a viable mechanism of pulsar radio emission generation (Kazbegi et al. 1989, Machabeli & Usov 1989, Lyutikov et al. 1999b). Thus, we conclude that near the surface the X mode cannot be excited and in what follows we concentrate on the possible excitation of the Alfvén mode. ## 3 Cherenkov excitation of the Alfvén mode The Cherenkov excitation of Alfvén waves has been considered in detail by Lyutikov (1999a). On a microphysical scale it is similar to the Cherenkov excitation of the Langmuir waves - the motion of the resonant particle is coupled to the component of the electric field of the wave along the magnetic field (and the velocity of the particle). It is possible only for oblique propagation since Alfvén waves which have a component of electric field parallel to the external magnetic field $`e_z\mathrm{sin}\theta `$. Using resonance condition (3) and the low frequency asymptotics of Alfvén waves (1) we infer that that the possibility of the Cherenkov excitation of the Alfvén waves by the particles from the primary beam depends on the parameter $$\mu =\frac{2\gamma _b\omega _p}{\omega _B}=2^{3/2}\gamma _b\sqrt{\frac{\lambda \mathrm{\Omega }}{\omega _B}}=\mathrm{\hspace{0.17em}2}^{3/2}\sqrt{\frac{\lambda \mathrm{\Omega }}{\omega _B}}\frac{\gamma _b^{}}{\gamma _p^{3/2}}=\mathrm{\hspace{0.17em}5}\times 10^3\left(\frac{r}{R_{NS}}\right)^{3/2}=\{\begin{array}{cc}<1,\hfill & \text{ if }\left(\frac{r}{R_{NS}}\right)50\hfill \\ >1,\hfill & \text{ if }\left(\frac{r}{R_{NS}}\right)50\hfill \end{array}$$ (5) Alfvén waves can be excited by Cherenkov resonance only for $`\mu <\mathrm{\hspace{0.17em}1}`$, when the phase velocity of the fast particles may become equal to the phase velocity of obliquely propagating Alfvén waves (see Figs. Excitation of Alfvén Waves and Pulsar Radio Emission and Excitation of Alfvén Waves and Pulsar Radio Emission). When $`\mu >\mathrm{\hspace{0.17em}1}`$ the particles move faster that the beam and, due to the specific dispersion of Alfvén waves, cannot resonate with them. In the outer parts of magnetosphere ( $`r50R_{NS}`$) parameter $`\mu `$ becomes much larger than unity $`\mu 1`$ so Alfvén waves cannot be excited by Cherenkov interaction. The resonant condition for the Cherenkov excitation of Alfvén waves (Eq. 3) may be solved for $`k_{}=k\mathrm{sin}\theta `$: $$k_{}=\omega _p\sqrt{\frac{1}{2\gamma _b^2}\frac{\omega _p^2}{\omega _B^2}}$$ (6) As expected, excitation is limited to $`\frac{\omega _p^2}{\omega _B^2}<\frac{1}{2\gamma _b^2}`$, e.i. for $`r50R_{NS}`$. The growth rate for the Cherenkov excitation of Alfvén waves (which occurs in the hydrodynamic regime) has been calculated in Lyutikov (1999) (see also Godfrey et al. 1975): $$\mathrm{\Delta }=\frac{\sqrt{3}\omega _p^{\frac{1}{3}}\omega _{GJ}^{}{}_{}{}^{\frac{2}{3}}\mathrm{cot}\theta }{2^{\frac{7}{6}}\gamma _b^{\frac{8}{3}}}$$ (7) where $`\omega _{GJ}^2=4\pi e^2n_{GJ}/m`$ is the plasma frequency associated with the Goldreich-Julian density. It is very similar to the growth of the Cherenkov excitation of Langmuir waves. This result is expected since the microphysics of the Cherenkov excitation of Langmuir and Alfvén waves is the same - coupling of the parallel (to the magnetic field) motion of particles to the parallel component of the electric field of the wave. Consequently, the growth rate given by Eq. (7) for the Alfvén wave excitation suffers from the same problem as Langmuir wave excitation: it is strongly suppressed by large Lorentz factor of the primary beam. We thus conclude that Cherenkov excitation of the Alfvén waves is ineffective. ## 4 Cyclotron excitation of the Alfvén mode The other possibility of excitation of Alfvén waves is by the anomalous cyclotron resonance (Tsytovich & Kaplan 1972, Hardee & Rose 1978, Lyutikov 1999a). On a microphysical scale, during an emission at the anomalous cyclotron resonance, a particle undergoes a transition up in Landau levels coupling transverse velocity to the electric field of the wave (Ginzburg & Eidman 1959). In case of low frequency ($`\omega _A\omega _p`$) waves, the the resonance condition for the cyclotron excitation of Alfvén reads $$k_{res}c\mathrm{cos}\theta \left(\frac{1}{2\gamma _b^2}\frac{\omega _p^2}{\omega _{B}^{}{}_{}{}^{2}}\frac{k^2c^2\mathrm{sin}^2\theta }{4\omega _p^2}\right)+\omega _B/\gamma _b=\mathrm{\hspace{0.17em}0},\text{ }k_{res}c\omega _p$$ (8) If the third term is much larger than the first two (this happens for the angles of propagation larger than some critical angles $`\frac{\omega _p}{\gamma _b\omega }`$ and $`\frac{\gamma _b\omega _p^4}{\omega _{B}^{}{}_{}{}^{4}}`$ ) the resonance occurs at $$k_{res}c=\left(\frac{4\omega _p^2\omega _B}{\gamma _b\mathrm{cos}\theta \mathrm{sin}^2\theta }\right)^{1/3}\frac{\omega _p}{\mu ^{1/3}}\left(\frac{1}{\mathrm{cos}\theta \mathrm{sin}^2\theta }\right)^{1/3}$$ (9) The condition $`\omega _{res}\omega _p`$, which guarantees that Alfvén waves are not damped by the Cherenkov interaction with the bulk plasma particles, requires $`\mu \frac{1}{\mathrm{cos}\theta \mathrm{sin}^2\theta }`$, or equivalently, $`r50R_{NS}`$. This is a serious restriction: Alfvén waves cannot be excited at lower altitudes since then the cyclotron resonance occurs in the region where Alfvén waves are strongly damped due to Cherenkov resonance with the bulk particles (Arons & Barnard 1986). Cyclotron excitation of Alfvén waves occurs in kinetic regime (Lyutikov 1999a). The growth rate is $$\mathrm{\Gamma }=\frac{\pi }{4}\frac{\omega _b^2}{\omega _{res}\mathrm{\Delta }\gamma }$$ (10) where $`\mathrm{\Delta }\gamma `$ is the scatter in Lorentz factors of the resonant particles and the resonant frequency $`\omega _{res}`$ follows from Eq. (9). Formally, growth rate for the cyclotron instability on Alfvén waves (Eq (10)) is the same as the growth rate for the the cyclotron instability on the high frequency transverse waves (Lyutikov et al. 1999b, Kazbegi et al. 1989), which occurs in the outer parts of the pulsar magnetosphere. The important difference is that the cyclotron instability on Alfvén waves can occur at lower altitudes, where the density of the resonant particles is higher. <sup>2</sup><sup>2</sup>2In the case of Alfvén waves we use the dispersive correction $`k_{}^2c^2\mathrm{sin}^2\theta /\omega _p^2`$ instead of $`\omega _p^2/\omega _B^2`$. Numerically, for the emission generated at $`r50R_{NS}`$ and a quite narrow primary beam ($`\mathrm{\Delta }\gamma 10^2`$ \- see Lyutikov et al. 1999a), we have $$\mathrm{\Gamma }10^5\mathrm{sec}^1$$ (11) Thus, we conclude that the growth rate of the cyclotron instability on the Alfvén waves may be considerable enough to account for the high brightness pulsar radio emission. ## 5 Wave conversion There are two fundamental problem with Alfvén waves - they cannot escape from plasma and they are damped on the Cherenkov resonance with plasma particles when their frequency becomes comparable to $`\omega _p`$. As Alfvén waves propagate into decreasing plasma density, their frequency will eventually become comparable to the plasma frequency (Barnard & Arons 1986). Before that, Alfvén waves have to be converted into escaping modes. The conversion should take place at such radii that the local plasma frequency, transformed into the pulsar frame, is still larger than the observed frequencies: $`r500R_{NS}`$. There are two generic types of conversion - linear and nonlinear. Linear conversion occurs in the regions where WKB approximation for wave propagation is not satisfied (e.g., Zheleznyakov 1996). Nonlinear conversion is due to the wave-wave or wave-particles interaction. In order to escape absorption at the Cherenkov resonance the Alfvén waves should converted into either superluminous ordinary waves or into X modes that does not have a component of the electric field along the external magnetic field. ### 5.1 Linear conversion Linear conversion of waves occurs when dispersion curves of two modes approach each other closely on the $`\omega k`$ diagram. Effective conversion occurs when the distance between two dispersion curves becomes comparable to the ”width” of the dispersion curve. Several process can contribute to the broadening of the dispersion curve. First, the inhomogeneity of the medium induces a width $`\frac{\delta \omega }{\omega }\frac{1}{kL}`$ where $`L`$ is a typical inhomogeneity scale. Inhomogeneities can be due both to large scale (of the order of the light cylinder radius) density fluctuations, excited by temporal and special modulations of the flow, and due to the small scale plasma turbulence. Linear conversion of Alfvén waves into X mode is impossible because of their different polarizations. Linear conversion of Alfvén waves into O mode can occur only when the frequency of the Alfvén waves approaches plasma frequency, e.i. exactly at the moment when Alfvén waves become strongly damped at the Cherenkov resonance with the bulk plasma. The effectiveness of the conversion then depends on the not very well known details of the bulk plasma distribution function (e.g., distributions with considerable high energy tail will tend to damp the waves stronger), so no decisive argument about the effectiveness of such conversion can be made. Secondly, the wave turbulence results in effective collisions (with frequency $`\nu _c`$) of Alfvén waves with each other; in this case the induced width is $`\delta \omega \nu _c`$. <sup>3</sup><sup>3</sup>3Terminology here may be a bit confusing: the frequency of collisions, of course, depends on the total energy density of turbulence. For a given turbulent energy density $`W_{tot}`$ the typical wave-wave collision frequency (based on a three wave interaction) can be estimated as $$\nu _c\left(\frac{e}{mc}\right)^2\frac{W_{tot}}{\omega _p}$$ (12) Effective conversion occurs when this frequency is of the order of the minimum difference between the dispersion curves. In the case of strong turbulence the the collision frequency becomes comparable to the frequency of the waves near the conversion point: $`\nu \omega _p`$. In that case the required wave energy density is approximately equal to the energy in plasma. Assuming that the energy density of plasma is of the order of the thermal energy, we find $$W_{tot}=\left(\frac{\omega _pmc}{e}\right)^2nmc^2$$ (13) In addition, in the pulsar plasma the energy in the secondary plasma is approximately equal to the energy in the beam. Then, the energy density given by Eq. (13) is of the order of the beam energy. The amount of energy lost by a beam due to the wave excitation depends on the complicated physics of the instability saturation mechanisms, but we can reasonably expect that to be $`110\%`$. In this case the the collisional frequency $`\nu _c0.010.1\omega _p`$ and the collisional conversion of Alfvén waves into O mode can be marginally effective. Another mechanism of linear wave transformation is related to the presence of velocity shear in the flow (Arons & Smith 1979, Mohajan et al. 1997, Chagelishvili et al. 1997). We can reasonably expect that the flow of pair plasma has some shear associated with the plasma generation in the polar gap. Given a strong shear Langmuir waves become coupled to the escaping O modes. Whether the shear expected in the polar outflow can provide enough coupling remains uncertain. ### 5.2 Nonlinear conversion #### 5.2.1 Wave - wave interaction Consider a merger of two Alfvén waves into escaping X or O mode. For the three wave processes to take place, the participating waves should satisfy resonance conditions (conservation of energy and momentum) and more subtle conditions on the polarization determined by the matrix elements of the third order nonlinear current (Melrose 1978 Eqs. (10.105) and (10.125)). From the energy and momentum conservation it is easy to see that only oppositely propagating Alfvén waves can merge into X or O mode. In calculating the matrix element of the three-wave interaction two simplifications are possible in the case of strongly magnetized electron-positron plasma. First, nonlinear current terms which are proportional to an odd power of the sign of the charge will cancel out. Secondly, we can make an expansion in powers $`1/\omega _B`$ and keep the lowest order. Under these assumption the matrix element becomes $$V\frac{ie}{mc}\frac{\omega _p}{\omega _B}$$ (14) Given the matrix element (14) we can find the the probability of emission in the random phase approximation (which assumes that radiation is broadband) (Melrose 1978) $$uV^2\omega _p=\frac{e^2}{m^2c^2}\frac{\omega _p^3}{\omega _B^2}$$ (15) Then, again assuming that the energy density of plasma is of the order of the thermal energy, the characteristic nonlinear decay time is $$\mathrm{\Gamma }\frac{\omega _p^3}{\omega _B^2}=2^{3/2}\mathrm{\Omega }\sqrt{\frac{\lambda _M^3\mathrm{\Omega }}{\omega _B}}2\times 10^2\left(\frac{r}{R_{NS}}\right)^{3/2}$$ (16) (compare with Mikhailovskii 1980). So that at the distance $`r50100R_{NS}`$ the nonlinear conversion of Alfvén waves is only marginally possible. The nonlinear conversion of Alfvén waves at lower altitudes is impossible because of the small growth rate. Thus, we come to a conclusion that though in principle the Alfvén waves can be converted into escaping modes by nonlinear selfinteraction, the conversion must occur at comparatively large radii $`r50100R_{NS}`$. There is another serious problem with nonlinear conversion - it requires a presence of strong backward propagating waves. Presence of such backward propagating waves is not obvious in the pulsar magnetosphere. They should either be excited by a backward propagating fast particles, whose origin cannot be simply justified, or be a result of a scattering of the initial forwards propagating waves. The Thompson scattering is strongly suppressed in the magnetized plasma and is probably ineffective, but induced scattering (see below) may provide such backward propagating waves. #### 5.2.2 Induced scattering Induced scattering of longitudinal waves in pulsar magnetosphere have been considered by Machabeli (1983) and Lyubarsky & Petrova (1996). They showed that it may be an effective process of transferring energy from the Alfvén branch to the O mode. There are several problems with this mechanism. First, the effective induced scattering of Alfvén waves on plasma particles in a superstrong magnetic field occurs in the same region where waves become strongly damped, the question which process is more effective (scattering or damping) again strongly depends on the unknown details of the plasma distribution function. Secondly, if scattering is effective, the induced scattering transfers energy to smallest wave vectors of the O mode (Langmiur condensate). Propagation and escape of the O mode has not been properly investigated, but since the O mode with initially small wave vector has a very small index of refraction ($`n0`$) it will be strongly refracted (by the angle $`\pi `$) as it converts into a vacuum mode with ($`n1`$). This strong refraction would then contradict the observed narrow pulsar profiles. Summarizing this section, we conclude that at this point we cannot make a decisive statement whether linear or nonlinear conversion of Alfvén wave into escaping modes is effective. ## 6 Conclusion The results of this work confirm the conclusion of Lyutikov (1999a) that in the pulsar magnetosphere the electromagnetic cyclotron instabilities are the most likely candidates for the pulsar radio emission generation. These instabilities develop on the X and O modes (in the outer regions of the pulsar magnetosphere) and on the Alfvén mode (possibly in the lower, $`r50100R_{NS}`$, regions). Cyclotron instabilities on the X and O modes have smaller (than on the Alfvén waves) growth rate, but generate waves that can directly escape from the plasma. The growth rates of the cyclotron instability on Alfvén waves can be very large, but the complications of the wave conversion and absorption in the outflowing plasma, which depend on the unconstrained details of the plasma distribution function, may put the model based on the Alfvén wave excitation at disadvantage. Both excitation and the nonlinear conversion of Alfvén waves is possible only for $`r50100R_{NS}`$. This is a comparatively large altitudes. If we suppose that the opening angle of the magnetic field lines controls the width of the pulsar emission beam (e.g, Rankin 1992), then the value of the opening angle would imply emission right near the surface $`rR_{NS}`$. If taken at a face value, this interpretation would exclude the Alfvén waves as a source of observed radio emission. The comparatively large radii of emission, $`r50100R_{NS}`$ and larger, may not be that unacceptable from the observational point of view as well (Lyutikov 1999b). Such effects as ”wide beam” geometry (Manchester 1995), emission bridge between some widely separated pulses, extra peaks in Crab at high frequencies maybe naturally explained by large altitudes of emission. The results of this work stress one again the difficulties of wave excitation at very small altitudes: Alfvén waves, as well as ordinary and extraordinary modes, cannot be excited beam instabilities or converted nonlinearly into escaping modes at low altitudes. On the other hand, both excitation and conversion should occur at larger $`r50R_{NS}`$ altitudes which remain our prefered regions for the generation of pulsar radio emission. I would like to thank George Machabeli for discussions and useful comments. FIGURE CAPTIONS Fig. 1(a). Resonances on the Alfvén mode for $`\mu <1`$. Fig. 1(b). Resonances on the Alfvén mode for $`\mu >1`$.
no-problem/9910/hep-ex9910058.html
ar5iv
text
# 1 Introduction ## 1 Introduction Exotic hadrons are named those hadrons, which structure is different from ordinary $`q_1\overline{q_2}`$ structure for mesons and $`q_1q_2q_3`$ for baryons. Their exotic nature could reveal itself in unusual properties of these hadrons like some suppressed or enhanced decays, too wide or too narrow widths, quantum numbers, forbidden in conventional structure, etc. Up to now among huge variety of hadrons about 10 candidates were found, which look like exotic states. In scalar meson sector such candidates are the lowest lying states $`f_0(980)`$ and $`a_0(980)`$. The main reasons, which lead to the conclusion on possible exotic structure of $`f_0(980)`$ and $`a_0(980)`$, are their suppressed production in $`J/\mathrm{\Psi }`$ decays, low values of $`\gamma \gamma `$-width, too low masses. Three models are used to describe $`f_0`$ and $`a_0`$ mesons \- conventional $`q\overline{q}`$ model, molecular model ($`K\overline{K}`$) and 4-quark model ($`q\overline{q}q\overline{q}`$). The $`q\overline{q}`$ model is hardly consistent with experimental data. More than 10 years ago the radiative decays $`\varphi f_0\gamma ,a_0\gamma `$ were proposed as a new sensitive test of $`f_0`$ and $`a_0`$ structure . These decays were studied recently in the reactions: $$e^+e^{}\varphi \pi ^0\pi ^0\gamma ,\pi ^+\pi ^{}\gamma $$ (1) $$e^+e^{}\varphi \eta \pi ^0\gamma $$ (2) which could proceed via radiative decay $`\varphi (1020)f_0\gamma ,a_0\gamma `$. We measured the branching ratios of these decays and of other rare decays of $`\varphi `$ and $`\rho (770)`$, $`\omega (783)`$. Other experimental data from Novosibirsk and conference contribution from IHEP, Protvino are reviewed in this talk. ## 2 Experiment The experiments to study the reactions (1), (2) have been carried out at VEPP-2M collider in the energy range 2E from 0.4 to 1.4 GeV. VEPP-2M is the lowest energy $`e^+e^{}`$ collider, operating in Novosibirsk since 1974. The collider luminosity $`L`$ sharply depends on its energy $`LE^4`$. At the energy of $`2E=M_\varphi `$ the maximum luminosity $`L_{max}=510^{30}cm^2s^1`$. At present two detectors CMD-2 and SND, located opposite each other, take data. CMD-2 is a magnetic detector (fig.1) with superconductive solenoid and 20 layer drift chamber with jet cell structure. Electromagnetic calorimeter consists of 892 CsI(Tl) crystals in barrel and of 680 BGO crystals in endcaps. The muon identification is provided by 4 layers of streamer tubes inside the yoke. The CMD-2 detectors operates at VEPP-2M since 1992 with $`27pb^1`$ of collected luminosity. SND is a general purpose nonmagnetic detector (fig.2). The main part of SND is three-layer spherical electromagnetic calorimeter with 1625 NaI(Tl) crystals of 3.6 t total weight. Detector includes also a 10-layer drift chamber and outer muon system, consisting of streamer tubes and plastic scintillation counters. SND resembles famous Crystal Ball detector constructed in SLAC, but unlike Crystal Ball it has a 3-layer crystal calorimeter, which provides better particle recognition $`e/\pi /\mu `$ and $`\gamma /K_L`$. The integrated luminosity accumulated by SND since 1995 is about $`27pb^1`$. Both detectors take data in parallel. Total number of produced resonances is $`N_\varphi 4.510^7`$, $`N_\rho 410^6`$, $`N_\omega 2.510^6`$. About half of the total time was used for scanning the energy range between resonances with the goal of the precise measurement of the quantity $`R=\frac{\sigma (e^+e^{}hadrons)}{\sigma (e^+e^{}\mu ^+\mu ^{})}`$ and study particular channels of $`e^+e^{}`$-annihilation. ## 3 Evidence of the decays $`\varphi f_0\gamma ,a_0\gamma `$ The first search for the decays $`\varphi f_0\gamma ,a_0\gamma \pi ^0\pi ^0\gamma ,\eta \pi ^0\gamma `$ was carried out with ND detector at VEPP-2M collider in 1987 . In that early experiment the upper limits on the decays branching ratios at a level $`10^3`$ were placed. Later it was shown by N.Achasov , that study of these decays can provide a unique information on the structure of lightest scalars $`f_0`$ and $`a_0`$. Subsequent studies proved this idea. In 1995 the experiments started at VEPP-2M with SND detector , which has photon detection capabilities much better than ND. Study of the decays $`\varphi f_0\gamma ,a_0\gamma \pi ^0\pi ^0\gamma ,\eta \pi ^0\gamma `$ was one of important goals of SND detector. In 1997 the first results from SND were reported with evidence of the processes (1), (2). The reaction (1) was studied by SND in neutral final state : $$e^+e^{}\varphi \pi ^0\pi ^0\gamma $$ (3) so both processes (1) and (2) were studied in 5 photon final state. The main background comes from the following reactions: $$e^+e^{}\varphi \eta \gamma 3\pi ^0\gamma $$ (4) $$e^+e^{}\omega \pi ^0\pi ^0\pi ^0\gamma $$ (5) $$e^+e^{}K_SK_Lneutrals$$ (6) In order to suppress background the SND events were selected with 5 photons, satisfying energy-momentum balance. The final state should contain $`2\pi ^0`$ for the process (3) or $`\eta \pi ^0`$ for (2). The contribution of the reaction (4) into the process (2) was suppressed by the cut on the maximum energy of the photon in an event. For suppression of the process (5) the cuts were imposed on the $`\pi ^0\gamma `$ effective mass, excluding the region around $`\omega (783)`$-mass. The processes (4) and (6) were suppressed by the parameter, describing the transverse shower profile in the calorimeter . Under chosen selection criteria the detection efficiency was determined to be $`15\%`$ and $`4\%`$ for the processes (3) and (2) respectively. In the experimental data sample with integrated luminosity $`4pb^{}1`$ about 150 events of process (3) were found. The number of found events of the process (2) in the full SND sample of $`210^7`$ produced $`\varphi `$ was $``$ 70 events. The angular distributions of the processes (2), (3) were studied in refs. . It was shown, that the distribution over polar angle $`\theta `$ of the recoiled photon is proportional to ($`1+cos^2\theta `$). The angle $`\psi `$ was defined as an angle between a pion direction in the $`\pi ^0\pi ^0`$ or $`\eta \pi ^0`$ center of mass reference frame and the recoiled photon direction. The distribution over $`cos\psi `$ was found to be flat. So, the experimental data confirm the conclusion that $`\pi ^0\pi ^0`$ and $`\eta \pi ^0`$ system are produced in scalar state. The study the $`\pi ^0\pi ^0`$ and $`\eta \pi ^0`$ mass spectra was important for the interpretation of the data. Figs. 4,4 show the obtained mass spectra after background subtraction and detection efficiency corrections. Both pictures show the considerable rise in the spectra at higher masses. The visible location of the peak in spectra is near 960 MeV. The table with numerical values of $`\pi ^0\pi ^0`$ mass spectrum can be found in . Summing data from the mass spectra in figs 4,4 and CMD-2 data one can obtain the branching ratios for particular mass ranges: 1 - SND result for $`m_{\pi \pi }>900`$ MeV : $$B(\varphi \pi ^0\pi ^0\gamma )=(0.50\pm 0.06\pm 0.06)10^4$$ (7) 2 - SND result for the whole mass spectrum : $$B(\varphi \pi ^0\pi ^0\gamma )=(1.14\pm 0.10\pm 0.12)10^4$$ (8) here and below the first error is statistical while the second one is systematic, which is determined mainly by the background subtraction error, detection efficiency error and normalization error. 3 - CMD-2 result for $`m_{\pi \pi }>700`$ MeV : $$B(\varphi \pi ^0\pi ^0\gamma )=(0.92\pm 0.08\pm 0.06)10^4$$ (9) 4 - SND result for $`m_{\eta \pi ^0}>950`$ MeV : $$B(\varphi \eta \pi ^0\gamma )=(0.36\pm 0.11\pm 0.03)10^4$$ (10) 5 - SND result for the whole mass spectrum : $$B(\varphi \eta \pi ^0\gamma )=(0.87\pm 0.14\pm 0.07)10^4$$ (11) 6 - CMD-2 result for the whole mass spectrum : $$B(\varphi \eta \pi ^0\gamma )=(0.90\pm 0.24\pm 0.10)10^4$$ (12) All results listed above are practically model independent, because they do not use an assumption about $`f_0`$ or $`a_0`$ contributions into the final state. Then, assuming $`f_0`$ and $`a_0`$ dominance in the final state, using relation based on isotopic invariance $`B(\varphi \pi ^+\pi ^{})=2B(\varphi \pi ^0\pi ^0)`$, and neglecting the decay $`\varphi KK\gamma `$, we can obtain for the decay $`\varphi f^0\gamma `$ and $`\varphi a^0\gamma `$: 7 - SND result : $$B(\varphi f^0\gamma )=(3.42\pm 0.30\pm 0.36)10^4$$ (13) 8 - SND result : $$B(\varphi a^0\gamma )=(0.87\pm 0.14\pm 0.07)10^4$$ (14) 9 - CMD-2 result : $$B(\varphi f^0\gamma )=(2.90\pm 0.21\pm 1.54)10^4$$ (15) The analysis of the $`\pi ^0\pi ^0`$ mass spectrum was done on the base of the work . The spectrum was described by a sum of contributions from $`f_0`$ and $`\sigma `$ mesons . The width of $`f_0`$ meson in the approximation of “broad resonance” depends on the product of coupling constants $`g_{\varphi KK}g_{fKK}`$. The $`f_0`$ fit parameters were mass $`m_f`$, coupling constant $`\frac{g_{fKK}^2}{4\pi }`$ and the ratio of coupling constants $`\frac{g_{fKK}^2}{g_{f\pi \pi }^2}`$. The optimal fit parameters were obtained : $$m_f=971\pm 6MeV,\mathrm{\Gamma }_f=188_{33}^{+48}MeV,\frac{g_{fKK}^2}{4\pi }=2.10_{0.56}^{0.88}GeV^2,\frac{g_{fKK}^2}{g_{f\pi \pi }^2}=4.1\pm 0.9.$$ (16) The statistical accuracy did not allow to define the contribution of $`\sigma `$ in the fit, so in (16) $`\sigma `$ was excluded from the fit. The $`\eta \pi ^0`$ mass spectrum was fitted also by the formulae from , but because of lower statistics the ratio of coupling constants was fixed $`\frac{g_{a\eta \pi }}{g_{aKK}}=0.85`$ . The following $`a_0`$ optimal parameters were obtained: $$m_a=992_7^{+22}MeV,\frac{g_{aKK}^2}{4\pi }=1.09_{0.24}^{+0.33}GeV^2$$ (17) The obtained value of $`a_0`$ mass does not contradict to the PDG Table value. If the $`a_0`$ mass is fixed, one could obtain more accurate value of coupling constant: $$\frac{g_{aKK}^2}{4\pi }=0.83\pm 0.13GeV^2$$ (18) CMD-2 carried out the search for the decay $`\varphi \pi ^+\pi ^{}\gamma `$ in the reaction : $$e^+e^{}\varphi \pi ^+\pi ^{}\gamma $$ (19) with the goal to find a contribution of the $`f_0\pi ^+\pi ^{}`$ channel in the final state. On the contrary to the neutral channel $`f_0\pi ^0\pi ^0`$, there is a significant background from the nonresonant process $`e^+e^{}\rho \gamma \pi ^+\pi ^{}\gamma `$ and interference with the processes $`e^+e^{}\varphi \pi ^+\pi ^{}\gamma `$ and $`e^+e^{}\rho \pi ^+\pi ^{}\gamma `$. It was found, that the process (19) cross section energy dependence exhibits interference wave near point $`2E=M_\varphi `$. The recoil photon energy spectrum (fig.6) shows a peak at $`E_\gamma 220`$ MeV due to the process $`e^+e^{}\rho \gamma `$ and an enhancement at $`E_\gamma 50`$ MeV from the decay $`f_0\pi ^+\pi ^{}`$, which roughly corresponds to the mass difference between $`\varphi `$ and $`f_0`$ mesons. To obtain the branching ratio $`B(\varphi \pi ^+\pi ^{}\gamma )`$ the fitting of photon spectra at different energy points was done using formulae from the work , which include contributions of the background reactions and the $`f_0\pi ^+\pi ^{}`$ decay. The optimal value of $`f_0`$ mass was $`m_f=976\pm 5`$ MeV, the branching ratio: $$B(\varphi f_0\gamma )=(1.93\pm 0.46\pm 0.59)10^4$$ (20) ## 4 Discussion on the decays $`\varphi f_0\gamma ,a_0\gamma `$ In the list of new VEPP-2M data the results (7)-(12) are model independent, because they are based on the total number of events. Other results - (13)-(18) use different assumptions, for instance, $`BR(\varphi f_0\gamma )`$ (13),(15) is based on assumption of $`f_0`$ dominance in the final $`\pi ^0\pi ^0`$ state. The main parameters of $`f_0`$ and $`a_0`$ like their masses, widths, coupling constants were obtained from the description of these decays, proposed by N.Achasov , so these parameters are also strongly model dependent. Below we give a conclusion on the nature of $`f_0`$ and $`a_0`$ scalars, which follows from the model dependent data (13)-(18) and the work . There are three main models, describing structure of $`f_0`$ and $`a_0`$ scalars: $`q\overline{q}`$ model ($`n\overline{n}`$ or $`s\overline{s}`$), molecular model $`(K\overline{K})`$ and 4-quark model $`(q\overline{q}q\overline{q})`$. The general accepted opinion is that $`f_0`$ and $`a_0`$ are difficult to fit into $`q\overline{q}`$ model. This opinion is based on the existing experimental data. For instance, the decays $`J/\psi f_0\gamma ,f_0\omega ,a_0\rho `$ are considerably suppressed in comparison with similar decays, where instead of $`f_0`$ and $`a_0`$ the tensor mesons $`f_2`$ or $`a_2`$ are produced. If $`f_0`$ and $`a_0`$ are $`q\overline{q}`$ mesons, their production in $`J/\psi `$ decays should be of the same order as the production of tensor mesons. Another example is two-photon width of $`f_0`$ and $`a_0`$. The experimental value $`\mathrm{\Gamma }0.3`$ keV is smaller than the value 0.6 keV predicted in $`K\overline{K}`$-model and value $`0.6÷15.`$ keV in $`q\overline{q}`$ model. But the four-quark model prediction (0.3 keV) agrees with experiment. The radiative decay $`\varphi f_0\gamma ,a_0\gamma `$ measurements were long awaited as a new test of $`f_0`$ and $`a_0`$ nature. The Table 1 shows the comparison of different models predictions with averaged experimental data at VEPP-2M (see preceding section). The accuracy of the model predictions is about $`50\%`$. The conclusion from the Table 1 is that VEPP-2M data are in good agreement with four-quark model of $`f_0`$ and $`a_0`$. But we remind the reader, that experimental data in the Table 1 assume, that $`f_0`$ and $`a_0`$ dominate in the final state of the reactions (1) and (2). This assumption is in good agreement with experimental spectra, but the present accuracy is not sufficient to exclude contributions of other scalars into the final state. There is one remark, concerning the decay $`\varphi a_0\gamma `$. Its branching ratio is close to that of $`\varphi \eta ^{}\gamma `$. So, $`a_0`$ should contain strange quarks like $`\eta ^{}`$, which is impossible for $`q\overline{q}`$ isovector meson, but is quite natural if $`a_0`$ is a four quark $`q\overline{q}s\overline{s}`$ meson. This discussion was based mainly on the work , where detailed analysis of existing data for $`f_0`$ and $`a_0`$ mesons, regarding their nature, is given. ## 5 Other rare $`\varphi `$ decays The large number of produced $`\varphi `$ mesons at both SND and CMD-2 detectors ($`N_\varphi 4.510^7`$) allows to carry out the search of rare $`\varphi `$ decay. The long awaited decay $`\varphi \eta ^{}(958)\gamma `$ was first observed by CMD-2 . In the decay chain $`\varphi \eta ^{}\gamma ,\eta ^{}\eta \pi ^+\pi ^{},\eta \gamma \gamma `$ the branching ratio was $`(8.2_{1.9}^{+2.1})10^5`$. For another chain $`\eta ^{}\pi ^+\pi ^{}\pi ^0(\gamma )`$ the CMD-2 result was $`(5.8\pm 1.8)10^5`$ . Later SND confirmed the existence of the decay $`\varphi \eta ^{}\gamma `$ with the branching ratio of $`6.7_{2.9}^{+3.4}10^5`$ . The clear signature of the decay $`\varphi \eta ^{}\gamma `$ is demonstrated in figs.8, 8. The averaged value of branching ratio is $`BR(\varphi \eta ^{}\gamma )=(6.9\pm 1.2)10^5`$. The statistical significance is greater than 5 standard deviations. This result is in agreement with nonrelativistic quark model prediction of $`(6÷10)10^5`$ . At present level of the accuracy no significant admixture of gluonim in $`\eta ^{}`$ is seen. The $`\varphi \pi ^+\pi ^{},\omega \pi ^0`$ decays are double suppressed - by isospin invariance and OZI rule. The $`\varphi \pi ^+\pi ^{}`$ decay was already observed. Its PDG Table value is $`Br(\varphi \pi ^+\pi ^{})=(0.8_{0.4}^{+0.5})10^4`$, while the second decay $`\varphi \omega \pi ^0`$ was not observed yet. SND performed the search for this decay in the the reaction : $$e^+e^{}\omega \pi ^0\pi ^+\pi ^{}\pi ^0\pi ^0$$ (21) The clear interference pattern in energy dependence of the process (21) was observed (fig.6). The decay amplitudes and branching ratios are the following : $$Re(Z)=0.112\pm 0.015,Im(Z)=0.104\pm 0.022,Br(\varphi \omega \pi ^0)=(4.6\pm 1.2)10^5$$ (22) The theoretical prediction for the branching ratio is about twice larger. In our case the real part Re(Z) is too low. The observed disagreement could be due to existence of direct $`\varphi \omega \pi ^0`$ transition or nonstandard mixing of light vector mesons. In similar way was studied the cross section of the process $$e^+e^{}\pi ^+\pi ^{}$$ (23) The results of fitting are : $$Re(Z)=0.061\pm 0.005,Im(Z)=0.042\pm 0.006,Br(\varphi \pi ^+\pi ^{})=(7.1\pm 1.0\pm 1.0)10^5$$ (24) The accuracy of the measurement is about 3 times higher than in PDG Tables. But here again SND result for real part Re(Z) is lower than predicted in and preliminary result of CMD-2 : $$Br(\varphi \pi ^+\pi ^{})=(18.1\pm 2.5\pm 1.9)10^5$$ (25) The disagreement between CMD-2 and SND in $`Br(\varphi \pi ^+\pi ^{})`$ is 3 standard deviations. Both detectors studied the rare decay $`\varphi \mu ^+\mu ^{}`$. The result of SND is $$Br(\varphi \mu ^+\mu ^{})=(33.0\pm 4.5\pm 3.2)10^5$$ (26) The result of CMD-2 is $$Br(\varphi \mu ^+\mu ^{})=(28.0\pm 3.0\pm 4.6)10^5$$ (27) The full review of other $`\varphi `$ rare decay studied at VEPP-2M can be found in refs . ## 6 Decays $`\rho ,\omega \pi ^0\pi ^0\gamma `$ The decays $`\rho ,\omega \pi ^0\pi ^0\gamma `$ are of interest for the study of the possible low-mass scalar resonance $`\sigma `$, decaying into $`\pi \pi `$ final state. Some contributions are expected also from the $`\rho ,\omega \omega \pi ^0,\rho \pi \pi ^0\pi ^0\gamma `$ decays. In our work , where the $`\rho \pi ^+\pi ^{}\gamma `$ decay was studied, an enhancement was observed in the high end of the photon bremsstrahlung spectrum, which can be interpreted as a manifestation of a light bound state, possibly $`\sigma `$ resonance. Later in Protvino, the decay $`\omega \pi ^0\pi ^0\gamma `$ was observed with the branching ratio $`(7.2\pm 2.5)10^5`$, which is $`3`$ times larger, than expected in Vector Dominance Model (VDM). In the recent work we studied neutral final state in the reaction $`e^+e^{}\rho ,\omega \pi ^0\pi ^0\gamma 5\gamma `$. Fig. 10 shows a 2-dimensional plot of the best neutral pion candidates, found in 5-photon final state. The measured cross section was fitted by the sum of the Breit–Wigner contributions from $`\omega `$ and $`\rho `$ resonances. The Born cross section and fitting curves are shown on the fig.10. One can see, that the measured cross section considerably exceeds VDM prediction. The fit parameters are the following: $$BR(\omega \pi ^0\pi ^0\gamma )=(8.4_{3.1}^{+4.9}\pm 3.5)10^5,\mathrm{\Gamma }_{\omega \pi ^0\pi ^0\gamma }0.7keV$$ (28) $$BR(\rho \pi ^0\pi ^0\gamma )=(4.2_{2.0}^{+2.9}\pm 1.0)10^5,\mathrm{\Gamma }_{\rho \pi ^0\pi ^0\gamma }6keV,(without\omega \pi ^0)$$ (29) So, the result (28) confirms the PDG value of $`BR(\omega \pi ^0\pi ^0\gamma )`$. Both branching ratios (28) and (29) are considerably ($`4`$ times) higher than VDM estimates. The possible explanation of this enhancement could be a contribution of light scalar $`\sigma `$, decaying into $`\pi ^0\pi ^0`$. It was suggested by Jaffe , that $`\sigma `$ could be lightest member of the four-quark nonet with the structure $`u\overline{u}d\overline{d}`$. Because of superallowed $`\sigma \pi \pi `$ decay, $`\sigma `$ is very broad. Among other members of four-quark nonet there are $`f_0(980)`$ and $`a_0(980)`$ \- the particles with also superallowed but phase space suppressed decay into $`K\overline{K}`$. So, both $`f_0(980)`$ and $`a_0(980)`$ have a narrow width 50–100 MeV. The further investigation of the decays $`\varphi ,\rho ,\omega \pi ^0\pi ^0\gamma `$ and in particular study of the $`\pi ^0\pi ^0`$ decay mass spectra could clarify the nature of light scalar mesons. ## 7 The process $`e^+e^{}\pi ^+\pi ^{}\pi ^0`$ above $`\varphi `$ resonance The energy region above $`\varphi `$ was scanned with the goal to measure $`e^+e^{}`$\- annihilation cross sections and quantity R - the ratio of total hadronic cross section to muon pair production cross section. Among the processes under study the process $$e^+e^{}\pi ^+\pi ^{}\pi ^0$$ (30) is of particular interest, because earlier it was measured with poor accuracy and new possible isoscalar vector resonances could be found here. The study of the process (30) was done by SND detector in the energy range 2E=1.04–1.38 GeV . The measured cross section, shown in fig.12,12 is in agreement with previous data from ND experiment and well matches DM2 measurements at higher energies . The systematic error in the cross section is $`10\%`$, but it grows up to $`50\%`$ closer to $`\varphi `$ because of radiative corrections. The Born cross section in fig.12 shows a broad peak with the visible position at $`2E1200`$ MeV. To describe the cross section in terms of sum of vector mesons, the fit was done including $`\omega (783)`$, $`\varphi (1020)`$, $`\omega (1600)`$ and an additional $`\omega `$-like state, named $`\omega (1200)`$, with its mass and width set free. For two latter resonances the widths were assumed independent of energy. The optimal fit parameters strongly depended on interference phases choice. The best fit occurs at the following phase set: $`\varphi _{\omega (783)}=0`$, $`\varphi _{\varphi (1020)}=\pi `$, $`\varphi _{\omega (1200)}=\pi `$, $`\varphi _{\omega (1600)}=0`$. The $`\omega (1200)`$ parameters are : $$M_{eff}=1170\pm 10MeV,\mathrm{\Gamma }_{eff}=187\pm 15MeV,\sigma _{max}=7.8\pm 1.0nb,$$ (31) The parameters of the resonance $`\omega (1600)`$ are confirmed by the fit, but another resonance $`\omega (1420)`$ is not seen in our fit. If the existence of $`\omega (1200)`$ is confirmed, the question of its nature arises. It could be either first radial excitation $`2^3S_1`$ or radial excitation (D-wave) $`1^3D_1`$ of $`\omega (783)`$. In any case, new analyses of isoscalar cross sections data are needed to clarify the problem of $`\omega `$ family excitations. ## 8 Project VEPP-2000 A new project is studied now in Novosibirsk. It is planned to replace VEPP-2M ring which has a maximum center of mass energy of $`2E=`$1400 MeV by a new one with the higher energy up to $`2E=`$2000 MeV. Fig. 13 shows the location of the new and the old rings in the VEPP-2M hall. A remarkable feature of the new collider is a round beam optics, where instead of conventional quadrupole lenses the superconductive solenoids are used. The beam itself has equal horizontal and vertical size, which promises the higher luminosity in single bunch mode. The future collider is named VEPP-2000. Its designed luminosity is $`10^{32}cm^2s^1`$ at $`2E=`$2000 MeV and $`10^{31}cm^2s^1`$ at $`2E=`$1000 MeV. The design and construction of VEPP-2000 is planned to start in 2000. The physical program is aimed to detailed study of $`e^+e^{}`$ annihilation processes in the energy range $`2E=`$1–2 GeV. ## 9 Evidence of possible exotic baryon X(2000) Among the contributed papers, there is one, presented by L.Landsberg, IHEP, Protvino , related to the subject of exotic hadrons. In this work the diffractive production of baryon resonances was studied with the SPHINX setup in the reaction: $$p+NYK+N,Y=[\mathrm{\Sigma }^0K^+],\mathrm{\Sigma }^0\mathrm{\Lambda }\gamma $$ (32) The mass spectrum of $`\mathrm{\Sigma }^0K^+`$ shows a clear peak at $`2000`$ MeV, which is referred below as X(2000). The fitting gives more accurate values: $`M_X=1989\pm 6MeV,\mathrm{\Gamma }_X=91\pm 20MeV`$. The statistical significance is more than 10 standard deviations. The production cross section is $`95\pm 20`$ nb. The unusual dynamic properties of X(2000) are the following: 1 - the value $`R=\frac{Br(X\mathrm{\Sigma }K)}{Br(Xnonstrange)}1`$, while for usual $`qqq`$ isobar $`R10^2`$; 2 - the width of X(2000) $`\mathrm{\Gamma }_X100`$ MeV, which is considerably less than for isobars - $`300÷400`$ MeV. All these properties of X(2000) allow to consider it as a serious candidate for pentaquark exotic baryon with hidden strangeness $`uuds\overline{s}`$. Latest data from SPHINX experiment confirmed the existence of X(2000) in another final state $`Y=[\mathrm{\Sigma }^+K^0],\mathrm{\Sigma }p\pi ^0,K^0\pi ^+\pi ^{}`$. New preliminary data from SELEX experiment at Fermilab also supports X(2000). In analysis of the reaction $`\mathrm{\Sigma }^{}+N\mathrm{\Sigma }^{}K^+K^{}+N`$ they observed a peak in $`Y=[\mathrm{\Sigma }^{}K^+]`$ system, with parameters close to X(2000) : $`M_X=1962\pm 12MeV,\mathrm{\Gamma }_X=96\pm 32MeV`$. Now a lot of statistics is accumulated on tapes with upgraded SPHINX detector. The analysis of new data is in progress. ## 10 General Conclusions * Experiments were carried out in Novosibirsk at VEPP-2M $`e^+e^{}`$ collider with two detectors SND and CMD-2 with total integrated luminosity $`50pb^1`$ and total number of produced $`\varphi `$ mesons $`410^7`$. * Electric dipole radiative decays $`\varphi \pi \pi \gamma `$, $`\eta \pi ^0\gamma `$ were observed with branching ratios $`10^4`$, indicating exotic 4-quark structure of lightest scalars $`f_0(980),a_0(980)`$. * Several new rare $`\varphi `$-meson decays were observed with branching fractions $`10^4÷10^5`$, e.g., $`\varphi \omega \pi ^0`$, $`\varphi \eta ^{}\gamma `$, $`\varphi 4\pi `$, $`\varphi \pi ^0e^+e^{}`$,… * A resonance-like structure in $`e^+e^{}\pi ^+\pi ^{}\pi ^0`$ cross section near $`2E`$1.2 GeV was observed, which might be a manifestation of the lightest excited $`\omega `$ state, * The decays $`\rho ,\omega \pi ^0\pi ^0\gamma `$ were seen. Their rates exceed VMD level, which might be a manifestation of lightest scalar state $`\sigma `$(400-1200), decaying into $`\pi ^0\pi ^0`$. * Design and construction of a new VEPP-2000 $`e^+e^{}`$ machine with round beams to replace existing VEPP-2M ring are started in Novosibirsk. The maximum designed energy of the new machine is $`2E`$=2000 MeV, designed luminosity — $`L=110^{32}`$. * In SPHINX experiment, Protvino, a narrow X(2000) state with a width $`\mathrm{\Gamma }`$90 MeV was observed. It is proposed as a candidate for pentaquark exotic baryon $`qqqs\overline{s}`$ with hidden strangeness. The author is grateful to Nikolai Achasov, Vladimir Golubev and Evgeny Solodov for numerous fruitful discussions. Discussion L. G. Landsberg (IHEP, Protvino): What can you say about the two-photon production of the $`a_0`$ and $`f_0`$ mesons? Are these data in agreement with the $`qq\overline{q}\overline{q}`$ or $`q\overline{q}`$ models for these mesons? Serednyakov: The measured two photon widths of $`a_0`$ and $`f_0`$ $`0.3`$ KeV are significantly lower than predictions of $`q\overline{q}`$ model and agree with $`qq\overline{q}\overline{q}`$ model. Norbert Wermes (Bonn University): Are the two detectors at VEPP-2M capable of measuring $`R_{\mathrm{had}}`$? Will they be able to perform a scan in energy? Serednyakov: Both detectors have already accumulated $`50pb^1`$ of data and continue data taking in the energy range from 0,4 to 1.4 GeV. The measurement of $`R`$ is one of the major goals of these experiments. B.F.L. Ward (University of Tennessee): In your table of model predictions vs. experiment, why do you say that value 2.5, for the $`q\overline{q}`$ model, is farther than 20, for the 4-quark model, from the experimental value of 9? Serednyakov: Because the accuracy of theoretical prediction is about $`40÷50\%`$ , so the 4-quark value $`20\pm 10`$ considerably better agrees with experimental value 9 than 2-quark value $`2.5\pm 1.3`$. Harry Lipkin (Weizmann Institute): There are very beautiful data on $`D_sf^0\pi 3\pi `$ from Fermilab and on $`\overline{p}pf^{}\pi 3\pi `$ from CERN. Dalitz plot analyses of these reactions should be available soon. Serednyakov: The data on $`D_sf^0\pi `$ decay show that $`f_0`$ should include $`s`$-quarks. The $`n\overline{n}`$ structure of $`f_0`$ is not supported in $`D_sf^0\pi `$ decay.
no-problem/9910/cond-mat9910477.html
ar5iv
text
# Optimized energy calculation in lattice systems with long-range interactions ## I Introduction The numerical study of systems with long-range interactions is notoriously difficult, due to the large number of interactions that has to be taken into account. Specifically, the number of operations to calculate the energy of a single particle scales as the total number of particles in the system, in contrast to the case of short-range interactions, where the corresponding number of operations is of order unity. This implies that Monte Carlo-based methods are generally restricted to very small system sizes, which are still hampered by strong finite-size effects. Some years ago, this problem was resolved for the case of $`\mathrm{O}(n)`$ spin systems with (ferromagnetic) long-range interactions, for which a dedicated cluster algorithm was developed . Since the efficiency of this algorithm is independent of the number of interactions per spin, speed improvements of several orders of magnitude could be obtained compared to a conventional cluster algorithm. This speed-up pertains to the generation of independent configurations, for which the calculation of the energy is not required. Indeed, a variety of interesting physical results could be obtained by means of this method, see, e.g., Refs. . Whenever one needs to sample the internal energy, however, the improvement is much less dramatic: the major remaining advantage is that one only has to calculate the energy for truly independent configurations, rather than in every Monte Carlo step. Whereas this still implies that one can study systems which are an order of magnitude larger than those that can be accessed via Metropolis-type simulations (cf. Ref. ), one is eventually limited by the fact that the total computing time scales quadratically with the system size. One major disadvantage of this inaccessibility of the energy is the fact that it is not possible to apply histogram interpolations in order to obtain information on thermodynamic quantities over a large parameter space . In this paper, we point out that, for systems with periodic boundary conditions, this problem can be circumvented by calculating the internal energy in momentum space. Thus, one can apply a Fast Fourier Transform (FFT), reducing the total computational effort to $`𝒪(N\mathrm{log}N)`$ for a system containing $`N`$ spins. Indeed, this is a natural choice if one recognizes that the total energy is just given by a (discrete) convolution, which is one of the major applications of the FFT. Remarkably, the computational overhead entailed by the FFT turns out not to be a limiting factor: already for very small systems it is more efficient than a direct calculation of the energy. The remainder of this paper is organized as follows. First, we derive an expression for the energy in terms of the Fourier-transformed spin system and point out that several other observables can be obtained on the fly, at negligible additional cost. We also give a detailed comparison of our approach and the conventional method. Next, we illustrate our approach by means of several new physical results for one-dimensional systems with long-range interactions. We end with a summary of our results. ## II Energy calculation We will now first illustrate our approach for a $`(d=1)`$-dimensional system with an $`n`$-component order parameter, i.e., a generalized $`\mathrm{O}(n)`$ spin chain. This system is described by the Hamiltonian $$=\frac{1}{2}\underset{x=1}{\overset{N}{}}\underset{y=1}{\overset{N}{}}J(xy)𝐒(x)𝐒(y),$$ (1) where the spins $`𝐒(x)`$ are $`n`$-component unit vectors and $`N`$ is the system size. The interaction $`J(x)`$ is defined for all $`x`$. Under the condition that periodic boundary conditions are employed, the effective coupling $`\stackrel{~}{J}(x)`$ between two spins is given by the sum over all periodic copies, $$\stackrel{~}{J}(x)\underset{m=\mathrm{}}{\overset{\mathrm{}}{}}J(x+mN)$$ (2) and hence has a period $`N`$. We set the self energy $`\stackrel{~}{J}(mN)`$, which is just an additive constant in the total energy, equal to zero. Each component of the spin configuration $`𝐒(x)`$ and the interaction $`\stackrel{~}{J}(x)`$ can then be written as a Fourier sum $$f(x)=\frac{1}{\sqrt{N}}\underset{k=0}{\overset{N1}{}}f_ke^{2\pi ikx/N},$$ (3) where the Fourier coefficients $`f_k`$ are obtained from the discrete Fourier transform of $`f(x)`$. By means of the discrete convolution theorem it is then straightforward to show that Eq. (1) can be written as $$=\frac{\sqrt{N}}{2}\underset{k=0}{\overset{N1}{}}\stackrel{~}{J}_k𝐒_k𝐒_k.$$ (4) The essential step is now, that application of the Fast Fourier Transform reduces the computational effort for the calculation of the $`N`$ Fourier coefficients from $`𝒪(N^2)`$ to $`𝒪(N\mathrm{log}N)`$, thus, in principle, greatly speeding up the calculation of the total energy. The sum in (4) adds another $`𝒪(N)`$ operations, but this is compensated for by the fact that one typically also wants to calculate the magnetic susceptibility $`N^1|_{x=0}^{N1}𝐒(x)|^2`$, which in the momentum-space representation is immediately given by $`|𝐒_{k=0}|^2`$. For maximum efficiency, one has to restrict the system size to powers of 2. Naturally, the calculation of the coefficients $`\stackrel{~}{J}_k`$ has to be carried out only once. Even more can be gained, if one also desires to calculate the spin–spin correlation function $`g(r)=𝐒(0)𝐒(r)=N^1_{x=0}^{N1}𝐒(x)𝐒(x+r)`$. The discrete correlation theorem states that the Fourier transform $`g_k`$ of $`g(r)`$ is equal to $`N^{1/2}𝐒_k𝐒_k`$, so that $`g_k`$ is obtained by $`N`$ multiplications rather than another $`𝒪(N^2)`$ operations in the real-space representation. All the above estimates are only measures for the complexity of the algorithm, which become valid for sufficiently large $`N`$. It remains to be seen whether the FFT-based approach is actually faster for the range of system sizes that can be accessed in present-day Monte Carlo simulations, which for lattice models go up to $`N10^510^6`$. Figure 1 compares the required CPU time per spin for the calculation of the internal energy via Eq. (1) and Eq. (4), respectively, and the susceptibility. As expected, the former method scales asymptotically linearly with $`N`$. For the latter method, two estimates are given, which only differ in the choice of the FFT implementation. The slower results (open squares) were obtained by means of the routines of Ref. and the faster (triangles) by means of those of Ref. . Although these two estimates differ by as much as a factor of $`5`$, both of them outperform the conventional method already for $`N10`$; for $`N=2^{18}`$ the improvement amounts to roughly four orders of magnitude. The initial downward trend in Fig. 1 is due to overhead being distributed over an increasing number of spins. Likewise can the “irregularities” in the FFT estimates be attributed to computational aspects. The slight deviations from linearity in the conventional estimates, however, are due to statistical inaccuracies in the timing: for $`N𝒪(10^4)`$ this method becomes already prohibitively slow. Thus, we conclude that the method presented here provides a very efficient approach to energy calculations in lattice systems with long-range interactions; while there is still a weak system-size dependence in the required computational effort per spin, this no longer constitutes a bottleneck for practical applications. Note that higher-dimensional models can also be treated in this fashion. ## III Applications ### A The Ising chain As a first example, we consider the Ising chain with algebraically decaying interactions, $`J(x)J|x|^{1\sigma }`$. The critical behavior for this system is essentially classical for $`0<\sigma \frac{1}{2}`$ and nonclassical for $`\frac{1}{2}<\sigma 1`$, see Ref. . Numerical results for the thermal exponent $`y_t`$ have indicated that the latter regime can be subdivided into two parts: $`y_t>\frac{1}{2}`$ for $`\frac{1}{2}<\sigma 0.65`$ and $`y_t<\frac{1}{2}`$ for interactions that decay faster. This implies that the specific heat only diverges in a part of the nonclassical regime and should display a cusp-like singularity in the remaining part. By means of illustration, we have calculated the specific heat for $`\sigma =0.25`$ and $`\sigma =0.90`$. In both cases, we expect to find a function that does not diverge at the critical point, although the behavior should be qualitatively different. Simulations were carried out for $`N=2^p`$, with $`3p16`$, at a number of different couplings, for several times $`10^6`$ independent samples per system size. The full curves were determined by means of the multiple-histogram method , where great care was taken to minimize systematic errors due to the histogram interpolation. Figure 2 shows the specific heat $`C`$ for $`\sigma =0.25`$, as a function of the reduced coupling $`KJ/(k_\mathrm{B}T)`$. It displays several close similarities to the specific heat of the mean-field model, including the build-up of a jump discontinuity at the critical point, the crossing of the finite-size curves in a single point at $`K_c`$ (up to corrections to scaling), and (not visible on this scale) an excess peak in the curves for finite systems, i.e., $`lim_N\mathrm{}C_{\mathrm{max}}(N)lim_{KK_c}lim_N\mathrm{}C(K,N)`$ . As shown in Ref. , the location of the specific-heat maximum shifts as a function of system size, according to $$K_{\mathrm{max}}=K_c+a_1L^{y_t^{}}+a_2L^{2y_t^{}}+b_1L^{\sigma 1}+\mathrm{},$$ (5) where $`y_t^{}=\frac{1}{2}`$ and the coefficients $`a_i,b_i`$ are nonuniversal. A fit to this expression yielded $`y_t^{}=0.51(6)`$ and $`K_c=0.1147(5)`$, in good agreement with $`K_c=0.114142(2)`$ . The inset shows the peak height as a function of system size, strongly suggesting that the maximum is indeed finite in the thermodynamic limit. The case $`\sigma =0.90`$, shown in Fig. 3, clearly exhibits a distinctly different behavior. The specific heat is now nonzero in the thermodynamic limit, on either side of the critical point, and indeed displays the expected cusp-like singularity. The inset confirms that the maximum is convergent for $`N\mathrm{}`$. Since $`y_t`$ is still sufficiently close to $`\frac{1}{2}`$, i.e., the absolute value of the exponent $`\alpha `$ is sufficiently small, the location of the maximum cannot be distinguished from the critical point, unlike the case $`\sigma =1`$, where it is expected to occur at a coupling $`K<K_c`$. ### B The three-state Potts chain The ferromagnetic Potts model provides a particular generalization of the Ising model with respect to the number $`q`$ of possible coexisting ordered phases. For $`q=2`$ the Ising model is recovered; for $`q>2`$ the ferromagnetic Potts model defines a genuine universality class distinct from the Ising and the more general $`\mathrm{O}(n)`$ universality class. The Potts model is of particular theoretical interest, because the phase transition it describes may be of first or second order depending on $`q`$ and the spatial dimension $`d`$, even in the absence of symmetry-breaking fields. For nearest-neighbor interactions in $`d=2`$ many properties of the Potts model are exactly known . In particular, if the model is in the first-order regime and $`q`$ is sufficiently large, the asymptotic finite-size properties of the nearest-neighbor ferromagnetic Potts model have been established in a rigorous fashion . For long-range interactions, however, much less is known and currently available numerical data are limited to rather small systems . We demonstrate in the following, that also for the Potts model the cluster algorithm introduced in Ref. can be combined with the FFT, allowing the numerical treatment of much larger systems. Again, we concentrate on the case of algebraically decaying interactions $`J(x)|x|^{1\sigma }`$. The Hamiltonian of the ferromagnetic Potts chain with periodic boundary conditions can then be written in the same form as Eq. (1), where the Potts spins $`𝐒(x)`$ are unit vectors which mark the corners of a (hyper)tetrahedron in $`q1`$ dimensions. For the present case $`q=3`$ we employ the complex notation $$𝐒(x)𝒮(x)\{1,e^{2\pi i/3},e^{4\pi i/3}\},$$ (6) i.e., $`𝐒(x)𝐒(y)=\mathrm{}[𝒮(x)𝒮(y)^{}]`$, where the asterisk denotes the complex conjugate. The spin representation of the Potts model given by Eq. (1) is equivalent to the standard Kronecker representation, but it has the advantage that the configurational energy is directly accessible by means of the FFT. According to mean-field theory the ferromagnetic Potts model should always show a first-order phase transition for $`q>2`$. For our case of $`d=1`$ and algebraically decaying interactions, one therefore expects the mean-field prediction to be correct for sufficiently small values $`\sigma >0`$ of the decay exponent of the interaction, i.e., there should be a critical value $`\sigma _c`$ separating first- and second-order behavior. Mean-field theory provides an important guideline for the interpretation of our Monte Carlo data, so we briefly summarize the basic mean-field predictions. Following Ref. we introduce the probability $`p_\kappa (x)`$ that lattice site $`x`$ is occupied by the Potts state $`\kappa `$, $`1\kappa q`$, and we define a homogeneous scalar order parameter $`s`$ indicating a broken symmetry with respect to Potts state $`\kappa =1`$: $$p_1(x)m_1=\frac{1+(q1)s}{q},p_\kappa (x)m_\kappa =\frac{1s}{q},\mathrm{\hspace{0.33em}2}\kappa q.$$ (7) For a given value of $`s`$ the mean-field free-energy density $`f_{\mathrm{MF}}(s)`$ (in units of $`k_\mathrm{B}T`$) is then obtained as $$f_{\mathrm{MF}}(s)=K\zeta (1+\sigma )s^2+\left\{[1+(q1)s]\mathrm{log}[1+(q1)s]+(q1)(1s)\mathrm{log}(1s)\right\}/q,$$ (8) where $`K=J/(k_\mathrm{B}T)`$ denotes the reduced coupling and $`\zeta (\alpha )`$ is the Riemann zeta function. Note that the replacement $`K(q1)q^1K`$ transforms Eq. (8) from the spin representation into the Kronecker representation. The transition point $`K=K_{\mathrm{MF}}^t`$ from the disordered phase $`s=0`$ to the ordered $`(\kappa =1)`$ phase $`s=s_{\mathrm{MF}}^t`$ follows from standard mean-field arguments : $$K_{\mathrm{MF}}^t\zeta (1+\sigma )=\frac{(q1)^2}{q(q2)}\mathrm{log}(q1),s_{\mathrm{MF}}^t=\frac{q2}{q1}.$$ (9) According to Eqs. (7) and (9) the distribution function $`P(m_1)`$ for a finite system displays three maxima near the transition temperature $`T_{\mathrm{MF}}^tJ/(k_\mathrm{B}K_{\mathrm{MF}}^t)`$: one at $`m_1=1/q`$ for the disordered phase, one at $`m_1=(q1)/q`$ for the ordered $`(\kappa =1)`$ phase, and one at $`m_1=1/[q(q1)]`$ for the ordered phases with respect to the remaining Potts states $`(\kappa 2)`$. Note that all ordered phases appear with equal probability in the course of the simulation. For $`\sigma 0.4`$ and in our case of $`q=3`$ these three peaks in $`P(m_1)`$ are indeed located very close to their mean-field positions. For $`\sigma =0.6`$ the peaks are still clearly separated, but they occur at positions shifted with respect to the mean-field predictions and for $`\sigma 0.7`$ the peaks start to overlap strongly and can only be identified for very large systems (see below). Although the algorithm introduced in Ref. is by far the most efficient one for the simulation with spin systems with long-range interactions, it is not able to deal with first-order phase transitions beyond a certain system size. The reason is that like the Metropolis algorithm the Wolff cluster algorithm encounters an activation barrier between states with and without long-range order, which is set by the energy-density gap between the disordered and the ordered phase. For a given size of the gap the tunneling time between disordered and ordered phases, and therefore the required sampling time, increases exponentially with the system size so that the attainable system size $`N`$ is severely limited. This tunneling problem can be solved by employing the well-established ideas of multicanonical sampling , however, the generalization of the cluster algorithm to an efficient multicanonical algorithm is beyond the scope of the present article. The data we present in the following have been obtained from histograms of the energy taken at several temperatures. The data are again conveniently analyzed by the optimized multiple-histogram method . For the values $`\sigma =0.2`$ and $`\sigma =0.4`$ the Potts chain undergoes a strong first-order phase transition which limits the chain length to $`N=2^{13}`$ in the former case and to $`N=2^{14}`$ in the latter case. We reinvestigate chains from $`N=2^{10}`$ spins to the respective maximum chain length by taking a few times $`10^6`$ independent samples for each system size and temperature, where a comparison with the finite-size theory of Borgs et al. turns out to be very instructive. We restrict the detailed presentation to the case $`\sigma =0.4`$; the data for $`\sigma =0.2`$ are qualitatively very similar. Near the transition temperature the energy distribution function $`P(E)`$ displays two peaks characterizing the ordered and the disordered phase, respectively . In Ref. the temperature of equal peak height is taken as an estimate for the transition temperature on the finite system, where the leading finite-size corrections are of the order $`𝒪(1/N)`$. If the systems are large enough, i.e., the peaks in the energy distribution are well separated, the ratio $`W_o/W_d`$ of the weight of the ordered phase $`W_o`$ and the weight of the disordered phase $`W_d`$ provides a far more convenient indicator of the transition temperature, because the associated finite-size corrections decay exponentially with $`N`$ . Our result for long-range interactions is shown in Fig. 4. The transition temperature is marked by the intersection of $`W_o/W_d`$ as a function of temperature for the three largest systems. For $`N=2^{11}`$ the peaks in the energy distribution are not well separated so that $`W_o/W_d`$ is not well defined in this case. For $`N2^{12}`$ the curves meet at $`W_o/W_d=1.67(2)`$ as shown by the solid line. For $`\sigma =0.2`$ we find a corresponding intersection at $`W_o/W_d=1.25(2)`$. For nearest-neighbor interactions in $`d2`$ and sufficiently large $`q`$ the value $`W_o/W_d=q`$ is expected to indicate the transition temperature . Surprisingly, we find a much smaller value here which appears to increase with $`\sigma `$. From Ref. one furthermore expects that the curves displaying the energy density for different system sizes as a function of temperature exhibit an intersection close to the transition temperature, where the deviations are predicted to be exponentially small in $`N`$. In Fig. 5 this situation is shown for long-range interactions with $`\sigma =0.4`$. A corresponding result has been obtained for $`\sigma =0.2`$. The energy densities intersect near the transition temperature found in Fig. 4, where the shifts between mutual intersections seem to be compatible with exponentially small finite-size effects. Still too few data are available for a quantitative analysis of these shifts, but finite-size effects of the order $`1/N`$ can be ruled out. The fourth-order energy cumulant $`U_4`$ defined by $$U_4^4/^2^2$$ (10) is shown in Fig. 6 for different system sizes as a function of temperature, where $``$ is the Hamiltonian given by Eq. (1). These cumulants should also intersect at the transition temperature in the limit $`N\mathrm{}`$. The data displayed in Fig. 6 show the expected tendency, but the finite-size corrections are much larger than those for the weight ratio or the energy density (see Figs. 4 and 5, respectively). For $`\sigma =0.2`$ a corresponding result has been found. The systematic shift of the intersections of $`U_4`$ for different system sizes is compatible with a $`1/N`$ behavior as anticipated in Ref. for nearest-neighbor interactions, but the present amount of data is too limited to give reliable quantitative evidence for this behavior. For nearest-neighbor interactions and periodic boundary conditions the energy density asymptotically obeys the scaling law (cf. Eq. (1) of Ref. ) $$E(\beta ,L)\frac{E_d+E_o}{2}\frac{E_dE_o}{2}\mathrm{tanh}\left[\frac{E_dE_o}{2}(\beta \beta _t)N+\frac{\mathrm{log}q}{2}\right],$$ (11) where $`\beta =1/(k_\mathrm{B}T)`$ is the inverse temperature and $`\beta _t`$ is the transition point. It is instructive to compare the scaling form given by Eq. (11) with the data displayed in Fig. 5. The energies of the disordered phase $`E_d`$ and the ordered phase $`E_o`$ can be read off from the positions of the two maxima of the energy distribution function. It turns out that the data in Fig. 5 and their counterpart for $`\sigma =0.2`$ are in fact consistent with Eq. (11) within the error bars, provided the number of states $`q`$ on the right-hand side of Eq. (11) is replaced by the effective value $`q_{\mathrm{eff}}(\sigma =0.4)W_o/W_d|_{\beta =\beta _t}=1.67`$ measured in Fig. 4 at the transition temperature or its counterpart $`q_{\mathrm{eff}}(\sigma =0.2)=1.25`$, respectively. The quantitative comparison of our data for $`\sigma =0.2`$, $`N=2^{13}`$ and $`\sigma =0.4`$, $`N=2^{14}`$ with Eq. (11) is shown in Fig. 7. Within the statistical errors, the agreement is excellent except for larger values of the scaling variable $`(\beta \beta _t)N`$. These deviations are due to the fact that Eq. (11) only holds asymptotically for sufficiently large systems. For finite systems additional finite-size corrections enter through the residual $`N`$-dependence of $`E_d`$, $`E_o`$, and $`q_{\mathrm{eff}}(\sigma )`$ which appear as parameters in Eq. (11). Figure 7 demonstrates that the finite-size effects in the three-state Potts chain with periodic boundary conditions and long-range interactions can be interpreted in terms of the Borgs–Kotecký theory for the nearest-neighbor Potts model in higher dimensions for an effective number of states $`q_{\mathrm{eff}}(\sigma )`$. The physical meaning of $`q_{\mathrm{eff}}(\sigma )`$, however, remains unclear. The proof of Eq. (11) also requires the assumption that $`q`$ is sufficiently large , so $`q=3`$ may not be sufficient for a quantitative comparison. On the other hand, numerical investigations have shown that the $`q5`$ nearest-neighbor Potts model in $`d=2`$ and the $`q=3`$ nearest-neighbor Potts model in $`d=3`$ follow the theory of Ref. very closely despite the small values of $`q`$. Further analytical and numerical studies are required to settle this question. We close our discussion of the three-state Potts chain with a brief summary of our results for $`\sigma =0.6`$, 0.7, and 0.75, which have been studied with reduced statistics ($`10^5`$ independent samples for each system size and temperature). The value $`\sigma =0.6`$ is still located in the first-order regime , but in order to obtain a well-defined weight ratio $`W_o/W_d`$ system sizes of $`N2^{16}`$ Potts spins are required, although the maxima in the energy distribution are well separated already for $`N2^{14}`$. We have performed simulations for $`N=2^{14}`$ up to $`N=2^{17}`$ at four to six temperatures for each system size. Even for $`N=2^{17}`$ the finite-size corrections are too large to identify an intersection of the energy densities as accurately as displayed in Fig. 5. Data acquisition for $`N>2^{17}`$ is strongly hampered by the energy gap so that we refrain from discussing our data for $`\sigma =0.6`$ in any more detail. In contrast to Ref. the value $`\sigma =0.7`$ of the decay exponent can undoubtedly be identified as a member of the first-order regime. For system sizes $`N2^{16}`$ the energy distribution function displays the typical double-peak structure which becomes sharper as the system size is increased at fixed temperature. We illustrate this in Fig. 8, where the data for $`P(E)`$ are shown at $`T/T_{\mathrm{MF}}^t=0.8095`$, which is close to the transition point. The same analysis has been repeated for $`\sigma =0.75`$ and system sizes up to $`N=2^{19}`$ spins. Although $`P(E)`$ also develops a plateau similar to the one displayed in Fig. 8 for $`N=2^{15}`$, no double-peak structure could be resolved up to $`N=2^{19}`$ so that $`\sigma =0.75`$ may already belong to the second-order regime of the three-state Potts chain with long-range interactions. However, the detection of a double-peak structure in $`P(E)`$ for a given value of $`\sigma `$ is essentially a matter of attainable system size (see Fig. 8), so $`\sigma _c>0.7`$ is the only safe conclusion here. ## IV Conclusions The combination of the recently developed cluster algorithm for systems with long-range interactions with the Fast Fourier Transform for the calculation of the configurational energy leads to a Monte-Carlo algorithm with a very high efficiency. In particular, the FFT allows to extend the attainable system sizes by two orders of magnitude in comparison with other approaches (cf. Ref. ). Histogram interpolation methods then allow the investigation of thermodynamic properties of these systems with unprecedented resolution. By construction the algorithm can only deal with first-order phase transitions up to a limited system size. In order to avoid this limitation the algorithm must be generalized to include multicanonical sampling. Here, we have demonstrated the potential of the algorithm for the Ising chain and the three-state Potts chain with algebraically decaying interactions. For completeness, it is mentioned that also for system sizes that are not integer powers of two a considerable gain can be obtained by performing the discrete Fourier Transform via, e.g., a prime-factor algorithm. For the Ising chain we have investigated the finite-size behavior of the specific heat in the classical regime for $`\sigma =0.25`$ and in the nonclassical regime for $`\sigma =0.9`$. In the former case the specific heat behaves essentially mean-field like, i.e., the expected discontinuity in the specific heat at the critical temperature in the thermodynamic limit builds up as the system size is increased. On the other hand, the choice $`\sigma =0.9`$ is expected to yield a negative specific-heat exponent, i.e., a cusp singularity should appear with increasing system size. Our numerical data confirm this behavior as well and clearly show the different shapes of the specific-heat curves in the two cases. The three-state Potts chain is expected to show a first-order phase transition for $`\sigma <\sigma _c`$, where our results indicate that $`\sigma _c>0.7`$. For $`\sigma =0.2`$ and $`\sigma =0.4`$, for which the $`q=3`$ Potts chain displays a strong first-order phase transition, our data confirm the Borgs–Kotecký scenario of the first-order transition in Potts models with nearest-neighbor interactions in higher dimensions, provided the number $`q`$ of states is replaced by the effective number of states $`q_{\mathrm{eff}}(\sigma )=W_o/W_d|_{\beta =\beta _t}<q`$ which also enters the finite-size scaling form of the energy density near the transition temperature. For $`\sigma =0.6`$ the same behavior can be confirmed only on a semi-quantitative level, because much larger systems must be investigated in order to obtain sufficient resolution. The mechanism that leads to the reduction of the effective number of states and the physical interpretation of $`q_{\mathrm{eff}}(\sigma )`$ are not known. ###### Acknowledgements. We gratefully acknowledge helpful discussions with T. Neuhaus and W. Janke, and stimulating comments by K. Binder. Furthermore, the authors wish to thank the organizers of the Twelfth Annual Workshop on Recent Developments in Computer Simulation Studies (Athens, Georgia), where the collaboration leading to this work was initiated. M. Krech also gratefully acknowledges financial support through the Heisenberg program of the Deutsche Forschungsgemeinschaft.
no-problem/9910/cond-mat9910002.html
ar5iv
text
# Bandgap renormalization of modulation doped quantum wires ## Abstract We measure the photoluminescence (PL) spectra for an array of modulation doped, T-shaped quantum wires as a function of the 1d density $`n_e`$ which is modulated with a surface gate. We present self-consistent electronic structure calculations for this device which show a bandgap renormalization which, when corrected for excitonic energy and its screening, are largely insensitive to $`n_e`$ and which are in quantitatively excellent agreement with the data. The calculations show that electron and hole remain bound up to $`3\times 10^6cm^1`$ and that therefore the stability of the exciton far exceeds the conservative Mott criterion. Exchange and correlation in an electron gas formed in a semiconductor act to counter the direct Coulomb interaction by reducing the inter-particle overlap. For two component systems, such as the electron-hole plasma created in optical experiments, this effect tends to produce a bandgap renormalization (BGR) with increasing density $`n_e`$ and/or $`n_h`$, which reduces the energy of photons emitted upon recombination from the band edges . Exciton formation further reduces the bandgap but exciton binding is weakened by mobile charges and so the trend with density opposes that of exchange-correlation induced BGR. Investigations of the bandgap, which has a pivotal dependence on the dimensionality of the system, is of interest both for its significance to optical technology and for the illumination it provides for the many-body problem. Consequently there are numerous experimental and theoretical studies of BGR which have focused on systems of successively lower dimension over the past several years . For one dimensional systems, or quantum wires (QWRs), a number of recent experimental and theoretical accounts have begun to clarify the often competing effects which result in density dependent changes to the observed photoluminescence energy . In general, BGR depends on the densities, $`n_e`$ and $`n_h`$, of both components of the electron-hole plasma. Typically, however, research has concentrated on intrinsic samples wherein $`n_e=n_h=n`$. One difficulty with this approach has been that in order to vary $`n`$, an increase in photoexcitation has been required, or else the time development as the excitation subsides has been observed, and the resulting spectra are complicated with highly non-equilibrium effects such as phase space filling. In this paper we present an experimental study of the evolution of the photoluminescence energy in a doped, T-shaped QWR sample whose conduction band electron density $`n_e`$ can be modulated with the voltage applied to a surface gate. This structure (see Fig. 1), fabricated via the cleaved edge overgrowth techniques has appreciable advantages. First it provides wires of high precision, with structural variations restricted to the monolayer regime. Second, it permits the comparison of wire and quantum well photoluminescence in a single sample. To complement the measurements, we perform self-consistent electronic structure calculations within density functional theory (DFT) for this structure, using the local density approximation (LDA) for exchange and correlation (XC), $`V_{xc}`$. The theoretical bandgap renormalization, which is usually calculated with many-body techniques, is equivalent to the difference between the LDA calculated bandgap and that calculated within a pure Hartree approximation, which omits the $`V_{xc}`$ term. We have further calculated the effect of exciton formation and its screening on the bandgap, using a simplified model potential with parameters derived from the (translationally invariant) DFT calculation. Our principle result is that, as with experiments on two-component plasma in V-groove wires , the photoluminescence peak position is largely insensitive to density. The calculated screening of the exciton reduces the binding energy with a functional form that neatly cancels most of the XC induced BGR, predicting a recombination energy in excellent agreement with experiment. Additionally, the appearance of sharp structure in the PL data, indicating recombination from excitons localized at monolayer potential fluctuations, which gradually vanishes with increasing $`n_e`$, supports this BGR+exciton screening model. The calculation suggests that the exciton remains bound for very high density, also in agreement with Ref. , however the sharp structure disappears at much lower density, $`n_e1\times 10^6cm^2`$, indicating delocalization of the exciton. The cleaved edge overgrowth (CEO) technique employed for our QWR structure has been described in detail elsewhere . Our structure consists of 22 periods of (001)-oriented $`GaAs`$ ($`5nm`$) / $`Al_{0.32}Ga_{0.68}As`$ ($`44nm`$) quantum wells (multiple quantum wells, MQWs), grown between two digital alloys with 90 periods of $`GaAs`$ ($`2nm`$) / $`Al_{0.32}Ga_{0.68}As`$ ($`8nm`$) each. These digital alloys permit us to observe the PL from the overgrowth single quantum well (SQW), which is defined by growing along the -crystal axis $`5nm`$ $`GaAs`$, a $`30nm`$ $`Al_{0.35}Ga_{0.65}As`$ spacer, a silicon $`\delta `$-doping (n-modulation doping), and $`70nm`$ $`Al_{0.35}Ga_{0.65}As`$. After both growth steps, 10 nm thick cap layers are added, which are not included in Fig. 1. T-shaped QWRs form with atomic precision at each $`5\times 5nm^2`$ wide intersection of the SQW with one of the multiple quantum wells . In order to continuously vary the electron density in the QWRs and the SQW, we evaporate a $`10nm`$ thick, semi-transparent titanium gate on the surface of the overgrowth layer of a second set of samples. When the gate is grounded, the electron density in the SQW and in the QWRs are close to that of the un-gated samples. To maximize spatial resolution of the photoluminescence (PL) and photoluminescence excitation (PLE) spectroscopy, we focus the excitation beam of a tunable cw dye laser, pumped by an $`Ar`$-ion laser, with a microscope objective onto the sample, which is attached inside a cryostat to a copper block at the nominal temperature $`5K`$. On the sample, the diameter of the almost diffraction-limited laser spot amounts to about $`800nm`$ full-width at half-maximum. A confocal imaging system guarantees that only PL limited to the laser spot region is detected. For both un-gated and gated samples, PLE spectra reveal that, due to an electron transfer from the doping layer into the SQW, an electron system is generated both in the SQW, between digital alloy and overgrowth spacer, and in the QWRs. Exciting an un-gated sample on the ($`\overline{1}10`$)-surface, we are able to identify the QWRs PL because it is localized exactly and exclusively at the intersecting region of single and multiple quantum wells and is emitted at lower energy than the PL of the SQW and the MQWs . Of course, individual QWRs cannot be resolved, since they are spaced by $`44nm`$ only, with respect to a spatial resolution of our instrument of about $`800nm`$. In order to interpret the PL lineshape of the n-modulation doped T-shaped QWRs, we compare it with the lineshape of intrinsic T-shaped QWRs, as shown in figure 2 (the curves are aligned horizontally so that the peaks coincide). Interface roughness, particularly in the (110)-oriented single quantum well , results in an inhomogeneously broadened, on average symmetric PL line of the intrinsic QWRs. The spectrally sharp peaks on the PL line are attributed to excitons localized at monolayer potential fluctuations . In the presence of free carriers, however, the excitonic electron-hole interaction is screened and the sharp peaks on the PL line disappear in the case of n-modulation doped QWRs. Furthermore, the asymmetric PL lineshape indicates the formation of an 1D electron plasma in our modulation doped QWRs and, as the dominating recombination mechanism, band-to-band transitions between electrons of the Fermi sea and photogenerated holes. The Maxwell-Boltzmann distribution of the photogenerated holes and the joint one-dimensional density of states $`1/\sqrt{E}`$ result in a decreasing recombination rate with increasing transition energy, if we assume a constant transition matrix element for k-conserving band-to- band recombinations . In k-space, these transitions occur from the $`\mathrm{\Gamma }`$-point of the first Brillouin zone up to the Fermi wave vector, if we presume low temperatures. The decrease of the recombination rate with increasing transition energy is interpreted as the origin of the wide high energy tail and therefore the asymmetry of the PL line for the modulation doped QWRs. On the other hand, observing band-to-band transitions means that the electron density in the QWRs, whose charge density is estimated at $`1\times 10^6cm^1`$ , exceeds the Mott density . According to simulation results , only electrons in the first QWRs subband have maximum probability density at the T-intersections. Electrons in the second subband are localized principally between pairs of T-intersections and have inappreciable overlap with the hole subbands. Therefore recombinations from higher subbands can be excluded as the origin of the asymmetric PL lineshape. For the gated sample, excitation is performed from the (001)-sample surface. This permits us to observe simultaneously and compare the PL (figure 3) of the QWRs (peak on low energy side) and the SQW (high side) between digital alloy and overgrowth spacer. The MQWs PL occurs at higher energy than the exhibited energy window. Applying a negative gate voltage relative to the electron system, the charge density in the QWRs and in the SQW is reduced. In figure 3, we have converted applied gate voltage into electron density per unit length for the QWRs and per unit area for the SQW. The bottom spectra of figure 3 displays the response for complete depletion as confirmed by a series of PLE measurements. If the depletion voltage is further increased, neither the PL peak position, nor qualitatively the PL lineshape change, which is consistent with a total depletion of the electron systems for the bottom spectra. With decreasing electron density, the PL lines of both the QWRs and the SQW narrow slightly, which is consistent with a reduction of the Fermi wave vector for both the QWRs and the SQW. Note that the estimated electron density of $`2\times 10^{11}cm^2`$ in the SQW exceeds the 2D Mott-density . At low densities, moreover, sharp peaks appear on the PL lines, which we attribute to excitonic, spatially localized recombination. For the SQW (figure 4), one notices a characteristic redshift of the energetic peak position of about $`56meV`$, obtained by a simple lineshape fit, as the 2d electron density, $`N_e`$, is increased from zero to about $`2\times 10^{11}cm^2`$. Correcting for the energy shift due to the quantum confined Stark effect , determined by self-consistently solving Schrödinger s and Poisson s equation, the residual shift, due to 2D BGR, amounts to about $`5\pm 1meV`$, in good agreement with earlier results for a n-modulation doped quantum well . The indicated tolerance takes into account the uncertainty in determining the real PL peak position, as two PL lines overlap in figure 3. The principle result of figure 4, however, is the weak variation of the peak position for the QWRs as $`n_e`$ varies. The overall shift of only about $`3meV`$, when the electron density is increased from zero to about $`1\times 10^6cm^1`$, is similar to results found for wires with a two component plasma in high excitation . The observation is in excellent agreement with the variation of the bandgap determined by the LDA calculation when the excitonic screening correction is included. The details of our calculation, which are based on a total free energy functional for the interacting wire-gate system will be discussed in a separate publication . In figure 4 we plot the variation of the translationally invariant band edge, the calculated exciton binding energy and the combination of the two as a function of $`n_e`$. Clearly, without the screening of the exciton, the band edge variation disagrees markedly with measurement. The variation of the exciton binding, however, is functionally nearly the inverse of the band edge variation, with variation at low $`n_e`$ strongest in both cases. The result is a close cancellation and a trend with $`n_e`$ that recapitulates the data. A similar cancellation of exciton binding energy and BGR has been derived recently by Das Sarma and Wang using the Bethe-Salpeter equation, for the case of a two-component, neutral plasma (i.e. for $`n_e=n_h`$). However one striking contrast between our results and those of Ref. is that, up to our highest density $`n_e=3\times 10^6cm^1`$, we find that the electron and hole remain bound (cf. Fig. 4), whereas those authors find a merging of the exciton with the continuum, a so-called “Mott transition,” in the range of $`0.3\times 10^6cm^1`$. The robustness of the exciton revealed in our calculations emerges from the requirement of orthogonality between the free, screening electrons and those bound to the hole; a constraint which is not maintained in the many-body calculation. Therefore, at least in the case of a one component plasma, we find that the stability of the exciton exceeds that predicted by the conservative Mott criterion. In addition our calculation employs the full non-linear screening, whereas the many-body calculation assumes linear screening and hence is not valid in the low density limit. Regarding this point, it is in the low density regime where BGR and the excitonic binding energy change most rapidly with density. The result that there remains a strong tendency for the two effects to cancel is therefore suggestive of a fundamental connection between the two processes. The exchange portion of the energy, which dominates $`V_{xc}`$ at low density, varies as $`\rho ^{1/3}`$. Therefore a $`+\rho ^{1/3}`$ dependence for the screened exciton interaction is suggested, although we do not have a fundamental argument for this. In conclusion, we have presented photoluminescence measurements of a modulation doped and surface gated T-shaped quantum wire which exhibit a weak dependence of the peak position on the density of conduction band electrons in the wire. We have also reported on density functional calculations for the structure which show a bandgap renormalization of $`10meV`$ over the range of measured densities. A calculation of the excitonic binding energy and its screening shows a complementary trend to the BGR such that the combined results are largely insensitive to $`n_e`$ and agree well with the observed line peak. Finally, we find that while the exciton binding weakens with density, it nonetheless remains bound up to $`n_e=3\times 10^6cm^1`$, suggesting an excitonic stability well in excess of the Mott criterion. $``$ permanent address: Universität Regensburg, 93040 Regensburg, Germany
no-problem/9910/astro-ph9910558.html
ar5iv
text
# Superhumps in a Peculiar SU UMa-Type Dwarf Nova ER Ursae Majoris ## 1 Introduction As a member of the ER UMa stars or RZ LMi stars, a small subgroup of SU UMa type dwarf novae, ER Ursae Majoris (= PG 0913+521) has received intensive attention. The history of studies for this star was described by Kato et al. (1998). Recent photometry (Kato and Kunjaya 1995, Robertson et al. 1995) revealed that this star shows the supercycle with a period of 43 d in which the superoutburst lasts about 20 d and the normal outburst with a period of four days. These authors detected the superhumps with periods of 0.06549-0.06573 days during superoutburst. But no clear evidence of periodic hump with an amplitude larger than 0.05 mag was found during normal outburst. Subsequent observation revealed the existence of large-amplitude superhump during the earliest stage of superoutbursts (Kato et al. 1996). Based on the thermal-tidal instability model for SU UMa-type dwarf novae, Osaki (1995) reproduced the light curves of ER UMa by increasing mass transfer rate up to 4$`\times `$10<sup>16</sup> g s<sup>-1</sup>. It is rather late to establish the orbital period of ER UMa. Thorstensen et al. (1997) obtained a precise orbital period of 0.06366 d (91.67 min) based on emission line radial velocities, which provides a good condition to study in detail the variations of superhump in both super- and normal-cycles. In this letter, we report the superhump behavior during the rise to a supermaximum and during a normal outburst. ## 2 Observation We observed ER UMa for 44 hr over 10 nights in 1998 December and 1999 March, when the star was in a rise to its superoutburst maximum and a normal outburst respectively, using a TEK1024 CCD camera attached to the Cassegrain focus of the 1.0 m reflector at Yunnan Observatory. A total of 629 useful object frames were obtained through V filter. The exposure times are rather long, in order to assure enough signal to noise ratio even if brightness of the star drops to its minimum. The journal of the observations is summarized in Table 1. After bias subtraction and flat-fielding, we removed the sky background and measured magnitudes of ER UMa and four secondary photometric standards (number 2, 3, 4, and 10 in the finding chart of Henden and Honeycutt 1995) so that could find the best comparison star. No variability has been detected in the differential magnitudes between number 4 and 10 (0.004 mag standard deviation) in the several-hour-observation runs, while number 2 faded by 0.1 mag in a four-hour run. We, therefore, selected number 4 as the comparison star in this study, which is 2$`\mathrm{}`$15.5$`\mathrm{}`$ southeast of ER UMa. In our differential light curves, the zero point is V = 14.2 mag and error bars for each point are, in general, less than $`\pm 0.017`$ mag. The AAVSO light curve (Mattei, private communication) for the system around the times of our observations is shown in Fig.1, which indicates that a rise to the superoutburst maximum and a normal outburst have been caught. ### 2.1 Superhumps during the rise to a supermaximum Our observation in 1998 December began shortly before the minimum followed by a superoutburst and ended near the supermaximum (Fig.2). At some time on December 21, the star fell into its minimum brightness. Therefore this outburst has full amplitude of nearly 3 mag. The average rising rate is about 2.5 mag/d during Dec.21-22. The daily light curves given in Fig.3 show periodic modulations and are very different from each other in period, amplitude and waveform, which prevents us from combining them to do periodic analysis. We had to do separately for daily time series at some sacrifice of accuracy. ER UMa was in the decline stage of a short outburst before the superoutburst on Dec.20 and showed in its light curve an unknown origin, periodic modulation with an amplitude of about 0.1 mag. Next short time series represents the early rising stage of the superoutburst. Although we have not enough data to detect any period, periodic double-humps with unequal amplitudes of $``$0.3 and $``$0.2 mag can be seen clearly. Evident superhump appeared in December 22 light curve with a period of 0.0589 d$`\pm 0.0007`$ d and gradually enhance its amplitude from 0.04 mag to 0.13 mag (Table 1). The period is 7.5% less than the orbit period, and thus the superhump is negative. On December 23, ER UMa was nearing its supermaximum. The superhump had changed to positive one with a period of 0.0654 d$`\pm 0.0005`$ d, 2.8% larger than the orbit period. The light curve reveals the superhump in the maximum stage to have larger amplitude from 0.21 to 0.25 mag (Table 1). ### 2.2 Superhumps in the normal outburst Fig.4 shows the light curves obtained during 1999 March 16-21. Obviously, the period of the short outburst is about 4 days. Full amplitude of this outburst is about 2 mag. Average rising rate is 2 mag/d and average decline rate is about 0.8 mag/d. Light curves of the six days are shown in Fig.5 respectively. Marked periodic modulations can be found in March 16, 18 and 20 light curves. The periodic modulation disappeared on March 17. On the contrary, a periodic modulation was taking shape on March 21. A close inspection can marginally reveals a variation of $``$0.04 mag in March 19 light curve. The light curve on March 16 shows the hump with an amplitude of 0.12 mag and a period of 0.0653 d$`\pm 0.001`$ d, 2.6% longer than the orbital period. A periodic modulation also exists on March 18 with an amplitude larger than 0.22 mag. Period of the superhump on March 20 is only 0.0642 d$`\pm 0.0004`$ d, 0.88% larger than the orbit period, and the amplitude is about 0.12 mag with gradually enhancing. We identify the modulations (at least those occurred on March 16, 18 and 20) with superhumps for the reasons: 1) The amplitude of the modulations are large (0.12 - 0.22 mag). And 2) The period is 0.88% - 2.6% longer than the orbital period. To our knowledge, nothing other than superhumps can satisfy these features. ## 3 Discussion Although we observed ER UMa for only ten nights, a complete rise to the supermaximum was caught and a normal outburst was covered. The light curves show the following features: 1) The superhump occurred during the rise to the superoutburst. 2) A negative superhump with only 0.07 mag amplitude appeared in December 22 light curve, while the superhump on the next night was positive and had the larger amplitude of 0.24 mag and a different waveform from that of the preceding night. and 3) In the normal outburst we captured, the superhumps with larger or smaller amplitudes seem to always exist, although it is not necessarily true for every normal outburst. These results show great resemblance with V1159 Ori (Patterson et al. 1995) whose light curve shows the superhump persisting far beyond the end of the superoutburst and the negative superhump appearing on two occasions. It is more likely that superhumps occasionally exist at essentially all phases of the eruption cycles of ER UMa stars. The superhump phenomenon is now well understood to be the signature of an eccentric, precession disk (see Whitehurst 1988, Hirose, Osaki 1990). The thermal-tidal instability model (Osaki 1989) for SU UMa stars has successfully interpreted not only both normal and superoutburst behaviors but also superhump phenomenon occurring during superoutburst. In this model, the supercycle begins with a compact disk. The thermal instability produces the quasi-periodic, normal outburst, but the accretion mass in each normal outburst is less than the mass transferred from the secondary. With gradually building up of both mass and angular momentum in the disk, its radius expands with successive outburst until it eventually exceeds the critical radius for 3:1 resonance. The tidal instability leads to produce an eccentric disk. Enhanced tidal torque of the eccentric disk efficiently removes angular momentum from the disk. The last normal outburst is carried into a superoutburst. The eccentric disk exhibits a slow prograde precession, and a beat between the precession of the disk and the orbital motion of the binary is observed as a superhump. After the end of the superoutburst, the disk returns to the starting compact state. Based on this model and adopted a mass-transfer rate a factor of ten higher than that expected in the standard CV evolution theory, Osaki (1995) reproduced the light curve of ER UMa. If we accept the fact that ER UMa (and V1159 Ori) shows superhumps at all the phases in its eruption cycle, the disk in this star must be always eccentric and precessing. In other words, the tidal torque of the eccentric disk is not strong enough to return the disk to the initial compact state. It will arouse two questions: First, what triggers the superoutburst, i.e. what mechanism causes the fast growth of tidal instability or sudden increase in the viscosity (as assumed in Murray (1998)’s simulation)? Second, if we take R<sub>0</sub> $``$ R<sub>crit</sub> in a simulation similar to that in Osaki (1995), where R<sub>0</sub> and R<sub>crit</sub> stand for the disk radius at the end of superoutburst and the critical disk radius for 3:1 resonance respectively, it can be inferred that the light curve of ER UMa might be reproduced with a lower mass-transfer rate. Even if so, we can not determine without concrete computation whether the value of mass-transfer rate is consistent with the well-known suggestion in which ER UMa and the other very short recurrence time systems have Ṁ in the vicinity of the critical mass-transfer rate given in the disk instability model and separating the nova-like systems from dwarf nova systems. Another interesting problem is the explanation of the negative superhump. Patterson et al. (1993, 1997) proposed that this was the signature of the precessional motion of a tilted disk. They hypothesized that the accretion disk was simultaneously eccentric and tilted. The prograde precession of the disk’s major axis gives rise to the positive superhump signal, while the retrograde precession of the disk’s line of modes is responsible for the negative one. A fluid disk in a binary potential is subject to both eccentric and tilt instabilities at the 3:1 resonance (Lubow 1992). Although the three-dimensional numerical simulation by Murray and Armitage (1998) shows that the tidal inclination instability in an accretion disk is too weak to produce a significant tilt in the high state, there seems to remain room for investigating this mechanism. We would like to thank our referee for his inspiring comments and suggestion, Dr. Mattei for providing the unpublished AASVO data of ER UMa and Optical Astronomy Lab., Chinese Academy of Science for scheduling the observations. This work is supported by the grants 19873006 and 19733001 from the National Science Foundation of P. R. China.
no-problem/9910/astro-ph9910379.html
ar5iv
text
# The artificial sky brightness in Europe derived from DMSP satellite data. ## 1. Introduction An effective battle against light pollution requires the knowledge of the situation of the night sky in large territories, the recognition of the most concerned areas, the determination of the growth trends, the identification of more polluting cities. Therefore a method to map the artificial sky brightness in large territories is required. This is also useful in order to recognize less polluted areas and potential astronomical sites. DMSP satellite allows direct information on the upward light emission from almost all countries around the World (Sullivan 1989; Elvidge et al. 1997a, 1997b, 1997c, 1999; Isobe & Hamamura 1998). We present the outlines of a method to map the artificial sky brightness in large territories measuring the upward flux in DMSP satellite night-time images in order to bypass errors arising when using population data to estimate upward flux, and computing its effects with detailed modelling of light pollution propagation in the atmosphere. Details will be extensively discussed in a forecoming paper (Cinzano et al. 1999, in prep.). ## 2. Satellite data U.S. Air Force Defense Meteorological Satellite Program (DMSP) satellites are in low altitude (830 km) sun/synchronous polar orbits with an orbital period of 101 minutes. With 14 orbits per day they generate a global nightime and daytime coverage of the Earth every 24 hours. The Operational Linescan System (OLS) is an oscillating scan radiometer with low-light visible and thermal infrared imaging capabilities. At night the instrument for visible imagery is a Photo Multiplier Tube (PMT) sensitive to radiation from 470 nm to 900 nm FWHM with the highest sensitivity at 550-650 nm where the most widely used lamps for external night-time lighting have the strongest emission. Most of data received by National Oceanic and Atmospheric Administration (NOAA) National Geophysics Data Center (NGDC), which archives DMSP data since 1992, are smoothed by on-board averaging of 5 by 5 adjacent detector pixels and have a nominal space resolution of 2.8 km. In three observational runs made during the darkest portions of lunar cycles during March of 1996 plus January and February of 1997, NGDC acquired OLS data at reduced gain settings in order to avoid saturation produced in a large number of pixels inside cities in normal gain operations due at high OLS-PMT sensivity. Three different gain settings were used on alternating nights to overcome the dynamic range limitations of the OLS. With these data a cloud-free radiance calibrated composite image of the Earth (Elvidge et al. 1999) has been obtained. The temporal compositing makes it possible to remove noise and lights from ephemeral events such as fire and lightning. Main steps in the nighttime lights product generation are: 1) establishment of a reference grid with finer spatial resolution than the input imagery; 2) identification of the cloud free section of each orbit based on OLS thermal band data; 3) identification of lights and removal of noise and solar glare; 4) projection of the lights from cloud-free areas from each orbit into the reference grid, with calibration to radiance units; 5) tallying of the total number of light detections in each grid cell and calculation of the average radiance value; 6) filtering images based on frequency of detection to remove ephemeral events. The final image was transformed in a latitude/longitude projection with 30”x30” pixel size. The map of Europe was obtained with a portion of 5000x5000 pixel of this final image, starting at longitude 10 30’ west and latitude 72north. ## 3. Mapping technique Scattering from atmospheric particles and molecules spreads the light emitted upward by the sources. If $`e(x,y)`$ is the upward emission per unit area in $`(x,y)`$, the total artificial sky brightness in a given direction of the sky in a site in $`(x^{},y^{})`$ is: $$b(x^{},y^{})=e(x,y)f((x,y),(x^{},y^{}))𝑑x𝑑y$$ (1) where $`f((x,y),(x^{},y^{}))`$ give the artificial sky brightness per unit of upward light emission produced by the unitary area in $`(x,y)`$ in the site in $`(x^{},y^{})`$. The light pollution propagation function $`f`$ depends in general on the geometrical disposition (altitude of the site and the area, and their mutual distance), on the atmospheric distribution of molecules and aerosols and their optical characteristics in the choosen photometrical band, on the shape of the emission function of the source and on the direction of the sky observed. In some works this function has been approximated with a variety of semi-empirical propagation law like Treanor Law (Treanor 1973; Falchi and Cinzano 1999; Cinzano and Falchi 1999), Walker Law (Walker 1973), Berry Law (Berry 1976), Garstang Law (Garstang 1991b). However, all of them do not take into account the effects of Earth curvature that cannot be neglected in accurate mapping of large and non-isolated territories. We obtained the propagation function $`f((x,y),(x^{},y^{}))`$ for each couple of points $`(x,y)`$ and $`(x^{},y^{})`$ with detailed models for the light propagation in the atmosphere based on the modelling technique introduced and developed by Garstang (1986, 1987, 1988, 1989a, 1989b, 1991a, 1991b, 1991c, 1999) and also applied by Cinzano (1999a, 1999b, 1999c). The models assume Rayleigh scattering by molecules and Mie scattering by aerosols and take into account extinction along light path and Earth curvature. They allow to associate the predictions to well-defined parameters related to the aerosol content, so the atmospheric conditions at which predictions refer can be well known. Depending results on an integration over a large zone, the resolution of the maps is better than resolution of the original images and is generally of the order of the distance between two pixel centers (less than 1km). However where sky brightness is dominated by contribution of nearest land areas, effects of the resolution of the original image could became relevant. We assumed the atmosphere in hydrostatic equilibrium under the gravitational force and an exponential decrease of number density for the atmospheric haze aerosols. Measurements show that for the first 10 km this is a reasonable approximation. We are interested at average atmospheric conditions, better if typical and not at particular conditions of a given night, so a detailed modelling of local aerosol distribution at a given night is beyond the scope of this work. We neglected presence of sporadic denser aerosol layers at various heights or at ground level, the effects of the Ozone layer and the presence of volcanic dust. We take into account changes in aerosol content as Garstang (1986) introducing a parameter $`K`$ which measures the relative importance of aerosol and molecules for scattering light. The adopted modelling technique allows to assess the atmospheric conditions for which a map is computed giving observable quantities like the vertical extinction at sea level in magnitudes. More detailed atmospheric models could be used whenever available. The angular scattering function for atmospheric haze aerosols can be measured easily with a number of well known remote-sensing techniques. Being interested in a typical average function, we adopted the same function used by Garstang (1991a) and we neglected geographical gradients. The normalized emission function of each area gives the relative upward flux per unit solid angle in each direction. It is the sum of the direct emission from fixtures and the reflected emission from lighted surfaces, normalized to its integral and is not known. In this paper we assumed that all land areas have the same average normalized emission function. This is equivalent to assuming that lighting habits are similar on average in each land area and that differences from the average are casually distributed in the territory. We choose to assume this function and check its consistency with satellite measurements, rather than directly determine it from satellite measurements because at very low elevation angles the spread is too much large to constrain adequately the function shape. We adopted for the average normalized emission function the normalized city emission function from Garstang (1986). ## 4. Results Figures 1-6 show the maps of the artificial sky brightness in Europe at sea level in V band. The maps have been computed for clean atmosphere with an aerosol clarity $`K=1`$, corresponding at a vertical extinction of $`\mathrm{\Delta }m=0.33`$ mag in V band, horizontal visibility $`\mathrm{\Delta }x=26`$ km, optical depth $`\tau =0.36`$. Gray levels from black to white correspond to ratios between the artificial sky brightness and the natural sky brightness of: $`<`$11%, 11%-33%, 33%-100%, 1-3, 3-9, $`>`$9. We limited our computations to zenith sky brightness even if our method allows determination of brightness in other directions. This would be useful to predict visibility in large territories of particular astronomical phenomena. A complete mapping of the artificial brightness of the sky of a site, like Cinzano (1999a), but using satellite data instead of population data is possible (Cinzano 1999, in prep.). We are more interested in understanding and comparing light pollution distributions rather than predicting the effective sky brightness for observational purposes, so we computed everywhere the artificial sky brightness at sea level, in order to avoid introduction of altitude effects in our maps. We will take account of altitudes in a forecoming paper devoted to mapping the limiting magnitude and naked-eye star visibility which requires the computation of star-light extinction and natural sky brightness for the altitude of each land area. We neglected the presence of mountains which might shield the light emitted from the sources from a fraction of the atmospheric particles along the line-of-sight of the observer. Given the vertical extent of the atmosphere in respect to the highness of the mountains, the shielding is not negligible only when the source is very near the mountain and both are quite far from the site (Garstang 1989, see also Cinzano 1999a,b). Earth curvature emphasizes this behaviour. We calibrated the maps on the basis of both (i) accurate measurements of sky brightness together with extinction from the earth-surface and (ii) analysis of before-fly radiance calibration of OLS-PMT. Map calibration based on pre-fly irradiance calibration of OLS PMT require the knowledge, for each land area, of (a) the average vertical extinction $`\mathrm{\Delta }m`$ during satellite observations and (b) the relation between the radiance in the choosen photometrical band and the radiance measured in the PMT spectral sensitivity range, which depends on the emission spectra. The result of this calibration is well inside the errorbar of the Earth-based calibration in spite of the large uncertainties both in the extinction and in the average emission spectra. As soon as a large number of sky brightness measurements will be available, a better calibration will be possible. We are extending this work to the rest of the World. ### Acknowledgments. We are indebted to Roy Garstang of JILA-University of Colorado for his friendly kindness in reading and refereeing this paper, for his helpful suggestions and for interesting discussions. ## References Berry, R. 1976, J. Royal Astron. Soc. Canada, 70, 97-115 Cinzano, P. 1999a, in Measuring and Modelling Light Pollution, ed. P. Cinzano, Mem. Soc. Astron. Ital., 70, 3, in press Cinzano, P. 1999b, in Measuring and Modelling Light Pollution, ed. P. Cinzano, Mem. Soc. Astron. Ital., 70, 3, in press Cinzano, P. 1999c, in Measuring and Modelling Light Pollution, ed. P. Cinzano, Mem. Soc. Astron. Ital., 70, 3, in press Cinzano, P., Falchi, F. 1999, Mem. Soc. Astron. Ital., submitted Elvidge, C.D., Baugh, K.E., Kihn, E.A., Kroehl, H.W., Davis, E.R. 1997a, Photogrammetric Engineering and Remote Sensing, 63, 727-734 Elvidge, C.D., Baugh, K.E., Kihn, E.A., Kroehl, H.W., Davis, E.R., Davis, C. 1997b, Int. J. of Remote Sensing, 18, 1373-1379 Elvidge, C.D., Baugh, K.E., Hobson, V.H., Kihn, E.A., Kroehl, H.W., Davis, E.R., Cocero, D. 1997c, Global Change Biology, 3, 387-395 Elvidge, C.D., Baugh, K.E., Dietz, J.B., Bland, T., Sutton, P.C., Kroehl, H.W. 1999, Remote Sensing of Environment, 68, 77-88 Falchi, F. 1999, Thesis, University of Milan Falchi, F., Cinzano, P. 1999, in Measuring and Modelling Light Pollution, ed. P. Cinzano, Mem. Soc. Astron. Ita., 70, 3, in press Garstang, R.H. 1986, Publ. Astron. Soc. Pacific, 98, 364-375 Garstang, R.H. 1987, in Identification, optimization and protection of optical observatory sites, eds. R.L. Millis, O.G. Franz, H.D. Ables & C.C. Dahn (Flagstaff: Lowell Observatory), 199-202 Garstang, R.H. 1988, The Observatory, 108, 159-161 Garstang, R.H. 1989a, Publ. Astron. Soc. Pacific, 101, 306-329 Garstang, R.H. 1989b, Ann. Rev. Astron. Astrophys., 27, 19-40 Garstang, R.H. 1991a, Publ. Astron. Soc. Pacific, 103, 1109-1116 Garstang, R.H. 1991b, in Light Pollution, Radio Interference and Space Debris, IAU Coll. 112, ed. D.L. Crawford, Astron. Soc. of Pacific Conference Series 17, 56-69 Garstang, R.H. 1991c, The Observatory, 111, 239-243 Garstang, R.H. 1999, in Measuring and Modelling Light Pollution, ed. P. Cinzano, Mem. Soc. Astron. Ital., 70, 3, in press Isobe S. & Hamamura, S. 1998, in Preserving the Astronomical Windows, IAU JD5, ed. S. Isobe, Astron. Soc. of Pacific Conference Series 139, 191-199 Sullivan, W.T., 1989, Int. J. of Remote Sensing, 10, 1-5 Treanor, P.J.S.J. 1973, The Observatory, 93, 117-120 Walker, M.F. 1973, Publ. Astron. Soc. Pacific, 85, 508-519
no-problem/9910/astro-ph9910431.html
ar5iv
text
# 1 INTRODUCTION ## 1 INTRODUCTION FU Orionis objects – sometimes known as FUors – are eruptive pre–main-sequence stars located in active star forming regions (Herbig (1966), 1977; Hartmann & Kenyon (1996); Kenyon (1999)). Roughly half of the 11 commonly accepted FUors have been observed to rise 3–5 mag in optical or near-IR brightness on timescales of 1–10 yr. Other FUors have been identified based on properties similar to eruptive FUors, including (i) absorption features of F–G supergiants on optical spectra and K–M giants on near-IR spectra (Herbig (1977); Mould et al. (1978); Carr et al. (1987); Stocke et al. (1988); Staude & Neckel (1992)); (ii) large excesses of radiation over normal F–G stars at ultraviolet, infrared, submillimeter, and centimeter wavelengths (Weintraub et al. (1989), 1991; Kenyon & Hartmann (1991); Rodriguez et al. (1990); Rodriguez & Hartmann (1992)); (iii) distinctive reflection nebulae (Goodrich (1987)); and (iv) clear association with optical jets, HH objects, and molecular outflows (Reipurth (1991); Evans et al. (1994)). FUor eruptions are often accepted as accretion events in the disk surrounding a low-mass pre–main-sequence star (Hartmann & Kenyon (1985); Lin & Papaloizou (1985); Hartmann & Kenyon (1996); for alternative interpretations, see Herbig & Petrov 1992; Petrov et al. 1998). In this picture, the accretion rate through the disk increases by 2–3 orders of magnitude to $`10^4\mathrm{M}_{}\mathrm{yr}^1`$. In addition to providing energy for the luminosity increases of FUors, this model naturally explains the broad spectral energy distributions, the variation of rotational velocity and spectral type with wavelength, and color changes during the optical decline of V1057 Cyg, among other observed properties. Despite the success of the disk hypothesis, one observable characteristic of disk accreting systems – flickering – has not been observed in any known FUor. In most systems with luminous accretion disks, flickering is observed as a series of random brightness fluctuations with amplitudes of 0.01–1.0 mag that recur on dynamical timescales (Robinson (1976); Bruch (1992)). Often accepted as a ‘signature’ of disk accretion, flickering is believed to be a dynamical variation of the energy output from the disk<sup>1</sup><sup>1</sup>1In the cataclysmic variables (short period binary systems with an accretion disk surrounding a white dwarf), random flickering occurs on timescales of seconds to minutes. Dwarf nova oscillations and quasi-periodic oscillations are semi-coherent periodic variations observed on similar timescales. Flickering is also occasionally associated with material in the ‘bright spot’ at the outer edge of the disk (see Warner (1995)). It is unclear whether or not these classes have distinct analogs among accreting pre-main sequence stars. In this paper, we use flickering to distinguish rapid variations of light from the inner disk from variations of the bright spot.. Thus, it provides some measure of the fluctuations in the physical structure of the disk and might someday serve as a diagnostic of physical properties within the disk (Bruch & Duschl (1993); Bruch (1994); Warner (1995)). In this paper, we search for evidence of flickering in the historical light curve of FU Ori. In addition to the outburst, we find good evidence for small-amplitude brightness fluctuations on a timescale of 1 day or less. Color changes correlated with the brightness changes indicate a variable source with the optical colors of a G0 supergiant. The amplitude, color temperature, and timescale of the variations have much in common with the flickering observed in short period interacting binary systems. After ruling out several possible alternatives, we conclude that flickering is the most likely interpretation for short-term variability in FU Ori. The most plausible location for the flickering source is the inner edge of the disk, where temperatures lie between the stellar temperature and the maximum disk temperature of $``$ 7000 K. We describe the observations in §2, analyze the light curve in §3, and conclude with a brief discussion and summary in §4. ## 2 OBSERVATIONS We acquired UBV photometry of FU Ori with the 60-cm Zeiss reflector at the Crimean Laboratory of the Sternberg State Astronomical Institute. The observations usually were made through a 13<sup>′′</sup> aperture; a 27<sup>′′</sup> aperture was used on nights of poor seeing. These data were reduced using BD+81051 as the comparison star and other nearby stars as controls (see Kolotilov & Petrov (1985)). Table 1 lists the results. The uncertainty in the calibration is $`\pm `$0.01–0.02 mag for V and B–V, and $`\pm `$0.02–0.04 mag for U–B. We supplement the UBV data with additional photoelectric photometry from the Maidanak High Altitude Observatory. Ibragimov (1997) describes UBVR data acquired during 1981–1994 as part of the ROTOR project. The data have been reduced to the standard UBVR system with typical errors of $`\pm `$0.015 mag for V and V–R, $`\pm `$0.02 mag for B–V, and $`\pm `$0.04–0.08 mag for U–B. We also consider visual observations of FU Ori compiled by the American Association of Variable Star Observers. The error of a typical estimate is 0.1–0.2 mag using standard stars in the field of FU Ori calibrated from photoelectric observations. We simplify a comparison with photoelectric data by computing twenty day means of the over 7000 AAVSO observations. With $``$ 15 observations per twenty day interval, this procedure reduces the typical error of a twenty day mean to $``$ 0.03 mag, only slightly larger than the quoted error of the photoelectric data. Finally, we obtained low resolution optical spectra of FU Ori during 1995–1998 with FAST, a high throughput, slit spectrograph mounted at the Fred L. Whipple Observatory 1.5-m telescope on Mount Hopkins, Arizona (Fabricant et al. (1998)). We used a 300 g mm<sup>-1</sup> grating blazed at 4750 Å, a 3<sup>′′</sup> slit, and recorded the spectra on a thinned Loral 512 $`\times `$ 2688 CCD. These spectra cover 3800–7500 Å at a resolution of $``$ 6 Å. On photometric nights, we acquired standard star observations to reduce the FU Ori data to the Hayes & Latham (1975) flux scale using NOAO IRAF<sup>2</sup><sup>2</sup>2IRAF is distributed by the National Optical Astronomy Observatories, which is operated by the Association of Universities for Research in Astronomy, Inc., under contract to the National Science Foundation. These calibrations have an accuracy of $`\pm `$0.05 mag. This uncertainty is comparable to the probable error in spectrophotometric data acquired with the Kitt Peak National Observatory Intensified Reticon Spectrograph reported in Kenyon et al. (1988). Table 2 lists indices for several absorption lines using O’Connell’s (1973) definition, $`I_\lambda =F_\lambda /\overline{F}_\lambda `$, where $`F_\lambda `$ is the measured flux in a 20 Å bandpass centered at wavelength $`\lambda `$ and $`\overline{F}_\lambda `$ is the continuum flux interpolated from continuum bandpasses on either side of the absorption feature. Repeat measurements indicate an error of $`\pm `$ 0.03 mag for each index. ## 3 LIGHT CURVE ANALYSIS The lower panel of Figure 1 shows historical B+photographic and optical light curves for FU Ori. Following a 5–6 mag rise at B, the system has declined by $``$ 1 mag in nearly 70 yr. The decline in visual light has closely followed the B light curve for the past 30 yr. In addition to a long-term wave-like variation, the brightness shows considerable scatter of roughly 0.1 mag at BVR and almost 0.2 mag in U on timescales of days to months. These fluctuations are much larger than the quoted photometric errors. Ibragimov (1997) commented on fluctuations in brightness and color indices throughout 1981–1994. Similar variations are visible in light curves of V1057 Cyg and V1515 Cyg (Ibragimov (1997)). We plan analyses of these other FUors in future publications. ### 3.1 Outburst Model To analyze brightness fluctuations in FU Ori, we begin by separating long-term changes with timescales of years from shorter timescale variations. We consider a simple model for the outburst $$m=m_0+m_{rise}(tt_0)$$ (1) where $`m_0`$ is the average quiescent magnitude and $`t_0`$ is the time of the outburst. The brightness during outburst is a function of time, $`t`$, $$m_{rise}=\{\begin{array}{ccc}0\hfill & & \hfill t<t_0\\ \delta m_{rise}\{1e^{(tt_0)/\tau _{rise}}\}+\dot{m}(tt_0)\hfill & & \hfill tt_0\end{array}$$ (2) where $`\delta m_{rise}`$ is the amplitude of the outburst, $`\tau _{rise}`$ is the e-folding time of the rise to maximum, and $`\dot{m}`$ is the rate of decline from maximum. We derive parameters for this model using a downhill simplex method to minimize the function $$\chi ^2=\underset{i=1}{\overset{N}{}}[(mm_i)/\sigma _i]^2,$$ (3) for $`N`$ observations $`m_i`$ having uncertainty $`\sigma _i`$. We derive the model parameters $`m_0`$, $`\delta m_{rise}`$, $`\dot{m}`$, and $`\tau _{rise}`$ for an adopted $`t_0`$ and vary $`t_0`$ separately to produce a minimum in $`\chi ^2`$. This procedure works better than fitting all parameters at once due to the sparse nature of the light curve for $`tt_0`$. To estimate errors of the model parameters, we compute residuals $`r_i`$ of the light curve about the fit, construct new light curves by adding gaussian noise with amplitude $`r_i`$ to the data, and extracting new model parameters. We adopt as best the median values for model parameters from 10,000 such trials; the quoted errors are the inter-quartile ranges of each model parameter. Table 3 summarizes results of model fits to the light curves in Figure 1. The sparse data before outburst yield a poor measure of the pre-outburst brightness and the start of the outburst. We estimate $`t_0`$ = JD 2428497 $`\pm `$ 40 and $`m_0=15.55\pm 0.40`$ from the B-band data; these errors set the uncertainties in the other model parameters. The uncertainty in the decline rate at B is small; the visual data yield a nearly identical rate of decline despite lack of data near maximum. The spectroscopic data show little evidence for any change in the mean optical spectral type during the recent 0.3 mag decline in the mean V brightness. The Mg I index, which tracks the spectral type rather well, increased by at most 0.02 mag during these years. The H$`\beta `$ and Na I indices, which should measure the wind of FU Ori, have also remained constant. We show below that the mean colors, U–B, B–V and V–R, of FU Ori have changed by $``$ 0.02 mag in 15 years. The constant colors and spectral indices indicate that the optical spectral type has remained constant to within $`\pm `$ 1 subclass since 1985. The upper panel of Figure 1 shows residual light curves about the best-fit model. The rms dispersion about the light curves is 0.06 mag for photoelectric data, 0.11 mag for visual data, and 0.24 mag for photographic data. All are large compared to typical uncertainties. Some of the scatter in the residual light curves is due to a long-term wave with an amplitude of several tenths of a magnitude. The residual B light curve has three prominent crests at $``$ JD 2428500, JD 2432000, and JD 2436000, with a possible small crest at $``$ JD 2433500. This behavior appears to vanish in the B light curve at later times. However, the residual visual light curve then shows a similar wave-like oscillation with crests at $``$ JD 2439000 and JD 2442500. Another crest at $``$ JD 2447000 is present in the photoelectric V light curve shown in Figure 2 and described below. These peaks are roughly in phase with the residual B light curve for periods of $``$ 4000 days. There are considerable photometric variations in addition to the wave. These occur on short timescales, tens of days or less, and have amplitudes that seem uncorrelated with the overall system brightness. We will analyze these in the next section and then consider possible origins for the short-term and long-term variations. Despite the large residuals, the model fits leave negligible trends in the data. The median Spearman rank correlation coefficient between the V brightness and time for the 10,000 Monte Carlo trials is 0.87 for the B + photographic data and 0.84 for the visual data. The median slope of a linear fit to the residual light curves is $`<10^4`$ mag yr<sup>-1</sup> for the B + photographic data and $`<10^6`$ mag yr<sup>-1</sup> for the visual data. We conclude that the simple model provides a reasonable fit to the long-term light curve and now consider the nature of the shorter timescale fluctuations. ### 3.2 Fluctuation Timescale To test whether or not the short timescale photometric variations in FU Ori are flickering, we need to verify that (i) they are real changes in the brightness and color of FU Ori, (ii) they occur on short timescales but are not periodic, and (iii) they can be plausibly associated with radiation from the disk. We establish the reality of the variations in two parts. We first demonstrate that the variations occur on short timescales with amplitudes larger than the photometric errors. We show later that color changes correlate with brightness changes. We consider a Monte Carlo model for the photoelectric data described in §2. The lower panel of Figure 2 shows V-band data acquired during the last 15 yr. Both the wave-like oscillation and the large scatter about this oscillation are apparent. The solid line in this Figure is the seasonal mean light curve, the average brightness for each year of observation. The top panel in the Figure is the difference between the actual data and the seasonal mean. This residual light curve has a large amplitude but no linear trend or wave-like feature. Our analysis indicates a small periodic component in residual light curves for B and V data. Periodograms suggest a period of 17 $`\pm `$ 1 days in the B data, which has been reported previously (Kolotilov & Petrov (1985)). This periodic component has an amplitude of 0.009 $`\pm `$ 0.003 mag. The V data has a best period of 111.4 $`\pm `$ 1.6 days with an amplitude of 0.012 $`\pm `$ 0.003 mag. There is no indication of the 17 day period in the V light curve. The amplitudes of these ‘periodicities’ are comparable to the photometric errors but small compared to the amplitude of the fluctuations in the residual light curves (see Figures 1–2). To measure the amplitude of the non-periodic component in the residual light curve, we use a Monte Carlo model. We replace each observation V<sub>i</sub> with a random brightness having amplitude $`a_v`$ and offset $`v_0`$, $$v_i=v_0+a_vg_i,$$ (4) where $`g_i`$ is a normally distributed deviate with zero mean and unit variance (Press et al. (1992)). Artificial light curves that provide a good match to the actual light curve should have the same amplitude and offset. We quantify a ‘good match’ by comparing the magnitude distributions of actual and artificial light curves using the Kolmogorov-Smirnov (K-S) test. We reject poor matches with a low probability of being drawn randomly from the same distribution as the data. The ‘best’ match maximizes the median K-S probability from 10,000 trials. We establish error estimates for the best parameters and the proper scale for this measure by comparing two artificial light curves generated in each of 10,000 trials. This procedure yields best parameters of $`v_0=0.0016\pm 0.0009`$ mag and $`a_v=0.033\pm 0.005`$ mag for the residual V light curve. Artificial light curves with these parameters have a high probability, 68% or larger, of being drawn randomly from the same distribution as the actual data. The offset of the model light curve is consistent with zero. The amplitude of the model is roughly twice the quoted 1$`\sigma `$ error of 0.015 mag. The artificial data sets have periodic variations similar to the real data sets. Periods of 10–100 days are common in the artificial B and V light curves. Mean light curves folded on these periods have amplitudes, 0.01 $`\pm `$ 0.003 mag, similar to those quoted above for the real data. The ‘best’ period is different in each artificial data set, but the amplitude is nearly constant. This amplitude is small compared to the random component of the fluctuation. We suspect the periodic variations are due to the sampling of the light curve. Such ‘periodicities’ illustrate some of the dangers of period analysis. There are two explanations for the 0.03 mag variations in the residual V light curve. A simple test should distinguish real fluctuations from measurement error. Real fluctuations in V should be accompanied by correlated variations in the color indices, U–B, B–V, and V–R. Color variations should be uncorrelated with the V brightness and with each other if the measurement error is 0.03 mag in V instead of 0.015 mag. In either case, the Monte Carlo model demonstrates that the fluctuations occur on short timescales. The typical observation frequency is $``$ 1 day<sup>-1</sup> (344 out of 663 observations), 0.5 day<sup>-1</sup> (82 observations), or 0.333 day<sup>-1</sup> (54 observations). Fluctuations must occur on timescales $``$ 1 day based on the success of the Monte Carlo model in reproducing the light curve. ### 3.3 Color Variations Figure 3 shows the variations in the optical colors as a function of time. The solid line in each of the left-hand panels is the seasonal mean color for the individual data points. There is little evidence for a substantial color change in FU Ori during a 0.3 mag decline in V. The seasonal means for B–V and V–R are constant to within the photometric errors. The variation in the mean U–B color is roughly twice the photometric error but shows no obvious trend with time. Despite the lack of long-term variation in the colors, there are considerable short-term variations. The right hand panels in Figure 4 show the color fluctuations about the seasonal means in the left hand panels. The full amplitudes of the color variations are $``$ 0.2 mag in U–B, and $``$ 0.1 mag in B–V and V–R. These variations are 3–5 times the quoted photometric errors. Figures 4–5 show the correlation of color variations with the V-band fluctuations analyzed in §3.2. To construct these plots, we derived separate seasonal means for the V-band observations associated with each color observation (because a given color was not always obtained with each V measurement) and subtracted the appropriate seasonal mean from each data point. The Spearman rank correlation coefficient is 1.3 $`\times 10^2`$ for 438 $`\delta `$(U–B)–$`\delta `$V pairs, 2.6 $`\times 10^9`$ for 626 $`\delta `$(B–V)–$`\delta `$V pairs, and 2.9 $`\times 10^{11}`$ for 622 $`\delta `$(V–R)–$`\delta `$V pairs. We fit each correlation with a straight line using the Press et al. (1992) subroutine FITEXY, assuming that the 1$`\sigma `$ errors in each coordinate are equal to the quoted photometric errors. The results yield: $$\delta (UB)=0.0002\pm 0.0034(0.40\pm 0.14)\delta V$$ (5) $$\delta (BV)=(0.24\pm 8.0)\times 10^4(0.12\pm 0.02)\delta V$$ (6) $$\delta (VR)=(0.015\pm 6.1)\times 10^4+(0.15\pm 0.02)\delta V$$ (7) The correlation between $`\delta `$(U–B) and $`\delta `$V is weak; U–B becomes redder as the source becomes brighter. Both $`\delta `$(B–V) and $`\delta `$(V–R) correlate well with $`\delta `$V; as the source brightens, B–V becomes redder while V–R becomes bluer. The slopes of the short-term color variations in equations (5)–(7) differ from the apparent slope of the long-period wave in the B and visual light curves. The lack of a clear B variation associated with the visual wave suggests $`\delta (BV)C\delta V`$, with $`C`$ 1. This behavior suggests that the wave and the short-term fluctuations have different physical origins. Analysis of photoelectric data with a longer time baseline is needed to verify this point. To test the accuracy of the correlation coefficients and measured slopes, we constructed artificial color curves using the Monte Carlo model described in §3.2. We matched model color curves having amplitudes similar to the observations to model V-band light curves, measured the correlation coefficients, and derived the slopes of straight line fits to the artificial residuals. We repeated this exercise using known correlations of color index with brightness. Random light curves yield no correlation between color index and brightness. Correlated changes in color index with brightness yield the measured correlation coefficients if the amplitude of the brightness variation is $`a_v`$ = 0.036 $`\pm `$ 0.007, if the slopes and 1$`\sigma `$ errors of the color variations are those quoted in Equations (5)–(7), and if the photometric errors are 0.015 $`\pm `$ 0.01 in V, 0.02 $`\pm `$ 0.01 in B–V and V–R, and 0.07 $`\pm `$ 0.02 in U–B. The small color variations make it difficult to find robust correlations among the color indices. There is a 3$`\sigma `$ correlation between U–B and B–V, with a Spearman rank coefficient of $`4.7\times 10^3`$. The Spearman rank correlation coefficient is much larger, 0.25, for B–V and V–R. To test the importance of these results, we repeated these tests using artificial data sets with known correlations between the color indices. The correlation between $`\delta `$(U–B) and $`\delta `$(B–V) is always detected in data with small photometric errors; the Spearman rank correlation coefficient is $`10^3`$ or smaller in all of our trials. The correlation between $`\delta `$(B–V) and $`\delta `$(V–R) has 2$`\sigma `$ or smaller significance in all of our trials. Reducing the photometric errors by factors of 2–3 allows us to recover known correlations at the 3$`\sigma `$ level. These results provide good evidence that the observed variations in brightness and color are intrinsic to FU Ori. The correlations between the B–V or V–R color and the V brightness are robust. The measured correlation coefficients are reasonable given the magnitude of the color changes and the photometric errors. We next consider the physical nature of the brightness changes and then compare these properties with the flickering observed in other accreting systems. ### 3.4 Physical Nature of Light and Color Variations The observed optical light and color variations in FU Ori are small, $``$ 0.035 mag in V, $``$ 0.015 mag in U–B, and $``$ 0.004 mag in B–V and V–R. To understand the origin of this variability, we consider the observed colors as small perturbations about the colors of the ‘average’ source in FU Ori. We adopt the mean colors of the system as the average colors, $$UB=0.84BV=1.35VR=1.15.$$ (8) and correct these colors for interstellar reddening assuming a standard extinction law (Mathis (1990)) and $`A_V`$ = 2.2 mag (Kenyon et al. (1988)), $$(UB)_0=0.33(BV)_0=0.64(VR)_0=0.60.$$ (9) Figure 6 compares the reddening-corrected color indices of FU Ori with colors for F and G supergiants as indicated in the plot. The average colors of FU Ori, shown as the box, are offset from standard stellar loci. The solid line in each panel shows how the colors change as the source brightens; the colors move away from the supergiant locus if the source fades in brightness. Both lines indicate that the best stellar match to the changes in brightness and color is a G0 supergiant. The dashed lines indicate how the colors change if the slopes of the relation between the V-band brightness and the color indices are $`\pm 2\sigma `$ different from those derived in Equations (5)–(7). This result shows that the uncertainty in the stellar match is less than one spectral subclass. The best stellar match to the color variations is correlated with the adopted reddening. An F5 supergiant can match the observed color variations for $`A_V`$ = 3.2 mag; a G5 supergiant is the best match for $`A_V`$ = 1.2 mag. Intrinsic colors for $`A_V`$ 1.5 mag and $`A_V`$ 3 mag are inconsistent with the G-type optical spectrum. This constraint limits the spectral type of the variable source to F7–G3. To find a reasonable explanation for the color variations, we first consider mechanisms appropriate for single stars without circumstellar disks. Although we believe an accretion disk provides the best model for observations of all FUors, it is important to consider alternatives before we identify flickering as the source of the variability in FU Ori. Obscuration, pulsation, and rotation are obvious choices for short period variations with small amplitudes in an isolated star. Random obscuration events by small, intervening dust clouds near the central source are a popular model for rapid variations in Herbig Ae/Be stars and other pre-main sequence stars (e.g., Natta et al. (1997); Rostopchina et al. (1997); and references therein). The observed pulsational amplitude of the pre-main sequence star HR 5999, 0.013 mag (Kurtz & Marang (1995)), is close to the amplitude observed in FU Ori. The period of 5 hr is short compared to the spacing of our data; a similar period in FU Ori might well be missed by the analysis described above. Rotational modulations with small amplitudes and periods of 1–2 days are observed in many pre–main-sequence stars (Kearns et al. (1997) and references therein) and could plausibly produce a similar variation in FU Ori. Despite its attractiveness, a stellar pulsation in FU Ori seems unlikely. With an effective temperature of $``$ 6000–6500 K and a luminosity of $``$ 200 $`\mathrm{L}_{}`$ (e.g., Kenyon (1999)), FU Ori lies within the classical instability strip (see Gautschy & Saio (1995)). The bright Cepheid $`\alpha `$ UMi (Polaris) has a comparable amplitude and other Cepheids have similar periods. Recent period-luminosity relations for Cepheids (Feast & Catchpole (1997); Hindsley & Bell (1989)) yield periods of $``$ 0.8 days for FU Ori if d = 500 pc and $`A_V`$ = 2.2 mag. Although this period is reasonably consistent with our observations, FU Ori lies well above the pre–main-sequence instability strip in the HR diagram (Marconi & Palla (1998)). To test the possibility that a pulsational instability might occur anyway, we constructed artificial light curves with amplitudes of 0.035 mag and periods of 0.3–2.0 days. We sampled these light curves in time as in our real observations and added noise equivalent to our photometric uncertainties. Our failure rate for recovering known periods in the artificial light curves is small, $`<10^3`$, for all periods considered, based on 10,000 artificial light curves at each of 20 periods between 0.3 and 2.0 days. The failure rate is largest for periods of 0.5, 1.0, and 1.5 days due to the $``$ 1 day spacing of the light curve. The failure rate decreases to $`10^4`$ or less at other periods. Our inability to detect any periodicity in the FU Ori light curve and the lack of a theoretical instability strip for the observed luminosity and temperature of FU Ori is a strong indication that pulsations do not produce the observed variation. Stellar rotation is also an unlikely source of the variability. The light curve analysis for short periodicities rules out a rotational modulation of the light curve, unless the dark or bright spots responsible for the variations vary in size or intensity on timescales of several rotational periods. At the 1–2 day periods that seem most plausible, FU Ori currently rotates close to breakup. If the star conserved angular momentum during the rise to maximum, the pre-outburst rotational velocity would have exceeded the breakup velocity by a factor of $``$ 3 (see also Hartmann & Kenyon (1985)). Obscuration events similar to those envisioned in Herbig Ae/Be stars (see Rostopchina et al. (1997) and references therein) require special circumstances to explain the variability. Reddening by small dust clouds requires unusual particles to account for a very steep reddening law, $`R_V`$ 8, that changes sign at V. Light scattered off a reflection nebula might account for the different color variations of B–V and V–R, but it is difficult to derive as steep a color variation as is observed with simple geometries and a wide range of dust properties. Photopolarimetry would test this conclusion. Finally, fluctuations in the wind of FU Ori are a less plausible source of brightness changes than flickering. Errico et al. (1997) have suggested that the continuum optical depth through the wind exceeds unity in the Balmer and Paschen continua. Small variations in the optical depth due to inhomogeneities in the outflow might account for small brightness changes. If correct, this hypothesis would predict a decrease in the amplitude of the variation with decreasing wavelength, because the optical depth in the Paschen continuum decreases with decreasing wavelength. For reasonable wind temperatures and densities, $``$ 5000–10000 K and $``$ $`10^{10}`$$`10^{14}\mathrm{cm}^3`$ (Calvet et al. (1993); Hartmann & Calvet (1995)), the increase in the optical depth from B to R is roughly a factor of 3 for a gas in LTE. There is no evidence for this behavior in FU Ori. We conclude that fluctuations associated with a stellar photosphere or wind from the disk provide a poor explanation for the rapid, small-amplitude photometric variations in FU Ori. The variations in FU Ori have much in common, however, with the flickering observed in other accreting systems, such as cataclysmic variables (CVs) and low mass X-ray binary systems (Bruch (1992), 1994; Bruch & Duschl (1993); Warner (1995) and references therein). In most CVs, 0.01–1 mag fluctuations occur on the dynamical timescale, seconds to minutes, of the inner disk. Within each flicker, a CV becomes bluer as it gets brighter. The large color temperature, $``$ 20,000 K, of the variable source also plausibly confines the flickering to the inner disk in many CVs. Despite the lack of a good physical model for flickering in CVs, it clearly probes physical conditions in the inner disk. The observed variations of FU Ori are also plausibly associated with the inner regions of a circumstellar accretion disk. The timescale of the variation, $``$ 1 day, is close to the dynamical timescale of the inner disk, $``$ 0.1 day for a 1 $`\mathrm{M}_{}`$ central star with a radius of 4 $`\mathrm{R}_{}`$. The temperature of the variable source, $``$ 6000 K for a G0 supergiant, is comparable to the inner disk temperature of 6500 K derived from detailed fits to the spectral energy distribution and the profiles of various absorption lines (Kenyon et al. (1988); Bell et al. (1995); Turner et al. (1997)). Finally, the amplitude of the variation is similar to that observed in other accreting systems. Given this behavior, we believe that the variations in FU Ori are flickering and thus provide additional evidence for an accretion disk in this system. The main alternative to a variable accretion disk in FU Ori is variations associated with a magnetic accretion column. Despite the success of this model in other pre–main-sequence stars (e.g., Gullbring et al. (1996)), truncating an accretion disk with the large accretion rate, $`10^4\mathrm{M}_{}\mathrm{yr}^1`$, estimated in FU Ori requires a large magnetic field, $``$ 10 kG. Limits on the magnetic fields of other pre–main-sequence stars are much smaller, $``$ 2–3 kG (Johns-Krull et al. (1999)). A much larger field in FU Ori is unlikely. The temperature of the variable source in FU Ori, $``$ 6000 K, also seems too cool to be associated with a magnetic accretion column, where the typical temperature is $`10^4`$ K (e.g., Lamzin (1998); Calvet & Gullbring (1999)). Both of these arguments make a stronger case for flickering as the source of the variability in FU Ori. To see what we can learn about the inner disk from the variations, we consider several simple models for the flickering. We adopt the observed colors – U–B, B–V, and V–R – as the colors of the average state of the disk. We assume several sources of variability in a steady-state disk, (i) discrete changes in the mass accretion rate, $`\dot{\mathrm{M}}`$, through the entire disk, (ii) random fluctuations in the flux from any annulus in the disk, and (iii) random fluctuations in the flux from specific annuli in the disk. We chose a steady-state disk as the average source, because steady disks provide a reasonable fit to the complete spectral energy distribution of FU Ori. Several experiments with non-steady disks yield similar results. Models where the entire disk can vary in brightness do not reproduce the observations. We assume a disk composed of discrete annuli with width $`\delta R`$ at a distance $`R`$ from the central star, with $`\delta RR`$. Each annulus radiates as a star with the effective temperature assigned to the annulus. The arrows in Figure 6 indicate the color variations produced by models where the flux from each annulus is a random fraction, between 1.0 and 1.1, of the flux from the annulus of a steady-state disk. The color variation of the model clearly fails to account for the observed color variation. Allowing all annuli to vary coherently also produces color variations that disagree with the observed variation. Successful models allow only specific parts of the disk to vary in brightness. If disk annuli with the colors of F9–G1 supergiants are the only annuli that vary, the color variation of the model follows the slope of the observed variation. Our results for flickering in FU Ori are at odds with predictions of the simple steady-state accretion disk model. In steady disk models for FUors, the disk temperature rises rapidly from zero at the stellar photosphere, $`R=R_{}`$, to $`T_{max}`$ 6500–7000 K at $`R=1.36R_{}`$ and then decreases radially outward (Kenyon et al. (1988)). The G0 temperature of the flickering source has a temperature of $``$ 6000 K. Disk material with this temperature has $`R=2.5R_{}`$ in the steady model. Fluctuations in the energy output of this region, and the lack of fluctuations in hotter disk material, seem unlikely. If we associate flickering with small changes in the mass accretion rate through the disk, $`\dot{\mathrm{M}}`$, or in the scale height of the disk, $`H`$, we expect these to produce larger variations at smaller disk radii. Recent calculations indicate that the inner regions of FUor disks may be much different than predicted by the simple disk model. The steady-state temperature distribution assumes a physically thin disk, $`HR`$ (Lynden-Bell & Pringle (1974)). FUor disks are probably much thicker (Lin & Papaloizou (1985); Clarke et al. (1989), 1990). Steady models that include a self-consistent treatment of the boundary layer between the inner disk and the stellar photosphere predict large scale heights, $`H/R`$ 0.1–0.3 at $`R12R_{}`$ (Popham et al. (1993), 1996). Time-dependent models further indicate that $`H/R`$ can vary in a complicated way close to the central star (Turner et al. (1997); Kley & Lin (1999)). Both types of model predict that the disk temperature peaks just outside the stellar photosphere at $`R`$ 1.1–1.2 $`R_{}`$. The decline in disk temperature at smaller radii can be as large as 25%–50%. Applied to FU Ori, these models predict that the disk temperature close to the central star is $``$ 5000–6000 K, comparable to the temperature derived for the flickering source, if the peak temperature at 1.1–1.2 $`R_{}`$ is $``$ 7000 K. We propose that the flickering source in FU Ori lies between the stellar photosphere and the peak temperature in the disk at 1.1–1.2 $`R_{}`$. In the models described above, this region produces $``$ 5% of the total optical light. Observable variations in the total light from the disk – as we have reported here – thus imply significant changes in the physical structure of the inner disk. Our data for FU Ori require 50% variations in the light output of the inner disk. Large changes in the physical structure of the disk can be avoided if the spatially thick portion of the disk occults the inner disk. If we view the disk at an inclination $`i_{crit}`$ = tan$`{}_{}{}^{1}H/R`$, small variations in $`H/R`$ can produce small changes in brightness and color. Rapid variations similar to those observed are possible if $`H/R`$ varies on the dynamical timescale and if the occulted portion of the disk radiates as a G0 I star. The required change in $`H/R`$, $``$ 10%, is small compared to the 50% change in light output needed above. The required geometry is, however, very special and yields no variation if the real viewing angle is much less than $`i_{crit}`$. Observations of V1057 Cyg and V1515 Cyg will test this idea, because these systems probably have smaller $`i`$ than FU Ori (Hartmann & Kenyon (1996)). ## 4 DISCUSSION AND SUMMARY Our results provide the first evidence for rapid photometric variations, flickering, in a FUor. The amplitude of the variation is small, $``$ 0.035, and just detectable with photoelectric data covering a long time interval. Observations with smaller photometric uncertainties are needed to verify the detection and to place better limits on the color variations. Differential photometry using a CCD on a small telescope can achieve the required precision, but the field of FU Ori has few bright comparison stars within 15–20 arcmin. The richness of the field may compensate and allow the high quality photometry needed to improve our results. Previous attempts to find similar short-term variations in a pre–main-sequence star have met with mixed success. Smith et al. (1996) placed upper limits of 0.01 mag on short-term fluctuations in four classical T Tauri stars. Gullbring et al. (1996) detected flare-like activity in the classical T Tauri star BP Tau, with amplitudes and timescales comparable to that observed in FU Ori; Hessman & Guenther (1997) noted similar behavior in three classical T Tauri stars, DG Tau, DR Tau, and DI Cep. These studies all interpreted the variations with a magnetospheric disk model, where jitter in the magnetically channeled flow from the inner disk to the stellar photosphere produces small amplitude ‘flares’. We prefer to associate the variation in FU Ori with flickering of the inner accretion disk. The accretion rate in FU Ori, $`10^4\mathrm{M}_{}\mathrm{yr}^1`$, is a factor of $``$ 1000 larger than accretion rates derived for T Tauri stars, which makes it difficult to truncate the disk with the modest magnetic fields, $``$ 1–2 kG, detected in pre-main sequence stars. Future observations can test the magnetic alternative by placing better limits on any periodic component of the photometric variation and by measuring the magnetic field strength. However these observational issues are resolved, it is clear that high precision photometry can probe the physical conditions of the inner accretion disk of a pre–main-sequence star. The amplitudes and timescales of these variations already provide some challenge to theory. The amplitude of the FU Ori variation implies large fluctuations in the physical structure of the disk on short timescales. Flares and other short-term variations in T Tauri stars suggest smaller, but still significant, changes in disk structure close to the central star. Recent hydrodynamical calculations show that the disk structure can change significantly on longer timescales, but theoretical models do not yet address rapid fluctuations in the disk similar to those observed (Kley & Lin (1999)). Future calculations that consider this behavior should lead to a better understanding of mass flow in the inner disk in FUors and other types of accreting systems. We thank F. Hessman for a careful and thoughtful review which improved our presentation of the data and our discussion of possible models. S.K. thanks N. Kylafis and C. Lada for the hospitality of the NATO ASI, The Origins of Stars and Planetary Systems. Gentle Mediterranean waves rolling onto the beaches of Crete inspired portions of this study.
no-problem/9910/astro-ph9910352.html
ar5iv
text
# Inhomogeneous Neutrino Degeneracy and Big Bang Nucleosynthesis ## I Introduction Although the standard model of Big Bang nucleosynthesis (BBN) is highly successful (for a recent discussion, see references ) many variations on this model have been proposed . One of the most frequently investigated variations on the standard model is neutrino degeneracy, in which each type of neutrino is allowed to have a non-zero chemical potential \- , and a number of recent models have been proposed to produce a large lepton degeneracy \- . More recently, Dolgov and Pagel have suggested the possibility of inhomogeneous neutrino degeneracy . Their model was proposed to explain the apparent discrepancy between various measurements of the primordial deuterium abundance in high-redshift Lyman-alpha clouds. Here we consider a more mundane possibility: that the neutrino chemical potential is inhomogeneous, but on much smaller scales. In particular, we assume that the amplitude of the inhomogeneities is small on length scales larger than the typical baryonic diffusion scales after nucleosynthesis, so that the element abundances are homogeneous today. We calculate the element abundances for this scenario and compare to observational limits. Using a method similar to that in reference , we can simulate arbitrary distributions of the neutrino chemical potential, and so determine the upper and lower bounds on the baryon-to-photon ratio $`\eta `$ in this model. In the next section, we discuss our model for inhomogeneous neutrino degeneracy and its physical consequences. In Section 3, we use a linear programming technique to calculate upper and lower bounds on $`\eta `$ in this model. Our conclusions are summarized in Section 4. We find that when the chemical potential is inhomogeneous, there are no BBN limits on the overall neutrino energy density; the only limits in this case come from other cosmological considerations such as structure formation or the CMB . Not surprisingly, inhomogeneous neutrino degeneracy allows for a wider range of values for $`\eta `$ than does homogeneous degeneracy. ## II Inhomogeneous Neutrino Degeneracy Consider first the case of homogeneous neutrino degeneracy. In this case, each type of neutrino is characterized by a chemical potential $`\mu _i`$ ($`i=e,\mu ,\tau `$), which redshifts as the temperature, so it is convenient to define the constant quantity $`\xi _i\mu _i/T_i`$. In terms of $`\xi _i`$, the neutrino and antineutrino number densities are given by $$\nu _i=\frac{1}{2\pi ^2}T_\nu ^3_0^{\mathrm{}}\frac{x^2dx}{1+\mathrm{exp}(x\xi _i)},$$ (1) and $$\overline{\nu }_i=\frac{1}{2\pi ^2}T_{\overline{\nu }}^3_0^{\mathrm{}}\frac{x^2dx}{1+\mathrm{exp}(x+\xi _i)},$$ (2) while the total energy density of the neutrinos and antineutrinos is $$\rho =\frac{1}{2\pi ^2}T_\nu ^4_0^{\mathrm{}}\frac{x^3dx}{1+\mathrm{exp}(x\xi _i)}+\frac{1}{2\pi ^2}T_{\overline{\nu }}^4_0^{\mathrm{}}\frac{x^3dx}{1+\mathrm{exp}(x+\xi _i)}.$$ (3) Degeneracy of the electron neutrinos alters the $`np`$ weak rates relevant for BBN through the number densities given in equations (1) and (2), while the change in the expansion rate due to the altered density in equation (3) affects BBN for degeneracy of any of the three types of neutrinos (see, for example, reference for a more detailed discussion). What happens if this degeneracy is not homogeneous, as assumed in almost all previous work, but instead varies with position? Dolgov and Pagel considered such a model in order to explain the discrepancy in observed deuterium abundances at high-redshift. In their model, $`\xi `$ varies on scales $`1001000`$ Mpc, producing an observable inhomogeneity in the present-day element abundances. We make the opposite assumption: we take the variation in $`\xi `$ to be small on such large scales, and large on much smaller scales, so that elements are well-mixed before the present day, erasing any detectable inhomogeneities. Although models have been proposed which produce inhomogeneities in $`\xi `$ (see, for example, the discussion in reference ), we will follow the example of reference and keep our discussion as general as possible. In general, one would expect a distribution of fluctuations in $`\xi `$ over all length scales. However, since the neutrinos are relativistic, they will free-stream and erase any fluctuations on length scales smaller than the horizon at any given time. We will make only two assumptions concerning the fluctuations in $`\xi `$: that the fluctuations are significant on large enough scales to avoid being erased by free-streaming, and that they become negligible on small enough scales that the resulting element distribution is homogenous today. The first of these conditions requires that the fluctuations are significant on scales larger than the horizon scale when the $`np`$ reactions freeze out at $`T1`$ MeV. If this were not the case, then free-streaming would erase all of the fluctuations in $`\xi `$ before BBN began. This horizon scale corresponds to a comoving length scale $`100`$ pc today. The condition that the element distribution be homogeneous today requires that the fluctuations in $`\xi `$ decrease sufficiently quickly with length scale that they have no significant effect on nucleosynthesis on scales above the element diffusion length. Although no detailed studies of element diffusion have been performed in connection with inhomogeneous BBN scenarios, it seems safe to assume that complete mixing of the primordial elements will occur on scales well within the nonlinear regime today, $`<1`$ Mpc. By requiring the fluctuations in $`\xi `$ to be negligible above this scale, we can also ignore any constraints from CMB observations, which severely constrain models with fluctuations on larger scales . Given these assumptions, we can assume that BBN takes place in separate horizon volumes, with the value of $`\xi `$ being homogeneous within each volume. At late times, the elements produced within each volume mix uniformly to produce the observed element abundances today. Note that with this set of assumptions, it is no longer meaningful to talk about the value of $`\xi `$ for the neutrinos at the present. Since the neutrinos from different horizon volumes diffuse freely up to the scale of the horizon, the different thermal distributions with different values of $`\xi `$ will combine to give a highly non-thermal neutrino distribution at late times. Thus, in the inhomogeneous scenario, it is still possible to put constraints on the present value of $`\rho _\nu `$, but it is meaningless to discuss limits on $`\xi `$, since the present neutrino distribution will be non-thermal and cannot be characterized by a single value of $`\xi `$. This effect is present at some level even at the time of nucleosynthesis. The neutrinos remain in thermal equilibrium down to a temperature $`T23`$ MeV, so that the neutrino distribution remains thermal down to this temperature, with a single unique value of $`\xi `$ in each horizon volume. However, the $`np`$ rates do not freeze out until $`T1`$ MeV, so that neutrino free-streaming after decoupling will tend to produce a somewhat non-thermal background as early as the beginning of nucleosynthesis. We have neglected this effect, which will be negligible in any case if the neutrino inhomogeneities are confined to comoving scales $`>100`$ pc. We also assume for simplicity that $`\eta `$ remains uniform in the presence of an inhomogeneous lepton distribution. This need not be the case if, for example, baryogenesis is related in some way to the lepton number . ## III The Effect on Big Bang Nucleosynthesis Given our discussion in the previous section, we assume that the distribution of each type of neutrino is homogeneous within a given horizon volume during nucleosynthesis, and characterized by a single degeneracy parameter $`\xi _i`$ ($`i=e,\mu ,\tau `$). Different horizon volumes may have different values of $`\xi _i`$, so we characterize the distribution of $`\xi _i`$ by a distribution function $`f(\xi _i)`$, which gives the probability that a given horizon volume has a value of $`\xi _i`$ between $`\xi _i`$ and $`\xi _i+d\xi _i`$. What form should we choose for $`f(\xi _i)`$? In analogy with the distribution of primordial density perturbations (and in accordance with the central limit theorem) the most obvious choice is a Gaussian distribution. However, we can consider a more general case than this. Using linear programming techniques like those in reference , it is possible to analyze the general case of an arbitrary distribution $`f`$. Consider first the case where only $`\nu _e`$ is degenerate, and suppose that $`f(\xi _e)`$ is an arbitrary distribution. Then all of the element abundances will be functions of $`\xi _e`$ (for fixed $`\eta `$), and we can write, for a given nuclide $`A`$, $$\overline{X}_A=_{\mathrm{}}^{\mathrm{}}X_A(\xi _e)f(\xi _e)𝑑\xi _e,$$ (4) where $`X_A(\xi _e)`$ is the mass fraction of $`A`$ as a function of $`\xi _e`$, and $`\overline{X}_A`$ is the mass fraction of $`A`$ averaged over all space; after the matter is thoroughly mixed, $`\overline{X}_A`$ will be the final observed primordial element abundance. In order to test all possible distribution functions $`f`$, we can divide the range in $`\xi _e`$ into discrete bins (not necessarily all of the same size), and approximate the integral in equation (4) as a sum: $$\overline{X}_A=\underset{j}{}X_{Aj}f_j\mathrm{\Delta }\xi _{ej},$$ (5) where the dependence of $`X_A`$ and $`f`$ on $`\xi _e`$ is expressed through their dependence on the bin number $`j`$. For each of the elements of interest (<sup>4</sup>He, D, and <sup>7</sup>Li) we have an upper and a lower observational bound. Thus, for each of these three elements, we can write down equations of the form: $$X_{\mathrm{lower}\mathrm{bound}}<\underset{j}{}X_{Aj}f_j\mathrm{\Delta }\xi _{ej}<X_{\mathrm{upper}\mathrm{bound}}.$$ (6) Furthermore, $`f(\xi _e)`$ is normalized to unity, so $$\underset{j}{}f_j\mathrm{\Delta }\xi _{ej}=1.$$ (7) If we now define $$p_jf_j\mathrm{\Delta }\xi _{ej},$$ (8) then equations (6) and (7) become: $$X_{\mathrm{lower}\mathrm{bound}}<\underset{j}{}X_{Aj}p_j<X_{\mathrm{upper}\mathrm{bound}},$$ (9) and $$\underset{j}{}p_j=1.$$ (10) If we put an upper and lower cutoff on these sums, so that we retain only a finite number of terms, then equations (9) and (10) are in the form of the constraint equations in a linear programming problem, with the N independent variables being the $`p_j`$’s. In reference , the variable under consideration was $`\eta `$ rather than $`\xi `$, so that the final quantity which needed to be maximized or minimized was the mean value of $`\eta `$. In our case, we wish to determine, for a given value of $`\eta `$, whether there is a solution to equations (9) and (10). Since there are non-BBN limits on $`\rho _\nu `$, we have chosen to take the quantity $`\rho _\nu ^{}/\rho _\nu `$ as our objective function, where $`\rho _\nu ^{}`$ is the final mean total neutrino density in the degenerate case, and $`\rho _\nu `$ is the neutrino density in the absence of degeneracy. (These densities include all three neutrinos and antineutrinos). We then determine whether a solution exists to our constraint equations for a given value of $`\eta `$, and scan through the allowed range of $`\eta `$ until we reach an upper and lower value of $`\eta `$ for which a solution no longer exists. At these limiting values for $`\eta `$, our linear programming routine gives the minimum possible value of $`\rho _\nu ^{}/\rho _\nu `$, which we can compare to other constraints. We consider two representative cases of interest: first, the case where $`\xi _e0`$ and $`\xi _\mu =\xi _\tau =0`$, which is equivalent to $`\xi _e\xi _\mu ,\xi _\tau `$, and the case $`\xi _e=\xi _\mu =\xi _\tau `$. The latter is probably the most physically realistic case . Although we have discussed our linear programming procedure only for the case of $`\nu _e`$ degeneracy, it generalizes in an obvious way for the case where $`\xi _e=\xi _\mu =\xi _\tau `$. We have not considered the most general possible case, in which all three degeneracy parameters vary independently. However, as we shall see, arbitrary inhomogeneity in $`\xi _e`$ alone allows absurdly large values of $`\eta `$ to be compatible with BBN, so there is nothing further to be gained in considering the most general case. We use for our limits on the element abundances the values in the recent review in reference . For the primordial helium-4 mass fraction, $`\mathrm{Y}_P`$, we take $$0.228\mathrm{Y}_P0.248.$$ (11) The limits on the number ratios of deuterium and lithium-7 to hydrogen are: $$2.9\times 10^5\mathrm{D}/\mathrm{H}4.0\times 10^5,$$ (12) and $$1.3\times 10^{10}{}_{}{}^{7}\mathrm{Li}/\mathrm{H}2.0\times 10^{10}.$$ (13) However, a BBN calculation with these limits alone yields no single value of $`\eta `$ consistent with all three sets of limits. One can argue either that the theoretical uncertainties are large enough to account for this discrepancy , or that one of these sets of limits (most likely lithium) does not represent the true primordial abundance . We have chosen the former approach. Folding in the theoretical uncertainties in the BBN predictions from reference , we take the following limits on D/H and <sup>7</sup>Li/H: $$2\times 10^5\mathrm{D}/\mathrm{H}5\times 10^5,$$ (14) and $$1\times 10^{10}{}_{}{}^{7}\mathrm{Li}/\mathrm{H}4\times 10^{10}.$$ (15) We have ignored the theoretical uncertainty in helium-4 because it represents a much smaller fractional change in Y<sub>P</sub>. We wish to emphasize that our general results are fairly insensitive to small changes in the limits quoted above. Since we are exploring a rather radical change to the standard model, we make no effort to perform an ultra-high-precision calculation. We used the procedure discussed above to determine the largest and smallest values of $`\eta `$ which are consistent with the limits on Y<sub>P</sub>, D/H, and <sup>7</sup>Li/H in equations (11), (14), and (15). Our mixing procedure requires the use of mass fractions, rather than ratios to hydrogen, so we have made this conversion in our calculation. Consider first the “standard model” with no degeneracy. For the limits quoted above, we obtain bounds on $`\eta `$ of $`3.7\times 10^{10}\eta 5.3\times 10^{10}`$. Now what happens if we add a homogeneous neutrino degeneracy? We have calculated the bounds on $`\eta `$ for the case in which $`\xi _e`$ (only) can have an arbitrary value, and for the case where $`\xi _e=\xi _\mu =\xi _\tau `$ can be set to any desired value. For both cases, we find that the bounds on $`\eta `$ are almost unchanged. (The lower and upper limits are both enlarged by less than 2%). While this might seem surprising in light of earlier similar calculations , it is a consequence of the increasingly narrower limits on the primordial element abundances. With such sharp limits as those considered here, even a free variation in $`\xi _e`$ or in $`\xi _e=\xi _\mu =\xi _\tau `$ cannot significantly alter the limits on $`\eta `$. (We could obtain a larger range in $`\eta `$ by allowing $`\nu _e`$ and either $`\nu _\mu `$ or $`\nu _\tau `$ to vary independently, but we would still expect a narrower allowed range than in reference because of the improved observational limits). Now we proceed to the case of inhomogeneous degeneracy. As we have noted previously, there is no well-defined mean final value of $`\xi _i`$ in this case, since the neutrinos mix at late times to produce a non-thermal distribution. However, the mean final value of $`\rho _\nu `$ is still well-defined, so we can attempt to constrain it with BBN. Consider first the case of $`\xi _e\xi _\mu ,\xi _\tau `$. In this case, all of the element abundances go to zero in the limit of large $`\xi _e`$. Thus, if we take $`f(\xi _e)`$ to have the form $`f(\xi _e=0)1`$, and $`f(\xi _e=\xi _0)=f_01`$, where $`\xi _0`$ is a sufficiently large value of $`\xi _e`$ such that all of the element abundances are neglible, then as we take the limit where $`\xi _0\mathrm{}`$, the element abundances approach their values in the standard nondegenerate model, while $`\rho _{\nu _e}`$ goes to infinity. Thus, in the case of inhomogeneous $`\nu _e`$ degeneracy, there is no BBN limit on $`\rho _{\nu _e}`$. Of course, there are other cosmological limits on $`\rho _\nu `$ in this case, from the requirement that structure formation not be disrupted by the extra radiation and that the extra radiation not distort the CMB fluctuation spectrum . Our argument also applies to the case where all three neutrinos have equal chemical potentials. There are still interesting limits to be placed on $`\eta `$. To determine these limits, we calculated the BBN element abundances for a grid of values of $`\xi `$. We took $`\xi `$ in steps of $`\mathrm{\Delta }\xi =1.0`$ between $`\xi =60`$ and $`\xi =10`$. We embedded a smaller grid between $`\xi =1.0`$ and $`\xi =1.0`$ in steps of $`\mathrm{\Delta }\xi =0.05`$. In calculating the element abundances for the degenerate case, we used the approximation given in reference for the decrease in the neutrino temperature at large $`\xi `$. Although rough, this approximation is adequate for our purposes. For the case $`\xi _e\xi _\mu ,\xi _\tau `$, we find acceptable solutions for $`\eta `$ in the range $$3.0\times 10^{10}\eta 1.1\times 10^8,$$ (16) while for the case $`\xi _e=\xi _\mu =\xi _\tau `$, we have $$3.1\times 10^{10}\eta 1.0\times 10^9.$$ (17) The actual $`\xi `$ values of the non-zero bins, along with the corresponding values for $`p_j`$ and $`\rho _\nu ^{}/\rho _\nu `$ are given in tables I and II. Note that our linear programming method will always yield a final optimal distribution for the $`p_j`$’s in which at most seven of the bins are non-zero (since equations (9) and (10) correspond to a total of seven constraint equations); effectively, this corresponds to a final distribution for $`f(\xi )`$ which is a sum of at most seven delta functions (see references for a more detailed discussion). We see that allowing for a free distribution of the degeneracies significantly increases the upper bound on $`\eta `$, particularly for the case of $`\xi _e\xi _\mu ,\xi _\tau `$, but decreases the lower bound only slightly. Furthermore, the minimum increase in the neutrino density needed to achieve these lower bounds is inconsistent with both structure formation considerations and CMB observations . On the other hand, the value of $`\rho _\nu ^{}/\rho _\nu `$ needed to achieve the upper bounds on $`\eta `$ is well within the regime allowed by both structure formation and the CMB. ## IV Discussion Our results indicate that, not surprisingly, the introduction of inhomogeneous neutrino degeneracy allows for a much wider range of $`\eta `$ within the constraints of BBN. Current limits on the primordial element abundances are so tight that even models with homogeneous degeneracy are tightly constrained. Similarly, inhomogeous neutrino degeneracy does not allow for a significant decrease in $`\eta `$, and such models tend to give a neutrino energy density in conflict with other cosmological limits. It is quite impressive that even with the radical model discussed here, the limits on the primordial element abundances have become so tight that a significantly lower value of $`\eta `$ cannot be achieved. On the other hand, inhomogeneous neutrino degeneracy can increase the upper bound on $`\eta `$ to quite large values: up to $`\eta =1.0\times 10^9`$ for the case of equal degeneracies in all three neutrinos, and $`\eta =1.1\times 10^8`$ if only $`\nu _e`$ is degenerate. These correspond to $`\mathrm{\Omega }_bh^2=0.036`$ and $`0.40`$, respectively. The distributions of $`\xi `$ which produce these extreme values for $`\eta `$ do not correspond to physically likely models. The usefulness of our linear programming calculation is that it allows us to establish upper and lower bounds on $`\eta `$ for arbitrary distributions of $`\xi `$, while at the same time giving the smallest value for $`\rho _\nu ^{}/\rho _\nu `$ corresponding to a given value of $`\eta `$. Any other distribution $`f(\xi )`$ is guaranteed to give values of $`\eta `$ which lie inside of our bounds. This work does not exhaust the possible models with spatially-varying $`\xi `$. It is possible to use our methodology to investigate models in which two or three neutrino degeneracies are independent. In addition, if baryogenesis is related to the lepton number in some way , then one would expect a correlation between $`\eta `$ and $`\xi `$ at each point in space. Given a specification for this correlation, such models could also be examined within the framework we have outlined here. Less general but more physically realistic distributions for $`f(\xi )`$ (e.g. a Gaussian distribution) could also be considered. ###### Acknowledgements. We thank A. Dolgov, G. Steigman and D. Weinberg for helpful discussions. S.E.W. was supported at Ohio State under the NSF Research Experience for Undergraduates (REU) program (NSF PHY-9605064). R.J.S. is supported by the Department of Energy (DE-FG02-91ER40690).
no-problem/9910/cond-mat9910319.html
ar5iv
text
# Metal-insulator transitions: Influence of lattice structure, Jahn-Teller effect, and Hund’s rule coupling ## Abstract We study the influence of the lattice structure, the Jahn-Teller effect and the Hund’s rule coupling on a metal-insulator transition in A<sub>n</sub>C<sub>60</sub> (A= K, Rb). The difference in lattice structure favors A<sub>3</sub>C<sub>60</sub> (fcc) being a metal and A<sub>4</sub>C<sub>60</sub> (bct) being an insulator, and the coupling to H<sub>g</sub> Jahn-Teller phonons favors A<sub>4</sub>C<sub>60</sub> being nonmagnetic. The coupling to H<sub>g</sub> (A<sub>g</sub>) phonons decreases (increases) the value $`U_c`$ of the Coulomb integral at which the metal-insulator transition occurs. There is an important partial cancellation between the Jahn-Teller effect and the Hund’s rule coupling. The competition between the Coulomb repulsion, the kinetic energy, the Jahn-Teller effect and the Hund’s rule coupling leads to interesting physics. Examples are perovskites, e.g., the manganites, and alkali-doped fullerenes. Here we focus on the metal-insulator transition for an integer number of electrons per site. This is particularly relevant for the fullerenes, since A<sub>3</sub>C<sub>60</sub> (A= K, Rb) is a metal while A<sub>4</sub>C<sub>60</sub> is a nonmagnetic insulator. According to band theory both are metals, and A<sub>4</sub>C<sub>60</sub> must therefore be an insulator due to interactions left out in band structure calculations. The metal insulator transition in a correlated system is usually discussed in terms of the ratio $`U/W`$, where $`U`$ is the Coulomb interaction between two electrons on the same molecule and $`W`$ is the one-particle band width $`W`$. The ratio $`U/W`$ is, however, almost identical for A<sub>3</sub>C<sub>60</sub> and A<sub>4</sub>C<sub>60</sub>. The question is then why not both systems are either metals or insulators. To study this, we apply the dynamical mean-field theory (DMFT), projection Quantum Monte-Carlo (QMC) and exact diagonalization techniques to models of A<sub>n</sub>C<sub>60</sub>. For the Fullerenes it is believed that $`U/W1.52.5`$. In spite of this large ratio, these systems are close to a metal-insulator transition due to the orbital degeneracy $`N=3`$ of the partly occupied $`t_{1u}`$ band. The lattice structure is fcc for A<sub>3</sub>C<sub>60</sub> and bct for A<sub>4</sub>C<sub>60</sub>. The important electron-phonon coupling is to H<sub>g</sub> Jahn-Teller phonons. We find that the difference in lattice structure alone can explain why A<sub>3</sub>C<sub>60</sub> is a metal but A<sub>4</sub>C<sub>60</sub> is an insulator and that the electron-phonon coupling can explain why A<sub>4</sub>C<sub>60</sub> is nonmagnetic. We find an important competition between the Jahn-Teller effect and the Hund’s rule coupling. The H<sub>g</sub> and A<sub>g</sub> intramolecular phonons are found to have the opposite effect on the critical $`U_c`$, for which the metal-insulator transition occurs. We consider a model of A<sub>n</sub>C<sub>60</sub> which includes a three-fold degenerate $`t_{1u}`$ level on each molecule and the hopping between different molecules $$H_{\mathrm{hop}}=\underset{\sigma ,m}{}\epsilon _{t_{1u}}n_{i\sigma m}+\underset{<ij>\sigma mm^{}}{}t_{ijmm^{}}\psi _{i\sigma m}^{}\psi _{j\sigma m^{}},$$ (1) where $`\psi _{i\sigma m}^{}`$ creates an electron on molecule $`i`$ with the quantum number $`m`$ and spin $`\sigma `$. The hopping matrix elements $`t_{ijmm^{}}`$ include the orientational disorder and the lattice structure, with nearest neighbor hopping for the fcc structure and a weak second nearest neighbor hopping for the bct structure. The Coulomb interaction is given by $`H_\mathrm{U}=`$ $`U_{xx}{\displaystyle \underset{im}{}}n_{im}n_{im}+U_{xy}{\displaystyle \underset{i\sigma \sigma ^{^{}}}{}}{\displaystyle \underset{m<m^{^{}}}{}}n_{i\sigma m}n_{i\sigma ^{^{}}m^{^{}}}`$ (2) $`+`$ $`{\displaystyle \frac{1}{2}}K{\displaystyle \underset{i\sigma \sigma ^{^{}}}{}}{\displaystyle \underset{mm^{^{}}}{}}\psi _{im\sigma }^{}\psi _{im^{^{}}\sigma ^{^{}}}^{}\psi _{im\sigma ^{^{}}}\psi _{im^{^{}}\sigma }`$ (3) $`+`$ $`{\displaystyle \frac{1}{2}}K{\displaystyle \underset{\sigma }{}}{\displaystyle \underset{mm^{^{}}}{}}\psi _{m\sigma }^{}\psi _{m\sigma }^{}\psi _{m^{^{}}\sigma }\psi _{m^{^{}}\sigma },`$ (4) where $`U_{xx}`$ and $`U_{xy}`$ describe the interaction between equal and unequal orbitals, respectively. $`K`$ is an exchange integral and $`U_{xx}=U_{xy}+2K`$. Finally we include the interaction with a five-fold degenerate H<sub>g</sub> phonon on each site $$H_{\mathrm{ph}}=\omega _{ph}\underset{i\nu }{}b_{i\nu }^{}b_{i\nu }+\frac{g}{2}\underset{i\nu \sigma mm^{^{}}}{}V_{mm^{^{}}}^{(\nu )}c_{im\sigma }^{}c_{im^{^{}}\sigma }(b_{i\nu }+b_{i\nu }^{}),$$ where $`b_{i\nu }`$ creates a phonon with the quantum number $`\nu `$ on the molecule $`i`$. The matrices $`V_{mm^{^{}}}^{(\nu )}`$ are determined by symmetry. The coupling constant $`g`$ is related to the dimensionless electron-phonon coupling $`\lambda =(5/3)N(0)g^2/\omega _{ph}`$. We also consider the coupling to A<sub>g</sub> phonons, for which $`V_{mm^{^{}}}^\nu `$ is diagonal in $`m`$ and $`m^{^{}}`$. In a first step we analyze the effect of the lattice structure alone, neglecting the electron-phonon coupling ($`g=0`$) and the multiplet effects ($`K=0`$ and $`U_{xx}=U_{xy}U`$). We use a projection Quantum Monte-Carlo (QMC) $`T=0`$ method in the fixed node approximation, which gives quite accurate ground-state results for this model. A<sub>3</sub>C<sub>60</sub> and A<sub>4</sub>C<sub>60</sub> differ in the number $`n`$ of conduction electrons per site and in the lattice structures. For a fcc lattice, $`n=3`$ and $`n=4`$ give Mott transitions at almost the same $`U_c`$. We therefore focus on the difference in lattice structure, and consider $`n=4`$ for clusters with $`M`$ molecules put on fcc or bct lattices. The band gap for filling $`n`$ is $$E_g=E(nM+1)+E(nM1)2E(nM),$$ (5) where $`E(N)`$ is the energy of a system with $`N`$ electrons. We want to extrapolate to $`M\mathrm{}`$ and determine the $`U_c`$ for which $`E_g`$ is zero. To reduce the finite size effects, we add $$\stackrel{~}{E}_g(U)=E_g(U)\frac{U}{M}E_g(U=0),$$ (6) where $`E_g(U=0)`$ is the band gap for $`U=0`$. These corrections go to zero for large $`M`$, but they improve the extrapolation $`M\mathrm{}`$. Fig. 1 shows that the metal-insulator transition happens for a substantially smaller $`U/W`$ for the bct ($`U_c/W1.3`$) than for the fcc structure ($`U_c/W2.3`$). The insulating state is antiferromagnetic. To understand these results, we note that on the fcc lattice it is possible to hop on a triangle, i.e., to return to the original site after three hops. On a bct lattice, on the other hand, this is not possible if the small second nearest neighbor hopping integrals are neglected. The simplest systems with these properties are a triangle and a square, each site having a level with spin but no orbital degeneracy. A nearest neighbor hopping integral $`t<0`$ connects the orbitals. The one-particle spectrum is $`\pm 2t`$ for the square and $`2|t|`$ and $`t`$ for the triangle. For the triangle there is a state with maximum bonding character ($`2|t|`$), but it is not possible to construct an optimally anti-bonding state, due to the presence of frustration. Thus the one-particle band width are $`W=3t`$ and $`4t`$ for the triangle and the square, respectively. The curves in Fig. 1 mainly differ in the large $`U`$ limit and we therefore consider this limit. We construct the many-body states of the triangle with two, three and four electrons, which determine the band gap (Eq. (5)). The energy $`E(3)=O(t^2/U)`$, since hopping is suppressed to order $`t/U`$. For the case of four electrons, we construct all states with the minimum (one) double occupancy and $`S_z=0`$. These states describe how the double occupancy hops around the triangle. The original state is, however, not recovered after one loop, since the spins on the sites with a single occupancy have been flipped. Moving the double occupancy around the triangle a second time restores the spins and the original state is recovered after six moves. The corresponding $`6\times 6`$ matrix has the extreme eigenvalues $`\pm 2t`$. In the lowest many-body state of the triangle with four electrons, it is therefore not possible to restore the state in an odd number of hops, and the frustration does not show up. In a similar way we obtain the lowest energy $`2t`$ for the two-electron state. The square has the same energies. Thus $`\begin{array}{cc}E_g=U4|t|=U\frac{4}{3}W\hfill & \text{for a triangle}\hfill \\ E_g=U4|t|=UW\hfill & \text{for a square}\hfill \end{array}`$ (9) Both the triangle and the square have no frustration in their many-body states, and for fixed $`t`$ the gaps are the same. The one-particle band width $`W`$, however, is reduced by the frustration in the triangle, and expressing the $`E_g`$ in terms of $`W`$ requires a larger prefactor in the frustrated case. These results give a qualitative explanation of Fig. 1. Although the calculation above can explain why A<sub>4</sub>C<sub>60</sub> is an insulator while A<sub>3</sub>C<sub>60</sub> a metal, it incorrectly predicts A<sub>4</sub>C<sub>60</sub> to be antiferromagnetic. The calculation neglects, however, the coupling to the Jahn-Teller phonons, which tends to make A<sub>4</sub>C<sub>60</sub> a nonmagnetic insulator. The electron-phonon interaction has been estimated from photoemission experiments for a free molecule. We describe the eight H<sub>g</sub> phonons by an effective mode, with the logarithmically averaged frequency $`\omega _{ph}=0.089`$ eV, and the effective coupling $`g=0.089`$ eV. For a free molecule this leads to a singlet being 0.29 eV below the lowest triplet. This triplet-singlet splitting is larger than an experimental estimate of 0.1 eV for A<sub>4</sub>C<sub>60</sub>. The splitting is, however, reduced by the competition with the Hund’s rule coupling. An estimate of the exchange integral $`K`$ based on an ab initio SCF calculation gave $`K=0.11`$ eV. This number is, however, expected to be reduced by correlation effects. For instance, for atomic multiplets a reduction by 25 $`\%`$ has been found. Indeed, we find that the experimental triplet-singlet splitting is reproduced by using $`K=0.07`$ eV. Since the metal-insulator transition depends on a competition between the kinetic and Coulomb energies, and since we may expect the electron-phonon coupling to reduce the hopping, we may expect this to reduce $`U_c`$. We therefore study the effect of phonons on $`U_c`$ (for $`K=0`$). For this purpose we apply the dynamical-mean field theory DMFT. We use hopping integrals for a Bethe lattice in the infinite dimensional limit $`t_{imjm^{^{}}}t^{}\delta _{mm^{^{}}}/\sqrt{z}`$, where $`z\mathrm{}`$ is the connectivity. The impurity model, resulting in the DMFT, is solved with a QMC method. The phonon fields are treated fully quantum mechanically, and they are updated together with the Fermion auxiliary fields in each Monte Carlo step. We use the one-particle band width $`W=2`$ and a Trotter break up $`\mathrm{\Delta }\tau =1/3`$. For an insulator $`G(\tau =\beta /2)`$ decays exponentially with $`\beta `$, where $`G(\tau )`$ is the electron Green’s function on the imaginary time axis. We therefore use $`G(\beta /2)`$ to determine whether the system is a metal or an insulator. We first compare the coupling to A<sub>g</sub> and H<sub>g</sub> phonons for $`n=3`$. Fig. 2a shows that $`G(\beta /2)`$ is reduced as $`U/W`$ is increased, since the system gets closer to a metal-insulator transition. For $`\lambda =0`$ extrapolation suggests a rather large $`U_c/W`$. For H<sub>g</sub> phonons an increase in $`\lambda `$ leads to a rapid reduction of $`G(\beta /2)`$ and $`U_c`$, while for $`A_g`$ phonons this leads to an increase in $`G(\beta /2)`$ and $`U_c`$. To understand these results we study a free molecule (Table I) and a system consisting of two molecules (dimer) (Table II) in the limit $$K\frac{g^2}{\omega _{ph}}E_{JT}\omega _{ph}WU.$$ (10) Table II shows the energy gap of the dimer. In agreement with the full DMFT results ($`K=0`$ and $`n=3`$) the gap is increased by a coupling to H<sub>g</sub> but decreased by a coupling to A<sub>g</sub> phonons. We first consider the A<sub>g</sub> case. Since $`V_{mm^{^{}}}=\delta _{mm^{^{}}}V_{mm}`$ we can transform the electron-phonon coupling to the form $$g\underset{i}{}(n_in)(b_i+b_i^{}),$$ (11) where $`n_i`$ is the total occupation number operator for site $`i`$ and $`n`$ is the (integer) filling. An irrelevant constant has been neglected. We first study the state with $`2n`$ electrons. In the limit $`WU`$ hopping is suppressed, and $`n_in0`$. The coupling (Eq. (11)) is then negligible, and the electron-phonon contribution to the energy is small. In the case of an extra electron or hole, however, this additional charge can hop even for $`WU`$. The coupling to the phonons then lowers the energy, and according to Eq. (5) this reduces the gap. For coupling to H<sub>g</sub> phonons, the state with $`2n`$ electrons can lower its energy via the (dynamic) Jahn-Teller effect. Since hopping is very efficiently suppressed, the energy gain is accurately given as twice the energy for a free molecule (Table I). In the case of an extra electron or hole, on the other hand, hopping dominates over the Jahn-Teller effect in the limit (10). The system can then only take advantage of this effect to the extent that it does not interfere with the hopping. The electron-phonon coupling then gives a much smaller lowering of the energy than for the state with $`2n`$ electrons, which increases the gap (Eq. (5)). Fig. 2b shows results for coupling to H<sub>g</sub> phonons and filling $`n=4`$. $`U_c/W`$ is smaller than for $`n=3`$, although the lattice structure is the same as for $`n=3`$. This can be understood from Table I, which shows that the energy gain in the free molecule due to the electron-phonon coupling is larger for $`n=4`$. This enters in $`E(nM)`$, while the electron-phonon coupling plays a smaller role for $`E(nM\pm 1)`$. The electron-phonon coupling alone would then tend to favor A<sub>4</sub>C<sub>60</sub> being an insulator and A<sub>3</sub>C<sub>60</sub> being a metal. As we will see below, this effect is, however, partly cancelled by the Hund’s rule coupling. The coupling to the H<sub>g</sub> phonons pushes $`U_c`$ for A<sub>3</sub>C<sub>60</sub> to the lower end of the physical range of $`U/W`$, raising some questions of why not also A<sub>3</sub>C<sub>60</sub> is an insulator. Although, the A<sub>g</sub> phonons tends to increase $`U_c`$, this should not be important due to the weak coupling to the A<sub>g</sub> phonons. However, there is a substantial coupling to a plasmon in A<sub>3</sub>C<sub>60</sub>. This should tend to increase $`U_c`$, since it couples to the electrons in the same way as the A<sub>g</sub> phonons. Below we show that the Hund’s rule coupling also plays an important role in this context. We next consider the effects of the Hund’s rule coupling ($`K>0`$). Since these terms in Eq. (2) lead to a sign-problem in the DMFT QMC calculation, we use exact diagonalization. To reduce the size of the Hilbert space we consider a four-site system with two-fold orbital and phonon degeneracies. The nearest neighbor hopping $`t_{im,jm^{^{}}}=t_{ij}\delta _{mm^{^{}}}`$ is chosen randomly, thus reducing the degeneracy and the one-particle spacing. We limit the size of the Hilbert space by allowing a maximum of two phonons per site. Due to this limitation, the calculation is not fully converged for the larger coupling constants considered below. From the finite size corrected band gap $`\stackrel{~}{E}_g(U_{xx})`$ we estimate the critical $`U_{xx}`$ as $`U_{xx}\stackrel{~}{E}_g(U_{xx})`$, shown in Fig. 3. The figure illustrates that for $`\lambda =0`$ an increase in $`K`$ leads to a decrease in $`U_c`$. In analogy to the discussion for the Jahn-Teller effect, the Hund’s rule coupling can effectively lower the energy of the state with $`nM`$ electrons while for the states with $`nM\pm 1`$ electrons, the stronger interference with hopping leads to a smaller lowering of the energy. For $`\lambda >0`$ the competition between the Jahn-Teller effect and the Hund’s rule coupling tends to reduce the influence of either effect on $`U_c`$. This is shown in Table I and II and in Fig. 3. To summarize, we have found that the difference in lattice structure favors A<sub>3</sub>C<sub>60</sub> being a metal and A<sub>4</sub>C<sub>60</sub> being an insulator. The Jahn-Teller effect wins over the Hund’s rule coupling, making A<sub>4</sub>C<sub>60</sub> a nonmagnetic insulator. The coupling to the H<sub>g</sub> phonons tends to strongly reduce the critical $`U`$ for a metal-insulator transition, raising questions about why not also A<sub>3</sub>C<sub>60</sub> is an insulator. This effect is, however, partially cancelled by the Hund’s rule coupling. The coupling to plasmons tends to further increase the critical $`U`$. This work has been supported by the Max-Planck-Forschungspreis.
no-problem/9910/hep-ph9910326.html
ar5iv
text
# References Introduction We investigate the constrained ‘Minimal Supersymmetric extension of the Standard Model’ (MSSM) as used for example by the LEP collaborations. It assumes Grand Unification, no extra CP violation, a common scalar mass scale, etc., so that out of more than 100 possible new constants in a general SUSY model only the following free parameters are left: * $`m_0`$ = Universal scalar mass at the GUT scale * $`M_2`$ = $`SU(2)`$ gaugino mass at the electroweak scale * $`\mu `$ = Higgs(ino) mass parameter (elw. scale) * $`\mathrm{tan}\beta `$ = Ratio of higgs vacuum expectation values (elw. scale) The additional parameters $`A_0`$ and $`m_A`$ are not important here. The quantum number R Parity is assumed to be conserved, so that the lightest supersymmetric particle (LSP) is stable. Cosmological arguments together with limits on abundances of atoms with anomalous charge over mass ratios require that the LSP carries neither colour nor electrical charge. In the MSSM only two particles fulfil these constraints: The lightest neutralino, $`\stackrel{~}{\chi }_1^0`$, and the sneutrino $`\stackrel{~}{\nu }`$. Note that the common scalar mass $`m_0`$ implies that the three sneutrinos $`\stackrel{~}{\nu }_e,\stackrel{~}{\nu }_\mu ,\stackrel{~}{\nu }_\tau `$ are degenerate in mass, and we do not distinguish between them. A third LSP candidate is the gravitino, but in the constrained MSSM it is assumed to be heavier than the other SUSY particles, as predicted in supergravity models. A priori it is not clear which one is the LSP. Since the existing upper mass limit for the sneutrino is better than for the lightest neutralino , many physicists concentrate on the hypothesis LSP = $`\stackrel{~}{\chi }_1^0`$. In this paper we investigate for which SUSY parameters the sneutrino plays the role of the LSP, and to what extent this possibility is ruled out by existing experimental bounds. Limit on sneutrino mass First we analyse the experimental bounds; it turns out that the limit obtained in $`e^+e^{}`$ collision experiments with centre of mass energies around the Z pole is most stringent . The LEP I measurements of the Z properties allow to constrain the non Standard Model contributions to the invisible Z width to $`\mathrm{\Delta }\mathrm{\Gamma }_{\mathrm{inv}}<2.0\mathrm{MeV}\mathrm{\hspace{0.33em}\hspace{0.33em}\hspace{0.33em}\hspace{0.33em}\hspace{0.33em}\hspace{0.33em}95}\%CL,`$ (1) assuming 3 light neutrino species. ‘Invisible’ decay channels are those, for which a substantial fraction (typ. 50% or more) of the energy carried by the final state particles is unseen in the detector and which are inconsistent with fermion pair production. Also sneutrino pairs might be produced in Z decays. If they act as LSP they are stable and undetected, thus contributing to $`\mathrm{\Gamma }_{\mathrm{inv}}`$. For the conclusions of this paper it is sufficient to discuss this case. The sneutrino contribution to the invisible Z width is given by: $`\mathrm{\Delta }\mathrm{\Gamma }_{\mathrm{inv}}^{\stackrel{~}{\nu }}=3{\displaystyle \frac{1}{2}}\left[1\left({\displaystyle \frac{2m_{\stackrel{~}{\nu }}}{m_Z}}\right)^2\right]^{3/2}\mathrm{\Gamma }_{\mathrm{inv}}^\nu `$ (2) Here $`\mathrm{\Gamma }_{\mathrm{inv}}^\nu =167\mathrm{MeV}`$ is the neutrino contribution for one family. The factor 3 stands for the 3 families, $`\frac{1}{2}`$ results from the different spins of neutrinos and sneutrinos, and the term in brackets containing the sneutrino mass describes the kinematical suppression. The experimental upper limit (1) can be converted into a sneutrino mass limit of $`m_{\stackrel{~}{\nu }}^{LSP}>44.6\mathrm{GeV}\mathrm{\hspace{0.33em}\hspace{0.33em}\hspace{0.33em}\hspace{0.33em}\hspace{0.33em}\hspace{0.33em}95}\%CL`$ (3) This bound improves the older limit of $`43.1\mathrm{GeV}`$. It should be noted that our limit holds also in the more general case that either $`\stackrel{~}{\nu }`$ or $`\stackrel{~}{\chi }_1^0`$ act as the LSP. In the latter case the sneutrino will decay. If it is long lived, it escapes detection. If it is short lived the two dominant decay modes are neutrino plus neutralino and lepton plus chargino. In the first case all or a large fraction of the energy escapes undetected. The second case is already ruled out from the lower limit on the chargino mass of $`m_Z/2`$, derived from the total Z width measured at LEP I. Sneutrino-LSP in the MSSM Now we turn to the sparticle masses as predicted in the constrained MSSM and investigate if we can set a theoretical upper limit on $`m_{\stackrel{~}{\nu }}`$. To be the LSP the sneutrino mass must in particular fulfil the two relations $`m_{\stackrel{~}{\nu }}`$ $`<`$ $`m_{\stackrel{~}{e}_R}`$ (4) $`m_{\stackrel{~}{\nu }}`$ $`<`$ $`m_{\stackrel{~}{\chi }_1^0}`$ (5) which are not true in large regions of the MSSM parameter space. Note that $`m_{\stackrel{~}{e}_L}>m_{\stackrel{~}{e}_R}`$ is always fulfilled. The charged sleptons $`\stackrel{~}{\mu }`$ and $`\stackrel{~}{\tau }`$ are heavier than $`\stackrel{~}{e}_R`$ (with the stau possibly making an exception, if mixing is large; this would lead to the additional constraint $`m_{\stackrel{~}{\nu }}<m_{\stackrel{~}{\tau }}`$, yielding an even better sneutrino mass limit than the one presented below). To understand the first relation (4) we calculate the two slepton masses using the approximate formulae given in : $`m_{\stackrel{~}{\nu }}^2`$ $`=`$ $`m_0^20.5m_Z^2{\displaystyle \frac{\mathrm{tan}^2\beta 1}{\mathrm{tan}^2\beta +1}}+0.80M_2^2`$ (6) $`m_{\stackrel{~}{e}_R}^2m_e^2`$ $`=`$ $`m_0^2+\mathrm{sin}^2\theta _Wm_Z^2{\displaystyle \frac{\mathrm{tan}^2\beta 1}{\mathrm{tan}^2\beta +1}}+0.22M_2^2`$ (7) The second term on the right hand side is due to quartic sfermion-higgs couplings. The term proportional to $`M_2^2`$ describes the running of the masses from the GUT scale to the electroweak scale. Thus (4) is fulfilled if $`{\displaystyle \frac{\mathrm{tan}^2\beta 1}{\mathrm{tan}^2\beta +1}}>\mathrm{\hspace{0.33em}0.79}{\displaystyle \frac{M_2^2}{m_Z^2}}`$ (8) using $`\mathrm{sin}^2\theta _W=0.23`$ and neglecting the electron mass. Since the left hand side is smaller than 1, we find in particular $`M_2<1.13m_Z=103\mathrm{GeV}`$ (9) Using the program SUSYGEN, in which the sparticle masses are calculated more precisely, we find a similar bound of $`104\mathrm{GeV}`$. The condition (5) is more difficult to understand, since two more MSSM parameters come into play: $`m_0`$, which determines the sneutrino mass, and the higgsino mass parameter $`\mu `$, appearing in the neutralino mass matrix. Using the basis for the interaction eigenstates as given in reference , the mass matrix becomes $`\left(\begin{array}{cccc}0.61M_2& 0.21M_2& 0& 0\\ 0.21M_2& 0.88M_2& m_Z& 0\\ 0& m_Z& \mu \mathrm{sin}2\beta & \mu \mathrm{cos}2\beta \\ 0& 0& \mu \mathrm{cos}2\beta & \mu \mathrm{sin}2\beta \end{array}\right)`$ (14) Here the GUT gaugino mass relations and the numerical value for the weak mixing angle have been used. The smallest eigenvalue, the neutralino mass $`m_{\stackrel{~}{\chi }_1^0}`$, can become large only if both $`M_2`$ and $`|\mu |`$ are large. Equation (9) therefore implies an upper bound on $`m_{\stackrel{~}{\chi }_1^0}`$ and, through (5), on $`m_{\stackrel{~}{\nu }}`$, of the order of $`m_Z`$. After these qualitative arguments we need to determine the upper limit on the LSP sneutrino mass quantitatively. We computed $`m_{\stackrel{~}{\nu }}`$ for many points in the MSSM parameter space and calculated the maximum mass value from the subset of points which respect (4) and (5). First we used the mass formulae as given above and diagonalised the neutralino mass matrix numerically. The parameter space was scanned in the range $`0<M_2<110\mathrm{GeV}`$, $`0<m_0<1000\mathrm{GeV}`$, $`\pm \mu <1000\mathrm{GeV}`$, $`1<\mathrm{tan}\beta <50`$. The characteristic value of $`1000\mathrm{GeV}`$ is motivated by the requirement that SUSY solves the hierarchy problem. More than 1 billion points have been considered. Result: $`m_{\stackrel{~}{\nu }}^{LSP}<44.3\mathrm{GeV}`$. We repeated the procedure with SUSYGEN, which is more precise but less fast. In order to save computer time, we scanned only through that subset of the MSSM parameters for which the approximate formulae predict high values of $`m_{\stackrel{~}{\nu }}^{LSP}`$. The step sizes were $`0.1\mathrm{GeV}`$ in $`M_2`$ and $`m_0`$, $`0.1`$ in $`\mathrm{tan}\beta `$ and $`5\mathrm{GeV}`$ in $`\mu `$ (on which the sneutrino mass depends only indirectly). The resulting theoretical upper limit is $`m_{\stackrel{~}{\nu }}^{LSP}<44.2\mathrm{GeV}`$ (15) in good agreement with the approximate value of $`44.3\mathrm{GeV}`$. The corresponding MSSM parameters are $`M_2=84.1\mathrm{GeV}`$, $`m_00`$, $`\mathrm{tan}\beta =4.2`$ and $`\mu 190\mathrm{GeV}`$. The neutralino mass is nearly degenerate with the sneutrino mass in this case. The difference between the experimental and theoretical limits on the sneutrino mass derived in this paper is rather small. Therefore the inclusion of higher oder corrections both to the sneutrino contribution to the Z width as well as to the sparticle masses is desirable. An improved experimental limit cannot be expected in the near future. The LEP I data taking and analyses are completed, and at LEP II the cross section for the relevant channel, $`\mathrm{e}^+\mathrm{e}^{}\stackrel{~}{\nu }\overline{\stackrel{~}{\nu }}\gamma `$, is small. Conclusions LEP I data show that the sneutrino must be heavier than $`44.6\mathrm{GeV}`$ at the $`95\%`$ confidence level. In the sneutrino LSP scenario this experimental lower bound is inconsistent with the theoretical upper limit on the sneutrino mass. Therefore - within the constrained MSSM - the sneutrino can not be the LSP! Acknowledgements We would like to thank Martin Grünewald, Christian Preitschopf and Daniel Ruschmeier for valuable comments. References
no-problem/9910/astro-ph9910334.html
ar5iv
text
# Horizontal-Branch Models and the Second-Parameter Effect. III. The Impact of Mass Loss on the Red Giant Branch, and the Case of M5 and Palomar 4/Eridanus ## 1. Introduction One of the most important ingredients for the construction of a model of the formation of the Galaxy concerns whether the globular clusters (GCs) in the Galactic outer halo are younger or older than those in the inner halo (e.g., Mironov & Samus 1974; Searle & Zinn 1978; Zinn 1980, 1993; van den Bergh 1993; Majewski 1994). The outer halo of the Galaxy is not well populated. From Table 7 in Borissova et al. (1997), one finds that in the “extreme” outer halo (galactocentric distances $`R_{\mathrm{GC}}>50`$ kpc) there are five very scarcely populated and loose clusters with (mostly) red horizontal-branch (HB) morphologies (Palomar 3, Pal 4, Pal 14, Eridanus, and AM-1), and one cluster with a blue HB. This blue-HB globular—NGC 2419—is, however, more massive (by a factor of $`8`$) than the sum of all outer-halo GCs with red HBs. Hubble Space Telescope (HST) observations have revealed that NGC 2419 is coeval with M92 (NGC 6341) (Harris et al. 1997), a much closer blue-HB globular which has always been considered to be among the very oldest GCs in the Galaxy (e.g., Bolte & Hogan 1995; Pont et al. 1998; Salaris, Degl’Innocenti, & Weiss 1997; VandenBerg, Bolte, & Stetson 1996). Until recently, however, little information was available on the ages of the remaining extreme outer-halo GCs. HST observations have again helped remedy the situation. Stetson et al. (1999) have presented additional results of their ongoing HST survey of GCs lying at $`R_{\mathrm{GC}}>50`$ kpc. In particular, they presented deep WFPC2 F555W, F555W–F814W ($`V`$, $`VI`$) color-magnitude diagrams (CMDs) for Pal 4 and Eridanus—both of which have exclusively red HBs.<sup>1</sup><sup>1</sup>1Stetson et al. (1999) also presented HST observations for Pal 3 and derived its age relative to M3 (NGC 5272). We defer analysis of this pair to a future paper because of the current difficulty in determining “representative” HB morphology parameters for M3, a cluster which appears to show a strong radial gradient in HB type. The latter conclusion can be obtained from a comparison among the datasets presented by Buonanno et al. (1994), Ferraro et al. (1997a) and Ferraro (1998)—the latter referring to HST-WFPC2 data for the innermost cluster regions (Ferraro et al. 1997b). Stetson et al. undertook an analysis of the ages of these GCs, as provided by traditionally employed techniques (e.g., Stetson, VandenBerg, & Bolte 1996 and references therein), and found that, under the assumption that M5 (NGC 5904) and Pal 4/Eridanus share the same chemical composition, these extreme outer-halo GCs with red HBs are younger than M5 ($`R_{\mathrm{GC}}6.2`$ kpc; Harris 1996) by $`1.52`$ Gyr. VandenBerg (1999a) has recently reanalyzed the HST CMDs for the extreme outer-halo GCs with red HBs. Again assuming identical chemical compositions, he supports slightly smaller age differences ($`11.5`$ Gyr) between M5 and Pal 4/Eridanus than reported by Stetson et al. (1999). Therefore, 2 Gyr appears to be a safe upper limit on such an age difference. Fig. 1.— Combined HST CMDs for Pal 4 ($`\times `$) and Eridanus (+). As indicated, reddenings of $`E(VI)=0.030`$ and 0.029 mag have been assumed for Pal 4 and Eridanus, respectively (based on Schlegel et al. 1998). The Eridanus CMD has been shifted by $`\mathrm{\Delta }V=+0.4`$ mag, thus accounting for the relative distance moduli of the two clusters (VandenBerg 1999a). The vertical dotted line, in gray, indicates the mean color of the bulk of the HB populations in the two clusters, $`(VI)_0=0.78`$ mag. Note that the Schlegel et al. reddening values imply intrinsically bluer HBs than do the canonical reddening values tabulated by Harris (1996). Though a lower age for Pal 4 and Eridanus would qualitatively appear consistent with their red HB types, Stetson et al. (1999) did not attempt to provide a reliable quantitative description of how large an age difference would be required to explain the difference in HB morphology between Pal 4/Eridanus and M5. Lee, Demarque, & Zinn (1994) have recently stated: “only a small number of clusters have been dated to sufficiently high precision to test the hypothesis that the second parameter is age, and there is some doubt that the detected age differences are consistent with the HB morphologies of the clusters. If they are not, this would suggest that age cannot be the sole second parameter.” We concur with such a statement and emphasize, therefore, that tests of age as the second parameter cannot be properly carried out without the required comparison with adequate models of the HB morphology of the clusters under consideration. As we have done in the previous papers of this series (Catelan & de Freitas Pacheco 1993, 1994, 1995), we shall provide here the quantitative estimates of the age difference that is required to explain the HB morphologies of M5 vs. Pal 4/Eridanus. We shall assume that age is the sole second parameter. Using results reported in an Appendix (see also Catelan 1999), we shall examine in detail the effect of an age-dependent red giant branch (RGB) mass loss upon the inferred age differences, since there have been suggestions (e.g., Lee et al. 1994) that such an age dependence may help explain the second parameter phenomenon in terms of age. We begin in the next section by describing the observational data for M5, Pal 4 and Eridanus employed in the present study. In §3, we describe our technique for obtaining synthetic HB models for these clusters. In §4, we explain how the age difference between Pal 4/Eridanus and M5 that is required to account for their different HB types was obtained from the models, taking into account several different analytical mass loss formulae for the mass loss in red giants. Finally, we present conclusions and provide additional discussion in §5. ## 2. Observational Data ### 2.1. HB Morphology of M5 Sandquist et al. (1996) have provided a very extensive account of the CMD morphology of M5. Recently, Sandquist (1998) has kindly readdressed the HB morphology parameters for the cluster. His latest values can be found in Table 1. In column 1, the Mironov (1972) index $`B/(B+R)`$ is given. In column 2, one finds the so-called “Lee–Zinn parameter” $`(BR)/(B+V+R)`$, first defined and used by Zinn (1986). In column 3, Buonanno’s (1993) index $`(B2R)/(B+V+R)`$, where $`B2`$ is the number of blue-HB stars bluer than $`(\mathrm{B V})_0=0.02`$ mag, is provided. As usual, $`B`$, $`V`$, $`R`$ are the numbers of blue, variable (RR Lyrae–type) and red HB stars, respectively. The final two columns provide two alternative values for Fusi Pecci’s $`HB_{\mathrm{RE}}`$ indicator (a “subjective” estimate of the red end of the HB distribution in B V; Fusi Pecci et al. 1993), where the first disregards the presence of a few red-HB stars lying “above the zero-age HB,” and the second takes such stars into account (Sandquist 1998). For additional information and references related to these indices, the reader is referred to Catelan et al. (1998). It is important to note that Buonanno’s (1993) index is (unfortunately) reddening-dependent. For M5, $`E(\mathrm{B V})=0.03\pm 0.01`$ mag (cf. Sandquist et al. 1996 and references therein). The value of the Buonanno parameter provided in Table 1 corresponds to the assumption that $`E(\mathrm{B V})=0.03`$ mag. Sandquist (1998) has kindly evaluated the effect of the reddening uncertainty upon this index for M5: he finds that, if the reddening is actually 0.02 or 0.04 mag, then this ratio would have the values $`0.056`$ or $`0.016`$, respectively. ### 2.2. HB Morphology of Palomar 4 and Eridanus Pal 4 and Eridanus have exclusively red HBs, making the computation of most of the above HB morphology indices meaningless for our purposes. From the HST CMDs (see Stetson et al. 1999), it is clear that both Pal 4 and Eridanus share very similar HB morphologies. A combined CMD for the two clusters is shown in Figure 1. The stars were de-reddened by the indicated amounts, based on Schlegel, Finkbeiner, & Davis (1998). The Eridanus data were further shifted by $`\mathrm{\Delta }V=+0.4`$ mag, in order to account for the relative distance moduli of the two clusters (VandenBerg 1999a). As indicated by the vertical dotted line, the HB color distribution clearly shows a peak at $`(VI)_00.78`$ mag (and little scatter around this point). Eridanus has a few ($`3`$) stars scattered towards brighter magnitudes and redder colors than does Pal 4; however, as shown below, this feature can be accounted for by statistical fluctuations related to the small number of HB stars (a total of $`25`$) available in the HST samples and evolution away from the zero-age HB (ZAHB). ## 3. Theoretical Framework: Synthetic HBs The HB evolutionary tracks employed in the present project are the same as described in Catelan et al. (1998). The following chemical composition was assumed: main-sequence helium abundance $`Y_{\mathrm{MS}}=0.23`$, overall metallicity $`Z=0.001`$ (see Sneden et al. 1992; Sandquist et al. 1996; Borissova et al. 1999; Stetson et al. 1999; and VandenBerg 1999a for discussions of the metallicities of M5, Pal 4, and Eridanus). Consistent with our working hypothesis that age is the sole second parameter, we assume that M5 and Pal 4/Eridanus have the same chemical composition. We have assumed throughout this paper that the HB morphology of the studied GCs can be reproduced by unimodal Gaussian deviates in ZAHB mass (see Catelan et al. 1998 for a detailed discussion). A relevant numerical improvement is the adoption of Hill’s (1982) interpolation algorithm also to interpolate among the evolutionary tracks of different masses in order to infer the physical parameters $`\mathrm{log}L`$, $`\mathrm{log}T_{\mathrm{eff}}`$ of the “stars” in the HB simulations. The synthetic HBs were converted to the observational planes using the prescriptions provided by VandenBerg (1999b). ### 3.1. The Case of M5 Synthetic HBs have been computed aiming at estimating the optimum parameters $`M_{\mathrm{HB}}`$ (mean mass) and $`\sigma _M`$ (mass dispersion) required to reproduce the observed HB morphology parameters for M5 (Table 1). The adopted procedure is completely analogous to that employed by Catelan et al. (1998). We have computed synthetic HBs assuming an overall number of HB stars $`B+V+R=553`$, as in Sandquist’s (1998) sample. For each ($`M_{\mathrm{HB}}`$$`\sigma _M`$) combination, we computed a series of 100 Monte Carlo simulations and obtained HB morphology parameters therefrom. After many such trials varying the above two free parameters, we have converged on a set of models characterized by the following values: $$M_{\mathrm{HB}}=0.6325M_{\mathrm{}},\sigma _M=0.025M_{\mathrm{}}.$$ Such a combination leads to the mean HB morphology parameters described in Table 2 (where the numbers in parentheses represent the standard deviation of the mean over the set of 100 simulations with 553 “stars” in each). Note the nice agreement between the observed (Table 1) and theoretical (Table 2) parameters, to within the errors. It should be remarked that, if Buonanno’s (1993) parameter were bluer (implying a higher reddening; cf. §2.1), our simulations indicate that it would have been easier to account for the overall ratio between blue stars and RR Lyrae variables. Indeed, Schlegel et al. (1998) give $`E(\mathrm{B V})=0.038`$ mag for this cluster. The $`M_{\mathrm{HB}}`$ value would not differ significantly from the one quoted above though. It follows that the above $`M_{\mathrm{HB}}`$ value for M5 is a quite robust result for the assumed chemical composition and theoretical framework. In Figure 2, we plot two synthetic HB/upper RGB models for M5, picked at random from the pool of 100 simulations. The plus signs indicate RR Lyrae variables (a strip width of 0.075 in $`\mathrm{log}T_{\mathrm{eff}}`$ has been assumed). Random scatter has been included following the prescriptions of Robertson (1974), but without any special effort to make the CMD dispersion on the RGB match closely the observed one in Sandquist et al. (1996). ### 3.2. The Case of Pal 4/Eridanus To model the HBs of the extreme outer-halo clusters Pal 4 and Eridanus is a significantly more complicated and challenging task than to model that of M5, given the small number of HB stars detected in the HST studies and the total lack of HB stars lying blueward of the red HB. After trying a few different possibilities, we decided to adopt the following approach. Starting with a mass distribution which, in the mean, would give roughly the same number of stars on the red HB and inside the instability strip, we ran many sets of (twelve) synthetic HB simulations, increasing the mean mass by 0.01 $`M_{\mathrm{}}`$ from one set to the next, and holding the mass dispersion (as well as the total number of HB stars—25) fixed (at $`\sigma _M=0.01M_{\mathrm{}}`$) in all cases. The mean mass range covered by our simulations was the following: $`M_{\mathrm{HB}}=0.65,\mathrm{\hspace{0.17em}0.66}\mathrm{}\mathrm{\hspace{0.17em}0.78},\mathrm{\hspace{0.17em}0.79}M_{\mathrm{}}`$. Again, random scatter was added following Robertson (1974). Here, however, we did make an effort to reproduce (approximately) the errors in the HST photometry (Stetson 1999) around the HB level. Upon inspection of each of the plots thus produced, and paying particular attention to their corresponding color distributions in comparison to that shown in Figure 1, we reached the conclusion that the following parameters provide an adequate match to both the Pal 4 and the Eridanus HST CMDs at the HB level: $$M_{\mathrm{HB}}=0.75M_{\mathrm{}},\sigma _M=0.01M_{\mathrm{}}.$$ Plots containing the simulations for this case can be found in Figure 3. The vertical dotted lines, as in Figure 1, indicate the color $`(VI)_0=0.78`$ mag. Notice that in some cases even the “bright” red HB stars found (especially) in the Eridanus HST CMD are well reproduced. We interpret these stars as being the result of evolution away from the ZAHB towards the asymptotic giant branch, combined with statistical fluctuations due to the small sample size. Fig. 2.— Synthetic CMDs for M5 (see text). The reddening values from Schlegel et al. (1998) are generally larger than the commonly employed values tabulated by Harris (1996). In the case of Pal 4, one has: $$E(\mathrm{B V})=0.023\mathrm{mag}E(VI)0.030\mathrm{mag}$$ from Schlegel et al. (1998); and $$E(\mathrm{B V})=0.01\mathrm{mag}E(VI)0.013\mathrm{mag}$$ from Harris (1996). In the case of Eridanus, one finds: $$E(\mathrm{B V})=0.022\mathrm{mag}E(VI)0.029\mathrm{mag}$$ from Schlegel et al. (1998); and $$E(\mathrm{B V})=0.02\mathrm{mag}E(VI)0.026\mathrm{mag}$$ from Harris (1996). The transformation between $`E(\mathrm{B V})`$ and $`E(VI)`$ was carried out adopting a ratio of $`1.3`$ between the two (see Stetson et al. 1999). As is apparent, the difference in $`E(VI)`$ values between the two sources is smaller in the case of Eridanus. This uncertainty in the $`E(VI)`$ value for Pal 4 may affect the choice of model for the red-HB clusters. The larger reddening from Schlegel et al. (1998) implies an intrinsically bluer HB distribution than what would be inferred from Harris (1996). Because of possible systematic uncertainties in $`E(VI)`$ and in the color transformations, we show, in Figures 4 and 5, synthetic HBs similar to those displayed in Figure 3, but varying $`M_{\mathrm{HB}}`$ by $`0.02M_{\mathrm{}}`$ (Fig. 4) and $`+0.02M_{\mathrm{}}`$ (Fig. 5). The latter case might be considered more appropriate if the canonical reddening value from Harris (1996) were adopted—implying larger relative ages with respect to M5 (see below). Inspection of Figures 4 and 5 shows that statistical fluctuations can also play a role—though it appears more likely that this would lead to a larger $`M_{\mathrm{HB}}`$ for Pal 4/Eridanus. Indeed, more of the models with larger $`M_{\mathrm{HB}}`$ (Fig. 5) resemble those in Figure 3 in terms of $`(VI)_0`$ than is the case for models with smaller $`M_{\mathrm{HB}}`$ (Fig. 4). This is because of the decreasing dependence of HB temperature/color on stellar mass towards the red end of the HB. Thus, if important, statistical fluctuations would tend to lead to an underestimate of the age difference between M5 and Pal 4/Eridanus, as derived from HB morphology arguments. A set of 700 synthetic HB computations shows that changing $`\sigma _M`$ from $`0.01M_{\mathrm{}}`$ to $`0.025M_{\mathrm{}}`$ leads to a change in mean HB color equivalent to a reddening error by $`\mathrm{\Delta }E(\mathrm{B V})0.002`$ mag, holding $`M_{\mathrm{HB}}`$ fixed at $`0.75M_{\mathrm{}}`$. Had a larger, “M5-like” $`\sigma _M`$ value been adopted, we would have been forced to adopt a slightly larger $`M_{\mathrm{HB}}`$ for Pal 4/Eridanus. It is easy to see why: for a given $`M_{\mathrm{HB}}`$, the low-mass tail of the distributions gets closer and closer to the instability strip region with increasing $`\sigma _M`$. This must be compensated for by increasing $`M_{\mathrm{HB}}`$, also implying a (slightly) larger age difference between Pal 4/Eridanus and M5 than reported in the next sections. However, we have been unable to obtain as satisfactory matches to the HBs of Pal 4/Eridanus using the larger $`\sigma _M`$, possibly pointing to a real difference in the mass dispersion among these systems. ## 4. Estimating Relative Ages from the HB and RGB Models In order to estimate the relative ages required to produce the relative HB types of M5 and Pal 4/Eridanus, we follow a similar approach as described in previous papers of this series (e.g., Catelan & de Freitas Pacheco 1995). The main difference here is that we shall evaluate the effects of an age-dependent mass loss on the RGB, as implied by the several different analytical formulae discussed in the Appendix, upon the relative ages thus estimated. RGB mass loss is estimated on the basis of the RGB models of VandenBerg et al. (2000) for a chemical composition $`[\mathrm{Fe}/\mathrm{H}]=1.41`$, $`[\alpha /\mathrm{Fe}]=+0.3`$. It is important to note that the VandenBerg et al. results for both the RGB and HB phases are in very good agreement with those from A. V. Sweigart (see VandenBerg et al. for a discussion). From the VandenBerg et al. (2000) models, we first obtained the age–RGB tip mass ($`M_{\mathrm{RGB}}^{\mathrm{tip}}`$) relationship for the adopted chemical composition. Then we obtained the age–overall mass loss on the RGB ($`\mathrm{\Delta }M_{\mathrm{RGB}}^{\mathrm{tip}}`$) relationship from the Appendix. The required estimate of $`\eta `$ values was then accomplished by evaluating, for each given (assumed) age for M5, $$\eta =\frac{M_{\mathrm{RGB}}^{\mathrm{tip}}M_{\mathrm{HB}}}{\mathrm{\Delta }M_{\mathrm{RGB}}^{\mathrm{tip}}},$$ (1) where $`M_{\mathrm{HB}}M_{\mathrm{HB}}_{\mathrm{M5}}=0.6325M_{\mathrm{}}`$. Holding the $`\eta `$ value thus derived fixed, the age that leads to a good fit to the Pal 4/Eridanus HB morphology (as characterized by the mean HB mass value described in the previous section) was easily obtained from the $`M_{\mathrm{RGB}}^{\mathrm{tip}}`$–age and $`\mathrm{\Delta }M_{\mathrm{RGB}}^{\mathrm{tip}}`$–age relationships. Hill’s (1982) algorithm was used for the interpolations that defined such relationships. The implied age difference between M5 and Pal 4/Eridanus followed immediately from this. Table 3 shows our derived $`\eta `$ values and ages for Pal 4/Eridanus for each assumed age for M5 and for each of the mass loss formulae discussed in the Appendix, including Reimers’ (1975a, 1975b) widely adopted one. The inferred age differences are also listed. Figure 6 summarizes our results; the hatched regions indicate the relative ages favored by the HST analyses of Stetson et al. (1999) and VandenBerg (1999a). From Table 3 and Figure 6, it is clear that only for extremely low ages—$`9`$ Gyr—can one reproduce the relative HB types of M5 and Pal 4/Eridanus in terms of “age as the second parameter,” even by assuming several different possibilities for the form of an age-dependent mass loss formula for giant stars. Equation (A2) is the one which leads to the smaller age differences from HB morphology arguments. Except for equation (A3), Reimers’ (1975a, 1975b) is the one from which the largest relative ages are inferred. Note that only the upper limits on the possible age difference range estimated by Stetson et al. (1999) are reached for M5 ages of about 9 Gyr.<sup>2</sup><sup>2</sup>2This result is actually somewhat underemphasized by the way we have chosen to present the Stetson et al. (1999) and VandenBerg (1999a) results in Figure 6. The reason for this is that, in these studies, an absolute age of $`1315`$ Gyr was adopted, whereas an absolute age $`10`$ Gyr would lead to even smaller relative turnoff ages. In other words, what we show as perfectly horizontal hatched areas in Figure 6 should actually be somewhat slanted, with the relative turnoff ages decreasing with decreasing M5 age—thereby making it even harder for the relative ages derived from HB morphology arguments to match the relative turnoff ages obtained from deep HST photometry, even for very low absolute ages. VandenBerg’s (1999a) results are not reproduced at all; extrapolation of the curves shown in Figure 6 suggests that an M5 age $`8`$ Gyr would be required to match the relative ages derived by VandenBerg. The situation would become somewhat less critical if the synthetic HBs shown in Figure 4 were adopted for Pal 4/Eridanus, as shown in Figure 7. Absolute ages for M5 of $`10`$ Gyr would be required in this case, and it might be possible to achieve agreement with VandenBerg’s (1999a) results for an M5 age of $`9`$ Gyr. However, the opposite holds if the synthetic HBs displayed in Figure 5 are adopted instead, as one can see from Figure 8. We recall, from the arguments in the previous section, that the models in Figure 5 might be the better alternative to those in Figure 3 as genuinely representing Pal 4/Eridanus, if one were to adopt the canonical reddening values from Harris (1996) or if statistical fluctuation effects are important. In summary, we conclude that the requirement of extremely low ages, $`<10`$ Gyr, for all GCs under consideration is a very firm result of the present investigation. ## 5. Conclusions and Discussion As far as the second-parameter phenomenon goes, we have demonstrated that age cannot be the only second parameter at play, unless one is willing to accept that the ages of M5-like GCs are less than 10 Gyr.<sup>3</sup><sup>3</sup>3One should bear in mind that this result depends critically on the accuracy of the HST-WFPC2 data (see, e.g., Stetson 1998 in regard to charge-transfer effects), particularly the photometric zero points. Analysis of this subject is beyond the scope of this paper. The same conclusion was reached in previous papers of this series (Catelan & de Freitas Pacheco 1993, 1994, 1995; see also Ferraro et al. 1997b) but, in the present study, we have fully taken into account the effects of an age-dependent mass loss on the RGB. If the \[$`\alpha `$/Fe\] ratio in Pal 4/Eridanus is lower than commonly found among GCs (Carney 1996), resembling instead the cases of the “young,” loose GCs Ruprecht 106 and Pal 12 (Brown, Wallerstein, & Zucker 1997)—a hypothesis which is perhaps not unlikely, since all these clusters may have partaken a common origin (Majewski 1994; Fusi Pecci et al. 1995; Lynden-Bell & Lynden-Bell 1995)—the “turnoff age difference” between M5 and Pal 4/Eridanus would decrease, as inferred from isochrone fits to the HST data (Stetson et al. 1999). On the other hand, the “HB morphology age difference” would increase significantly, as inferred from the relative HB types of Pal 4/Eridanus vs. M5, because the red HBs of Pal 4/Eridanus would require a further decrease in the ages needed to match their HBs than derived in the present paper, due to their lower overall \[M/H\]. We note that proper-motion studies indicate that M5 too is an outer-halo GC, which just happens to lie close to its perigalacticon (Cudworth 1997 and references therein). According to such work, M5 actually spends much of its time at galactocentric distances larger than $`50`$ kpc. Therefore, we should keep in mind that, when using M5 to compare its age against those of (other) outer-halo GCs, we may simply be measuring the age dispersion in the outer Galactic halo, and not the age difference between the inner and the (extreme) outer halo—contrary to what is often assumed. The author wishes to express his gratitude to D. A. VandenBerg for providing many useful comments and suggestions, and also for making his latest evolutionary computations available in advance of publication. F. R. Ferraro, E. L. Sandquist, and P. B. Stetson have supplied crucial observational data and/or information, and are also warmly thanked. Useful comments by F. Grundahl, W. B. Landsman, and R. T. Rood are gratefully acknowledged, as are the suggestions by an anonymous referee which greatly helped improve the presentation of these results. Support for this work was provided by NASA through Hubble Fellowship grant HF–01105.01–98A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA under contract NAS 5–26555. Analytical Mass Loss Formulae Revisited Mass loss on the RGB is widely recognized as one of the most important ingredients, as far as the HB morphology goes (e.g., Catelan & de Freitas Pacheco 1995; Lee et al. 1994; Rood, Whitney, & D’Cruz 1997). Up to now, investigations of the impact of RGB mass loss upon the HB morphology have mostly relied on Reimers’ (1975a, 1975b) mass loss formula. We note, however, that Reimers’ is by no means the only mass loss formula available for this type of study. In particular, alternative formulae have been presented by Mullan (1978), Goldberg (1979), and Judge & Stencel (1991, hereafter JS91). We have undertaken a revision of all these formulae, employing the latest and most extensive dataset available in the literature—namely, that of JS91. The mass loss rates provided in JS91 were compared against more recent data, and excellent agreement was found (Fig. A1). If the distance adopted by JS91 lied more than about $`2\sigma `$ away from that based on Hipparcos trigonometric parallaxes, the star was discarded. Only five stars (L<sup>2</sup> Pup, U Hya, X Her, g Her, $`\delta ^2`$ Lyr) turned out to be discrepant, in a sample containing more than 20 giants. Employing ordinary least-squares (OLS) regressions and following the Isobe et al. (1990) guidelines \[“if the problem is to predict the value of one variable from the measurement of another, then OLS($`Y|X`$) should be used, where $`Y`$ is the variable to be predicted”\] we find that the following formulae provide adequate fits to the data (see also Fig. A2): $$\frac{\mathrm{d}M}{\mathrm{d}t}=8.5\times 10^{10}(\begin{array}{c}L\\ \overline{gR}\end{array})^{+1.4}M_{}\mathrm{yr}^1,$$ () with $`g`$ in cgs units, and $`L`$ and $`R`$ in solar units. As can be seen, this represents a “generalized” form of Reimers’ original mass loss formula, essentially reproducing a later result by Reimers (1987). Formally, the exponent (+1.4) differs from the one in Reimers’ (1975a, 1975b) formula (+1.0) at $`3\sigma `$; $$\frac{\mathrm{d}M}{\mathrm{d}t}=2.4\times 10^{11}(\begin{array}{c}g\\ \overline{R^{\frac{3}{2}}}\end{array})^{0.9}M_{}\mathrm{yr}^1,$$ () likewise, but in the case of Mullan’s (1978) formula; $$\frac{\mathrm{d}M}{\mathrm{d}t}=1.2\times 10^{15}R^{+3.2}M_{}\mathrm{yr}^1,$$ () idem, Goldberg’s (1979) formula. Interestingly, the exponent (+3.2) is indistinguishable from +3.0 to well within $`1\sigma `$; $$\frac{\mathrm{d}M}{\mathrm{d}t}=6.3\times 10^8g^{1.6}M_{}\mathrm{yr}^1,$$ () ibidem, JS91’s formula. In addition, the expression $$\frac{\mathrm{d}M}{\mathrm{d}t}=3.4\times 10^{12}L^{+1.1}g^{0.9}M_{}\mathrm{yr}^1,$$ () suggested to us by D. VandenBerg, also provides a good fit to the data. “Occam’s razor”<sup>4</sup><sup>4</sup>4Entia non multiplicanda praeter necessitatem.” (“Entities must not be multiplied beyond necessity.”) Occam’s Razor is often referred to as the “Principle of Simplicity” or the “Law of Parsimony” as well. would favor equations (A3) or (A4) in comparison with the others, but otherwise we are unable to identify any of them as being obviously superior. We emphasize that mass loss formulae such as those given above should not be employed in astrophysical applications (stellar evolution, analysis of integrated galactic spectra, etc.) without keeping in mind these exceedingly important limitations: 1. As in Reimers’ (1975a, 1975b) case, equations (A1) through (A5) were derived based on Population I stars. Hence they too are not well established for low-metallicity stars. Moreover, there are only two first-ascent giants ($`\alpha `$ Boo and $`\beta `$ Peg) in the adopted sample; 2. Quoting Reimers (1977), “besides the basic \[stellar\] parameters $`\mathrm{}`$ the mass-loss process is probably also influenced by the angular momentum, magnetic fields and close companions. The order of magnitude of such effects is completely unclear. Obviously, many observations will be necessary before we get a more detailed picture of stellar winds in red giants” (emphasis added). See also Dupree & Reimers (1987); 3. Similarly, Reimers (1975a) has pointed out that such mass loss relations “should be considered as interpolation formulae only and not as strictly valid. Deviations due to various other properties can be expected and must be left to future research”; 4. “One should always bear in mind that a simple $`\mathrm{}`$ formula like that proposed can be expected to yield only correct order-of-magnitude results if extrapolated to the short-lived evolutionary phases near the tips of the giant branches” (Kudritzki & Reimers 1978); 5. Intrinsic scatter among mass loss rates on the RGB is expected to be present (e.g., Dupree & Reimers 1987; Rood et al. 1997; and references therein). The origin of such scatter, inferred from the CMDs of GCs (e.g., Rood 1973; Renzini & Fusi Pecci 1988), is currently unknown; 6. According to Willson (1999), “correlations between observed mass loss rates and physical parameters of cool stars may be (and usually are) dominated by selection effects. Most observations have been interpreted using models that are relatively simple (stationary, polytropic, spherically symmetric, homogeneous) and thus ‘observed’ mass loss rates or limits may be in error by orders of magnitude in some cases.” She further claims that “Reimers’ relation tells us the properties of stars that are losing mass, and not the mass loss rate that arises from a certain set of stellar parameters. It has been widely misunderstood and widely misused in stellar evolution and stellar population studies”; 7. The two first-ascent giants analyzed by Robinson, Carpenter, & Brown (1998) using HST-GHRS, $`\alpha `$ Tau and $`\gamma `$ Dra, appear to both lie about one order of magnitude below the relations that best fit the JS91 data—two orders of magnitude in fact, if compared to Reimers’ formula (see Fig. A2). The K supergiant $`\lambda `$ Vel, analyzed by the same group (Mullan, Carpenter, & Robinson 1998), appears in much better agreement with the adopted dataset and best fitting relations. In effect, mass loss on the RGB is an excellent, but virtually untested, second-parameter candidate. It may be connected to GC density, rotational velocities, and abundance anomalies on the RGB. It will be extremely important to study mass loss in first-ascent, low-metallicity giants—in the field and in GCs alike—using the most adequate ground- and space-based facilities available, or expected to become available, in the course of the next decade. Moreover, in order to properly determine how (mean) mass loss behaves as a function of the fundamental physical parameters and metallicity, astrometric missions much more accurate than Hipparcos, such as SIM and GAIA, will certainly be necessary. In the meantime, we suggest that using several different mass loss formulae constitutes a better approach than relying on a single one. In this sense, the latest RGB evolutionary tracks by VandenBerg et al. (2000) were employed in an investigation of the amount of mass lost on the RGB and its dependence on age. As in some previous work (e.g., D’Cruz et al. 1996), the effects of mass loss upon RGB evolution were ignored, which is a good approximation except for those stars which lose a considerable fraction of their mass during their evolution up the RGB (e.g., Castellani & Castellani 1993). In Figure A3, the mass loss–age relationship is shown for each of equations (A1) through (A5), and also for Reimers’ (1975a, 1975b) formula, for a metallicity $`[\mathrm{Fe}/\mathrm{H}]=1.41`$, $`[\alpha /\mathrm{Fe}]=+0.30`$. Note that even though these formulae are all based on the very same dataset (JS91), the implications do differ from case to case.
no-problem/9910/chao-dyn9910021.html
ar5iv
text
# Regular Tunnelling Sequences in Mixed Systems ## Abstract Abstract We show that the pattern of tunnelling rates can display a vivid and regular pattern when the classical dynamics is of mixed chaotic/regular type. We consider the situation in which the dominant tunnelling route connects to a stable periodic orbit and this orbit is surrounded by a regular island which supports a number of quantum states. We derive an explicit semiclassical expression for the positions and tunnelling rates of these states by use of a complexified trace formula. PACS: 03.65.Sq, 73.40Gk, 05.45.Mt, 05.45.-a Keywords: tunnelling, chaos, periodic orbit theory Tunnelling in systems whose classical limit displays a mixture of chaotic and integrable behaviour is often quite complex and impossible to predict analytically. Much attention has been paid recently, for example, to the regime of chaos-assisted tunnelling in which dynamical tunnelling occurs between quasimodes supported in integrable island-chains embedded in a chaotic sea. By contrast, we report on a remarkably ordered structure that appears in the tunnelling behaviour of a particular kind of mixed system and give analytical estimates for the corresponding tunnelling rates. It is distinct from the case of chaos-assisted tunnelling because tunnelling is through an energetic barrier rather than through dynamical barriers such as KAM tori. The special feature of these systems is that the complex orbit which defines the optimal tunnelling route across the barrier connects to a stable periodic orbit at the centre of an island chain (which is generally embedded in a chaotic sea). The ordered nature of these tunnelling rates is immediately evident in Fig. 1 where we show the numerically obtained splittings between quasi-doublets of the double-well potential $`V(x,y)=(x^21)^4+x^2y^2+2y^2/5`$. We have held the energy fixed (at $`E=9/10`$) and found the values of $`q=1/\mathrm{}`$ for which this is an energy level. The resulting spectrum of $`q`$ values is equivalent in most respects to a standard energy-spectrum, with the advantage that the classical dynamics is fixed throughout. The largest splittings in Fig. 1 are highly ordered, forming a regular progression of families which grow larger in number higher in the spectrum. These correspond to states supported near the centre of the island and we will offer a simple analytical prediction for them. The smallest splittings in Fig. 1 form a disordered jumble. These correspond to states supported in the chaotic sea and dynamically excluded from the main tunnelling route. To analyse the ordered sequence we use a method developed in and used until now primarily to understand predominantly chaotic systems. We first present the analysis for the case that $`q=1/\mathrm{}`$ is held fixed and an energy-spectrum is computed. We then give the simple extension for the fixed energy $`q`$-spectrum, appropriate for the results of Fig. 1. We call the mean energy of the $`n`$’th doublet $`E_n`$ and the corresponding splitting $`\mathrm{\Delta }E_n`$ and define the following dimensionless spectral function $$f(E,q)=\underset{n}{}\mathrm{\Delta }E_n\delta (EE_n).$$ (1) There is an analogous definition for metastable wells which have extremely narrow resonances, in which the widths $`\mathrm{\Gamma }_n`$ play the role of splittings. Such a system was studied in for a fully integrable system; the structure of the spectrum was like that shown here except with no irregular jumble at the bottom. While similar in outline, the detailed method of analysis was rather different, making use of the action-angle variables which exist in that situation. In we approximate (1) semiclassically as a sum over complex tunnelling orbits which traverse the barrier (in analogy to Gutzwiller’s formula for the density of states using real orbits ). We shall consider the special case in which there is an additional reflection symmetry such that the dominant tunnelling route lies on the symmetry axis so that it connects smoothly to a real periodic orbit. We then identify three distinct contributions to $`f(E,q)`$. There is the so-called instanton which has a purely imaginary action $`iK_0`$, lives under the barrier and runs along the symmetry axis between the classical turning points. It is important for determining the mean behaviour of the splittings but does not affect the fluctuation effects which we are trying to capture here. The second contribution comes from orbits which execute real dynamics along the real periodic orbit lying on the symmetry axis in addition to the instanton dynamics beneath the well. We imagine an orbit which starts at one of the turning points, executes $`r`$ repetitions of the real periodic orbit and then tunnels along the instanton path to finish at the other turning point. This orbit has a complex action $`S=rS_0+iK_0`$ where $`S_0`$ is the real action of the real periodic orbit. The contribution to $`f(E,q)`$ is given by $$f_{\mathrm{osc}}(E,q)=\frac{2}{\pi }\mathrm{Re}\underset{r=1}{\overset{\mathrm{}}{}}\frac{e^{qK_0+riqS_0}}{\sqrt{\mathrm{det}(W_0M_0^rI)}}.$$ (2) The matrices $`W_0`$ and $`M_0`$ are the monodromy matrices of the instanton and of the real periodic orbit respectively; the composite orbit has a monodromy matrix which is simply a product of these. The third contribution, discussed in , comes from homoclinic orbits which explore the real wells far away from the symmetry axis. They play no role in the present discussion. For fully developed chaos, all periodic orbits are unstable. The denominator of (2) then decays exponentially with $`r`$ and large repetitions are numerically unimportant. If the orbit is stable, however, there is no corresponding suppression. Singularities corresponding to distinct states arise when the expression is summed. This follows very closely the analogous development of Miller and Voros for the Gutzwiller trace formula when there is a stable orbit, except here we find splittings in addition to the positions of energies. (As was done in one dimension by Miller .) In the stable case, $`M_0`$ has eigenvalues $`e^{\pm i\alpha }`$ on the unit circle. (In higher dimensions, there would be a number of such eigenvalues and the theory would be generalised accordingly.) Let the diagonal matrix elements of $`W_0`$ in the complex eigenbasis of $`M_0`$ be $`A`$ and $`B`$ so that $`\mathrm{det}(W_0M_0^rI)`$ $`=`$ $`\mathrm{Tr}W_0M_0^r2`$ (3) $`=`$ $`Ae^{ir\alpha }+Be^{ir\alpha }2.`$ Since the instanton’s period is imaginary, complex conjugation acts as a time-reversal operation and we find that $`W_0^{}=W_0^1`$. We therefore conclude that $`(\mathrm{Tr}W_0M_0^r)^{}=\mathrm{Tr}(W_0M_0^r)^1=\mathrm{Tr}W_0M_0^r`$ (the latter equality holds because every symplectic matrix is conjugate to its inverse). Comparing this with (3) we conclude that $`A`$ and $`B`$ are real; we discuss how they are computed and offer a geometrical interpretation in the appendix. We now make use of the generating function of the Legendre polynomials to conclude $$\frac{1}{\sqrt{\mathrm{det}(WM_0^rI)}}=\underset{k=0}{\overset{\mathrm{}}{}}e^{i(k+1/2)r\alpha }\frac{Q_k(AB)}{B^{k+1/2}}$$ (4) where we assume without loss of generality that $`B`$ is the larger in magnitude of $`(A,B)`$ and we let $`Q_k(z)`$ denote the polynomial $$Q_k(z)=z^{k/2}P_k(z^{1/2}).$$ (5) Using this in (2) and summing the resulting geometric series in $`r`$ for each $`k`$ we get, $$f_{\mathrm{osc}}(E)=\frac{2e^{qK_0}}{\pi }\mathrm{Re}\underset{k=0}{\overset{\mathrm{}}{}}\frac{a_k}{e^{i\mathrm{\Phi }_k}1},$$ (6) where we have defined $`a_k`$ $`=`$ $`{\displaystyle \frac{Q_k(AB)}{B^{k+1/2}}}`$ $`\mathrm{\Phi }_k`$ $`=`$ $`qS_0+(k+1/2)\alpha .`$ (7) Semiclassical energy levels are found when the distribution above has poles and are implicit solutions $`E_{mk}`$ of $`\mathrm{\Phi }_k=2\pi m`$. From the residues we recover estimates of the corresponding splittings. This is a form of torus quantisation in which $`k`$ is a transverse quantum number, treated in harmonic approximation, and $`m`$ counts nodes along the orbit. The corresponding states are localised on the tori surrounding the stable periodic orbit and we find that their respective tunnelling rates are much larger than those of other states. Note that near $`E_{mk}`$ we can write $$\frac{1}{e^{i\mathrm{\Phi }_k}1}\frac{i}{qT_0}\frac{1}{EE_{mk}},$$ (8) where we have used that the period is $`T_0=S_0/E`$. Using the standard identity $$\mathrm{Im}\frac{1}{EE_{mk}}=\pi \delta (EE_{mk})$$ (9) we conclude $$\mathrm{\Delta }E_{mk}=\frac{2\mathrm{}}{T_0}e^{K_0/\mathrm{}}a_k(W_0,M_0),$$ (10) where the notation now stresses that $`a_k`$ depends on the transverse dynamics of the instanton and its real extension through the monodromy matrices $`(W_0,M_0)`$. All classical quantities are evaluated at energy $`E_{mk}`$. This formula has the same form as in one-dimensional tunnelling except for the additional factor $`a_k(W_0,M_0)`$. We will find this factor to be of order unity when $`k`$ is small and to decrease as $`k`$ increases. In we reported some specific numerical results for the energy quantisation. We now extend this result to the the $`q`$-spectrum, the advantage being that the classical quantities $`K_0`$, $`W_0`$ and $`M_0`$ are constant. The function $`f(E,q)`$ can equally well be interpreted as a function of $`q`$ at fixed $`E`$ so that (6) still applies. Solving for the poles and residues as above, we conclude $`q_{mk}`$ $`=`$ $`\left(2m\pi (k+1/2)\alpha \right)/S_0`$ $`\mathrm{\Delta }q_{mk}`$ $`=`$ $`{\displaystyle \frac{2}{S_0}}e^{q_{mk}K_0}a_k(W_0,M_0).`$ (11) The sequences of Fig. 1 can now be interpreted in terms of the quantum numbers $`m`$ and $`k`$. The central states with $`k=0`$ correspond to the largest splittings, about $`35`$ times larger than the local average. In Fig. 1 they are the uppermost curve of points (along which $`m`$ varies). Keeping $`m`$ fixed and letting $`k`$ increase one gets a sequence, which appears as a left-sloping shoulder in Fig. 1, along which $`q`$ and $`\mathrm{\Delta }q`$ decrease. In Fig.2 we show a subset of the spectrum with the semiclassical predictions for $`m=40`$ and a sequence of $`k`$ values (using $`A=0.861`$, $`B=4.043`$ and $`\alpha =2.783`$.) Clearly the small $`k`$ states are well reproduced. Our analysis essentially extrapolates the properties of the central periodic orbit to the entire island , and this is less accurate for the large $`k`$ states which are localised further from the periodic orbit. Reproducing the large $`k`$ values would require a more sophisticated analysis , although our formalism does at least capture the correct qualitative behaviour of these states. The irregular jumble of splittings at the bottom of the figure corresponds to states in the chaotic part of phase space. No simple theory exists for them though one could well imagine that the formalism of chaos-assisted tunnelling, in particular the interplay between regular and chaotic states, might be of use in describing them. As remarked, the number of well-defined $`k`$ states increases as we go up in the spectrum. This is because the number of states which the regular island can support increases as $`\mathrm{}`$ decreases. Also, there are occasional irregularities in the lattice of regular states, for example the $`k=0`$ state near $`q=18`$. These are due to near degeneracies between the regular state and some other state — either another regular one or a chaotic one. The actual eigenstates are then strongly mixed and hence so are the tunnelling rates. Appendix To apply Eq. (Regular Tunnelling Sequences in Mixed Systems) in practice one needs the parameters $`A`$ and $`B`$; we give here simple basis-independent expressions for them. In particular, we note that it is not explicitly necessary to transform $`W_0`$ to the eigenbasis of $`M_0`$. We calculate $`A`$ and $`B`$ as the smaller and larger respectively of $$A\text{ or }B=\mathrm{cosh}\beta \pm \gamma \mathrm{sinh}\beta $$ (12) where $`\mathrm{Tr}W_0=2\mathrm{cosh}\beta `$ and, $$\gamma =\frac{\mathrm{Im}\mathrm{Tr}W_0M_0}{2\mathrm{sin}\alpha \mathrm{sinh}\beta }.$$ (13) This is obtained by expressing $`W_0=e^{i\beta JH}`$ and $`M_0=e^{\alpha JK}`$ where $`J`$ is the unit symplectic matrix and $`H`$ and $`K`$ are real, positive-definite, $`2\times 2`$ symmetric matrices normalised so that $`\mathrm{det}H=\mathrm{det}K=1`$. Expanding $$W_0=e^{i\beta JH}=\mathrm{cosh}\beta iJH\mathrm{sinh}\beta $$ (14) and similarly for $`M_{0}^{}{}_{}{}^{r}=e^{r\alpha JK}`$, one recovers (3) with $`A`$ and $`B`$ as given above. The factor $`\gamma `$ has the following geometric interpretation. The action $`KMKM^T`$ of $`2\times 2`$ symplectic matrices $`M`$ on symmetric matrices $`K`$ can be identified with $`(2+1)`$-dimensional Lorentz transformations (since the relevant Lie algebras are isomorphic), the invariant $`\mathrm{det}K`$ playing the role of proper time. The matrices $`H`$ and $`K`$ define unit time-like $`(2+1)`$-vectors $`X=(x,y,t)`$ and $`\mathrm{\Xi }=(\xi ,\eta ,\tau )`$ respectively. For example $$H=\left(\begin{array}{cc}t+x& y\\ \\ y& tx\end{array}\right),t^2x^2y^2=1,$$ (15) and similarly for $`K`$. One then observes that $`\gamma =\mathrm{Tr}JHJK/2=X,\mathrm{\Xi }=t\tau x\xi y\eta `$. This can be interpreted as the dilation factor to boost the rest-frame of $`X`$ to that of $`\mathrm{\Xi }`$. In particular, $`\gamma >1`$.
no-problem/9910/quant-ph9910052.html
ar5iv
text
# Geometric Quantum Computation with NMR ## Acknowledgements Theory was developed by V.V, A.E., and G.C.; NMR experiments were developed and performed by J.A.J. We thank N. Soffe for helpful discussions. J.A.J. and A.E. are Royal Society Research Fellows. J.A.J. and A.E. thank Starlab (Riverland NV) for financial support. Correspondence should be addressed to J.A.J. (e-mail: jonathan.jones@qubit.org).
no-problem/9910/hep-ph9910336.html
ar5iv
text
# Four-Neutrino Oscillations ## I Introduction The existence of neutrino masses and mixing is today one of the hottest topics in high-energy physics and is considered as one of the best ways to obtain indications on the physics beyond the Standard Model. If neutrinos are massive and mixed, the left-handed components $`\nu _{\alpha L}`$ ($`\alpha =e,\mu ,\tau ,\mathrm{}`$) of the flavor neutrino fields are superpositions of the left-handed components $`\nu _{kL}`$ ($`k=1,\mathrm{},N`$) of neutrino fields with definite mass $`m_k`$, $`\nu _{\alpha L}={\displaystyle \underset{k=1}{\overset{N}{}}}U_{\alpha k}\nu _{kL}`$, where $`U`$ is a $`N\times N`$ unitary mixing matrix. In this case neutrino oscillations occur. From the measurement of the invisible decay width of the $`Z`$-boson it is known that the number of light active neutrino flavors is three, corresponding to $`\nu _e`$, $`\nu _\mu `$ and $`\nu _\tau `$. This implies that the number $`N`$ of massive neutrinos is bigger or equal to three. If $`N>3`$, in the flavor basis there are $`N_s=N3`$ sterile neutrinos, $`\nu _{s_1}`$, …, $`\nu _{s_{N_s}}`$. In this case the flavor index $`\alpha `$ takes the values $`e,\mu ,\tau ,s_1,\mathrm{},s_{N_s}`$. Evidences in favor of neutrino oscillations have been found in solar neutrino experiments , in atmospheric neutrino experiments and in the LSND accelerator experiment . The observed disappearance of atmospheric $`\stackrel{()}{\nu _\mu }`$’s can be explained by $`\stackrel{()}{\nu _\mu }\stackrel{()}{\nu _\tau }`$ and/or $`\stackrel{()}{\nu _\mu }\stackrel{()}{\nu _s}`$ transitions, the observed disappearance of solar $`\nu _e`$’s can be explained by $`\nu _e\nu _\mu `$ and/or $`\nu _e\nu _\tau `$ and/or $`\nu _e\nu _s`$ transitions, and $`\overline{\nu }_\mu \overline{\nu }_e`$ and $`\nu _\mu \nu _e`$ transitions have been observed in the LSND experiment. ## II The necessity of at least three independent $`\mathrm{\Delta }𝐦^2`$’s The three evidences in favor of neutrino oscillations found in solar and atmospheric neutrino experiments and in the accelerator LSND experiment imply the existence of at least three independent neutrino mass-squared differences. This can be seen by considering the general expression for the probability of $`\nu _\alpha \nu _\beta `$ transitions in vacuum, that can be written as (see ) $$P_{\nu _\alpha \nu _\beta }=\left|\underset{k=1}{\overset{N}{}}U_{\alpha k}^{}U_{\beta k}\mathrm{exp}\left(i\frac{\mathrm{\Delta }m_{kj}^2L}{2E}\right)\right|^2,$$ (1) where $`\mathrm{\Delta }m_{kj}^2m_k^2m_j^2`$, $`j`$ is any of the mass-eigenstate indices, $`L`$ is the distance between the neutrino source and detector and $`E`$ is the neutrino energy. The range of $`L/E`$ characteristic of each type of experiment is different: $`L/E10^{11}10^{12}\mathrm{eV}^2`$ for solar neutrino experiments, $`L/E10^210^3\mathrm{eV}^2`$ for atmospheric neutrino experiments and $`L/E1\mathrm{eV}^2`$ for the LSND experiment. From Eq. (1) it is clear that neutrino oscillations are observable in an experiment only if there is at least one mass-squared difference $`\mathrm{\Delta }m_{kj}^2`$ such that $`\mathrm{\Delta }m_{kj}^2L/2E0.1`$ (the precise lower bound depends on the sensitivity of the experiment) in a significant part of the energy and source-detector distance intervals of the experiment (if this condition is not satisfied, $`P_{\nu _\alpha \nu _\beta }\left|_kU_{\alpha k}^{}U_{\beta k}\right|^2=\delta _{\alpha \beta }`$). Since the range of $`L/E`$ probed by the LSND experiment is the smaller one, a large mass-squared difference is needed for LSND oscillations, $`\mathrm{\Delta }m_{\mathrm{LSND}}^210^1\mathrm{eV}^2`$. The 99% CL maximum likelihood analysis of the LSND data in terms of two-neutrino oscillations gives $$0.20\mathrm{eV}^2\mathrm{\Delta }m_{\mathrm{LSND}}^22.0\mathrm{eV}^2.$$ (2) Furthermore, from Eq. (1) it is clear that a dependence of the oscillation probability from the neutrino energy $`E`$ and the source-detector distance $`L`$ is observable only if there is at least one mass-squared difference $`\mathrm{\Delta }m_{kj}^2`$ such that $`\mathrm{\Delta }m_{kj}^2L/2E1`$. Indeed, the exponentials of all the phases $`\mathrm{\Delta }m_{kj}^2L/2E1`$ are equal to one and the contributions of all the phases $`\mathrm{\Delta }m_{kj}^2L/2E1`$ are washed out by the average over the energy and source-detector ranges characteristic of the experiment. Since a variation of the oscillation probability as a function of neutrino energy has been observed both in solar and atmospheric neutrino experiments and the ranges of $`L/E`$ characteristic of these two types of experiments are different from each other and different from the LSND range, two more mass-squared differences with different scales are needed: $$\mathrm{\Delta }m_{\mathrm{sun}}^210^{12}10^{11}\mathrm{eV}^2\text{(VO)},\mathrm{\Delta }m_{\mathrm{atm}}^210^310^2\mathrm{eV}^2.$$ (3) The condition (3) for the solar mass-squared difference $`\mathrm{\Delta }m_{\mathrm{sun}}^2`$ has been obtained under the assumption of vacuum oscillations (VO). If the disappearance of solar $`\nu _e`$’s is due to the MSW effect (see ), the condition $$\mathrm{\Delta }m_{\mathrm{sun}}^210^4\mathrm{eV}^2\text{(MSW)}$$ (4) must be fulfilled in order to have a resonance in the interior of the sun. Hence, in the MSW case $`\mathrm{\Delta }m_{\mathrm{sun}}^2`$ must be at least one order of magnitude smaller than $`\mathrm{\Delta }m_{\mathrm{atm}}^2`$. It is possible to ask if three different scales of neutrino mass-squared differences are needed even if the results of the Homestake solar neutrino experiment is neglected, allowing an energy-independent suppression of the solar $`\nu _e`$ flux. The answer is that still the data cannot be fitted with only two neutrino mass-squared differences because an energy-independent suppression of the solar $`\nu _e`$ flux requires large $`\nu _e\nu _\mu `$ or $`\nu _e\nu _\tau `$ transitions generated by $`\mathrm{\Delta }m_{\mathrm{atm}}^2`$ or $`\mathrm{\Delta }m_{\mathrm{LSND}}^2`$. These transitions are forbidden by the results of the Bugey and CHOOZ reactor $`\overline{\nu }_e`$ disappearance experiments and by the non-observation of an up-down asymmetry of $`e`$-like events in the Super-Kamiokande atmospheric neutrino experiment . ## III Four-neutrino schemes The existence of three different scales of $`\mathrm{\Delta }m^2`$ imply that at least four light massive neutrinos must exist in nature. Here we consider the schemes with four light and mixed neutrinos, which constitute the minimal possibility that allows to accommodate the results of all neutrino oscillation experiments. In this case, in the flavor basis the three active neutrinos $`\nu _e`$, $`\nu _\mu `$, $`\nu _\tau `$ are accompanied by a sterile neutrino $`\nu _s`$. The six types of four-neutrino mass spectra with three different scales of $`\mathrm{\Delta }m^2`$ that can accommodate the hierarchy $`\mathrm{\Delta }m_{\mathrm{sun}}^2\mathrm{\Delta }m_{\mathrm{atm}}^2\mathrm{\Delta }m_{\mathrm{LSND}}^2`$ are shown qualitatively in Fig. III. In all these mass spectra there are two groups of close masses separated by the “LSND gap” of the order of 1 eV. In each scheme the smallest mass-squared difference corresponds to $`\mathrm{\Delta }m_{\mathrm{sun}}^2`$ ($`\mathrm{\Delta }m_{21}^2`$ in schemes I and B, $`\mathrm{\Delta }m_{32}^2`$ in schemes II and IV, $`\mathrm{\Delta }m_{43}^2`$ in schemes III and A), the intermediate one to $`\mathrm{\Delta }m_{\mathrm{atm}}^2`$ ($`\mathrm{\Delta }m_{31}^2`$ in schemes I and II, $`\mathrm{\Delta }m_{42}^2`$ in schemes III and IV, $`\mathrm{\Delta }m_{21}^2`$ in scheme A, $`\mathrm{\Delta }m_{43}^2`$ in scheme B) and the largest mass squared difference $`\mathrm{\Delta }m_{41}^2=\mathrm{\Delta }m_{\mathrm{LSND}}^2`$ is relevant for the oscillations observed in the LSND experiment. The six schemes are divided into four schemes of class 1 (I–IV) in which there is a group of three masses separated from an isolated mass by the LSND gap, and two schemes of class 2 (A, B) in which there are two couples of close masses separated by the LSND gap. It has been show that the schemes of class 1 are disfavored by the data if also the negative results of short-baseline $`\overline{\nu }_e`$ and $`\nu _\mu `$ disappearance experiments are taken into account . This is basically due to the fact that the non-observation of neutrino oscillations due to $`\mathrm{\Delta }m_{41}^2`$ in short-baseline disappearance experiments imply that, in each scheme in Fig. III, $`\nu _e`$ and $`\nu _\mu `$ are mainly superpositions of one of the two groups of mass eigenstates separated by the LSND gap. Hence, in the schemes of class 1 $`\nu _e`$ and $`\nu _\mu `$ almost coincide with superpositions of the three grouped mass eigenstates or with the isolated mass eigenstate. Moreover, only the possibility of both $`\nu _e`$ and $`\nu _\mu `$ mainly superpositions of the three grouped mass eigenstates allows to explain the results of solar and atmospheric neutrino experiments with neutrino oscillations. This is because disappearance of solar $`\nu _e`$’s and atmospheric $`\nu _\mu `$’s is possible only if $`\nu _e`$ and $`\nu _\mu `$ have large mixing with the mass eigenstates whose mass-squared differences give $`\mathrm{\Delta }m_{\mathrm{sun}}^2`$ and $`\mathrm{\Delta }m_{\mathrm{atm}}^2`$. In all schemes of class 1 $`\mathrm{\Delta }m_{\mathrm{sun}}^2`$ and $`\mathrm{\Delta }m_{\mathrm{atm}}^2`$ are mass-squared differences between two of the three grouped mass eigenstates neutrinos. However, if both $`\nu _e`$ and $`\nu _\mu `$ are mainly superpositions of the three grouped mass eigenstates, short-baseline $`\nu _\mu \nu _e`$ oscillations due to $`\mathrm{\Delta }m_{41}^2`$ are strongly suppressed and one can calculate that the allowed transition probability is smaller than that observed in the LSND experiment . Hence, we conclude that the schemes of class 1 are disfavored by neutrino oscillation data. The two four-neutrino schemes of class 2 are compatible with the results of all neutrino oscillation experiments if the mixing of $`\nu _e`$ with the two mass eigenstates responsible for the oscillations of solar neutrinos ($`\nu _3`$ and $`\nu _4`$ in scheme A and $`\nu _1`$ and $`\nu _2`$ in scheme B) is large and the mixing of $`\nu _\mu `$ with the two mass eigenstates responsible for the oscillations of atmospheric neutrinos ($`\nu _1`$ and $`\nu _2`$ in scheme A and $`\nu _3`$ and $`\nu _4`$ in scheme B) is large . This is illustrated qualitatively in Figs. III and III, as we are going to explain. Let us define the quantities $`c_\alpha `$, with $`\alpha =e,\mu ,\tau ,s`$, in the schemes A and B as $$c_\alpha ^{(\mathrm{A})}\underset{k=1,2}{}|U_{\alpha k}|^2,c_\alpha ^{(\mathrm{B})}\underset{k=3,4}{}|U_{\alpha k}|^2.$$ (5) Physically $`c_\alpha `$ quantify the mixing of the flavor neutrino $`\nu _\alpha `$ with the two massive neutrinos whose $`\mathrm{\Delta }m^2`$ is relevant for the oscillations of atmospheric neutrinos ($`\nu _1`$, $`\nu _2`$ in scheme A and $`\nu _3`$, $`\nu _4`$ in scheme B). The negative results of short-baseline disappearance experiments imply that $$c_\alpha a_\alpha ^0\text{or}c_\alpha 1a_\alpha ^0(\alpha =e,\mu ).$$ (6) The quantities $`a_e^0`$ and $`a_\mu ^0`$, that depend on $`\mathrm{\Delta }m_{41}^2=\mathrm{\Delta }m_{\mathrm{LSND}}^2`$, are obtained, respectively, from the exclusion plots of short-baseline $`\overline{\nu }_e`$ and $`\nu _\mu `$ experiments (see ). From the exclusion curves of the Bugey reactor $`\overline{\nu }_e`$ disappearance experiment and of the CDHS and CCFR accelerator $`\nu _\mu `$ disappearance experiments it follows that $`a_e^03\times 10^2`$ for $`\mathrm{\Delta }m_{41}^2`$ in the LSND range (2) and $`a_\mu ^00.2`$ for $`\mathrm{\Delta }m_{41}^20.4\mathrm{eV}^2`$. The shadowed areas in Figs. III and III illustrate qualitatively the regions in the $`c_e`$$`c_\mu `$ plane allowed by the negative results of short-baseline $`\overline{\nu }_e`$ and $`\nu _\mu `$ disappearance experiments for a fixed value of $`\mathrm{\Delta }m_{41}^2`$. Figure III is valid for $`\mathrm{\Delta }m_{41}^20.3\mathrm{eV}^2`$ and shows that there are four regions allowed by the results of short-baseline disappearance experiments: region SS with small $`c_e`$ and $`c_\mu `$, region LS with large $`c_e`$ and small $`c_\mu `$, region SL with small $`c_e`$ and large $`c_\mu `$ and region LL with large $`c_e`$ and $`c_\mu `$. The quantities $`c_e`$ and $`c_\mu `$ can be both large, because the unitarity of the mixing matrix imply that $`c_\alpha +c_\beta 2`$ and $`0c_\alpha 1`$ for $`\alpha ,\beta =e,\mu ,\tau ,s`$. Figure III is valid for $`\mathrm{\Delta }m_{41}^20.3\mathrm{eV}^2`$, where there is no constraint on the value of $`c_\mu `$ from the results of short-baseline $`\nu _\mu `$ disappearance experiments. It shows that there are two regions allowed by the results of short-baseline $`\overline{\nu }_e`$ disappearance experiments: region S with small $`c_e`$ and region L with large $`c_e`$. Let us take now into account the results of solar neutrino experiments. Large values of $`c_e`$ are incompatible with solar neutrino oscillations because in this case $`\nu _e`$ has large mixing with the two massive neutrinos responsible for atmospheric neutrino oscillations and, through the unitarity of the mixing matrix, small mixing with the two massive neutrinos responsible for solar neutrino oscillations. Indeed, in the schemes of class 2 the survival probability $`P_{\nu _e\nu _e}^{\mathrm{sun}}`$ of solar $`\nu _e`$’s is bounded by $`P_{\nu _e\nu _e}^{\mathrm{sun}}c_e^2/2`$, and its possible variation $`\mathrm{\Delta }P_{\nu _e\nu _e}^{\mathrm{sun}}(E)`$ with neutrino energy $`E`$ is limited by $`\mathrm{\Delta }P_{\nu _e\nu _e}^{\mathrm{sun}}(E)\left(1c_e\right)^2`$ . If $`c_e`$ is large as in the LS or LL regions of Fig. III or in the L region of Fig. III, we have $`P_{\nu _e\nu _e}^{\mathrm{sun}}\left(1a_e^0\right)^2/21/2`$ and $`\mathrm{\Delta }P_{\nu _e\nu _e}^{\mathrm{sun}}(E)(a_e^0)^210^3`$, for $`\mathrm{\Delta }m_{41}^2=\mathrm{\Delta }m_{\mathrm{LSND}}^2`$ in the LSND range (2). Therefore, $`P_{\nu _e\nu _e}^{\mathrm{sun}}`$ is bigger than 1/2 and practically does not depend on neutrino energy. Since this is incompatible with the results of solar neutrino experiments interpreted in terms of neutrino oscillations, we conclude that the regions LS and LL in Fig. III and the region L in Fig. III are disfavored by solar neutrino data, as illustrated qualitatively by the vertical exclusion lines in Figs. III and III. Let us consider now the results of atmospheric neutrino experiments. Small values of $`c_\mu `$ are incompatible with atmospheric neutrino oscillations because in this case $`\nu _\mu `$ has small mixing with the two massive neutrinos responsible for atmospheric neutrino oscillations. Indeed, the survival probability of atmospheric $`\nu _\mu `$’s is bounded by $`P_{\nu _\mu \nu _\mu }^{\mathrm{atm}}\left(1c_\mu \right)^2`$ , and it can be shown that the Super-Kamiokande up–down asymmetry of high-energy $`\mu `$-like events generated by atmospheric neutrinos, $`𝒜_\mu =0.311\pm 0.043\pm 0.01`$ , and the exclusion curve of the Bugey $`\overline{\nu }_e`$ disappearance experiment imply the upper bound $`c_\mu 0.45b_\mu ^{\mathrm{SK}}`$. This limit is depicted qualitatively by the horizontal exclusion lines in Figs. III and III. Therefore, we conclude that the regions SS and LS in Fig. III and the small-$`c_\mu `$ parts of the regions S and L in Fig. III are disfavored by atmospheric neutrino data. Finally, let us consider the results of the LSND experiment. In the schemes of class 2 the amplitude of short-baseline $`\stackrel{()}{\nu _\mu }\stackrel{()}{\nu _e}`$ oscillations is given by $`A_{\mu e}=\left|{\displaystyle \underset{k=1,2}{}}U_{ek}U_{\mu k}^{}\right|^2=\left|{\displaystyle \underset{k=3,4}{}}U_{ek}U_{\mu k}^{}\right|^2`$ ($`A_{\mu e}`$ is equivalent to $`\mathrm{sin}^22\vartheta `$, where $`\vartheta `$ is the two-generation mixing angle used in the analysis of the data of short-baseline $`\stackrel{()}{\nu _\mu }\stackrel{()}{\nu _e}`$ experiments). The second equality is due to the unitarity of the mixing matrix. Using the Cauchy–Schwarz inequality we obtain $$c_ec_\mu A_{\mu e}^{\mathrm{min}}/4\text{and}\left(1c_e\right)\left(1c_\mu \right)A_{\mu e}^{\mathrm{min}}/4,$$ (7) where $`A_{\mu e}^{\mathrm{min}}`$ is the minimum value of the oscillation amplitude $`A_{\mu e}`$ observed in the LSND experiment. The bounds (7) are illustrated qualitatively in Figs. III and III. One can see that the results of the LSND experiment confirm the exclusion of the regions SS and LL in Fig. III and the exclusion of the small-$`c_\mu `$ part of region S and of the large-$`c_\mu `$ part of region L in Fig. III. Summarizing, if $`\mathrm{\Delta }m_{41}^20.3\mathrm{eV}^2`$ only the region SL in Fig. III, with $$c_ea_e^0\text{and}c_\mu 1a_\mu ^0,$$ (8) is compatible with the results of all neutrino oscillation experiments. If $`\mathrm{\Delta }m_{41}^20.3\mathrm{eV}^2`$ only the large-$`c_\mu `$ part of region S in Fig. III, with $$c_ea_e^0\text{and}c_\mu b_\mu ^{\mathrm{SK}},$$ (9) is compatible with the results of all neutrino oscillation experiments. Therefore, in any case $`c_e`$ is small and $`c_\mu `$ is large. However, it is important to notice that, as shown clearly in Figs. III and III, the inequalities (7) following from the LSND observation of short-baseline $`\stackrel{()}{\nu _\mu }\stackrel{()}{\nu _e}`$ oscillations imply that $`c_e`$ and $`1c_\mu `$, albeit small, have the lower bounds $$c_eA_{\mu e}^{\mathrm{min}}/4\text{and}1c_\mu A_{\mu e}^{\mathrm{min}}/4.$$ (10) ## IV Long-baseline experiments The smallness of $`c_e`$ in the schemes A and B implies that electron neutrinos do not oscillate in atmospheric and long-baseline neutrino oscillation experiments. The transition probabilities of electron neutrinos and antineutrinos into other states in long-baseline experiments (LBL) are bounded by $$1P_{\stackrel{()}{\nu _e}\stackrel{()}{\nu _e}}^{(\mathrm{LBL})}a_e^0\left(2a_e^0\right).$$ (11) The solid line in Fig. IV shows the corresponding limit obtained from the 90% CL exclusion plot of the Bugey experiment. The shadowed region in Fig. IV is allowed if $`\mathrm{\Delta }m_{41}^2`$ lies in the LSND range (2). The dash-dotted line in Fig. IV shows the upper bound for the transition probability of $`\overline{\nu }_e`$’s into other states obtained from the final 90% exclusion plot of the CHOOZ experiment for $`\mathrm{\Delta }m_{\mathrm{atm}}^23\times 10^3\mathrm{eV}^2`$ (the final 95% exclusion plot of the CHOOZ experiment gives $`P_{\stackrel{()}{\nu _e}\stackrel{()}{\nu _e}}^{(\mathrm{LBL})}0.6`$). One can see that the results of the CHOOZ experiment agree with the upper bound (11), that is more stringent than the CHOOZ bound for $`\mathrm{\Delta }m_{41}^2`$ in the LSND range. The probability of $`\stackrel{()}{\nu _\mu }\stackrel{()}{\nu _e}`$ transitions in vacuum in LBL experiments is limited by $$\frac{1}{4}A_{\mu e}^{\mathrm{min}}P_{\stackrel{()}{\nu _\mu }\stackrel{()}{\nu _e}}^{(\mathrm{LBL})}\mathrm{min}[a_e^0\left(2a_e^0\right),a_e^0+\frac{1}{4}A_{\mu e}^0],$$ (12) where $`A_{\mu e}^0`$ is the upper bound for the amplitude $`A_{\mu e}`$ of short-baseline $`\stackrel{()}{\nu _\mu }\stackrel{()}{\nu _e}`$ transitions measured in accelerator neutrino experiments and $`A_{\mu e}^{\mathrm{min}}`$ is the minimum value of $`A_{\mu e}`$ observed in the LSND experiment. The bound obtained with Eq. (12) from the 90% CL exclusion plots of the Bugey experiment and of the BNL E776 and KARMEN experiments is depicted by the dashed line in Fig. IV. The dark shadowed region is allowed by the results of the LSND experiment, taking into account the lower bound in Eq. (12). The solid line in Fig. IV shows the upper bound on $`P_{\stackrel{()}{\nu _\mu }\stackrel{()}{\nu _e}}^{(\mathrm{LBL})}`$ in the K2K experiment taking into account matter effects . In this case there is no lower bound and the dark plus light shadowed regions are allowed by the results of the LSND experiment. The expected 90% CL sensitivity of the K2K long-baseline accelerator neutrino experiment for $`\mathrm{\Delta }m_{\mathrm{atm}}^23\times 10^3\mathrm{eV}^2`$ is indicated in Fig. IV by the dash-dotted line. It can be seen that the results of short-baseline experiments indicate an upper bound for $`P_{\stackrel{()}{\nu _\mu }\stackrel{()}{\nu _e}}^{(\mathrm{LBL})}`$ smaller than the expected sensitivity of the K2K experiment, unless $`\mathrm{\Delta }m_{41}^20.20.3\mathrm{eV}^2`$. Let us emphasize that the upper bounds for the oscillation probabilities in long-baseline experiments presented in Figs. IV and IV depend on $`\mathrm{\Delta }m_{41}^2`$, that is the mass-squared difference relevant for oscillations in short-baseline experiment. The transition probabilities measured in each long-baseline experiment can be much smaller that the maximal one, that lies below the upper bounds in Figs. IV and IV, if $`\mathrm{\Delta }m_{\mathrm{atm}}^2`$ is much smaller than the mass-squared difference to which the experiment is most sensitive. A further consequence of the smallness of $`c_e`$ and $`1c_\mu `$ in the schemes A and B is the existence of a stringent upper bound for the size of CP or T violation that could be measured in long-baseline experiments in the $`\nu _\mu \nu _e`$ and $`\overline{\nu }_\mu \overline{\nu }_e`$ channels . On the other hand, the effects of CP violation in long-baseline $`\nu _\mu \nu _\tau `$ and $`\overline{\nu }_\mu \overline{\nu }_\tau `$ transitions can be as large as allowed by the unitarity of the mixing matrix . ## V Conclusions We have seen that only the two four-neutrino schemes A and B of class 2 in Fig. III are compatible with the results of all neutrino oscillation experiments. These two schemes are equivalent for the phenomenology of neutrino oscillations. We have shown that the quantities $`c_e`$ and $`1c_\mu `$ in the schemes A and B are small. Physically $`c_\alpha `$, defined in Eq. (5), quantify the mixing of the flavor neutrino $`\nu _\alpha `$ with the two massive neutrinos whose $`\mathrm{\Delta }m^2`$ is relevant for the oscillations of atmospheric neutrinos ($`\nu _1`$, $`\nu _2`$ in scheme A and $`\nu _3`$, $`\nu _4`$ in scheme B). Considering long-baseline neutrino oscillation experiments, the smallness of of $`c_e`$ implies stringent upper bounds for the probability of $`\stackrel{()}{\nu _e}`$ transitions into other states, for the probability of $`\stackrel{()}{\nu _\mu }\stackrel{()}{\nu _e}`$ transitions and for the size of CP or T violation effects in $`\nu _\mu \nu _e`$ and $`\overline{\nu }_\mu \overline{\nu }_e`$ transitions.
no-problem/9910/cond-mat9910108.html
ar5iv
text
# Study of the connection between hysteresis and thermal relaxation in magnetic materials ## I Introduction The joint presence of hysteresis and thermal relaxation is a common situation in physical systems characterized by metastable energy landscapes (magnetic hysteresis, plastic deformation, superconducting hysteresis), and their interpretation still represents a challenge to non-equilibrium thermodynamics. Hysteresis is the consequence of the fact that, when the system is not able to reach thermodynamic equilibrium during the time of the experiment, the system will remain in a temporary local minimum of its free energy, and its response to external actions will become history dependent. On the other hand, the fact that the system is not in equilibrium, makes it spontaneously approach equilibrium, and this will give rise to relaxation effects even if no external action is applied to the system. In magnetic materials, thermal relaxation effects (also termed magnetic after-effects or magnetic viscosity effects) are particularly important in connection with data storage and with the performance of permanent magnets, where a certain magnetization state must be permanently conserved. Magnetic viscosity experiments often show an intricate interplay with hysteresis and with the role of field history in the preparation of the system . Thermal-activation-type models have been proposed to interpret the fact that the initial stages of relaxation often exhibit a logarithmic time decay of the magnetization , but these models fail in explaining the connection of viscosity to hysteresis properties. Extensions were proposed to describe thermally activated dynamic effects on hysteresis loops and non-logarithmic decay of the magnetization. There exist in the literature models for the prediction of coercivity, where thermal activation over barriers plays a key role , but these models usually do not pay attention to the problem of the prediction of hysteresis under more complicated field histories. An alternative is represented by detailed micromagnetic descriptions of the magnetization process, coupled to Montecarlo techniques for the study of its time evolution . In these cases, the extrapolation of the results to long times and the identification of slow (log-type) relaxation laws is far from straightforward. Particular attention has been recently paid in the literature to the joint description of hysteresis and thermal relaxation in systems that are the superposition of elementary bistable units . The working hypothesis, inspired by the results of several previous authors , is that the free energy of the system can be decomposed into the superposition of simple free energy profiles, each characterized by two energy minima separated by a barrier. The approach is able to predict, together with hysteresis effects, the commonly observed logarithmic decay of the magnetization at constant applied field as well as more complicated history-dependent relaxation phenomena . In this, it yields conclusions similar to those given by hysteresis models driven by stochastic input. When thermal activation effects are negligible, the approach reduces to the Preisach model of hysteresis, which provides quite a detailed description of several key aspects of hysteresis . The most remarkable feature of the approach is that it yields this joint description of hysteresis and thermal relaxation on the basis of a few simple assumptions common to both aspects of the phenomenology. Following these considerations, in this article we investigate the connection between hysteresis and thermal relaxation both from the theoretical and the experimental viewpoint. Starting from the general approach developed in Ref. we derived analytical laws for various types of history-dependent relaxation patterns (Section II). The model predictions were then applied to interpret experiments on a magnetic system particularly suited to this task (Section III). The system, obtained by partial crystallization of the amorphous Fe<sub>73.5</sub>Cu<sub>1</sub>Nb<sub>3</sub>Si<sub>13.5</sub>B<sub>9</sub> alloy and known in the literature as Finemet-type alloy, consists of a $`70\%`$ volume fraction of Fe-Si crystallites with diameters of about 10 nm, imbedded in an amorphous matrix . The crystalline and the amorphous phases are both ferromagnetic at room temperature, where the system behaves like a good soft magnetic material. However, the two phases have distinct Curie temperatures, $`T_c350^{}`$C for the amorphous phase and $`T_c700^{}`$C for the crystalline phase. When the temperature is raised above $`350^{}`$C and the amorphous phase becomes paramagnetic, the grain-grain coupling provided by the ferromagnetic matrix is switched off, and the system is transformed into an assembly of magnetic nanograins randomly dispersed in a non-magnetic matrix. This change results in a steep increase of the coercive field, due to the decoupling of the nanograins , and in a definite enhancement of thermal relaxation effects , not only because the temperature is increased, but also (and mainly) because the typical activation volumes involved in magnetization reversal are strongly reduced, again by the decoupling of the nanograins. This is a situation where hysteresis and thermal relaxation acquire comparable importance in determining the response of the system, and where the time scale of relaxation effects becomes small enough (in the range of seconds) to be amenable to a detailed study. We carried out a systematic experimental investigation of hysteresis and thermal relaxation under various conditions, and we made use of the theoretical approach of Section II to interpret the experimental results. A remarkable general agreement between theory and experiment was found. More precisely our main results can be summarized as follows. (i) The various relaxation patterns observed after different field histories are all consistent with a unique value of the fluctuation field , $`H_f=k_BT/\mu _0M_sv`$, where $`k_B`$ is the Boltzmann constant, $`T`$ is the absolute temperature, $`M_s`$ is the saturation magnetization and $`v`$ is the activation volume. At $`T=430^{}`$C, we found $`H_f`$ 8 Am<sup>-1</sup>. This corresponds to an activation volume $`v`$ of linear dimensions $`v^{1/3}`$ of the order of 100 nm, which indicates that, even well beyond the Curie point of the amorphous matrix magnetization reversal still involves a consistent number of coupled nanograins. (ii) The saturation loop coercive field $`H_c`$ depends on the applied field rate $`dH/dt`$ according to the law: $$H_c=H_f\mathrm{ln}(|dH/dt|)+C$$ (1) where $`H_f`$ is the fluctuation field previously mentioned and $`C`$ is a suitable constant. (iii) Remarkable regularities are exhibited by the family of relaxation curves $`M(t;H_0,dH/dt)`$, generated by starting from a large positive field (positive saturation), then changing the field down to the final value $`H_0`$ at the rate $`dH/dt`$, and finally measuring the time decay of magnetization under the constant field $`H_0`$. The family of experimental curves shows a definite non-logarithmic behavior. However, all relaxation curves collapse onto a single curve by plotting $`M(t;H_0,dH/dt)`$ as a function of the athermal field $`H_{ath}`$, defined as $$H_{ath}(t;H_0,dH/dt)=H_0\pm H_f\mathrm{ln}\left(\frac{t}{\tau _0}+\frac{H_f}{\tau _0|dH/dt|}\right)$$ (2) where $`\tau _0`$ is a typical attempt time and the $`\pm `$ sign is the sign of the field rate $`dH/dt`$. This experimental result represents an important confirmation of the theoretical approach. In fact, the existence of the curve $`M(H_{ath})`$ is a direct consequence of the fact that hysteresis and thermal relaxation are controlled by the same distribution of energy barriers. The curve $`M(H_{ath})`$ represents the magnetization curve that one would measure if it was hypothetically possible to switch off thermal effects completely and the athermal field $`H_{ath}`$ plays the role of effective field summarizing the joint effect of applied field and temperature. (iv) When the external field is reversed at the turning point $`H_p`$, the susceptibility $`dM/dH`$ after the turning point obeys the law: $$\frac{dM}{dH}=\frac{\chi _{irr}}{2\mathrm{exp}(|HH_p|/H_f)1}$$ (3) where $`\chi _{irr}`$ is the irreversible susceptibility just before the turning point. This law is independent of the field rate $`dH/dt`$. (v) The role of field history is investigated by measurements of the magnetization decay $`M(t)`$ under constant field $`H_0`$, carried out after first decreasing the field from positive saturation down to a certain reversal field $`H_p<0`$, and then increasing it from $`H_p`$ up to $`H_0>H_p`$. Three distinct regimes emerge quite distinctly: i) monotonic decrease of $`M(t)`$ for small $`H_0H_p`$; ii) monotonic increase for large $`H_0H_p`$ ; iii) non monotonic behavior in a intermediate region. We will show that also this experimental behavior is in agreement with the predictions of the model discussed in Section II. The general conclusion of our analysis is that the individual magnetization curves and relaxation laws can be quite complicated and are not in general described by log-type laws. However, all of them can be reduced to a small number of universal laws, which are precisely the laws predicted by the model. ## II Thermal activation in Preisach systems The presence of metastable states in the system free energy is the key concept to understand hysteresis and thermal relaxation effects. In this respect, the study of a system composed by a collection of elementary bistable units yields substantial simplification without losing the basic physical aspects of the problem. Each unit carries a magnetic moment which can attain one of the two values $`+\mathrm{\Delta }m`$ (”up” or ”+” state) and $`\mathrm{\Delta }m`$ (”down” or ”-” state). The unit is characterized by a simple double-well potential described by the barrier height $`\mu _0h_c\mathrm{\Delta }m`$ and the energy difference $`2\mu _0h_u\mathrm{\Delta }m`$ between the two states, where $`h_c>0`$ and $`h_u`$ are field parameters characterizing the unit. Let us consider a system consisting of a collection of many such units. The state of the collection is defined by specifying the two subsets of units, S<sub>+</sub> and S<sub>-</sub>, that are in the up and down state at a certain time. At zero temperature the history of the applied field only controls the shape of these subsets and the description reduces to the Preisach model. If we represent each elementary unit as a point of the plane $`(h_c,h_u)`$, known as Preisach plane, a certain field history will produce a line $`b(h_c)`$ in the Preisach plane separating the S<sub>+</sub> and S<sub>-</sub> subsets (see Fig.1). The $`b(h_c)`$ line is just the set of internal state variables that are needed to characterize the metastability of the system. In this sense, the approach can be interpreted as a thermodynamic formulation applicable to systems with hysteresis for which the hypothesis of local equilibrium is not valid. All thermodynamic functions can be expressed as functional of $`b(h_c)`$. In particular, the magnetization $`M`$ is given by the integral : $$M=2M_s_0^{\mathrm{}}𝑑h_c_0^{b(h_c)}p(h_c,h_u)𝑑h_u$$ (4) where $`M_s`$ is the saturation magnetization and $`p(h_c,h_u)`$ is the so-called Preisach distribution, giving the statistical weight of each elementary contribution. Eq.(4) holds under the symmetry assumption $`p(h_c,h_u)=p(h_c,h_u)`$. In presence of thermal activation, each unit $`(h_c,h_u)`$ relaxes to its energy minimum, with the transition rates given by the Arrhenius law. The interplay between external field changes and thermal relaxation effects, once averaged over the entire collection of units, determines the time evolution of the system. When the system is far from equilibrium, the relaxation picture is extremely complex and strongly history dependent. These aspects have been extensively discussed in Ref.. It has been shown that, if the temperature is not too high, the relevant contributions to the relaxation process are all concentrated around the time-dependent state line $`b(h_c,t)`$, and the time evolution of the state of the system is reduced to the time dependence of the $`b(h_c,t)`$ line itself. The following evolution equation governs the state line: $$\frac{b(h_c,t)}{t}=2\frac{H_f}{\tau _0}\mathrm{sinh}\left[\frac{H(t)b(h_c,t)}{H_f}\right]\mathrm{exp}\left[\frac{h_c}{H_f}\right]$$ (5) where $`\tau _0`$ is a typical attempt time, of the order of 10<sup>-9</sup>-10<sup>-10</sup> s, and $`H_f=k_BT/\mu _0\mathrm{\Delta }m`$ is the so-called fluctuation field. In the limit $`H_f0`$ the effect of thermal activation vanishes and the solution of Eq.(5) yields the Preisach switching rules . In order to make quantitative predictions about the magnetization $`M(t)`$ (Equation(4)), one must know the system state, given by the line $`b(h_c,t)`$, and the Preisach distribution $`p(h_c,h_u)`$. The state line $`b(h_c,t)`$ can be derived, given the field history $`H(t)`$, by solving Eq.(5). We consider here the case where the field history is composed of arbitrary sequences of time intervals where the field stays constant or the field varies at a given constant rate. In this case Eq.(5) can be exactly solved. Given a time interval where $`H(t)`$ changes at a given constant rate $`dH/dt`$, that is $`H(t)=H_0+(dH/dt)t`$, with initial conditions $`b(h_c,t=0)=b_0(h_c)`$ and $`H_0=b_0(0)`$, one obtains: $$b(h_c,t)=H(t)2H_f\text{Arth}\left[2\frac{\tau _H}{\tau _c}+\frac{\tau _H}{\tau _s}\text{th}\left[\pm \frac{t}{2\tau _s}+\text{Arth}\left(\pm 2\frac{\tau _s}{\tau _c}+\frac{\tau _s}{\tau _H}\text{th}\left(\frac{H_0b_0(h_c)}{2H_f}\right)\right)\right]\right]$$ (6) where the upper (lower) sign correspond to positive (negative) $`dH/dt`$, and $`\tau _c=\tau _0\mathrm{exp}\left({\displaystyle \frac{h_c}{H_f}}\right)`$ (7) $`\tau _H={\displaystyle \frac{H_f}{|dH/dt|}}`$ (8) $`\tau _s={\displaystyle \frac{\tau _H\tau _c}{\sqrt{4\tau _H^2+\tau _c^2}}}`$ (9) The special case where $`H`$ is constant in time, that is $`H(t)=H_0`$, is obtained by taking the limit $`dH/dt0`$, in Eqs.(6)-(9). Equation(6) reduces to: $$b(h_c,t)=H_02H_f\text{Arth}\left[\text{th}\left(\frac{H_0b_0(h_c)}{2H_f}\right)\mathrm{exp}\left(\frac{2t}{\tau _c}\right)\right]$$ (10) The limit of Eq.(10) for $`t\mathrm{}`$ represents the equilibrium configuration at constant field. One finds $`b(h_c,\mathrm{})=H_0`$. On the other hand, the limit of Eq.(6) for $`t\mathrm{}`$ gives the stationary state line under constant field rate, when all transients related to the initial state $`b_0(h_c)`$ have died out. One finds: $$b(h_c,t)=H(t)\pm 2H_f\text{Arth}\left(2\frac{\tau _H}{\tau _c}\frac{\tau _H}{\tau _s}\right)$$ (11) For an arbitrary sequence of $`n`$ time intervals in which $`H`$ varies at different field rates, the resulting state line $`b(h_c,t)`$ is obtained by using the solution (Eq.(6)) at the step $`n1`$ as the initial condition of step $`n`$. Of particular interest is the field history commonly considered in a magnetic viscosity experiment. The system is prepared by starting from positive saturation ($`H\mathrm{}`$), then bringing the field down to the final value $`H_0`$ at the rate $`dH/dt`$, and then, from the instant $`t=0`$, keeping $`H_0`$ constant over time. At $`t=0`$, the state line $`b(h_c,0)`$ is given by Eq.(11) (minus sign), with $`H(0)=H_0`$. The state line describing the relaxation is obtained by inserting Eq.(11) as the initial condition of Eq.(10). The resulting $`b(h_c,t)`$ (see Fig.2) can be approximately divided into two parts: $$b(h_c,t)=\{\begin{array}{cc}H_0\hfill & h_c<H^{}(t)\hfill \\ H_0H^{}(t)+h_c\hfill & h_c>H^{}(t)\hfill \end{array}$$ (12) where $$H^{}(t)=H_f\mathrm{ln}\left(\frac{\tau _H}{\tau _0}\right)+H_f\mathrm{ln}\left(1+\frac{t}{\tau _H}\right)$$ (13) At the initial time $`t=0`$ the state line is already relaxed in the portion $`h_c<H_f\mathrm{ln}(\tau _H/\tau _0)`$ as a consequence of the previous field history. Then the front propagates at logarithmic speed and the final equilibrium state is gradually approached. As a conclusion to this section, we discuss a useful approximate form of the results obtained so far. Let us consider the state line of Eq.(12). The relaxed part, $`b(h_c,t)=H_0`$ for $`h_c<H^{}(t)`$, extends over an $`h_c`$ interval of a few times $`H_f`$ (see Eq.(13)). Usually $`H_fH_c`$, where $`H_c`$ represents the coercive field. If the Preisach distribution is concentrated around $`h_cH_c`$, the contributions to the magnetization, Eq.(4), coming from the region $`h_c<H^{}(t)`$ will be small. Therefore, the magnetization calculated from Eq.(4) will not change substantially if one modifies the true state line of Fig.2 into the line $`b(h_c,t)=H_{ath}(t)+h_c`$ where $`H_{ath}(t)=H_0H^{}(t)`$. Perfectly analogous considerations apply to the case where $`H_0`$ is reached under positive $`dH/dt`$. The conclusion is that the magnetization associated with different combinations of time and field rate will be the same if one expresses the results in terms of the function $`M(H_{ath})`$, where $`H_{ath}`$ is given by Eq.(2). The field $`H_{ath}`$ plays the role of effective field summarizing the effects of the applied field and of thermal activation. We stress the fact that the existence of the function $`M(H_{ath})`$ is independent of the details of the energy barrier distribution of the system, provided the main approximation previously mentioned is satisfied. The approximate law of corresponding states expressed by $`M(H_{ath})`$ will be exploited in the analysis of the experimental results presented in the next section. ## III Thermal relaxation and hysteresis in nanocrystalline materials ### A Experimental setup We investigated the hysteresis properties of nanocrystalline Fe<sub>73.5</sub>Cu<sub>1</sub>Nb<sub>3</sub>Si<sub>13.5</sub>B<sub>9</sub> (Finemet) alloys. This material is commonly prepared by rapid solidification in the form of ribbons approximately 20$`\mu `$m thick. The material is amorphous in the as-cast state. Partial crystallization is induced by subsequent annealing in furnace at 550C for 1h, with the growth of Fe-Si crystal grains (with approximately 20 at % Si content). About 70% of the volume fraction turns out to be occupied by the Fe-Si crystal phase, in the form of nanograins of about 10nm linear dimension, imbedded in the amorphous matrix. The crystalline and the amorphous phases are both ferromagnetic, but have quite distinct Curie temperatures: $`T_c350^{}`$C for the amorphous phase, $`T_c700^{}`$C for the crystalline phase. Therefore, above $`350^{}`$C one has a system composed of ferromagnetic nanograins imbedded in a paramagnetic matrix, a situation in which the grain-grain coupling is strongly reduced and relaxation phenomena become important. The measurements were performed on a single strip (30cm long, 10mm wide and 20$`\mu `$m thick) placed inside an induction furnace. The temperature in the oven ranged from 20C to 500C, always below the original annealing temperature. The sample, the solenoid to generate the field and the compensated pick-up coils were inserted in a tube kept under controlled Ar atmosphere. The temperature, measured by a thermocouple, was checked to be constant along the sample. The large thermal inertia of the furnace permitted us to perform measurements under controlled temperature with the heater off, in order to reduce electrical disturbances. Experiments were performed up to a maximum temperature of 500C and it was checked that no structural changes were induced by the measurement at the highest temperature. Experiments were performed under field rate $`dH/dt`$ in the range $`1010^6`$ Am<sup>-1</sup>s<sup>-1</sup>. From 20C to 400C, we observed the increase of coercivity due to magnetic hardening . The paramagnetic transition of the amorphous matrix causes a strong increase of the coercive field and a decrease of the saturation magnetization. As expected, after a peak around 400C, the coercivity decreases due to the reduction of the Fe-Si anisotropy constant and the onset of superparamagnetic effects. We selected, for our investigation, the temperature $`T=430^{}`$C, above the maximum coercivity, as the point where nanograins are substantially decoupled. At this temperature we measured thermal activation effects on: * saturation loop, that is: i) loops and coercivity versus field rate and ii) relaxation curves versus applied field $`H_0`$ and field rate $`dH/dt`$; * return branches with turning point $`H_p`$, that is: i) branch shapes versus field rate and ii) relaxation curves versus field history ($`H_0`$ and $`H_p`$). ### B Thermal relaxation and dynamics along the saturation loop #### 1 $`H_c`$ vs. $`dH/dt`$ We found that above $`T350^{}`$C hysteresis loop shapes strongly depend on the field rate. Given the small ribbon thickness and the high electrical resistivity of the alloy, this dependence cannot be attributed to eddy current effects at least for magnetizing frequencies below 100Hz. Fig.3 shows hysteresis loops measured under different field rates at $`T=430^{}`$C. The inset shows the coercive field dependence on field rate, together with the prediction of Eq.(1). Curve fitting with $`H_f`$ as an adjustable parameter gives the result $`H_f`$=8Am<sup>-1</sup>. Eq.(1) was found to be valid for the description of hard magnetic materials and ultrathin ferromagnetic films. In the model of Section II, the coercive field $`H_c`$ is the field at which the state line $`b(h_c)`$ divides the Preisach plane in two parts giving equal and opposite contributions to the magnetization (Eq.(4)). When the external field decreases from positive saturation at the constant rate $`dH/dt`$, Eq.(11) describes the stationary regime where the state line $`b(h_c,t)`$ follows the field at the same velocity and can be approximately divided into two parts as in Eq.(12). When thermal activation is negligible ($`H_f0`$), the first part ($`h_c<H^{}`$) is absent and the coercive field $`H_cH_c^i`$ is the field at which the line $`b(h_c)=H_c^i+h_c`$ gives $`M=0`$ (Eq.(4)). When thermal activation is important ($`H_f0`$) the state line is given by Eq.(12) and, under the hypothesis that $`p(h_c,h_u)`$ is significantly different from zero only in the region $`h_c>H^{}`$ (see end of section II), the zero magnetization state is given by the state line of Fig.2 at $`t=0`$, $`H_c=H_0`$ and $`H_{ath}(0)=H_c^i`$. Taking into account Eq.(2) (with $`t=0`$), we conclude that the coercive field will depend on field rate according to the law: $$H_c=H_c^iH_f\mathrm{ln}\left(\frac{\tau _H}{\tau _0}\right)$$ (14) where $`\tau _H`$ is given by Eq.(8). By assuming $`\tau _010^{10}`$s, we obtain from the data of Fig.3 $`H_c^i320`$Am<sup>-1</sup>. At the lowest measured field rate $`dH/dt`$= 13.3 Am<sup>-1</sup>s<sup>-1</sup>, we have, from Eq.(13), that the state line is relaxed up to $`h_c180`$Am<sup>-1</sup>. By using Eq.(14), one can derive the limit field rate at which thermal effects become unimportant as $`dH/dt=H_f/\tau _0`$ = 8 10<sup>10</sup> Am<sup>-1</sup>s<sup>-1</sup>; and the superparamagnetic limit, where the coercive field vanishes, as $`dH/dt`$ = 5.6 10<sup>-7</sup> Am<sup>-1</sup>s<sup>-1</sup>. #### 2 Relaxation vs. $`H_0`$ and $`dH/dt`$ The relaxation experiment is performed by applying a large positive field, which is then decreased at a fixed rate $`dH/dt`$ to the final negative value $`H_0`$. The magnetization is then measured as a function of time, under constant $`H_0`$. We performed a systematic study of the relaxation behavior by changing $`H_0`$ and $`dH/dt`$. In general, we found that thermal relaxation results in large non-logarithmic variations of the magnetization. Figs.4,5,6 show i) the relaxation at different fields reached under the same field rate and ii) the relaxation at the same field, when $`H_0`$ is reached at different field rates. All relaxation curves collapse onto a single curve by plotting $`M`$ versus $`H_{ath}`$, given by Eq.(2) (see Fig.7). To obtain the $`M(H_{ath})`$ curve, the only parameter to be set is the fluctuation field $`H_f`$. Data collapse onto a unique curve by assuming $`H_f=8`$Am<sup>-1</sup>. The same curve collapse was found to be valid for the loops of Fig.3, when plotted as a function of the athermal field $`H_{ath}`$, with $`t=0`$. As an example, Fig.7 shows the result obtained for the loop measured at $`dH/dt`$=6.25 10<sup>3</sup> Am<sup>-1</sup>s<sup>-1</sup> (Fig.4), again assuming $`H_f=8`$Am<sup>-1</sup>. These regularities can be derived under the Preisach description of the system by the approximations discussed at the end of Section II, that is, by assuming that the Preisach distribution is significantly different form zero only in the region $`h_c>H^{}`$. The field $`H_{ath}`$ plays the role of effective driving field, summarizing the effect of applied field and thermal activation. This conclusion supports the idea that hysteresis and thermal activation phenomena depend on the same distribution of energy barriers. ### C Thermal relaxation and dynamics along return branches #### 1 Return branches vs. $`dH/dt`$ The role of field history on hysteresis curves was investigated by the measurement of recoil branches. We observed that when the field is reversed at the turning point $`H=H_p`$, the differential susceptibility after the reversal point is initially negative and equal to the susceptibility just before the turning point (see Fig.8). This effect is found to be independent of the field rate. After the turning point, the negative susceptibility decays to zero in a field interval of the order of the fluctuation field $`H_f`$. This effect was observed for several temperatures and peak field amplitudes. In order to explain this behavior, let us consider the system state in the Preisach plane. The $`b_T(h_c)`$ line correspondent to the turning point is given by Eq.(11) with $`H_0=H_p`$. When the field $`H`$ is increased after the turning point, $`b(h_c)`$ is given by Eq.(6), with $`dH/dt>0`$ and $`b_T(h_c)`$ as the initial condition. The resulting solution for $`b(h_c,t)`$(see Fig.9) shows that after the turning point a part of the state line still moves downward even if the field is increasing. This part of the line can be approximately described as $`b(h_c)=H^++h_c`$,where: $$H^+=H_pH_f\left[\mathrm{ln}\left(\frac{\tau _H}{\tau _0}\right)+\mathrm{ln}\left(2\mathrm{exp}\left(\frac{|HH_p|}{H_f}\right)\right)\right]$$ (15) where the $`\pm `$ is the sign of $`dH/dt`$ before the turning point. Under the approximation described at the end of Section II, i.e. that the Preisach distribution is concentrated at $`h_c>H^{}`$, the susceptibility after the turning point is obtained by inserting $`b(h_c)=H^++h_c`$ into Eq.(4) and taking the first derivative with respect to $`H`$. The result is given by Eq.(3), where $`\chi _{irr}=2M_sp(h_c,b(h_c))𝑑h_c`$ is the irreversible susceptibility before the turning point and the dependence on $`|dH/dt|`$ disappears. The fit of Eq.(3) to experimental data is shown in Fig.10, where the only free parameter $`H_f`$ is found to be $``$8 A/m, coherently with the other results previously discussed. #### 2 Relaxation curves vs. field history ($`H_0`$ and $`H_p`$) The role of field history on the relaxation effects was investigated by measuring the time decay of $`M(t)`$ at the field $`H_0`$ applied after the turning point $`H_p`$. In the case $`H_p<0`$ and $`H_0>H_p`$ we found tree distinct regimes: i) monotone decrease of $`M(t)`$ for small $`H_0H_p`$; ii) monotone increase for large $`H_0H_p`$ ; iii) non monotone behavior in a intermediate region. These three regimes are predicted by the model of Section II. Fig.9 shows that part A and B relax at logarithmic speed toward equilibrium with two different time constants and give contributions to the magnetization of opposite sign. However, quantitative predictions need a detailed knowledge of the Preisach distribution shape. We limit here our analysis to the case i) where, the contribution of the front A of Fig.9 is small. In the region $`h_c>H^{}`$ one finds that the state line can be approximately described as $`b(h_c)=H^+(t)+h_c`$, where $$H^+(t)=H^+(0)H_f\mathrm{ln}\left(1+\frac{t}{\tau _H\left(2\mathrm{exp}\left(|H_0H_p|/H_f\right)1\right)}\right)$$ (16) and $`H^+(0)`$ is given by Eq.(15). Since the system state can be identified by $`H^+(t)`$, this field assumes the same role of the athermal field of Section III.B.ii). By plotting a relaxation curve $`M(t)`$ measured at $`H_0=153`$Am<sup>-1</sup>, $`H_p=163`$Am<sup>-1</sup> and $`dH/dt=210^3`$Am<sup>-1</sup>s<sup>-1</sup> as a function of $`H^+(t)`$ of Eq.(16) we found that, with $`H_f=8`$Am<sup>-1</sup>, the resulting curve collapse on the $`M(H_{ath})`$ of Fig.7. ## IV Conclusions We have studied hysteresis and magnetic relaxation effects in Finemet-type nanocrystalline materials above the Curie temperature of the amorphous matrix, where the system consists of ferromagnetic nanograins ($``$10 nm linear size) imbedded in a paramagnetic matrix. This is a situation where hysteresis and thermal relaxation acquire comparable importance in determining the response of the system, and where the time scale of relaxation effects becomes small enough (in the range of seconds) to be amenable to a detailed study. Experiments have been carried out by investigation of the hysteresis loops dependence on the field rate, the magnetization time decay at different constant fields and the magnetization curve shape after field reversal. It is shown that all the experimental data can be explained by a model based on the assumption that the system consists of an assembly of elementary bistable units, distributed in energy levels and energy barriers. This approach permits one to describe all the measured effects in terms of a single parameter, the fluctuation field $`H_f`$, that was found to be $`H_f`$8Am<sup>-1</sup> (at 430C). This corresponds to an activation volume $`v`$ of linear dimensions $`v^{1/3}`$ of the order of 100 nm, which indicates that, even well beyond the Curie point of the amorphous matrix, magnetization reversal still involves a consistent number of coupled nanograins . In addition, the joint effect of the applied field and the thermal activation can be summarized by an effective field $`H_{ath}`$, and the measured curves can be rescaled onto to a single curve $`M(H_{ath})`$. The existence of the $`M(H_{ath})`$ curve, which is a direct prediction of the model, strongly support the idea that hysteresis and thermal relaxation are controlled by the same distribution of energy barriers. The results here obtained may represent a general framework for the study of the connection between hysteresis and thermal relaxation in different systems, such as materials for recording media and permanent magnets.
no-problem/9910/cond-mat9910373.html
ar5iv
text
# High-resolution Ce 3𝑑-edge resonant photoemission study of CeNi2 ## Abstract Resonant photoemission (RPES) at the Ce $`3d4f`$ threshold has been performed for $`\alpha `$-like compound CeNi<sub>2</sub> with extremely high energy resolution (full width at half maximum $`<`$ 0.2 eV) to obtain bulk-sensitive 4$`f`$ spectral weight. The on-resonance spectrum shows a sharp resolution-limited peak near the Fermi energy which can be assigned to the tail of the Kondo resonance. However, the spin-orbit side band around 0.3 eV binding energy corresponding to the $`f_{7/2}`$ peak is washed out, in contrast to the RPES spectrum at the Ce $`4d4f`$ threshold. This is interpreted as due to the different surface sensitivity, and the bulk-sensitive Ce $`3d4f`$ RPES spectra are found to be consistent with other electron spectroscopy and low energy properties for $`\alpha `$-like Ce-transition metal compounds, thus resolves controversy on the interpretation of Ce compound photoemission. The 4$`f`$ spectral weight over the whole valence band can also be fitted fairly well with the Gunnarsson-Schönhammer calculation of the single impurity Anderson model, although the detailed features show some dependence on the hybridization band shape and (possibly) Ce 5$`d`$ emissions. For several decades Ce metal and its compounds have attracted much attention because of their interesting physical properties such as Kondo behavior, mixed valency, heavy fermion property, various magnetic states, and superconductivity, etc. Such properties are believed to originate from the interplay of strong correlation between Ce 4$`f`$ electrons and hybridization between 4$`f`$ and conduction electrons, which is usually described by the periodic Anderson model. Although it is now generally agreed that low energy properties are well described by the Anderson model, there is still controversy as to the interpretation of high energy probes such as photoemission and inverse photoemission, which directly measure one-electron spectral weights. Gunnarsson-Schönhammer calculation (GS: Ref. ) and noncrossing approximation (NCA: Ref. ) of an impurity version of the model, i.e., the single impurity Anderson model (SIAM), make it possible to compare directly theoretical 4$`f`$-electron spectrum with experimental photoemission data. Thus in principle one can obtain model parameters of the SIAM for each compound from photoemission data, which can then be used to understand its low-energy properties. Resonant photoemission spectroscopy (RPES) at the Ce $`4d4f`$ edge, x-ray photoelectron spectroscopy (XPS) for Ce 3$`d`$ core-levels, and bremsstrahlung isochromat spectroscopy (BIS) have been used for this purpose and shown to be quite successful for many Ce compounds. On the other hand, Arko and co-workers dispute this interpretation, claiming that the 4$`f`$ weights of many Ce compounds measured by photoemission do not follow these schemes in that $`4d4f`$ RPES spectra of extremely $`\alpha `$-like Ce compounds show some discrepancy with core-level XPS and BIS spectra, which has not been completely understood as yet. One possible source of these discrepancy and controversy is the surface effect. From angle-dependent Ce 3$`d`$ core-level XPS spectra and threshold-dependent RPES spectra of several $`\alpha `$-like Ce compounds, Laubschat et al. proposed that surface electronic structures of those compounds are not $`\alpha `$-like but $`\gamma `$-like, which is now pretty well established. Since the photon energy of the $`4d4f`$ threshold is so low that $`4d4f`$ RPES is quite surface sensitive, the discrepancy between experimental $`4d4f`$ RPES spectrum and theoretical one, which is obtained from parameters mainly determined by XPS and BIS, can be understood in terms of surface effects. In this context, $`3d4f`$ RPES is more desirable to examine bulk electronic structures of Ce compounds because the escape depth of photoelectrons is longer. However, the resolution of photon source around the $`3d4f`$ threshold has been much poorer than that at the $`4d4f`$ threshold, which rendered limited information. In this work, we present $`3d4f`$ RPES spectra of very high Kondo temperature material CeNi<sub>2</sub> ($`T_\mathrm{K}1000`$ K) with the extremely high experimental energy resolution (0.2 eV full width at half maximum (FWHM)). We found that on-resonance spectrum shows a sharp resolution-limited peak near the Fermi energy ($`E_\mathrm{F}`$) which can be assigned to the tail of the Kondo resonance. Comparison with a GS calculation of the SIAM shows good agreement between theory and experiment, thus high-resolution $`3d4f`$ RPES opens new opportunities to study bulk electronic structures of Ce compounds. Polycrystalline CeNi<sub>2</sub> was prepared by arc melting of high-purity metals under argon atmosphere. The structure and homogeneity were checked by x-ray diffraction. $`3d4f`$ RPES measurements of CeNi<sub>2</sub> were performed at the beamline BL25SU of the SPring-8. FWHM of photon source around the $`3d4f`$ threshold was better than 200 meV and the temperature of the sample was maintained at 30 K throughout the measurements. The SCIENTA SES200 electron analyzer was used to obtain an overall experimental resolution of $``$ 0.2 eV FWHM. Clean sample surface was obtained by scraping in situ with a diamond file under the pressure of $`4\times 10^{10}`$ Torr. $`E_\mathrm{F}`$ of the sample was referenced to that of a gold film deposited onto the sample substrate. $`4d4f`$ RPES measurements of CeNi<sub>2</sub> were also carried out at the beamline BL-3B of the Photon Factory, High Energy Accelerator Research Organization (KEK) in Tsukuba. FWHM of photon sources around the $`4d4f`$ threshold was about 30 meV and the overall experimental resolution of 40 meV was obtained with the SCIENTA SES200 electron analyzer. Scraping was incorporated for the sample cleaning under the base pressure better than $`5\times 10^{10}`$ Torr, and all the measurements were done at 30 K. $`E_\mathrm{F}`$ of the sample was referenced to that of a gold film deposited onto the sample substrate and its position was accurate to better than 2 meV. Figure 1 shows the valence-band RPES spectra of CeNi<sub>2</sub> around the Ce $`4d4f`$ threshold. All the spectra are normalized according to the photon flux. The spectra are overall consistent with early data except for the difference of energy resolution. As the photon energy changes, the spectrum does not show a remarkable resonant enhancement of the Ce 4$`f`$ character in contrast to other Ce-non-transition-metal compounds. This fact was already noticed in the previous poorer-resolution $`4d4f`$ RPES study and was attributed to strong Ni 3$`d`$ emission. Photoemission cross section of Ni 3$`d`$ electrons is strongly dependent on the photon energy around the $`4d4f`$ threshold, thus it is hardly possible to extract reliable Ce 4$`f`$ removal spectrum using the conventional method. In the on-resonance spectrum at $`h\nu =122`$ eV, we can see that two features grow up at about 3 eV and near $`E_\mathrm{F}`$. The former could be assigned to an $`f^0`$ peak, and the latter to an $`f^1`$ one. In the inset of Fig. 1, the detailed spectra of the $`f^1`$ peak in the narrow region near $`E_\mathrm{F}`$ is shown. Similar to other Ce compounds, two features are enhanced on resonance. As usual, we can assign the peak at the Fermi level to the tail of Kondo resonance of the $`f_{5/2}`$ peak, while the one around 0.3 eV binding energy is its spin-orbit side band from the $`f_{7/2}`$ peak. The fact that $`f_{7/2}`$ side band is clearly observed around 0.3 eV binding energy is somewhat inconsistent with the GS analysis (see below). $`3d4f`$ RPES spectra of CeNi<sub>2</sub> are presented in Fig. 2. Contrary to the case of $`4d4f`$ RPES in Fig. 1, the Ce 4$`f`$ character is dramatically enhanced in the on-resonance spectrum ($`h\nu =881.4`$ eV) in comparison with the off-resonance spectrum ($`h\nu =868.1`$ eV). Especially, thanks to the extremely high resolution, we can see a very sharp peak at $`E_\mathrm{F}`$, whose position is limited by the experimental resolution. Thus this peak is undoubtedly assigned to the tail of the Kondo resonance as was done for a lower-$`T_\mathrm{K}`$ CeSi<sub>2</sub> system. We also observe a small hump around 1 eV binding energy and a broad feature around 3 eV binding energy. The broad feature around 3 eV binding energy probably originates from the $`f^0`$ character as generally accepted, but the origin of the 1 eV peak is a little controversial and this will be discussed later. Another interesting point is that we do not see any structure around 0.3 eV binding energy in the $`3d4f`$ on-resonance spectrum, which corresponds to the $`f_{7/2}`$ peak and is clearly noticeable in $`4d4f`$ RPES of Fig 1. We first suspected this may be due to the poorer energy resolution of $`3d4f`$ RPES than that of $`4d4f`$ RPES, but we discarded this possibility for the following reason. In order to see whether the lineshape is due to the experimental resolution, we simulated a 4$`f`$ spectrum of a low-$`T_\mathrm{K}`$ system, in which the $`f_{7/2}`$ peak is clearly resolved with 0.1 eV resolution (FWHM), with our experimental resolution determined by fitting gold $`E_\mathrm{F}`$ spectrum. We then found that the highest peak position is around the center of the $`f_{5/2}`$ and $`f_{7/2}`$ peaks and the lineshape is rather symmetrical. These facts contradict the on-resonance spectrum in that the highest peak position is very close to $`E_\mathrm{F}`$ and the lineshape is quite asymmetric as shown in Fig. 2, which implies that the intensity of the $`f_{7/2}`$ peak is smaller than in the $`4d4f`$ RPES spectrum or the peak is indistinguishable from the tail of the Kondo resonance. In fact, according to the GS and NCA schemes of the SIAM, the lineshape of the $`f_{7/2}`$ peak shows such a behavior as $`T_\mathrm{K}`$ increases. We conclude that the spin-orbit side band observed in previous high-resolution $`4d4f`$ RPES and He II photoemission spectra of high-$`T_\mathrm{K}`$ Ce compounds, which was not well reproduced by GS and NCA calculations with parameters suitable for bulk physical properties, originates from the surface where the Ce 4$`f`$ spectrum is more $`\gamma `$-like. This fact was also noticed by Kim et al. by analyzing $`4d4f`$ and $`3d4f`$ RPES spectra of CeIr<sub>2</sub>. In order to see whether the bulk-sensitive 4$`f`$ spectrum obtained from $`3d4f`$ RPES of CeNi<sub>2</sub> is quantitatively explained by the SIAM, we have performed GS calculations which includes spin-orbit splitting of the 4$`f`$ level. Since it is not simple to separate surface and bulk contribution from the experimental data, we will neglect the surface effect for the bulk-sensitive $`3d4f`$ RPES spectra here. Figure 3 shows the 4$`f`$ spectrum derived from $`3d4f`$ RPES spectra (empty circles) and the GS-calculation results (solid lines) employing the $`4d4f`$ off-resonance spectrum (lower graph) and a semielliptical shape (upper graph) for the hybridization matrix elements $`V(\epsilon )^2`$. For basis states we employed the lowest order $`f^0`$, $`f^1`$, and $`f^2`$, and the second-order $`f^0`$ states. The used parameter values are as follows: The 4$`f`$-electron energy $`\epsilon _f`$ is $`1.13`$, the spin-orbit splitting of the $`f`$ level $`\mathrm{\Delta }_{so}`$ is 0.28 eV, the hybridization strength averaged over the occupied valence band $`\mathrm{\Delta }_{av}`$ is 89 meV, and the on-site Coulomb interaction between 4$`f`$ electrons $`U`$ is 6 eV, which give the 4$`f`$-level occupancy $`n_f=0.78`$. The static, $`T=0`$ susceptibility $`\chi (0)`$ of CeNi<sub>2</sub> gives the estimates $`n_f=0.76`$ and 0.83 depending on the reference compound, which is comparable to the present spectroscopic estimate. To compare the theoretical spectrum with experimental data, we first broadened the calculated spectrum with a Lorentzian of the width given by $`0.01+0.20|EE_\mathrm{F}|`$ eV, and then the spectral weight above $`E_\mathrm{F}`$ was removed using the method of Liu et al., and finally the resulting curve was convoluted by a Gaussian for experimental resolution. The theoretical curves shown in Fig. 3 match the experimental data quite well, especially near the $`E_\mathrm{F}`$ region and the bottom of the valence band. This is taken as the evidence that the GS calculation with parameter values consistent with low energy properties can reproduce the experimental photoemission spectra well even for high-$`T_\mathrm{K}`$ material CeNi<sub>2</sub>. The only region showing discrepancy between theory and experiment is around the binding energy of 1 eV. Similar 1 eV structure has been observed before in other Ce compounds, and its origin was a little controversial. Lawrence et al. claimed that the contribution of Ce 5$`d`$ emission, whose position is around 1 eV, to the 4$`f`$ spectrum is considerable (about 30 %). Recent angle-resolved RPES studies of LaSb (Ref. ) and La metal (Ref. ) show enhancement of La 5$`d`$ emission at the $`4d4f`$ resonance, although its magnitude is much less than claimed in Ref. . The enhancement of La 5$`d`$ emission in La compounds was also observed in $`3d4f`$ RPES. On the other hand, such 1 eV structure could be reproduced by the GS calculations without considering 5$`d`$ emission as demonstrated in the cases of $`\alpha `$\- and $`\gamma `$-Ce metal using realistic hybridization shape $`V(\epsilon )^2`$ by Liu et al., and recently it was also proposed that similar 1 eV structure for CeIr<sub>2</sub> would be reproduced if realistic $`V(\epsilon )^2`$ is used in the GS calculations. Though the off-resonance spectrum may not be very realistic for $`V(\epsilon )^2`$, our GS calculation presented in Fig. 3 using this off-resonance curve reveals distinctive 1 eV structure, which is not observed in the calculation using structureless semielliptical band, where all other parameter values are kept the same (upper graph of Fig. 3). It strongly suggests that the hybridization between Ce 4$`f`$ and Ni 3$`d`$ electrons plays an important role for this 1 eV peak, although 5$`d`$ emission may also contribute. Thus it is quite essential to employ realistic $`V(\epsilon )^2`$ in GS calculations in order to fully interpret experimental spectra. In conclusion, we have performed high-resolution $`4d4f`$ and $`3d4f`$ RPES measurements of CeNi<sub>2</sub>. It was nearly impossible to extract Ce 4$`f`$ spectrum from the $`4d4f`$ RPES spectra because of overlapping Ni 3$`d`$ bands, but the $`3d4f`$ RPES spectra with extremely high resolution provide a clear bulk-sensitive 4$`f`$ spectrum. The experimental 4$`f`$ spectrum thus obtained is well reproduced using a GS calculation of the SIAM. This work is supported by the Korean Science and Engineering Foundation (KOSEF) through the Center for Strongly Correlated Materials Research (CSCMR) at Seoul National University (1999), and Grant-in-Aid for COE Research (10CE2004) of the Ministry of Education, Science, Sports and Culture, Japan. The authors thank K. Matsuda, M. Ueda, H. Harada, T. Matsushita, M. Kotsugi, and T. Nakatani for partial support for the experiments. The 3$`d`$ RPES was performed under the approval of the Japan Synchrotron Radiation Research Institute (1997B1047-NS-np). The 4$`d`$ RPES was performed under the approval of PF-PAC (92S002).
no-problem/9910/astro-ph9910273.html
ar5iv
text
# Pulse Width Evolution in Gamma-Ray Bursts: Evidence for Internal Shocks ## 1 Introduction The cosmological origin of GRBs, established as a result of optical follow-up observations of fading X-ray counterparts to GRBs (Costa, et al. (1997)), requires an extraordinarily large amount of energy to flood the entire universe with gamma rays ($`10^{52}10^{54}`$ erg). The source of this energy is assumed to be a cataclysmic event (neutron star-neutron star merger, neutron star-black hole merger, or the formation of a black hole). The lack of apparent photon-photon attenuation of high energy photons implies substantial bulk relativistic motion. The relativistic shell must have a high Lorentz factor, $`\mathrm{\Gamma }=(1\beta ^2)^{1/2}`$, on the order of $`10^2`$ to $`10^3`$. A growing consensus is that a central site releases energy in the form of a wind or multiple shells over a period of time commensurate with the observed duration of GRBs (Rees & Mészáros (1994)). Each subpeak in the GRB is the result of a separate explosive event at the central site. General kinematic considerations impose constraints on the temporal structure produced when the energy of a relativistic shell is converted to radiation. The purpose of this paper is to analyze the time histories of many GRBs to uncover the temporal evolution of the pulse width. In an earlier report (Ramirez-Ruiz & Fenimore (1999)) we found no significant change in the average peak width in long bursts. Here, we analyze both long and short bursts in greater detail, as well as small and large amplitude pulses in individual bursts, and compare the results to internal shock models. ## 2 Observations ### 2.1 Temporal Evolution of the Average Pulse Width Gamma-ray burst temporal profiles are enormously varied. Many bursts have a highly variable temporal profile with a time scale variability that is significantly shorter than the overall duration. Our aim is to characterize and measure the pulse shape as a function of arrival time. We will use the aligned peak method which measures the average pulse temporal structure by aligning the largest peak of each burst (Mitrofanov (1993)). The Burst and Transient Source Experiment (BATSE) catalog provides durations called $`T_{90}`$ (Meegan et al. (1996)), where $`T_{90}`$ is the time which contains 90% of the counts. For the purpose of our analysis, we used two sets of bursts from the BATSE 4B Catalog that were brighter than 5 photons s<sup>-1</sup> cm<sup>-2</sup> (256 ms peak flux measured in the 50-300 keV energy range) and with a 64ms temporal resolution. The first set used all 53 bursts that were longer than 20s, and the second set used all 23 bursts that were shorter than 20s. Each burst must have at least one peak, as determined by a peak-finding algorithm (similar to Li & Fenimore (1996)), in each third of its duration. The largest peak in each third was normalized to unity and shifted in time, bringing the largest peaks of all bursts into common alignment. This method was applied in each third of the duration of the bursts. Thus, we obtained one averaged pulse shape, $`I(t)`$, for each third of the bursts (as shown in Figure Pulse Width Evolution in Gamma-Ray Bursts: Evidence for Internal Shocksa for the long duration bursts and in Figure Pulse Width Evolution in Gamma-Ray Bursts: Evidence for Internal Shocksb for the short duration bursts).The average width is notably identical in each 1/3 of $`T_{90}`$ in both long and short bursts. We estimate the differential spread, $`S`$, to be $``$ 1% for the long duration bursts and $``$ 5% for the short duration bursts. The values $`I(t)`$ along the aligned timescale represent the average level of the emissivity of all contributing sources aligned at their primary peaks and thus the general character of the emission evolution of GRBs (see Mitrofanov (1993) for details). To resolve the true differences between the timescales in GRB pulses, one has to find the appropriate temporal correspondences in order to align the events, despite their probably different time histories. However, such a correspondence seems to exist because each burst has a specific moment, namely, the highest peak of the time history, which may be regarded as a physically unique reference moment. Furthermore, the highest peak is also where the highest signal-to-noise ratio is observed. The selection of a high brightness sample (5 photons s<sup>-1</sup> cm<sup>-2</sup> in this case) is appropriate in order to avoid systematic effects that might change the observed time histories with different statistics. The time histories of dim events would be more randomized by fluctuations than the time histories of bright bursts. Using other GRB samples with a high signal-to-noise ratio ($``$ 3 photons s<sup>-1</sup> cm<sup>-2</sup>) gives similar results. Figure Pulse Width Evolution in Gamma-Ray Bursts: Evidence for Internal Shocks shows that the pulse width does not increase with time. It could be argued that the peak alignment method is uncertain because it only reflects the temporal evolution of the largest pulse width in the time histories. Thus, in the following section, we expand our analysis to individual pulses in GRB time histories. ### 2.2 Average Temporal Evolution of the Pulse Width The substantial overlap of the temporal structures in the burst have made the study of individual pulses somewhat difficult. An excellent analysis has been provided by Norris, et al. (1996), who examined the temporal structure of bright GRBs by fitting time histories with pulses. The time histories were fit until all structure was accounted for within the statistics, thus, they effectively deconvolved the time history into all of the constituent pulses. From the set of pulses that they analyzed, we used the 28 bursts that have five or more fitted pulses (in the 55 keV - 115 keV BATSE channel) within their $`T_{90}`$ duration. There was a total of 387 pulses in those 28 bursts. We obtain the Full Width Half Maximum (FWHM) from the pulse shape parameters found by Norris, et al. (1996). To find the average pulse width as a function of time, we first normalized the FWHM of each peak, within a burst, to the average FWHM of that burst. The purpose of such normalization is so that no one burst is allowed to dominate the pulse width average. Second, we normalized all the pulse amplitudes to the average amplitude. This is required in order to differentiate intrinsically large and small pulses in all bursts despite the total net counts. Figure Pulse Width Evolution in Gamma-Ray Bursts: Evidence for Internal Shocks shows the average pulse width, $`\frac{W}{<W>}`$, as a function of temporal position in the time history. The filled symbols give the average normalized width of the pulses that have a normalized amplitude, $`\frac{A}{<A>}`$, greater than 1.0, while the open symbols show $`\frac{W}{<W>}`$ for the pulses with a normalized amplitude less than 1.0. Each group has about 180 pulses. The resulting average (in both samples) appears to be fairly constant in time. One cannot determine strict error bars because the uncertainties are not due to counting statistics (which, after all, are very good, since we are adding together $``$ 180 pulses ). Rather, the fluctuations are due to the way in which the various peaks add together. We used a linear fit to search for a trend. The resulting average temporal evolution of the pulse width is remarkably constant for both samples: $$\frac{W}{<W>}=0.820.01\frac{T}{T_{90}}\mathrm{if}\frac{A}{<A>}>1.0$$ $$=1.280.02\frac{T}{T_{90}}\mathrm{if}\frac{A}{<A>}<1.0.$$ (1) These curves are shown in Figure Pulse Width Evolution in Gamma-Ray Bursts: Evidence for Internal Shocks as dotted lines. A visual inspection of the pulses fitted to gamma ray bursts by Norris, et al. (1996) shows that the low amplitude pulses (in a single burst) tend to be wider although their shape may not be well determined. This is due to the fact that the actual temporal profile may contain “hidden” pulses which are not easy to deconvolve but contribute to the total emission. Furthermore, there are pulses that may overlap their neighbors and the pulse model is not sufficiently detailed to represent all of the individual emission events. Thus, larger pulses are much more succesfully deconvolved than the smaller amplitude ones. It is more difficult to conclude that the average temporal evolution of the pulse width in small amplitude pulses is as constant over $`T_{90}`$ as that of the large amplitude pulses for two reasons. First, the standard deviation of the distributions of pulse width values for small peaks is $``$ 1.7 - 2.2 times greater than that found for the analysis of large amplitude pulses. Second, the linear correlation coefficient of the linear fit to the large amplitude pulses is $``$ 1.12 times greater than the one found in the linear fit to the small amplitude pulses. Nevertheless, the small pulses show the same consistency in pulse widths as the large pulses. This analysis of individual time histories agrees with what was found for the evolution of the average pulse structure for large peaks. Individual bursts show that larger peaks have about the same width at the beginning of the burst as near the end of the burst with a rather small variation. This is also true for smaller pulses. However, as we show in the next section, small and large pulses do not have the same pulse width within a single profile. ### 2.3 Pulse Width as a Function of Amplitude GRBs are very diverse, with time histories ranging from as short as 50 ms to longer than $`10^3`$ s. The long bursts often have very complex temporal structure with many subpeaks. The process that produces the peaks has a random nature, and the peaks that are produced vary substantially in amplitude. These pulses tend to be wider as their amplitudes decrease, within a single profile. To investigate the amplitude dependency of the pulse width, we used the 28 bursts described in section 2.2. Each pulse (in each profile) was normalized to the average amplitude found in that burst. We selected four regions of normalized amplitudes: 0.1 - 0.3, 0.3 - 0.9, 0.9 - 1.5, 1.5 - 2.0. Each group has about 95 pulses. Figure Pulse Width Evolution in Gamma-Ray Bursts: Evidence for Internal Shocks shows the aligned average pulse shape for the four ranges of normalized amplitudes. The pulse shape was calculated based on the general pulse shape proposed by Norris, et al. (1996): $$\mathrm{I}(\mathrm{t})=\mathrm{A}\mathrm{exp}[(\frac{|tt_{peak}|}{\sigma _r})^\nu ]\mathrm{if}\mathrm{t}<\mathrm{t}_{\mathrm{peak}},$$ $$=\mathrm{A}\mathrm{exp}[(\frac{|tt_{peak}|}{\sigma _d})^\nu ]\mathrm{if}\mathrm{t}>\mathrm{t}_{\mathrm{peak}},$$ (2) where $`t_{peak}`$ is the time of the pulse’s maximum intensity ($`A`$); $`\sigma _r`$ and $`\sigma _d`$ are the rise and decay time constants, respectively; and $`\nu `$ is a measure of pulse sharpness, which Norris, et al. (1996) refer to as “peakedness”. Each pulse, in each range of amplitude, was normalized to unity and then shifted in time, bringing the center of all pulses into common alignment. Note that the smaller peaks are wider than the larger peaks (see Fig. Pulse Width Evolution in Gamma-Ray Bursts: Evidence for Internal Shocks). We characterized the amplitude dependency of the pulse width in GRB time histories using the FWHM of the aligned average pulse shapes in Figure Pulse Width Evolution in Gamma-Ray Bursts: Evidence for Internal Shocks. The open diamonds in Figure Pulse Width Evolution in Gamma-Ray Bursts: Evidence for Internal Shocks are the widths, $`W`$, of each aligned average pulse shape measured at the half maximum. We have fitted a power law and an exponential function to the points. The best-fit power law is $`\frac{A}{<A>}[W_{FWHM}]^{2.8}`$ (The exponential fits had $`\chi ^2`$ values that were 1.4 times larger). The power-law function is shown in Figure Pulse Width Evolution in Gamma-Ray Bursts: Evidence for Internal Shocks as a dotted line. This a robust result. Using the width at other values of the average pulse shape gives similar results. One thing that is not clear in our formulation is at what normalized amplitude to place the points. We have placed them at the mid-point of the selected amplitude ranges. If we were to use the average point of all the normalized amplitudes in each selected range, the result is still a power law: $`\frac{A}{<A>}[W_{FWHM}]^{3.0}`$. In summary, we find that the aligned average pulse shape can measure the amplitude dependency of the pulse width. The dependency is a power law in pulse width with an index that is between -2.8 and -3.0, depending on how it is measured. Some limitations are necessarily inherent in our approach and selection of data. The conclusions we reach are based on measurements of a subset of the bursts detected by BATSE. We analyze the pulses deconvolved by Norris, et al. (1996) from a subset of relatively bright bursts with 64 ms temporal resolution. Analysis of pulses in shorter bursts using different data with much higher resolution will be the subject of another paper. In a multipeaked event, all peaks would be seen if the burst is intense, whereas some peaks might be missed in a weak version of the event owing to the decrease of the signal-to-noise ratio. Moreover, smaller peaks of dimmer events might be missed owing to the absence of triggering of the instrument at those peaks. These effects might lead to a systematic decrease of the average number of peaks and/or to a decrease of estimated burst duration with decreasing burst intensity. Although our approach utilizes a sufficient number of pulses to represent adequately the temporal profile of a certain burst, our inferences concerning pulse shape are drawn from those fitted pulses which do not overlap, as estimated by the relative amplitudes of two pulses and the intervening minimum. Several mutually reinforcing trends have been found by Norris, et al. (1996) in the analysis of the same sample. Thus, supporting the validity of our results. From a phenomenological point of view, it has not been clear what the fundamental “event” in gamma-ray bursts is. The premise of our work has been that pulses are the basic unit in bursts. The relationship between pulse width and intensity supports this hypothesis. However, there may be other components in bursts, undefined by our approach, including long smooth structures at lower energies or very short spiky features at higher energies, which might represent distinct physical processes from the ones that are responsible for pulse emission. Evidence for a separate emission component, similar to those of the afterglows at lower energies, has been clearly found in some GRB light curves (Giblin, et al. (1999)). These observations may indicate that some sources display a continued activity (at a variable level). ## 3 Pulses From Internal Shocks Internal shocks occur when the relativistic ejecta from the central site are not moving uniformly. If some inner shell moves faster than an outer one ($`\mathrm{\Gamma }_i>\mathrm{\Gamma }_j`$) it will overtake the slower at a radius $`R_c`$. The two shells will merge to form a single one with a Lorentz factor $`\mathrm{\Gamma }_{ij}`$. The emitted radiation from each collision will be observed as a single pulse in the time history (Piran & Sari (1997), Sari & Piran (1995)). Several groups have modeled this process by randomly selecting the initial conditions at the central site (Kobayashi, Piran, & Sari (1997), Daigne & Mochkovitch (1998), Fenimore & Ramirez-Ruiz (1999)). We will compare two aspects of these internal shock models to the pulse evolution studied in this paper: the trend for smaller pulses to be wider and the narrowness of the pulse width distribution. ### 3.1 Pulse Width vs. Intensity We have simulated internal shocks as described in Fenimore & Ramirez-Ruiz (1999). In the notation of that paper, we have set the maximum initial energy per shell, $`E_{\mathrm{max}}`$, to be $`10^{53.5}`$ erg, the maximum thickness to be 0.2 lt-s, and the ambient density to be 1.0 cm<sup>-3</sup>. We generated about 1.4 shells per s. Nine values of the maximum Lorentz factor, $`\mathrm{\Gamma }_{\mathrm{max}}`$, were simulated from $`10^{2.5}`$ to $`10^{4.5}`$. The minimum Lorentz factor, $`\mathrm{\Gamma }_{\mathrm{min}}`$, was 100. We took the resulting pulses and determined the peak intensity (assuming 0.064 s samples) and the FWHM. Figure Pulse Width Evolution in Gamma-Ray Bursts: Evidence for Internal Shocksa shows the distribution of pulse widths and intensities if $`\mathrm{\Gamma }_{\mathrm{max}}`$ if $`10^{2.5}`$ and Pulse Width Evolution in Gamma-Ray Bursts: Evidence for Internal Shocksb is for $`\mathrm{\Gamma }_{\mathrm{max}}`$ is $`10^{4.5}`$. The solid line is a power law with the index determined from the observations (i.e., from Fig. Pulse Width Evolution in Gamma-Ray Bursts: Evidence for Internal Shocks). Internal shocks show a trend that smaller pulses are wider. Indeed, one can estimate $`\mathrm{\Gamma }_{\mathrm{max}}`$ by measuring the index of the width vs. intensity distribution. By running models with a variety of values of $`\mathrm{\Gamma }_{\mathrm{max}}`$, we have found that the index is $`5.25+0.975\mathrm{log}_{10}(\mathrm{\Gamma }_{\mathrm{max}})`$. Our observed index of $`2.8`$ (from Fig. Pulse Width Evolution in Gamma-Ray Bursts: Evidence for Internal Shocks) indicates that $`\mathrm{\Gamma }_{\mathrm{max}}`$ is $`\genfrac{}{}{0pt}{}{_<}{^{}}10^3`$. ### 3.2 Pulse Width as an indicator of $`R_c`$ A shell that coasts without emitting photons and then emits for a short period of time produces a pulse with a rise time related to the time the shell emits and a decay dominated by curvature effects (Fenimore, Madras, & Nayakshin (1996)). In the internal shock model, the shell emits for $`\mathrm{\Delta }t_{\mathrm{cross}}`$, where $`\mathrm{\Delta }t_{\mathrm{cross}}`$ is the time it takes the reverse shock to cross the shell that is catching up. Following Kobayashi, Piran, & Sari (1997), $`\mathrm{\Delta }t_{\mathrm{cross}}=l_j/(\beta _j\beta _{rs})`$, where $`l_j`$ is the width of the rapid shell ($`\beta _j`$). To calculate the observed pulse shape, one needs to combine Doppler beaming with the volume of material that can contribute at time $`T`$. Following Fenimore & Ramirez-Ruiz (1999) and Summer & Fenimore (1998), the resulting pulse shape is $`V(T)`$ $`=`$ $`0\mathrm{if}T<0`$ $`=`$ $`\psi {\displaystyle \frac{(R_c+2\mathrm{\Gamma }_{ij}^2cT)^{\alpha +3}R_c^{\alpha +3}}{(R_c+2\mathrm{\Gamma }_{ij}^2cT)^{\alpha +1}}}\mathrm{if}0<2\mathrm{\Gamma }_{ij}^2T<\mathrm{\Delta }t_{\mathrm{cross}}`$ $`=`$ $`\psi {\displaystyle \frac{(R_c+\mathrm{\Delta }t_{\mathrm{cross}})^{\alpha +3}R_c^{\alpha +3}}{(R_c+2\mathrm{\Gamma }_{ij}^2cT)^{\alpha +1}}}\mathrm{if}2\mathrm{\Gamma }_{ij}^2T>\mathrm{\Delta }t_{\mathrm{cross}}`$ where $`\psi `$ is a constant, $`T`$ is measured from the start of the pulse and $`\alpha `$ ($``$ 1.5) is the power-law index of the rest-frame photon number spectrum. The amplitude, $`\psi `$, depends on the amount of energy converted to gamma rays in a given collision. Figure Pulse Width Evolution in Gamma-Ray Bursts: Evidence for Internal Shocks shows the FWHM obtained from equation 3.2 (assuming that $`l_j`$ = 1 light second and $`\frac{\mathrm{\Gamma }_i}{\mathrm{\Gamma }_j}`$=10) as a function of the radius of emission, $`R_c`$, and the Lorentz factor of the resulting shell, $`\mathrm{\Gamma }_{ij}`$. Note that a wide range of widths map into a narrow range of radii. In the internal shock scenario, the observed temporal structure reflects directly the activity of the inner engine. In Figure Pulse Width Evolution in Gamma-Ray Bursts: Evidence for Internal Shocks we show the distribution of radii of emission using the FWHM calculated by equation 2.3 for the parameters provided by Norris, et al. (1996). The FWHM is used with Figure Pulse Width Evolution in Gamma-Ray Bursts: Evidence for Internal Shocks to find a radius for each of the 387 pulses. The radius of emission is normalized by $`\mathrm{\Gamma }_{ij}^2`$, and, since the curves in Figure Pulse Width Evolution in Gamma-Ray Bursts: Evidence for Internal Shocks are self-similar, all values of $`\mathrm{\Gamma }_{ij}`$ give the same distribution when divided by $`\mathrm{\Gamma }_{ij}^2`$. We define the radius spread, $`\frac{\mathrm{\Delta }R_c}{R_c}`$, to be the ratio between the center and the standard deviation of the distribution of the radius of emission. This distribution shows that, if the spread of values of the Lorentz factors of the shells ($`\mathrm{\Gamma }_{ij}`$) is small, the dynamical range of the radii of emission is also small: $`\mathrm{\Delta }R_c0.07R_c`$. The multiple-peaked time histories in the BATSE catalog reveal that the dynamical ranges in observed timescales within cosmic gamma-ray bursts (GRBs) are very large (see Norris, et al. (1996)). For example, the total event durations range from 10 ms to 1000 s, with a dynamical range of almost $`10^5`$. Thus, the small variation in the values of the pulse width and radius spread parameter is remarkable. In the internal shock scenario, the observed temporal structure reflects directly the activity of the inner engine. This engine must operate for a long duration, up to hundreds of seconds in some cases, and it must produce a highly variable wind to form shells that radiate. If the spread of values of the Lorentz factors ($`\mathrm{\Gamma }_{ij}`$) is small, the range of radius is $`\mathrm{\Delta }R_c0.07R_c`$. The arrival time of the pulses at a detector such as BATSE has a one-to-one relationship with the time the shell was created at the central site. The time of arrival, $`T_{\mathrm{toa}}`$, is $`t_{ij}R_c/c`$ where $`t_{ij}`$ is the time of the collision. But $`t_{ij}R_c/c`$ is roughly $`t_{oi}`$, the time the shell was produced at the central site and is not dependent on other parameters such as $`R_c`$ or the time of the collision (see, for example, Eq. 5 in Fenimore & Ramirez-Ruiz (1999)). Thus, internal shocks also explain why the pulse width tends to be constant throughout the burst: the time of arrival in the time history is effectively just the time of generation of the pulses at the central site and is not related to the conditions or parameters of the collision. ## 4 Summary We calculated the temporal evolution of the pulse width in gamma ray bursts. We found that the average aligned pulse width is a universal function that can measure the timescale of the largest pulses in the burst. For long and short bursts we found that the average aligned pulse width undergoes no significant change during the gamma-ray phase (see Fig. Pulse Width Evolution in Gamma-Ray Bursts: Evidence for Internal Shocks). The analysis of individual time histories agrees with what was found in the average aligned method. Individual bursts typically have no time evolution of the width of the largest pulses. This is also true for small pulses (see Fig. Pulse Width Evolution in Gamma-Ray Bursts: Evidence for Internal Shocks). However, in a time history, the smallest amplitude peaks tend to be wider (see Fig. Pulse Width Evolution in Gamma-Ray Bursts: Evidence for Internal Shocks). The dependency, as shown in Figure Pulse Width Evolution in Gamma-Ray Bursts: Evidence for Internal Shocks, is a power law in a amplitude with an index that is between -2.8 and -3.0, depending on how it is measured. We have found that internal shocks can explain most of these characteristics. The time of arrival of a pulse is not related to the collision parameters so internal shocks can produce pulses that have the same characteristics at the beginning as at the end. Internal shocks produce pulses that are wider for smaller intensities. If the maximum $`\mathrm{\Gamma }`$ is $`\genfrac{}{}{0pt}{}{_<}{^{}}10^3`$ the observed distribution (Fig. Pulse Width Evolution in Gamma-Ray Bursts: Evidence for Internal Shocks) is similar to the simulated distribution (Fig. Pulse Width Evolution in Gamma-Ray Bursts: Evidence for Internal Shocks). For such low values of $`\mathrm{\Gamma }`$, deceleration is usually not important and the simulated time histories do not have pulses that get progressively wider (see Fenimore & Ramirez-Ruiz (1999)). This is consistent with the analysis of this paper which did not find progressively wider pulses, although such pulses might have been missed because it is difficult to deconvolve many overlapping small pulses. Without substantial deceleration, the efficiency for converting bulk motion into radiation is $`\genfrac{}{}{0pt}{}{_<}{^{}}`$ 25% (Fenimore & Ramirez-Ruiz (1999)). In the internal shock scenario, the temporal structure directly reflects the temporal behavior of the inner engine that drives the GRB. The pulse width gives information about the radius of colliding shells. Figure Pulse Width Evolution in Gamma-Ray Bursts: Evidence for Internal Shocks shows that a wide range of widths maps into a narrow range of radii (see Fig. Pulse Width Evolution in Gamma-Ray Bursts: Evidence for Internal Shocks). We thank Jay Norris for providing the pulse-fit parameters. Figure 1a Figure 1b Figure 2 Figure 3 Figure 4 Figure 5a Figure 5b Figure 6 Figure 7
no-problem/9910/cond-mat9910491.html
ar5iv
text
# Magneto-roton excitation of fractional quantum Hall effect: Comparison between theory and experiment ## Abstract A major obstacle toward a quantitative verification, by comparison to experiment, of the theory of the excitations of the fractional quantum Hall effect has been the lack of a proper understanding of disorder. We circumvent this problem by studying the neutral magneto-roton excitations, whose energy is expected to be largely insensitive to disorder. The calculated energies of the roton at 1/3, 2/5 and 3/7 fillings of the lowest Landau level are in good agreement with those measured experimentally. Quantitative tests of the theory of the fractional quantum Hall effect (FQHE) have focused in the past primarily on the gap to charged excitation, determined experimentally from the temperature dependence of the longitudinal resistance. A factor of two discrepancy between theory and experiment has persisted over the years, believed to be caused by disorder for which a quantitatively reliable theoretical treatment is not available at the moment. In recent years there has been tremendous experimental progress in the measurement of the energy of the neutral magneto-roton excitation , both by inelastic Raman scattering and by ballistic phonon absorption , and its energy has been determined at Landau level fillings of 1/3, 2/5, and 3/7. While the neutral magneto-roton is of great interest in its own right, being the lowest energy excitation of the FQHE state, the chief motivation of this work is the observation that the disorder is not likely to affect its energy significantly, in contrast to the energy of the charged excitation, because the roton has a much weaker dipolar coupling to disorder due to its overall charge neutrality, and the coupling is further diminished because the disorder in modulation doped samples is typically smooth on the scale of the size (on the order a magnetic length) of the spatially localized roton. There is also compelling experimental evidence for the insensitivity of the roton energy to disorder: the same roton energy was found for samples for which the gaps in transport experiments differed by as much as a factor of two . The roton therefore provides a wonderful opportunity for testing the quantitative validity of our understanding of the excitations of the fractional quantum Hall state. With this goal in mind, we have undertaken a comprehensive and realistic calculation of the roton energy at several filling factors in the lowest Landau level (LL). The neutral excitation of the FQHE will be treated in the framework of the composite fermion (CF) theory , the composite fermion being the bound state of an electron and an even number of flux quanta (a flux quantum is defined as $`\varphi _0=hc/e`$). According to this theory, the interacting electrons at Landau level filling factor $`\nu =n/(2pn\pm 1)`$, $`n`$ and $`p`$ being integers, transform into weakly interacting composite fermions at an effective filling $`\nu ^{}=n`$; the ground state corresponds to $`n`$ filled CF-LLs and the neutral excitation to a particle-hole pair of composite fermions, called the CF exciton. Microscopic wave functions for the CF ground state and the CF exciton are readily constructed by analogy to the known wave functions of the electron ground state at filling factor $`n`$, $`\mathrm{\Phi }_n^{gs}`$, and its exciton, $`\mathrm{\Phi }_n^{ex}`$: $$\mathrm{\Phi }_{\frac{n}{2n+1}}^{gs}=𝒫_{LLL}\underset{j<k}{}(z_jz_k)^{2p}\mathrm{\Phi }_n^{gs}$$ (1) $$\mathrm{\Phi }_{\frac{n}{2n+1}}^{ex}=𝒫_{LLL}\underset{j<k}{}(z_jz_k)^{2p}\mathrm{\Phi }_n^{ex},$$ (2) where $`z_j=x_j+iy_j`$ is the position of the $`j`$th particle, and $`𝒫_{LLL}`$ denotes projection of the wave function into the lowest Landau level. It was shown earlier that $`\mathrm{\Phi }_\nu `$ can be obtained from $`\mathrm{\Phi }_n`$ by substituting the single electron wave functions $`Y_\alpha (𝐫_j)`$ by the ‘single CF wave functions’ $`Y_\alpha ^{CF}(𝐫_j)=𝒫_{LLL}_k^{^{}}(z_jz_k)^pY_\alpha (𝐫_j)`$, the explicit form of which has been given in the literature . (The prime denotes the condition $`kj`$.) The composite fermion interpretation of $`\mathrm{\Phi }_\nu `$ follows since multiplication by the Jastrow factor $`_{j<k}(z_jz_k)^{2p}`$ is tantamount to attaching $`2p`$ flux quanta to each electron, converting it into a composite fermion. These wave functions have been found to be quite accurate in tests against exact diagonalization results available for small systems . The Hamiltonian for the many electron system is given by $$H=\frac{1}{2}\underset{jk}{}V(r_{jk})+V_{eb}$$ (3) where $`V_{eb}`$ is the electron-background interaction, with the background assumed to be comprised of a uniform positive charge, and $`V(r)`$ is the effective two-dimensional electron-electron interaction. (The kinetic energy is quenched in the lowest Landau level.) For a strictly two-dimensional system, $`V(r_{jk})=\frac{e^2}{ϵ|r_jr_k|},`$ where $`ϵ`$ is the dielectric constant of the background material. As we will see, an important quantitative correction comes from the finite transverse extent of the electron wave function, which alters the form of the effective two-dimensional interaction at short distances. The effective interaction can be calculated straightforwardly once the transverse wave function is known, which in turn will be determined by self-consistently solving the Schrödinger and Poisson equations, taking into account the interaction effects through the local density approximation (LDA) including the exchange correlation potential . Two geometries, single heterojunction and square quantum well (SQW), are considered due to their experimental relevance. To simplify the calculation, we assume that the electron wave function is confined entirely on the GaAs side of the heterojunction, which is a reasonably good approximation for deep confinement. It is stressed that neither the microscopic wave function nor the effective interaction contains any adjustable parameters; the former depends only on the filling factor, while the latter is determined from a first principles, self-consistent LDA calculation, with the two-dimensional density, the sample type (heterojunction or square quantum well) and the known sample parameters as the only input. The energy of the exciton at $`\nu =\frac{n}{2n+1}`$, $$\mathrm{\Delta }^{ex}=\frac{<\mathrm{\Phi }_\nu ^{ex}|H|\mathrm{\Phi }_\nu ^{ex}>}{<\mathrm{\Phi }_\nu ^{ex}|\mathrm{\Phi }_\nu ^{ex}>}\frac{<\mathrm{\Phi }_\nu ^{gs}|H|\mathrm{\Phi }_\nu ^{gs}>}{<\mathrm{\Phi }_\nu ^{gs}|\mathrm{\Phi }_\nu ^{gs}>}$$ (4) is computed by Monte Carlo methods in the standard spherical geometry . Since moving a single particle at each step of the Monte Carlo changes the single CF wave function $`Y^{CF}`$ for all particles, the full wave function must be computed at each step. The exciton wave function is a linear superposition of $`N/n`$ Slater determinants, but each of these differs from the ground state only in one row, and the clever techniques for upgrading Slater determinants significantly reduce the computing time, enabling us to study reasonably large systems (up to 63 composite fermions were used in the present study). The ground and excited state energies are evaluated sufficiently accurately to get a reasonable estimate for the gap. Due to the lack of edges in the spherical geometry being studied, we expect that the gap will have a linear dependence on $`N^1`$ to leading order, which is also borne out by our results. A linear extrapolation to the thermodynamic limit $`N^10`$ is taken after correcting the energies for the finite size deviation of the density from its thermodynamic value in the standard manner . All results below are thermodynamic extrapolations, unless mentioned otherwise. The energies are quoted in units of $`e^2/ϵl_0`$, where $`l_0=\sqrt{\mathrm{}e/eB}`$ is the magnetic length. We have determined the dispersion of the CF exciton for 1/3, 2/5, and 3/7, corresponding to one, two, and three filled CF-LLs, respectively. The typical dispersion contains several minima, as shown in Fig. (1). We will term the lowest energy minimum the “fundamental” roton, or simply the roton, the others being secondary rotons. Since only discrete values of $`k`$ are available in our finite systems, the energy of the roton is obtained by fitting the points near the minimum to a parabolic dispersion $$\mathrm{\Delta }_k^{ex}=\mathrm{\Delta }+\frac{\mathrm{}^2(kk_0)^2}{2m_R^{}}$$ (5) for each $`N`$, and then extrapolating $`\mathrm{\Delta }`$ to the thermodynamic limit. The energies in the $`kl_00`$ limit and at the roton minimum are given in Table I for a strictly two dimensional system, along with $`m_R^{}`$. Figs. (2) and (3) plot the energy of the CF roton and the long wavelength CF exciton for a heterojunction and a square quantum well (of width 25nm) as a function of density, calculated with the realistic LDA interaction, along with the experimental energies obtained in phonon absorption (at 1/3, 2/5, and 3/7) as well as in inelastic light scattering experiments (at 1/3). A more detailed comparison is given in Table II. In the small wave vector limit, the calculated energy at $`1/3`$ is off by $``$ 30%. It has been suggested that here the true lowest energy excitation may contain two CF-excitons, and there has been debate as to which excitation is being probed by the Raman scattering in this case . For the roton, the theoretical energies, obtained with no adjustable parameters, are in excellent agreement with the observed ones . One may worry that the situation will be spoiled by Landau level mixing. This turns out not to be the case. Following Ref. , we have estimated the importance of LL mixing for the roton energy by considering a variational wave function which is a linear combination of the projected and unprojected wave functions, and found that the corrections are on the order of 5% for typical densities, consistent with similar conclusion for the transport gap . While the ballistic phonon absorption experiments directly measure the minimum energy (i.e., the roton) , the Raman experiments ideally probe the $`kl_00`$ limit of the CF exciton dispersion, the wavelength of the light being much larger than $`l_0`$. However, a breakdown of momentum conservation due to the presence of disorder can activate rotons as well , as a result of a singularity in the density of states. This has been crucial in explaining multiple peaks in the Raman spectra for the inter-LL excitations. At $`\nu =1/3`$, a low energy Raman peak has been interpreted as the roton . Recently, Kang et al. have also observed modes at 2/5 and 3/7, at energies of 0.031 and 0.008 $`e^2/ϵl_0`$, respectively, which they interpret as the long wavelength neutral mode. Our calculated energies at 2/5 and 3/7 for a quantum well of width 30nm and density $`\rho =5.4\times 10^{10}`$ cm<sup>-2</sup> are 0.031(3) and 0.021(3) $`e^2/ϵl_0`$, respectively, for the roton, and 0.070(1) and 0.056(3) $`e^2/ϵl_0`$ for the $`kl_0=0`$ limit of CF exciton. At 2/5, the energy of the observed excitation is consistent not only with the calculated roton energy but also with those measured in ballistic phonon absorption experiments, and substantially smaller than the calculated $`kl_00`$ limit, which might suggest an identification with the roton. \[We note here that the observation of the 1/3 roton implies that the violation of the momentum conservation is sufficiently wide-spread as to render the fundamental rotons at 2/5 and 3/7 observable as well, which occur at roughly the same wave vectors ($`kl_01.61.7`$) as the 1/3 roton ($`kl_01.4`$).\] The energy of the 3/7 mode of Ref. is anomalously low, however. Further work will be required to ascertain the origin of these new Raman modes; an experimental observation of multiple roton peaks will be especially helpful in clarifying this issue. In conclusion, the insensitivity of the roton energy to disorder has afforded an opportunity for a direct quantitative confirmation of our theoretical understanding of the excitations of the fractional quantum Hall effect. We are grateful to Professor Aron Pinczuk for communicating his results to us prior to publication and for the continuous exchange of information, and to Xiaomin Zu for numerous helpful discussions. This work was supported in part by the National Science Foundation under grant no. DMR-9615005, and by a grant of computing time by the National Center for Supercomputing Applications at the University of Illinois (Origin 2000).
no-problem/9910/astro-ph9910078.html
ar5iv
text
# Section 1 Introduction ## Section 1 Introduction It is generally believed that magnetic fields play a central role in solar eruptive phenomena such as flares and coronal mass ejections. The energy released through solar eruptive processes is considered to be stored in nonpotential magnetic fields. The magnetic energy is supplied to the corona either by plasma flows moving around magnetic fields in the inertia-dominant photosphere or by magnetic flux emerging from below the photosphere. Since measurements of magnetic fields at the coronal altitude are not available, magnetograms taken at the photospheric level have been widely used for studies of magnetic nonpotentiality in flare-producing active regions and are also used through extrapolation to compute coronal magnetic fields. Several attempts have been made to identify the relationships between time variation of nonpotentiality parameters and development of solar flares (Hagyard et al., 1984; Hagyard et al., 1990; Wang et al., 1996; Wang 1997). Moon et al. (1999c, Paper I) reviewed previous studies on magnetic nonpotentiality indicators and discussed the problems involved in them. Specifically, they studied the evolution of nonpotentiality parameters in the course of an X-class flare of AR 6919 using MSO (Mees Solar Observatory) magnetograms. They showed that the magnetic shear obtained from the vector magnetograms increased just before the flare and then decreased after it, at least near the $`\delta `$ spot region. Moon et al. (1999a) proposed a measure of magnetic field discontinuity, MAD, defined as Maximum Angular Difference between two adjacent field vectors, as a flare activity indicator. They applied this concept to three magnetograms of AR 6919 and found that the high MAD regions well match the soft X-ray bright points observed by Yohkoh. It was also found that the MAD values increased just before an X-class flare and then decreased after it. This paper constitutes one of the series of studies on evolution of magnetic nonpotentiality associated with major X-ray flares, which are performed using MSO vector magnetograms. Metcalf et al. (1995) studied MSO magnetograms of AR 7216, which are obtained from the observation of Na I 5896 spectral line, employing a weak field derivative method (Jefferies and Mickey, 1991) and concluded that the magnetic field of AR 7216 is far from force-free at the photospheric level. This method could underestimate transverse field strengths for strong field regions due to the saturation effect and calibration problems (Hagyard and Kineke, 1995; Moon, Park, and Yun, 1999b). Metcalf et al. (1991) used polarization data of Fe I 6302 line to compare two calibration methods: the weak field derivative method (Jefferies and Mickey, 1991) and the nonlinear least square method (Skumanich and Lites, 1987). They found that there are noticeable differences in magnetic field strength larger than $`B1200\mathrm{G}`$ between the two methods. On the other hand, Pevtsov et al. (1997) analyzed 655 photospheric magnetograms of 140 active regions to examine the spatial variation of force-free coefficients. In their results, some of the active regions show a good correlation between $`B_z`$ and $`J_z`$, but others do not. It is very natural that the force-free coefficient varies depending on the active region in question even when the coefficient is more or less constant over the active region. Now we raise the question of whether a certain relation can be drawn between the force-free coefficient and the evolutionary stage of an active region. Among the selections by Pevtsov et al. (1997), AR 5747 is exemplary of showing a good correlation between $`B_z`$ and $`J_z`$. Thus, we have taken AR 5747 as the object of our study. The purpose of this paper is to examine the magnetic nonpotentiality of AR 5747 associated with solar flares and investigate the evolution of the linear force-free coefficient. For this study, we have used the magnetograms spanning three days obtained from full Stokes polarization profiles of MSO. In Section 2, an description is given of the observation and analysis of the vector magnetograms. Computation of nonpotentiality parameters and their evolutions in relation to flaring activities is presented in Section 3. In Section 4, we discuss the evolution of the active region field as a linear force-free field. Finally, a summary and conclusion are given in Section 5. ## Section 2 Observation and Analysis For the present work, we have selected a set of MSO magnetograms of AR 5747 taken on Oct. 20–22, 1989. The magnetogram data were obtained by the Haleakala Stokes polarimeter (Mickey, 1985) which provides simultaneous Stokes I,Q,U,V profiles of the Fe I 6301.5, 6302.5 Å doublet. The observations were made by a rectangular raster scan with a pixel spacing of 5.6<sup>′′</sup> (low resolution scan) and a dispersion of 25 $`\mathrm{m}\mathrm{\AA }/\mathrm{pixel}`$. Most of the analyzing procedure is well described in Canfield et al. (1993). To derive the magnetic field vectors from Stokes profiles, we have used a nonlinear least square fitting method (Skumanich and Lites, 1987) for fields stronger than 100 G and an integral method (Ronan, Mickey and Orral, 1987) for weaker fields. In that fitting, the Faraday rotation effect, which is one of the error sources for strong fields, is properly taken into account. The noise level in the original magnetogram is about 70 G for transverse fields and 10 G for longitudinal fields. The basic observational parameters of the magnetograms used in this study are presented in Table I. To resolve the $`180^{}`$ ambiguity, we have adopted a multi-step ambiguity solution method by Canfield et al. (1993) (for details, see the Appendix of their paper). In the 3rd and 4th steps of their method, they have chosen the orientation of the transverse field that minimizes the angle between neighboring field vectors and the field divergence $`|𝐁|`$. ## Section 3 Evoultion of Magnetic Nonpotentiality In the active region AR 5747, a number of flares took place including a 2B/X3 flare. In Table II, we summarize some basic features of major X-ray flares during the observing period. Figure 1 shows the ambiguity resolved vector magnetograms obtained on Oct. 20 to Oct. 22, 1989. The three magnetograms have the same field of view. As seen in the figures, strong sheared transverse fields are concentrated near the neutral line and they form a global clockwise winding pattern. In Paper I, an account is given of the magnetic nonpotentiality parameters used in this study. The vertical current density is presented in Figure 2. The vertical current density kernels persisted, with little change of configuration, over the whole observing span. Wang, Xu, and Zhang (1994) and Leka et al. (1993) have discussed the important characteristics of these vector magnetic fields and vertical current densities. We tabulate the time variation of magnetic fluxes and total vertical currents of positive and negative signs in Table III. The differences between the absolute values of the positive and negative quantities are within a few percent. As seen in the table, the magnetic fluxes and total vertical currents of both signs decreased with time. It is observed that several small $`\delta `$ sunspots (A1, A2 and A3 in Fig. 1a) disappeared in Figure 1b, which suggests that flaring events between Oct. 20 and 21 should be associated with flux cancellation. It is to be noted that there were no remarkable flux emergence during the observing period. Figure 3 shows the angular shear multiplied by transverse field strength and Figure 4 shows the shear angle multiplied by total field strength. As seen in the figures, strong magnetic shear is concentrated near the inversion line, where $`\mathrm{H}_\beta `$ emission patches were observed (see Fig. 2 of Wang, Xu, and Zhang 1994). The time variation of two weighted mean shear angles is given in Table IV. The values of two shear angles monotonically decreased with time. The magnetic free energy density is shown in Figure 5. Its evolutionary trend is quite similar to that of shear angles. The 2-D MAD multiplied by total field strength (Figure 6) also has a similar evolutionary pattern to that of the other nonpotentiality parameters above. We summarize the variations of mean free energy density, planar sum of free energy density, and sum of MAD multiplied by field strength in Table IV, in which the values obtained with the potential field method for the $`180^{}`$ ambiguity resolution are also given in parentheses for comparison. As seen in the table, all the nonpotentiality parameters under consideration decreased with time, which suggests that the active region was in a relaxation stage during the observing period. From the above results, we may infer that the flares that occurred in our observation are just bursty parts of energy release in a long term relaxation of the stressed magnetic field. In a self-organizing system, a transition toward a lower energy state proceeds very mildly in the beginning and for most of time until a sudden bursty event develops as in an avalanche. Why, then, did a series of flares occur, rather than one? Flares can surely take place in repetition if enough energy is supplied into the system between the flaring events to recover the free energy released by the preceding flaring event. However, this is not the case as far as the flares in our observation are concerned. No indication of energy input, whether flux emergence or increase of magnetic shear, was detected throughout our observing span. We thus speculate that the occurrence of a series of flares was possible due to the complex geometry of our active region magnetic field. A simple bipolar magnetic field would proceed to a lower energy state by one bursty event of reconnection. However, in a complex active region containing more than a pair of magnetic poles, the transition to the lowest energy state may possibly comprise several steps of macroscopic change in field topology. This speculation, of course, has to be examined by further studies involving many other observations and numerical experiments as well. ## Section 4 Magnetic Forces and Linear Force-Free Field Approximation It is generally believed that solar magnetic fields are force-free in the corona, but far from it in the photosphere. However, to construct a force-free model of the coronal magnetic field, the field data observed at the photospheric level are employed as boundary conditions. Not only because a magnetohydrostatic equilibrium under gravity is more difficult to construct than a force-free solution, but also because no reliable information about plasma pressure is available in the photosphere, force-free field modeling with photospheric boundary conditions is being widely attempted (e.g., McClymont and Mikić, 1994 for AR 5747) despite the afore-mentioned inconsistency. The reliability of such models thus depends on how much the field behaves like a force-free field near the photosphere. In this section, we investigate the “force-freeness” of AR 5747. A force-free field is a magnetic field satisfying the Lorentz force-free condition $$(\times 𝐁)\times 𝐁=0,$$ (1) which can be rewritten as $$\times 𝐁=\alpha 𝐁.$$ (2) The so called force-free coefficient $`\alpha `$ is thus given by $$\alpha =\frac{J_x}{B_x}=\frac{J_y}{B_y}=\frac{J_z}{B_z},$$ (3) in rationalized electromagnetic units. Taking divergence of Equation (2) and using $`𝐁=0`$, we have $$𝐁\alpha =0,$$ (4) which means that $`\alpha `$ is a function of each field line. With the vector magnetogram of Oct. 20, 1989, Canfield et al. (1991) examined whether the ratio of current density to field strength ($`J/B`$) is conserved along each elementary flux tube. Although the force-free coefficient $`\alpha =J_i/B_i`$ is necessarily constant along each field line in a force-free field, the condition is in practice difficult to check due to the noises in current density in weak field regions (McClymont, Jiao, and Mikić, 1997). To examine the force-freeness of AR 5747, we have computed the integrated Lorentz force components scaled with the integrated magnetic pressure force, i.e., $`F_x/F_o`$, $`F_y/F_o`$ and $`F_z/F_o`$ (Metcalf et al. 1995), in which $$F_x=\frac{1}{4\pi }B_xB_z𝑑x𝑑y,$$ (5) $$F_y=\frac{1}{4\pi }B_yB_z𝑑x𝑑y,$$ (6) $$F_z=\frac{1}{8\pi }(B_z^2B_x^2B_y^2)𝑑x𝑑y,$$ (7) and $$F_o=\frac{1}{8\pi }(B_z^2+B_x^2+B_y^2)𝑑x𝑑y,$$ (8) where $`F_o`$ is the integrated magnetic pressure force. In this calculation, only pixels with field strength larger than 100 G for both longitudinal and transverse fields are considered to reduce the effect of noise. In Table V, we present the normalized integrated forces obtained from three vector magnetograms. The absolute values of these forces are much smaller than those at the photospheric level of Metcalf et al. (1995), which implies that our active region field is more or less force-free even near the solar surface. It is also noted that the magnetic fields of AR 5747 become less force-free in a relaxation stage as times go. Now we turn to the question whether our active region field is approximately linearly force-free. In Figure 1, the transverse field vectors show a common curling pattern for each magnetic polarity, which allows us to expect that values of the force-free coefficient do not diverge much. To investigate the linearity, we have plotted for each data set $`B_z`$ vs. $`J_z`$ and a plausible regression line obtained by eye fitting in Figure 7. The figures show that there exist approximate linear relationships between $`B_z`$ and $`J_z`$ for three vector magnetograms. We have already observed in Figures 1 and 2 that the distribution of vertical electric current density well matches that of magnetic fluxes of opposite polarity. In Table V, we have tabulated linear force-free coefficients obtained by linear regression in Figure 7. We have also listed the coefficients obtained by minimizing the difference between the horizontal components in a constant $`\alpha `$ force-free field model and the horizontal field vectors in the vector magnetogram, considering only pixels with $`B_t>300\mathrm{G}`$ (Pevtsov et al., 1996). In both sets, the absolute value of force-free coefficients decreased with time, as other nonpotentiality parameters did. This suggests that the linear force-free coefficient could be as good a nonpotential evolutionary indicator as other nonpotentiality parameters as long as the linear force-free approximation is more or less valid. Furthermore, the linear force-free coefficient has a merit as a global parameter. ## Section 5 Summary and Conclusion In this study, we have analyzed the MSO vector magnetograms of AR 5747 taken on October 20 to 22, 1989. A nonlinear least square method was adopted to derive the magnetic field vectors from the observed Stokes profiles and a multi-step ambiguity solution method was used to resolve the $`180^{}`$ ambiguity. From the ambiguity-resolved vector magnetograms, we have derived a set of physical quantities which are magnetic flux, vertical current density, magnetic shear angle, angular shear, magnetic free energy density and MAD, a measure of magnetic field discontinuity. In order to examine the force-free character of the active region field, we have calculated the normalized integrated Lorentz forces and compared the longitudinal field $`B_z`$ and the corresponding vertical current density $`J_z`$. Most important results from this work can be summarized as follows. 1) Magnetic nonpotentiality is concentrated near the inversion line, where flare brightenings are observed. 2) All the physical parameters that we have considered (vertical current density, mean shear angle, mean angular shear, sum of free energy density and sum of MAD) decreased with time, which indicates that the active region was in a relaxation period. 3) The X-ray flares that occurred during the observing period could be related with flux cancellation. Flaring events might be considered as bursty parts in the long term relaxation process. 4) It is found that the active region was approximately linearly force-free throughout the observing span and the absolute value of the derived linear force-free coefficient decreased with time. Our result suggests that the linear force-free coefficient could be a good global parameter indicating the evolutionary status of an active region as long as the field is approximately force-free. ## Acknowledgements We wish to thank Dr. Metcalf for allowing us to use some of his numerical routines for analyzing vector magnetograms and Dr. Pevtsov for helpful comments. The data from the Mees Solar Observatory, University of Hawaii are produced with the support of NASA grant NAG 5-4941 and NASA contract NAS8-40801. This work has been supported in part by the Basic Research Fund (99-1-500-00 and 99-1-500-21) of Korea Astronomy Observatory and in part by the Korea-US Cooperative Science Program under KOSEF(995-0200-004-2).
no-problem/9910/cond-mat9910454.html
ar5iv
text
# Paths to Self-Organized Criticality ## I Introduction The label “self-organized” is applied indiscriminately in the current literature to ordering or pattern formation amongst many interacting units. Implicit is the notion that the phenomenon of interest, be it scale invariance, cooperation, or supra-molecular organization (e.g., micelles), appears spontaneously. That, of course, is just how the magnetization appears in the Ising model; but we don’t speak of “self-organized magnetization.” After nearly a century of study, we’ve come to expect the spins to organize; the zero-field magnetization below $`T_c`$ is no longer a surprise. More generally, spontaneous organization of interacting units is precisely what we seek, to explain the emergence of order in nature. We can expect many more surprises in the quest to discover what kinds of order a given set of interactions lead to. All will be self-organized, there being no outside agent on hand to impose order! “Self-organized criticality” (SOC) carries greater specificity, because criticality usually does not happen spontaneously: various parameters have to be tuned to reach the critical point. Scale-invariance in natural systems, far from equilibrium, isn’t explained merely by showing that the interacting units can exhibit scale invariance at a point in parameter space; one has to show how the system is maintained (or maintains itself) at the critical point. (Alternatively one can try to show that there is generic scale invariance, that is, that criticality appears over a region of parameter space with nonzero measure .) “SOC” has been used to describe spontaneous scale invariance in general; this would seem to embrace random walks, as well as fractal growth , diffusive annihilation ($`A`$ $`+`$ $`A`$ $`0`$ and related processes), and nonequilibrium surface dynamics . Here we restrict the term to systems that are attracted to a critical (scale-invariant) stationary state; the chief examples are sandpile models . Another class of realizations, exemplified by the Bak-Sneppen model , involve extremal dynamics (the unit with the extreme value of a certain variable is the next to change). We will see that in many examples of SOC, there is a choice between global supervision (an odd state of affairs for a “self-organized” system), or a strictly local dynamics in which the rate of one or more processes must be tuned to zero. The sandpile models introduced by Bak, Tang and Wiesenfeld (BTW) , Manna , and others have attracted great interest, as the first and clearest examples of self-organized criticality. In these models, grains of “sand” are injected into the system and are lost at the boundaries, allowing the system to reach a stationary state with a balance between input and output. The input and loss processes are linked in a special way to the local dynamics, which consists of activated, conservative, redistribution of sand. In the limit of infinitely slow input, the system displays a highly fluctuating, scale-invariant avalanche-like pattern of activity. One may associate rates $`h`$ and $`ϵ`$, respectively, with the addition and removal processes. We have to adjust these parameters to realize SOC: it appears in the limit of $`h`$ and $`ϵ0^+`$ with $`h/ϵ0`$ . (The addition and removal processes occur infinitely slowly compared to the local redistribution dynamics, which proceeds at a rate of unity. Loss is typically restricted to the boundaries, so that $`ϵ0`$ is implicit in the infinite-size limit.) Questions about SOC fall into two categories. First, Why does self-organized criticality exist? What are the conditions for a model to have SOC? Second, the many questions about critical behavior (exponents, scaling functions, power-spectra, etc.) of specific models, and whether these can be grouped into universality classes, as for conventional phase transitions both in and out of equilibrium. Answers to the second type of question come from exact solutions , simulations , renormalization group analyses , and (one may hope) field theoretical analysis. Despite these insights, assertions in the literature about spontaneous or parameter-free criticality have tended to obscure the nature of the phase transition in sandpiles, fostering the impression that SOC is a phenomenon sui generis, inhabiting a different world than that of standard critical phenomena. In this paper we show that SOC is a phase transition to an absorbing state, a kind of criticality that has been well studied, principally in the guise of directed percolation . Connections between SOC and an underlying conventional phase transition have also been pointed out by Narayan and Middleton , and by Sornette, Johansen and Dornic . Starting with a simple example (Sec. II), we will see that the absorbing-state transition provides the mechanism for SOC (Sec. III). That is, we explain the existence of SOC in sandpiles on the basis of a conventional critical point. In Sec. IV we discuss the transformation of a conventional phase transition to SOC in the contexts of driven interfaces, a stochastic process that reproduces the stationary properties of directed percolation, and the Bak-Sneppen model. We find that criticality requires tuning, or equivalently, an infinite time-scale separation. With this essential point in mind, we present a brief review of the relevance of SOC models to experiments in Sec. V. Sec. VI presents a summary of our ideas. We note that this paper is not intended as a complete review of SOC; many interesting aspects of the field are not discussed. ## II A simple example We begin with a simple model of activated random walkers (ARW). Each site $`j`$ of a lattice (with periodic boundary conditions) harbors a number $`z_j=0,1,2\mathrm{}`$ of random walkers. (For purposes of illustration the ring $`1,\mathrm{},L`$ will do.) Initially, $`N`$ walkers are distributed randomly amongst the sites. Each walker moves independently, without bias, to one of the neighboring sites (i.e., from site $`j`$ to $`j+1`$ or $`j1`$, with site $`L+11`$ and $`0L`$), the only restriction being that an isolated walker (at a site with $`z_j=1`$) is paralyzed until such time as another walker or walkers joins it. The active sites (with $`z_j2`$) follow a Markovian (sequential) dynamics: each active site loses, at a rate 1, a pair of walkers, which jump independently to one of the neighbors of site $`j`$. (Thus in one dimension there is a probability of 1/2 that each neighbor gains one walker, while with probability 1/4 both walkers hop to the left, or to the right.) The model we have just defined is characterized by the number of lattice sites, $`L^d`$, and the number of particles, $`N`$. It has two kinds of configurations: active, in which at least one site has two or more walkers, and absorbing, in which no site is multiply occupied, rendering all the walkers immobile . For $`N>L^d`$ only active configurations are possible, and since $`N`$ is conserved, activity continues forever. For $`NL^d`$ there are both active and absorbing configurations, the latter representing a shrinking fraction of configuration space as the density $`\zeta N/L^d1`$. Given that we start in an active configuration (a virtual certainty for an initially random distribution with $`\zeta >0`$ and $`L`$ large), will the system remain active indefinitely, or will it fall into an absorbing configuration? For small $`\zeta `$ it should be easy for the latter to occur, but it seems reasonable that for sufficiently large densities (still $`<1`$), the likelihood of reaching an absorbing configuration becomes so small that the walkers remain active indefinitely. In other words, we expect sustained activity for densities greater than some critical value $`\zeta _c`$, with $`\zeta _c<1`$. A simple mean-field theory provides a preliminary check of this intuition. Consider activated random walkers in one dimension. For a site to gain particles, it must have an active ($`z2`$) nearest neighbor. Since active sites release a pair of walkers at a rate of unity, a given site receives a single walker from an active neighbor at rate 1/2, and a pair of walkers at rate 1/4. Thus the rate of transitions that take $`z_j`$ to $`z_j+1`$ is $`[P(z_j,z_{j+1}2)+P(z_j,z_{j1}2)]/2`$; transitions from $`z_j`$ to $`z_j+2`$ occur at half this rate. In the mean-field approximation we ignore correlations between different sites, and factorize the joint probability into a product: $`P(z,z^{}2)=\rho _z\rho _a`$, where $`\rho _z`$ is the fraction of sites with occupation $`z`$ and $`\rho _a=_{z2}\rho _z`$ is the fraction of active sites. Using this factorization, we can write a set of equations for the site densities: $$\frac{d\rho _z}{dt}=\rho _a(\rho _{z1}\rho _z)+\frac{1}{2}\rho _a(\rho _{z2}\rho _z)+\rho _{z+2}\theta _{z2}\rho _z,(z=0,1,2\mathrm{}),$$ (1) where $`\theta _n=0`$ for $`n<0`$ and is one otherwise. The final two terms represent active sites losing a pair of walkers. It is easy to see that the total probability, and the density $`\zeta =_zz\rho _z`$ are conserved by the mean-field equations. This infinite set of coupled equations can be integrated numerically if we impose a cutoff at large $`z`$. (This is justified by the finding that $`\rho _z`$ decays exponentially for large $`z`$.) The mean-field theory predicts a continuous phase transition at $`\zeta _c=1/2`$. For $`\zeta <\zeta _c`$ the only stationary state is the absorbing one, $`\rho _a=0`$, while for $`\zeta \stackrel{>}{}\zeta _c`$ the active-site density grows $`\zeta \zeta _c`$. A two-site approximation (in which we write equations for the fraction $`\rho _{z,z^{}}`$ of nearest-neighbor pairs with given heights, but factorize joint probabilities involving three or more sites), yields $`\zeta _c=0.75`$. The existence of a continuous phase transition is confirmed in Monte Carlo simulations, which yield $`\zeta _c0.9486`$ in one dimension, and $`\zeta _c0.7169`$ in two dimensions. Figure 1 shows how the stationary density of active sites $`\rho _a`$ depends on $`\zeta `$; we see $`\rho _a`$ growing continuously from zero at $`\zeta _c`$. (The points represent estimated densities for $`L\mathrm{}`$, based on simulation data for $`L`$ = 100 — 5000.) The inset shows that the active-site density follows a power law, $`\rho _a(\zeta \zeta _c)^\beta `$, with $`\beta =0.43(1)`$; a finite-size scaling analysis confirms this result . In summary, activated random walkers exhibit a continuous phase transition from an absorbing to an active state as the particle density is increased above $`\zeta _c`$, with $`\zeta _c`$ strictly less than 1. (It has yet to be shown rigorously that the active-site density in the ARW model is singular at $`\zeta _c`$, in the infinite-size limit; our numerical results are fully consistent with the existence of such a singularity.) ### A Absorbing-State Phase Transitions Absorbing-state phase transitions are well known in condensed matter physics, and population and epidemic modeling . The simplest example, which may be thought of as the “Ising model” of this class of systems, is the contact process . Again we have a lattice of $`L^d`$ sites, each of which may be occupied (active) or vacant. Occupied sites turn vacant at a rate of unity; vacant sites become occupied at a rate of $`(\lambda /2d)n_o`$ where $`n_o`$ is the number of occupied nearest neighbors (the factor $`2d`$ represents the number of nearest neighbors). There is a unique absorbing configuration: all sites vacant. For $`\lambda `$ sufficiently small, the system will eventually fall into the absorbing state, while for large $`\lambda `$ an active stationary state can be maintained. Letting $`\rho `$ represent the density of occupied sites, the mean-field theory analogous to the one formulated above for activated random walkers reads: $$\frac{d\rho }{dt}=(\lambda 1)\rho \lambda \rho ^2.$$ (2) This predicts a continuous phase transition (from $`\rho 0`$ to $`\rho =1\lambda ^1`$ in the stationary state) at $`\lambda _c=1`$. Rigorous analyses confirm the existence of a continuous phase transition at a critical value $`\lambda _c`$, in any dimension $`d1`$. Simulations and series analyses yield $`\lambda _c=3.29785(2)`$ in one dimension. This model, and its continuous-update counterpart, directed percolation (DP; see Sec. IV), have been studied extensively. The critical exponents are known to good precision for $`d=1`$, 2, and 3; the upper critical dimension $`d_c=4`$. There is, in addition, a well established field theory for this class of models : $$\frac{\rho }{t}=^2\rho a\rho b\rho ^2+\eta (x,t).$$ (3) Here $`\rho (x,t)`$ is a local particle density, and $`\eta (x,t)`$ is a Gaussian noise with autocorrelation $$\eta (x,t)\eta (x^{},t^{})=\mathrm{\Gamma }\rho (x,t)\delta (xx^{})\delta (tt^{}).$$ (4) That $`\eta ^2`$ is linear in the local density follows from the fact that the numbers of events (creation and annihilation) in a given region are Poissonian random variables, so that the variance equals the expected value. (The noise must vanish when $`\rho =0`$ for the latter to be an absorbing state!) This field theory serves as the basis for a strong claim of universality : Continuous phase transitions to an absorbing state fall generically in the universality class of directed percolation. (It is understood that the models for which we expect DP-like behavior have short-range interactions, and are not subject to special symmetries or conservation laws beyond the simple translation-invariance of the contact process. Models subject to a conservation law are known to have a different critical behavior .) The activated random walkers model resembles the contact process in having an absorbing-state phase transition. We should note, however, two important differences between the models. First, ARW presents an infinite number ($`2^{L^d}`$, to be more precise) of absorbing configurations, while the CP has but one. In fact, particle models in which the number of absorbing configurations grows exponentially with the system size have also been studied intensively. The simplest example is the pair contact process, in which both elementary processes (creation and annihilation) require the presence of a nearest-neighbor pair of particles . In one dimension, a pair at sites $`i`$ and $`i+1`$ can either annihilate, at rate $`p`$, or produce a new particle at either $`i1`$ or $`i+2`$, at rate $`1p`$ (provided the selected site is vacant). This model shows a continuous phase transition from an active state for $`p<p_c`$ to an absorbing state above $`p_c`$. The static critical behavior again belongs to the DP universality class, but the critical exponents associated with spreading of activity from an initially localized region are nonuniversal, varying continuously (in one dimension) with the particle density in the surrounding region . A second important difference between ARW and the CP and PCP is that the former is subject to a conservation law (the number of walkers cannot change from its initial value). In a field-theoretic description of ARW we will therefore need (at least) two fields: the local density $`\rho (x,t)`$ of active sites, and the local particle density $`\zeta (x,t)`$; the latter is frozen in regions where $`\rho =0`$. The evolution of $`\rho `$ is coupled to $`\zeta `$ because the particle density controls existence and level of activity in the ARW model. Given that absorbing-state phase transitions fall generically in the universality class of directed percolation, it is natural to ask whether this is the case for activated random walkers as well. The answer, apparently, is “No.” The critical exponent $`\beta `$ for ARW is, as we noted above, 0.43, while for one-dimensional DP $`\beta =0.2765`$ ; the other critical exponents differ as well . While the reason for this difference is not understood, it appears, at least, to be consistent with the existence of a conserved field in ARW. To summarize, our simple model of activated random walkers has an absorbing-state phase transition, as does the contact process, directed percolation and the PCP. All possess the same basic phase diagram: active and inactive phases separated by a continuous phase transition at a critical value of a “temperature-like” parameter ($`\zeta `$ in ARW, $`\lambda `$ in the CP). But ARW possesses an infinite number of absorbing configurations, and the evolution of its order parameter (the active-site density) is coupled to a conserved density $`\zeta `$. The latter presumably underlies its belonging to a different universality class than DP. ## III Activated Random Walkers and Sandpiles The activated random walkers model possesses a conventional critical point: we have to tune the parameter $`\zeta `$ to its critical value. What has it got to do with self-organized criticality? The answer is that ARW has essentially the same local dynamics as a model known to exhibit SOC, namely, the Manna sandpile . In Manna’s sandpile, the redistribution dynamics runs in parallel: at each time step, all of the sites with $`z2`$ simultaneously liberate two walkers, which jump randomly to nearest neighbor sites. This may result in a new set of active sites, which relax at the next time step, and so on. (Time advances by one unit at each lattice update, equivalent to the unit relaxation rate of an active site in ARW.) We defined ARW with sequential dynamics as this makes it a Markov process with local transitions in configuration space, like a kinetic Ising model. There is of course nothing wrong in defining ARW with parallel dynamics; it too has an absorbing-state phase transition. There is a much more fundamental difference between the Manna sandpile and the ARW model: the former allows addition and loss of walkers. Recall that we defined the ARW with periodic boundary conditions; walkers can never leave the system. In the sandpile walkers may exit from one of the boundary sites. (On the square lattice, for example, a walker at an edge site has a probability of 1/4 to leave the system at the next step.) If we allow walkers to leave, then eventually the system will reach an absorbing configuration. When this happens, we add a new walker at a randomly chosen site. This innocent-sounding prescription — add a walker when and only when all other activity ceases — carries the infinite time scale separation essential to the appearance of SOC in sandpiles. The sequence of active configurations between two successive additions is known as an avalanche; avalanches may involve any number of sites, from zero (no topplings) up to the entire system. Manna showed that his model reaches a stationary state in which avalanches occur on all scales, up to the size of the system, and follow a power-law distribution, $`P(s)s^\tau `$, for $`ss_c`$. (Here $`s`$ is the number of transfer or toppling events in a given avalanche, and $`s_cL^D`$ is a cutoff associated with the finite system size.) In other words, the Manna sandpile, like the models devised by Bak, Tang and Wiesenfeld and others, exhibits scale invariance in the stationary state. We know that ARW, which has the same local dynamics as the Manna sandpile, shows scale invariance when (and only when) the density $`\zeta =\zeta _c`$. So in the stationary state of the Manna model, the density is somehow attracted to its critical value. How does it happen? The mechanism of SOC depends upon a particular relation between the input and loss processes, and the conventional absorbing-state phase transition in the model with a fixed number of particles. Walkers cannot enter the system while it is active, though they may of course leave upon reaching the boundary. In the presence of activity, then, $`\zeta >\zeta _c`$ and $`d\zeta /dt<0`$. In the absence of activity there is addition, but no loss of walkers, so $`\zeta <\zeta _c`$ implies $`d\zeta /dt>0`$. Evidently, the only possible stationary value for the density in the sandpile is $`\zeta _c`$! Of course, it is possible to have a low level of activity locally, in a region with $`\zeta <\zeta _c`$, but under such conditions activity cannot propagate or be sustained. (One can similarly construct absorbing configurations with $`\zeta >\zeta _c`$, but these are unstable to addition of walkers, or the propagation of activity from outside.) In the infinite-size limit, the stationary activity density is zero for $`\zeta <\zeta _c`$, and positive for $`\zeta >\zeta _c`$, ensuring that $`\zeta `$ is pinned at $`\zeta _c`$, when loss is contingent upon activity, and addition upon its absence. That the Manna sandpile, in two or three dimensions, with parallel dynamics, has a scale-invariant avalanche distribution is well known . Here we note that the same holds for the one-dimensional version, with random sequential dynamics. Figure 2 shows the probability distribution for the avalanche size (the total number of topplings) when we modify ARW to include loss of walkers at the boundaries, and addition at a randomly chosen site, when the system falls into an absorbing configuration. The distribution follows a power law, $`P(s)s^{\tau _s}`$, over a wide range of avalanche sizes and durations; there is, as expected, an exponential cutoff $`s_cL_s^D`$ for events larger than a characteristic value associated with the finite size of the lattice. (Our best estimates are $`\tau _s=1.10(2)`$ and $`D`$ = 2.21(1).) The upper inset of Fig. 2 shows that the stationary density approaches $`\zeta _c`$, the location of the absorbing-state phase transition, as $`L\mathrm{}`$. It is also interesting to note that, in contrast with certain deterministic one-dimensional sandpile models , the present example appears to exhibit finite-size scaling, as shown in the lower inset of Fig. 2. ### A A Recipe for SOC The connection between activated random walkers and the Manna sandpile suggests the following recipe for SOC. Start with a system having a continuous absorbing-state phase transition at a critical value of a density $`\zeta `$. This density should represent the global value of a local dynamical variable conserved by the dynamics. Add to the conservative local dynamics (1) a process for increasing the density in infinitesimal steps ($`\zeta \zeta +d\zeta `$) when the local dynamics reaches an absorbing configuration, and (2) a process for decreasing the density at an infinitesimal rate while the system is active. Run the system until it reaches the stationary state; it is now ready to display scale invariance. Let’s see how these elements operate in the Manna sandpile. We started with activated random walkers, which does indeed display a continuous absorbing-state transition as a function the density $`\zeta `$ of walkers; this density, moreover, is conserved. To this we added the input of one walker ($`\zeta \zeta +1/L^d`$ in $`d`$ dimensions), when the system is inactive. We then broke the translational symmetry of the ARW model to define boundary sites, and allowed walkers at the boundary to leave the system. The latter implies a loss rate $`d\zeta /dtL^1\rho _b`$, where $`\rho _b`$ is the activity density at the boundary sites. The conditions of our recipe are satisfied when $`L\mathrm{}`$, which we needed anyway, to have a proper phase transition in the original model. Now we can examine the ingredients one by one. First, the phase transition in the original model should be to an absorbing state, because our input and loss steps are conditioned on the absence or presence of activity. Second, the temperature-like parameter controlling the transition should be a conserved density. So the contact process and PCP aren’t suitable starting points for SOC, because the control parameter $`\lambda `$ isn’t a dynamical variable. (To self-organize criticality in the CP, we’d have to change $`\lambda `$ itself, depending on the absence of presence of activity. But this is tuning the parameter by hand!) Third, we need to change the density $`\zeta `$ in infinitesimal steps, else we will always be jumping between values above or below $`\zeta _c`$ without actually hitting the critical density. The same thing will happen, incidentally, if we start out with a model that has a discontinuous transition (with attendant hysteresis) between an active and an absorbing state; this yields self-organized stick-slip behavior. The basic ingredients of our recipe are an absorbing-state phase transition, and a method for forcing the model to its critical point, by adding (removing) particles when the system is frozen (active). Following the recipe, the transformation of a conventional critical point to a self-organized one does not seem surprising . ### B Firing the Baby-Sitter The reader may have noted a subtle inconsistency in the above discussion. We rejected the contact process as a suitable candidate for SOC because changing the parameter $`\lambda `$ on the basis of the current state (active or frozen) amounts to tuning. Cannot the same be said for adding walkers in the Manna sandpile? Somehow, a dynamics of walkers entering and leaving the system seems more “natural” than wholesale fiddling with a parameter. But who is going to watch for activity, to know when to add a particle? A system managed by a supervisor can hardly be called “self-organized!” If we want to avoid building a supervisor or baby-sitter into the model, we had better say that addition goes on continuously, at rate $`h`$, and that SOC is realized in the limit $`h0^+`$ . (The original sandpile definitions have a baby-sitter. Simulations, in particular, have a live-in baby-sitter to decide the next move. Addition at rate $`h0^+`$ is a supervisor-free interpretation of the dynamics .) In the recipe for SOC without baby-sitters, we replace addition (1) above with (1’): allow addition at rate $`h`$, independent of the state of the system, and take $`h0^+`$. (There is no problem with the removal step: dissipation is associated with activity, which is local.) We pay a price when we fire the baby-sitter: there is now a parameter $`h`$ in the model, which has to be tuned to zero. Evidently, sandpiles don’t exhibit generic scale invariance, but rather, scale invariance at a point in parameter space. This is consistent with Grinstein’s definition of SOC, which requires an infinite separation of time scales from the outset . ### C Variations In certain respects, our recipe allows greater freedom than was explored in the initial sandpile models. There is no special reason, for example, why loss of walkers has to occur at the boundaries. We simply require that activity be attended by dissipation at an infinitesimal rate. SOC has, indeed, been demonstrated in translation-invariant models with a uniform dissipation rate $`ϵ\rho `$ when $`ϵ0^+`$. In the original sandpile models, addition takes place with equal probability at any site, but restricting addition to a subset of the lattice will still yield SOC. Our recipe allows a tremendous amount of freedom for the starting model; the only restriction is that it possess an absorbing-state critical point as a function of a conserved density. The dynamical variables can be continuous or discrete. The hopping process does not have to be symmetric, as in ARW. (In fact, directed hopping yields an exactly-soluble sandpile .) The model need not be defined on a regular lattice; any structure with a well defined infinite-size limit should do. The dynamics, moreover, can be deterministic. Consider a variant of the ARW model (on a $`d`$-dimensional cubic lattice) in which a site is active if it has $`z2d`$ walkers. At each lattice update (performed here with parallel dynamics), every active site ‘topples, transferring a single walker to each of the $`2d`$ nearest-neighbor sites. In this case the only randomness resides in the initial configuration. But the model again exhibits a continuous absorbing-state phase transition as we tune the number of walkers per site, $`\zeta `$. Starting with this deterministic model, our recipe yields the celebrated Bak-Tang-Wiesenfeld sandpile. As a further variation, we can even relax the condition that the order parameter is coupled to a conserved field . The price is the introduction of an additional driving rate. This situation is exemplified by the forest-fire model . The model is defined on a lattice in which each site can be in one of three states: empty, or occupied by a tree, either live or burning. Burning trees turn into empty sites, and set fire to the trees at nearest-neighbor sites, at a rate of unity. It is easy to recognize that burning trees are the active sites: any configuration without them is absorbing. In an infinite system, there will be a critical tree density that separates a phase in which fires spread indefinitely from an absorbing phase with no burning trees. In a finite system we can study this critical point by fixing the density of trees at its critical value . So far we have no process for growing new trees. The forest-fire propagates like an epidemic with immunity: a site can only be active once, and there is no proper steady state . As in sandpiles, to obtain a SOC state we must introduce an external driving field $`f`$ that introduces a small probability for each tree to catch fire spontaneously. This driving field allows the system to jump between absorbing configurations through the spreading of fires. The latter, however, are completely dissipative, i.e., the number of trees is not conserved. Thus, if we want to reach a stationary state we must introduce a second external driving field $`p`$ that causes new trees to appear. (Empty sites become occupied by a living tree at rate $`p`$.) In this case criticality is reached by the double slow driving condition $`f,p0`$ and $`f/p0`$. In practice, this slow driving condition is achieved by the usual supervisor, that stops fire ignition and tree growth during active intervals. ### D Fixed-Energy Sandpiles If someone hands us a sandpile displaying SOC, we can identify the initial model in our recipe; it has the same local dynamics as the SOC sandpile. Thinking of the conserved $`\zeta `$ as an energy density, we call the starting model a fixed-energy sandpile (FES). Thus the activated random walkers model introduced in Sec. II is the fixed-energy Manna sandpile, and the variant described in the preceding subsection is the BTW FES. Now the essential feature of the fixed-energy sandpile is an absorbing-state phase transition. SOC appears when we rig up the addition and removal processes to drive the local FES dynamics to $`\zeta _c`$. To understand the details of SOC, then, we ought to try to understand the conventional phase transition in the corresponding fixed-energy sandpile. This is our program for addressing the second class of questions (about critical exponents and universality classes) mentioned in the Introduction. Since fixed-energy sandpiles have a simple dynamics (Markovian or deterministic) without loss or addition, and are translation-invariant (when defined on a regular lattice), they should be easier to study than their SOC counterparts. The relation to absorbing-state phase transitions leads to a proper identification of the order parameter , and suggests a strategy for constructing a field theory of sandpiles . Spreading exponents, conventionally measured in absorbing-state phase transitions, are related through scaling laws to avalanche exponents, usually measured in slowly driven systems . ## IV Other Paths to SOC ### A Driven Interfaces In this section we illustrate the central idea of the preceding section — the transformation of a conventional phase transition to a self-organized one — in a different, though related, context. We begin with a single point mass undergoing driven, dissipative motion in one dimension. Its position $`H(t)`$ follows the equation of motion $$M\frac{d^2H}{dt^2}+\gamma \frac{dH}{dt}=FF_p(H),$$ (5) where $`M`$ is the mass, $`\gamma \dot{H}`$ represents viscous dissipation, $`F`$ is the applied force, and $`F_p(H)`$ is a position-dependent pinning force. In many cases of interest (i.e., domain walls or flux-lines) the motion is overdamped and we may safely set $`M=0`$. The pinning force has mean zero ($`F_p(h)=0`$) and its autocorrelation $`F_p(h)F_p(h+y)\mathrm{\Delta }(|y|)`$ decays rapidly with $`|y|`$; the statistical properties of $`F_p`$ are independent of $`H`$. Assuming, as is reasonable, that $`F_p`$ is bounded ($`F_pF_M`$), we expect the motion to continue if the driving force $`F`$ exceeds $`F_M`$. Otherwise the particle gets stuck somewhere. Now consider an elastic interface (or a flux line) subject to an external force, viscous damping, and a pinning force associated with irregularities in the surrounding medium. If we discretize our interface, using $`H_i(t)`$ to represent the position, along the direction of the driving force, of the $`i`$-th segment , the equation of motion is $$\gamma \frac{dH_i}{dt}=H_{i+1}+H_{i1}2H_i(t)+FF_{p,i}(H_i),$$ (6) where the $`F_{p,i}(H_i)`$ are a set of independent pinning forces with statistical properties as above. This driven interface model has a depinning transition at a critical value, $`F_c`$, of the driving force . (Eq. (6) describes a linear driven interface, so-called because it lacks the nonlinear term $`(h)^2`$, familiar from the KPZ equation .) For $`F<F_c`$ the motion is eventually arrested ($`dH_i/dt=0`$ for all $`i`$), while for $`F>F_c`$ movement continues indefinitely. Close to $`F_c`$ there are avalanche-like bursts of movement on all scales, interspersed with intervals of near-standstill. The correlation length and relaxation time diverge at $`F_c`$, as in the other examples of absorbing-state phase transitions we’ve discussed above. We may take the order parameter for this transition as the mean velocity, $`\overline{v}=dH_i/dt`$. To reach the absorbing-state phase transition in the driven interface model we need to adjust the applied force $`F`$ to its critical value $`F_c`$. Can we modify this system so that it will be attracted to the critical state? Note that $`F`$ is not a dynamical variable, any more than is $`\lambda `$, in the contact process. Our sandpile recipe doesn’t seem to apply here. The crucial observation is that we may change the nature of the driving, replacing the constant force $`F`$ with a constraint of fixed velocity, $`dH_i/dt=v`$. A finite $`v`$ corresponds to a state in the active phase: the mean driving force $`F_i_v>F_c`$ for $`v>0`$. When we allow $`v`$ to tend to zero from above, we approach the depinning transition. This limit can be attained through an extremal dynamics in which we advance, at a given step, only the element subject to the smallest pinning force . (Notice that in extremal dynamics we are directly adjusting the order parameter.) To avoid the global supervision implicit in extremal dynamics we may attach each element of the interface to a spring, and move the other end of each spring at speed $`V`$. Now the equations of motion read $$\gamma \frac{dH_i}{dt}=H_{i+1}+H_{i1}2H_i(t)+k(VtH_i)F_{p,i}(H),$$ (7) where $`k`$ is the spring constant. For high applied velocities, the interface will in general move smoothly, with velocity $`\dot{H}=V`$, while for low $`V`$ stick-slip motion is likely. In the overdamped regime, the amplitudes of the slips are controlled by $`V`$ and $`k`$, and the statistics of the potential. In the limit $`V0`$, the interface motion exhibits scale invariance; $`V`$ plays a role analogous to $`h`$ in the sandpile. (The limits $`V0`$ and $`k0`$ have a particular significance, since the block can explore the pinning-force landscape quasistatically.) The fine tuning of $`F`$ to $`F_c`$ in the constant-force driving has been replaced by fine tuning $`V`$ to zero. This parameter tuning corresponds, once again, to an infinite time-scale separation. Finally, we note that restoring inertia ($`M>0`$) results in a discontinuous depinning transition with hysteresis, resulting in stick-slip motion of the sort associated with friction . Once again, we have transformed an absorbing-state phase transition ($`F=F_c`$) into SOC by driving the system at a rate approaching zero ($`V0`$). But there appear to be fundamental differences between sandpiles and driven interfaces. In the sandpile, but not in the driven interface, the order parameter is coupled to a conserved density. The sandpile, moreover, does not involve a quenched random field as does the driven interface. Despite these apparent differences, close connections have been suggested between the two kinds of model . We review this correspondence in the next subsection, following Ref. . ### B Sandpiles and Driven Interfaces Consider the BTW fixed-energy sandpile in two dimensions; let $`H_i(t)`$ be the number of times site $`i`$ has toppled since time zero. To write a dynamics for $`H_i`$, we observe that the occupation $`z_i(t)`$ of site $`i`$ differs from its initial value, $`z_i(0)`$, due to the inflow and the outflow of particles at this site. The outflow is given by $`4H_i(t)`$, since each toppling expels four particles. The inflow can be expressed as $`_{NN}H_j(t)`$: site $`i`$ gains a particle each time one of its nearest neighbors topples. Summing the above contributions we obtain: $`z_i(t)`$ $`=`$ $`z_i(0)+{\displaystyle \underset{jNNi}{}}H_j(t)4H_i(t)`$ (8) $`=`$ $`z_i(0)+_D^2H_i(t),`$ (9) where $`_D^2`$ stands for the discretized Laplacian. Since sites with $`z_i(t)4`$ topple at unit rate, the dynamics of $`H_i`$ is given by $`{\displaystyle \frac{dH_i}{dt}}`$ $`=`$ $`\mathrm{\Theta }[z_i(0)+_D^2H_i(t)3]`$ (10) $`=`$ $`\mathrm{\Theta }[_D^2H_i(t)+FF_{p,i}],`$ (11) where $`dH_i/dt`$ is shorthand for the rate at which the integer-valued variable $`H_i(t)`$ jumps to $`H_i(t)+1`$, and $`\mathrm{\Theta }(x)=1`$ for $`x>0`$ and is zero otherwise. In the second line, $`F\zeta 3`$ and $`F_{P,i}z_i(0)\zeta `$. (Recall that $`\zeta =z_i(t)`$ for all $`t`$.) Thinking of $`H_i(t)`$ as a discretized interface height, Eq. (11) represents an overdamped, driven interface in the presence of columnar noise, $`F_{p,i}`$, which takes independent values at each site, but does not depend upon $`H_i`$, as it does in the interface model discussed in the preceding subsection. We see from this equation that tuning $`\zeta `$ to its critical value $`\zeta _c`$ is analogous to tuning the driving force to $`F_c`$. If we replace the discrete height $`H_i`$ in Eq. (11) with a continuous field, $`H(x,t)`$ (and similarly for $`F_p`$), and replace the $`\mathrm{\Theta }`$-function by its argument, we obtain the Edwards-Wilkinson surface-growth model with columnar disorder, which has been studied extensively . The similarity between the present height representation and the dynamics of a driven interface suggests that the critical point of the BTW fixed-energy sandpile belongs to the universality class of linear interface depinning with columnar noise, if the rather violent nonlinearity of the $`\mathrm{\Theta }`$-function is irrelevant. (The latter remains an open question. A height representation for the Manna sandpile is also possible, but is complicated by the stochastic nature of the dynamics.) Applying the recipe of Sec. III to the driven interface, we would impose open boundaries, which drag behind the interior as they have fewer neighbors pulling on them; eventually the interface gets stuck. When this happens, we ratchet up the “force” at a randomly chosen site (in effect, $`F_{p,j}F_{p,j}1`$ at the chosen site). The dynamics is then attracted to the critical point. Once again, we may trade supervision (checking if the interface is stuck) for a constant drive ($`FF+ht`$) in the limit $`h0`$. ### C Self-Organized Directed Percolation and the Bak-Sneppen Model Take the square lattice and rotate it by $`45^o`$, so that each site has two nearest neighbors in the row above, and two below. The sites exist in one of two states, “wet” and “dry.” The states of the sites in the zeroth (top) row can be assigned at will; this defines the initial condition. A site in row $`i1`$ is obliged to be dry if both its neighbors in row $`i1`$ are dry; otherwise, it is wet with probability $`p`$, and dry with probability $`1p`$. This stochastic cellular automaton is called site directed percolation. Like the contact process, it possesses an absorbing state: all sites dry in row $`k`$ implies all dry in all subsequent rows. The dynamics of site DP can be expressed in a compact form if we define the site variable $`x_j^i`$ to be zero (one) if site $`j`$ in row $`i`$ is wet (dry). The variables in the next row are given by $$x_j^{i+1}=\mathrm{\Theta }[\mathrm{max}\{\eta _j^i,\mathrm{min}\{x_{j1}^i,x_{j+1}^i\}\}p],$$ (12) where the $`\eta _j^i`$ are independent random variables, uniform on . If both neighbors in the preceding row are in state 1, $`x_j^{i+1}`$ must also equal 1; otherwise $`x_j^{i+1}=0`$ with probability $`p`$. Thinking of the rows as time slices, we see that site DP is a parallel-update version of the contact process: increasing $`p`$ renders the survival and propagation of the wet state more probable, and is analogous to increasing $`\lambda `$ in the CP. Just as the CP has a phase transition at $`\lambda _c`$, site DP has a transition from the absorbing to the active phase at $`p_c0.7054`$. We’ve already dismissed the contact process (and by extension DP) as starting models for realizing SOC via the recipe of Sec. III. Remarkably, however, it is possible to define a parameter-free stochastic process whose stationary state reproduces the properties of critical DP . This process, self-organized directed percolation (SODP), is obtained by replacing the discrete variables in Eq. (12) by real variables which store the value of one of the previous $`\eta _j^i`$. In place of Eq. (12) we have simply $$x_j^{i+1}=\mathrm{max}\{\eta _j^i,\mathrm{min}\{x_{j1}^i,x_{j+1}^i\}\},$$ (13) Notice that parameter $`p`$ has disappeared, along with the $`\mathrm{\Theta }`$ function. Starting from a distribution with $`x_j^0<1`$ for at least one site (but otherwise arbitrary), this process eventually reaches a stationary state, characterized by the probability density $`\mu (x)`$. One finds that $`\mu (x)`$ is zero for $`x<p_c`$ (the critical value of site DP), jumps to a nonzero value (infinity, in the thermodynamic limit), at $`p_c`$, and decreases smoothly with $`x`$ for $`x>p_c`$. The process has discovered the critical value of site directed percolation! Hansen and Roux explained how this works : for any $`p[0,1]`$ the probability that $`x_j^i<p`$ is $`p`$ if either or both of the neighbors in the previous time slice have values less that $`p`$ (i.e., if the smaller of $`x_{j1}^{i1}`$ and $`x_{j+1}^{i1}`$ is $`<p`$), and is zero if $`x_{j1}^{i1}`$ and $`x_{j+1}^{i1}`$ both exceed $`p`$. This is exactly how the “wet” state propagates in site DP, with parameter $`p`$, if we equate the events ‘site $`j`$ in row $`i`$ is wet’ and ‘$`x_j^i<p`$.’ It follows that in the stationary state, $$\mathrm{Pr}[x_j^i<p]=_0^p\mu (x)𝑑x,$$ (14) equals the probability $`P(p)`$ that a randomly chosen site is wet, in the stationary state of site DP with parameter $`p`$. This explains why $`\mu (x)=0`$ for $`x<p_c`$, and why $`\mu (p_c)`$ is infinite in the infinite-size limit ($`dP/dp`$ is infinite at $`p_c`$). The spatio-temporal distribution of DP is also reproduced; for example, the joint probability $`\mathrm{Pr}[x_j^ip_c,x_k^ip_c]`$ decays as a power law for large separations $`|jk|`$. The process effectively studies all values of $`p`$ at once, greatly improving efficiency in simulations. Stochastic processes corresponding to other models (DP on other lattices, bond instead of site DP, epidemic processes) have also been devised . It seems unlikely, on the other hand, that such a real-valued stochastic process exists for activated random walkers or other fixed-energy sandpiles. (Of course, such a process would be of great help in studying sandpiles!) SODP doesn’t fit into the same scheme as sandpiles or driven interfaces. It is a real-valued stochastic process that generates, by construction, the probability distribution of DP for all parameter values, including $`p_c`$. The process itself does not have a phase transition; all sites are active (except those inside a sequence of 1’s — a configuration that will never arise spontaneously), since there is a finite probability for $`x_j^i`$ to change. SODP is self-organized in the sense that its stationary probability density has a critical singularity, without the need to adjust parameters. If we choose to regard SODP as an instance of SOC, we must recognize that the path in this case is very different from that in sandpiles or driven interfaces; the system is not being forced to its critical point by external supervision or driving. Rather, SODP is directed percolation implemented in a different (parameter-free) way. Furthermore, the dynamics embodied in Eq. (13) seems a much less realistic description of a physical system than is driven-interface motion, or even the rather artificial dynamics of a sandpile model. In the rather unlikely event that SODP were realized in a natural system, it would not immediately yield a scale-invariant “signal” such as avalanches or fractal patterns. The latter would require a second process (or an observer) capable of making fine distinctions among values of $`x`$ in the neighborhood of $`p_c`$. So the kind of SOC represented by SODP does not appear a likely explanation of scale invariance in nature. A (fanciful) interpretation of Eq. (13) is that $`x_j^i`$ represents the “fitness” of an individual, which mates with its neighbor to produce an offspring that inherits the fitness of the less-fit parent. This offspring survives if her fitness exceeds that of an interloper, whose fitness is random. (It is, to put it crudely, as if an established population were constantly challenged by a flux of outsiders.) Seen in this light, SODP bears some resemblance to the evolutionary dynamics represented, again in very abstract form, in the Bak-Sneppen model . Here, the globally minimum fitness variable, along with its nearest neighbors, is replaced by a random number at each time step. (If the $`x_j^i`$ are associated with different species, then the appearance of a new species at site $`i`$ affects the fitness of the “neighboring” species in the community in an unpredictable way.) This is a kind of extremal dynamics, a scheme we’ve already encountered in the driven interface model; another familiar example is invasion percolation . Interestingly, the Bak-Sneppen model shows the same qualitative behavior as SODP: a singular stationary distribution of fitness values $`x_j^i`$. The model exhibits avalanches in which replacement of a single species provokes a large number of extinctions. In the interface under extremal dynamics, the height $`H_i(t)`$ cannot decrease. In the Bak-Sneppen model momentary setbacks are allowed ($`x_j`$ can decrease in a given step), but individuals of low fitness will eventually be culled. This is like an interface model with quenched noise such that, on advancing to a new position, an element may encounter a force that throws it backward, for a net negative displacement. The Bak-Sneppen model is equivalent to a driven interface in which the least-stable site and its neighbors are updated at the same moment; we can, as before, trade extremal dynamics for a limit of infinitely slow driving. Another way of obtaining the extremal dynamics of the Bak-Sneppen model as the limit of a stochastic process with purely local dynamics is as follows . Take a one-dimensional lattice (with periodic boundaries, for definiteness), and assign random numbers $`x_j`$, independent and uniform on , to each site $`j=1,\mathrm{},L`$. The configuration evolves via a series of “flips,” which reset the variables at three consecutive sites. That is, when site $`j`$ flips, we replace $`x_{j1}`$, $`x_j`$, and $`x_{j+1}`$ with three independent random numbers again drawn uniformly from . Let the rate of flipping at site $`j`$ be $`\mathrm{\Gamma }e^{\beta x_j}`$, where $`\mathrm{\Gamma }^1`$ is a characteristic time, irrelevant to stationary properties. The Bak-Sneppen model is the $`\beta \mathrm{}`$ limit of this process. We can get some insight into the stationary behavior via a simple analysis. Let $`p(x)dx`$ be the probability that $`x_j[x,x+dx]`$. The probability density satisfies $$\frac{dp(x)}{dt}=e^{\beta x}p(x)2_0^1e^{\beta y}p(x,y)𝑑y+3_0^1e^{\beta y}p(y)𝑑y$$ (15) where $`p(x,y)`$ is the joint density for a pair of nearest-neighbor sites. If we invoke a mean-field factorization, $`p(x,y)=p(x)p(y)`$, then $$\frac{dp(x)}{dt}=p(x)\left[e^{\beta x}+2I(\beta )\right]+3I(\beta ),$$ (16) where $$I(\beta )_0^1e^{\beta y}p(y)𝑑y.$$ (17) The stationary solution is $$p_{st}(x)=\frac{3}{2}\frac{1e^{2\beta /3}}{1e^{2\beta /3}+e^{\beta x}(e^{\beta /3}1)}.$$ (18) The solution is uniform on for $`\beta =0`$, as we’d expect, but in the $`\beta \mathrm{}`$ limit we have $`p_{st}=(3/2)\mathrm{\Theta }(x1/3)\mathrm{\Theta }(1x)`$. The probability density develops a step-function singularity, as in the Bak-Sneppen model. Not surprisingly, the mean-field approximation yields a rather poor prediction for the location of the singularity, which actually falls at 0.6670(1) . (A two-site approximation places the singularity at $`x=1/2`$.) The main point is that to realize singular behavior from a local dynamics, we have to tune a parameter associated with the rates. Alternative mean-field treatments of the Bak-Sneppen model may be found in Refs. and We can construct a model with the same local dynamics as that of Bak and Sneppen by replacing $`x_{j1}`$, $`x_j`$, and $`x_{j+1}`$ at rate 1, if and only if $`x_j<r`$. (Sites with $`x_j>r`$ may only change if they have a nearest neighbor below the cutoff.) In other words, only sites with $`x_j<r`$ are active; an updated site is active with probability $`r`$. There is an absorbing phase for small $`r`$, separated from an active phase by a critical point at some $`r_c`$ . To get the Bak-Sneppen model we forget about $`r`$, and declare the unique active site in the system to be the one with the smallest value of $`r`$. In the infinite-size limit, the probability to find a site with $`r<r_c`$ is zero, in the stationary state. We see once again that in extremal dynamics we tune the order parameter itself to zero: at each instant there is exactly one active site, so $`\rho _a=1/L`$. Grassberger and Zhang observed that the existence of SODP “casts doubt on the significance of self-organized as opposed to ordinary criticality.” A similar doubt might be prompted by our recipe for turning a conventional critical point self-organized. Of course, even if it is possible to explain all instances of SOC in terms of an underlying conventional critical point, the details of the critical behavior remain to be understood . Numerical results indicate that sandpiles, driven interfaces, and the Bak-Sneppen model define a series of new universality classes. Furthermore, no one has been able to derive the critical exponents of avalanches in SOC sandpiles, even the in abelian case, where quite a lot is known about the stationary properties . ## V SOC and the Real World Since SOC has been claimed to be the way “nature works” , we would expect to find a multitude of experimental examples where this concept is useful. Originally, SOC was considered an explanation of power laws, that it provided a means whereby a system could self-tune its parameters. So once we saw a power law we could claim that it was self-generated and “explained” by SOC. The previous sections should have convinced the reader that there are no self-tuning critical points, although sometimes the fine tuning is hidden, as in sandpile models. Therefore, an “explanation” of experimentally observed power laws requires the identification of the tuning parameters controlling the scaling, as in any other ordinary critical point. Here, we will restrict the discussion to experimental examples of avalanche behavior, leaving aside fractals and $`1/f`$ noise whose connection with SOC is rather loose. (It is worth mentioning that a physical realization of self-organized criticality — without avalanches, as far as is known — has been identified in liquid <sup>4</sup>He at the $`\lambda `$ point .) Following the introduction of SOC, there were many experimental studies of avalanches, which sometimes yielded power-law distributions over a few decades, leading to endless discussions about the applicability of SOC. If we accept that self-tuned critical points don’t exist, then these controversies have no basis: we have only to understand how far the system is from the critical point, and why. This task has only been accomplished in a few cases; several examples require further study, both experimental and theoretical. Soon after the sandpile model was introduced, several experimental groups measured the size-distribution of avalanches in granular materials. Unfortunately, real sandpiles do not seem to be behave as the SOC sandpile model. Experiments show large periodic avalanches separated by quiescent states with only limited activity . While for small piles one could try to fit the avalanche distribution with a power law over a limited range , the behavior would eventually cross over, on increasing the system size, to the one described above, which is not scale-invariant. The reason sand does not behave like an ideal sandpile is the inertia of the rolling grains. As grains are added, the inclination of the pile increases until it reaches the angle of maximal stability $`\theta _s`$, at which point grains start to flow. Due to inertia, the flow does not stop when the inclination falls to $`\theta _c`$, but continues until the inclination attains the angle of repose $`\theta _s<\theta _c`$ . Since the “constant force” (i.e., with $`\theta `$ controlled) version of the system has a first-order transition, it is no wonder that criticality is not observed in the slowly driven case. So if we want to see power-law avalanches we have to get rid of the inertia of the grains. Grains with small inertia exist and can be bought in any grocery store: rice! A ricepile was carefully studied in Oslo: elongated grains poured at very small rate gave rise to a convincing power-law avalanche distribution . The previous discussion tells us that in order to observe a power-law avalanche distribution, inertia should be negligible. As discussed in Sec. IV, the motion of domain walls in ferromagnets and flux lines in type II superconductors is overdamped, due to eddy-current dissipation; these systems are probably the cleanest experimental example of power-law distributed avalanches. The noise produced by domain wall motion is known as the Barkhausen effect, first detected in 1919 . Since then, it has become a common non-destructive method for testing magnetic materials, and its statistical properties have been studied in detail. When the external magnetic field is increased slowly, it is possible to observe well separated avalanches, whose size distribution is a power-law over more than three decades . Domain walls are pushed through a disordered medium by the magnetic field, so we would expect a depinning transition at some critical field $`H=H_c`$. One should note, however, that the “internal field” acting on the domains is not the external field, but is corrected by the demagnetizing field $`H_dNM`$ where $`M`$ is the magnetization and $`N`$ the demagnetizing factor. Therefore, if we increase the external field at constant rate $`c`$, the internal field is given by $`H_{int}=ctNM=ctky(t)`$, where $`y(t)`$ is the average position of the domain wall and $`kN`$. We recognize here the recipe for SOC given in section IVA: in the limit $`c0`$ and $`k0`$ we expect to reach the critical point. This fact was indeed verified in experiments, where $`k`$ can be controlled by modifying the aspect ratio of the sample . In type II superconductors, when the external field is increased, flux lines are nucleated at the border of the sample and pushed inside by their mutual repulsion. The resulting flux density gradient, known as the Bean state , bears some analogy with sandpiles, as pointed out by De Gennes over 30 years ago . Unlike sand grains, flux lines have little inertia, and exhibit power-law distributed avalanches . It is still unclear whether in this system a mechanism similar to the demagnetizing field maintains a stationary avalanche state, as in ferromagnets. Simulations of flux line motion have reproduced experimental results in part, but a complete quantitative explanation of the phenomenon is lacking. Another broad class of phenomena where SOC has been invoked on several occasions is that of mechanical instabilities: fracture, plasticity and dislocation dynamics. Materials subject to an external stress release acoustic signals that are often distributed as power laws over a limited range: examples are the fracturing of wood , cellular glass and concrete , in hydrogen precipitation , and in dislocation motion in ice crystals . While it has often been claimed that these experiments provided a direct evidence of SOC, this is far from being established. In fact, fracture is an irreversible phenomenon and often the acoustic emission increases with the applied stress with a sharp peak at the failure point. There is thus no stationary state in fracture, and it is debated whether the failure point can even be described as a critical point or a first-order transition . The situation might be different in plastic deformation, where a steady state is possible ; recent experimental measurements of dislocation motion appear promising . We may mention some related phenomena in which avalanches have been observed, and a theoretical interpretation is still debated: martensitic transformations , sliding systems and sheared foams . Finally, it is worth mentioning that SOC has been claimed to apply to several other situations in geophysics, biology and economics. We have deliberately chosen to discuss only those examples for which experimental observations are accurate and reproducible. Even in these cases, it is often hard to distinguish between SOC-like behavior and other mechanisms for generating power laws. This task appears almost hopeless in situations where only limited data sets are available, such as for forest fires , or evolution , and remains very complicated in other cases, such as earthquakes, as witnessed by the vast theoretical literature on the subject . ## VI Summary The genesis of self-organized criticality is a continuous absorbing-state phase transition. The dynamical system exhibiting the latter may be continuous or discrete, deterministic or stochastic, conservative or dissipative. To transform a conventional phase transition to SOC, we couple the local dynamics of the dynamical system to an external supervisor, or to a “drive” (sources and sinks with rates {$`h`$}). The relevant parameter(s) {$`\zeta `$} associated with the phase transition are controlled by the supervisor or drive, in a way that does not make explicit reference to {$`\zeta `$}. One such path involves slow driving ($`h0`$), in which the interaction with the environment is contingent on the presence or absence of activity in the system (linked to {$`\zeta `$} via the absorbing-state phase transition). Another, extremal dynamics, restricts activity to the least stable element in the system, thereby tuning the order parameter itself to zero. Specific realizations of this rather abstract (and general) scheme have been discussed in the preceding sections: sandpiles, forest fires, driven interfaces, and the Bak-Sneppen model. Viewed in this light, “self-organized criticality” refers neither to spontaneous or parameter-free criticality, nor to self-tuning. It becomes, rather, a useful concept for describing systems that, in isolation, would manifest a phase transition between active and frozen regimes, and that are in fact driven slowly from outside. Acknowledgements We thank M. Alava, A. Barrat, A. Chessa, D.Dhar, P.L. Garrido, P. Grassberger, D. Head, K.B. Lauritsen, J. Machta, E. Marinari, R. Pastor-Satorras, L. Pietronero and A.Stella for continuous discussions and fruitful “arguments” on the significance of SOC. M.A.M., A.V., and S.Z. Acknowledge partial support from the European Network Contract No. ERBFMRXCT980183. M.A.M. also acknowledges support from the Spanish Ministerio de Educación under project DGESEIC, PB97-0842’. Figure Captions Fig. 1. Stationary density $`\rho `$ of active sites versus density of walkers $`\zeta `$ in one-dimensional ARW. The inset is a logarithmic plot of the same data, where $`\mathrm{\Delta }=\zeta \zeta _c`$. The slope of the straight line is 0.43. Fig. 2. Stationary avalanche-size distribution in the one-dimensional Manna sandpile with sequential dynamics, for $`L=500`$, 1000, 2000, and 5000 (left to right) . Lower inset: finite-size scaling plot of the data in the main graph, $`\mathrm{ln}P^{}`$ versus $`\mathrm{ln}s^{}`$, with $`s^{}L^{2.21}s`$ and $`P^{}L^{2.43}P`$. Upper inset: stationary density $`\zeta `$ in the inner 10% of the system, plotted versus $`1/L`$. The diamond on the $`\zeta `$ axis is the critical density of ARW.
no-problem/9910/cs9910017.html
ar5iv
text
# Finite-Resolution Hidden Surface Removal Research partially supported by National Science Foundation grant DMS-9627683, U.S. Army Research Office MURI grant DAAH04-96-1-0013, by a Sloan Fellowship. See http://www.uiuc.edu/~jeffe/pubs/gridvis.html for the most recent version of this paper. ## 1 Introduction Hidden surface removal is one of the oldest and most important problems in computer graphics. Informally, the problem is to compute the portions of a given collection of geometric objects, typically composed of triangles, that are visible from a given camera position and orientation in $`\mathrm{I}\mathrm{R}^3`$. In order to simplify calculation (and explanation), a projective transformation is applied so that the camera is at $`\mathrm{}`$ on the $`z`$-axis and all vertices have positive $`z`$-coordinates, so that the desired image is the orthographic projection of the objects onto the $`xy`$-plane. We will follow the computer graphics convention that the $`y`$-axis is vertical, the $`x`$\- and $`z`$-axes are horizontal, and the positive $`z`$-axis points into the image, directly away from the camera. Historically, there are two different approaches to solving the hidden surface removal problem: *object space* and *image space* . Object-space (or *analytic*) hidden surface removal algorithms compute which object is visible at every point in the image plane. Image-space algorithms, on the other hand, compute only the object visible at a finite number of sample points. We will refer to the sample points themselves as “pixels”, since usually there is one sample point per pixel in the final finite-resolution output image. (Image-space algorithms that compute sub-pixel features do so by sampling a small constant number of points within each pixel area .) The output of an object-space hidden surface removal algorithm is the projection of the forward envelope<sup>1</sup><sup>1</sup>1This would be called the “lower envelope” if the $`z`$-axis were vertical. of the objects onto the image plane. The resulting planar decomposition is called the *visibility map* of the objects. Each face of the visibility map is a maximal connected region in which a particular triangle, or no triangle, is visible. McKenna described the first algorithm to compute visibility maps in $`\mathrm{\Theta }(n^2)`$ time, where $`n`$ is the number of input triangles; see also . This is optimal in the worst-case. Unfortunately, McKenna’s algorithm *always* uses $`\mathrm{\Theta }(n^2)`$ time and space, even when the visibility map is much simpler. This shortcoming led to the development of several *output-sensitive* algorithms, whose running time depends not only on $`n`$, the number of triangles, but also on $`v`$, the number of vertices of the visibility map. The fastest algorithm currently known, an improvement by Agarwal and Matoušek of an algorithm of de Berg *et al.* , runs in time $`O(n^{1+\epsilon }+n^{2/3+\epsilon }v^{2/3})`$. For more details on these and other object-space algorithms, see the comprehensive survey by Dorward . The primary disadvantage of the object-space approach is the potentially high complexity of the visibility map, which may be much larger than the number of pixels in the desired output image, even for reasonable input sizes. Even when the visibility map is not overly complex, it may contain features that are significantly smaller than the area of a pixel and thus do not contribute to the final image. This is especially problematic for applications of hidden-surface removal such as form-factor calculation, where the desired output image may have very low resolution . For image-space algorithms, on the other hand, the ultimate goal is to compute, for each pixel in the finite-resolution output image, which triangle is visible at that pixel. The most common image-space approach is the *$`z`$-buffer* algorithm introduced by Catmull . This algorithm loops through the triangles, determining the pixels that each triangle covers in the image plane; each pixel maintains the smallest $`z`$-coordinate of any triangle covering that pixel. While this algorithm can be implemented cheaply in hardware, it can still be quite slow when the number of triangles and number of pixels are both large. Another common image-space approach is *ray casting* (also known as *ray tracing* and *ray shooting*): Shoot a ray from each pixel in the positive $`z`$-direction and compute the first triangle it hits. Using using the best known unidirectional ray-shooting data structure, due to Agarwal and Sharir , we obtain an algorithm with running time $`O((n+n^{2/3}p^{2/3}+p)\mathrm{log}^3n)`$, where $`n`$ is the number of triangles and $`p`$ is the number of pixels. Erickson’s lower bound for Hopcroft’s problem suggests that this algorithm is close to optimal in the worst case, even for the simpler problem of deciding whether any ray hits a triangle. In practice, ray-shooting queries are answered by walking through a decomposition of space determined by the triangles, such as an octtree , triangulation , or binary space partition . See for related theoretical results. Neither $`z`$-buffers nor ray casting exploit *spatial coherence* in the image. If the visible triangles are fairly large, then the same triangle is likely to be visible through several pixels; however, both algorithms compute the triangle behind each pixel independently. Spatial coherence is exploited to some extent by more complex techniques such as Warnock’s subdivision algorithm , hierarchical $`z`$-buffers , hierarchical coverage masks , and frustum casting , which construct a recursive quadtree-like decomposition of the image. However, this decomposition can be much more complex than the visibility map if, for example, the image contains several long diagonal lines. In particular, if the pixels lie in a regular $`\sqrt{p}\times \sqrt{p}`$ grid, the decomposition can have complexity $`\mathrm{\Theta }(v\sqrt{p})`$. A few hidden surface removal algorithms work simultaneously in both image and object space . The basic idea for these algorithms is to traverse the objects in order from front to back (*i.e.*, by increasing “distance” from the camera), decomposing the image plane using the boundaries of the objects and reverting to ray casting when any region of the image plane contains only a single pixel. Of course, there are sets of triangles do not have a consistent depth order, and these algorithms will produce incorrect output if such as set is given as input. While a depth order can always be guaranteed by first decomposing the triangles with a binary-space partition tree, this could produce $`\mathrm{\Theta }(n^2)`$ triangle fragments in the worst case . One exception to the depth-order requirement is Weiler and Atherton’s algorithm , which decomposes the image plane into regions within which the triangles can be depth-ordered; this algorithm can also produce a quadratic number of fragments. The image decompositions produced by these algorithms produce cannot be analyzed either in terms of the complexity of the visibility map, since they can decompose triangles even when all depth cycles are invisible, or in terms of the number of pixels, since they can produce many fragments that do not contain a pixel at all. In this paper, we propose another hybrid approach to hidden surface removal that exploits both spatial coherence and finite precision. In Section 2, we define the *sampled visibility map* of a set of triangles with respect to a set of pixels. Like other image-decomposition schemes, the sampled visibility map adapts to local changes in the image complexity, but unlike previous approaches its complexity is easily bounded both by the complexity of the analytic visibility map and by the number of pixels. We describe an output-sensitive algorithm to construct the sampled visibility map in Section 4. Our algorithm runs in time $`O(n^{1+\epsilon }+n^{2/3+\epsilon }t^{2/3}+p)`$, where $`t`$ is the number of trapezoids in the output. This matches the performance of Agarwal and Matoušek’s visibility map algorithm when $`t=\mathrm{\Theta }(v)`$, and almost matches Agarwal and Sharir’s ray-casting algorithm when $`t=\mathrm{\Theta }(p)`$. Our algorithm does not require the triangles to have a consistent depth order, nor does it decompose the triangles into orderable fragments. A variant of our algorithm allows a sequence of pixels to be specified online, at an additional amortized cost of $`O(\mathrm{log}t)`$ time per pixel. The algorithms presented in Section 4 assume that the pixels are just arbitrary points in the $`xy`$-plane. In Section 5, we describe a faster algorithm for the common special case where the pixels are the vertices of a rectangular grid. The running time of our improved algorithm is $`O(n^{1+\epsilon }+n^{2/3+\epsilon }t^{2/3}+t\mathrm{log}p)`$, which is sublinear in the number of pixels unless the output is very large. Finally, in Section 6, we discuss some other applications of our techniques and suggest directions for further research. ## 2 Definitions Let $`\mathrm{\Delta }`$ be a set of $`n`$ disjoint triangles in $`\mathrm{I}\mathrm{R}^3`$, where every vertex has positive $`z`$-coordinate. We say that a triangle $`\mathrm{}\mathrm{\Delta }`$ is *visible* at a point $`\pi `$ in the $`xy`$-plane if a ray from $`\pi `$ in the positive $`z`$-direction hits $`\mathrm{}`$ before any other triangle in $`\mathrm{\Delta }`$. The *visibility map* $`Vis(\mathrm{\Delta })`$ is a planar straight-line graph, each face of which is a maximal connected region in which a particular triangle in $`\mathrm{\Delta }`$, or no triangle, is visible. See Figure 1(a). Let $`v`$ denote the number of vertices of $`Vis(\mathrm{\Delta })`$. The *trapezoidal decomposition* of $`Vis(\mathrm{\Delta })`$, denoted $`Trap(Vis(\mathrm{\Delta }))`$, is obtained by decomposing each face into (possibly degenerate) trapezoids, two of whose edges are vertical (*i.e.*, parallel to the $`y`$-axis). The vertical edges are defined by casting segments up and/or down from each vertex into the face, stopping when the segment reaches another edge of the face. Faces are decomposed individually, so only one vertical edge is added at a “T” vertex where one visible edge appears to overlap another. See Figure 1(b). Finally, let $`P`$ be a set of $`p`$ points in the $`xy`$-plane, called “pixels”. The *sampled visibility map* of $`\mathrm{\Delta }`$ with respect to $`P`$, denoted $`Vis(\mathrm{\Delta }P)`$, is the subset of trapezoids in $`Trap(Vis(\mathrm{\Delta }))`$ that contain at least one pixel in $`P`$. See Figure 1(d). Let $`t`$ denote the number of trapezoids in $`Vis(\mathrm{\Delta }P)`$. Clearly $`tp`$, since every trapezoid in $`Vis(\mathrm{\Delta }P)`$ contains at least one pixel. Moreover, since $`Trap(Vis(\mathrm{\Delta }))`$ contains at most $`2v`$ trapezoids, $`t2v`$. ## 3 Building One Trapezoid in $`𝑽𝒊𝒔\mathbf{(}𝚫\mathbf{}𝑷\mathbf{)}`$ A naïve algorithm for constructing the sampled visibility map would start by constructing $`Vis(\mathrm{\Delta })`$. While this approach leads to an algorithm that is nearly optimal in the worst case, it cannot give an output-sensitive algorithm. To obtain output-sensitivity, we construct $`Vis(\mathrm{\Delta }P)`$ one trapezoid at a time. Specifically, for each pixel $`\pi P`$, if it is unmarked, we determine the trapezoid $`\tau _\pi Trap(Vis(\mathrm{\Delta }))`$ that contains it and then mark all the pixels contained in $`\tau _\pi `$. We construct each trapezoid in four stages, which are illustrated in Figure 2. Stage 1. Forward Ray Shooting. The first stage in constructing the trapezoid $`\tau _\pi `$ is to determine the triangle visible at $`\pi `$; see Figure 2(a). This is done by answering a unidirectional ray-shooting query, exactly as in the standard ray-casting algorithm. Agarwal and Sharir describe a data structure that can answer such queries in time $`O((n/\sqrt{s})\mathrm{log}^3n)`$ using a data structure of size $`O(s\mathrm{log}^2n)`$, where $`s`$ can be chosen anywhere between $`n`$ and $`n^2`$. The preprocessing time needed to construct this data structure is $`O(s\mathrm{log}^3n)`$. Agarwal and Sharir’s data structure is actually designed to answer point stabbing queries for a set of triangles in the plane—How many triangles contain the query point? Like most geometric range searching structures, their data structure defines a number of *canonical subsets* of the set of triangles. For any point $`\pi `$, the set of triangles that contain $`\pi `$ can be expressed as the disjoint union of $`O((n/\sqrt{s})\mathrm{log}^3n)`$ canonical subsets; in particular, this implies that the triangles in any canonical subset have a common intersection. Their data structure stores the size of each canonical subset, and a stabbing query is answered by summing up the sizes of the relevant canonical subsets. To obtain a unidirectional ray-shooting data structure for our three-dimensional triangles $`\mathrm{\Delta }`$, it suffices to build Agarwal and Sharir’s point-stabbing structure for the $`xy`$-projection of $`\mathrm{\Delta }`$. Now the triangles in any canonical subset have a consistent front-to-back ordering, and the triangle visible through $`\pi `$ can be computed by comparing the front-most triangles in the relevant canonical subsets. Stage 2. Vertical Ray Dragging. The second stage in our algorithm finds the top and bottom edges of $`\tau _\pi `$. Intuitively, these edges are computed by dragging the ray through $`\pi `$ parallel to the $`y`$-axis until the triangle hit by the ray changes. See Figure 2(b). Let $`\mathrm{}_\pi \mathrm{\Delta }`$ be the triangle visible at $`\pi `$, and let $`\overline{\pi }`$ be the point on $`\mathrm{}_\pi `$ with the same $`x`$\- and $`y`$-coordinates as $`\pi `$. (To avoid the case where no triangle is visible at $`\pi `$, we can assume that there is a large “background” triangle.) Let the *curtain* of a triangle edge be the set of points on or directly behind that edge; each curtain is a three-sided unbounded polygonal slab, two of whose sides are parallel to the $`z`$-axis . We can find the top (resp. bottom) edge of $`\tau _\pi `$ by shooting a ray from $`\overline{\pi }`$ along the surface of $`\mathrm{}_\pi `$ in the positive (resp. negative) $`y`$-direction. In each case, the desired edge is determined either by an edge of $`\mathrm{}_\pi `$ or by the first curtain hit by the ray. Agarwal and Matoušek describe a data structure of size $`O(sn^\epsilon )`$, where $`s`$ can be chosen anywhere between $`n`$ and $`n^2`$, that can answer ray shooting queries in a set of $`n`$ curtains in time $`O(n^{1+\epsilon }/\sqrt{s})`$, after $`O(sn^\epsilon )`$ preprocessing time. Stage 3. Oblique Ray Dragging. Each vertical trapezoid edge in $`Trap(Vis(\mathrm{\Delta }))`$ is defined either by a vertex of $`Vis(\mathrm{\Delta })`$ at its top or bottom endpoint, or by a projected visible vertex of some triangle, which could lie anywhere in the edge. The third stage looks for the nearest vertices of $`Vis(\mathrm{\Delta })`$ along the top and bottom edges of $`\tau _\pi `$. Let $`\widehat{e}`$ and $`\stackrel{ˇ}{e}`$ be triangle edges whose projections lie directly above and below $`\pi `$, respectively, and let $`\widehat{\pi }\widehat{e}`$ and $`\stackrel{ˇ}{\pi }\stackrel{ˇ}{e}`$ be the points with the same $`x`$-coordinate as $`\pi `$. Intuitively, we drag rays to the left and right along $`\widehat{e}`$ (resp. $`\stackrel{ˇ}{e}`$), starting at $`\widehat{\pi }`$ (resp. $`\stackrel{ˇ}{\pi }`$), stopping when each ray either hits another edge or hits an endpoint of $`\widehat{e}`$ (resp. $`\stackrel{ˇ}{e}`$); see Figure 2(c). Just as in the previous stage, each ray-dragging queries can be answered by performing a ray-shooting query in the set of curtains in time $`O(n^{1+\epsilon }/\sqrt{s})`$, using Agarwal and Matoušek’s data structure . Stage 4. Swath Sweeping. In the final stage, we search for the visible triangle vertices whose projections lie beneath the top edge and above the bottom edge of $`\tau _\pi `$, and whose $`x`$-coordinates are closest to that of the pixel $`\pi `$. Since we know that $`\mathrm{}_\pi `$ is the only triangle visible in $`\tau _\pi `$, it suffices to consider only triangle vertices in front of the plane containing $`\mathrm{}_\pi `$, and we can assume that all such vertices are visible. Intuitively, we take the vertical swath of rays swept in Stage 2, and sweep it to the left and right until it hits such a vertex. We will describe only the leftward sweep; the rightward sweep is completely symmetric. It suffices to build a data structure storing only the rightmost vertex of each triangle, *i.e.*, the vertex with largest $`x`$-coordinate. To answer a swath-sweep query, we perform a binary search over the $`x`$-coordinates of the rightmost vertices, looking for the left edge of $`\tau _\pi `$. At each step in the binary search, we determine whether a particular query trapezoid $`\tau `$ contains the projection of any visible triangle vertex. Intuitively, at each step, we cast a trapezoidal beam forward into the triangles and ask whether it encounters any triangle vertex before it hits $`\mathrm{}_\pi `$. In fact, since the trapezoid $`\tau `$ lies entirely inside the projection of $`\mathrm{}_\pi `$, it suffices to check whether the beam hits a vertex before the plane containing $`\mathrm{}_\pi `$. We answer this trapezoidal beam query using a *multi-level data structure*. Multi-level data structures allow us to decompose complicated queries into simpler components and devise independent data structures for each component. The size (resp. query time) of a multi-level structure is the size (resp. query time) of its largest (resp. slowest) component, times an additional factor of $`O(\mathrm{log}n)`$ per “level”. See for detailed descriptions of this standard technique. We decompose trapezoidal beam queries by observing that the beam through a trapezoid $`\tau `$ contains a visible vertex $`v`$ if and only if * the $`x`$-coordinate of $`v`$ is between the left and right $`x`$-coordinates of $`\tau `$, * the $`xy`$-projection of $`v`$ is below the top edge of $`\tau `$, * the $`xy`$-projection of $`v`$ is above the bottom edge of $`\tau `$, and * $`v`$ is in front of the plane containing $`\mathrm{}_\pi `$. The first level of our data structure is a range tree over the $`x`$-coordinates of the triangle vertices, which lets us (implicitly) find the vertices between the left and right sides of $`\tau `$ in $`O(\mathrm{log}n)`$ time. This level requires $`O(n)`$ space and $`O(n\mathrm{log}n)`$ preprocessing time. The next two levels let us (implicitly) find all the vertices whose $`xy`$-projections lie in the wedge determined by the top and bottom edges of $`\tau `$. One level finds the points below the top edge; the other finds the points above the bottom edge. For each level, we can use a two-dimensional halfplane query structure of Agarwal and Sharir , which answers queries in time $`O((n/\sqrt{s})\mathrm{log}n)`$ using space $`O(s)`$ and preprocessing time $`O(s\mathrm{log}n)`$, for any $`s`$ between $`n`$ and $`n^2`$. Finally, in the last level, we need to determine whether any vertex lies in front of the plane containing $`\mathrm{}_\pi `$. We can answer this three-dimensional halfspace emptiness query in $`O(\mathrm{log}n)`$ time, $`O(n)`$ space, and $`O(n\mathrm{log}n)`$ preprocessing time using (for example) a Dobkin-Kirkpatrick hierarchy . Combining all four levels, we obtain a data structure of size $`O(s\mathrm{log}^3n)`$, with preprocessing time $`O(s\mathrm{log}^4n)`$, that can answer any trapezoidal beam query in time $`O((n/\sqrt{s})\mathrm{log}^4n)`$, for any $`nsn^2`$. Thus, the overall time to answer a swath-sweep query is $`O((n/\sqrt{s})\mathrm{log}^4n)`$. Putting all four stages together, we obtain the following result. The time and space bounds are dominated by the curtain ray-shooting data structure in the second and third stages. ###### Lemma 3.1 Let $`\mathrm{\Delta }`$ be a set of $`n`$ disjoint triangles in $`\mathrm{I}\mathrm{R}^3`$, and let $`s`$ be a parameter between $`n`$ and $`n^2`$. We can build a data structure of size $`O(sn^\epsilon )`$ in time $`O(sn^\epsilon )`$, so that for any point $`\pi `$ in the $`xy`$-plane, we can construct the trapezoid $`\tau _\pi Trap(Vis(\mathrm{\Delta }))`$ containing $`\pi `$ in time $`O(n^{1+\epsilon }/\sqrt{s})`$. ## 4 All Trapezoids ### 4.1 Guessing the Output Size Lemma 3.1 implies that for any positive integer $`t`$, the total time to build our data structure and construct $`t`$ trapezoids is $$O\left(\left(s+\frac{tn}{\sqrt{s}}\right)n^\epsilon \right).$$ If we know the number of trapezoids in advance, we can minimize the total running time by setting $`s=\mathrm{max}(n,t^{2/3}n^{2/3})`$; the resulting time bound is $`O(n^{1+\epsilon }+t^{2/3}n^{2/3+\epsilon })`$ In our application, however, $`t`$ is the number of trapezoids in $`Vis(\mathrm{\Delta }P)`$, which is not known in advance. We can obtain the same overall running time in this case using the following standard doubling trick, previously used in several output-sensitive analytic hidden surface removal algorithms . Our algorithm runs in several phases. In the $`i`$th phase, we build the data structures from scratch with $`s=2^{2i/3}n`$, and then construct the next $`2^i\sqrt{n}`$ trapezoids. The time for the $`i`$th phase is $`O(2^{2i/3}n^{1+\epsilon })`$, and the algorithm goes through $`\mathrm{log}_2(t/\sqrt{n})`$ phases before it builds all $`t`$ trapezoids. ### 4.2 Avoiding Redundant Queries To construct the entire collection of trapezoids $`Vis(\mathrm{\Delta }P)`$, we loop through the pixels, constructing the trapezoid containing each pixel. Of course, if we have already built the trapezoid containing a pixel, we want to avoid building it again. There are at least two methods for avoiding this redundancy. In one method, after we construct each new trapezoid, we search for and mark all the pixels it contains. This can be done in $`O((n/\sqrt{s})\mathrm{log}^3n+k)`$ time using a two-dimensional range searching data structure similar to the one used in the last stage of our trapezoid-construction algorithm . Here, $`s`$ is as usual an arbitrary parameter between $`n`$ and $`n^2`$, and $`k`$ is the number of pixels marked. Since the leading term is dominated by the time to construct the trapezoid in the first place, this approach adds only an $`O(p)`$ term to the overall running time of our hidden-surface removal algorithm. ###### Theorem 4.1 Let $`\mathrm{\Delta }`$ be a set of $`n`$ disjoint triangles in $`\mathrm{I}\mathrm{R}^3`$, and let $`P`$ be a set of of $`p`$ points in the $`xy`$-plane. We can construct $`Vis(\mathrm{\Delta }P)`$ in time $`O(n^{1+\epsilon }+t^{2/3}n^{2/3+\epsilon }+p)`$, where $`t`$ is the number of trapezoids in $`Vis(\mathrm{\Delta }P)`$. Alternately, before querying a new pixel, we could first check whether it is contained in an earlier trapezoid by performing a point location query. We can maintain a semi-dynamic set of $`t`$ interior-disjoint vertical trapezoids and answer point-location queries in $`O(\mathrm{log}t)`$ time per query and $`O(\mathrm{log}t)`$ amortized time per insertion, using a data structure of size $`O(t\mathrm{log}t)`$ based on a segment tree with fractional cascading . This approach adds $`O(p\mathrm{log}t)`$ to the overall running time of our hidden-surface removal algorithm; the total insertion time $`O(t\mathrm{log}t)`$ is dominated by other terms. Although this approach is slower than pixel-marking, it can be used when the set of pixels is presented online instead of being fixed in advance. ###### Theorem 4.2 Let $`\mathrm{\Delta }`$ be a set of $`n`$ disjoint triangles in $`\mathrm{I}\mathrm{R}^3`$, and let $`P`$ be a sequence of $`p`$ points in the $`xy`$-plane. We can maintain $`Vis(\mathrm{\Delta }P)`$ as points in $`P`$ are inserted, in total time $`O(n^{1+\epsilon }+t^{2/3}n^{2/3+\epsilon }+p\mathrm{log}t)`$, where $`t`$ is the number of trapezoids in $`Vis(\mathrm{\Delta }P)`$. ## 5 A Faster Sweepline Algorithm <br>(“Traps and Gaps”) The algorithms described in the previous section work for arbitrary sets of pixels. However, in most applications of hidden surface removal, the pixels form a regular integer grid. In this case, we can improve the performance of our algorithm using the following sweep-line approach, suggested by Pavan Desikan and Sariel Har-Peled. Without loss of generality, we assume that the pixel lattice is aligned with the coordinate axes. Our improved algorithm sweeps a vertical line $`\mathrm{}`$ across the image plane from left to right. At any position, $`\mathrm{}`$ intersects several trapezoids in $`Vis(\mathrm{\Delta }P)`$. Between any pair of such trapezoids is a *gap*, which is a possibly unbounded, possibly empty triangle bounded on the left by $`\mathrm{}`$, bounded above by the line through the bottom edge of the higher trapezoid, and bounded below by the line though the top edge of the lower trapezoid. Gaps can intersect each other, as well as other trapezoids that hit $`\mathrm{}`$. See Figure 3 (a). We store the traps and gaps in two data structures: a balanced binary search tree and a priority queue. The binary tree stores the traps and gaps in sorted order from top to bottom along $`\mathrm{}`$. For the priority queue, the priority of a trap is the $`x`$-coordinate of its right edge, and the priority of a gap is the $`x`$-coordinate of the leftmost pixel(s) inside the gap, or $`\mathrm{}`$ if the gap contains no pixels. Since the sweepline clearly crosses at most $`t`$ trapezoids, the cost of inserting or deleting a trap or gap from the sweep structures is $`O(\mathrm{log}t)`$. Note that this is bounded by both $`O(\mathrm{log}p)`$ and $`O(\mathrm{log}n)`$. To find the leftmost pixel inside a gap, we use the following two-dimensional integer programming result of Kanamaru *et al.* ; see also . For related results on enumerating integer points in convex polygons, see . ###### Lemma 5.1 (Kanamaru *et al.* ) Given a convex $`m`$-gon $`\mathrm{\Pi }`$, we can find the lowest leftmost integer point in $`\mathrm{\Pi }`$, or determine that $`\mathrm{\Pi }`$ contains no integer points, in time $`O(m+\mathrm{log}\delta )`$, where $`\delta `$ is the length of the shortest edge of the axis-aligned bounding box of $`\mathrm{\Pi }`$. ###### Corollary 5.2 We can find a leftmost pixel in any gap, or determine that there is no such pixel, in $`O(\mathrm{log}p)`$ time. We do not require that the sweepline structures always contain every trapezoid in $`Vis(\mathrm{\Delta }P)`$ that intersects $`\mathrm{}`$. Instead we maintain the following weaker invariant: whenever $`\mathrm{}`$ reaches a pixel $`\pi `$, the trapezoid $`\tau _\pi Vis(\mathrm{\Delta }P)`$ containing $`\pi `$ must be stored in the sweepline structures. We initialize the sweep structure with a single gap that contains the entire pixel grid. When the sweepline $`\mathrm{}`$ reaches the right edge of a trap $`\tau `$, we delete it from the sweep structure. We also delete the gaps immediately above and below $`\tau `$ and insert the new larger gap. Manipulating the sweep structure requires $`O(\mathrm{log}t)`$ time, and finding a leftmost pixel in the new gap requires $`O(\mathrm{log}p)`$ time, so the total time required to kill a single trap is $`O(\mathrm{log}p)`$. When $`\mathrm{}`$ reaches a leftmost pixel $`\pi `$ in a gap $`\gamma `$, we perform a trapezoid query to find the trap $`\tau _\pi Vis(\mathrm{\Delta }P)`$ containing $`\pi `$. We then delete $`\gamma `$ from the sweep structure, insert $`\tau _\pi `$, and insert the two smaller gaps $`\gamma ^+`$ and $`\gamma ^{}`$ immediately above and below $`\tau _\pi `$. The new trap $`\tau _\pi `$ may not contain all the leftmost pixels in $`\gamma `$; any omitted pixels will now be a leftmost pixel in either $`\gamma ^+`$ or $`\gamma ^{}`$. If some new gap contains a leftmost pixel of $`\gamma `$, it will be (recursively) filled before the sweepline moves again. (We can avoid creating such “transient” gaps by storing the highest and lowest leftmost pixels in each gap $`\gamma `$, at an additional cost of $`O(1)`$ time when $`\gamma `$ is created, but this improves the running time of our algorithm by at most a constant factor.) For each new trap inserted, our algorithm spends $`O(\mathrm{log}p)`$ time and creates at most two new gaps. Every gap except the initial one is created when a trap is inserted or deleted. We can charge at most three gaps to each trap: the gaps immediately above and below when the trap is inserted, and the gap left behind when the trap is deleted. The total number of gaps created over the entire algorithm is therefore at most $`3t+1`$. It follows that the total time spent finding leftmost pixels is $`O(t\mathrm{log}p)`$, and the total time spent manipulating the sweep structures is $`O(t\mathrm{log}t)`$. All the remaining time is spent on trapezoid queries, as in our earlier algorithms. ###### Theorem 5.3 Let $`\mathrm{\Delta }`$ be a set of $`n`$ disjoint triangles in $`\mathrm{I}\mathrm{R}^3`$, and let $`P`$ be a regular lattice of $`p`$ points in the $`xy`$-plane. We can construct $`Vis(\mathrm{\Delta }P)`$ in time $`O(n^{1+\epsilon }+t^{2/3}n^{2/3+\epsilon }+t\mathrm{log}p)`$, where $`t`$ is the number of trapezoids in $`Vis(\mathrm{\Delta }P)`$. Note that this time bound is sublinear in $`p`$ unless $`t=\mathrm{\Omega }(p/\mathrm{log}p)`$. Moreover, the $`O(t\mathrm{log}p)`$ term is dominated by other terms unless either $`t`$ is nearly quadratic in $`n`$ or $`p=2^{\mathrm{\Omega }(n^c)}`$ for some positive constant $`c`$. ## 6 Discussion and Open Problems One interesting special case of hidden-surface removal is the so-called *window rendering* problem, where the objects are axis-aligned horizontal rectangles. A simple modification of our algorithm solves this problem in time $`O(n\mathrm{log}^2n+t\mathrm{log}n+p)`$ which compares favorably with the best analytic solutions . If the pixels form a regular grid, we can improve the running time to $`O(n\mathrm{log}^2n+t\mathrm{log}n)`$ using the sweepline approach. (Note that this time bound does not depend at all on the number of pixels!) Similar improvements can be made for $`c`$-oriented polyhedra . It seems likely that our techniques can also be extended to other special cases of hidden surface removal with faster analytic solutions, such a polyhedral terrains and objects whose union has small complexity . Perhaps the most interesting open question is whether sampled visibility maps, or some other similar image decomposition, can be constructed efficiently *in practice*. As we mentioned in the introduction, ray-shooting queries are already answered in practice by walking through a spatial decomposition defined by the input objects. The same spatial decomposition can also be used to answer ray-dragging queries and trapezoidal beam queries. Since curved models are often polygonalized (and complex polyhedral models are often simplified) so that each polygonal facet covers only a few pixels, a practical implementation may require the sampled visibility map to be redefined in terms of higher-level objects, such as convex polyhedra or algebraic surface patches, instead of triangles. A practical implementation of our ideas would have other interesting applications. By changing the order in which our algorithm processes pixels, we can make it suitable for progressive rendering, where the quality of the image improves smoothly over time as finer and finer details are computed, or foveated rendering, where fine details are more important in certain areas of the image than others. Another possible application is occlusion culling . By sampling the visibility map at a small number of random points, we can quickly establish a set of simple occluders that can be used for conservative visibility tests. The occlusion tests themselves would be slightly simpler than in earlier approaches: A triangle is invisible if its projection is contained in some trapezoid. Sampled visibility maps exploit spatial coherence well in a global sense; the number of regions is never much larger than the size of the visibility map. In a more local sense, however, there is clearly room for improvement. Consider an image that contains mostly empty space, except for a large number of small triangles near the boundary. The sampled visibility map consists of several tall thin trapezoids, but a better decomposition would have a single region covering most of the image. It would be interesting to develop decompositions with better local behavior—perhaps where the expected size of the component containing a random pixel is maximized, or where the size of a component is tied to the local feature size of the visibility map near that component—but with the same global properties as sampled visibility maps. #### Acknowledgment. I thank Pavan Desikan and Sariel Har-Peled for suggesting the sweep-line approach described in Section 5 and pointing out several relevant references .
no-problem/9910/gr-qc9910072.html
ar5iv
text
# Towards a regular type N vacuum gravitational field ## Abstract An exact twisting type N vacuum solution is found. It has gauge and curvature invariants which are regular in the angular coordinates and decays to flat spacetime for big retarded times. 04.20J The study of gravitational radiation far away from bounded sources becomes more and more important. It is governed by the Petrov type N part of the Riemann tensor and exact expanding vacuum solutions of type N seem to play fundamental role in the understanding of this phenomenon. All non-twisting solutions are known but have singularities which extend to spatial infinity (singular pipes). This is a typical feature of the class of Robinson-Trautman solutions . For the twisting case only the Hauser solution is known but it is singular too . A linearised twisting solution also possesses singular pipes , but a third and fourth order iterations of it turn out to be regular . However, higher orders become singular again and even the third order iteration leads to singularities in the recently discovered curvature invariant . In this work we present an exact twisting solution which satisfies the existing criteria for regularity. We shall work in the tetrad and coordinate system used in . The metric is $$ds^2=\frac{2d\zeta d\overline{\zeta }}{\rho \overline{\rho }P^2}2\mathrm{\Omega }\left(dr+Wd\zeta +\overline{W}d\overline{\zeta }+N\mathrm{\Omega }\right)$$ (1) $`\mathrm{\Omega }=du+Ld\zeta +\overline{L}d\overline{\zeta }`$ where $`r`$ is the coordinate along a null congruence, $`u`$ is the retarded time and $`\zeta `$ is connected to the usual spherical angles by $$\zeta =\sqrt{2}\mathrm{tan}\frac{\theta }{2}e^{i\phi }$$ (2) The metric components are determined by a real function $`P`$ and a complex function $`L`$ which are $`r`$-independent: $$\rho =\left(r+i\mathrm{\Sigma }\right)^1$$ (3) $$W=\rho ^1L_u+i\mathrm{\Sigma }$$ (4) $$N=r\left(\mathrm{ln}P\right)_u+K/2$$ (5) where $`=_\zeta L_u`$ and the gauge invariants $`\mathrm{\Sigma }/r`$ and $`K/r^2`$ will be written out later, after their simplification. The basic functions $`P`$ and $`L`$ satisfy the field equation $$Im\overline{}\overline{}V=0$$ (6) where $`V_u=P`$ and the condition that the third Weyl scalar $`\mathrm{\Psi }_3`$ vanishes and hence the solution is of type N or higher: $$\psi ^1=0$$ (7) $$\psi ^1=P^1\left(\overline{}\overline{}V\right)_u$$ (8) We shall use an adaptation of Stephani’s method which was proposed originally for type II solutions. Let us choose as a basic element the complex potential $`\psi (\zeta ,\overline{\zeta },u)`$ which satisfies $`\psi =0`$. This immediately gives $$L=\psi _\zeta /\psi _u$$ (9) and $`\psi `$ may be taken as an arbitrary function $`\psi =\psi (\zeta ,\overline{\zeta },u)`$. This relation can be inverted to obtain $`u=u(\zeta ,\overline{\zeta },\psi )`$. This allows to consider $`V`$ as a function of $`\zeta ,\overline{\zeta }`$ and $`\psi `$, $`V=V(\zeta ,\overline{\zeta },u(\zeta ,\overline{\zeta },\psi ))=V(\zeta ,\overline{\zeta },\psi )`$. Then $`V=V_\zeta `$ and $`[,_\psi ]=0`$. Eqs. (7) and (8) become $$\overline{\psi }H_{\zeta \zeta }=H$$ (10) $$P=i\psi _uH$$ (11) where $`iH=V_\psi `$. In order to simplify Eqs. (6) and (10) we discuss the subclass of functions $`\psi `$ with $`u`$-independent real part: $$\psi =q(\zeta ,\overline{\zeta })ih(\zeta ,\overline{\zeta },u)$$ (12) $`\overline{\psi }=2q(\zeta ,\overline{\zeta })\psi `$ Then Eq. (11) shows that $`H`$ is real and replacing $`\overline{\psi }`$ from Eq. (12) into Eq. (10) we can determine $`H=H(\zeta ,\overline{\zeta },\psi )`$. Eqs. (9) and (11) give $`P`$ and $`L`$ . A non-trivial $`P`$ demands $`H0,`$ $`h_u0`$. Eq. (12) also gives $`\overline{}V=V_{\overline{\zeta }}+2q_{\overline{\zeta }}V_\psi `$. Applying the commutator $`[,\overline{}]=2Q_\psi `$, where $`Q=q_{\zeta \overline{\zeta }}`$, to Eq. (6) one obtains $$2Q\left(H_{\zeta \overline{\zeta }}+QH_\psi +2q_{\overline{\zeta }}H_{\psi \zeta }\right)+Q_{\overline{\zeta }}H_\zeta +Q_\zeta H_{\overline{\zeta }}+2q_{\overline{\zeta }}Q_\zeta H_\psi +\frac{1}{2}Q_{\zeta \overline{\zeta }}H=0$$ (13) The twist $`\mathrm{\Sigma }`$ and the other gauge invariant $`K`$ (since we always take $`r`$ as directly given and independent of $`\zeta `$ ) read $$\mathrm{\Sigma }=h_uH^2Q$$ (14) $$K=h_u^2H\left(\overline{}+\overline{}\right)\mathrm{ln}H$$ (15) We must ensure that $`Q0`$ for a non-zero twist. The only non-trivial Weyl scalar is $$\mathrm{\Psi }_4=i\rho h_u^3H^2\psi ^2$$ (16) The non-vanishing of $`h_u`$ and $`H`$ guarantees that $`\mathrm{\Psi }_40`$ and the solution is of type N. The true curvature invariant $`I`$ of Bičák and Pravda reads $$\sqrt{I}=48\left(\rho \overline{\rho }\right)^2\mathrm{\Psi }_4\overline{\mathrm{\Psi }}_4$$ (17) Instead of Eqs. (6) and (7) we have derived a system of two linear second order with respect to $`H`$ equations (10) and (13). The procedure of finding a solution is the following. We fix $`q`$ and take an arbitrary $`h`$. $`L`$ is given by Eq. (9). We substitute from Eq. (12) $`\overline{\psi }(\zeta ,\overline{\zeta },\psi )`$ in Eq. (10) and solve it for $`H,`$ $`\psi `$ being treated as a parameter. Then we try to satisfy Eq. (13). $`P`$ is found from Eq. (11). A possible problem is the reality of $`H`$. The system (10) and (13) is simpler than the numerous non-linear high order equations found for $`L`$ in the gauge $`P=1`$ when Killing or homothetic Killing vectors are present \[13-16\]. Let us find next a solution $`H(x,u)`$, where $`x=\zeta +\overline{\zeta }`$, possessing the simplest possible twist $`Q=Q_0=const,`$ $`q=Q_0x^2/2`$. Eq. (10) becomes $$\left(Q_0x^2\psi \right)H_{xx}=H$$ (18) Introduction of the variable $`z=Q_0x^2/\psi `$ transforms Eq. (18) into $$z\left(1z\right)H_{zz}+\frac{1}{2}\left(1z\right)H_z+H/4Q_0=0$$ (19) This is a hypergeometric equation and one of its fundamental solutions is $`F(a,b,c,z)`$ where $`a\left(1+2a\right)=1/2Q_0,`$ $`b=a1/2,`$ $`c=1/2`$. We must make $`H`$ real, having in mind that $`\overline{z}=z/\left(z1\right)`$ and $`\overline{\psi }=\psi \left(z1\right)`$. The necessary linear transformations of $`F`$ exist when $`c=2b`$, which means $`Q_0=4/3`$ . Then $$H_1=\left(i\psi \right)^{3/4}F(\frac{3}{4},\frac{1}{4},\frac{1}{2},z)$$ (20) $$z=4x^2/3\psi $$ (21) $$\psi =\frac{2}{3}x^2ih$$ (22) is one real solution of Eq. (19). The expressions $`ca`$ and $`cb`$ are not integers and the second real solution is $$H_2=x\left(i\psi \right)^{1/4}F(\frac{1}{4},\frac{3}{4},\frac{3}{2},z)$$ (23) Both solutions may be expressed also by Legendre functions because $`c=2b`$ . They satisfy an identity which follows from Eq. (21): $$\frac{3}{4}H=\psi H_\psi +\frac{1}{2}xH_x$$ (24) When $`Q=Q_0`$ and $`_\zeta =_x`$ Eq. (13) becomes $$H_{xx}+Q_0H_\psi +2Q_0xH_{\psi x}=0$$ (25) Exploiting Eqs. (18) and (24) we see that $`H_{1,2}`$ satisfy Eq. (25) for $`Q_0=4/3`$. In total, we have found two type N solutions depending on an arbitrary real function $`h(x,u)`$ and given by $$P_{1,2}=h_uH_{1,2}$$ (26) $$L=4ix/3h_u+h_x/h_u$$ (27) The function $`h`$ may be chosen in such a way that $`\mathrm{\Sigma },`$ $`K,`$ $`\mathrm{\Psi }_4`$ and $`I`$ remain regular in $`x`$ for $`\mathrm{}x\mathrm{}`$, i.e. possess no singular pipes. Namely, let us take $$h=g\left(u\right)\left(1+x^4\right)^1$$ (28) where $`g0`$ and is bounded, but otherwise arbitrary function. Then $$\mathrm{\Sigma }=4H^2g_u/3\left(1+x^4\right)$$ (29) $$K=2H_x\left(H_x+8xH_\psi /3\right)\left(1+x^4\right)^8g_u^2$$ (30) $$\sqrt{I}=48H^4g_u^6\left(r^2+\mathrm{\Sigma }^2\right)^3\left(1+x^4\right)^2\left[4x^4\left(1+x^4\right)^2/9+g^2\right]^2$$ (31) Eqs. (21) and (22) show that $`z`$ is always complex and $`z2`$. Hence, $`F`$ in Eqs. (20) and (23) is regular in $`x`$. Singular pipes do not arise in Eqs. (29), (30) and (31) even when $`x\mathrm{}`$ because $`Hx^{3/2},`$ $`H_xx^{1/2}`$ and $`H_\psi x^{1/2}`$. The condition $`g0`$ prevents the appearance of singularity in $`I`$ at $`x=0`$. Thus the gauge invariants $`\mathrm{\Sigma }`$ and $`K`$ and the true coordinate invariant $`I`$ are regular for any $`x`$ i.e. on the two-dimensional sphere. The function $`g\left(u\right)`$ is characteristic for gravitational radiation since it may bring ”news”. If $`g\left(u\right)=1+k\left(u\right)`$ and $`k_u0`$ when $`u\mathrm{}`$, the three invariants vanish for large retarded times. Finally, let us make connection with previous results about twisting type N fields. The even Hauser solution separates variables in the present coordinates : $$L_H=2\left(u+i\right)/x$$ (32) $$P_H=x^{7/2}f\left(u\right)$$ (33) $$f\left(u\right)=F(\frac{1}{8},\frac{3}{8},\frac{1}{2},u^2)$$ (34) Eq. (32) leads to the potential $$\psi _H=Q_0x^2\left(1iu\right)/2$$ (35) It falls within the subclass we are discussing. A combination of linear and quadratic transformations applied to $`f\left(u\right)`$ yields $$P_H=AP_1+BP_2$$ (36) where $`A`$ and $`B`$ are constants and $`h=2ux^2/3`$. The result for the odd solution is similar. This is not surprising because a linear combination of solutions of the linear equations (10) and (25) is also a solution if $`\psi `$ is the same. Eq. (36) fixes $`Q_0=4/3`$ in Eq. (35). Formulae (29-31) are replaced by $$\mathrm{\Sigma }_H=2x^5f$$ (37) $$K_H=\frac{8}{9}x^5S\left(u\right)$$ (38) $$\sqrt{I_H}=108f^4x^{10}\left(1+u^2\right)^2\left(r^2+4f^4x^{10}\right)^3$$ (39) where $`S\left(u\right)`$ $`=\left(9f/43uf_u\right)^2+9f_u^2`$. It is clearly seen that $`\mathrm{\Sigma }_H`$ and $`K_H`$ have singular pipes at $`x=\mathrm{}`$. Quite interestingly, $`I_H`$ is regular in $`x`$. When $`u\mathrm{},`$ $`fu^{3/4}`$ and $`\mathrm{\Sigma }_H`$ and $`K_H`$ diverge, but $`I_H0`$. There are other solutions of Eq. (10) with $`L`$ given by Eq. (32) and $`\psi `$ given by Eq. (35). They correspond to rational degenerations of the hypergeometric function for certain values of $`Q_0`$. When $`Q_0=1/2`$ and $`Q_0=1/6`$ Eq. (19) yields the solutions $$P_{C1}=2^6x^6\left(u^2+1\right)$$ (40) $$P_{C2}=12^{7/2}x^7\left(u^2+1\right)$$ (41) respectively. The constants in the above equations are irrelevant since $`P`$ is determined up to a multiplicative constant. These are the Collinson’s semi-solutions , because they do not satisfy the constraint equation (25). This work was supported by the Bulgarian National Fund for Scientific Research under contract F-632.
no-problem/9910/cond-mat9910364.html
ar5iv
text
# Ion-ion correlations: an improved one-component plasma correction ## Abstract Based on a Debye-Hückel approach to the one-component plasma we propose a new free energy for incorporating ionic correlations into Poisson-Boltzmann like theories. Its derivation employs the exclusion of the charged background in the vicinity of the central ion, thereby yielding a thermodynamically stable free energy density, applicable within a local density approximation. This is an improvement over the existing Debye-Hückel plus hole theory, which in this situation suffers from a “structuring catastrophe”. For the simple example of a strongly charged stiff rod surrounded by its counterions we demonstrate that the Poisson-Boltzmann free energy functional augmented by our new correction accounts for the correlations present in this system when compared to molecular dynamics simulations. The classical one-component plasma (OCP) is an idealized model, in which a single species of ions moves in a homogeneous neutralizing background of opposite charge and interacts only via a repulsive Coulomb potential . Apart from its applications in plasma physics it is also commonly used in soft matter physics as one of the simplest possible approaches for modeling correlations when studying polyelectrolytes, charged planes or charged colloids . The general idea is the following: Compute the OCP free energy as a function of bulk density $`n_\mathrm{B}`$ and use this expression in the spirit of a local density approximation (LDA) as a correlation correction for the inhomogeneous system (i.e., $`n_\mathrm{B}n(𝒓)`$). The total excess free energy is the volume integral over the free energy density and thus becomes a functional of $`n(𝒓)`$. Many alternative and more sophisticated methods based on integral equations have been developed for treating this correlation problem. Even though they offer results which are in good agreement with Monte-Carlo simulations, they do not provide any intuitive insight into the physics governing ionic solutions. There is, however, a fundamental problem with the local density approaches: The OCP free energy is not a convex function of density . This implies that it cannot be used in a thermodynamically stable way within LDA, since the system can lower its total free energy by developing local inhomogeneities and increasing its density in one region at the expense of another (disregarding any surface effects) . Once started, this continues as a runaway process and the overall system collapses to a point. This feature is already seen on the level of the Debye-Hückel plus hole (DHH) approximation , which is an extension of the original Debye-Hückel (DH) theory for the special case of the OCP, and the instability it gives rise to has been termed “structuring catastrophe” in this context . The proper way for avoiding this difficulty thus requires modifications of the one-component plasma model itself. The new theory, referred to as the Debye-Hückel-Hole-Cavity (DHHC) approach, remains simple and can be used within LDA to account for correlation effects present in more complex ionic solutions, as will be shown in an example at the end of the paper, where we compare its predictions to simulational results of a model system. Since the necessary changes to DHH will turn out to be surprisingly tiny, it is worthwhile to briefly recall the way in which DHH theory arrives at a free energy for the OCP. For definiteness, we assume a system of $`N`$ identical point-particles of valence $`v`$ and (positive) unit charge $`q`$ inside a volume $`V`$ with a uniform neutralizing background of density $`vn_\mathrm{B}`$ and dielectric constant $`\epsilon `$. According to the DH approach, the potential $`\varphi `$ created by a central ion (i.e., fixed at the origin) and all its surrounding ions results from solving the spherically symmetric Poisson equation $$^2\varphi (r)=\varphi ^{\prime \prime }(r)+\frac{2}{r}\varphi ^{}(r)=\frac{4\pi }{\epsilon }\rho (r)$$ (1) under the requirement that the charge density is $`\rho (𝒓)=qv\delta (𝒓)`$ at the central ion and that the rest of the mobile ions rearrange themselves in the uniform background in accordance with the Boltzmann distribution $`\rho (r)=vqn_\mathrm{B}\mathrm{exp}[\beta vq\varphi (r)]vqn_\mathrm{B}`$. Combining this with eq. (1) yields the nonlinear Poisson-Boltzmann (PB) equation, while linearization of the exponential function in the mobile ion density gives $`\rho (r)=\epsilon \kappa ^2\varphi (r)/4\pi `$ together with the famous Debye-Hückel solution for the potential, $`\varphi (r)\mathrm{e}^{\kappa r}/r`$, illustrating the rearrangement of the other ions around the central one in order to screen the Coulomb interaction. Here, $`\kappa \sqrt{4\pi \mathrm{}n_\mathrm{B}}`$ is the inverse screening length, $`\mathrm{}=\mathrm{}_\mathrm{B}v^2`$, with $`\mathrm{}_\mathrm{B}=\beta q^2/\epsilon `$ being the Bjerrum length, and $`\beta =1/k_\mathrm{B}T`$. The problem with the DH theory is that the condition for linearization is obviously not satisfied for small $`r`$, where the potential is large — indeed, the particle density becomes negative and finally diverges at the origin. This defect was overcome by the DHH theory , which artificially postulates a correlation hole of radius $`h`$ around the central ion where no other ions are allowed. In this case the charge density is given by $$\rho (r)=\{\begin{array}{cc}\hfill qv(\delta (𝒓)n_\mathrm{B}):& rh\hfill \\ \hfill \epsilon \kappa ^2\varphi (r)/4\pi :& r>h.\hfill \end{array}$$ (2) The solution of the linearized PB equation with the appropriate boundary conditions (continuity of electric field and potential) yields the potential for both regions in dependence of $`h`$, which has to be fixed on physical grounds: At low temperatures the electrostatic repulsion dominates and the minimum ion separation essentially becomes the mean separation, so $`h=(4\pi n_\mathrm{B}/3)^{1/3}`$. At high temperatures, the hole size can be estimated by balancing Coulombic and thermal energy, which gives $`h=\mathrm{}`$. A systematic way to interpolate between these two limits results from excluding particles from a region where their potential energy is larger than some threshold. A natural choice is the thermal energy $`k_\mathrm{B}T`$, which leads to $$\kappa h=\omega 1\text{with}\omega =(1+3\mathrm{}\kappa )^{1/3}.$$ (3) Incidentally, this assumption also gives a continuous charge density across the hole boundary. Once the potential at the position of the central ion is known, the electrostatic contribution to the Helmholtz free energy density can be obtained by the Debye charging process , as was done previously by Penfold et al. : $$\frac{\beta f_{\mathrm{DHH}}}{n_\mathrm{B}}=\frac{1}{4}\left[1\omega ^2+\frac{2\pi }{3\sqrt{3}}+\mathrm{ln}\left(\frac{\omega ^2+\omega +1}{3}\right)\frac{2}{\sqrt{3}}\mathrm{arctan}\left(\frac{2\omega +1}{\sqrt{3}}\right)\right].$$ (4) The presented simple DHH analysis of the one-component plasma theory offers considerable insight into ionic systems and is in good agreement with Monte-Carlo simulations when fluctuations on the charge density are not relevant . In principle one can attempt to include such fluctuations by applying the bulk density-functional theory in a local way, i.e., $`n_\mathrm{B}n(𝒓)`$. The basic idea is to obtain the density distribution via functional minimization of the Helmholtz free energy $$\beta F_{\mathrm{OCP}}[n(𝒓)]=\mathrm{d}^3r\left\{n(𝒓)\mathrm{ln}\left(n(𝒓)V_\mathrm{p}\right)+\beta f_{\mathrm{DHH}}[n(𝒓)]\right\}$$ (5) under the constraint of global charge neutrality ($`V_\mathrm{p}`$ represents the volume of a particle.) Yet, this variational process does not lead to a well defined density profile, since $`f_{\mathrm{DHH}}(n)`$ asymptotically behaves like $`n^{4/3}`$ at high densities and therefore is not a convex function – with the implications already mentioned in the introduction. At small densities, however, the free energy density is convex and changes to a concave form only beyond a critical density $`n^{}7.8618/\mathrm{}^3`$ (see fig. 1). Hence, if during the process of actually computing $`n(𝒓)`$ such a density is never met, the theory does not “realize” its asymptotic instability and gives a finite (yet, meta-stable) answer. It has in fact been applied to account for correlations in the case of systems with low ionic strength ). Assuming the case of aqueous solutions ($`\mathrm{}_\mathrm{B}=7.14`$Å) and monovalent ions we find a critical density $`n^{}\mathrm{\hspace{0.33em}36}\text{mol/l}`$, which clearly is high enough to prevent a runaway process to set in. However, already for divalent and trivalent ions we find $`n^{}0.56\text{mol/l}`$ and $`0.049\text{mol/l}`$, respectively, which are sufficiently low to be realized and thus to trigger a collapse. Notice the strong dependence of $`n^{}`$ on valence, namely, on the sixth power. To circumvent the instabilities occurring at high densities erroneously attributed to the local density approach itself, a number of nonlocal free energies have been proposed . In these weighted density approximations (WDA) the local density is replaced by a spatially averaged quantity. The main problem with these methods is that the choice of the weighting function is somewhat arbitrary. In most cases it is obtained by relating the second variation of the free energy with the direct correlation function. At this point the WDA requires prior information about this function, which is not yet available and thus has to be calculated using different approaches (like, e.g., integral equation theories). Whatever choice one takes, it is still (i) quite arbitrary and (ii) leads to a series of approximations which (iii) instead of clarifying the physics tend to obscure it. The instabilities present in the local DHH approach can be properly overcome by recognizing that the failure of this model is due to the (too strong) requirement of local charge neutrality imposed by the LDA: A local fluctuation leading to an increase of particle density implies a corresponding increase in background density; therefore the fluctuation is not suppressed by an increase in repulsive Coulomb interactions but quite on the contrary favored by its decrease. A natural solution for that problem is to decouple the particle density from the background density and to apply the LDA just to the former one. This, however, leads to nonlinearities in the solution of the differential equation which spoil the simplicity of the DH and DHH approximations. The most simple solution is to exclude the neutralizing background from a cavity of radius $`a`$ placed around the central ion only, which is already sufficient to control the unphysical divergence of the particle density. Even though it does not accounts for excluded-volume effects , the parameter $`a`$ can in principle be identified with the diameter of the particles. In addition, the exclusion hole for $`arh`$ is retained in order to account for the electrostatic repulsion between two ions. Consequently, the charge density, which for the usual DHH theory is given by eq. (2), has now three regions: $$\rho (r)=\{\begin{array}{cc}\hfill qv\delta (𝒓):& r<a\hfill \\ \hfill qvn_\mathrm{B}:& arh\hfill \\ \hfill \epsilon \kappa ^2\varphi (r)/4\pi :& r>h.\hfill \end{array}$$ (6) The solution of the linearized PB equation with appropriated boundary conditions gives the potential in those regions: $$\psi (r)=\frac{ve_0}{4\pi \epsilon r}\times \{\begin{array}{cc}1\frac{r}{2\mathrm{}}\left[(\kappa h)^2(\kappa a)^2\right]\kappa rC_h:\hfill & 0r<a\hfill \\ 1\frac{r}{2\mathrm{}}\left[(\kappa h)^2(\kappa r)^2\right]\frac{1}{3\mathrm{}\kappa }\left[(\kappa r)^3(\kappa a)^3\right]\kappa rC_h:\hfill & ar<h\hfill \\ C_h\mathrm{e}^{\kappa (rh)}:\hfill & hr<\mathrm{},\hfill \end{array}$$ (7) with the abbreviation $$C_h=\frac{1}{1+\kappa h}\left(1\frac{(\kappa h)^3(\kappa a)^3}{3\mathrm{}\kappa }\right).$$ (8) In order to obtain the old theory in the limit $`a0`$ we choose the hole size $`h`$ to yield the same screening (i.e., the same amount of charge within $`h`$) as the DHH theory, which results in $$\kappa h=\left[(\omega 1)^3+(\kappa a)^3\right]^{1/3}.$$ (9) This expression has four important physical limits: zero/infinite temperature and low/high density. At low temperature the exclusion hole has maximum size and, like in the DHH case, behaves as $`h=(3/4\pi n_\mathrm{B}+a^3)^{1/3}`$. As the temperature is increased, the hole size shrinks, but contrary to DHH theory it does not vanishes and $`ha`$ as $`T\mathrm{}`$. At small densities, entropic effects compete with the Coulombic repulsion and $`h=\mathrm{}+a`$; for high densities, the exclusion hole decreases but is again limited below and $`ha`$. Using this prescription for $`h`$, the Helmholtz free energy can be obtained by Debye-charging the fluid: $$\frac{\beta f_{\mathrm{DHHC}}}{n_\mathrm{B}}=\frac{(\kappa a)^2}{4}_1^\omega \mathrm{d}\overline{\omega }\left\{\frac{\overline{\omega }^2}{2(\overline{\omega }^31)}\mathrm{\Omega }(\overline{\omega })^{2/3}+\frac{\overline{\omega }^3}{(1+\mathrm{\Omega }(\overline{\omega })^{1/3})(\overline{\omega }^2+\overline{\omega }+1)}\right\}$$ (10) with the abbreviation $$\mathrm{\Omega }(\overline{\omega })=(\overline{\omega }1)^3+\frac{(\kappa a)^3}{3\mathrm{}\kappa }(\overline{\omega }^31)$$ (11) and where $`\omega `$ is the same as in eq. (3). The integral can be solved numerically for given values of $`\mathrm{}_\mathrm{B}`$, $`v`$ and $`a`$. As in the DHH approach fluctuations are taken into account by allowing the density to become local; thus, $`n(𝒓)`$ is obtained by minimizing the free energy from eq. (5) with $`f_{\mathrm{DHH}}`$ replaced by $`f_{\mathrm{DHHC}}`$ as given by eq. (10). But differently from the DHH theory, the Debye-Hückel-Hole-Cavity free energy is a convex function of density and thus applicable within a local density approximation. This situation is depicted in fig. 1, where we plotted the previous expression of the free energy of the DHH theory together with the improved expression of the DHHC approach. Recall that the DHH free energy has a point of inflection at a critical density $`n^{}7.8618/\mathrm{}^3`$, which makes it unstable at high densities – particularly for multivalent ionic correlations, as is demonstrated in the right part of fig. 1. As an example, we apply this free energy as a correlation correction in the theoretical description of the screening of a charged rod, which is a simple model of biologically relevant stiff polyelectrolytes like DNA, actin filaments or microtubules. Much of the thermodynamic behavior of these molecules is determined by the distribution of the counterions around the polyion. As a model system we take a rod of radius $`r_0`$ and line charge density $`\lambda =0.959q/r_0`$ embedded in a cell of outer radius $`R=123.8r_0`$ and the complementary values $`\mathrm{}_\mathrm{B}/r_0=3`$, $`v=1`$ and $`\mathrm{}_\mathrm{B}/r_0=1`$, $`v=3`$ have been investigated, which on the plain PB level both give a fraction of condensed counterion (in the Manning sense) of roughly 65% . This system is thus strongly charged and one expects ionic correlations to become relevant. Indeed, the comparison between the distributions obtained by simulation and the ones from PB theory shows that the mean-field approach fails in the limit of high ionic strength. In reality the ions do not just interact with the average electrostatic field but if an ion is present in a position $`𝒓`$, it tends to push away other ions from that point. This effect becomes important at high densities, low temperatures and for multivalent ions. As discussed above, a simple way to improve PB theory is to extend the density functional to include a term of the form (10) which accounts for the correlations. The configurational free energy for the screened macroion solution can be partitioned into two terms: $$F_\mathrm{P}[n(𝒓)]=F_{\mathrm{PB}}[n(𝒓)]+\mathrm{d}^3rf_{\mathrm{DHHC}}[n(𝒓)].$$ (12) The first part $$F_{\mathrm{PB}}[n(𝒓)]=\mathrm{d}^3r\left\{k_\mathrm{B}Tn(𝒓)\mathrm{ln}\left(n(𝒓)V_\mathrm{p}\right)+\frac{1}{2}qvn(𝒓)\varphi [n(𝒓)]\right\}$$ (13) contains the ideal gas contribution of the small ions, the interaction with the macroion potential and the mean-field interaction between the counterions. Minimization of this expression under the constraint of global charge neutrality gives – together with the Poisson equation – the Poisson-Boltzmann equation. The inter-particle correlations are now approximately accounted for by adding an excess free energy, which is the second term in eq. (12) — the DHHC free energy in local density approximation. The equilibrium ion distribution minimizing the functional (12) is most easily found by means of a Monte-Carlo solver, as has been proposed elsewhere . The fraction of ions within a distance $`r`$, $$P(r)=\frac{1}{\lambda }_{r_0}^r\mathrm{d}\overline{r}\mathrm{\hspace{0.33em}2}\pi \overline{r}vqn(\overline{r}),$$ (14) obtained following this procedure is illustrated in fig. 2. Compared to the plain PB result the simulation shows a stronger condensation of ions in the vicinity of the rod, an effect which is more pronounced in the trivalent system. In both cases the increased condensation is reproduced by the correlation corrected PB functional from eq. (12). While in the case $`\mathrm{}_\mathrm{B}/r_0=3`$, $`v=1`$ the theoretical prediction practically overlaps the simulation, it somewhat overestimates correlations in the complementary case $`\mathrm{}_\mathrm{B}/r_0=1`$, $`v=3`$. It must, however, be noted that the ions in the simulation also interacted via a repulsive Lennard-Jones potential, giving them a diameter of roughly $`r_0`$. The expected reduction of particle density resulting from the additional hard core is not accounted for in the presented theory, but could easily be included along the lines of Refs. . In conclusion, we have shown that the failure of the local density approximation for the one-component plasma is due to the asymptotically concave free energy employed by the DHH theory. To eliminate this problem, we introduced a DHHC approach in which the uniform background is absent in the immediate vicinity of the central ion, which leads to a convex, thermodynamically stable free energy. Moreover, the local density functional theory derived from this assumption is able to correctly account for the correlations between small ions in the presence of a strongly charged macroion. This was demonstrated for the case of a stiff rodlike polyelectrolyte by comparing the integrated charge density to simulation results of the same model. A more detailed investigation of the applicability of the LDA to rodlike polyelectrolytes and to charged colloids is postponed to future work. ###### Acknowledgements. This work has been supported by the Brazilian agency CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico). M. C. B. would like to thank K. Kremer for his hospitality during a stay in Mainz, where most of this work was completed.
no-problem/9910/hep-th9910229.html
ar5iv
text
# References ITEP-TH-56/99 hep-th/xxnnmmm On heavy states in supersymmetric gluodynamics at large N A.Gorsky and K.Selivanov ITEP, Moscow, 117259, B.Cheremushkinskaya, 25 ## Abstract It is argued that there are states (quasiparticles) with masses ranging over the scales $`\mathrm{\Lambda }N_c^{1/3}÷\mathrm{\Lambda }N_c`$ in N=1 supersymmetric multicolor gluodynamics. These states exist in the form of quantum bubbles made out of the BPS domain walls. Analogous states are likely to exist in non-supersymmetric case as well. Recently a remarkable progress in understanding supersymmetric gauge theories took place. One of the main roles in those developments was played by the so-called BPS states which showed up practically in every instance when some kind of exact information about a gauge theory was available. The classical example was, of course, the construction of low energy effective action in N=2 supersymmetric Yang-Mills , where a crucial piece of information was obtained due to existence of the BPS saturated monopoles and dyons. N=1 supersymmetric Yang-Mills also possesses the BPS states. These are domain walls and (upon perturbing the theory with some operators or upon introducing some matter) strings . Notice that this time the BPS states are not point-like which is, of course, much harder to work with. It may easily happen that extendedness of the BPS states in N=1 supersymmetric gluodynamics is the reason why it is less treatable compared to the N=2 theory. Anyway, the role of the BPS extended objects in dynamics of N=1 theory is still to be understood, though there are permanent advances (see, e.g. -) of which the most inspiring for the present note was the paper ). The paper was concentrated about the fact that the BPS domain wall width scales as $`1/N_c`$ at large $`N_c`$. The point of the paper was that such behavior of the width assumes existence of heavy particles, $`mN_c`$ (of which the wall could be done) in the spectrum of multicolor N=1 supersymmetric gluodynamics, in addition to the traditional glueballs whose mass is independent of $`N_c`$ in the multicolor limit. It is worth stressing that these heavy particles were argued in to exist in the non-supersymmetric gluodynamics too. The purpose of the present note is to suggest a candidate for these heavy particles. We argue that they can, in turn, be done out of the BPS walls, namely, in the form of quantum bubbles done out of the BPS walls. It happens that at large $`N_c`$ these bubbles can be treated semiclassically, the thing wall approximation being applicable (because a typical radius of the bubble is much bigger than the wall width). Masses of these bubbles range over the scales $`\mathrm{\Lambda }N_c^{1/3}÷\mathrm{\Lambda }N_c`$. The BPS domain walls in N=1 supersymmetric gluodynamics were introduced in . They interpolate between the $`N_c`$ vacua distinguished by the value of gluino condensate. The tension of the wall between two adjacent vacua is of the order of $`N_c\mathrm{\Lambda }^3`$, where $`\mathrm{\Lambda }`$ stands for the scale of the theory. It is instructive to discuss the domain walls at large $`N_c`$ in the framework of M-theory, where they are represented as M5 branes wrapped on some cycles . This construction naturally explains that the wall tension scales as $`N_c`$. Later it was argued in refs. that the domain wall width $`\delta `$ scales as $`\delta 1/N_c`$. The arguments of refs. , were supplemented in by considering the $`N_c`$-behavior of the wall junctions and by studying various effective Lagrangians. Quantum bubbles (in a scalar theory with spontaneous symmetry breaking) were introduced in , where they arose as resonant states in the multi-particle production at a threshold. Let us remind main points concerning those bubbles. The action in the thin wall approximation reads $$S=4\pi \mu 𝑑tr^2\sqrt{1\dot{r}^2}$$ (1) where $`\mu `$ stands for the tension of the wall. Corresponding Hamiltonian reads $$H^2p^2=(4\pi \mu r^2)^2$$ (2) with the canonical momentum $`p`$. The classical trajectory with energy $`E`$ corresponds to oscillations of the bubble between the turning point $`r=r_0=(\frac{E}{4\pi \mu })^{1/2}`$ and $`r=0`$. It could be that instead of oscillating the bubble would quickly dissipate into outgoing waves. Below we give some arguments that this is not the case. The bubble becomes stable in the large $`N_c`$ limit. Therefore for the moment we proceed discussing bubbles in assumption of their stability. The part of the trajectory near zero radius, $`r\delta `$, cannot be described within the thin wall approximation. However in the case that $`r_0\delta `$ (which we show below is indeed the case) most of the evolution of the bubble proceeds within the applicability of the thin wall approximation. Via standard quasi-classical consideration this assumes that the wave function of the bubble decays quickly near zero radius so this region should not be essential. The oscillatory motion of the bubbles can be quantized and the discrete energy levels found by applying the Bohr - Sommerfield quantization rule: $$I(E)p𝑑r2\pi \nu (E)=2\pi n,$$ (3) where the integral runs over one full period of oscillation and contains the momentum $`p`$ determined by eq.(2) in the thin wall approximation. The quantity $`\nu (E)`$ is a correction to the thin wall limit, which arises from the contribution to the action of the motion at short distances $`r\delta `$, where the latter limit is not applicable. Since at such distances $`pE`$, by order of magnitude $`\nu (E)`$ can be estimated as $`\nu (E)E\delta `$. The integral in eq.(3) is of the order of $`Er_0`$, and is thus much larger than $`\nu (E)`$ once the condition $`r_0\delta `$ is satisfied. In terms of the turning radius $`r_0`$ the quantization relation (3) reads as $$k\mu r_0^3=2\pi (n+\nu (E)),$$ (4) with $`k`$ being a numerical coefficient, $$k=\frac{\pi \sqrt{\pi }\mathrm{\Gamma }[1/4]}{\mathrm{\Gamma }[7/4]}.$$ (5) For an energy level $`E_n`$ in terms of the number $`n`$ of the level one finds: $$E_n=\mu ^{1/3}\left(2\pi n/k\right)^{2/3}.$$ (6) Thus for finite $`n`$ the energy of the bubble scales as $`N_c^{1/3}`$. Let us first verify that the thin wall approximation is valid. Using the fact that the tension of the wall scales as $`N_c`$, one sees from Eq.(4) that (at finite $`n`$) $$r_0N_c^{1/3}$$ (7) which is indeed large compared to the wall width, $`\delta 1/N_c`$ (on the other side, $`r_0`$ is small compared to the scale of the theory, $`\mathrm{\Lambda }`$, so the bubbles are point-like). Let us now estimate the decay rate of the quantum bubble into the light glueballs (whose mass is independent of $`N_c`$ in the multicolor limit). The spectrum of the particles produced by an accelerating wall is described by the exponential $`e^{E/T_{eff}}`$, where $`E`$ is the energy of the produced particles and $`T_{eff}`$ stands for an effective temperature, which is equal to the acceleration of the wall. In the case of the bubble a typical acceleration is of the order of $`1/r_0`$. Hence, in view of Eq.(7), we expect that the particle production rate is $`e^{CN_c^{1/3}}`$ where $`C`$ is $`N_c`$-independent. It is seen from Eq.(4) that for sufficiently high levels of the bubble, $`nN_c`$, $`r_0`$ becomes unsuppressed by the powers of $`N_c`$ and hence for such levels one expects no suppression of the particle production. Remarkably, energy of these extremal levels scales as $`N_c`$, which is the mass scale argued in to exist in N=1 supersymmetric multicolor gluodynamics in order to explain the $`N_c`$ dependence of the domain wall width. It is worth noticing that both perturbative and nonperturbative corrections to the effective action Eq.(1) are likely to be small. Indeed, the effective coupling constant is expected to freeze at the scale $`r_0`$ which is small compared to $`1/\mathrm{\Lambda }`$. So, perturbative corrections are expected to be of the order of $`1/logN_c`$. Nonperturbative corrections should be suppressed by powers of $`N_c`$. One is tempted to claim that in the multicolor limit Eq.(6) is nonexact only because it is obtained via Bohr-Sommerfield quantization, which is good only for sufficiently high levels. Notice, however, that Eq.(6) gives only part of the spectrum, nonspherical modes of the bubble being neglected. The problem of quantization of the bubble in general case is the problem of quantization of supermembrane which is unsolved by now. All of the above picture is very likely to exist in the usual non-supersymmetric gluodynamics in the multicolor limit. In (see also , ) it was argued that even in the non-supersymmetric case at every $`\theta `$ where are $`N_c`$ vacua one of which being stable and others - metastable. However those metastable vacua live infinitely long in the multicolor limit. The wall interpolating between the adjacent vacua was argued to have the same $`N_c`$-dependences - $`N_c`$ in its tension and $`1/N_c`$ in its width. Of course, the walls between nondegenerate vacua cannot be at rest, but the quantum bubbles can still exist. A typical size of the quantum bubbles is seen to be much less than the size of the critical bubble (the one which arise in the spontaneous metastable vacuum decay), hence the above estimates remain intact. Let us briefly discuss possible interactions of quantum bubbles in Minkowski space. First we have to pose question about the bubble charges. It is clear that flat domain wall is charged with respect to three-form field which can be identified in non-supersymmetric case with Chern-Simons three-form. However the total charge of the bubble vanishes. The most simple argument concerning this point amounts from the analogy with the behaviour of one-form field in two dimensions. Indeed in d=2 electric field of the charge is constant and the total field of charge-anticharge pair is zero. The bubble above is the analogue of this pair in d=4 and taking into account that the curvature of the three-form field of the wall is constant one can show the absence of the total three-form charge. Therefore there is no the Coulomb like interaction between two bubbles. However one can look for the possible string like interaction for the bubble pair. Naively one can expect a string stretched between bubbles since QCD string can end on the domain wall. But more careful inspection shows that it is not the case. The point is that U(1) field providing this possibility is related to the presence of the fermionic zero mode on the flat domain wall. In the bubble case zero mode disappears due to the curvature therefore the argument above fails in this case. Thus there is no interaction of the bubble pair via a single QCD string. On more supporting argument comes from the consideration of the sizes of the objects - the size of QCD string is believed to be independent of $`N_C`$ while the size of the low level bubble vanishes in the multicolor limit. The issue of other possibilities for the bubbles to interact deserves further investigation. It would be interesting to realize the influence of these additional states on the thermodynamics of the multicolor QCD. The fractional dependence on N implies the possibility of a kind of the phase transition related with the quantum bubbles. Since we deal with the large N limit one can also expect gravitational counterpart of these states via AdS/CFT correspondence. We would like to thank M. Shifman and M. Voloshin for discussions on the related issues. The work of A.G. was supported in part by grant INTAS-97-0103 and the work of K.S. by grant INTAS-96-0482.
no-problem/9910/gr-qc9910091.html
ar5iv
text
# Erratum: The evolution of circular, non-equatorial orbits of Kerr black holes due to gravitational-wave emission [Phys. Rev. D 61, 084004 (2000)] Equation (4.52) of this paper should read $$Z_{lmk}^{H,\mathrm{}}=(1)^{l+k}\overline{Z}_{lmk}^{H,\mathrm{}}.$$ As a consequence, the odd $`l`$ contributions to the gravitational waveforms shown are off by a minus sign. However, all of the rates of change discussed in this paper ($`\dot{E}`$, $`\dot{L}_z`$, $`\dot{Q}`$, $`\dot{r}`$, $`\dot{\iota }`$) are unchanged since they only depend on the square of this coefficient.
no-problem/9910/astro-ph9910364.html
ar5iv
text
# The Parker Instability in a Thick Galactic Gaseous Disk: I. Linear Stability Analysis and Nonlinear Final Equilibria ## 1 INTRODUCTION In a series of seminal papers, Parker (1966, 1967, 1969) discussed the stability of a magnetized interstellar system with cosmic rays, and immersed in an external gravitational field. To build an equilibrium state in a plane-parallel density distribution, he assumed that i) the initial magnetic field is parallel to the galactic plane, ii) the gravitational acceleration is constant, iii) and the vertical pressure distributions for the gas, cosmic rays and magnetic field are simply described by an exponential function with the same scale height. Using a normal mode analysis he found that such a system is unstable if the adiabatic index of the gas is below a certain critical value. When the perturbation wavevectors are confined to the two-dimensional (2D) plane defined by the directions of the initial magnetic and gravitational fields, the critical value is defined by $`\gamma _{\mathrm{cr},u}=(1+\alpha +\beta )^2/(1+0.5\alpha +\beta )`$, where $`\alpha `$ is the ratio of the magnetic-to-gas pressures, and $`\beta `$ is the ratio of cosmic-ray-to-gas pressures (Parker 1966). When the wavevectors are allowed to have all three-dimensional (3D) components, the critical adiabatic index becomes $`\gamma _{\mathrm{cr},m}=1+\alpha +\beta `$ (Parker 1967). Later, these (2D and 3D) types of perturbations were classified as the undular (2D) and interchange (Hughes & Cattaneo 1987) or mixed (3D) modes, (Matsumoto et al. 1993). Using the “energy principle” method, Lachièze-Rey et al. (1980) also found a generalized form for the critical adiabatic index (eq. in their paper), which is basically the same one of the mixed mode. The critical adiabatic index for this mixed mode is in general smaller, and more restrictive, than that of the undular mode. Given that cooling times in the diffuse interstellar medium (ISM) are shorter than the timescales for the instability, Parker used the isothermal value for the adiabatic index, $`\gamma =1`$, and concluded that the equilibrium state of the general ISM is then unstable. The undular instability, which promotes the formation of high-density structures, is eventually stabilized by the tension of the distorted field lines. Using mass invariance along flux tubes, Mouschovias (1974) obtained the 2D final equilibrium state of the original one-dimensional Parker model. Even when this new 2D equilibrium stage is in turn unstable against 3D perturbations (see Asséo et al. 1978; Asséo et al. 1980), the undulation pattern along the initial field lines persists in the nonlinear 3D evolution of the instability (Kim et al. 1998). Therefore, the final 2D equilibrium state can be helpful in visualizing the resulting large-scale structure of the ISM. The three original assumptions made by Parker described above are obvious idealizations, and some of them have been modified in subsequent studies. The first assumption, a well ordered field that is parallel to the galactic plane, is not really sustained by observations (except near the midplane). The interstellar $`𝐁`$-field has a bisymmetric spiral field configuration (see Heiles 1996; Indrani & Deshpande 1998; Vallee 1998), and random components with cell sizes of the order of 50 pc (e.g. Rand & Kulkarni 1989). Also, the transition between the gaseous disk and the halo is very broad and has a complex structure with vertical field components (see Boulares & Cox 1990 for a discussion of the support provided by the tension of curved field lines). Thus, the plane-parallel field assumption is only valid as an average field configuration, but it is very difficult to relax in both analytical and numerical treatments of the problem. As a variation to the simplest plane-parallel field scheme, Hanawa, Matsumoto & Shibata (1992) derived the unstable modes in a skewed magnetic field whose direction is still horizontal, but the field direction changes with distance from the midplane (i.e., the $`x`$ and $`y`$ components of the field vary with height, but $`B_z`$ is always equal to zero). For such a field configuration, the instability tends to form structures with the scales of giant molecular clouds. The second assumption, a constant gravity, has been relaxed in more recent studies. The galactic gravitational field varies in a nearly linear fashion near the midplane (the linear approximation is excellent for $`[z]150`$ pc; Oort 1965; Bahcall 1984; Bienaymé, Robin & Crézé 1987; Kuijken & Gilmore 1989), and two different functions, linear and tanh ($`z/H`$), have been considered by different authors (Giz & Shu 1993; Kim, Hong & Ryu 1997; Kim & Hong 1998). Since gravity is the driving force of the instability, a variation of the functional form for the acceleration has a direct impact on the properties of the unstable modes (i.e., growth rates, length scales, and parity). For the constant gravity case, the weight of a gas parcel is the same regardless of its $`z`$-position, but the acceleration is discontinuous at $`z=0`$. Thus, the flow cannot move across the midplane and the only allowed modes are those with even parity. The resulting structures are then distributed symmetrically with respect to $`z=0`$, and are called midplane symmetric (MS). In the case of the other two functions (linear and tanh), the acceleration is continuous at the midplane and the weight of the gas increases with $`[z]`$. Thus, the odd parity solutions, or midplane antisymmetric (MA) modes, also appear, and the growth times are shorter than those of the uniform gravity case (e.g. Giz & Shu 1993; Kim & Hong 1998). The third assumption, a disk with a single gas component, has not been modified in any of the recent studies but it also has to be revised. The actual ISM structure is very complex, and has several gas components ranging from the cold molecular phase to a hot and highly ionized plasma (e.g. Kalberla & Kerp 1998). Each component, in turn, has several sub-components with their own set of representative values for midplane densities and scale heights (the velocity dispersion of most components, however, seems to be equal to $`9`$ km s<sup>-1</sup>: see Boulares & Cox 1990). For instance, the atomic hydrogen phase could be divided into one cold H I component and two warm H I components (e.g. Bloemen 1987; Boulares & Cox 1990; McKee 1990; Spitzer 1990). To further complicate the situation, the vertical distributions for the magnetic field and cosmic rays do not seem to follow the stratification of the main gas components. Many of the system properties remain largely unknown and, depending on the assumed temperature and $`𝐁`$-field distributions, the resulting magnetohydrostatic (MHS) equilibrium configurations can be either stable or unstable to the Parker instability (e.g. Bloemen 1987; Boulares & Cox 1990; Martos & Cox 1994; Franco, Santillán & Martos 1995; Kalberla & Kerp 1998). Thus, a quantitative stability analysis for this type of multi-component gaseous disk is required. In this paper we address this issue and investigate the 2D stability of an extended, multi-component, magnetized disk with a “realistic” gravitational acceleration. We use the vertical equilibrium model for the warm magnetized system that has been discussed by Martos (1993), Martos & Cox (1994, 1998) and Santillán et al. (1999a). This equilibrium configuration is based on the observed distributions of: i) the vertical acceleration of the gravitational field in the solar neighborhood (Bienaymé, Robin & Crézé 1987), and ii) the density distributions of the gaseous components (Boulares & Cox 1990). Using the normal mode analysis, here we find that an isothermal extended disk is unstable with respect to the undular mode, and derive the resulting linear growth rates. We also derive the final 2D equilibrium state for both the MS and MA modes. The non-linear evolution is followed with the aid of 2D magnetohydrodynamic (MHD) numerical experiments and the results will be presented in the accompanying paper by Santillán et al. (1999b, hereafter Paper II). The plan of the present paper is as follows. In §2, we describe the initial MHS equilibrium state, and perform the normal mode analysis. The dispersion relations for the unstable undular modes are then discussed. In §3, the nonlinear final equilibria of the undular modes are presented, and a summary and discussion of the results are given in §4. ## 2 Normal Mode Analysis The linear stability analysis can be performed with either the “energy principle” method (Bernstein et al. 1958) or with the usual “normal mode” analysis. The energy principle method provides the critical adiabatic index in a relatively easy way (e.g. Zweibel & Kulsrud 1975; Asséo et al. 1978; Asséo et al. 1980; Lachièze-Rey et al. 1980), but does not allow to derive the resulting dispersion relations. Here we want to find the dispersion relations and, hence, derive them with the usual normal mode analysis. ### 2.1 Isothermal Magnetohydrodynamic Equations The temperature distribution of the gaseous disk is largely unknown but, given that the velocity dispersion of the main gas components are similar, we identify the velocity dispersion with the gas sound speed and define our model as an isothermal disk stratification. Thus, we do not differentiate between the thermal and kinetic pressures, and both are gathered together in a single pressure term with either a constant velocity dispersion or a constant effective temperature. The particular values for the resulting model velocity dispersion and effective temperature are given below. The dynamics of a magnetized isothermal plasma immersed in a gravitational field is described by the MHD equations, $$\frac{\rho }{t}+(\rho 𝐯)=0,$$ (1) $$\rho \left(\frac{𝐯}{t}+𝐯𝐯\right)=\left(\rho a^2+\frac{B^2}{8\pi }\right)+\frac{1}{4\pi }𝐁𝐁+\rho 𝐠,$$ (2) $$\frac{𝐁}{t}=\times (𝐯\times 𝐁),$$ (3) where $`a`$ (= constant) is the isothermal sound speed, and the rest of the symbols have their usual meanings. We use a Cartesian coordinate system ($`x,y,z`$), whose axes are defined parallel to the radial, azimuthal, and vertical directions, respectively. We perform the analysis in the $`yz`$ plane and assume that the gravitational acceleration has only a vertical component, $`𝐠=[0,0,g(z)]`$. ### 2.2 Initial Equilibrium Configuration After Parker (1966) introduced the simplified exponential equilibrium model for the gaseous disk, several authors have built more complex and realistic approximations for the actual ISM structure (e.g. Badhwar & Stephens 1977; Bloemen 1987; Boulares & Cox 1990; Kalberla & Kerp 1998). These models are based on the observed stratifications for the gas, cosmic rays, and magnetic and gravitational fields in the solar neighborhood. Here we use, as the initial equilibrium state in our stability analysis, the MHS equilibrium configuration discussed originally in Martos (1993). This initial model uses the vertical distributions for the density and gravitational acceleration described by Boulares & Cox (1990) and Bienaymé, Robin & Crézé (1987), respectively. The density stratification is $`n_0(z)`$ $`=`$ $`0.6\mathrm{exp}\left[{\displaystyle \frac{z^2}{2(70\text{pc})^2}}\right]+0.3\mathrm{exp}\left[{\displaystyle \frac{z^2}{2(135\text{pc})^2}}\right]+0.07\mathrm{exp}\left[{\displaystyle \frac{z^2}{2(135\text{pc})^2}}\right]`$ (4) $`+`$ $`0.1\mathrm{exp}\left[{\displaystyle \frac{|z|}{400\text{pc}}}\right]+0.03\mathrm{exp}\left[{\displaystyle \frac{|z|}{900\text{pc}}}\right]\mathrm{cm}^3.`$ The midplane value is $`n_0(0)1.1\mathrm{cm}^3`$ and, for a plasma with 10% He, the corresponding mass gas density stratification is $`\rho _0(z)=1.27m_\mathrm{H}n_0(z)`$, where $`m_\mathrm{H}`$ is the mass of a hydrogen atom. Figure 1 shows the distribution of each gas component (representing the contributions of H<sub>2</sub>, cold H I, warm H I in clouds, warm intercloud H I, and warm diffuse H II). The molecular and cold atomic phases are the dominant ISM mass components near the midplane, whereas the warm intercloud H I and warm diffuse H II are the most important gas layers beyond $`z300`$ pc. The extended H II component was originally detected in absorption against the Galactic synchrotron background (Hoyle & Ellis 1963), and later it was reported in hydrogen recombination emission (Reynolds 1989). This ionized gas is a major component of the ISM, which has been usually ignored in previous modeling, and its surface density is about a third of that of the H I component at the solar neighborhood. The power requirements to ionize this layer are comparable to that available from supernovae. Because of the inclusion of the extended components, with scale heights larger than 300 pc, our model is referred to as the thick gaseous disk model. The resulting effective scale height is defined by the total gas column density as $$H_{\mathrm{eff}}=\frac{1}{n_0(0)}_0^{\mathrm{}}n_0(z)𝑑z166\mathrm{pc}.$$ (5) In the case of the gravitational field, the observationally derived acceleration at the solar neighborhood can be fitted by (Martos 1993) $$g(z)=8\times 10^9\left[10.52\mathrm{exp}\left(\frac{|z|}{325\text{pc}}\right)0.48\mathrm{exp}\left(\frac{|z|}{900\text{pc}}\right)\right]\mathrm{cm}\mathrm{s}^2.$$ (6) This gravitational acceleration is similar to the one derived by Kuijken & Gilmore (1989), and requires less local dark matter content than the ones derived by Oort (1965) and Bahcall (1984). Given these two basic building blocks, $`\rho _0(z)`$ and $`g(z)`$, the initial equilibrium configuration is constructed by assuming that the gas is isothermal and the initial magnetic field is parallel to the galactic plane. The effects of cosmic rays are not explicitly included here because the results may depend on the assumptions made. For instance, in contrast to the effects of the isotropic cosmic ray pressure considered by Parker (1966), Nelson (1985) showed that an anisotropic cosmic ray pressure may tend to stabilize the gas layer. For simplicity, then, we gather the non-thermal pressures into a single term represented by the magnetic pressure (i.e., we assume that the sum of the cosmic ray and magnetic pressures is contained in the magnetic term). Then, the MHS equilibrium for the gas-field-gravity system is given by $$\frac{d}{dz}P_0(z)=\frac{d}{dz}\left[\rho _0(z)a^2+\frac{B_0^2(z)}{8\pi }\right]=\rho _0(z)g(z),$$ (7) where $`P_0(z)`$ is the total pressure of the system (thermal plus magnetic). This equation defines the stratification of the magnetic field. For completeness, because our system is finite, we set the additional boundary condition $`P_0(z=10\mathrm{k}\mathrm{p}\mathrm{c})=0`$, and the system pressure is computed with the integral $$P_0(z)=_z^{10\mathrm{k}\mathrm{p}\mathrm{c}}\rho _0(z)g(z)𝑑z.$$ (8) Given the total pressure and the strength of the magnetic field (including both ordered and random components) at midplane, $`P_0(0)3\times 10^{12}`$ dyne cm<sup>-2</sup> and $`B_0(0)5\mu `$G (Boulares & Cox 1990; Heiles 1996), the resulting isothermal sound speed (from $`P_0(0)=1.27m_\mathrm{H}n_0(0)a^2+B_0^2(0)/8\pi `$, with $`n_0(0)=1.1`$ cm<sup>-3</sup>) is $`a=8.4`$ km s<sup>-1</sup>. Thus, the sound speed value is very similar to the observed velocity dispersion of the main gas components (within 5 to 9 km s<sup>-1</sup>; Boulares & Cox 1990), and the corresponding effective disk temperature is $`T_{\mathrm{eff}}=10900`$ K. This is called the “warm” magnetic disk model and its properties are discussed by Martos (1993), Martos & Cox (1994, 1998) and Santillán et al. (1999a). Figure 2 shows the distributions of the thermal, magnetic, and total pressures as functions of distance from the galactic plane. The maximum of the magnetic pressure is not centered at $`z=0`$ because the field stratification is derived from MHS equilibrium. This warm magnetic disk model is Parker unstable because the gas is almost entirely supported by the magnetic field above $`z200`$ pc. There is high-latitude H I gas with velocity dispersions of 35 km s<sup>-1</sup> (Kulkarni & Fich 1985), and halo gas with up to 60 km s<sup>-1</sup> (Kalberla et al. 1998). The inclusion of these additional gas components with different velocity dispersions in our analysis is beyond the scope of the present paper but, as sketched in §4, we will address this issue in a future study. ### 2.3 Linearized Perturbation Equations We limit the present discussion to perturbations in the $`yz`$ plane (i.e., in the plane defined by the directions of the initial magnetic and gravitational fields). Due to this limitation, only the undular modes are allowed and we follow the procedure described by Kim, Hong & Ryu (1997) and Kim & Hong (1998) to derive the properties of the instability. Given that the velocities in the initial model are equal to zero, we denote by $`𝐯`$, $`\delta \rho `$, $`\delta 𝐁`$ the infinitesimal perturbations in velocity, density, and magnetic field, respectively. The perturbed state is then described by $$𝐯;\rho =\rho _0+\delta \rho ;𝐁=B_0\widehat{e}_y+\delta 𝐁.$$ (9) Inserting these perturbed variables in equations (1-3), and keeping only the first-order terms for the perturbations, the linearized perturbation equations become $$\frac{}{t}\delta \rho +v_z\frac{d\rho _0}{dz}+\rho _0\left(\frac{v_y}{y}+\frac{v_z}{z}\right)=0,$$ (10) $$\rho _0\frac{v_y}{t}+\frac{}{y}(a^2\delta \rho )\frac{1}{4\pi }\frac{dB_0}{dz}\delta B_z=0,$$ (11) $$\rho _0\frac{v_z}{t}+\frac{}{z}(a^2\delta \rho )+\frac{1}{4\pi }\frac{dB_0}{dz}\delta B_y+\frac{1}{4\pi }B_0\frac{}{z}\delta B_y\frac{1}{4\pi }B_0\frac{}{y}\delta B_z+g\delta \rho =0,$$ (12) $$\frac{}{t}\delta B_y+B_0\frac{v_z}{z}+\frac{dB_0}{dz}v_z=0,$$ (13) $$\frac{}{t}\delta B_zB_0\frac{v_z}{y}=0.$$ (14) The coefficients of equations (10-14) do not depend explicitly on $`y`$ and $`t`$, and the perturbations can be Fourier-decomposed with respect to these variables $$\left[\begin{array}{c}\delta \rho (y,z;t)\\ v_y(y,z;t)\\ v_z(y,z;t)\\ \delta B_y(y,z;t)\\ \delta B_z(y,z;t)\end{array}\right]=\left[\begin{array}{c}\delta \rho (z)\\ v_y(z)\\ v_z(z)\\ \delta B_y(z)\\ \delta B_z(z)\end{array}\right]\mathrm{exp}(i\omega tik_yy),$$ (15) where $`i\omega `$ is the growth rate and $`k_y`$ is the wavenumber along the $`y`$-direction. Inserting these decomposed forms into the perturbation equations (10-14), and combining them we obtain the reduced equation, $$f\frac{d^2v_z}{dz^2}+\frac{df}{dz}\frac{dv_z}{dz}+hv_z=0,$$ (16) where the functions $`f`$ and $`h`$ are defined by $$f=2(\omega ^2k_y^2a^2)\frac{B_0^2}{8\pi }+\omega ^2\rho _0a^2,$$ (17) $$h=(\omega ^2k_y^2a^2)\left(\omega ^2\rho _02k_y^2\frac{B_0^2}{8\pi }\right)\omega ^2\rho _0\frac{dg}{dz}+k_y^2g\frac{d}{dz}\left(\frac{B_0^2}{8\pi }\right).$$ (18) The factor $`dg/dz`$ appearing in the second term of $`h`$ is introduced by taking the derivative with respect to $`z`$ on both sides of the MHS equation (7), and then making the appropriate substitutions. This results in a more compact form for the $`h`$ function (and we do not need to calculate numerically a second-order derivative term). With the transformation $`\mathrm{\Psi }=v_zf^{1/2}`$, equation (16) can be rearranged to $$\mathrm{\Psi }^{\prime \prime }+\left[\frac{1}{4}\left(\frac{f^{}}{f}\right)^2\frac{1}{2}\left(\frac{f^{\prime \prime }}{f}\right)+\frac{h}{f}\right]\mathrm{\Psi }=0,$$ (19) where the prime superscript () denotes the derivative with respect to $`z`$. Given the complicated functional forms for $`\rho _0(z)`$, $`g(z)`$, and $`B_0(z)`$, one cannot perform further simplications of equation (19). The required boundary conditions (BCs) are: $`\mathrm{\Psi }=0`$ at an upper boundary $`z=z_{\mathrm{node}}`$, and $`\mathrm{\Psi }=0`$ or $`d\mathrm{\Psi }/dz=0`$ at the midplane, $`z=0`$. The first condition at the midplane, $`\mathrm{\Psi }=0`$ at $`z=0`$, generates the even parity MS solutions, whereas the second one corresponds to the odd parity MA solutions (e.g. Horiuchi et al. 1988; Giz & Shu 1993). ### 2.4 Dispersion Relations The dispersion relations are found with the method described in the Appendix of Kim et al. (1997). The method is a numerical procedure to find, for a given wavenumber, an eigenvalue ($`i\omega `$) which satisfies the imposed BCs. Our equilibrium configuration, as stated in §2.2, turns out to be Parker unstable, and we find eigenvalues $`i\omega `$ that are real and positive. The resulting dispersion relations are shown in Figure 3 for five cases whose upper boundaries are placed at $`z`$-locations ranging from $`z=9H_{\mathrm{eff}}`$ to $`z=30H_{\mathrm{eff}}`$. The growth rates, wavenumbers, and nodal points in Figure 3 are normalized as: $$\mathrm{\Omega }=i\omega \frac{H_{\mathrm{eff}}}{a},\nu _y=k_yH_{\mathrm{eff}},\zeta _{\mathrm{node}}=\frac{z_{\mathrm{node}}}{H_{\mathrm{eff}}}.$$ (20) Since gravity has small values near the midplane, the dispersion relations are not sensitive to the midplane boundary conditions. Thus, the solutions are degenerate with respect to parity, and the growth rates are nearly the same for both the MS and MA modes. The plotted dispersion relations are for the principal $`z`$-modes, whose MS nodal points are located at midplane and $`\zeta _{\mathrm{node}}`$. The nodal points of the MA modes are located at $`\zeta _{\mathrm{node}}`$ and $`\zeta _{\mathrm{node}}`$. For the lower nodal point in Figure 3, $`\zeta _{\mathrm{node}}=9`$ ($`z_{\mathrm{node}}=1.5`$ kpc), the fastest growth time is about $`6.2\times 10^7`$ years and its wavelength is 3.11 kpc. For the upper point, $`\zeta _{\mathrm{node}}=30`$ ($`z_{\mathrm{node}}`$=5 kpc), the corresponding values change to $`3.4\times 10^7`$ years and 3.43 kpc, respectively. From the figure, it is clear that the maximum growth rate above $`z3`$ kpc is less sensitive to the position of the nodal point. This is because the gravitational acceleration (see eq. ) already reaches its maximum value, $`8\times 10^9`$ cm s<sup>-2</sup> at about $`3`$ kpc. Therefore, as the nodal point goes to positions higher than $`\zeta _{\mathrm{node}}=30`$, the growth time converges to $`3\times 10^7`$ years, which can be regarded as the minimum growth time of the Parker instability in the thick gaseous disk (obviously, at these heigths the gravitational force also has a non-neglegible radial component, and we are near the limit of validity of our 2D analysis). In the following section we address the structure of the final equilibrium state. ## 3 Two-dimensional Equilibria of the Undular Instability ### 3.1 Magnetohydrostatic Equations The MHS equations are obtained by setting $`𝐯=0`$ and dropping the time-derivative terms in the MHD equations (1-3). Hence, the continuity and induction equations are of no use in this case. Due to this reason, the number of unknowns ($`\rho `$, $`B_y`$, and $`B_z`$) is larger than the number of equations (the $`y`$ and $`z`$ components of the momentum equation), and one requires an additional expression. Closure is granted with flux freezing, which results in conservation of the mass-to-flux ratio in a flux tube. The details for the derivation of the final equilibrium states are given in Mouschovias (1974), and are summarized in Spitzer (1978). Given the magnetic vector potential $`𝐀=\widehat{e}_xA(y,z)`$ $$𝐁=\times 𝐀,$$ (21) and the gravitational potential $$\psi =_0^zg(z)𝑑z,$$ (22) the final magnetic equilibrium is given by $$^2A=4\pi \frac{dq}{dA}\mathrm{exp}\left(\frac{\psi }{a^2}\right).$$ (23) The function $`q\rho a^2\mathrm{exp}(\psi /a^2)`$ is a constant along a line of force and is given by $$q(A)=\frac{a^2}{2}\frac{dm}{dA}\left\{_0^{\lambda _y/2}𝑑y\frac{z(y,A)}{A}\mathrm{exp}\left[\frac{\psi (y,A)}{a^2}\right]\right\}^1,$$ (24) where $`\lambda _y`$ is the perturbation wavelength along the initial magnetic field, and $`dm/dA`$ is the mass-to-flux ratio. As stated above, for flux freezing conditions the mass between two field lines is conserved and the mass-to-flux ratio remains constant during the evolution (i.e., is a constant of motion). Then, this ratio is determined from the initial equilibrium configuration $$\frac{dm}{dA}=\lambda _y\frac{\rho _0(A)}{B_0(A)},$$ (25) where $`\rho _0(A)`$ and $`B_0(A)`$ represent the initial distributions of the density and magnetic field as functions of $`A`$, respectively. ### 3.2 Final Equilibrium States Now, after setting the mass-to-flux ratio, one can solve equations (23) and (24) simultaneously. Following the detailed procedure described in Appendix C of Mouschovias (1974), we solve these equations by iteration. In contrast to the original work of Mouschovias, who used a constant gravity with a discontinuity at midplane, we use a smooth and continuous gravity function (eq. ). The discontinuity prevents midplane gas crossings, and he found the final equilibria of the MS modes only. We do not have such a discontinuity and are able to derive the final equilibria of both the MS and MA modes. The initial equilibrium distributions for the density and field lines are plotted in Figure 4a. Colors are mapped from red to violet as the density decreases. The white lines represent the $`𝐁`$-field lines, and they are chosen in such a way that the magnetic flux between two consecutive lines is the same. The length scales are normalized with the effective scale height, $`H_{\mathrm{eff}}`$. First, to derive the final state of the MS mode, we added a MS perturbation to the magnetic vector potential in the initial equilibrium state, $$\delta A(y,z)=A_0(z)C\mathrm{cos}(\frac{2\pi y}{\lambda _y})\mathrm{sin}(\frac{2\pi z}{\lambda _z}),$$ (26) where $`C=0.01`$ is the amplitude of the perturbation, and $`\lambda _z=2z_{\mathrm{node}}`$ (the MS mode has zero amplitude at $`z=0`$ and $`z=z_{\mathrm{node}}`$, so $`z_{\mathrm{node}}`$ corresponds to a half of the wavelength value of the principal mode along the $`z`$-axis). Figure 3 shows that when the first nodal point from midplane is $`9H_{\mathrm{eff}}`$, the most unstable horizontal wavelength is $`18H_{\mathrm{eff}}`$. Thus, we use the pair of unstable wavelengths $`(\lambda _y,\lambda _z)=(18H_{\mathrm{eff}},18H_{\mathrm{eff}})`$, and get the final equilibrium state displayed in Figure 4b. The actual computational domain for this symmetric case is $`0y9H_{\mathrm{eff}}`$ and $`0z9H_{\mathrm{eff}}`$, but for visualization purposes we extend the domain eight times in the figure. The condensations formed in the magnetic valleys and voids in the arches are clearly seen in the figure. Due to the condition imposed at midplane, $`\delta A=0`$, the field line at $`z=0`$ is not deformed at all. For the MA case, we perturb the initial state with the perturbation, $$\delta A(y,z)=A_0(z)C\mathrm{cos}(\frac{2\pi y}{\lambda _y})\mathrm{cos}(\frac{2\pi z}{\lambda _z}),$$ (27) where we now use $`\lambda _z=4z_{\mathrm{node}}`$ (in contrast to the MS mode, the MA mode has maximum amplitude at $`z=0`$ and then requires twice the wavelength value along the $`z`$-axis). As stated before, the dispersion relations shown are not sensitive to the midplane boundary conditions, and the most unstable horizontal wavelength is the same for both the MS and MA modes. Using the same nodal point as before, $`z_{\mathrm{node}}=9H_{\mathrm{eff}}`$, we set the pair of unstable wavelengths to $`(\lambda _y,\lambda _z)=(18H_{\mathrm{eff}},36H_{\mathrm{eff}})`$, and the final state of the MA mode is plotted in Figure 4c. The computational domain is now $`0y9H_{\mathrm{eff}}`$ and $`9H_{\mathrm{eff}}z9H_{\mathrm{eff}}`$, and covers the upper and lower hemispheres. For a better visual impression we extend it by a factor of four in Figure 4c. As before, one can also see condensations and voids in the figure, but their positions are now alternated between the upper and lower hemispheres. Hence the distance between successive condensations is a half of the horizontal wavelength. Also, the $`𝐁`$-field line at midplane is now undulated, with locations above and below $`z=0`$, as is characteristic of the MA mode. The density enhancements produced by the gas that has been sliding into the magnetic valleys can be obtained from the column density of the final state. At any given location $`y`$, the final column density is $$N_f(y)=_{z(y,A_i[z=0])}^{9H_{\mathrm{eff}}}\rho _f(y,z)𝑑z,$$ (28) where the subscripts $`i`$ and $`f`$ denote the initial and final states, respectively. The lower limit of the integral corresponds to the final $`z`$-coordinate of the magnetic field line that was initially located at midplane, and is labeled with $`A_i(z=0)`$. Thus, the lower limit is exactly equal to zero for the MS modes, but it is different from zero for the MA modes. In Figure 5 we plot the column density distributions for both modes (normalized to the initial column density, $`N_i=_0^{9H_{\mathrm{eff}}}\rho _i(z)𝑑z`$). The figure reveals that the MA modes drive more gas into the magnetic valleys than the MS cases. This is because the MA perturbations can gather more mass in the condensations by bending the midplane. Another interesting quantity is the ratio of the magnetic-to-gas pressures, $`\alpha =B^2/(8\pi a^2\rho )`$. The value of this ratio varies with time and $`z`$-location, and Figure 6 shows the corresponding distributions at the initial and final equilibrium states. The initial state is plotted as a solid line, and is labeled as $`\alpha _i(z)`$. The distribution for the final MS state is plotted with dashed lines at two different $`y`$-positions: the distribution at $`y=0`$, corresponding to the center of the condensation (the magnetic valley), is labeled as $`\alpha _f(0,z)`$, and the distribution at $`y=9H_{\mathrm{eff}}`$ (the central part of the magnetic arch; see Fig. 4b) is labeled as $`\alpha _f(9,z)`$. Finally, the distribution for the final MA state at the position $`y=0`$ is shown with a dotted line, and is also labeled $`\alpha _f(0,z)`$ (the lower part of the disk contains the central part of a condensation, and the upper part of the disk has the maximum of the magnetic arches; see Fig. 4c). The $`\alpha `$-distribution of the initial state shows that our disk model is mainly supported by gas pressure near the midplane, and by magnetic pressure at high latitudes (see also Fig. 2). The distribution, however, is completely modified at the final equilibrium stages. The gas pressure increases at the condensations and the resulting pressure ratios, for both the MS and MA modes, become smaller than the initial $`\alpha `$ values. In contrast, at the voids, where the magnetic energy becomes dominant, $`\alpha `$ reaches values as high as $`10^4`$. This is because the gas is efficiently drained down from the magnetic arches, as already pointed out by Mouschovias (1974) for the case of a thin gaseous disk in a uniform gravity. As a final comment of this section, we add that the galactic system seems to prefer the lower energy state of the MA mode. This is not apparent from the dispersion relations, which are degenerate for the MS and MA modes, but it appears in detailed numerical MHD experiments performed with the same thick disk model considered in this study. The results of these numerical simulations will be reported in a separate paper (Santillán et al. 1999b), and here we only mention the relevant result. The runs are started with the initial equilibrium state described §2.2 and, as expected, the early linear phase of the experiments follows the rates and wavelengths derived in the present linear analysis for any of the two parity modes. We also performed several experiments with random velocity perturbations, without any preferred parity or wavelength. These random perturbation experiments eventually evolve into the MA configuration, similar to the one shown Figure 4c, indicating that the MA mode is preferred over the MS mode. ## 4 Summary and Discussions Here we have presented the linear perturbation analysis of a magnetized and warm thick disk. The dispersion relations for the undular mode of the Parker instability are derived, along with the resulting final equilibrium states. The initial disk parameters are taken from the observed distributions at the solar circle, and we assume that the gas is isothermal and that the initial field lines are parallel to the disk. Given the complexities inherent to trying to model the dynamical effects of cosmic rays, its pressure is not explicitly included in here. The resulting multi-component gaseous disk model, then, has a thermal-to-magnetic pressure ratio that decreases with $`z`$-location, and is Parker unstable. The properties of the unstable modes for five different nodal points are given in Figure 3. These nodal points correspond with the assumed extension of the disk above midplane (for $`H_{\mathrm{eff}}166`$ pc, the five cases in Figure 3 represent a disk extending up to 1.5, 2, 3, 4 and 5 kpc, respectively). The value of the critical wavelength for the instability depends on the location of the chosen nodal point and, for disks extending between 5 and 1.5 kpc above midplane, it increases from 1.5 to 1.8 kpc, respectively. The wavelength of the fastest growing mode, however, is almost independent of the assumed nodal point, and is about 3 kpc for all the cases considered. The corresponding growth time scales are slightly more sensitive to the nodal point and, for disks extending between 1.5 and 5 kpc above midplane, the time scales decrease from 6.2 to 3.4$`\times 10^7`$ year, respectively. The minimum growth time then converges to $`3\times 10^7`$ year as the nodal point tends to large $`z`$-values. Thus, for average conditions in the solar neighborhood, the linear analysis in 2D indicates that the preferred wavelength is about 3 kpc, and that the gas condensations are formed in time scales of the order of about $`3\times 10^7`$ year. The wavelength values are larger, by a factor of about 8, than those derived for the thin disk cases, but the corresponding time scales are larger by only a factor of about 2 (Kim & Hong 1998). These are substantial differences, and indicate that the multi-component structure of the disk play an important role in the large-scale stability and evolution of the ISM. The densities for the final equilibrium stages, on the other hand, are larger for the MA modes. The resulting MA column densities at the condensations are increased by a factor of about 3 with respect to the value of initial equilibrium stage. Now, the spiral density wave can trigger the Parker instability in the model considered here (Martos & Cox 1994), and the contrast obtained is similar to the expected density contrast between arm-interarm regions for strong waves (Elmegreen 1991). For comparison, the final-to-initial column density ratio of the fastest growing mode in a thin disk model is of about 1.2 only (Mouschovias 1974). Thus, the gas from the extended gas layers participating in the instability contribute with a fraction of about 2/3 to the total mass gathered in the condensations. The role of self-gravity and differential rotation of the Galaxy are not included in the present study. Self-gravity may not be important at the early linear stages of the instability (e.g. Hanawa, Nakamura & Nakano 1992), but it will lead to more compact and denser condensations at the non-linear phases. Galactic differential rotation, on the other hand, has an influence at several stages of the Parker instability (e.g. Shu 1974; Zweibel & Kulsrud 1975; Balbus & Hawley 1991; Foglizzo & Tagger 1994, 1995). For instance, if the radial differential force is strong enough, a transient shearing instability also appears, and the combined Parker-shearing instability could lead to angular momentum transfer and dynamo action in disks. In the case of the 2-D undular perturbations considered in this paper, the lateral motions of the flows should be affected by the Coriolis force. If we, however, include the ignored third dimension (the radial direction) in our analysis, the vertical motion of the mixed mode with a smaller wavelength along the radial direction dominates the system, and the effects of rotation are severely reduced during the linear growth. Nonetheless, as stated by the referee, the stabilizing effects of rotation may be important at the final equilibrium stages. These are important issues that require detailed three dimensional studies with differential rotation, and should be addressed in future studies. If the assumptions of the present work are valid, the range of growth rate values are marginally consistent with those required for the formation of giant molecular clouds in our Galaxy (e.g. Blitz & Shu 1980). Also, the most unstable wavelength in our model is somewhat larger than the corrugation distance derived by Alfaro et al. (1992) for the Carina Arm (2.4 kpc), but the condensations formed by the odd parity mode of the instability may well be associated with the origin of this observed structure. A more detailed study is required to properly address this issue, and important caveats should be borne in mind regarding the applicability of the present results. One is that of the randomness of the Galactic magnetic field topology at the kpc length scale, not included in the present modeling. Another one is the largely unknown temperature structure of the halo, and the filling factors of the different gas components. Models built from the same density and gravity distributions, but in which the magnetic field distribution is prescribed from the Galactic synchrotron emission (e.g. Martos & Cox 1998), require thermal dominance at high $`[z]`$ and are therefore Parker stable. This leads us to a final important question if the isothermal disk assumption represents a fair description of the actual gaseous disk in our Galaxy. Here we do not differentiate between the thermal and kinetic pressures, and both are gathered in a single isothermal term with sound speed similar to the velocity dispersion of the main components extending up to $`1.5`$ kpc from midplane (Boulares & Cox 1990). Such an isothermal condition, then, can be considered as a reasonable approximation for the regions located between 1 to 1.5 kpc from the midplane. The existence of a few “anomalous” velocity components within 2 kpc (e.g. Kulkarni & Fich 1985; Reynolds 1985), and gas with a large velocity dispersion at $`z`$ of about 4 kpc (Kalberla et al. 1998), already hint that the effective sound speed should be increased somewhere within 1 and 2 kpc. The details for such a variation are presently unknown, but we are currently investigating the effects of some reasonable velocity distributions. Obviously, the loss of magnetic support provides a stabilizing effect, but the present restrictions do not indicate that the instability can be completely suppressed. A detailed discussion of the range of velocity dispersion variations and the resulting unstable mode values will be presented elsewhere. ###### Acknowledgements. It is a big pleasure to thank Emilio Alfaro, Don Cox, Gene Parker, and Dongsu Ryu for many stimulating and informative discussions during the development of this project. We are grateful to Thierry Foglizzo, the referee, and Steve Shore, the editor, for several constructive comments. JF thanks the Korea Astronomy Observatory and Seoul National University for their warm hospitality. JF, MM and AS acknowledge partial support by DGAPA-UNAM grant IN130698, CONACyT grants 400354-5-4843E and 400354-5-0639PE, and by a R&D CRAY Research grant. The work of JK was supported by the Office of the Prime Minister through Korea Astronomy Observatory grant 99-1-200-00, and he also acknowledges the warm hospitality of the Instituto de Astronomía-UNAM. The work of SSH was supported in part by a grant from the Korea Research Foundation made in the year 1997. Figure Captions
no-problem/9910/astro-ph9910354.html
ar5iv
text
# Radiation from cosmic string standing waves ## Introduction Cosmic strings are one-dimensional topological defects which may have been created by a phase transition in the early universe . (For reviews see .) As the universe evolves, intercommutations between long strings produce oscillating loops. In the standard scenario, these loops lose energy by gravitational radiation and eventually disappear. This produces a scaling solution where the average distance between strings is a constant fraction of the Hubble length. Most of the energy in the string network is emitted as gravitational waves, which we cannot observe, and only a small fraction appears as high-energy particles. However, in a recent paper , Vincent, Antunes, and Hindmarsh claim that energy in a string network is lost by direct particle emission from long strings, rather than in gravitational waves. To back up this claim they study large-amplitude sinusoidal standing waves, and claim that the energy emission rate is sufficient to explain scaling behavior with the great majority of the energy emitted as particles. Moore and Shellard found that the emission rate fell exponentially with wavelength, but their amplitudes were much less than those of . Furthermore, the range of wavelengths in which saw exponential fall off had no overlap with the wavelengths studied in . Here we simulate the same large-amplitude waves as in and cover part of the same wavelength range, but we come to a different conclusion. In our simulations, the energy emission rate declines exponentially with wavelength, and thus cannot account for the large direct-emission rate claimed in . ## Model We work with the Abelian-Higgs model, which produces local strings with no massless degrees of freedom in the vacuum. The Lagrangian is $$=D_\mu \overline{\varphi }D^\mu \varphi \frac{1}{4}F_{\mu \nu }F^{\mu \nu }\frac{\lambda }{4}(|\varphi |^2\eta ^2)^2.$$ (1) We work with units such that $`\eta =1`$ and $`e=1`$, and we use the “critical coupling” regime in which $`\beta =\lambda /(2e^2)=1`$ so that in our units $`\lambda =2`$. As in , we study strings whose initial core position is given by a sinusoidal wave $`y=A\mathrm{cos}kx`$, with amplitude $`A=\lambda /2=\pi /k`$ . A preliminary investigation shows that emission from a standing wave is not uniform in time, but rather consists of a series of bursts emitted when the string is momentarily stationary with large amplitude waves in its position. Figure 1 shows a snapshot of the energy density around a string at one such point, and Fig. 2 shows a plot of the energy emitted over several oscillations. It thus appears that standing wave radiation is akin to radiation from cusps, and results from the overlap of the tails of the the string fields. We use such a model below to compute a theoretical expectation of the dependence of radiation rate on wavelength. ## Expectations Vincent, Antunes, and Hindmarsh argued as follows: in a scaling network, the distance between strings, $`\xi `$, scales with the Hubble distance, which is proportional to time. In a volume $`\xi ^3`$ there will be string length roughly $`\xi `$, so the energy density in the string network is $`\rho =\mu /\xi ^2`$, and thus $`\dot{\rho }=2\mu \dot{\xi }/\xi ^3`$. Since $`\dot{\xi }`$ is a constant, $`\dot{\rho }\xi ^3`$ is constant. As a model they used a sinusoidal standing wave with wavelength $`\lambda `$ in a box of volume $`\lambda ^3`$. They expected $`\dot{\rho }\lambda ^3`$ to be constant. If we let $`E`$ be the energy of a single wavelength of the string, then $`E=\rho \lambda ^3`$, and thus $`\dot{E}`$ is independent of $`\lambda `$. If we let $`P_L`$ be the power per unit length radiated from the standing wave, we need $$P_L\lambda ^1$$ (2) to sustain a scaling network from energy emission of this type. In contrast, analyzing the fields around the string would lead to a different conclusion. A straight, static string is topologically stabilized in a minimum-energy configuration, and so cannot radiate. If the string is curved, then there is the possibility for radiation, but since the fields fall off exponentially toward the vacuum at large distances from the string, one would expect the amount of radiation to be suppressed by an exponential factor depending on the radius of curvature, $`R`$. This seems in keeping with Fig. 1, which shows the radiation coming from the points of maximum curvature. As a specific model, one can imagine that an element of momentarily stationary curved string gives up an amount of energy proportional to $`\mathrm{exp}(\alpha R)dl`$, where $`\alpha `$ is a constant of $`O(1)`$ and $`dl`$ is the length of the element of string. The total energy emission is then $$Ee^{\alpha R}𝑑l.$$ (3) For a sinusoidal wave, $`y=A\mathrm{cos}kx`$, the radius of curvature is $$R=\frac{(1+A^2k^2\mathrm{sin}^2kx)^{3/2}}{Ak^2\mathrm{cos}kx}.$$ (4) We will consider the region around one of the peaks of the sinusoid. The energy emission is dominated by the region where $`x`$ is near zero, so that $`R`$ is small. In this regime, we can approximate $$R\frac{(1+A^2k^4x^2)^{3/2}}{Ak^2(1k^2x^2/2)}\frac{1}{Ak^2}+\left(\frac{3}{2}Ak^2+\frac{1}{2A}\right)x^2.$$ (5) In our case, we are going to consider a fixed ratio of amplitude to wavelength, $`A=\lambda /2=\pi /k`$ so we get $$R\frac{\lambda }{2\pi ^2}+\frac{3\pi ^2+1}{\lambda }x^2.$$ (6) We can now do the integral of Eq. (3), approximating $`dl=\sqrt{1+\pi \mathrm{sin}^2kx}dx`$ by just $`dx`$, and extending the limits of integration to infinity, to get $$E\sqrt{\lambda }e^{\alpha \lambda /(2\pi ^2)}.$$ (7) Since we keep the amplitude a fixed multiple of the wavelength, the period of the standing wave is just proportional to $`\lambda `$. If we consider a half wavelength of string, it emits bursts of energy $`E`$ twice per cycle, so the power is $`\lambda ^{1/2}e^{\beta \lambda }`$, and the power per unit length is $$P_L\lambda ^{3/2}e^{\beta \lambda }.$$ (8) ## Simulation The simulation is based on a lattice action, as described in . However, in the present case we have used a different lattice spacings in the 3 cardinal directions. The maximum speed of the string is quite large, and leads to a Lorentz contraction of the field profile in the direction of motion. To accurately represent the fields, the lattice spacing should be proportional to $`1/\gamma _i=\sqrt{1v_i^2}`$, where $`v_i`$ is the component of the string velocity along axis $`i`$. In the $`z`$ direction, where there is no motion, we have used a lattice spacing of 0.33, which seems to be the largest that gives reliable results. The corresponding spacings in the $`x`$ and $`y`$ directions are 0.31 and 0.10 respectively. The Courant condition requires $`\mathrm{\Delta }t<(\mathrm{\Delta }x^2+\mathrm{\Delta }y^2+\mathrm{\Delta }z^2)^{1/2}0.09`$, and in our simulations we use $`\mathrm{\Delta }t=0.08`$. To extract the energy which is emitted by the string, we have used absorbing boundary conditions on the $`y`$ and $`z`$ faces and accumulated at each step the amount of energy that they absorb. The conditions are $`n_iD_i\varphi `$ $`=`$ $`D_t\varphi ,`$ (10) $`𝐄_T`$ $`=`$ $`𝐧\times 𝐁,`$ (11) where $`D`$ denotes the covariant derivative, $`𝐧`$ the outward normal unit vector on each boundary, and $`𝐄_T𝐄𝐧(𝐄𝐧)`$ the transverse component of the electric field. This corresponds to the zeroth-order absorbing boundary condition for free electromagnetism. The energy flowing into the boundary is $$E_{absorbed}=_\mathrm{\Omega }𝐒𝑑𝐧$$ (12) where $`\mathrm{\Omega }`$ is the boundary surface, and $`𝐒`$ is the Poynting vector, given by $$S_j=D_t\overline{\varphi }D_j\varphi D_t\varphi D_j\overline{\varphi }+(𝐄\times 𝐁)_j.$$ (13) Using Eqs. (Simulation), we can rewrite Eq. (12) as $$E_{absorbed}=_\mathrm{\Omega }(2|D_t\varphi |^2+|𝐄_T|^2)𝑑\mathrm{\Omega },$$ (14) which is easily computed. To produce the sinusoidal waves, we use the same technique that we used to generate cusps in , i.e., we create two traveling wiggles on a straight string that will combine to produce the desired sinusoidal form. (In the Nambu-Goto approximation, the resulting sinusoid would be exact; in our case there will be a distortion of the shape due to the dynamics that occur before it is formed, but this effect will be small because the wiggles out of which the sinusoid forms are not themselves strongly curved.) The initial field configuration for a moving wiggle is known exactly, from a result of Vachaspati . The advantage of this technique is that it does not require the use of relaxation, as is necessary in other field theory simulation schemes . The two original wiggles will overlap to form a single wavelength of the standing wave, from one minimum of $`y`$ to the next. At this point it is possible to change to periodic boundary conditions in the $`x`$ direction, so that the straight part of the string is removed, and we are left with a single wavelength of standing wave in a periodic box. This technique was used to produce Figs. 1 and 2, but it is not an accurate method for extracting the radiation rate, because energy coming from different burst is not clearly separated by the time it reaches the boundaries. When each burst of radiation is emitted, the string changes its shape and amplitude and its subsequent evolution does not correspond to a constant amplitude sinusoid any more. Of course, for large enough $`\lambda `$ this would not matter, but for the wavelengths in the range of our simulation, it makes a significant difference. To avoid this problem, we allow the original wiggles to pass by each other beyond the point of the overlap, so that they generate just a single burst, and then separate. The place from which the burst is emitted is at the center of the overlapping region, and the string near that point has been following the same evolution as in a real standing wave for a half period, so we feel that this burst accurately represents a single burst of a standing wave oscillation. Using the expressions given above for the Poynting vector on the boundaries, we can compute the energy absorbed at each time step in our simulation. The energy absorption on the box faces is shown in Fig. 3. Integrating all the energy absorbed from the first burst of radiation we can compute the power emitted by a sinusoidal standing wave. We have repeated this procedure for different values of the wavelength ranging from $`\lambda =12`$ to 30, in natural units. In Fig. 4 we plot the power emitted per unit length and compare it with theoretical predictions from Eqs. (8) and (2). We see that the exponentially decaying model fits quite well. The line shown has the form $$P_L\lambda ^{3/2}e^{2.56R_0}$$ (15) with $`R_0\lambda /(2\pi ^2)`$, but we do not have sufficient accuracy to confirm the exponent of $`\lambda `$ or the exact constant. A curve with $`\lambda ^2`$ or $`\lambda ^1`$ and somewhat different constant would fit equally well. On the other hand, the form of does not fit at all. ## Discussion We have simulated large-amplitudestanding waves on local cosmic strings, and found an exponential decrease in radiation power with increasing wavelength. Our technique proceeds from separated wiggles with exact initial conditions, and we have used quite a small lattice spacing as compared to other authors, so we feel that our results are reliable. The constant in the exponential gets its dimensions from the string thickness ($`10^{30}\text{cm}`$ for a GUT scale string), and thus for waves of any reasonable cosmological size, the radiation is utterly negligible. One could in principle imagine that strings in cosmological networks still have excitations at wavelengths comparable to their thickness, but this does not seem reasonable. Such wiggles will be rapidly smoothed out by gravitational radiation, and there is no mechanism for regenerating them at such small scales. Thus we conclude that direct radiation of particles from string length cannot play a significant role in the production of cosmic rays or the maintenance of a scaling network. As a result, cosmic ray observations do not rule out field theories that admit cosmic strings, as claimed in . ## Acknowledgments We would like to thank Alex Vilenkin and Xavier Siemens for helpful conversations, and Southampton College, particularly Arvind Borde and Steve Liebling, for the use of their computer facilities. This work was supported in part by funding provided by the National Science Foundation. J. J. B. P. is supported in part by the Fundación Pedro Barrie de la Maza.
no-problem/9910/astro-ph9910113.html
ar5iv
text
# Impacts of the Detection of Cassiopeia A Point Source ## 1. Introduction Cassiopeia A (Cas A) is an interesting supernova (SN) remnant in various aspects. The remnant is very young, about 320 years old. This ring-shaped (e.g. Holt et al. 1994) remnant is associated with jet like structures (Fesen, Becker & Blair 1987). The observed abundances of heavy elements are in good agreement with the yields of a massive star (e.g., Hughes et al. 2000). The overabundance of nitrogen found in some knots (Fesen et al. 1987) implies that the progenitor was a massive Wolf-Rayet star (WN type) which has lost most of its H-rich envelope during the pre-SN evolution. The SN was suggested to be faint (Ashworth 1980), which implies that the progenitor was not a red-supergiant possibly due to loss of its H-rich envelope. Recently the ACIS on board the Chandra X-ray satellite observed Cas A and found a point-like source (Tananbaum et al. 1999). Subsequently, Aschenbach (1999) reported that the ROSAT/HRI image of Cas A taken during 1995-1996 also shows the point-like source at the similar location. Very recently Pavlov et al. (2000) and Chakrabarty et al. (2000) reported the results of their detailed analyses of the Cas A point-source data from the Chandra observation. These authors convincingly argue that the observed point source should, indeed, be a compact remnant of the SN explosion. The single power-law fit to the Chandra data by Pavlov et al. (2000) yields the higher photon index $`\mathrm{\Gamma }`$ and lower luminosity $`L`$ than those observed from typical young pulsars. The spectrum can be equally well fit by thermal models. The best fit for a one component blackbody model yields the temperature $`T^{\mathrm{}}=68`$ MK, the effective radius $`R_e=0.200.45`$ km, and the bolometric luminosity $`L^{\mathrm{}}=(1.41.9)\times 10^{33}`$ erg s<sup>-1</sup>. (In this paper the temperature $`T^{\mathrm{}}`$ and luminosity $`L^{\mathrm{}}`$ refer to the values to be observed at infinity.) Chakrabarty et al. (2000) obtained similar results. The size is too small for a 10 km radius neutron star (NS), but it is consistent if the dominant emission comes from localized hot spots. Pavlov et al. (2000) find that the spectrum is equally well fit by a two temperature thermal model with hydrogen polar caps and the rest of the cooling NS surface composed of Fe. These authors also analyzed the archival data from ROSAT and Einstein, and report that the results are consistent with the Chandra results within the 1 $`\sigma `$ level. Their data analyses of the point source showed no statistically significant variability (both long and short time scale) over the Einstein \- Chandra period. Chakrabarty et al. (2000) carried out detailed timing analysis and report that the 3 $`\sigma `$ upper limit on the sinusoidal pulsed fraction is $`<`$ 25% for period $`P>`$ 100 ms, $`<`$ 35% for $`P>`$ 5 ms, and $`<`$ 50% for $`P>`$ 1 ms. We emphasize here that the detection of the point source itself is extremely important, whether it turns out to be a neutron star (NS) or a black hole (BH). In this paper, therefore, we will consider both cases. Although the currently available data are not sufficient to distinguish between these options, the most recently completed long Chandra observation by S. Holt et al. (2000) and already planned long XMM observations should be able to do so. Therefore, we consider that it is extremely important and timely now, to discuss the implications and offer some predictions for each case. ## 2. Accreting Black Hole If the Cas A progenitor is more massive than $`25M_{}`$ a BH may be formed in the explosion (e.g., Ergma & van den Heuvel 1998). After formation the inner part of the ejected matter may fall back onto the BH due to the presence of a deep gravitational potential well or a reverse shock. The property of an accreting BH depends strongly on whether or not an accretion disk is formed. Here we present plausible BH scenarios based on the disk accretion model under the following observational constraints (see, e.g., Pavlov et al. 2000): (1) the single power-law X-ray luminosity of intermediate brightness, $`L_\mathrm{x}`$(0.1$``$5.0 keV) $`=(260)\times 10^{34}`$ erg s<sup>-1</sup> for distance $`d`$ = 3.4 kpc, which is much lower than the Eddington luminosity, $`L_{\mathrm{EDD}}7.5\times 10^{38}(M_{\mathrm{BH}}/3M_{})`$ erg s<sup>-1</sup> for hydrogen-free matter, (2) no significant variability being detected between the Einstein and Chandra observations, (3) large $`F_\mathrm{x}/F_{\mathrm{opt}}`$ ($`\text{ }>100`$), and (4) large power-law photon index, $`\mathrm{\Gamma }2.64.1`$. In our model, we assume that the fallback material has specific angular momentum greater than $`(GM_{\mathrm{BH}}r_\mathrm{S})^{1/2}`$ (where $`r_\mathrm{S}`$ is the Schwarzschild radius) and thus a fallback disk is formed. There is no efficient mechanism for angular-momentum removal since the Cas A compact remnant is unlikely to have a binary companion (§4). Then the disk evolution most likely obeys the self-similar solution in which the total angular momentum within the disk is kept constant (Pringle 1974; Mineshige, Nomoto, & Shigeyama 1993). This solution predicts that disk luminosity decays in a power-law fashion after the disk is formed (Mineshige et al. 1997) as $`lL/L_{\mathrm{EDD}}10(M_{\mathrm{fallback}}/0.1M_{})(\alpha /0.1)^{1.3}(t/320\mathrm{y}\mathrm{r})^{1.3}`$ $`(M_{\mathrm{BH}}/3M_{})^{1.15}`$, where $`M_{\mathrm{fallback}}`$ is the amount of fallback material and $`\alpha `$ is the viscosity parameter. We should allow a factor of $`0.110`$ changes depending on the distribution of matter and angular momentum. In order for the black-hole accretion scenario to be consistent with the observed $`l10^4`$ at 320yr, the amount of the fallback material should indeed be very small, $`M_{\mathrm{fallback}}10^6M_{}`$. Although the accretion models predict luminosity decrease during the last 20 years from the Einstein (in 1979) to the Chandra (in 1999) observations, it is small, only about 10% – $`(320/300)^{1.3}0.90`$. Since the Einstein observations include larger error bars, more than several tens of $`\%`$, the luminosity drop of this level cannot be detected, which is consistent with the lack of observed long range large scale variability. The luminosity of $`10^{33}`$ erg s<sup>-1</sup> is typical to Galactic BH candidates (GBHC) during quiescence. However, the constraint (3), the large $`F_\mathrm{x}/F_{\mathrm{opt}}`$ ratio, rules out models that invoke formation of a fallback disk whose properties are similar to those in quiescent GBHC (Chakrabarty et al. 2000). In the case of usual GBHC, hydrogen-rich matter is continuously added to the disk from the binary companion. According to the disk-instability model for outbursts of GBHC (Mineshige & Wheeler 1989), a part of the transferred material is accumulated in the outer parts of the disk, which inevitably produces large optical flux in the quiescent GBHC. Also the constraint (4), a large photon index $`\mathrm{\Gamma }`$, is in conflict with the ADAF (advection-dominated) model for the quiescent GBHC (Narayan, McClintock, & Yi 1996). For any ADAF models in which soft photons are provided only by internal synchrotron emission and no external soft photons are available, the power-law photon indices should be as small as $`\mathrm{\Gamma }1.7`$ (Tanaka & Lewin 1995). These are the reasons why Chakrabarty et al. (2000) did not favor an accreting BH model for the Cas A source. Here we propose a different promising BH model, a disk-corona type model, for which the above analogy to the GBHC is not valid. First, we consider the constraint (3), the large $`F_\mathrm{x}/F_{\mathrm{opt}}`$ ratio. In our model for Cas A, there is no binary companion which supplies mass at 320 years (§4). This means that the outer disk boundary is not extended enough to emit significant optical fluxes. The disk is stable due to the smaller disk size and the different composition of the disk material (mostly heavy elements with possibly a little He but no hydrogen; §4), i.e., the thermally unstable outer zones are absent. In the absence of an instability, the mass-flow rate in the disk is close to be constant (Mineshige et al. 1993). Then according to the standard disk model, the effective temperature is $`T(r)4000(M_{\mathrm{BH}}/3M_{})^{1/4}(r/10^{10}\mathrm{cm})^{3/4}(l/10^4)^{1/4}`$ K. For the disk size as small as $`r_\mathrm{d}\text{ }<10^{10}`$ cm and $`l10^4`$, the constraint $`L_{\mathrm{opt}}<10^{32}`$ erg s<sup>-1</sup> is satisfied. Next consider the constraint (4), the large $`\mathrm{\Gamma }`$. In order to reproduce large photon indices by Compton scattering, we require that energy input rate into soft photons exceeds that into electrons. It is important to note that GBHC generally exhibit two states, soft and hard, and a large $`\mathrm{\Gamma }`$ is a characteristics of the soft-state emission which exhibits soft blackbody spectra with $`kT1`$ keV. The radiation from thermal photons with $`1`$ keV times the area of the emission region around a typical black hole of 3 – 10 M produces higher luminosity, $`L_\mathrm{x}10^{3638}`$ erg s<sup>-1</sup>, than observed from Cas A. However, we emphasize that unlike GBHC no further mass input is available in our model. Then the accretion rate monotonically decreases, and so does the maximum blackbody temperature, as $`T_{\mathrm{max}}0.1(M/3M_{})^{1/4}(l/10^4)^{1/4}`$ keV. Therefore, we get $`T_{\mathrm{max}}`$ 0.1 keV for $`l10^4`$, instead of $``$ 1 keV. A large $`\mathrm{\Gamma }`$ is then naturally obtained in our model, since there is copious supply of soft-photons at $``$ 0.1 keV into electron clouds in the corona from the underlying cool disk (Mineshige, Kusunose, Matsumoto 1995). In other words, the important model parameter is $`\mathrm{}_{\mathrm{soft}}/\mathrm{}_{\mathrm{hard}}`$ (ratio of compactness parameter of soft photons to that of hard electrons), where the compactness parameter is proportional to the energy output rate divided by the size of the region. For $`\mathrm{}_{\mathrm{soft}}>\mathrm{}_{\mathrm{hard}}`$, we have a large spectral index ($`\mathrm{\Gamma }>`$ 2) because of efficient Compton cooling of hard electrons as shown in Mineshige et al. (1995). The spectral slope is rather insensitive to $`\dot{M}`$ and $`M`$. The conclusion is that with the low accretion rate and lower soft photon temperature our Compton model with a disk-corona configuration naturally yields large $`\mathrm{\Gamma }`$ with the observed luminosity. ## 3. Cooling Neutron Star Here let us assume that the observed Cas A point source is a NS. Pavlov et al. (2000) and Chakrabarty et al. (2000) convincingly argue that the dominant radiation observed by Chandra is most likely coming from polar hot spots or equatorial ring if it is a NS. Our main purpose in this section is to argue that it is still worthwhile to compare with theoretical models the observed upper limit to the cooling NS component (i.e., the radiation from the whole stellar surface excluding the hotter, localized areas). Pavlov et al. (2000) offered, as a possible model, a two-component thermal model where the temperature and radius of the polar caps with hydrogen are 2.8 MK and $``$ 1 km, respectively, while the rest of the surface of the 10 km NS consisting of Fe is at 1.7 MK. In this model, the hotter polar caps are the result of higher conductivity of hydrogen as compared with Fe, the temperature difference between the polar caps and the rest of the surface should be small, less than a factor of 2, and hence non-standard cooling should be excluded. Here we offer a promising alternative NS model. SN remnants are usually classified into two categories: shell-type and filled-center (plerions). Cas A is considered to be a prototype of the former, where radio pulsars are normally not found. Recently Pacini (2000) emphasized the evidence for the presence of an active NS in at least some of the shell-type SNRs although radio pulsars were not found. Also, there is some evidence for significant magnetospheric activities (which can be responsible for polar cap heating) in some NS where no radio pulsar has been found. An example is Geminga (e.g., see Tsuruta 1998). Therefore, the apparent absence of a radio pulsar and/or a plerion should not be used as evidence against polar cap heating. Chakrabarty et al. (2000) offers accretion as a possible cause for polar cap heating when the field strength is significant. If it is weak, their accreting NS model offers the hotter component as originating from the equatorial hot ring. In either case, with an additional heat source for the hotter component, larger temperature difference between the hotter and cooler components is expected, and hence there is no conflict with the possibility of faster non-standard cooling. We adopt the conservative upper limit to the cooler component given by Chandara (Pavlov et al. 2000), $`L^{\mathrm{}}<3\times 10^{34}`$ erg s<sup>-1</sup>. The neutron star thermal evolution is calculated with a general relativistic evolutionary code without making the isothermal approximation (Nomoto & Tsuruta 1987, Umeda et al. 1994, hereafter U94; Umeda, Tsuruta & Nomoto 1994, hereafter UTN94). Our results are summarized in Figure 1. The observed upper limit for Cas A is consistent with the ’standard’ cooling. However, it is still only an upper limit, and if the actual luminosity of the cooler component turns out to be $`10^{33}`$ erg s<sup>-1</sup> or less, the result will be extremely interesting. This is because then the observed value will be certainly below the standard cooling curve, and hence that will be considered the evidence for non-standard cooling scenarios such as those involving pion and/or kaon condensates, or the direct URCA process (e.g., U94; UTN94). When the particles in the stellar core are in the superfluid state with substantial superfluid energy gaps, neutrino emissivity $`l_\nu `$ is significantly suppressed (e.g., see Tsuruta 1998). In order to examine this effect of superfluidity, we calculated pion cooling for a representative superfluid model with an intermediate degree of suppression, called the E1 – 0.6 model (see U94). The result is shown as the thin solid curve in Figure 1. ## 4. Constraints from Progenitor Scenarios Here we discuss whether the formation of a NS or BH is consistent with the current models of stellar evolution and supernovae and whether the evolutionary scenarios constrain the radiation processes from the compact source. The overabundance of nitrogen in Cas A implies that the progenitor was a massive WN star which lost most of its hydrogen envelope before the SN explosion. Here we describe two possible evolutionary paths to form such a pre-SN WN star. One path is the mass loss of a very massive single star. A star with the zero-age main-sequence mass $`M_{\mathrm{MS}}`$ larger than $``$ 40 M can lose its hydrogen-rich envelope via mass loss due to strong winds and become a Wolf-Rayet star (e.g., Schaller et al. 1992). Recent theoretical models and population synthesis studies suggest that stars with $`M_{\mathrm{MS}}\text{ }>`$ 25 M are more likely to form BHs than NSs (e.g., Ergma & van den Heuvel 1998). This implies that the WN star progenitor is massive enough to form a BH. The explosion can be energetic enough to prevent too much matter fall back to be consistent with the small fall back mass inferred in §2. The other evolutionary path to form a pre-SN WN star is mass loss due to binary interaction. If the progenitor is in a close binary system with a less massive companion star, the star loses most of its H-rich envelope through Roche lobe overflow. In this case, the WN progenitor can form from a star of $`M_{\mathrm{MS}}\text{ }<40`$ M. Its SN explosion of type Ib/c would leave either a BH (if $`M_{\mathrm{MS}}2540`$ M) or a NS (if $`M_{\mathrm{MS}}\text{ }<25`$ M). If the compact remnant in Cas A turns out to be a NS, therefore, the progenitor must have been in a close binary system. In the binary scenario, the companion to the Cas A progenitor cannot be more massive than a red dwarf, as constrained from the R & I band magnitude limit (van den Bergh & Pritchet 1986). When the companion star is such a small mass star, i.e., the mass ratio between the stars is large, the mass transfer is inevitably non-conservative (e.g., Nomoto, Iwamoto, & Suzuki 1995), and the companion star will spiral-in into the envelope of the Cas A progenitor. In order for most of the H-rich envelope to be removed, the envelope should have been a red-giant size so that the orbital energy released during the spiral-in exceeds the binding energy of the envelope. After losing its envelope due to frictional heating, the star became a WN star. If we take the model of $`M_{\mathrm{MS}}=25M_{}`$, as an example, the star at the WN stage has 8 $`M_{}`$. Since the explosion ejects Si and Fe from the deep layers (Hughes et al. 2000), the mass of the compact remnants could not exceed $`23M_{}`$. Then the binary system is very likely to be disrupted at the explosion. Then the compact star in the Cas A remnant does not have a companion star, and so no mass transfer can be postulated. The implication is also that the accretion onto the compact remnant can occur only as a result of fallback of the ejected matter, and so the composition of the fall back matter is mostly heavy elements with possibly a small fraction of helium but no hydrogen. In either the single or binary scenario, the WN star blows a fast wind which collides with the red-giant wind material to form a dense shell (Chevalier & Liang 1989). If the red-giant wind formed a ring-like shell (due possibly to the spiral-in of the companion), the collision between the supernova ejecta and the shell could explain the observed ring-like structure of Cas A. ## 5. Discussion and Conclusion We agree with Chakrabarty et al. (2000) that for Cas A point source the usual ADAF model for a quiescent GBHC hardly reconcile with observation. However, we emphasized in §2 that there does exist a very promising BH disk accretion model. In this model, the fallback material is like the soft state of a GBHC with a disk-corona configuration, not like a quiescent GBHC with ADAF. With the low accretion rate and Comptonization of cooler soft photons ($``$ 0.1 keV or less), we naturally obtain large photon index of $`\mathrm{\Gamma }2.64.1`$ and lower luminosity of $`L10^{34}10^{35}`$ erg s<sup>-1</sup>, as observed from the Cas A point source. Accreting NS models are also possible (see Chakrabarty et al. 2000). However, we can still, without difficulty, distinguish between the BH and NS accretion models because the characteristic properties of the observed X-ray spectra in these two cases are quite different (e.g., see Tanaka 2000). For instance, the radiation from an accreting NS is dominated by thermal emission from the stellar surface (Rutledge et al. 2000), which is absent if a BH is involved. If the point source is a NS, the dominant radiation observed by Chandra most likely corresponds to the radiation from a localized small area. The detailed studies of theoretical light curves expected from anisotropic cooling of a NS have been carried out by, e.g., Shibanov et al. (1995) and Tsuruta (1998), with the latter including hot spots. The results show that pulsation depends on the relative angles between the rotation axis, magnetic axis and the line of sight. Depending on the combinations of these angles, pulsations from zero to up to about 30% are predicted, and so the observed constraints on the pulsed fraction are still consistent with a NS model. Although the current data of Cas A point-source can be consistent with both BH and NS scenarios, future observations by the Chandra, XMM, and other satellite missions should be able to distinguish between these cases. If distinct periodicity is found the point source definitely should be a NS. The existence of the NS itself will significantly constrain the progenitor scenario for Cas A. Better spectral information should be able to distinguish between the BH and NS as the compact remnant. If the source is found to be a BH, the implication is significant in the sense that this will offer the first observational evidence for BH formation through a SN explosion and greatly constrain the BH progenitor mass by combining with the abundance analysis of Cas A (Hughes et al. 2000). In conclusion we emphasize that the Cas A point source can potentially provide great impacts on the theories of supernova explosion, progenitor scenario, compact remnant formation, accretion to compact objects, and NS thermal evolution. We thank Drs. G. Pavlov, M. Rees, H. Tananbaum, B. Aschenbach, and J. Trümper for valuable discussions. This work has been supported in part by the grant-in-Aid for Scientific Research (0980203, 09640325), COE research (07CE2002) of the Ministry of Education, Science, Culture and Sports in Japan, and a NASA grant NAG5-3159.
no-problem/9910/astro-ph9910497.html
ar5iv
text
# High Resolution Radio Imaging of Distant Submillimeter Galaxies ## Radio Emission from Distant Galaxies in the HDF The diffuse radio emission observed in local starbursts is believed to be a mixture of synchrotron radiation (excited by supernovae remnants and hence directly proportional to the number of supernovae producing stars) and thermal radiation (from HII regions and hence an indicator of the number of O and B stars in a galaxy). As the thermal and synchrotron radiation of a starburst dissipates on a physical time scale of $`10^710^8`$ years, the radio luminosity is a true measure of the instantaneous star-formation rate (SFR) in a galaxy, uncontaminated by older stellar populations. Since supernovae progenitors are dominated by $``$8 $`M_{}`$ stars, synchrotron radiation has the additional advantage of being less sensitive to uncertainties in the initial mass function as opposed to UV and optical recombination line emission. However, the most obvious advantage of using the radio luminosity as a SFR tracer is its unsusceptibility to dust obscuration, as galaxies and the inter-galactic medium are transparent at centimeter wavelengths. The strong correlation between far-infrared and radio emission from local star-forming galaxies suggests that radio emission from distant, dust obscured galaxies should be visible at the microjansky level for luminous starbursts at redshifts less than 3-4. We have recently completed a deep radio survey of the Hubble Deep Field using both the Multi-Element Microwave Linked Interferometer (MERLIN) and the Very Large Array (VLA) at 1.4 and 8.5 GHz (Richards et al. 1998, AJ , 116, 1039; Richards 1999a, ApJL, 511, 1; Richards 1999b, ApJ, 2000, in press, astro-ph/9908313; Muxlow et al. 2000, in prep.) in order to study the nature of microjansky radio galaxies, and in particular understand their implication for galaxy evolution at early epochs. The optical identifications of the 72 radio sources detected in a complete sample ($`S_{1.4}`$ 40 $`\mu `$Jy or 6$`\sigma `$) on the HST images in the HDF and flanking fields show that: 70$`\pm `$10% of the optical identifications are associated with morphologically peculiar, merging and/or interacting galaxies, many with independent evidence for active star-formation (blue colors, infra-red excess, HII-like emission spectra). The remaining identifications are composed of low-luminosity FR Is, Seyferts, LINERs, and luminous star-forming field spirals at low redshift (representative identifications are shown in Figure 1). The radio spectral indices are in general steep ($`\alpha >0.5`$ ; $`S\nu ^\alpha `$) and the median radio angular size about 1-1.5<sup>′′</sup> , indicative of diffuse synchrotron emission in $`z=0.21.3`$ galactic disks. 20% of the radio sources cannot be identified to $`I_{AB}`$ = 25 in deep ground based images and to $`I_{AB}`$ = 28.5 in the HDF itself. These radio sources are likely distant, extreme starburst systems enshrouded in dust. This ’new’ population is discussed in more detail in Richards et al. (1999, ApJ, in press, astro-ph/9909251) and one source in particular, observed with HST-NICMOS by Waddington et al. (1999, ApJ, in press, astro-ph/9910069). Similar radio sources have recently been reported in the HDF-S (Norris et al. 1999, astro-ph/9910437). Thus the cosmological faint radio population is dominated by the distant analogs of local IRAS galaxies with suggested star-formation rates of 10-1000 $`M_{}`$ yr<sup>-1</sup>. In principle this radio selected starburst population allows for a derivation of the star-formation history, independent of optical selection biases. ## Detection of Distant Ultraluminous Radio Selected Starburst Galaxies In March and June 1999, we obtained shallow JCMT/SCUBA images of 14 optically faint radio sources in the Hubble Flanking fields. We detected 5 of these sources above 6 mJy at 850 $`\mu `$m. None of the 32 lower redshift (0.2 $`<z<`$ 1.3) radio sources in our field of view were detected. Comparison of our source counts with those from previous sub-mm surveys (Eales et al. 1999; Hughes et al. 1999; Barger, Cowie & Sanders 1999), shows that our radio selection technique recovers essentially all of the bright ($`S_{850}`$ $`\genfrac{}{}{0pt}{}{_>}{^{}}`$6 mJy) sub-mm sources. Thus there is an almost one-to-one correspondence between the bright sub-mm sky and the optically invisible microjansky radio sources ($`S_{1.4}`$ $`\genfrac{}{}{0pt}{}{_>}{^{}}`$40 $`\mu `$Jy). Based on the far-infrared to radio flux relationship observed in local starburst galaxies such as Arp 220, we modeled the redshifts of these optically faint, radio/sub-mm galaxies and found they likely lie at 1 $`\genfrac{}{}{0pt}{}{_<}{^{}}`$$`z`$ $`\genfrac{}{}{0pt}{}{_<}{^{}}`$3 (Carilli & Yun 1999, ApJL, 1999, 513, 13). We can use the sub-mm flux alone to estimate the overall luminosity, only weakly dependent on redshift because of the offsetting effects of far-IR spectral index and cosmological dimming. These values imply we are detecting ultraluminous infrared galaxies with 10<sup>12-13</sup>$`L_{}`$ , substantially more luminous than Arp 220. In the volume probed by our survey between 1 $`\genfrac{}{}{0pt}{}{_<}{^{}}`$$`z`$ $`\genfrac{}{}{0pt}{}{_<}{^{}}`$3 , this corresponds to a volume averaged star-formation rate of 0.4 $`M_{}`$ /yr/Mpc<sup>3</sup>, equivalent to the dust corrected optical value obtained by Steidel et al. (1999, ApJ, 519, 1). Thus optically ’invisible’ objects form an important constituent of the $`z>1`$ star-formation history, as shown by Barger, Cowie and Richards (1999, AJ submitted). These high redshift radio selected starburst galaxies are completely missing from the optical samples, implying that optical surveys give a biased view of the distant star-forming Universe. Our low redshift (0.1 $`<z<`$1.3) sample contains important information on the star-formation history as well. Based on the optical identifications in our deepest radio surveys, we have attempted to make a first guess of the radio determined star-formation history. We use both the radio properties, such as spectral index and morphology, as well as the optical morphology provided by HST images to cull a clean sample of star-forming systems (Richards et al. 1998; Richards 1999b; Haarsma et al. 1999). Our preliminary results are in general agreement with the dust corrected estimates of Steidel et al., although systematically higher at all redshifts. We interpret this as evidence of missing star-formation is the optical studies due to underestimates of the dust extinction. We cannot completely rule out the possibility that our radio samples have some contamination by low-luminosity AGN (i.e., Seyferts) which could also bring the radio and optical surveys into better agreement. Deeper high resolution radio observations, with complete spectroscopic coverage are needed to reveal the amount of ’hidden’ star-formation in the distant Universe. Only be combining, optical, radio, and far-infrared/sub-mm measurements of distant galaxies can a reliable consensus of their star-forming properties be obtained. It is a pleasure to thank my collaborators F. Bauer, A. Barger, L. Cowie, K. Kellermann, E. Fomalont, B. Partridge, R. Windhorst, T. Muxlow, I. Waddington and D. Haarsma. Support for part of this work was provided by NASA through grant HF-01123.01-99A from the STSI, which is operated by AURA, Inc., under NASA contract NAS5-2655. Figure Caption 1: Montage of radio/HST flanking field overlays. Contours are 1.4 GHz fluxes drawn at 2, 4, 8, 16, 32, 64 $`\sigma `$ ($`\sigma `$ = 4 $`\mu `$Jy). Greyscale is log strectch of HST I-band image 5<sup>′′</sup> on a side. Upper Left: I = 19.5 elliptical with weak radio AGN core at $`z=0.32`$. Twenty percent of the IDs are AGNs. Upper Right: I = 21.1 disk galaxy is at $`z=0.96`$ with a flat radio spectral index ($`\alpha `$ = 0.2). The optical spectra shows broad high excitation lines, suggesting the presence of a Seyfert core. Lower Left: A dramatic $`z=0.5`$ merger with a starburst core. This galaxy has about 1/3 the luminosity of Arp 220. About 60% of the radio IDs are of this variety. A rather bright I = 18.3 mag disk galaxy with an unusually steep radio spectrum of $`\alpha >`$ 1.6. There is no known star-forming, or spiral galaxy in the local Universe with such a steep non-thermal spectrum. About 10% of the radio sources fit into this ultrasteep class.
no-problem/9910/hep-th9910266.html
ar5iv
text
# 1 Introduction ## 1 Introduction During the past couple of years, significant progress has been made in our understanding of M-theory beyond the BPS configurations. These advances have already permitted to test the web of dualities relating the different phases of M-theory (the different superstring theories) on some of the non-BPS states of the spectrum, and a beautiful outlook on the interplays between BPS branes and non-BPS branes has been given in some cases (see - for reviews and references therein) along with an elegant mathematical formulation . For example, a BPS Dp-brane of type II theory may be viewed as coming from a non-BPS system given by a D(p+2), anti D(p+2) pair , . The instability of this non-BPS configuration manifests itself in a complex tachyonic mode of the open string stretched between the pair. When the pair coincides, the tachyon rolls down to a true vacuum and condenses leading to a stable vortex-like configuration, and the resulting object is a BPS Dp-brane. Recently, it has been argued that a similar mechanism could also produce fundamental strings. Namely, a fundamental string in the Type II theory could be described as a bound state of any pairs of stable Dp, anti-Dp branes where the tachyon of a D(p-2)-brane stretched between them condenses. In , the author considers in some detail the case $`p=4`$, i.e. the condensation of a D2-brane stretched between a D4, anti-D4 pair. Since the tachyonic condensing charged object is in this case extended (a tachyonic worldvolume string), there are no direct ways to describe quantitatively this type of mechanism. Nevertheless, the existence of this process can be deduced from the following three steps consideration . One first notices that the usual process of creation of a D2-brane in terms of the condensation of a fundamental string should have a description in M-theory in terms of creation of an M2-brane by condensation of a stretched M2 between a pair of M5, anti-M5 branes. Then one identifies in the worldvolume effective action of the brane anti-brane pair the Wess-Zumino term responsible for this process. Finally upon dimensional reduction one finds two Wess-Zumino terms, one describing the usual realisation of the D2 as a soliton, the other describing the fundamental string. In this paper, we propose to extend in a systematic way this kind of consideration. We study all the possible realisations of branes of M and type II theories as topological solitons of a brane-antibrane system by looking at the Wess-Zumino terms of the worldvolume effective actions of the different brane anti-brane pairs in M-theory. The paper is organised as follows. In section 2 we study the $`\text{M5}\text{}\overline{\text{M5}}`$ system, reviewing the work of Yi . Section 3 discusses the $`\text{MKK}\text{}\overline{\text{MKK}}`$ system, and in section 4 we study the $`\text{M9}\text{}\overline{\text{M9}}`$ case. Section 5 considers all brane-antibrane systems in M-theory in which M-waves are involved, in particular the $`\text{M2}\text{}\overline{\text{M2}}`$ case. Finally, section 6 analyses type IIB branes from the same kind of brane- antibrane systems. The last section contains a summary and some discussions. ## 2 The $`\text{M5}\text{}\overline{\text{M5}}`$ system In this section we briefly review the results of ref.. We start with the $`\text{M5}\text{}\overline{\text{M5}}`$ system and analyse the process of annihilation of a pair of M5, anti-M5 branes in terms of the tachyonic condensation of an M2-brane stretched between this pair. In particular, one can identify the following coupling in the M5-brane, $`\overline{\text{M5}}`$-brane worldvolume action: $$_{R^{5+1}}\widehat{C}d\widehat{a}^{(2)},$$ (2.1) as the one describing the emergence of an M2-brane soliton when the stretched tachyonic M2-brane condenses . Here $`\widehat{C}`$ is the 3-form of eleven dimensional supergravity and $`\widehat{a}^{(2)}`$ the worldvolume 2-form present in the action of the M5-brane<sup>1</sup><sup>1</sup>1 Hats on target space fields indicate that they are 11-dimensional. We use hats as well for the worldvolume fields of branes in 11 dimensions.. This 2-form is self-dual for a single M5-brane, however for an M5, anti-M5 pair it is unrestricted , given that in the anti-M5 brane effective action it is anti-self-dual and both contributions are combined to describe the coinciding M5, anti-M5 pair<sup>2</sup><sup>2</sup>2The complete WZ term including the coupling of the complex tachyonic field has been constructed in for D brane anti-D brane systems.. The topologically non-trivial tachyonic condensation of an M2 is though to be accompanied by a localised magnetic flux $`d\widehat{a}^{(2)}`$. Integrating over the flux on a transverse $`R^3`$ one finds: $$_{R^{2+1}}\widehat{C},$$ (2.2) which means that the condensation of the tachyonic mode of the M2 gives rise to the annihilation of the M5 anti-M5 pair into an M2-brane, since this is the way the M2-brane couples, minimally, to $`\widehat{C}`$. The dimensional reduction of the stretched M2-brane between the two M5-branes, along an M5-brane worldvolume direction, gives a fundamental string stretched between a D4 and an anti-D4 branes, if the reduction takes place along the M2-brane, or a D2-brane stretched between two D4, anti-D4 branes, if the reduction takes place along a direction transverse to the M2-brane. These two processes are described by the worldvolume reduction of (2.1)<sup>3</sup><sup>3</sup>3We ignore all numerical prefactors.: $$_{R^{5+1}}\widehat{C}d\widehat{a}^{(2)}_{R^{4+1}}C^{(3)}db^{(1)}+_{R^{4+1}}B^{(2)}da^{(2)}$$ (2.3) where: $$\widehat{a}_{\mu 5}^{(2)}=b_\mu ^{(1)},\widehat{a}_{\mu \nu }^{(2)}=a_{\mu \nu }^{(2)},\mu =0,1,\mathrm{},4,$$ (2.4) and $`C^{(3)}`$ denotes the RR 3-form and $`B^{(2)}`$ the NS-NS 2-form of the Type IIA theory. The first term describes a stretched fundamental string, coupled to $`b^{(1)}`$, and the second a stretched D2-brane, coupled to $`a^{(2)}`$. Considering the first term, integration over the localised magnetic flux $`db^{(1)}`$, which accompanies the topologically non-trivial condensation of the tachyon of the string , along a transverse $`R^2`$ gives the coupling: $$_{R^{2+1}}C^{(3)}$$ (2.5) which is the way the RR 3-form couples to the D2-brane. Therefore, one recovers the fact that the condensation of the stretched fundamental string, coupled to $`b^{(1)}`$, gives a solitonic D2-brane. Reversing the logic, the well-established process of creation of a D2-brane legitimates by oxidation the process of M2 creation described above. On the other hand, a similar argument on the second term of (2.3) shows that the mechanism of tachyonic condensation of the stretched D2-brane, coupled to $`a^{(2)}`$ , upon integration over the localised flux $`da^{(2)}`$ on a transverse $`R^3`$ (flux which should accompany this topologically non-trivial process) gives: $$_{R^{1+1}}B^{(2)},$$ (2.6) describing a solitonic fundamental string. One can describe as well the process in which a D2-brane stretched between a NS5, anti-NS5 pair of branes condenses, giving rise to a D2-brane soliton. The corresponding coupling is given by the reduction of (2.1) along a direction perpendicular to the M5-branes worldvolume: $$_{R^{5+1}}C^{(3)}da^{(2)}.$$ (2.7) Here the integration over the localised flux $`da^{(2)}`$ on a transverse $`R^3`$ gives the minimal coupling of a D2-brane: $$_{R^{2+1}}C^{(3)}.$$ (2.8) ## 3 The $`\text{MKK}\text{}\overline{\text{MKK}}`$ system Following the same reasoning as in the previous section we now analyse the different possible processes starting with the $`\text{MKK}\text{}\overline{\text{MKK}}`$ system by looking at the relevant terms in the worldvolume effective action of the Kaluza-Klein monopole. The worldvolume effective action of the M-theory Kaluza-Klein monopole was constructed in . The existence of the Taub-NUT direction in the space transverse to the monopole is implemented at the level of the effective action by introducing a Killing isometry which is gauged in the worldvolume. Then the target space fields must couple in the worldvolume with covariant derivatives of the embedding scalars, or through contraction with the Killing vector. The Kaluza-Klein monopole is charged with respect to an 8-form, which is the electric-magnetic dual of the Killing vector considered as a 1-form. This field is itself contracted with the Killing vector, giving a 7-form minimally coupled to the 7 dimensional worldvolume of the monopole. The worldvolume effective action of the monopole contains the following term : $$_{R^{6+1}}i_{\widehat{k}}\widehat{\stackrel{~}{C}}d\widehat{b}^{(1)},$$ (3.1) where $`\widehat{\stackrel{~}{C}}`$ denotes the 6-form of eleven dimensional supergravity, $`\widehat{k}`$ is the Killing vector, with $`(i_{\widehat{k}}\widehat{\stackrel{~}{C}})_{\widehat{\mu }_1\mathrm{}\widehat{\mu }_5}\widehat{k}^{\widehat{\mu }_6}\widehat{\stackrel{~}{C}}_{\widehat{\mu }_1\mathrm{}\widehat{\mu }_6}`$, and $`\widehat{b}^{(1)}`$ is a 1-form worldvolume field which describes the coupling to an M2-brane wrapped on the Taub-NUT direction. The same coupling appears in the effective action of the $`\text{MKK}\text{}\overline{\text{MKK}}`$ pair, where now $`\widehat{b}^{(1)}=\widehat{b}_1^{(1)}\widehat{b}_2^{(1)}`$, with $`\widehat{b}_{1,2}^{(1)}`$ the corresponding vector field in the worldvolume of each monopole. After condensation of the tachyonic mode of the M2-brane, the integration of the localised flux $`d\widehat{b}^{(1)}`$ on a transverse $`R^2`$ gives $$_{R^{4+1}}i_{\widehat{k}}\widehat{\stackrel{~}{C}}$$ (3.2) i.e. an M5-brane soliton with one worldvolume direction wrapped around the Taub-NUT direction. Therefore we can describe a (wrapped) M5-brane soliton, as the condensation of an M2-brane stretched between the Kaluza-Klein anti Kaluza-Klein monopole pair. We can now analyse the different possible processes in the type IIA theory to which this process gives rise. Dimensionally reducing along the Taub-NUT direction of the monopole we can describe a D4-brane through the condensation of an open string stretched between a D6, anti-D6 pair . This is described by the coupling: $$_{R^{6+1}}C^{(5)}db^{(1)}$$ (3.3) which is straightforwardly obtained by reducing the coupling (3.1) describing the creation of the solitonic (wrapped) M5-brane. The reduction along a worldvolume direction of the monopole gives as one of the possible configurations a solitonic NS5-brane, obtained after the condensation of an open string stretched between a Type IIA pair of Kaluza-Klein anti-Kaluza-Klein monopoles. The worldvolume reduction of (3.1) gives: $$_{R^{5+1}}i_kC^{(5)}db^{(1)}+_{R^{5+1}}i_kB^{(6)}db^{(0)}$$ (3.4) where $`b^{(0)}`$ arises as the component of $`\widehat{b}^{(1)}`$ along the worldvolume direction that is being reduced. These terms describe two processes: One in which a (wrapped) D4-brane is created after condensation of a (wrapped) D2-brane stretched between two Type IIA monopoles, described by the first term, and one in which a (wrapped) NS5-brane is created after the condensation of a (wrapped) open string. This is, to our knowledge, the first example in which a NS5-brane has been described through a brane anti-brane pair annihilation. There is another configuration giving rise to a solitonic NS5-brane, though it occurs after the annihilation of a pair of so-called exotic branes, i.e. branes not predicted by the analysis of the spacetime supersymmetry algebra. This process is obtained after the reduction of the M2-brane stretched between the KK, anti-KK pair along a direction transverse to the monopoles, but different from the Taub-NUT direction. This reduction of the M-theory Kaluza-Klein monopole gives rise to a Kaluza-Klein type of solution in ten dimensions whose transverse space is not a four dimensional euclidean Taub-NUT space, as for the conventional Kaluza-Klein monopole, but three dimensional. Therefore this solution is not asymptotically flat but logarithmically divergent. Moreover it is not predicted by the analysis of the Type IIA spacetime supersymmetry algebra (see for a possible explanation of this fact). We will however discuss briefly this type of configurations because they give rise to interesting descriptions of NS-NS branes in terms of brane anti-brane annihilation. We will denote the brane obtained through this reduction of the M-theory KK-monopole as a KK6-brane<sup>4</sup><sup>4</sup>4The Type IIA Kaluza-Klein monopole would then be denoted as a KK5-brane.. The coupling in the KK6-brane worldvolume action reads: $$_{R^{6+1}}i_kB^{(6)}db^{(1)}.$$ (3.5) This describes a (wrapped) NS5-brane after condensation of a (wrapped) D2-brane stretched between a KK6, anti-KK6 pair of branes. We now turn to the “dual” process in the $`\text{MKK}\text{}\overline{\text{MKK}}`$ system , i.e. obtaining an M2-brane soliton after the condensation of an M5-brane. A wrapped M5-brane is coupled in the worldvolume of a higher dimensional brane through a 4-form field, which in the 7 dimensional worldvolume of the monopole is dual to the vector field $`\widehat{b}^{(1)}`$ describing the coupling of a wrapped M2-brane. Given a dual pair of worldvolume fields only one can couple at the same time in the worldvolume effective action, given that both of them carry the same number of degrees of freedom. Therefore we need to dualise the vector field $`\widehat{b}^{(1)}`$ in the Kaluza-Klein monopole effective action. First, one adds a term $$_{R^{6+1}}𝑑\widehat{b}^{(1)}d\widehat{b}^{(4)}$$ (3.6) to the action, and then integrates $`\widehat{b}^{(1)}`$ from its equation of motion<sup>5</sup><sup>5</sup>5This step is normally quite involved due to the complicated form of the Born-Infeld part.. Since $`\widehat{b}^{(1)}`$ couples through its gauge invariant field strength: $$\widehat{F}^{(2)}=d\widehat{b}^{(1)}+(i_{\widehat{k}}\widehat{C}),$$ (3.7) we can write (3.6) as $$_{R^{6+1}}(\widehat{F}^{(2)}(i_{\widehat{k}}\widehat{C}))d\widehat{b}^{(4)},$$ (3.8) from where a term $$_{R^{6+1}}(i_{\widehat{k}}\widehat{C})d\widehat{b}^{(4)}$$ (3.9) is already known to be coupled to the dual action without the need to eliminate explicitly $`\widehat{F}^{(2)}`$ from its equation of motion. This term describes an M2-brane soliton. Indeed, the condensation of the wrapped M5-brane, accompanied by a localised $`d\widehat{b}^{(4)}`$-flux over a transverse $`R^5`$, gives a coupling: $$_{R^{1+1}}(i_{\widehat{k}}\widehat{C}),$$ (3.10) which is the way the eleven dimensional 3-form couples to a wrapped M2-brane. Therefore, this is the soliton that is produced after the condensation. The creation of a fundamental string in the Type IIA theory is now described as the reduction of the stretched M5-brane between the Kaluza-Klein anti Kaluza-Klein pair over the Taub-NUT direction of the monopole. This gives a D4-brane stretched between a D6, anti-D6 pair. The reduction of the coupling (3.9) gives: $$_{R^{6+1}}B^{(2)}db^{(4)}.$$ (3.11) Therefore, we find: $$_{R^{1+1}}B^{(2)}$$ (3.12) after the integration of the 4-form, describing a solitonic fundamental string . Three other possible brane anti-brane annihilation processes can be deduced in Type IIA from the M-theory Kaluza-Klein anti-Kaluza-Klein annihilation that we have just discussed. If we reduce this process along a worldvolume direction of the Kaluza-Klein monopole we find the following couplings: $$_{R^{5+1}}i_kB^{(2)}db^{(4)}+_{R^{5+1}}i_kC^{(3)}db^{(3)}$$ (3.13) where: $$\widehat{b}_{\mu _1\mathrm{}\mu _4}^{(4)}=b_{\mu _1\mathrm{}\mu _4}^{(4)},\widehat{b}_{\mu _1\mathrm{}\mu _36}^{(4)}=b_{\mu _1\mathrm{}\mu _3}^{(3)},\mu =0,1,\mathrm{},5.$$ (3.14) The first term in (3.13) describes a fundamental string created after the condensation of a NS5-brane stretched between a pair of Type IIA Kaluza-Klein monopole anti-monopole. Both the fundamental string and the NS5-brane are wrapped on the Taub-NUT direction of the monopole. On the other hand, the second term represents a wrapped D2-brane arising after the condensation of a wrapped D4-brane stretched between the two monopoles. The third case is obtained reducing (3.9) along a direction transverse to the monopole but different from its Taub-NUT direction. We obtain the following coupling in the seven dimensional worldvolume of a KK6-brane: $$_{R^{6+1}}i_kC^{(3)}db^{(4)}.$$ (3.15) This describes a D2-brane wrapped around the NUT direction of the KK6-brane, and it is obtained after the condensation of a (wrapped) NS5-brane stretched between the brane anti-brane pair. ## 4 The $`\text{M9}\text{}\overline{\text{M9}}`$ system In this section we study the possible processes of creation of branes after tachyonic condensation in the $`\text{M9}\text{}\overline{\text{M9}}`$ system. We identify the coupling in the M9-brane effective action responsible for this condensation and analyse the related processes of creation of branes in type IIA. For this purpose it is important to recall that the M9-brane contains a gauged direction in its worldvolume , such that the field content is that of the nine dimensional vector multiplet. Reduction along this direction gives the D8-brane effective action, whereas the reduction along a different worldvolume direction gives the KK8-brane, another so-called exotic brane of the Type IIA theory, in the sense that it is not predicted by the analysis of the Type IIA spacetime supersymmetry algebra. This brane contains as well a gauged direction in its worldvolume, inherited from that of the M9-brane, and has been studied in connection with the 7-brane of the Type IIB theory in , where its worldvolume effective action has been derived. The Wess-Zumino term of the M9-brane effective action has not yet been constructed in the literature. However, from the effective action of the KK8-brane we can deduce the presence of the following term in the M9-brane worldvolume effective action: $$_{R^{8+1}}i_{\widehat{k}}\widehat{N}^{(8)}d\widehat{b}^{(1)}.$$ (4.1) Let us stress that the M9-brane is effectively an 8-brane, given that it has one of its worldvolume directions gauged. Therefore the integration takes place over a nine dimensional space-time. The contraction of the 8-form potential $`\widehat{N}^{(8)}`$ with the (worldvolume) Killing direction, denoted by $`\widehat{k}`$, is the field to which the M-theory Kaluza-Klein monopole couples minimally<sup>6</sup><sup>6</sup>6As we mentioned in the previous section here $`\widehat{N}^{(8)}`$ is the electric-magnetic dual of the Killing vector considered as a 1-form.. $`\widehat{b}^{(1)}`$ is a 1-form worldvolume field describing a wrapped M2-brane ending on the M9-brane . The reduction of this term along a worldvolume direction different from the gauged direction gives the terms: $$_{R^{7+1}}i_kN^{(8)}db^{(0)}+_{R^{7+1}}i_kN^{(7)}db^{(1)}$$ (4.2) present in the effective action of the KK8-brane (expression (4.6) of reference ), and where: $$\widehat{b}_\mu ^{(1)}=b_\mu ^{(1)},\widehat{b}_8^{(1)}=b^{(0)},\mu =0,1,\mathrm{},7,$$ (4.3) and $`i_{\widehat{k}}\widehat{N}^{(8)}`$ gives rise to the two fields $`i_kN^{(8)}`$, $`i_kN^{(7)}`$. $`i_kN^{(8)}`$ is the field that couples minimally to the KK6-brane that we considered in the previous section, whereas the ordinary Type IIA Kaluza-Klein monopole is charged with respect to $`i_kN^{(7)}`$. The coupling (4.1) in the M9-brane worldvolume effective action describes the process in which a Kaluza-Klein monopole is created after the condensation of a, wrapped, M2-brane stretched between an M9, anti-M9 pair of branes. As in the previous sections the pair is described by choosing $`\widehat{b}^{(1)}`$ as the difference of vector fields in each brane. Indeed, integration over the localised magnetic $`\widehat{b}^{(1)}`$-flux, associated to the wrapped M2-brane, on a transverse $`R^2`$ gives: $$_{R^{6+1}}i_{\widehat{k}}\widehat{N}^{(8)}$$ (4.4) i.e. the field minimally coupled to the Kaluza-Klein monopole. Reducing (4.1) along the isometric worldvolume direction denoted by $`\widehat{k}`$ we can describe the process in which a D6-brane is created after the condensation of an open string stretched between a pair of D8, anti-D8 branes . The resulting coupling in the worldvolume effective action of the D8 brane is given by: $$_{R^{8+1}}C^{(7)}db^{(1)}$$ (4.5) where $`C^{(7)}`$ is the RR 7-form potential of the Type IIA theory and is obtained through the reduction: $$i_{\widehat{k}}\widehat{N}^{(8)}=C^{(7)}+\mathrm{}$$ (4.6) (see for the details of this reduction). Therefore the condensation of a fundamental string, accompanied by a localised $`db^{(1)}`$ flux over a transverse $`R^2`$, gives a D6-brane soliton: $$_{R^{6+1}}C^{(7)}.$$ (4.7) Instead, we can reduce (4.1) along a worldvolume direction of the M9-brane, in which case we obtain a pair of KK8, anti-KK8 branes with the following couplings: $$_{R^{7+1}}i_kN^{(7)}db^{(1)}+_{R^{7+1}}i_kN^{(8)}db^{(0)},$$ (4.8) derived already in (4.2). The first coupling describes a Type IIA Kaluza-Klein monopole, obtained after the condensation of a D2-brane. The Taub-NUT direction of the monopole coincides with the gauged worldvolume direction of the KK8 branes, and the stretched D2-brane is also wrapped around this direction. The second coupling describes a KK6-brane, obtained after the condensation of a (wrapped) open string stretched between the two KK8, anti-KK8 branes. Again the Killing direction of the KK6 and KK8-branes coincide. A last reduction on (4.1) can be performed along the transverse direction. Such a reduction on the M9 gives rise to the NS9A-brane predicted by the Type IIA spacetime supersymmetry algebra . We have thus the possibility of obtaining a KK6 after the condensation of a (wrapped) D2 stretched between a NS9A, anti-NS9A pair. As in the previous section, the process in which a wrapped M2-brane is created through the condensation of a Kaluza-Klein monopole stretched between the pair of M9, anti-M9 branes is described by the coupling dual to (4.1). The Kaluza-Klein monopole must be coupled to the (9 dimensional) worldvolume of the M9-brane through a 6-form worldvolume field $`\widehat{b}^{(6)}`$, which must be the worldvolume dual of the vector field $`\widehat{b}^{(1)}`$. In the dualisation process we find: $$_{R^{8+1}}𝑑\widehat{b}^{(1)}d\widehat{b}^{(6)}=_{R^{8+1}}(\widehat{F}^{(2)}(i_{\widehat{k}}\widehat{C}))d\widehat{b}^{(6)},$$ (4.9) from where we already identify a coupling: $$_{R^{8+1}}i_{\widehat{k}}\widehat{C}d\widehat{b}^{(6)}$$ (4.10) in the dual effective action. Integration over a localised magnetic $`\widehat{b}^{(6)}`$-flux, associated to the Kaluza-Klein monopole, on a transverse $`R^7`$ gives: $$_{R^{1+1}}i_{\widehat{k}}\widehat{C}$$ (4.11) describing a wrapped M2-brane soliton. The reduction of (4.10) along the $`\widehat{k}`$-direction gives: $$_{R^{8+1}}B^{(2)}db^{(6)}.$$ (4.12) Here $`b^{(6)}`$ is associated to a D6-brane, and the integration of its localised magnetic flux on a transverse $`R^7`$ gives: $$_{R^{1+1}}B^{(2)},$$ (4.13) describing a fundamental string soliton . As before, we can also analyse the process obtained by reducing (4.10) along a worldvolume direction. We obtain: $$_{R^{7+1}}i_kC^{(3)}db^{(5)}+_{R^{7+1}}i_kB^{(2)}db^{(6)},$$ (4.14) where $`b^{(5)}`$ arises from the reduction of $`\widehat{b}^{(6)}`$ along the worldvolume direction. The first term describes a (wrapped) D2-brane, occuring after the condensation of a KK5 monopole whose Killing direction coincides with the Killing direction of the KK8, anti-KK8 branes obtained after the reduction. The second term describes a (wrapped) fundamental string, realised in this case after the condensation of a KK6-brane stretched between the pair of KK8, anti-KK8 branes. Finally, reducing (4.10) along the transverse direction leads to a process where a (wrapped) D2-brane is produced after the condensation of a KK6-brane stretched between a pair of NS9A, anti-NS9A branes. ## 5 Brane-antibrane systems in M-theory involving M-waves In this section we analyse, among other processes, the brane-antibrane system in M-theory giving rise to the process in which a fundamental string stretched between a D2, anti-D2 pair condenses and a solitonic D0-brane is produced. This process is described in M-theory by a wrapped M2-brane stretched between a pair M2, anti-M2. When the tachyonic mode of the wrapped M2-brane condenses an M-wave soliton is produced. In order to identify the Wess-Zumino coupling in the worldvolume effective action of the M2, anti-M2 pair of branes that is responsible for this process let us first analyse its dual configuration, i.e. that in which an M-wave is “stretched” between the brane anti-brane pair and a wrapped M2-brane is created after its tachyonic mode condenses. The so-called M-wave is a pp-wave in M-theory carrying momentum along a given direction, which we will denote by $`y`$. An M-wave ending on another M-brane is described in the worldvolume effective action of the latter by its coupling to the embedding scalar $`y`$. Therefore in order to have a non-trivial condensation of the tachyonic mode of the M-wave this direction must be topologically non-trivial. Indeed, the coupling in the worldvolume effective action of the M2, anti-M2 pair of branes that is responsible for the condensation of the tachyonic mode of the M-wave is given by: $$_{R^{2+1}}i_{\widehat{h}}\widehat{C}dy,$$ (5.1) where $`\widehat{h}`$ denotes a Killing vector in the $`y`$-direction. Integration over a localised magnetic $`dy`$-flux gives a coupling: $$_{R^{1+1}}(i_{\widehat{h}}\widehat{C}),$$ (5.2) describing an M2-brane wrapped on the $`y`$-direction. The dual process is now described by dualising this coupling. For this we need to recall that the field strength associated to the $`y`$-field, which appears in the worldvolume effective action of the M-theory pp-wave , is $`\widehat{F}^{(1)}=dy+\widehat{h}^2\widehat{h}_\mu dx^\mu `$, where $`\mu `$ runs over all directions but the $`y`$ direction. Therefore, we find a coupling: $$_{R^{2+1}}\widehat{h}^2\widehat{h}_\mu 𝑑x^\mu d\widehat{b}^{(1)},$$ (5.3) $`\widehat{b}^{(1)}`$ being the worldvolume dual of the scalar $`y`$. Integrating now over a localised $`\widehat{b}^{(1)}`$-flux we end up with the coupling: $$_R\widehat{h}^2\widehat{h}_\mu 𝑑x^\mu $$ (5.4) which is the field minimally coupled to the M-wave moving in the $`\widehat{h}`$ direction. Therefore condensation of the wrapped M2-brane coupled to $`\widehat{b}^{(1)}`$ in the worldvolume of the pair produces an M-wave moving in the direction on which the stretched M2-brane is wrapped. We can now analyse which are the Type IIA brane anti-brane annihilation processes to which these two systems give rise. It is particularly interesting to consider the reduction along the $`y`$-direction. The coupling (5.1) gives rise to: $$_{R^{2+1}}B^{(2)}dy.$$ (5.5) Therefore it describes a fundamental string, through the condensation of a D0-brane stretched between the D2, anti-D2 pair of branes obtained after the reduction. On the other hand, the coupling (5.3) gives: $$_{R^{2+1}}C^{(1)}db^{(1)}$$ (5.6) which in turn describes a solitonic D0-brane after condensation of a fundamental string. Therefore we can conclude that the M2, anti-M2 systems that we have discussed are the origin in M-theory of both the D0-brane creation studied by Sen and the creation of a fundamental string in a D2, anti-D2 system after D0-brane condensation . Reduction along a direction different from $`y`$ gives rise to two interesting processes which, as we will see, generalise to all branes in Type II theories. From worldvolume reduction we obtain a process in which a wrapped fundamental string stretched<sup>7</sup><sup>7</sup>7 The string is in fact not really properly stretched between the pair being wrapped in the $`y`$–direction. between a pair F1, anti-F1 gives rise to a pp-wave moving in the direction on which the stretched string is wrapped. Of course we also find the dual to this process, i.e. a solitonic wrapped fundamental string emerging after the condensation of a pp-wave in the same brane configuration. Similarly, after doing a direct dimensional reduction we obtain the same type of configurations but for D2-branes. A wrapped D2-brane is created after a pp-wave condenses in a D2, anti-D2 pair, and a pp-wave can also be created if instead a wrapped D2-brane condenses. An M-wave, being coupled in the worldvolume of an M-brane through the worldvolume scalar labelling the direction in which it propagates, can end on any of the branes of M-theory. Therefore we can analyse the same two processes that we have studied in the $`\text{M2}\text{}\overline{\text{M2}}`$ case on M5, MKK or M9 brane anti-brane systems. We are not going to repeat in detail the corresponding analysis of the Wess-Zumino terms responsible for these processes, since the reasoning goes straightforwardly as for the M2 system. Let us just mention that, together with the process in which any wrapped Type IIA p-brane<sup>8</sup><sup>8</sup>8Also NS-NS ones, like the fundamental string that we have just considered. stretched between a p-brane anti p-brane pair gives rise to a pp-wave soliton, and its dual (wrapped p-brane creation through condensation of a pp-wave), we find the following processes: $$(\mathrm{NS5},\overline{\text{NS5}};\mathrm{D4}\mathrm{D0})$$ $$(\mathrm{NS5},\overline{\text{NS5}};\mathrm{D0}\mathrm{D4})$$ $$(\mathrm{KK6},\overline{\text{KK6}};\mathrm{KK5}\mathrm{D0})$$ $$(\mathrm{KK6},\overline{\text{KK6}};\mathrm{D0}\mathrm{KK5})$$ $$(\mathrm{NS9},\overline{\text{NS9}};\mathrm{KK8}\mathrm{D0})$$ $$(\mathrm{NS9},\overline{\text{NS9}};\mathrm{D0}\mathrm{KK8})$$ where we use a simplified notation to indicate that the condensation of a brane (third column) stretched between the brane-antibrane pair given by the first and second columns gives rise to a certain brane soliton, specified by the fourth column. Most of these processes involve exotic branes, but it is particularly interesting to see that a D0-brane can be realised as a solitonic configuration in an NS5, anti-NS5 annihilation process. This represents a novel way of realising this brane soliton other than through, the more conventional, D1, anti-D1 annihilation process . ## 6 Type IIB branes from brane-antibrane systems In the previous sections we have derived all the Type IIA branes that can possibly occur as solitons after the condensation of a tachyonic brane stretched between a pair brane-antibrane. This derivation was performed by direct reduction from M-theory, where the Wess-Zumino term in the worldvolume effective action of the brane-antibrane pair responsible for the process was identified. It is now straightforward to derive the T-dual couplings responsible for brane creation in a brane-antibrane system in the Type IIB theory. Together with the realisation of Dp-branes as bound states of D(p+2), anti D(p+2) branes we find the corresponding dual processes, in which a BPS fundamental string is described as a bound state of D(p+2), anti D(p+2) branes after the condensation of a tachyonic Dp-brane stretched between them. T-duality allows to identify explicitly the couplings in the D(p+2)-brane effective action that are responsible for these processes: $$_{R^{p+3}}C^{(p+1)}db^{(1)}$$ (6.1) describes Dp-brane creation after condensation of an F1, described by $`b^{(1)}`$, and: $$_{R^{p+3}}B^{(2)}db^{(p)}$$ (6.2) describes F1 creation after condensation of the p-brane described by $`b^{(p)}`$, worldvolume dual of $`b^{(1)}`$. Type IIB branes are organised as singlets or doublets under the Type IIB SL(2,Z) duality group. Therefore we expect to find a spectrum of brane solitons in brane-antibrane systems that respects this symmetry. The D5 brane forms an SL(2,Z) doublet with the NS5 brane of the Type IIB theory. Therefore S-duality predicts the occurrence of a D3-brane soliton as a bound state in a NS5, anti-NS5 pair when the condensation of a D1-brane stretched between the two takes place. This process is indeed obtained after T-dualising two Type IIA configurations. Namely, that in which a D2 brane stretched between a pair of NS5, anti-NS5 branes condensed to give a BPS D2-brane, and that in which the same kind of brane was stretched between a pair of KK5 monopoles and condensed to give rise to a D4-brane soliton. Of course the process in which a D1-brane is created after the condensation of a D3-brane stretched between the two NS5, anti-NS5 branes is also predicted by T-duality from certain Type IIA configurations. For these and the following configurations we omit the explicit Wess-Zumino couplings, since they can be straightforwardly derived from those in Type IIA found in the previous sections. The situation with the D9-brane is similar to that with the D5-brane, in the sense that it forms a doublet with an NS9-brane in the Type IIB theory. We have indeed obtained after T-duality the configurations describing a 7-brane as a bound state NS9, anti-NS9 after a stretched D1-brane condenses, and its dual, namely a D1-brane realised as a bound state NS9, anti-NS9 through the condensation of a tachyonic 7-brane. For the Type IIB $`\text{MKK}\text{}\overline{\text{MKK}}`$ system we find that a pair of Type IIB KK, $`\overline{\text{KK}}`$ monopoles can annihilate giving rise to the following solitonic branes: $$(\mathrm{KK5},\overline{\text{KK5}};\mathrm{D3}\mathrm{D3})$$ $$(\mathrm{KK5},\overline{\text{KK5}};\mathrm{D1}\mathrm{D5})$$ $$(\mathrm{KK5},\overline{\text{KK5}};\mathrm{F1}\mathrm{NS5})$$ $$(\mathrm{KK5},\overline{\text{KK5}};\mathrm{D5}\mathrm{D1})$$ $$(\mathrm{KK5},\overline{\text{KK5}};\mathrm{NS5}\mathrm{F1})$$ where we use the same simplified notation as in the previous section. We see from these configurations that the creation of both a brane and its S-dual is possible through annihilation of a pair of monopoles, which agrees with the fact that this brane is selfdual under S-duality, so all pairs of S-dual processes should be allowed. Concerning 7-branes some remarks are in order. T-duality of the Type IIA KK6 and KK8-branes predicts a 7-brane in the Type IIB theory which is connected by S-duality with the D7-brane, for what we will denote it as an NS7-brane. This is the 7-brane that appears for instance in the processes involving NS9, anti-NS9 annihilation. The existence of these two different effective actions describing 7-branes in the Type IIB theory, where the analysis of the spacetime supersymmetry algebra predicts a single 7-brane, was discussed in , where it was argued that the two worldvolume effective actions are indeed necessary in order to describe a single nonperturbative 7-brane in the weak and strong coupling regimes. Consistently with this picture, we find after T-duality one configuration in which an NS5-brane is created when a D1-brane stretched between a pair of NS7, anti-NS7 branes condenses, and the corresponding dual process, i.e. that in which a solitonic D1-brane emerges after condensation of an NS5-brane. These processes are S-dual to the more standard D5, F1 creation through D7, anti-D7 annihilation. Finally, T-duality on the Type IIA configurations involving pp-waves predicts the analogous kind of configurations in the Type IIB theory. Namely, solitonic wrapped p-branes after p-brane anti p-brane annihilation through condensation of a pp-wave, and solitonic pp-waves after condensation of a wrapped p-brane in the same system. ## 7 Discussion In this paper, extending preceeding considerations , we have classified the branes of the M and Type II theories which can be realised in a consistent way with respect to the structure of the theory as bound states of systems brane-antibrane, after condensation of the tachyon mode of open branes stretched between the pair. We have achieved this classification by studying the Wess-Zumino terms in the worldvolume effective actions of the branes of M-theory and their reductions. We have shown that it is possible to give an eleven dimensional description to the creation of a fundamental string from the annihilation of a pair of Dp, anti-Dp branes with p=2,6,8. As for the case p=4 , the fundamental string is created by the condensation of a D(p-2)-brane stretched between the pair Dp, anti-Dp. We have identified the term in the Wess-Zumino action of the pair brane-antibrane in M-theory responsible for this creation. This term is obtained generically by finding the worldvolume dual of the coupling describing a stretched M2-brane between the brane anti-brane pair, which in turn describes an extended worldvolume soliton coming from the condensation of the M2-brane. The case p=4 is especially simple in this sense. Its description in M-theory is in terms of an M2-brane stretched between a pair of M5, anti-M5 branes, giving rise to a solitonic M2-brane. This process is self-dual, in the sense that the dual coupling in the six dimensional worldvolume of the M5-brane describes again a solitonic M2-brane, whereas this is not the case for the M-theory origin of the p=2,6,8 cases. Therefore, in these cases the reduction to the Type IIA theory gives many different realisations of BPS objects as bound states of brane anti-brane pairs, which by T-duality give in turn rise to many configurations in the Type IIB theory. As an interesting realisation we have found that the NS5-brane can originate from certain brane anti-brane annihilations, both in the Type IIA and Type IIB theories. We can conclude in general that the BPS branes that can be realised as solitons in a given p-brane anti p-brane configuration are determined by the possible branes that can end on the p-brane, and whose tachyonic mode condenses giving rise to the soliton configuration. These branes are easily predicted by looking at the p-brane worldvolume effective action and identifying the different worldvolume q-forms that couple in it. For instance the Type IIB Kaluza-Klein monopole anti-monopole pair gives rise to so many different solitonic configurations because there are two scalars and one 2-form coupled in the worldvolume effective action of the monopole , allowing for wrapped F1, D1 and D3-branes ending on it, together with the branes described by their worldvolume duals, i.e. D5 and NS5 branes. Finally, it is worth emphasizing that a lot of progress remains to be done in order to reach a quantitative understanding of the dynamics of the possible processes discussed in this paper. One can furthermore hope that there exists a mathematical connection with some generalisation of K-theory where these processes could fit in. ### Acknowledgements L. H. would like to acknowledge the support of the European Commission TMR programme grant ERBFMBICT-98-2872.
no-problem/9910/astro-ph9910226.html
ar5iv
text
# Bar-driven Transport of Molecular Gas in Spiral Galaxies: Observational Evidence ## The NRO-OVRO CO imaging survey has provided molecular gas distributions in the centers of 20 nearby spiral galaxies at $`300`$ pc resolution (Sakamoto et al. 1999a). It is found from the survey that central condensations of molecular gas with sub-kpc sizes and $`10^8`$$`10^9M_{}`$ masses are prevalent in $`L^{}`$ galaxies. Moreover, as shown in Fig. 1, the degree of gas concentration to the central kpc (estimated from comparison with single-dish data) is found to be higher in barred galaxies than in unbarred systems (Sakamoto et al. 1999b). This is the first statistical evidence for the higher central concentration of molecular gas (CO) in barred galaxies, strongly supporting the theory of bar-driven gas transport. To account for the excess gas in barred nuclei, more than half of molecular gas in the central kpc of a barred galaxy must have been transported there from outside by the bar. The time-averaged rate of gas inflow, $`\dot{M}`$, is statistically estimated (through the gas consumption rates estimated from H$`\alpha `$ and far-IR) to be larger than 0.1 – 1 $`M_{}`$ yr<sup>-1</sup>. The degree of gas concentration also helps to test the predictions of bar dissolution and secular morphological evolution induced by bar-driven gas transport (Norman et al 1996; Pfenniger in this volume). Our current data suggest that bar-dissolution times are longer than the consumption times of central gas concentrations in barred galaxies, and thus prefer slow ($`t>10^8`$$`10^{10}`$ yr) bar-dissolution. A search for non-barred galaxies with high gas concentration, presumably galaxies just after quick bar dissolution, is important to better constrain the bar-dissolution timescale. There have been a few other lines of observational evidence for the bar-driven gas transport. Table 1 summarizes the pieces of evidence and their properties. All of them support bar-driven gas inflow and, though each of them provides different types of information, they are complementary with each other (e.g., some of them are statistical and the others are not, some can give $`\dot{M}`$ while others give instantaneous $`\dot{M}`$). A next step would be to combine these methods, with increasing sample size, in order to construct a sequence of mass transfer, star formation, and morphological changes in galaxies. Galaxy evolution along the Hubble sequence is now within the reach of observational tests. ### Acknowledgments. The CO survey and the subsequent analysis were made in collaboration with S. K. Okumura, S. Ishizuki, and N. Z. Scoville. ## References Ho, L. C., Filippenko, A. V., & Sargent, W. L. W. 1997, ApJ, 487, 591 Maiolino, R. Risaliti, G., Salvati, M. 1999, A&A, 341, L35 Norman, C., Sellwood, J. A., & Hasan, H. 1996, ApJ, 462, 114 Quillen, A. C. et al. 1995, ApJ, 441, 549 Regan, M. W., Vogel, S. N., & Teuben, P. J. 1997, ApJ, 482, L143 Roy, J. -R. 1996, ASP Conf. Ser. 91, 63 Sakamoto, K., Okumura, S., Ishizuki, S., & Scoville, N. Z. 1999a, ApJS, (Oct. issue) in press Sakamoto, K., Okumura, S., Ishizuki, S., & Scoville, N. Z. 1999b, ApJ, (Nov. 10 issue) in press, (astro-ph/9906454)
no-problem/9910/astro-ph9910019.html
ar5iv
text
# STIS Longslit Spectroscopy Of The Narrow Line Region Of NGC 4151. I. Kinematics and Emission Line RatiosBased on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA Inc under NASA contract NAS5-26555 ## 1 Introduction Since the launch of the Hubble Space Telescope (HST) many imaging studies of the Narrow Line Regions (NLR) of active galactic nuclei (AGN) have been carried out. These studies have shown that the emission line gas often has a complex morphology, frequently taking the form of a bicone centered on the galaxy nucleus. (e.g. NGC 4151, Evans et al., 1993, Boksenberg et al. 1995, NGC 1068, Evans et al. 1991, see also the archival study by Schmitt & Kinney 1996). In the standard model for an AGN, a dense molecular torus with a radius of a few parsecs surrounds the nucleus and collimates the radiation field (e.g. Antonucci 1993). According to the model, differences in the continuum and emission line spectra, which form the basis for classification of Seyferts and other types of AGN, can be explained largely by differences in the orientation of the torus to our line of sight. For example, in type 1 Seyfert galaxies our viewing angle is close to the symmetry axis of the torus allowing a direct view of the Broad Line Region (BLR) and the nuclear continuum source, while in type 2 Seyfert galaxies our vantage point lies closer to the plane of the torus which then blocks a direct view of the inner regions. In many instances the NLR morphology and kinematics appear closely linked to the radio structure, particularly in Seyfert galaxies with linear or jet-like radio sources. In these objects the line emitting gas is often found to be cospatial with the radio jets and there is also kinematic evidence for physical interaction between the jets and the NLR gas (Capetti et al. 1999, Whittle et al., 1988). The suggestion has been made that expansion of the radio plasma into the host galaxy’s interstellar medium produces fast shock waves which emit a hard continuum and ultimately provide the dominant source of ionizing photons (Taylor, Dyson, & Axon, 1992, Sutherland, Bicknell & Dopita, 1993). The degree to which photoionization by a nuclear continuum source or by autoionizing shocks contributes to the overall energetics of the NLR has been the subject of some debate. In principal one can distinguish between them spectroscopically by studying the spatially resolved kinematics and the physical conditions of the gas as revealed by the relative intensities of specific emission lines. The Space Telescope Imaging Spectrograph (STIS) is ideally suited for this type of study. We have therefore undertaken a detailed investigation of the kinematics and physical conditions across the NLR of NGC 4151, one of the nearest Seyfert galaxies. Evidence for outflow and photo-ionization cones in the NLR of NGC 4151 was presented by Schulz (1988, 1990) based on ground-based longslit spectroscopy. Peculiar flat-topped and double-peaked emission line profiles were observed to the SE and NW between 2<sup>′′</sup> and 6<sup>′′</sup> from the nucleus and are most consistent with outflow models. Schulz (1990) suggests that the outflow is driven either by a wind related to the active nucleus or by an expanding radio plasmon. The NLR kinematics in NGC 4151 have been studied in detail using slitless spectroscopy from STIS (Hutchings et al., 1998, Kaiser et al., 1999, and Hutchings et al., 1999). These observations reveal three distinct kinematic components: one consisting of low velocity clouds ($`|VV_{sys}|100`$ km s<sup>-1</sup> ), primarily in the outer NLR following the rotation of the host galaxy disk, a second consisting of moderately high velocity clouds ($`|VV_{sys}|400`$ km s<sup>-1</sup> ) most likely associated with radial outflow within the biconical morphology and a third component of fainter but much higher velocity clouds ($`|VV_{sys}|1400`$ km s<sup>-1</sup> ) which is also outflowing but not restricted to the biconical flow of the intermediate velocity component. No evidence for higher velocities in the vicinity of the radio knots was found suggesting that the radio jet has minimal influence on the NLR kinematics. A somewhat different conclusion was drawn by Winge et al. (1999) primarily using longslit spectroscopy with HST’s Faint Object Camera. They claim evidence for strong interaction between the radio jet and the NLR gas. Furthermore, after subtracting the influence of the radio jet and galaxy rotation on the kinematics, they suggest that the residual motion is the rotation of a thin disk of gas on nearly Keplerian orbits beyond 0$`^{\prime \prime }`$.5 (60 pc using their linear scale) around an enclosed mass of $`10^9\mathrm{M}_{}`$. Interior to 60 pc the velocities turn over suggesting that the mass is extended, and, if their interpretation is correct, they are able to place upper limits on the mass of a nuclear black hole of $`5\times 10^7\mathrm{M}_{}`$. In this paper we present the initial results from our low resolution, longslit spectroscopy. A second paper presents a detailed photoionization model using the emission line ratios presented here (Kraemer et al. 1999, Paper II). Section 2 presents the observations and describes the data reduction procedures including correction for scattered light from the Seyfert nucleus. Section 3 describes the results of the kinematic and preliminary line ratio analyses. In section 4 we discuss the results in terms of different NLR models. We summarize our results and conclusions in section 5. ## 2 Observations and Data Reduction Longslit spectroscopy of NGC 4151 was obtained with STIS on board HST. Four low dispersion gratings, G140L, G230LB, G430L and G750L, were used producing spectra ranging from the UV at 1150 Å to the near-infrared at 10,270 Å. Note that the G230LB mode, which uses the CCD detector, was used instead of the G230L, due to the bright object protection limits imposed on use of the MAMA detectors. Two slit alignments were chosen to cover regions of specific interest and as many of the bright emission line clouds as possible. The first position was chosen to pass through the nucleus at position angle 221, while the second was offset from the nucleus by 0$`^{\prime \prime }`$.1 to the south at position angle 70. Figure 1 shows the slit apertures drawn on the WFPC-2 narrow band image of the \[OIII\] $`\lambda `$5007 emission line structure obtained from the HST archives (proposal ID 5124, principal investigator H. Ford). The 0$`^{\prime \prime }`$.1 slit was used to preserve spectral resolution, given here for each of the four gratings assuming an extended source (the emission line clouds are generally resolved along the slit): 2.4 Å for G140L, 2.7 Å for G230LB, 5.5 Å for G430L, and 9.8 Å for G750L (Woodgate et al. 1998, Kimble et al. 1998, Baum et al. 1998). A log of the observations is presented in Table 1. One set of observations failed and as a result no G140L spectrum was available for P.A. 70. The spectra were reduced using the IDL software developed at NASA’s Goddard Space Flight Center for the Instrument Definition Team (Lindler et al. 1998). Cosmic ray hits were identified and removed from observations using the CCD detector (G230LB, G430L, and G750L) by combining the multiple images obtained at each visit in each spectroscopic mode. Hot or warm pixels (identified in STIS dark images) were replaced by interpolation in the dispersion direction. Wavelength calibration exposures obtained after each science observation were used to correct the wavelength scale for zero-point shifts. The spectra were also geometrically rectified and flux-calibrated to produce a constant wavelength along each column (the spatial direction) and fluxes in units of ergs s<sup>-1</sup> cm<sup>-2</sup> Å<sup>-1</sup> per cross-dispersion pixel. Spectra obtained at the same position angle and spectroscopic mode were combined to increase the signal-to-noise ratios. The bright, unresolved Seyfert nucleus of NGC 4151 creates a number of difficulties when trying to examine emission lines from the NLR close in. Scattered light, largely from Airy rings imaged on the slit, causes features of the nuclear point source spectrum to be superimposed on fainter NLR features. These follow linear tracks running nearly parallel to the dispersion, diverging slightly with wavelength and can be detected as much as 20$``$30 pixels from the nucleus (Bowers & Baum, 1998). This is a particularly difficult problem for measuring the Balmer lines in the NLR since the BLR lines are strong and often have peculiar shapes which can influence the continuum placement if not subtracted properly. In addition, the extended halo of the PSF must be modeled and subtracted. Furthermore, reflection of the bright nucleus in the CCD modes appears as a ghost spectrum, which is displaced from the nucleus in both the dispersion and spatial directions. Several techniques were used to remove these effects. Corrections for scattered light in the spectra were applied in the following order: 1) removal of the reflection spectrum (in the CCD spectra), 2) correction for the halo, and 3) removal of the remaining PSF, including the diffraction-ring tracks. The reflection spectrum is not only shifted in both directions, it is broadened in the spatial direction, compressed in the dispersion direction, and altered in intensity as a function of wavelength (it tends to be redder than the nuclear spectrum). The reflection in each original spectral image was isolated by subtracting the scattered-light at the same spatial distances on the other side of the nuclear spectrum. Then the nuclear spectrum was shifted along the slit and compressed in the dispersion direction until the strong emission features matched those in the reflection. It was then divided into the observed reflection to obtain the large-scale intensity variations in both dispersion and spatial directions. These variations were fitted in wavelength regions that do not contain extended emission, in both directions with low-order splines. The fits were then multiplied by the altered nuclear spectrum to produce a model of the reflection which was subtracted from the original spectral image. A circularly-symmetric halo was adopted from previous work on the STIS detectors (Lindler 1999), and collapsed to match the observed PSF in the spatial direction (obtained by adding regions along the dispersion direction that do not contain extended emission). The halo function was adjusted at various radial positions until a reasonable match was obtained with the broad-scale profile of the PSF (i.e., ignoring diffraction tracks, etc.). The halo was then deconvolved from the original image using an iterative technique that removes flux from the halo and places it in the core. To remove the remaining scattered light, a scattering template was constructed using archival observations of stars observed with the same grating and slit width. First, the template spectrum was normalized in the dispersion direction by dividing through by the spectrum summed along the slit. Next, the template was smoothed in the dispersion direction, using a median filter with a 50 pixel wide window. The nuclear spectrum of NGC 4151 was then multiplied into the template to simulate the scattered light spectrum. The scattering subtracted spectra are clean of broad line emission as close as 4 pixels from the nucleus. Because the nuclear H$`\alpha `$ line in the G750L spectrum at P.A. 221 is saturated, the true line profile is distorted and complicates construction of the scattering template. A substitute for the saturated profile was obtained from the G750M short exposure in our slitless spectroscopy with good results. An alternative approach was also applied which used the structure along the slit in a continuum region of the NGC 4151 spectrum itself to form the model template. First, the entire image was normalized by dividing each row (which lies along the dispersion direction) by the summed nuclear spectrum from the central four rows. A spline (typically of order 11) was then fitted along each row in regions that do not contain emission lines. Thus the fit is a model of the scattering as a function of wavelength and position along the slit for a point source spectrum of constant flux per unit wavelength. This procedure was effective in modeling the diffraction tracks as well as the overall PSF. The spline fits were then multipled by the nuclear spectrum at each spatial position, and subtracted from the reflection- and halo-corrected image to produce a final corrected image, which was used for subsequent analysis. The resulting spectra are shown in Figure 2 and Figure 3. The corrections bring out the structure in the bright lines, and allow us to see fainter lines that are not evident in the original spectra. The correction process was not perfect, as evidenced by the faint structure seen in the regions above and below the strong nuclear lines, particularly H$`\alpha `$ $`\lambda 6563`$. However, these problems are minor, and the contaminating effects of nuclear absorption and emission were removed well enough for accurate measurement of the extended emission, even very close to the nucleus. Although our primary interest is the NLR, the data set also contains high quality nuclear spectra of NGC 4151 at two epochs. Monitoring campaigns have shown pronounced variability in both the nuclear continuum and BLR emission (Robinson et al., 1994, Crenshaw et al., 1996, Warwick et al., 1996, Kaspi et al., 1996, Ulrich et al., 1997, Weymann et al., 1997, Peterson et al. 1998). Over a time interval of 33 days (Jan. 8, 1998 and Feb. 10, 1998; see Table 1) the nuclear continuum dropped by 17% at 3050 Å and 10% at 6924 Å decreasing monotonically between these two wavelengths. This degree of variation is consistent with that reported in short timescale variability studies Kaspi et al. (1996). The variation of the BLR emission lines is less pronounced than that of the continuum, with H$`\gamma `$ and H$`\beta `$ showing a decrease in flux, while the change in the H$`\alpha `$ \+ N\[II\] line profile is more difficult to evaluate. The absorption lines in our far-UV spectrum are similar to those in the FOS spectra published by Weymann et al. (1997) but is at too low a resolution for comparison to the high resolution GHRS spectrum. ## 3 Analysis ### 3.1 Measurement of Line Fluxes and Component Deblending Emission line fluxes and their errors were measured along the slit in each spectral range for a total of 45 emission lines. Individual spectra were extracted from the longslit spectra by summing along the slit. The size of the extraction bins was dictated by the need for reasonably accurate fluxes for the He II $`\lambda `$1640 and $`\lambda `$4686 lines, which were used for the reddening corrections in Paper II. Experimentation revealed that bin lengths of 0$`^{\prime \prime }`$.2 (4 CCD pixels, 8 MAMA pixels) within the inner $`\pm `$1<sup>′′</sup> and 0$`^{\prime \prime }`$.4 outside this region would provide reasonable signal-to-noise ratios for these lines, and still isolate the emission-line clouds that we identified in our earlier papers. In some cases slightly different bin sizes were used to isolate individual clouds or to increase the signal-to-noise ratios. To measure the line fluxes, first a linear fit to the continuum adjacent to each line was subtracted. Typically the continuum was very close to zero following removal of the scattered light, but continuum subtraction was helpful in regions of residual structure. Next, the extreme ends of the red and blue wings of the line were marked and the total flux and centroid were computed between these two points. The uncertainties in the line fluxes were estimated using the error arrays for each spectrum produced by CALSTIS and a propagation of errors analysis (Bevington, 1969). For the blended lines of H$`\alpha `$ and \[N II\]$`\lambda \lambda `$6548, 6584, and \[S II\] $`\lambda \lambda `$ 6717, 6731, we used the \[O III\]$`\lambda `$5007 line as a template to deblend the lines (see Crenshaw & Peterson 1986). This was superior to Gaussian fitting since the emission line profiles are often complex. The results of the emission line flux measurements are presented in Table 2 where the flux values are listed relative to H$`\beta `$ and the H$`\beta `$ flux is given at the bottom in units of $`10^{15}`$ ergs cm<sup>-2</sup> s<sup>-1</sup> Å<sup>-1</sup>. The errors obtained for each flux are given in parentheses. Because of the failure of the far-UV spectrum at P.A. 70 no G140L spectrum was obtained and so a reddening correction using the He II lines was not possible. Although dereddening using the Balmer decrement is certainly valid, a reliable extrapolation from the red to the blue and near-UV lines is uncertain. We prefer, therefore, to continue the analysis without the corrected line ratios taking care that any possible effects of reddening are accounted for by other means. Correction of the line ratios for reddening is an important step for a detailed photoionization model and is therefore presented in Paper II for the data at P.A. 221. To extract information on the multiple kinematic components the \[OIII\] $`\lambda `$5007 and \[OIII\] $`\lambda `$4959 lines were fitted independently with one to three Gaussians. Many slit extractions showed two components although in only a few cases was there compelling evidence for a third. Only velocity components measured independently at both \[OIII\] $`\lambda `$4959 and \[OIII\] $`\lambda `$5007 are included. To test that each component represented the true kinematics of the gas, we compared the velocities obtained at both \[OIII\] $`\lambda `$5007 and \[OIII\] $`\lambda `$4959. Only those components with velocity difference in the two lines less than or equal to twice the mean difference for all points were retained. The procedure was then repeated. The first iteration removed velocity components that were wildly discordant, and therefore unlikely to be real, while the second gave us confidence that the remaining components are physically significant. From the difference in velocity between components extracted at \[OIII\] $`\lambda `$4959 and \[OIII\] $`\lambda `$5007, we estimate the standard deviation of the velocities to be 30 km s<sup>-1</sup> . The results for each slit position are given in Table 3a and 3b where the Gaussian components are listed in order of increasing velocity. Negative slit positions correspond to the SW region and positive slit positions correspond to the NE. ### 3.2 Kinematics Figure 4 shows portions of the longslit spectra centered on the \[OIII\] $`\lambda `$5007 , \[OIII\] $`\lambda `$4959 and H$`\beta `$ emission lines for both slit positions, with the NE end of the slit at the top. The complex velocity structure that has been noted in both ground-based and HST studies (e.g. Schulz 1990, Kaiser et al. 1999, Hutchings et al., 1999) is seen including line splitting at several positions along the slit. Note that as a result of our scattering correction and PSF subtraction we are able to probe the emission line kinematics to within 0$`^{\prime \prime }`$.2 of the nucleus. Included within our slits are four of the high velocity regions (absolute value of projected velocity greater than 400 km s<sup>-1</sup> ) reported in Hutchings et al. (1999) and 20 of the clouds identified in Kaiser et al. (1999) (Tables 4a, 4b ). Our agreement with Hutchings’ velocities is reasonable, ranging from a difference of 6 km s<sup>-1</sup> for region N detected at slit P.A. 221 , to 160 km s<sup>-1</sup> for region D. While some of the difference is undoubtedly due to measurement uncertainties, there may be real differences due to the portion of the high velocity regions which fall within our slit. There may also be some uncertainty due to confusion of spectral and spatial information in the slitless data. We see components of high velocity gas not specifically identified by Hutchings et al. (1999) on both sides of the bicone at both slit positions (Tables 3a, 3b). This gas corresponds to high velocity gas imaged by Hutchings et al. (1999), but for which velocities were not previously measured. The high velocity components generally account for a small fraction of the total flux in the \[OIII\] emission lines, again in agreement with the findings of Hutchings et al. (1999). We find more high velocity components in slit position P.A. 70, which is close to the radio ridge line, than we do in the P.A. 221 slit. However, there is some high velocity gas not associated with the radio emission. Comparison of our velocities with those reported by Kaiser et al. (1999) is more difficult, since in several cases they reported single velocities for clouds where we find multiple velocity components. Furthermore, there are instances of extended clouds for which our slit does not sample the entire cloud. If we compare only velocities for clouds for which we find a single velocity component and average our velocities for clouds occurring in more than one extraction bin, we find the average difference in velocities is -18 km s<sup>-1</sup> +/- 94 km s<sup>-1</sup> , (in the sense V(this paper) - V(Kaiser etal.)). This difference and range is comparable to what was found for the high velocity clouds, and can be attributed to the same causes. Figure 5 shows the velocities of the individual \[OIII\] components from the Gaussian deblending. Points along P.A. 221 are marked as solid points and along P.A. 70 as open symbols. The horizontal bars indicate the size of the extracted spectrum used for the measurement along the slit. Vertical error bars are omitted since the velocity uncertainties are comparable to the size of the points on the diagram (see section 3). A systemic velocity of 997 km s<sup>-1</sup> has been subtracted from the data. The solid and dashed lines show results expected for our simple models described below. The results follow the velocity distribution determined from the slitless spectroscopy of Kaiser $`etal.`$ (1999) and the plot is similar to their Figure 8, though without the extreme high velocities. The velocities at large distances from the nucleus are consistent with the rotation of the galactic disk, while closer in the velocities are strongly blue shifted SW of the nucleus and strongly redshifted to the NE. To better understand the kinematics we consider two possibilities for the general form of the velocity field: radial outflow from the nucleus and expansion directed away from the radio axis. We adopt the basic conical geometry of the NLR of NGC 4151 as modeled by Pedlar et al. (1993), with the radio jet pointing 40 from the line of sight and projected onto the plane of the sky at a P.A. of 77. After consideration of the well-known geometry of the host galaxy (Simkin, 1975, Bosma, Ekers, & Lequeux, 1977) we require that the cone opening angle be wide enough to include our line of sight to the nucleus and also to intersect the disk of the host galaxy, since the Extended Narrow Line Region (ENLR) kinematics follow the rotation of the disk. Pedlar et al. (1993) estimate the opening angle to be 130. However, Boksenberg et al. (1995) argue that the NLR is density bounded and the ionized gas only partially fills the cone. Therefore we choose a narrow vertex angle of 70 which is a better match to the observed NLR structure. The models are drawn schematically in Figure 6. These models are used to estimate the radial velocity as a function of projected distance from the nucleus for each slit position angle. Our purpose is not to produce a detailed match to the observed velocities of each individual cloud, but to test two ideas about general form of the NLR kinematics. Therefore, we assume that the interior of the cone is uniformly filled and note that the observed velocity distribution is not expected to be as smooth or complete as the model, reflecting the way in which the emission line clouds fill the cone. For both the radial outflow model and the jet expansion model we consider two cases, one in which the flow has a constant velocity and one in which the flow decelerates as it moves outward. We model this decelerating flow as a $`R^{1/2}`$ dependence where $`R`$ in the radial flow model is distance from the nucleus and $`R`$ in the jet expansion model is distance from the radio axis. This particular form of deceleration is chosen since it seems to represent the data best and is meant only to illustrate the effect. The results are plotted in Figure 7 for all four models in the form of a model longslit spectrum of a single emission line comparable to Figure 4. The slit was chosen to lie along P.A. 70 and to have a slit width of 0$`^{\prime \prime }`$.1 as in our STIS observations. These simulated spectra were then deblended using two Gaussians at each slit position in the same manner as the real data. The velocities for the decelerating models are shown in Figure 5 as the dashed lines (one for each Gaussian component) for the case of jet expansion and the solid lines for the case of radial flow. For the case of expansion away from the radio axis we expect both large positive and large negative velocities relative to the systemic velocity at any given position along the slit. In the case of radial outflow, however, large positive velocities and velocities much closer to the systemic velocity will be observed on one side of the slit while on the other side, large negative velocities and velocities near the systemic velocity are expected. Since for NGC 4151 the far side of the SW cone lies close to the plane of the sky, the bulk of the flow is transverse to the line of sight, yielding radial velocities close to the systemic value, while the near side is much closer to the line of sight yielding large approaching radial velocities. Similar considerations hold for the NE cone except that the near side of the cone lies in the plane of the sky and the far side yields the large receding velocities. We conclude that the radial outflow case gives a better match to the observed velocity distribution than the case of expansion away from the jet. In the case of a radially decelerating flow the overall envelope of the highest velocities decreases as one moves away from the nucleus much as seen in Figure 5. Although the match is not perfect it seems to follow the trend of less extreme velocities as one moves along the slit. From these simple models we cannot exclude the possibility of some motion perpendicular to the radio jet. However, it does seem likely that the flow is dominated by a radial outflow from the nucleus which slows with distance and that any contribution from expansion away from the jet is less significant. ### 3.3 Line Ratios and Projected Distance from the Nucleus An understanding of the physical conditions in the NLR can be obtained by considering how various line strengths change as a function of distance from the nucleus and with respect to each other. In Paper II (Kraemer et al. 1999) a detailed photoionization model is developed using the emission line fluxes presented here. In the current paper we present a simpler analysis. The ratio of \[OIII\] $`\lambda `$5007 to H$`\beta `$ is well known to be sensitive to the ionization parameter $`U=Q/4\pi r^2n_ec`$, where $`Q`$ is the rate at which ionizing photons are emitted, $`r`$ is the distance to the nucleus, $`n_e`$ is the electron density, and $`c`$ is the speed of light. Figure 8 shows the \[OIII\] $`\lambda `$5007 to H$`\beta `$ ratio as a function of distance along the slit for both position angles. We use ratios that have not been corrected for extinction since the lines are close in wavelength and are therefore rather insensitive to reddening. We see from the diagram that the line ratio decreases with distance in the inner 2<sup>′′</sup> and recovers somewhat at larger radii on the NE side (positive X-axis). This trend was shown in Kaiser et al. (1999) from the slitless spectroscopy who suggest that this apparent change in the ionization parameter with distance reflects a decrease in density. Apart from the increase in \[OIII\] $`\lambda `$5007 / H$`\beta `$ on the extreme NE side of P.A. 221 , there is no significant difference in the ratio between the two position angles, suggesting that while the ionization state of the gas may change moving away from the nucleus, it generally does not change laterally, i.e. with distance from the radio axis. A similar trend is seen in the ratio of \[OIII\] $`\lambda `$5007 to \[OII\] $`\lambda `$3727 , which is also sensitive to the ionization parameter. Figure 9 plots the ratio versus distance for both slit positions. Again, the line fluxes have not been corrected for extinction but the dust is most likely patchy (see Paper II) and so is not likely to influence the overall trend and merely adds scatter. Support for this comes from the fact that the trend is largely symmetric about the nucleus indicating that no large scale dust lanes pass through our aperture. Furthermore, for the slit extractions where both He II lines used for the extinction corrections are present (along P.A. 221 ), the largest change in the \[OIII\]/\[OII\] ratio from dereddening was a decrease of $`30`$% (see Paper II). Therefore, to the extent that the distribution of dust is comparable along each slit position, the conclusion of a decreasing \[OIII\]/\[OII\] ratio with distance is robust. The safest conclusion to draw from these diagrams is that the density falls off with distance as suggested by Kaiser et al. (1999) and confirmed in Paper II. In fact, in the inner clouds the high \[OIII\]/\[OII\] most likely results from collisional de-excitation of the O<sup>+</sup> ions. Although, these trends could naively be considered an indication of decreasing ionization parameter with distance from the nucleus, the more detailed investigation in Paper II suggests a more constant ionization parameter and a density which declines less rapidly than $`r^2`$. In Figure 10 the density sensitive ratio of \[SII\] $`\lambda `$6717 to \[SII\] $`\lambda `$6731 is plotted as a function of distance, again for both slit positions. Judging by the size of the error bars, much of the scatter in the diagram is real suggesting that the gas is rather clumpy, with regions of higher and lower density at various points along the slit. There is also an interesting drop in the ratio very close to the nucleus particularly in the data from the P.A. 70 slit position, suggesting an increase in density there. Generally, the \[SII\] ratios appear to be larger farther out particularly along P.A. 221 (solid dots), indicating a decrease in density with radius, at least in the partially ionized zone. Using the five level-atom program developed by Shaw & Dufour (1995) and assuming a temperature of 15000 K (see below) we find that the density of the inner NLR is roughly $`2000\mathrm{cm}^3`$ while in the outer NLR and ENLR the density has dropped to $`300\mathrm{cm}^3`$. This agrees with the results of Robinson et al. (1994) who found density decreasing with distance from the nucleus in NGC 4151, with an overall NLR density of $`1600\mathrm{cm}^3`$, and a density in the ENLR of $`250\mathrm{cm}^3`$. This is also in agreement with the interpretation of the decline in \[OIII\] $`\lambda `$5007 /H$`\beta `$ as the result of a decrease in density. The \[OIII\]$`\lambda 5007`$/\[OIII\]$`\lambda 4363`$ ratio is well-known to be sensitive to the temperature of the gas. Figure 11 shows the \[OIII\] ratio as a function of distance along the slit. The use of this ratio to calculate the temperatures is only valid for densities up to $`n_e10^5`$ cm<sup>-3</sup> at which point collisional de-excitation begins to have an influence on the line strengths (Osterbrock, 1974). Furthermore the \[SII\] densities cannot be used since they reflect densities in the partially ionized zone. Thus we use results from Paper II for the gas densities which indicate that in the O<sup>++</sup> zone the densities are below $`10^5\mathrm{cm}^3`$. The results from the five-level atom program give temperatures in the range of 12000 K — 17000 K. Based on Figure 11 there appears to be a slight trend for a decreasing ratio (increasing temperature) with distance from the nucleus. This is difficult to confirm, however, since reddening may play a role, tending to increase the observed ratio. Paper II gives a more detailed analysis of the physical conditions along the slit. ### 3.4 Line Ratio Diagrams and Photoionization Diagrams plotting one line ratio against another can be used to investigate the origin of the photoionizing continuum. By choosing line ratios which consist of lines which are close in wavelength we can significantly reduce the effects of reddening (see e.g. Veilleux and Osterbrock, 1989). In Figure 12 a, b, and c, we present the optical emission line ratios \[S II\] $`\lambda \lambda 6717,6731/\mathrm{H}\alpha `$, \[N II\] $`\lambda 6584/\mathrm{H}\alpha `$, \[O I\] $`\lambda 6300/\mathrm{H}\alpha `$, respectively, plotted against \[OIII\] $`\lambda `$5007 /H$`\beta `$ . In each diagram the solid line separates star-forming regions from AGN and is taken from Veilleux and Osterbrock (1989). The dashed line is the power-law photoionization model for solar abundance taken from Ferland and Netzer (1983). The ionization parameter varies from $`10^4`$ to $`10^{1.5}`$ from lower right to upper left. We find that the NGC 4151 NLR clouds occupy compact regions on these diagrams indicating that the source of the ionizing continuum is the same for all of the points sampled along the slit. Thus none of the clouds observed shows evidence for star-formation or LINER-like excitation. While this result is not unexpected it is worth commenting that the NLR gas all seems to have the same source of excitation. Other line ratio diagrams including UV lines are also interesting since they allow us to investigate the possibility of alternate ionization mechanisms for the NLR clouds (Allen et al. 1998). In figure 13a, b and c we plot the ratios of CIV $`\lambda 1549`$ to He II $`\lambda 1640`$, CIV $`\lambda 1549`$ to CIII\] $`\lambda 1909`$, \[Ne V\] $`\lambda 3426`$ to \[Ne III\] $`\lambda 3869`$, respectively against \[OIII\] $`\lambda `$5007 to H$`\beta `$ (only the P.A. 221 data is shown for Figures 13a and b since the far-UV observation at P.A. 70 was unsuccessful). The lines show model grids calculated using the MAPPINGS II code (Sutherland and Dopita, 1993) by Allen et al. (1999) for shock ionization (bottom), shock plus ionized precursor gas (middle) and for power-law photoionization (top). For the shock plus precursor models, the shock velocity increases from 200 km s<sup>-1</sup> to 500 km s<sup>-1</sup> moving from low to high \[OIII\] $`\lambda `$5007 /H$`\beta `$ ratios. Notice that for the highest velocity shocks the models coincide with power-law photoionization models. Again the NGC 4151 NLR occupies very limited regions in these diagrams corresponding to photoionization by a power law at high ionization parameter or by shock plus precursor models with very high velocity ($`V_{\mathrm{shock}}500`$km s<sup>-1</sup> ). These results strongly suggest that low velocity shocks play an insignificant role in accounting for the ionization state of the NLR in NGC 4151 but we cannot rule out the possibility of ionization by radiation from fast shocks. ## 4 Discussion The results of the kinematic and emission line ratio analysis can be combined to create a coherent picture of the NLR in NGC 4151. We have seen that the kinematics bear the signature of radial outflow from the nucleus and are distinctly different from an expansion away from the radio jet axis. This is an interesting result since many recent studies have reported kinematic evidence that the radio jet can have a significant influence on the motion of NLR gas (e.g. Bicknell et al. 1998, NGC 4151 Winge et al 1998, Mrk 3 Capetti et al. 1998). In these studies the NLR gas is found immediately surrounding and expanding away from knots of radio emission as in Mrk 3 or forms a bow shock structure around the working surface of the head of the jet as in Mrk 573 (Capetti et al. 1996, Falcke, Wilson, & Simpson 1998). This seems not to be the case for NGC 4151. In conflict with this statement, the study of NGC 4151 by Winge et al. (1998) reports that high velocity clouds are seen around the edges of the radio knots. This is not confirmed by Kaiser et al. (1999) who conclude that there is no direct association between non-virial gas kinematics, as determined by high velocity and high velocity dispersion, and proximity to the radio knots. Our results concur with those of Kaiser et al. (1999). While we do find high velocity clouds in our aperture there is no distinct preference for them to found along P.A. 70, which is more closely aligned with the radio axis (P.A. 77). Further support for radial outflow comes from the emission line ratios as a function of position. For example there is no significant difference in the \[OIII\]/\[OII\] or \[OIII\] $`\lambda `$5007 /H$`\beta `$ ratios between the two slit positions even though the spectra at P.A. 70 are much more closely aligned with the radio axis than the clouds at P.A. 221. This is in contrast to the case of NGC 1068 where the \[OIII\]/\[OII\] ratio increases dramatically in regions that coincide with the radio jet (Axon et al. 1998). WFPC2 images of NGC 1068 presented by Capetti, Axon, & Macchetto (1997) may also indicate higher density and ionization state along the radio jet in this object. Furthermore, these authors suggest that an additional source of local ionizing continuum is required to explain the observations. While these results certainly raise an interesting possibility for NGC 1068, our results for NGC 4151 show no such association between the radio morphology and the emission line ratios. Thus the radio jet in NGC 4151 seems to have little influence on the ionization state of the gas. Similar results are seen for the \[SII\] ratio and the \[OIII\] $`\lambda 5007/\lambda 4363`$ ratio suggesting no strong changes in the physical condition of the gas with proximity to the radio emission. Because the line ratio diagrams show no evidence for shock or shock plus precursor ionization models at least for low velocity shocks, they support the arguments for radial outflow. If the gas were expanding away from the radio axis one would expect to see large amounts of shocked material particularly at the interface of the flow with the ambient interstellar medium of the host galaxy disk. In the case of radial outflow, we would expect to see little shocked gas since the motion is not directed into the disk and the relative velocities of gas within the flow should be small. Perhaps an important consideration is that the radio morphology of NGC 4151 is rather different from that of Mrk 3 for example. Pedlar et al. have compared the radio structure of NGC 4151 to that of an FR I type radio galaxy, with much of the radio emission coming from a diffuse component, although on much smaller scales. The radio emission in Mrk 3, by contrast, is more jet-like being unresolved with MERLIN perpendicular to the radio source axis (Kukula et al. 1993). Thus we might consider that the radio emission in NGC 4151 is not a well collimated jet, but rather a broad spray of plasma. Gas clouds in the vicinity of the radio flow would thus be more naturally accelerated in directions roughly aligned with the radio axis than perpendicular to it. One possible scenario is that the core of the radio jet in NGC 4151 has cleared a channel in the line emitting gas and has blown out of the disk of the galaxy as suggested by Schulz (1988). Thus there may have been a bow shock associated with the radio lobes in the past but the jet has passed on to lower density region in the outer bulge and galaxy halo. The line emitting gas is now free to flow out along the radio axis but only weakly interacts with the jet itself and the host galaxy ISM. NGC 4151 is also known to have a system of nuclear absorption lines, particularly CIV $`\lambda `$1549, which are blueshifted with respect to the systemic velocity by values ranging from 0 to 1600 km s<sup>-1</sup> (e.g. Weymann et al. 1997). It is tempting to link the outflow seen in our study with that for the absorption line system. However, these flows are observed on vastly different scales and thus a true connection has not been established. Models invoking winds from the nucleus to explain the NLR kinematics and other properties of Seyfert galaxies have been proposed (e.g. Krolik & Vrtilek, 1984, Schiano, 1986, Smith, 1993). One suggestion is that X-ray heating of the molecular torus is the source of the wind (Krolik & Begelman, 1986). The base of the wind forms the electron scattering region which serves as the “mirror” allowing a view of the BLR in polarized light in some Seyfert 2 galaxies. At larger radii one might expect that the steep potential of the galaxy bulge tends to decelerate the wind. We conclude that the kinematics in NGC 4151 seem to be consistent with wind models for the NLR. ## 5 Summary The results presented in this paper provide an interesting contrast to the recent work on the NLR of Seyfert galaxies. Our analysis of the longslit spectra of NGC 4151 has revealed a rather different picture of the NLR in the sense that the prominent radio jet has very little influence on the kinematics and physical conditions. We find that the kinematics are best characterized by a decelerating radial outflow from the nucleus in the form of a wind. The lack of evidence for strong shocks near the radio axis and the uniformity of the line ratios across the NLR supports this picture. Thus it appears that while interaction between the radio jet and the NLR gas may be a common occurrence it is by no means ubiquitous and does not apply in the case of NGC 4151. We would like to thank Diane Eggers for her assistance in the data analysis. We would also like to thank Mark Allen for providing the model grids for the UV line ratio diagrams. This research has been supported in part by NASA under contract NAS5-31231.
no-problem/9910/cond-mat9910189.html
ar5iv
text
# Breakdown of Luttinger liquid state in one-dimensional frustrated spinless fermion model ## Abstract Haldane hypothesis about the universality of Luttinger liquid (LL) behavior in conducting one-dimensional (1D) fermion systems is checked numerically for spinless fermion model with next-nearest-neighbor interactions. It is shown that for large enough interactions the ground state can be gapless (metallic) due to frustrations but not be LL. The exponents of correlation functions for this unusual conducting state are found numerically by finite-size method. One-dimensional (1D) Fermi systems have a number of peculiarities which distinguish them drastically from 3D ones (for review, see ). In particular, gapless (metallic) 1D systems of interacting fermions never behave as normal Fermi-liquid. Haldane have proposed another class of universality Luttinger liquid (LL) state, namely. It is characterized by the existence of three branches of low-lying Bose excitations, density, current, and charge excitations with the velocities $`v_S,v_J`$ and $`v_N`$, correspondingly. The first one is connected with the variation of the total energy of the system $`E`$ under the variation of the total momentum $`P,`$ $`v_S=\delta E/\delta P`$; the second one ($`v_J`$), with the variation of the energy under the shift of all the particles in momenta space, that can be done physically by the application of magnetic flux to the system closed as a ring; and the third one, with the variation of the chemical potential $`\mu `$ at the change of the total number of particles $`N,`$ $`v_N=\left(L/\pi \right)\delta \mu /\delta N,`$ with $`L`$ being the length of the system. In LL state there are an exact relation between the velocities $$\chi v_Jv_N/v_S^2=1,$$ (1) which is the criterion of LL. The only dimensionless parameter which determines all the infrared properties of the system (e.g., time and space asymptotics of fermionic Green functions and susceptibilities) is the ratio $$e^{2\phi }=v_N/v_S=v_S/v_J$$ (2) Original arguments by Haldane were based on exact Bethe Ansatz solutions as well as on the perturbation theory for weakly interacting systems. There are no general and rigorous proof of this assumption but all the known analytical and numerical results about 1D fermion systems confirm Haldane hypothesis . Anderson proposed that some 2D systems such as copper-oxide superconductors also belong to the class of LL which made the concept of Luttinger liquid one of the most “fashionable” in contemporary many-particle physics. Therefore the investigation of the status of Haldane hypothesis seems to be of importance. Here we present a counterexample to this hypothesis basing on exact numerical results for spinless fermion model. We proceed with the Hamiltonian $$H=t\underset{i=1}{\overset{L}{}}\left(c_i^{}c_{i+1}+c_{i+1}^{}c_i\right)+V\underset{i=1}{\overset{L}{}}n_in_{i+1}+V^{}\underset{i=1}{\overset{L}{}}n_in_{i+2}$$ (3) where $`c_i^{},c_i`$ are Fermi creation and annihilation operators on site $`i,n_i=c_i^{}c_i.`$ Phase diagram of this model has been investigated by us in . In particular, it has been shown that for half-occupied case, $`\rho =N/L=1/2`$ and arbitrarily small $`t`$ the ground state turns out to be gapless (metallic) along the line $`V=2V^{}.`$ It is the consequence of frustrations which lead to the macroscopically large degeneracy (finite entropy per site) of the ground state at the Ising limit $`t=0`$. A similar result has been obtained also in Ref. . It is important that, according to our calculations, the metallic region has non-zero width in $`(V,V^{})`$ plane. One can say with certainty that the gap is zero at $`(V/2)0.6tV^{}(V/2).`$ One can show also with certainty that the ground state is insulating at $`|(V/2)V^{}|>1`$. To check the Haldane hypothesis we restrict ourselves by the consideration of the straight line $`V=2V^{}`$ where the system is definitely metallic. Similarly, we have the metallic state for $`\rho =2/3`$ at $`V^{}=0`$ and arbitrarily large $`V`$ or, vice versa, at $`V=0`$ and arbitrarily large $`V^{}.`$ There are rare examples of metallic state with strong interactions and it seems to be interesting to check the Haldane assumptions for this unusual case. As it was already mentioned the original Haldane hypothesis is based, on the one hand, on the consideration of exactly integrable systems and, on the other hand, on the perturbation treatment of systems with weak correlations. Therefore, its validity in the case under consideration is not obvious. Note that “unusual” character of metallic state at $`\rho =1/2,V2V^{}`$ has been mentioned in but without specification what this state is. We have carried out the calculations of the ground state of the model (3) by Lanczos method for finite clusters with the consequent extrapolation to $`L\mathrm{}`$ (for details, see ). Velocities of low-lying excitations has been calculated as $`v_S`$ $`=`$ $`{\displaystyle \frac{L}{2\pi }}\left[E_{1p}(L,N)E_0(L,N)\right]`$ (4) $`v_J`$ $`=`$ $`{\displaystyle \frac{L}{2\pi }}\left[E_a(L,N)E_0(L,N)\right]`$ (5) $`v_N`$ $`=`$ $`{\displaystyle \frac{L}{\pi }}\left[E_0(L,N+1)2E_0(L,N)+E_0(L,N1)\right]`$ (6) Here $`E_0(L,N)`$ is the ground state energy of the cluster with $`L`$ sites for periodic boundary conditions and $`N`$ particles, $`E_a(L,N)`$ is ground state energy for antiperiodic boundary conditions (transition to the antiperiodic conditions corresponds to magnetic flux $`\mathrm{\Phi }=1/2`$ of the flux quantum), $`E_{1p}(L,N)`$ is the ground state energy for minimal nonzero total momentum $`P=2\pi /L`$. Then we have verified the criterion of LL $`\chi =1`$ using Eq.(1). The results of the testing calculations for the case $`\rho =1/2,V^{}=0,0V<2t`$ where the system has to be LL are shown in Fig.1 (open circles and triangles). We also present in the same figure the calculated values of $`\chi `$ along the line $`V=2V^{}`$. One can see that at $`V10t`$ we have, within the accuracy of the computations, $`\chi 1`$, in an agreement with Haldane hypothesis. However, for $`V30t`$ the values of $`\chi `$ is definitely less than unity, that is obvious even without extrapolation to $`L\mathrm{}`$ since $`\chi (L)<1`$ for finite $`L`$ and diminishes with $`L`$ increase. Therefore we have demonstrated that there are one-dimensional conducting systems of interacting fermions which are not LL. The breakdown of LL picture is caused by the competitions of nearest-neighbor and next-nearest-neighbor interactions (i.e. frustrations) which allow the system to be metallic in the limit of strong interactions. A schematic phase diagram is shown in Fig.2. The question is still open whether the transition from insulating state to non-LL conducting state is the direct one or there exists intermediate conducting LL phase. At the same time, our calculations demonstrate that for $`\rho =2/3`$ the relation (1) takes place with the accuracy of calculations for any values of parameters under consideration even along the lines $`V=0`$ or $`V^{}=0`$. It would be very interesting to understand analytically the reason for the difference between these two cases with strong frustrations. We also have calculated the static correlation functions $$\begin{array}{cc}G\left(R\right)\hfill & =c_R^{}c_0\hfill \\ K\left(R\right)\hfill & =\delta n_R\delta n_0\hfill \end{array}$$ (7) where brackets mean the averaging over the ground state, $`\delta n_i=n_i\rho `$. In LL the following asymptotics have to be valid at $`R1`$ $$\begin{array}{cc}G\left(R\right)\hfill & \underset{m=0}{\overset{\mathrm{}}{}}C_m\mathrm{sin}\left[\left(2m+1\right)k_FR\right]R^{\eta _m}\hfill \\ K\left(R\right)\hfill & \underset{m=0}{\overset{\mathrm{}}{}}D_m\mathrm{cos}\left(2mk_FR\right)R^{\theta _m}\hfill \end{array}$$ (8) where $`\eta _m=\frac{1}{2}e^{2\phi }+2\left(m+\frac{1}{2}\right)^2e^{2\phi },\theta _m=2m^2e^{2\phi }`$ $`\left(m>0\right),k_F=\rho /2`$ is the Fermi momentum. The most important exponent $`\alpha `$ determins the behavior of one-particle distribution function $`n\left(k\right)`$ near the Fermi surface $$n\left(k\right)n\left(k_F\right)C\text{sign}\left(kk_F\right)\left|kk_F\right|^\alpha ,$$ (9) where $`\alpha =\eta _01.`$ However we cannot use these expressions a priori since for the model under consideration the system is not always LL. We have found the asymptotics of the correlation functions by direct computation. It is known (see, e.g., ) that it is very difficult to find the correlation exponents from the calculations for a given $`L`$, even as large as $`L=32.`$ Therefore we use the finite size scaling technique . Specifically we use the following procedure. Our aim is to find the function $`\phi (R)\varphi (0)\varphi (R)_{\mathrm{}}`$ for the infinite chain. Direct calculations give us the functions $`f(R,L)\varphi (0)\varphi (R)_L`$ for $`R<L`$. From the symmetry considerations we have $`f(R,L)=f(LR,L)`$. Let us introduce the function $`r(R,L)`$ to have, by definition, $`\phi [r(R,L)]=f(R,L)`$. Therefore $$\underset{L\mathrm{}}{lim}r(R,L)=R.$$ (10) Then we introduce the new variable $`\lambda R/L`$ so that $`r(R,L)=Lr^{}(\lambda ,L)`$ where $`r^{}(\lambda ,L)`$ is a new unknown function. To provide (10) we have $`lim_L\mathrm{}r^{}(\lambda ,L)=\lambda `$. Also the function $`r^{}`$ satisfies the condition $`r^{}(\lambda ,L)=r^{}(1\lambda ,L)`$. For small $`\lambda `$ one has $`r^{}(\lambda )\lambda .`$ To satisfy all these requirements we try the function $`r^{}`$ as a Fourier series $$r^{}(\lambda )=\frac{\mathrm{sin}(\pi \lambda )+a_3\mathrm{sin}(3\pi \lambda )+a_5\mathrm{sin}(5\pi \lambda )+\mathrm{}}{\pi (1+a_3+a_5+\mathrm{})}$$ (11) Using the asymptotic expression similar to (8) for the dependence $`f(R,L)=\phi \left(Lr^{}(\lambda )\right)`$at finite $`L`$ and optimizing the result with respect to both $`a_n`$ and the correlation exponents we can find the latter with high enough accuracy. At least the results for the exponents appeared to be accurate enough for the clusters with $`14L26`$ used in our calculations. For the testing case $`V^{}=0,0<V<2t`$ where the system is definitely LL the results for the correlation exponents coincide with that from Haldane formula (8) with the accuracy of 0.5% for the function $`G\left(R\right)`$ and 8% for the function $`K\left(R\right)`$. In the most interesting case $`\chi 1`$ we cannot use the expression (8) and have to restrict ourselves only by the consideration of the leading terms in the asymptotics of correlation functions which are tried in the following form $$\begin{array}{cc}G(R)\hfill & (C_1+C_2\mathrm{sin}(\frac{\pi }{2}R))/R^\gamma \hfill \\ K(R)\hfill & (D_1+D_2(1)^R)/R^\delta \hfill \end{array}$$ (12) (we consider the case $`\rho =1/2`$). To diminish the number of states in the Gilbert space under consideration we use only the states which has the same (minimal) energy for $`V=2V^{}`$ and $`t=0`$ which corresponds to the consideration of the case $`V/t\mathrm{},V^{}/t\mathrm{},V/V^{}=2.`$ It allows us to consider as large clusters as $`L=32`$. The results of the calculations for the correlation functions are shown in Figs.3,4. We have found by the technique described above $`\gamma =2.009÷2.013`$ and $`\delta =1.80÷1.83`$. Note that the envelope of the function $`K(R)`$ turns out to be nonmonotonous in non-LL regime (see the black circles for $`R=2,4,6`$ in Fig.4. The results of computer simulation demonstrating possible violation of Haldane hypothesis seem to be rather unexpected. In particular, we cannot see any simple causes for the difference between two frustrated cases: $`\rho =1/2,V=2V^{}\mathrm{}`$ (non-LL behavior) and $`\rho =2/3,V^{}=0,V\mathrm{}`$ (LL behavior). It would be very important to understand these numerical results by regular field-theoretical methods. CAPTIONS TO FIGURES Fig.1. The dependence of the ratio $`\chi `$ (Eq.(1)) on the inverse size of the cluster; empty symbols correspond to $`V^{}=0`$ (circles: $`V=0.5t`$, triangles: $`V=1.5t`$); black symbols correspond to $`V=2V^{}`$ (circles: $`V=10t`$, squares: $`V=30t`$, triangles: $`V=50t`$, diamonds: $`V=100t`$, hexagons: $`V=200t`$). Fig.2. Phase diagram of the model. The boundary between conducting LL and non-LL phases is shown schematically by zigzags. Fig.3. The dependence of the correlation functions $`G(R)`$ (Eq.(5)) for L=32; open circles correspond to $`V=V^{}=0`$, black ones correspond to $`V=2V^{}`$, $`V\mathrm{}`$. Fig.4. The same as in Fig.3, for $`K(R)`$ (Eq.(5)).
no-problem/9910/cond-mat9910473.html
ar5iv
text
# Nucleation of superconductivity in a mesoscopic loop of finite width ## 1 Introduction The nucleation of superconductivity in mesoscopic samples has received a renewed interest after the development of nanofabrication techniques, like electron beam lithography. A superconductor is in the mesoscopic regime when the sample size is comparable to the superconducting coherence length $`\xi (T)`$. In the framework of the Ginzburg-Landau (GL) theory, $`\xi (T)`$ sets the length scale for spatial variations of the modulus of the superconducting order parameter $`|\mathrm{\Psi }|`$. The pioneering work on mesoscopic superconductors was carried out already in 1962 by Little and Parks , who measured the shift of the critical temperature $`T_c(H)`$ of a (multiply connected) thin-walled Sn microcylinder (a thin-wire ”loop”) in an axial magnetic field $`H`$. The $`T_c(H)`$ phase boundary showed a periodic component, with the magnetic period corresponding to the penetration of a superconducting flux quantum $`\mathrm{\Phi }_0=h/2e`$. A few years later, Saint-James calculated the $`T_c(H)`$ of a singly-connected cylinder (a mesoscopic ”disk”). Taking into account the analogy with the situation of a semi-infinite superconducting slab in contact with vacuum , the critical field was called $`H_{c3}(T)`$ in this case, since superconductivity nucleates initially near the sample interface. In the present paper, we will use the notation $`H_{c3}^{}(T)`$, for the nucleation magnetic field. The $`T_c(H)`$ phase boundary (or $`H_{c3}^{}(T)`$) of the disk shows, just like for the usual Little-Parks effect in a multiply connected sample (loop), an oscillatory behaviour. When moving along $`T_c(H)`$, superconductivity concentrates more and more near the sample interface as $`H`$ grows. A giant vortex state is formed: a ”normal” core carries $`L`$ flux quanta, and the ’effective’ loop radius increases, resulting in a decrease of the magnetic oscillation period. An experimental verification of these predictions was carried out later on by Buisson et al. and by Moshchalkov et al. . In the early paper from Saint-James and de Gennes , $`H_{c3}^{}(T)`$ has been calculated also for a film exposed to a parallel magnetic field, where surface superconductivity can grow along two superconductor/vacuum interfaces. For low magnetic fields, the two surface superconducting sheaths overlap, and, as a result, $`T_c`$ versus $`H`$ becomes parabolic, which is characteristic for 2D behaviour. When increasing the field, a crossover to a linear $`T_c(H)`$ dependence (3D) occurs at $`t2\xi (T)`$, with $`t`$ the film thickness. Shortly after, it was shown that vortices start to nucleate in the film at this dimensional crossover point ($`t=1.8\xi (T)`$. The goal of the present paper is to study the phase boundary $`T_c(H)`$ of loops made of finite width wires. In a Type-II material, superconductivity is expected to be enhanced, with respect to the bulk upper critical field $`H_{c2}`$ : $`H_{c3}^{}(T)>H_{c2}(T)`$, both at the external and the internal sample surfaces. As for a film in a parallel field, a 2D-3D dimensional crossover can be anticipated, since the loops may be simply considered as a film, which is bent such that its ends are joined together. We calculate, for the first time, the phase boundary $`T_c(H)`$ as the ground state solution of the linearized first GL equation with two superconductor/vacuum interfaces. ## 2 The linearized GL equation The linearized first GL equation to be solved in order to find $`T_c(H)`$ is: $$\frac{1}{2m^{}}(i\mathrm{}\stackrel{}{}2e\stackrel{}{A})^2\mathrm{\Psi }=|\alpha |\mathrm{\Psi }.$$ (1) This equation is formally identical to the Schrödinger equation for a particle with a charge $`2e`$ in a magnetic field. At the onset of superconductivity, the nonlinear GL term can be omitted and the $`z`$-dependence disappears from the equations and therefore an infinitely long cylinder and a disk have identical $`T_c(H)`$ boundaries. It is further assumed that $`\mu _0\stackrel{}{H}=rot\stackrel{}{A}`$, with $`H`$ the applied magnetic field. The eigenenergies $`|\alpha |`$ can be written as: $$|\alpha |=\frac{\mathrm{}^2}{2m^{}\xi ^2(T)}=\frac{\mathrm{}^2}{2m^{}\xi ^2(0)}\left(1\frac{T}{T_{c0}}\right),$$ (2) $`T_{c0}`$ being the critical temperature in zero magnetic field. From the energy eigenvalues of Eq. (1), the lowest Landau level $`|\alpha _{LLL}(H)|`$ is directly related to the highest possible temperature $`T_c(H)`$, for which superconductivity can exist. For the loop geometries, we choose the cylindrical coordinate system $`(r,\phi )`$ and the gauge $`\stackrel{}{A}=(\mu _0Hr/2)\stackrel{}{e}_\phi `$, where $`\stackrel{}{e}_\phi `$ is the tangential unit vector. The exact solution of the Hamiltonian (Eq. (1)) in cylindrical coordinates takes the following form : $`\mathrm{\Psi }(\mathrm{\Phi },\phi )`$ $`=`$ $`e^{ıL\phi }\left({\displaystyle \frac{\mathrm{\Phi }}{\mathrm{\Phi }_0}}\right)^{L/2}\mathrm{exp}\left({\displaystyle \frac{\mathrm{\Phi }}{2\mathrm{\Phi }_0}}\right)K(n,L+1,\mathrm{\Phi }/\mathrm{\Phi }_0)`$ (3) $`K(a,c,y)`$ $`=`$ $`c_1M(a,c,y)+c_2U(a,c,y).`$ Here $`\mathrm{\Phi }=\mu _0H\pi r^2`$ is the applied magnetic flux through a circle of radius $`r`$. The number $`n`$ determines the energy eigenvalue. Most generally, the function $`K(a,c,y)`$ can be any linear combination of the two confluent hypergeometric functions (or Kummer functions) $`M(a,c,y)`$ and $`U(a,c,y)`$ , but the sample topology puts a constraint on $`c_1,c_2`$, and $`n`$, via the Neumann boundary condition: $$(ı\mathrm{}\stackrel{}{}2e\stackrel{}{A})\mathrm{\Psi }|_{,b}=0,$$ (4) which the solutions $`\mathrm{\Psi }`$ of Eq. (1) have to fulfill at the sample interfaces $`b`$. The eigenenergies of Eq. (1) can be written in the form: $$\frac{r_o^2}{\xi ^2(T_c)}=\frac{r_o^2}{\xi ^2(0)}\left(1\frac{T_c(H)}{T_{c0}}\right)=4\left(n+\frac{1}{2}\right)\frac{\mathrm{\Phi }}{\mathrm{\Phi }_0}=ϵ(H_{c3}^{})\frac{\mathrm{\Phi }}{\mathrm{\Phi }_0},$$ (5) where $`\mathrm{\Phi }=\mu _0H\pi r_o^2`$ is arbitrarily defined. The integer number $`L`$ is the phase winding, or fluxoid quantum number. The parameter $`n`$ depends on $`L`$ and is not necessarily an integer number, as we shall see further on. The bulk Landau levels are obtained when substituting $`n=0,1,2,\mathrm{}`$ in Eq. (5), meaning that the lowest level $`n=0`$ corresponds to the bulk upper critical field $`\mu _0H_{c2}(T)=\mathrm{\Phi }_0/\left(2\pi \xi ^2(T)\right)`$. For a disk geometry , we have to take $`c_2=0`$ in Eq. (3) in order to avoid the divergency of $`U(a,c,y0)=\mathrm{}`$ at the origin. Selecting the lowest Landau level at each value $`\mathrm{\Phi }`$, one ends up with a cusp-like $`T_c(H)`$ phase boundary , which is composed of values $`n<0`$ in Eq. (5), thus leading to $`H_{c3}^{}(T)>H_{c2}(T)`$. A similar calculation was performed for a single circular microhole in a plane film (”antidot”) , where $`c_1=0`$ in Eq. (3), since $`M(a,c,y\mathrm{})=\mathrm{}`$. Here as well, the lowest Landau level consists of solutions with $`n<0`$. At each cusp in $`T_c(\mathrm{\Phi })`$, the system makes a transition $`LL\pm 1`$, i.e. a flux quantum enters or is removed from the sample. The loops we are currently studying have two superconducting/vacuum interfaces, one at the outer radius $`r_o`$, and one at the inner radius $`r_i`$. Consequently, the boundary condition (Eq. (4)) has to be fulfilled at both $`r_o`$, and $`r_i`$. As a result, we have a system of two equations and two variables $`n`$ and $`c_2`$ ($`c_1=1`$ is chosen), which we solved for different values of $`r_i/r_o`$. ## 3 Results Fig. 1 shows the Landau level scheme (dashed lines) calculated from Eqs. (3)-(5), for a loop with $`r_i/r_o=0.5`$. The applied magnetic flux $`\mathrm{\Phi }=\mu _0H\pi r_o^2`$ is defined with respect to the outer sample area. The $`T_c(H)`$ boundary is composed of $`\mathrm{\Psi }`$ solutions with a different phase winding number $`L`$ and is drawn as a solid cusp-like line in Fig. 1. At $`\mathrm{\Phi }0`$, the state with $`L=0`$ is formed at $`T_c(\mathrm{\Phi })`$ and one by one, consecutive flux quanta $`L`$ enter the loop as the magnetic field increases. For low magnetic flux, the background depression of $`T_c`$ is parabolic, whereas at higher flux, $`T_c(\mathrm{\Phi })`$ becomes quasi-linear, just like for the case of a filled disk. The crossover point from parabolic to quasi-linear appears at about $`\mathrm{\Phi }14\mathrm{\Phi }_0`$. The solid and dotted straight lines in Fig. 1 are the bulk upper critical field $`H_{c2}(T)`$ and the surface critical field $`H_{c3}(T)`$ for a semi-infinite slab, respectively. In these units the slopes of the curves (see Eq. (5)) are $`ϵ=2`$ for $`H_{c2}`$ (substitute $`n=0`$ in Eq. (5)) and $`ϵ=2/1.69`$ for $`H_{c3}`$. The ratio $`\eta =ϵ(H_{c2})/ϵ(H_{c3})=1.69`$ corresponds then to the enhancement factor $`H_{c3}(T)/H_{c2}(T)`$ at a constant temperature. For the loops we are studying here, $`\eta =ϵ(H_{c2})/ϵ(H_{c3}^{})`$ is varying as a function of the magnetic field. The energy levels below the $`H_{c2}`$ line (solid straight line in Fig. 1) could be found by fixing a certain $`L`$, and finding the real numbers $`n<0`$ numerically after inserting the general solution (Eq. (3)) into the boundary condition (Eq. (4)). Note that the lowest Landau level always has a lower energy $`|\alpha (\mathrm{\Phi })|`$ than for a semi-infinite superconducting slab, which implies $`H_{c3}^{}(T)>H_{c3}(T)=1.69H_{c2}(T)`$. As mentioned earlier, in a thin film of thickness $`t`$ in a parallel field $`H`$, a dimensional crossover is found at $`t=1.84\xi (T)`$. For low fields (high $`\xi `$) $`T_c(H)`$ is parabolic (2D), and for higher fields vortices start penetrating the film and consequently $`T_c(H)`$ becomes linear (3D) . In Fig. 1 the small arrow indicates the point on the phase diagram $`T_c(\mathrm{\Phi })`$ where $`w=1.84\xi (T)`$. For the loops as well, the dimensional transition shows up approximately at this point, although the vortices are not penetrating the sample area in the 3D regime. Instead, the middle loop opening contains a coreless ’giant vortex’ with an integer number of flux quanta $`L\mathrm{\Phi }_0`$. In order to compare the flux periodicity of $`T_c(\mathrm{\Phi })`$, we have plotted, in Fig. 2, the lowest energy levels of Fig. 1 as $`\eta ^1=ϵ(H_{c3}^{})/ϵ(H_{c2})`$, for loops with a different $`r_i/r_o`$. In this representation, the dotted horizontal line at $`\eta ^1=0.59`$ corresponds to the surface critical field line $`H_{c3}(T)`$. The nucleation field of a disk $`H_{c3}^{}(T)>1.69H_{c2}(T)`$, and for a circular microhole in an infinite film (”antidot”) $`H_{c3}^{}(T)<1.69H_{c2}(T)`$. As $`\mathrm{\Phi }`$ grows (the radius goes to infinity) the $`H_{c3}^{}(T)`$ of both the disk and the antidot approaches the $`H_{c3}(T)`$ line. For all the loops we study here, the presence of the outer sample interface automatically implies that $`H_{c3}^{}(T)>H_{c3}(T)`$ is enhanced ($`\eta >1.69`$), with respect to the case of a flat superconductor-vacuum interface. For loops with a small $`r_i/r_o`$, the $`T_c(\mathrm{\Phi })`$ boundary very rapidly collapses with the $`T_c(\mathrm{\Phi })`$ of the dot ($`\eta `$ becomes the same). The presence of the opening in the sample is not relevant for the giant vortex formation in the high flux 3D regime. On the contrary, in the low flux regime, the surface sheaths along the two interfaces overlap, giving rise to a different periodicity of $`T_c(\mathrm{\Phi })`$ and to a parabolic background. This regime can be described within the London limit, since superconductivity nucleates almost uniformly within the sample. In summary, we have solved the linearized GL equation for loops of different wire width, with Neumann boundary conditions at both the outer and the inner loop radius. The critical fields $`H_{c3}^{}(T)`$ are always above the $`H_{c3}(T)=1.69H_{c2}(T)`$. The ratio $`H_{c3}^{}(T)/H_{c2}(T)`$ is enhanced most strongly when the sample’s surface-to-volume ratio is the largest. The $`T_c(\mathrm{\Phi })`$ behaviour can be split in two regimes: for low flux, the background of $`T_c`$ is parabolic (characteristic for 2D behaviour) and the Little-Parks $`T_c(\mathrm{\Phi })`$ oscillations are perfectly periodic. In the high flux regime, the period of the $`T_c(\mathrm{\Phi })`$ oscillations is decreasing with $`\mathrm{\Phi }`$ and the background $`T_c`$ reduction is quasi-linear (3D regime). The 2D-3D crossover between the two regimes, at a certain applied flux $`\mathrm{\Phi }`$, is similar to the dimensional transition in thin films subjected to a parallel field. As soon the 3D regime is reached, a giant vortex state is created, where only a sheath close to the sample’s outer interface is superconducting. The authors wish to thank H. J. Fink for stimulating discussions. This work has been supported by the Belgian IUAP, the Flemish GOA and FWO-programmes, and by the ESF programme VORTEX.
no-problem/9910/astro-ph9910428.html
ar5iv
text
# First detections of FIRBACK sources with SCUBA ## 1 Introduction In the far-IR/sub-mm waveband there are currently two pressing (and related) cosmological mysteries: 1. what makes up the background detected by COBE (Puget et al. 1996, Fixsen et al. 1998, Hauser et al. 1998, Lagache et al. 1999)? 2. how do sources detected in the sub-mm (see e.g. Sanders 1999 and references therein) relate to galaxies at other wavelengths? The study of both issues provides important diagnostics of galaxy formation models, and the answers should help illuminate the dark ages of how the first objects formed and subsequently evolved into present day galaxies. The first question is the main motivation behind the FIRBACK project (Clements et al. 1998, Lagache 1998, Reach et al. 1998, Puget et al. 1999, Dole et al. 1999), which made deep $`170\mu `$m images of the sky with ISO, to resolve the CIB into sources. The second question has been the main driving force behind the search for distant sources by several teams using SCUBA, as well as the large amount of follow-up work and comparison with other wavelengths (e.g. Smail, Ivison & Blain 1997, Barger et al. 1998, Blain, Ivison & Smail 1998, Holland et al. 1998, Hughes et al. 1998, Smail et al. 1998, Barger et al. 1999, Blain et al. 1999a, Chapman et al. 1999, Cowie et al. 1999, Eales et al. 1999, Lilly et al. 1999). These observational campaigns are untangling the problem of how obscuration by dust skews our optical view of the early Universe, unveiling the ‘dark-side’ of galaxy formation out to distant redshifts, and helping provide unbiased estimates of the global star formation history of the Universe. In this letter we present the first results of a study at the interface between these two puzzles. Rather than trying to find new sources with SCUBA, we have carried out 450$`\mu `$m and 850$`\mu `$m photometry on sources already detected at 170$`\mu `$m with ISOPHOT. ## 2 The FIRBACK Survey The lifetime of the Infrared Space Observatory (ISO; Kessler et al. 1996) is long over, and we now have in hand the best information we will have from long-IR wavelengths until the launch of SIRTF. For SCUBA observations, the smallest extrapolations come at the longest wavelengths attainable with the ISOPHOT instrument (Lemke et al. 1996). ISOPHOT is an imaging photo-polarimeter with 92 arcsec pixels and 1.6 arcmin FWHM at $`170\mu `$m; reasonably high signal-to-noise sources in ‘dithered’ images can be located with an accuracy of $`\mathrm{\hspace{0.17em}40}`$ arcsec. Following the discovery of the CIB, ISO was in a unique position to investigate the sources which comprise this background. Consequently deep ISOPHOT images were obtained at $`170\mu `$m of three separate $`\mathrm{\hspace{0.17em}1}`$ square degree regions of the sky, selected for their low foreground emission – the FIRBACK Survey. One region, the ‘Marano’ fields, is only accessible from the southern hemisphere, while the other two coincide with the ‘N1’ and ‘N2’ fields of the ELAIS project (Oliver 1997), which used ISOCAM at $`7\mu `$m and $`15\mu `$m and ISOPHOT at 90$`\mu `$m (although only a fraction of the FIRBACK sources were detected at these other wavelengths). These northern regions have also been mapped with the VLA (Ciliegi et al. 1999). In addition an area towards the Lockman Hole is being studied by another group (Kawara et al. 1998). Here we have concentrated on a sample of objects from the roughly two square degree ‘N1’ field, centred at $`16^\mathrm{h}\mathrm{\hspace{0.17em}11}^\mathrm{m}`$, $`+54^{}\mathrm{\hspace{0.17em}25}^\mathrm{m}`$. For this first attempt at JCMT follow-up we chose objects with secure, unconfused radio identifications, relatively strong 170$`\mu `$m emission, and additionally high 170$`\mu `$m:21 cm flux density ratios (see Lagache et al., in preparation). This last criterion was aimed at biassing the sample away from the lowest redshift galaxies, and hence towards those which might have the highest 850$`\mu `$m:170$`\mu `$m flux density ratios. We expect that with the addition of more follow-up observations it should be possible to select future sub-samples with a higher likelihood of being strong SCUBA emitters. ## 3 JCMT Observations The data were obtained on the nights of 18–23 March 1999 using the SCUBA instrument (Holland et al. 1999) on the JCMT. The short- and long-wavelength arrays were used simultaneously, at 450$`\mu `$m and 850$`\mu `$m, respectively. We used ‘photometry’ mode, chopping at the standard 7.8125 Hz, and also nodding every second by about 45 arcsec in coordinates fixed to the array (i.e. there was no sky rotation). This means that each measurement is a double-difference between the central bolometer and positions 45 arcsec each side, corresponding approximately to positions of other bolometers on both the long- and short-wavelength arrays. The data were analyzed using the SURF package (Jenness & Lightfoot 1998). The raw data for the two arrays were flat-fielded, corrected for extinction, had bad bolometers removed, and had the average sky removed at each time interval. The information from the off-beams was then added, assuming that one long wavelength bolometer had an efficiency of exactly 0.5, with the long wavelength bolometer on the other side, as well as the two short wavelength off-beam bolometers, having slightly lower values (see Borys et al. 1999, Chapman et al. 1999, for more details). Adding the weighted off-beam signal always decreased the noise, and generally increased the signal to noise ratio (SNR), but we carried out the same procedure even when it slightly lowered the SNR. Calibration was performed a few times per night using planets and other strong sub-mm sources, and the values we used were similar to the standard gains. The standard deviation of the calibrations was 10% at 850$`\mu `$m and 12% at 450$`\mu `$m; these should be a reasonable estimate of the uncertainty in the calibration. At the low SNRs at which we are working, the calibration uncertainty is not a major contributor to the total uncertainty, and has essentially no effect on the SNR itself. ## 4 Individual Objects At 850$`\mu `$m we detected one source at $`>\mathrm{\hspace{0.17em}5}\sigma `$ and a further three at $`>\mathrm{\hspace{0.17em}3}\sigma `$. While we would not claim detection of the other sources, they generally have positive flux density (see next section), and certainly there are good upper limits in each case. This last remark applies to the 450$`\mu `$m data, where there is only one detections above $`3\sigma `$. Bayesian 95% upper limits can be obtained for all our non-detections, by integrating a Gaussian probability, neglecting the unphysical negative flux density region. The 850$`\mu `$m upper limits for our six non-detections are given in the last column of Table 1. At 450$`\mu `$m the limits are generally less constraining for reasonable SEDs, being around $`30`$mJy in the best cases. Using a combination of 170$`\mu `$m, 450$`\mu `$m and 850$`\mu `$m data, we can place some constraints on the spectral energy distributions (SEDs) of these FIRBACK galaxies. Assuming a grey-body spectrum, we can obtain a limit on some combination of luminosity, temperature, spectral index and redshift. Here we choose to normalize the luminosity at 170$`\mu `$m, and then use the SCUBA data to constrain the redshift. To do this we assume standard values for the dust temperature, $`T_\mathrm{d}=\mathrm{\hspace{0.17em}40}`$K (typical for sub-mm selected galaxies, e.g. Blain et al. 1999b, Dunne et al. 1999), and spectral index of the dust emissivity, $`\beta =\mathrm{\hspace{0.17em}1.5}`$. Because we do not know the absolute luminosity of any source, our results are degenerate in the ratio $`T_\mathrm{d}/(1+z)`$, and so we are unable to tell apart cooler objects at lower redshift from hotter objects at higher redshift. Assuming a uniform prior distribution of redshifts, we can obtain a Bayesian 95% confidence range on the redshift implied for each source by our 450$`\mu `$m and 850$`\mu `$m data, where it is understood that the redshifts can be scaled using a different dust temperature such that $`(1+z)/T_\mathrm{d}`$ is held constant. We find that the objects with the highest 850$`\mu `$m flux densities have the highest implied redshifts: e.g. $`0.66<z<\mathrm{\hspace{0.17em}1.23}`$ for N1-038. Those with lower flux densities at 850$`\mu `$m generally only yield upper limits to the redshift: e.g. $`z<\mathrm{\hspace{0.17em}0.55}`$ for N1-015. There are no objects for which we would infer $`z>\mathrm{\hspace{0.17em}2}`$. However, higher redshifts could be accommodated by adopting a higher dust temperature or a higher value of $`\beta `$ (and lower redshifts by lowering these parameters). The object in our sub-sample which is brightest at 170$`\mu `$m, N1-008, is hard to fit with any reasonable SED; it has approximately half the 850$`\mu `$m and 450$`\mu `$m flux density expected from even a $`z=\mathrm{\hspace{0.17em}0}`$ source with the same 170$`\mu `$m flux density. This suggests that there might be more than one source contributing to the ISOPHOT flux, which is not particularly unlikely, since the FIRBACK Survey is operating near the confusion limit. On the other hand, optical images show little in the error circle except for a very bright obvious spiral galaxy, and so from that point of view this is not a case where we expect more than one source in the ISOPHOT beam. Another possibility is that our JCMT beam did not include all the flux, since optically this object appears quite extended. However, we would expect the sub-mm emission to be more concentrated than the optical, and hence we are likely to have included most of the dust emission within the beam. This object will be discussed more extensively in Lagache et al. (in preparation). Relatively poor values of $`\chi ^2`$ are also found for N1-034 and N1-063. These result either from low SCUBA flux densities relative to ISOPHOT, or somewhat high SCUBA 450$`\mu `$m flux densities compared with 850$`\mu `$m. Higher SNR data, or data at additional wavelengths are required to determine whether these flux densities come from multiple objects, complex SEDs, or simply the low SNRs of the current data for these objects. Another way of estimating the redshift relies on the radio/far-IR correlation (e.g. Helou, Soifer & Rowan-Robinson 1987, Carilli & Yun 1999, Barger, Cowie & Richards 1999, Smail et al. 1999). Using the 20 cm VLA data from Ciliegi et al. (1999) and the explicit correlation (using $`\beta =\mathrm{\hspace{0.17em}1.5}`$, $`T_\mathrm{d}=\mathrm{\hspace{0.17em}40}`$K) from Carilli & Yun (1999), we find $`z=\mathrm{\hspace{0.17em}0.6}`$, 1.2, 1.1 and 1.4 for N1-008, N1-038, N1-061 and N1-063, respectively (our four 850$`\mu `$m detections). Except for N1-008 these are in broad agreement with the values obtained from the sub-mm and far-IR data alone. ## 5 Statistical Results If we combine all the data together, then we can obtain a much more precise picture of the average galaxy in our FIRBACK sub-sample. The average flux densities are given in Table 1, and represent a $`4.1\sigma `$ detection at 450$`\mu `$m and a $`7.3\sigma `$ detection at 850$`\mu `$m. The average SED is shown in Figure 1. It is clear that for $`T_\mathrm{d}=\mathrm{\hspace{0.17em}40}`$K the best fitting redshift is around 0.3, with values as low as $`z=\mathrm{\hspace{0.17em}0}`$ or higher than $`z\mathrm{\hspace{0.17em}0.6}`$ providing relatively poor fits. However, we can certainly accommodate $`z\mathrm{\hspace{0.17em}1}`$ galaxies for higher temperatures, and $`z\mathrm{\hspace{0.17em}0}`$ galaxies for lower temperatures. Since higher dust temperature is seen in star-bursting galaxies, compared with more typical star-forming galaxies, then it is possible that a fraction of these sources are much more luminous and at higher redshift. However, it would take unrealistically high temperatures, $`T_\mathrm{d}\mathrm{\hspace{0.17em}100}`$K, to push some of the objects up to say $`z\mathrm{\hspace{0.17em}3}`$ (although the possibility of gravitational lensing could complicate this). Models of galaxy populations (e.g. Guiderdoni et al. 1998), designed to fit number counts at a range of wavelengths, as well as the CIB, predict that the average FIRBACK galaxy is indeed at $`z\mathrm{\hspace{0.17em}1}`$. The other possibility is that we have uncovered a new population of relatively nearby star-forming galaxies, which do not show up clearly in surveys at other wavelengths. The degeneracy between $`(1+z)`$ and $`1/T_\mathrm{d}`$ makes it impossible to decide between these possibilities using the far-IR and sub-mm data alone. Although it seems unlikely that all our objects can be at low redshift, this could obviously be resolved by obtaining optical redshifts. Once redshifts have been obtained, then the sub-mm data will be invaluable in measuring the properties of the dust in the various galaxy types: far-IR luminosities, dust temperatures and emissivities. ## 6 Conclusions We have carried out the first SCUBA follow-up of FIRBACK sources. We found that they are generally detectable in the sub-mm; those with somewhat higher 850$`\mu `$m flux density may be at $`z\mathrm{\hspace{0.17em}1}`$, while those which are fainter in the sub-mm may be more normal galaxies at $`z\mathrm{\hspace{0.17em}0}`$. Models of evolving galaxy populations which provide a good fit to the 170$`\mu `$m counts, as well as counts at other wavelengths (e.g. Guiderdoni et al. 1998) predict that the median redshift of the FIRBACK galaxies is around 1 (Puget et al. 1999). Our results are consistent with this, provided that the average galaxy in our sub-sample is a distant star-bursting galaxy with fairly hot dust temperature. The other possibility is that some could be from an otherwise unknown population of low redshift star-forming galaxies with relatively low dust temperature. Further observations at sub-mm and other wavelengths should decide this issue, and reveal the detailed properties of the galaxies which comprise the CIB. ###### Acknowledgements. This work was supported by the Natural Sciences and Engineering Research Council of Canada. The James Clerk Maxwell Telescope is operated by The Joint Astronomy Centre on behalf of the Particle Physics and Astronomy Research Council of the United Kingdom, the Netherlands Organisation for Scientific Research, and the National Research Council of Canada.
no-problem/9910/astro-ph9910365.html
ar5iv
text
# Accretion-Ejection Instability and a “Magnetic Flood” scenario for GRS 1915+105 ## Introduction This contribution comes from two different lines of work: the first one is purely theoretical, and has led us to present recently tagger:TP99 an instability which may occur in the inner region of magnetized disks. We have called it Accretion-Ejection Instability (AEI), because one of its main characteristics is to extract energy and angular momentum from the disk, and to emit them vertically as Alfven waves propagating along magnetic field lines threading the disk. These Alfven waves then may deposit the energy and angular momentum in the corona above the disk, providing an efficient way to energize winds or jets from the accretion energy. The second approach has consisted in the comparison of this instability with the observed properties of the low-frequency (.5 - 10 Hz) Quasi-Periodic Oscillation (QPO) observed in the low and hard state of the micro-quasar GRS 1915+105. The very large and fast growing number of observational results on this source gives access to many aspects of the physics of the disk. They allow this comparison to rely on basic properties of the instability, and on more detailed ones, such as the correlation between the QPO and the evolution of the disk and coronal emissions (identified respectively as multicolor black body and comptonized power-law tail in the X-ray spectrum). This comparison encourages us to consider that the AEI may indeed be the source of the QPO. Thus we proceed by considering the $``$ 30 mn cycles of GRS 1915+105. These cycles are the most spectacular in the gallery of behaviors and spectral states of this source, in particular because multi-wavelength observations have shown IR and radio outbursts coinciding with them, consistent with the synchrotron emission from an expanding cloud ejected at relativistic speeds. These cycles have been analyzed in great details, and the QPO shows a very characteristic and reproducible behavior. We have thus built a scenario, starting from the identification of the QPO with the AEI, and considering how this could explain the evolution of the source during the cycle. We refer to it as a magnetic flood scenario, because we are led to believe that the cycle is controlled by the build-up of the vertical magnetic flux stored within the disk. The scenario is compatible with the available information on this type of cycle, explains a number of results in existing data, and leads to intriguing considerations on the behavior of GRS 1915+105. ## Accretion-Ejection Instability We will present the instability here only in general terms, and refer to our recent publication tagger:TP99 for detailed derivation and results. It appears in disks threaded by a vertical magnetic field, of the order of equipartition with the gas pressure ($`\beta =8\pi p/B^2\stackrel{<}{}1`$). The instability appears essentially as a spiral density wave in the disk, very similar to the galactic ones, but driven by the long-range action of magnetic stresses rather than self-gravity. The main difference lies in the amplification mechanism: instability results from the coupling of the spiral density wave with a Rossby wave. Rossby waves, associated with a gradient of vorticity, are best known in planetary atmospheres, including the other GRS – the Great Red Spot of Jupiter. In the present case differential rotation allows the spiral wave to grow by extracting energy and angular momentum from the disk, and transferring them to a Rossby vortex at its corotation radius. This radius is constrained by the physics of the spiral to be a few times ($`25`$ for the azimuthal wavenumber $`m=1`$, i.e. a one-armed spiral) the inner radius of the disk. A third type of wave completes the description of the instability: it is an Alfven wave, emitted along the magnetic field lines towards a low-density corona above the disk. The mechanism here is simply that the Rossby vortex twists the footpoints of the field lines in the disk. This twist will then propagate upward, carrying to the corona the energy and angular momentum extracted from the disk by the spiral. The mechanism is thus quite complex; this comes essentially from differential rotation, which allows a mixing of waves which would otherwise evolve independently. It results in an instability, growing on a time scale of the order of $`r/h`$ times the rotation time (where $`h`$ is the disk thickness and $`r`$ its radius). We will present here its main characteristics, which will be essential in what follows: $``$ It occurs when the vertical magnetic field $`B_0`$ is near equipartition ($`\beta \stackrel{<}{}1`$) and presents a moderate or strong radial gradient. $``$ The efficiency of the coupling to the Rossby wave selects modes with low azimuthal wavenumbers (the number of arms of the spiral), $`m=2`$ or $`m=1`$ usually, depending on a number of parameters (density and temperature profiles, field strength, etc.) $``$ For a given $`m`$, the mode frequency is close to $`\omega (m1)\mathrm{\Omega }_{int}`$, the rotation frequency at the inner disk radius. In the special case of the $`m=1`$ mode, the frequency is usually of the order of $`.2.5`$ times $`\mathrm{\Omega }_{int}`$. $``$ By analogy with galactic spirals, we can expect that these properties result in the formation of a large scale, quasi-stationary spiral structure rather than in a turbulent cascade to small scales. $``$ This should strongly affect the structure of the disk. Indeed, underlying the usual model of turbulent viscous transport in a disk (leading to Shakura and Sunyaev’s model of $`\alpha `$ disks) is the assumption of small scale turbulence. This leads to a local deposition of the accretion energy, efficiently heating the disk. Here on the other hand, the accretion energy is transported away by waves: extracted from the disk by the spiral wave, it is first transferred to the Rossby vortex, then to Alfven waves. Thus, here as in galaxies, the connection between gas accretion and disk heating is not as straightforward as in $`\alpha `$-disks. ## Magnetic flood scenario The low-frequency QPO in GRS 1915+105 has been the object of many recent studies tagger:swank ; tagger:muno . During the $`30`$ mn cycles of this source, the QPO appears only during the low state, and its frequency varies in a repetitive manner during that phase. Let us convert its frequency $`\nu _{QPO}`$ to a keplerian radius $`r_{QPO}`$, and compare it to the color radius $`r_{color}`$ resulting from a multi-color black body model of the disk emission: observations show that the ratio $`r_{QPO}/r_{color}`$ remains of the order of 5 while both radii vary during the low state. It is usually considered that $`r_{color}`$ gives a measure of the internal radius $`r_{int}`$ of the disk, although the ratio $`r_{color}/r_{int}`$ is subject to some uncertainties. It is thus very tempting to consider that the QPO originates from a pattern in the disk, rotating at a frequency corresponding to a radius $`r_{QPO}`$ of the order of a few times $`r_{int}`$. This may be supported by a correlation, found between $`\nu _{QPO}`$ and a higher frequency feature in various binary systems, including neutron star and black hole binariestagger:psaltis . Although the evidence is fragile in the case of GRS 1915+105, it would lead to consider that the ratio $`r_{QPO}/r_{int}`$ is of the order of 5, in agreement with the previous result, and corresponding to the value we expect for the $`m=1`$ AEI. This, and more detailed arguments to be presented elsewhere, leads us to tentatively identify the AEI as the source of the QPO, and to consider how this could fit with the 30 mn cycles of this source. We start from the conditions responsible for the onset of the instability, i.e. a change in $`B_0`$ or its radial gradient. We find better agreement with the former, and in this case the sudden transition from the “high and soft” state to the “low and hard” one would find a natural explanation: one has to remember that the best candidate to explain accretion in a magnetized disk is the magneto-rotational instability (MRI) tagger:balbus . It appears in disks with low magnetization ($`\beta >1`$), and results in small-scale turbulence which causes viscous accretion, in agreement with a standard $`\alpha `$ disk. Let us consider that in the high state the disk extends down to its last stable orbit at $`r_{LSO}`$, as suggested by the consistent minimal value of $`r_{color}`$, and that accretion is caused by the MRI, following an $`\alpha `$ prescription. The MRI might be responsible for the “band-limited noise” observed in power density spectra (below the QPO frequency, i.e. farther in the disk, when the QPO is present). Although numerical simulations of the MRI give estimates of the resulting $`\alpha `$, i.e. turbulent viscosity, they are not able at this stage to give the associated turbulent magnetic diffusivity, so that the evolution of the magnetic flux in the disk cannot be prescribed. Our main assumption is that in these conditions vertical magnetic flux builds up in the disk: either because it is dragged in with gas flowing from the companion, or from a dynamo effect tagger:brandenburg . This is actually the configuration observed near the center of the Galaxy. Then the field must grow in the disk, so that $`\beta `$ decreases until it reaches $`\beta 1`$, at which point the MRI stops and our instability sets in, appearing as the low-frequency QPO. The most important consequence is that turbulent disk heating stops, so that the disk temperature should drop, further reducing $`\beta `$. The abrupt transition from the high to the low state thus finds a natural explanation, as a sharp transition between a low magnetization, turbulently heated state to a high magnetization one, where disk heating stops and accretion energy is redirected toward the corona (although estimating what fraction of this energy is actually deposited in the corona depends on the physics of Alfven wave damping). The content of the space between the disk (when it does not extend down to $`r_{LSO}`$) and the black hole is not known. It might be an ADAF, or a large-scale, force-free magnetic configuration holding the magnetic flux frozen in the black hole (following the Blandford-Znajek mechanism). In both cases the condition which determines the inner disk radius $`r_{int}`$ must be complex, but it is reasonable to assume that a drop in the disk pressure could explain the increase of $`r_{color}`$ at the onset of the low state. Continuing accretion from the outer disk region would then move the disk back toward the last stable orbit, as seen during the low state. The light curves show an “intermediate spike”, halfway through the low state. At this time $`r_{color}`$ is back to its minimal value, the QPO stops, and the coronal emission decreases sharply. This is also when the infra-red synchrotron emission, presumably from a “blob” ejected at relativistic speed, begins tagger:mirabel . It is then natural to consider that at this time a large-scale magnetic event, possibly reconnection with the magnetic flux surrounding the black hole, causes ejection of the coronal plasma. This allows the disk to return to a lower magnetization state, so that once it has fully recovered it can start a new cycle in the high and soft state. ## Conclusion The properties of the low-frequency QPO in GRS 1915+105 have led us to tentatively identify it with the Accretion-Ejection Instability. This has allowed us to build up a scenario for the 30 mn cycles of this source. In contrast with global descriptions, such as $`\alpha `$ disks, this does not allow us to predict specific spectral signatures: in the same manner, knowledge of Rossby waves would hardly allow one to predict the existence and appearance of the Great Red Spot on Jupiter. On the other hand, the scenario is qualitatively compatible with all the information we have about these cycles. It explains why and how the QPO appears, how its frequency varies with the color radius, and why the transition from the high to the low state has to be a sharp one. Future work will be devoted to the QPO behavior at other times in GRS 1915+105, and then to other sources (black hole or neutron star binaries) where the identification of the QPO might give access to additional physics.
no-problem/9910/astro-ph9910523.html
ar5iv
text
# A Keck Survey of Gravitational Lens Systems : I. Spectroscopy of SBS 0909+532, HST 1411+5211, and CLASS B2319+051 ## 1 Introduction Gravitational lensing has proven to be an invaluable astrophysical tool for constraining the cosmological parameters $`H_0`$ (Kundić et al. 1997a; Schechter et al. 1997; Lovell et al. 1998; Biggs et al. 1999; Fassnacht et al. 1999) and $`\mathrm{\Lambda }`$ (Falco, Kochanek & Muñoz 1998; Helbig et al. 1999). In addition, a unique contribution of gravitational lensing to extragalactic astronomy lies in its capacity to measure directly the masses of the lensing objects. Consequently, it can be used to study galaxy structure and its evolution with redshift (e.g. Keeton, Kochanek & Falco 1998). The advent of high-spatial-resolution imaging with HST and faint-object spectroscopy with the Keck 10-m telescopes have opened new possibilities in the field (e.g. Kundić et al. 1997b,c; Fassnacht & Cohen 1998). Systems with compact configurations and faint components can now be studied, increasing the size and completeness of statistical samples of lenses. Specifically, a detailed study of a large number of gravitational lens systems can be used (1) to identify simple lens systems for the measurement of $`H_0`$; (2) to measure the mass-to-light of the lensing galaxies; (3) to compare the dark matter to the stellar light distribution of the lens galaxies; and (4) to probe the interstellar medium in the lensing galaxies. Nearly all of these goals depend critically on accurate redshift determinations for the background sources and the lensing galaxies. In light of this, we have begun a coordinated program to use the Low Resolution Imaging Spectrograph (LRIS; Oke et al. 1995) on the Keck II telescope to measure spectroscopic redshifts for all lens systems where either the source or lens redshift is currently unavailable. We have drawn our sources from the sample of the CfA-Arizona Space Telescope Lens Survey of gravitational lenses (CASTLES). The CASTLES team has compiled a list of all known confirmed or candidate gravitational lens systems with angular separations smaller than $`10^{^{\prime \prime }}`$. These systems were originally identified by a variety of methods and by many different groups. The specific goal of CASTLES is the construction of a complete three-band ($`V`$, $`I`$, and $`H`$) photometric survey of this sample. CASTLES uses existing Hubble Space Telescope (HST) images when available. Otherwise, they have supplemented the archival data with new WFPC2 and/or NICMOS imaging (see http://cfa-www.harvard.edu/castles). In addition, we have pre-publication access to new gravitational lens candidates discovered in the Cosmic Lens All-Sky Survey (CLASS). The CLASS survey is being conducted at radio wavelengths with the VLA and consists of observations of $``$12,000 flat-spectrum radio sources to search for gravitational lens candidates. The first three phases of this survey have confirmed 12 new lenses and found $``$10 additional candidates (Myers et al. 1999). As part of the CLASS follow-up observations, many of these lenses have been imaged in two or three bands with HST (Jackson et al. 1998a,b; Koopmans et al. 1998, 1999; Sykes et al. 1998; Fassnacht et al. 1999; Xanthopoulos et al. 1999). At this time, eight of the 12 confirmed lenses from CLASS are included in CASTLES. Earlier results from the first phases of this Keck survey have already been published (Kundić et al. 1997b,c; Fassnacht & Cohen 1998). In this paper, we present spectra of three lens systems with missing redshifts : SBS 0909+532, HST 1411+5211, and CLASS B2319+051. Unless otherwise noted, we use $`H_0=100h\mathrm{km}\mathrm{s}^1\mathrm{Mpc}^1`$, $`\mathrm{\Omega }_m=0.2`$, and $`\mathrm{\Omega }_\mathrm{\Lambda }=0.0`$. ## 2 Targets Below we present some relevant information on the previous observations of the three lens systems which are the subject of this paper. ### 2.1 SBS 0909+532 SBS 0909+532 was first discovered as a quasar by Stepanyan et al. (1991) and later identified in the Hamburg-CfA Bright Quasar Survey (Engels et al. 1998). Kochanek, Falco & Schild (1995) believed that this quasar was a good candidate for gravitational lensing because of its redshift ($`z=1.377`$) and its bright optical magnitude ($`B=17.0`$). Kochanek et al. (1997) first resolved this source into a close pair which was separated by $`\mathrm{\Delta }\theta =1\stackrel{}{\mathrm{.}}11`$ and had a flux ratio of $`R_BR_A=0.58`$ mag. These observations suggested that this system was indeed a gravitational lens. Oscoz et al. (1997) confirmed this hypothesis with spectra of the two components taken at the William Herschel Telescope (WHT). The spectra showed that components A and B were quasars at the same redshift and had identical spectra. Oscoz et al. (1997) also detected the Mg II $`\lambda \lambda 2796,2803`$ doublet in absorption at the same redshift ($`z=0.83`$) in both components. They argued that these absorption features were associated with the photometrically unidentified lensing galaxy. Optical and infrared HST imaging indicate that the lensing galaxy has a large effective radius ($`r_e=1\stackrel{}{\mathrm{.}}58\pm 0\stackrel{}{\mathrm{.}}90`$) and a correspondingly low surface brightness. It has a total magnitude of $`H=16.75\pm 0.74`$ and a color of $`IH=2.28\pm 1.01`$ within an aperture of diameter 1$`\stackrel{}{\mathrm{.}}`$7 (Lehár et al. 1999). The large uncertainties are a result of the difficulty in subtracting the close pair of quasar images (see Figure 1 of Lehár et al. 1999). Our observations confirm that the lensing galaxy is at the same redshift as the Mg II absorbers. ### 2.2 HST 1411+5211 HST 1411+5211 is a quadruple lens that was discovered by Fischer, Schade & Barrientos (1998) in archival WFPC2 images taken of the cluster CL 3C294 (CL1409+5226) with the F702W filter. The maximum image separation is 2$`\stackrel{}{\mathrm{.}}`$28. The intensities of the four components are reasonably similar; the F702W AB magnitudes correspond to $`\{\mathrm{A}\mathrm{B}\mathrm{C}\mathrm{D}\}=\{24.9625.9524.9225.00\}`$. The primary lensing galaxy is clearly observed in the HST images with a total magnitude of F702W(AB) $`=20.78\pm 0.05`$. It has the appearance of a morphologically normal elliptical galaxy with a measured half-light radius of $`r_{\frac{1}{2}}=0\stackrel{}{\mathrm{.}}61\pm 0\stackrel{}{\mathrm{.}}03`$ and an ellipticity of $`ϵ=0.27\pm 0.03`$. The lensing galaxy is located only 52$`\stackrel{}{\mathrm{.}}`$0 (or $`195h^1`$ kpc) from the center of the massive cluster CL 3C295 at $`z=0.46`$ (Butcher & Oemler 1978). Although this cluster was the subject of an extensive spectroscopic survey by Dressler & Gunn (1992), there is no measured redshift for the lensing galaxy (identified as galaxy #162 of Table 6 in Dressler & Gunn 1992); however, a photometric redshift of $`z=0.598\pm 0.11`$ based on narrow-band imaging has been measured (Thimm et al. 1994). Fischer et al. (1998) argued that this photometric redshift was suspect. Firstly, the photometric redshift had the largest quoted uncertainty of all the observed galaxies (over two times larger than the average). Secondly, Thimm et al. (1994) classified this galaxy as an Scd based on their measurement of the spectral energy distribution. The high-angular-resolution HST imaging clearly indicates that this galaxy is an early-type, not a late-type, galaxy. In this paper, we convincingly show that the photometric redshift of Thimm et al. (1994) is incorrect. ### 2.3 CLASS B2319+051 B2319+051 is a doubly-imaged gravitational lens systems newly discovered by CLASS (Marlow et al. 1999). Radio images taken with the Very Large Array (VLA) and the Multiple-Element Radio-Linked Interferometer (MERLIN) show two compact components aligned in a N-S orientation with a separation of 1$`\stackrel{}{\mathrm{.}}`$36 and a flux density ratio of 5.7:1. High-resolution radio imaging with the Very Large Baseline Array (VLBA) resolve each component into two subcomponents with a separation of 0$`\stackrel{}{\mathrm{.}}`$021 for A and 0$`\stackrel{}{\mathrm{.}}`$0075 for B. The orientation and morphology of this configuration is consistent with the lensing hypothesis. Images of this system taken with NICMOS do not show any infrared counterparts to the radio components; however, it does reveal two lensing galaxies (Marlow et al. 1999). G1 is a large, elliptical-like galaxy which is associated with the position of the two radio components; hence, it is the primary lensing galaxy. G2 is an extended, irregular galaxy which shows two clear emission peaks (G2a and G2b) and is separated from G1 by G1–G2b $`=`$ 3$`\stackrel{}{\mathrm{.}}`$516 (see Figure 9 of Marlow et al. 1999). This galaxy is the source of an external shear as modeled by Marlow et al. (1999). The integrated magnitudes of G1 and G2 are F160W $`=18.2`$ and 19.1, respectively. ## 3 Observations All of the observations were performed with the Low Resolution Imaging Spectrograph (LRIS; Oke et al. 1995) on the Keck II telescope. For the spectroscopic observations, we have used the instrument in long-slit mode with the $`300\mathrm{grooves}\mathrm{mm}^1`$ grating which provides a spectral resolution of $`2.44\mathrm{\AA }\mathrm{pixel}^1`$. The long slit was aligned along the axis defined by the two images of the background source for both SBS 0909+532 and CLASS B2319+051. Note that the latter position covers the primary lensing galaxy G1 in the B2319+051 system but not G2. For galaxy G2, the longslit was placed along the axis defined by its two components, G2a and G2b (see §2.3). For HST 1411+5211, the long slit was aligned along the axis defined by images A and C of the background source (see Fischer et al. 1998). Except for galaxy G2 of B2319+051 where only one exposure was taken, two exposures of equal duration were taken for each object. The specific details of these observations are listed in Table 1. In addition, we have obtained $`R`$ images of CLASS B2319+051 using LRIS in imaging mode. These data are the only optical imaging available on this source. The total exposure time for these observations is 1200 sec. In all cases, the data were reduced using standard IRAF<sup>1</sup><sup>1</sup>1IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the NSF. routines. The bias levels were estimated from the overscan region on each chip. For the imaging data, a flat-field was constructed from dome flats taken in the beginning of each night. For the spectroscopic observations, flat-fielding and wavelength calibration were performed using internal flat-field and arc lamp exposures which were taken after each science exposure. Observations of the Oke (1990) spectrophotometric standard stars Feige 34, G138-31, BD332642, and Feige 110 were used to remove the response function of the chip. The individual spectra for each object were weighted by the squares of their signal-to-noise ratios and combined. ## 4 Results The final spectra are shown in Figures 13 and 56. The lines used to identify the redshifts of the lensing galaxies and the background sources are given in Table 2. The redshift uncertainties (see Table 3) have been estimated by taking the rms scatter in the redshifts calculated from the individual spectral lines. We present a more detailed discussion of the individual systems below. ### 4.1 SBS 0909+532 The spatial projection of the spectra from the SBS 0909+532 system shows a double peak, with the sub-peaks separated by approximately 5 pixels or 1$`\stackrel{}{\mathrm{.}}`$1. This separation matches the 1$`\stackrel{}{\mathrm{.}}`$107 quasar image separation measured by Kochanek et al. (1997). The spectrum shown in Figure 1 was extracted using an aperture of 2 pixels placed in the trough between the sub-peaks of emission in order to maximize the fractional contribution of the lensing galaxy. The final spectrum is still dominated by light from the background source, a quasar at $`z_s=1.377`$ with broad C III\] and Mg II emission lines (as seen in Oscoz et al. 1997). However, it is possible to see features from the lensing galaxy, including the Ca II H and K doublet, which establishes the lens redshift as $`z_{\mathrm{}}=0.830`$. The features identified with the lensing galaxy are typical of an early-type galaxy. For a non-evolving elliptical galaxy at the lens redshift, we expect an optical–infrared color of $`IH2`$ (Poggianti 1997). Consequently, the observed value of $`IH=2.28\pm 1.01`$ (Lehár et al. 1999) provides additional support for an early-type classification of the lensing galaxy. ### 4.2 HST 1411+5211 The spectrum of lens system HST 1411+5211 shows two distinct traces, a bright central source which is separated by approximately 5 pixels or 1$`\stackrel{}{\mathrm{.}}`$1 from a significantly fainter one. The two traces correspond to the lensing galaxy and the background source, respectively, as the separation is exactly that expected from the high-angular-resolution HST imaging (Fischer et al. 1998). From these spectra, we have obtained the source and lens redshift of $`z_{\mathrm{}}=0.465`$ and $`z_s=2.811`$, respectively. In the spectrum of the lensing galaxy, the strong 4000Å break, the small equivalent width Balmer absorption lines, and the lack of \[O II\] emission indicate that little star formation is occurring (Figure 2). The spectral features are consistent with the fact that this galaxy appears as a morphologically normal elliptical. The measured redshift proves that the lensing galaxy is a member of the cluster CL 3C295. The background source shows a modest emission line at an observed wavelength of 4634Å (Figure 3). This line is much more obvious in the two-dimensional, sky-subtracted spectrum than in this one-dimensional spectrum (see Figure 4). There are only two plausible interpretations of this emission line as all other choices would require the presence of other, stronger emission lines. Firstly, the line could be \[O II\] 3273Å at $`z_s=0.243`$. We would then expect to see comparably strong \[O III\] 5007Å, 4959Å at 6164Å, 6224Å or H$`\beta `$ 4861Å at 6042Å. None of these lines are seen in the data, although the spectrum is much weaker at these wavelengths. This identification would also imply that the emission is not coming from the background source, but rather from some unrelated foreground object. Because of the lack of other emission lines and the exact coincidence with the position of the background source, we believe the only reasonable explanation for this line is Ly$`\alpha `$ 1215.7Å at $`z_s=2.811`$. The appearance of this spectrum is similar to other known star-forming galaxies at comparable redshifts with absorption features which include e.g. Si II and C IV (Steidel et al. 1996a,b). In addition, there is a continuum break blueward of this line with a drop amplitude (Oke & Korycansky 1982) of $`D_A=0.25\pm 0.05`$. This decrement is due to absorption by intervening hydrogen and is consistent with that found in the spectra of other high-redshift objects (e.g. Oke & Korycansky 1982; Kennefick et al. 1995). Because of the low signal-to-noise in this spectrum, we still regard this redshift measurement as tentative. We are planning to re-observe this object during the next observing season. In addition to the lens system, we have also obtained spectra of two galaxies which happened to lie on the long slit during the observations of the gravitational lens system. They are identified as galaxies #158 and #165 in the cluster field CL 3C295 (see Table 6 of Dressler & Gunn 1992). Dressler & Gunn (1992) list their total $`r`$ magnitudes as 20.13 and 22.56, respectively. The redshift of each galaxy was previously unknown. Based on our spectra, we find a redshift of $`z=0.451`$ for both galaxies (Figure 5), indicating that the galaxies are cluster members. Each spectrum shows the classic K star absorption features of Ca II H & K which are typical of an early-type galaxy. In addition, they show a series of strong Balmer absorption lines, including H$`\theta `$, H$`\eta `$, H$`\zeta `$, H$`\delta `$, H$`\gamma `$, and H$`\beta `$, which suggest that these galaxies are “K+A” (or more commonly known as “E+A”) galaxies (Dressler & Gunn 1983; Gunn & Dressler 1992; Zabludoff et al. 1997). These spectral features imply that these galaxies have experienced a brief starburst within the last 1–2 Gyrs. ### 4.3 CLASS B2319+051 We have obtained spectra of the two lensing galaxies, G1 and G2, in B2319+051. No optical emission associated with the background radio source has been detected; thus, the source redshift is still unknown. The redshifts of the two lensing galaxies are $`(z_\mathrm{}_1,z_\mathrm{}_2)=(0.624,0.588)`$. As the redshifts indicate, G2 is not a companion galaxy to the primary lensing galaxy G1. Rather, they are just a chance superposition along the line-of-sight. The spectrum of G1 is consistent with its morphological identification as an early-type galaxy in the high-angular-resolution NICMOS image (Marlow et al. 1999). It has a strong 4000Å break and small equivalent width Balmer absorption lines. It does, however, show some indication of current star formation with a modest \[O II\] line (equivalent width of 9Å). Galaxy G2 is clearly more active as it has much stronger \[O II\] emission (equivalent width of 22Å) and a less well-defined 4000Å break. In addition, the spectrum shows a series of strong Balmer absorption features which indicates a burst of star formation within the last 1–2 Gyrs (see e.g. §4.2). Such activity is expected as the galaxy appears morphologically irregular with two distinct peaks in the surface brightness profile. This appearance suggests a merger or interaction. The composite $`R`$ band image of a $`1^{}\times 1^{}`$ field centered on B2319+051 is shown in Figure 7. Using the object detection and analysis software SExtractor (Bertin & Arnouts 1996), we have obtained the magnitude $`R=22.2\pm 0.3`$ for the primary lensing galaxy G1 within an aperture the size of the Einstein ring radius (0$`\stackrel{}{\mathrm{.}}`$68). In addition, the total $`R`$ magnitudes of G1 and G2 are $`21.3\pm 0.3`$ and $`22.0\pm 0.3`$, respectively. The errors are large because these data were taken in non-photometric conditions with light to moderate cirrus. The total $`RF160W`$ color of G1 is consistent with a non-evolving elliptical at a redshift of $`z=0.624`$ (Poggianti 1997). ## 5 The Mass and Light Once the source and lens redshifts of a gravitational lens system are known, the system can be used, in principle, for two distinct purposes. Firstly, it is possible to measure $`H_0`$ by combining the angular diameter distances and a model of the lensing potential to predict the time delays (see e.g. Refsdal 1964; Blandford & Narayan 1992; Blandford & Kundić 1996). The predicted time delay is proportional to the ratio of angular diameter distances, $`D\frac{D_{\mathrm{}}D_s}{D_\mathrm{}s}`$ (where $`D_{\mathrm{}}`$, $`D_s`$, and $`D_\mathrm{}s`$ are the angular diameter distances to the lens, to the source, and between the lens and source, respectively). As such, the predicted time delay is also inversely proportional to $`h`$. Thus, if the background source is variable, and the time delays can be measured, the ratio between the observed and predicted time delays will provide a measure of $`h`$. Unfortunately, a time delay measurement requires long-term radio or optical monitoring and a detection of a relatively strong event (see e.g. Kundić et al. 1997a; Schechter et al. 1997; Lovell et al. 1998; Biggs et al. 1999; Fassnacht et al. 1999). Consequently, these measurements are difficult to make. More immediately, gravitational lens systems with measured redshifts can be used to study the properties of massive galaxies at moderate redshift. Specifically, the size of the image splitting provides a direct estimate of the mass within the Einstein ring of the lens. This mass can be expressed as : $$M_E1\times 10^{12}\left(\frac{D}{1\mathrm{Gpc}}\right)\left(\frac{\mathrm{\Theta }_E}{3^{^{\prime \prime }}}\right)^2M_{}$$ (1) where $`\mathrm{\Theta }_E`$ is the angular radius of the Einstein ring. For the lenses presented in this paper, we find physical Einstein ring radii of $`2.64.3h^1`$ kpc and masses of $`12\times 10^{11}h^1M_{}`$ (see Table 3). The mass of the galaxy, combined with its photometric properties, can be used to compute the mass-to-light of the lens. For this calculation, we need to measure the galaxy light within the same aperture as the mass. For both SBS 0909+532 and HST 1411+5211, all of the necessary parameters for the mass-to-light ($`M/L`$) calculation have been measured. For the remaining system B2319+051, we can only provide a reasonable estimate. In the calculations presented below, all of the galaxy magnitudes are given in a Vega-based (“Johnson”) magnitude system. In addition, we have converted all observed magnitudes to the rest-frame $`B`$ band using no-evolution $`k`$ corrections and rest-frame colors calculated from the spectral energy distribution of a typical elliptical galaxy (Coleman, Wu & Weedman 1980). We have ignored the effects of extinction and evolution. While the total extinction is usually modest in early-type lenses \[$`E(BV)0.08`$ mag; Falco et al. 1999\], the evolutionary correction is, as expected, an increasing function of redshift, approaching 1 mag at redshifts of $`z0.9`$ (Kochanek et al. 1999). ### 5.1 SBS 0909+532 The properties of the lensing galaxy in SBS 0909+532 have been measured by Lehár et al. (1999). They give a total magnitude of $`H=16.75\pm 0.74`$, a color of $`IH=2.28\pm 1.01`$ within a 1$`\stackrel{}{\mathrm{.}}`$7 diameter aperture, and an effective radius of $`r_e=1\stackrel{}{\mathrm{.}}58\pm 0\stackrel{}{\mathrm{.}}90`$. The errors on these parameters are extremely large because the subtraction of the close quasar pair leaves significant residuals in the final image (see Figure 1 of Lehár et al. 1999). However, we can try to use these values to estimate the light within the Einstein ring radius of 0$`\stackrel{}{\mathrm{.}}`$55. Adopting a de Vaucouleurs law for the galaxy surface brightness profile, we calculate that the magnitude within the Einstein ring radius would be $`H=18.3_{1.0}^{+0.9}`$. If we assume that the galaxy color is constant with radius, the $`I`$ magnitude corresponds to $`20.6_{1.4}^{+1.3}`$. Converting this value to an absolute $`B`$ magnitude, we find $`M_B=20.9_{1.5}^{+1.4}+5\mathrm{log}h`$ and $`(M/L)_B=4_3^{+11}h(M/L)_{}`$. Although this measurement does not place any strong constraints on the $`M/L`$ of this lensing galaxy, it is consistent with the mass-to-light ratios of other early-type lenses at $`z0.8`$. From the review of Keeton et al. (1998), we would expect $`(M/L)_B816h(M/L)_{}`$. We note that the mass-to-light ratios of high-redshift lensing galaxies are higher (by a factor of $`1.52`$) than the $`M/L`$ ratios of nearby elliptical galaxies within the same physical radius (e.g. Lauer 1985; van der Marel 1991); however, searches for gravitational lenses are biased toward high mass systems since these systems have a larger cross-section for lensing. ### 5.2 HST 1411+5211 For HST 1411+5211, we have obtained the photometry of the lensing galaxy from the processed WFPC2 image of the cluster CL 3C295 which is given in Smail et al. (1997). We adopt a zero point in the F702W bandpass of $`22.38\pm 0.02`$ mag DN<sup>-1</sup> s<sup>-1</sup> (Holtzman et al. 1995) and measure an aperture magnitude of F702W $`=21.23\pm 0.03`$ within the Einstein ring radius of 1$`\stackrel{}{\mathrm{.}}`$14. Converting this value to an absolute $`B`$ magnitude, we find $`M_B=18.72\pm 0.03+5\mathrm{log}h`$ and $`(M/L)_B=41.3\pm 1.2h(M/L)_{}`$. This mass-to-light ratio is considerably higher (by a factor of $`5`$) than the average lensing galaxy at $`z0.4`$ (Keeton et al. 1998). The inflated value is the result of cluster–assisted galaxy lensing induced by the cluster CL 3C295; this cluster is extremely massive with a velocity dispersion of $`\sigma =1670\mathrm{km}\mathrm{s}^1`$ (Dressler & Gunn 1992). Such an effect is also seen in the gravitational lens system Q0957+561 where the contribution of the $`\sigma =730\mathrm{km}\mathrm{s}^1`$ cluster (Angonin-Williame, Soucail & Vanderriest 1994; Fischer et al. 1997) results in an unusually high value of $`(M/L)_B22h`$ for the central lensing galaxy G1 (Keeton et al. 1998). ### 5.3 CLASS B2319+051 For B2319+051, we have calculated an aperture magnitude of $`R=22.2\pm 0.3`$ for the lensing galaxy G1 (see §4.3). This magnitude corresponds to $`M_B=20.4\pm 0.3+5\mathrm{log}h`$ or a luminosity of $`L_B=2.3\pm 0.6\times 10^{10}h^2L_{}`$. Because the redshift of the background source in this system is not known, we cannot calculate the mass-to-light ratio of the lensing galaxy. However, using the measured luminosity and equation (1), we can represent the $`M/L`$ ratio of G1 as a function of $`\frac{D_s}{D_\mathrm{}s}`$. That is, $$(M/L)_B2.00\left(\frac{D_s}{D_\mathrm{}s}\right)h(M/L)_{}$$ (2) For reasonable values of the source redshift i.e. $`z_s=13`$, we estimate that $`(M/L)_B`$ will be between $`73h(M/L)_{}`$. In our chosen cosmology, all other lensing galaxies which have been morphologically classified as early-type have blue mass-to-light ratios which are greater than $`5h`$ (Keeton et al. 1998). In order for the early-type lensing galaxy in B2319+051 to be consistent with the measurements from other lenses, we predict that the source redshift $`z_s`$ will be less than 1.5. ## 6 Conclusion As part of a continuing observational program to study gravitational lens systems, we have measured previously unidentified redshifts in three lens systems, SBS 0909+532, HST 1411+5211, and CLASS B2319+051. The spectral characteristics of the central lensing galaxy in all three systems suggest that each is an early-type galaxy. High-angular-resolution HST images confirm that these lenses appear as morphologically normal early-type galaxies (Fischer et al. 1998; Marlow et al. 1999; Lehár et al. 1999). The observations suggest, as previously noted, that the majority of lensing galaxies are early-types (see Keeton et al. 1998 and references therein). For the lensing galaxy in HST 1411+5211, we measure a blue mass-to-light ratio which is a factor of $`5`$ larger than the average lensing galaxy at a similar redshift. The presence of the massive cluster CL 3C295 is responsible for this significantly enhanced ratio. For the other two systems, we are only able to constrain the mass-to-light ratios. The large observational uncertainties on the luminosity of the lensing galaxy in SBS 0909+532 allow a wide range in mass-to-light ratio; however, our measurement is consistent with the observed values in other high-redshift gravitational lenses. Similarly for the primary lensing galaxy in B2319+051, we predict a mass-to-light ratio which is typical of previous lens measurements. Our imaging indicates that both lenses have a few companion galaxies within $`200h^1`$ kpc which have magnitudes and/or colors typical of an early-type galaxy at the lens redshift. Consequently, the primary lensing galaxy may be associated with a group of galaxies as previously observed in the lens systems MG 0751+2716, PG 1115+080, and B1422+231 (Kundić et al. 1997b,c; Tonry 1998; Tonry & Kochanek 1999). We are currently pursuing the group hypothesis for both SBS 0909+532 and B2319+051. Finally, the expected time delays in all three lens systems are approximately $`100h^1`$ days or less (Oscoz et al. 1997; Fischer et al. 1998; Marlow et al. 1999), and at least one source (B2319+051) shows evidence of variability (Marlow et al. 1999). Therefore, some of these systems may be suitable for measuring $`H_0`$. We would like to thank the referee Emilio Falco for very useful comments on the text. We also thank Mark Metzger, Gordon Squires, and Chuck Steidel for helpful discussions and essential material aids to this paper. The W.M. Keck Observatory is operated as a scientific partnership between the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. It was made possible by generous financial support of the W. M. Keck Foundation. The National Radio Astronomy Observatory is operated by Associated Universities, Inc., under cooperative agreement with the National Science Foundation. MERLIN is operated as a National Facility by NRAL, University of Manchester, on behalf of the UK Particle Physics and Astronomy Research Council. Support for LML was provided by NASA through Hubble Fellowship grant HF-01095.01-97A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA under contract NAS 5-26555. This work was partially supported by the NSF under grant #AST 9420018.
no-problem/9910/quant-ph9910067.html
ar5iv
text
# On the Theory of Quantum Secret Sharing ## I Introduction In a classical secret sharing scheme, some sensitive classical data is distributed among a number of people such that certain sufficiently large sets of people can access the data, but smaller sets can gain no information about the shared secret. For instance, a possible application is to share the key for a joint checking account shared by many people. No individual is able to withdraw money, but sufficiently large groups can use the account. One particularly symmetric variety of secret sharing scheme is called a threshold scheme. A $`(k,n)`$ classical threshold scheme has $`n`$ shares, of which any $`k`$ are sufficient to reconstruct the secret, while any set of $`k1`$ or fewer shares has no information about the secret. Blakely and Shamir showed that threshold schemes exist for all values of $`k`$ and $`n`$ with $`nk`$. It is also possible to consider more general secret sharing schemes which have an asymmetry between the power of the different shares. For instance, one might consider a scheme with four shares $`A`$, $`B`$, $`C`$, and $`D`$. Any set containing $`A`$, $`B`$, and $`C`$ or $`A`$ and $`D`$ can reconstruct the secret, but any other set of shares has no information. In this example, the presence of $`A`$ is essential to reconstructing the secret, but not sufficient — $`A`$ needs the help of either $`D`$ or both $`B`$ and $`C`$. This particular scheme can be constructed by taking a $`(5,7)`$ threshold scheme, and assigning 3 shares to $`A`$, 2 to $`D`$, and 1 to each of $`B`$ and $`C`$, but other schemes exist which cannot be constructed by bundling together shares of a threshold scheme. The list of which sets are able to reconstruct the secret is called an access structure for the secret sharing scheme. It turns out that a secret sharing scheme exists for any access structure, provided it is monotone — i.e., that if a set $`S`$ can reconstruct the secret, so can all sets containing $`S`$. With the advent of quantum computation, it is possible that quantum information may someday be as commonplace as classical information, and we may wish to protect it the same ways as we protect classical information. Using quantum secret sharing , we could perhaps create joint checking accounts containing quantum money , or share hard-to-create ancilla states , or perform a secure distributed quantum computation. showed some basic results about quantum secret sharing schemes, including the existence of quantum threshold schemes. A quantum $`((k,n))`$ threshold scheme (the use of double parentheses distinguishes it from a classical scheme) exists provided the no-cloning theorem is satisfied — i.e., $`n/2<kn`$. In this paper, I will prove some further results about quantum secret sharing schemes with general access structures, including the fact that the no-cloning theorem and monotonicity provide the only restriction on the existence of quantum secret sharing schemes. Another possible application of quantum states to secret sharing is to create secret sharing schemes sharing classical data using quantum states . This could allow, for instance, for more secure distribution of the shares of the scheme. I will show below that it can also produce more efficient schemes: in any purely classical scheme, the size of each important share must be at least as large as the size of the secret, whereas using quantum states to share a classical secret, we can sometimes make each share half the size of the secret. In the theory of classical secret sharing, one sometimes considers schemes which do not completely hide the secret from unauthorized groups of people, or from which the secret cannot be perfectly reconstructed even by authorized sets. I will not consider the quantum generalizations of such schemes. I will only consider the theory of perfect secret sharing schemes, in which the data is either completely revealed or completely hidden, with no middle ground. ## II Quantum Secret Sharing I will begin by reviewing some results from which will form the basis of much of the later discussion. In a perfect quantum secret sharing scheme, any set of shares is either an authorized set, in which case someone holding all of those shares can exactly reconstruct the original secret, or an unauthorized set, in which case someone holding just those shares can acquire no information at all about the secret quantum state (that is, the density matrix of an unauthorized set is the same for all encoded states). For a generic state split up into a number of shares, most sets will be neither authorized nor unauthorized — quantum secret sharing schemes form a special set of states. One constraint on quantum secret sharing schemes is an obvious one inherited from classical schemes. Any secret sharing scheme must be monotonic. That is, if we increase the size of a set, it cannot switch from authorized to unauthorized (the indicator function which is 0 for unauthorized sets and 1 for authorized sets is monotonic). As we shall see in section III, the only other constraint on quantum secret sharing schemes is the no-cloning theorem . We cannot make two copies of an unknown quantum state. Therefore, we cannot distribute the shares of quantum secret sharing scheme into two disjoint authorized sets (each of which could produce a copy of the original state). Since every set is either authorized or unauthorized, this implies the complement of an authorized set is always an unauthorized set. A pure state quantum secret sharing scheme encodes pure state secrets as pure states (when all of the shares are available). A mixed state quantum secret sharing scheme may encode some or all pure states of the secret as mixed states. Pure state schemes have some special properties, as a consequence of the following theorem, but the general quantum secret sharing scheme is a mixed state scheme. Theorem 1 and Corollary 2 first appeared in . ###### Theorem 1 Let $`𝒞`$ be a subspace of a Hilbert space $``$ which can be written as tensor product of the Hilbert spaces of various coordinates. Then $`𝒞`$ corrects erasure errors<sup>*</sup><sup>*</sup>*An erasure error is a general error on a known coordinate. For instance, it replaces the coordinate with a state $`|e`$ orthogonal to the regular Hilbert space. Recall that a quantum error-correcting code of distance $`d`$ can correct $`d1`$ erasure errors or $`(d1)/2`$ general errors. on a set $`K`$ of coordinates iff $$\varphi |E|\varphi =c(E)$$ (1) (independent of $`|\varphi 𝒞`$) for all operators $`E`$ acting on $`K`$. A pure state encoding of a quantum secret is a quantum secret sharing scheme iff the encoded space corrects erasure errors on unauthorized sets and it corrects erasure errors on the complements of authorized sets. Proof: The first equivalence follows from the theory of quantum error-correcting codes. To recover the original secret on an authorized set, we must be able to compensate for the absence of the remaining shares, which is to say from an erasure error on the complement of the authorized set. The condition (1) implies that measuring any Hermitian operator on the coordinates $`K`$ gives us no information about which state in $`𝒞`$ we have. This means the density matrix on $`K`$ does not depend on the state, which is precisely the condition we need an unauthorized set to satisfy. $`\mathrm{}`$ As a corollary, we find that pure state schemes are only possible for a highly restricted class of access structures. ###### Corollary 2 In a pure state quantum secret sharing scheme, the authorized sets are precisely the complements of the unauthorized sets. Proof: By the no-cloning theorem, the complement of an authorized set is always an unauthorized set. By theorem 1, for a pure state scheme, we can correct erasure errors on any unauthorized set. This means we can reconstruct the secret in the absence of those shares; that is, the complement is an authorized set. $`\mathrm{}`$ Suppose we start with an arbitrary quantum access structure (a set of authorized sets) and add new authorized sets, filling out the result to be monotonic. For instance, if we started with the access structure $`ABC`$ or $`AD`$ from the introduction (any set containing $`A`$, $`B`$, and $`C`$ is authorized, as is any set containing both $`A`$ and $`D`$), we could add the set $`BD`$ (so any set containing $`B`$ and $`D`$ is also now authorized). We wish to continue to satisfy the no-cloning theorem as well, so we never add a new authorized set contained in the complement of an existing authorized set. This ensures that the complement of every authorized set remains an unauthorized set. For instance, in the example, we could not have added $`BC`$ as an authorized set, since its complement $`AD`$ is already authorized. Initially, there may be unauthorized sets whose complements are also unauthorized, but if we continue adding authorized sets, we will eventually reach a point where the authorized and unauthorized sets are always complements of each other, as is required for a pure state scheme. In the example, we could add $`CD`$ as an authorized set. Now, the authorized sets are all sets containing $`ABC`$, $`AD`$, $`BD`$, or $`CD`$. At this point, we will have to stop adding authorized sets — any more would violate the no-cloning theorem. Thus, an access structure where the authorized and unauthorized sets are complements of each other is a maximal quantum access structure. Pure state schemes and maximal access structures may seem like a very special situation, but in fact they play a central role in the theory of quantum secret sharing because of the following theorem: ###### Theorem 3 Every mixed state quantum secret sharing scheme can be described as a pure state quantum secret sharing scheme with one share discarded. The access structure of the pure state scheme is unique. Proof: Given a superoperator that maps the Hilbert space $`𝒮`$ of the secret to density operators on $``$ (which is a tensor product of the Hilbert spaces of the various shares), we can extend the superoperator to a unitary map from $`𝒮`$ to $``$ for some space $``$. We assign this additional Hilbert space to the extra share. In other words, we can “purify” the mixed state encoding by adding an extra share. The original mixed state scheme is produced by discarding the extra share. I claim that the new pure state encoding is a quantum secret sharing scheme. Sets on the original shares remain authorized or unauthorized, as they were before adding $``$. Given a set $`T`$ including the extra share, look at the complement of $`T`$, which is a set not including $``$ and is thus either authorized or unauthorized (in the new scheme as well as the old). For instance, if we purify the scheme ($`ABC`$ or $`AD`$) by adding a fifth share $`E`$, the complement of $`CDE`$ is unauthorized, while the complement of $`DE`$ is authorized. If the complement is authorized, then we can correct for erasures on $`T`$, and condition (1) holds for $`T`$ — we can get no information about the secret from $`T`$, and $`T`$ is unauthorized. If the complement of $`T`$ is unauthorized, we can correct erasures on the complement. Therefore, we can reconstruct the state with just $`T`$, and $`T`$ is authorized. Thus, the new scheme is secret sharing. It is clear from the argument that any other purification of the mixed state scheme would produce the same access structure. $`\mathrm{}`$ In , we presented a class of quantum secret sharing schemes where every share had the same size as the secret. One might wonder if it is possible to do better. For instance, can we make one share much smaller than the secret, possibly at the cost of enlarging another share? The answer is no, provided we only consider important shares (unimportant shares never make a difference as to whether a set is authorized or unauthorized). ###### Theorem 4 The dimension of each important share of a quantum secret sharing scheme must be at least as large as the dimension of the secret. Proof: We need only prove the result for pure state schemes. By theorem 3, the result for mixed state schemes will follow. Let $`S`$ be an important share in a pure state quantum secret sharing scheme. Then there is an unauthorized set $`T`$ such that $`T\{S\}`$ is authorized. Share the state $`|0`$ and give the shares of $`T`$ to Bob and the remaining shares (including $`S`$) to Alice. By corollary 2, Alice’s shares form an authorized set; she can correct for erasures on $`T`$. By theorem 6 below, this means Alice can perform any operation she likes on the secret without disturbing Bob’s shares. She can equally well perform quantum interactions between the secret and other quantum states held by her. In particular, if Alice has state $`|\psi `$ from a Hilbert space of dimension $`s`$ (the size of the secret), she can coherently swap it into her shares of the secret sharing scheme, which now encodes the state $`|\psi `$. Then Alice sends just the share $`S`$ to Bob. Bob now has an authorized set, so he can reconstruct $`|\psi `$. Therefore, by theorem 5 below, share $`S`$ must have had dimension at least $`s`$ as well. $`\mathrm{}`$ The above proof depends on two theorems of interest outside the theory of quantum secret sharing. The first is obvious, and it is also true; it has not, to my knowledge, appeared before in the literature. ###### Theorem 5 Even in the presence of preexisting entanglement, sending an arbitrary state from a Hilbert space of dimension $`s`$ requires a channel of dimension $`s`$. Proof: This proof is due to Michael Nielsen . Assume that in addition to whatever entanglement is given, Alice and Bob share a cat state $`|i_A|i_B`$ of dimension $`s`$. Using a straightforward variant of superdense coding , Alice can encode one of $`s^2`$ classical states in this cat state. Now Alice transmits her half of the cat state to Bob, using the preexisting entanglement if it helps. Bob can now reconstruct the classical state, so by the bounds on superdense coding , Alice must have used a channel of dimension $`s`$. $`\mathrm{}`$ The second theorem is more interesting. It says that if Alice can read a piece of quantum data, she can also change it any way she likes, without disturbing any entanglement of the encoding with the outside. There will be no way to tell that the data has been changed. ###### Theorem 6 Suppose a superoperator $`𝒮`$ maps a Hilbert space $`H`$ to density operators on $`AB`$, and $`𝒮`$ restricted to $`A`$ (that is, traced over $`B`$) is invertible (by quantum operation). Then for any unitary $`U:HH`$, there exists a unitary operation $`V:AA`$ such that $`V𝒮=𝒮U`$. Proof: We can extend the superoperator $`𝒮`$ to a unitary operator $`W`$ and enlarge $`B`$ with the necessary extra dimensions. If $`V`$ works for $`W`$, it will also work for $`𝒮`$. Since $`W`$ is invertible on $`A`$, the image subspace corrects erasure errors on $`B`$, and $$\psi |E|\psi =c(E)$$ (2) for any operator $`E`$ acting on $`B`$, where $`c(E)`$ is independent of $`|\psi W(H)`$. Choose a basis $`|j_B`$ for $`B`$. Given any state $`|\psi `$ in the image of $`W`$, we can write it as $$|\psi =|\psi _j_A|j_B.$$ (3) (The states $`|\psi _j`$ are not necessarily orthogonal, although we could have made them orthogonal for any single $`|\psi `$.) If we let $`E`$ be a projection on the basis states of $`B`$, or a projection on the basis states followed by a permutation of those basis states, (2) implies that the inner products $`\psi _i|\psi _j`$ are independent of $`|\psi `$. Therefore, there is a unitary operation $`V`$ acting on $`A`$ that takes any set of states $`|\psi _j_A`$ for $`|\psi W(H)`$ to the set of states $`|\varphi _j_A`$ for any state $`|\varphi W(H)`$. In fact, $`V`$ will map $`|\psi `$ to $`|\varphi `$. More generally, and by the same logic, given any two bases of $`W(H)`$, there will be a unitary $`V`$ on $`A`$ that takes one to the other. Given $`U:HH`$, we can define $`U`$ as mapping a basis $`|v_i`$ to basis $`|w_i`$. Then define $`V:AA`$ as an operator that maps $`W|v_i`$ to $`W|w_i`$, and the theorem follows. $`\mathrm{}`$ I conclude this section with an easy theorem that will be needed in the construction of a general access structure. ###### Theorem 7 If $`S_1`$ and $`S_2`$ are quantum secret sharing schemes, then the scheme formed by concatenating them (expanding each share of $`S_1`$ as the secret of $`S_2`$) is also secret sharing. The reason this requires proof is that, due to some nonlocal quantum effect, it might have been possible to get more information from sets in two copies of $`S_2`$ than can be accessed from just one of the sets. Proof: By theorem 3, we need only consider pure state schemes. Then the concatenated scheme $`S`$ is a pure state scheme too. Suppose we have some set of shares $`T`$. We can write it as the union $`T_i`$, where $`T_i`$ is a set on the $`i`$th copy of $`S_2`$. Consider the set $`U`$ of copies on which $`T_i`$ is authorized. $`U`$ is either an authorized or an unauthorized set of $`S_1`$. If it is authorized, then our big set $`T`$ is certainly authorized — we reconstruct the copies of $`S_2`$ in $`U`$, and use $`U`$ to reconstruct the original secret. If $`U`$ is unauthorized, we look at the complement of $`T`$. It can be written as a union $`T_i^{}`$, where $`T_i^{}`$ is the complement of $`T_i`$ in its copy of $`S_2`$. $`T_i^{}`$ is authorized whenever $`T_i`$ is unauthorized. Therefore, the set of copies on which $`T_i^{}`$ is authorized is the complement of $`U`$, which is authorized. Thus, the complement of $`T`$ is authorized, so $`T`$ is unauthorized. $`\mathrm{}`$ Clearly the proof works equally well for more complicated concatenation schemes, with multiple levels or with a different scheme $`S_2`$ for each share of $`S_1`$. Also note that if we bundle shares together (assigning two or more shares to the same person), the result is still a secret sharing scheme. ## III Construction of a General Access Structure This section will be devoted to proving that monotonicity and the no-cloning theorem provide the only restrictions on the existence of quantum secret sharing schemes. The same result has been shown by Adam Smith by adapting a classical construction. The construction given here is undoubtedly far from optimal in terms of the share sizes of the resulting schemes. ###### Theorem 8 A quantum secret sharing scheme exists for an access structure $`S`$ iff $`S`$ is monotone and satisfies the no-cloning theorem (i.e., the complement of an authorized set is an unauthorized set). For any maximal quantum access structure $`S`$, a pure state scheme exists. It will be helpful to first understand an analogous classical construction . Any access structure can be written in a disjunctive normal form, which is the OR of a list of authorized sets. For our standard example, with authorized sets $`ABC`$ and $`AD`$, the normal form is ($`A`$ AND $`B`$ AND $`C`$) OR ($`A`$ AND $`D`$). This normal form provides a construction in terms of threshold schemes — the AND gate corresponds to a $`(2,2)`$ threshold scheme (which has one authorized set $`A`$ AND $`B`$), while the OR gate corresponds to a $`(1,2)`$ threshold scheme (for which $`A`$ OR $`B`$ is authorized). Then by concatenating the appropriate set of threshold schemes, we get a construction for the original access structure. In the quantum case, this construction fails, because by the no-cloning theorem, there is no $`((1,2))`$ quantum threshold scheme. A single authorized set (such as $`A`$ AND $`B`$ AND $`C`$) still corresponds to a quantum threshold scheme (a $`((3,3))`$ scheme in this case), but to take the OR of these authorized sets, we will have to do something different. We will replace the $`((1,2))`$ scheme with $`((r,2r1))`$ schemes (which correspond to majority functions instead of OR). $`r`$ of the shares will be the individual authorized sets of the desired access structure, and the other $`r1`$ shares will be from another access structure that is easier to construct. The full construction is recursive. Given constructions of access structures for $`n1`$ shares, we will construct all maximal access structures for $`n`$ shares. From maximal access structures on $`n`$ shares we will be able to construct all access structures on $`n`$ shares. We can start from the base case of 1 share, which just has the trivial $`((1,1))`$ access structure. The construction will assume we know how to create threshold schemes, for instance using the construction in . Given any maximal access structure $`S`$ on $`n`$ shares, consider the access structure $`S^{}`$ obtained by discarding one share. Certainly $`S^{}`$ is still monotonic and still satisfies the no-cloning theorem. Therefore, by the inductive hypothesis, we have a construction for the access structure $`S^{}`$. Now, following the proof of theorem 3, add an additional share to $`S^{}`$ putting it in an overall pure state. By the proof of theorem 3, we know the resulting scheme is in fact a quantum secret sharing scheme. It is not hard to see that $`S`$ is the unique access structure produced this way. For instance, the maximal access structure $`ABC`$ OR $`AD`$ OR $`BD`$ OR $`CD`$ can be formed by purifying the (mixed state) scheme with access structure $`ABC`$ (just a $`((3,3))`$ threshold scheme). Now suppose we are given a general quantum access structure $`S`$ on $`n`$ shares. We describe this access structure by a list of its minimal authorized sets $`A_1,A_2,\mathrm{},A_r`$. As mentioned above, $`A_i`$ by itself defines a quantum access structure — a $`((k,k))`$ threshold scheme, in fact, if $`A_i`$ contains $`k`$ shares. $`S`$ has a total of $`r`$ minimal authorized sets. Let us take a $`((r,2r1))`$ quantum threshold scheme, and expand each of its shares using another secret sharing scheme. Share $`i`$, for $`i=1,\mathrm{},r`$, is expanded using the threshold scheme associated with the set $`A_i`$. Shares $`r+1`$ through $`2r1`$ will all be expanded using another secret sharing scheme $`S^{}`$. $`S^{}`$ will be a pure state scheme, with a maximal access structure which can be achieved by adding authorized sets to $`S`$. That means when $`A`$ is an authorized set of $`S`$ (so it contains some $`A_i`$), it is also an authorized set of $`S^{}`$. Therefore, we can reconstruct the last $`r1`$ shares of the $`((r,2r1))`$ scheme, as well as at least one of the first $`r`$ shares, so $`A`$ is an authorized set for the concatenated scheme. Conversely, if we have a set $`B`$ which does not include any of the sets $`A_i`$, we do not have an authorized set for any of the schemes $`A_i`$. $`B`$ might be an authorized set for the scheme $`S^{}`$, but that only gives us authorized sets for at most $`r1`$ shares of the $`((r,2r1))`$ scheme. Therefore, $`B`$ is an unauthorized set. This shows that the access structure of the concatenated scheme is exactly $`S`$, completing the construction. As an example, consider this construction applied to the access structure $`ABC`$ OR $`AD`$. The three rows represent shares of a $`((2,3))`$ scheme, so authorized sets on any two rows suffice to reconstruct the secret. Repeated letters imply bundling, so $`A`$ gets a share from each of the first two rows, as well as one from the third row. $$((2,3))\mathrm{scheme}\{\begin{array}{c}((3,3)):A,B,C\hfill \\ ((2,2)):A,D\hfill \\ S^{}\hfill \end{array}$$ (4) The first two rows are threshold schemes. $`S^{}`$ is a maximal access structure containing $`\{A,B,C\}`$ and $`\{A,D\}`$. For instance, in this case, $`S^{}`$ could be the scheme $`ABC`$ OR $`AD`$ OR $`BD`$ OR $`CD`$ which we constructed earlier; or we could just use the trivial scheme with authorized set $`\{A\}`$ (give $`A`$ the secret). I noted in the introduction that this particular scheme can be easily constructed directly from a $`((5,7))`$ threshold scheme. However, not all access structures can be made by bundling together shares of a threshold scheme (for instance, $`ABCD`$ OR $`ADE`$ OR $`BCD`$ cannot be so constructedFor quantum access structures, threshold schemes suffice for fewer than five shares, whereas for classical access structures, there are examples where they fail for four shares. This is because the four-share classical examples would violate the no-cloning theorem.$`E`$ would have to get more shares of the threshold scheme than $`B`$ since $`ADE`$ is authorized while $`ABD`$ is not, but $`BCD`$ is authorized while $`CDE`$ is not), while the recursive construction always works. ## IV Sharing Classical Secrets We can also use quantum states to share classical secrets, a process previously considered in and . Many of the theorems proved above will fail in this situation. For instance, superdense coding provides an example of a $`(2,2)`$ threshold scheme where each share is a single qubit, but the secret is two classical bits: the four Bell states $`|00\pm |11`$, $`|01\pm |10`$ encode the four possible 2-bit numbers, and for all four states, each qubit is completely random. This $`(2,2)`$ scheme is a pure state scheme, yet does not satisfy corollary 2, and the share size is smaller than the size of the secret. Neither is possible for a purely classical scheme or for a purely quantum scheme. Another difference is that there is no rule against copying classical data, so, for instance, $`(k,n)`$ threshold schemes are allowed, even with $`k<n/2`$. We can write down conditions for a pure state scheme of this sort to be secret sharing, along the lines of theorem 1. ###### Theorem 9 Suppose we have a set of orthonormal states $`|\psi _i`$ encoding a classical secret. Then a set $`T`$ is an unauthorized set iff $$\psi _i|F|\psi _i=c(F)$$ (5) (independent of $`i`$) for all operators $`F`$ on $`T`$. $`T`$ is authorized iff $$\psi _i|E|\psi _j=0(ij)$$ (6) for all operators $`E`$ on the complement of $`T`$. Note that only the basis states $`|\psi _i`$ appear in Theorem 9, whereas in Theorem 1, the condition had to hold for all $`|\psi `$ in a Hilbert space. This is the source of the difference between classical and quantum secrets — the former hides just a set of orthogonal states, while the latter hides all superpositions of those states. Proof: On an unauthorized set, we should be able to acquire no information about which state $`|\psi _i`$ we have. This is precisely condition (5). On an authorized set, we need to be able to correct for the erasure of the qubits on the complement. This is equivalent to being able to distinguish the state $`|\psi _i`$ from the state $`|\psi _j`$ with an arbitrary operator applied to the complement of $`T`$. That is, it is equivalent to condition (6). $`\mathrm{}`$ Note that purely classical secret sharing schemes can be considered as a particular special case of sharing classical data with quantum states — every encoding in a purely classical scheme is just a mixture of tensor products of basis states. Purely classical secret sharing schemes are always mixed state schemes, since classically, there is no way to hide information without randomness. Superdense coding provided an example where using quantum data allowed a factor of 2 improvement in space over any classical scheme. It turns out that this is the best we can do. ###### Theorem 10 The dimension of each important share of a classical secret sharing scheme must be at least as large as the square root of the dimension of the secret. The total size of each authorized set must be at least as large as the secret. This means that a $`2n`$-bit secret requires shares of at least $`n`$ qubits. Proof: The proof is quite similar to the proof of theorem 4, which gives the corresponding result for quantum secret sharing schemes. We create the quantum state corresponding to the shared secret 0. If it is a mixed state scheme, we include any extra qubits needed to purify it (the result may not be a secret sharing scheme, however — theorem 3 need not hold). If $`S`$ is the share under consideration, and $`T`$ is an unauthorized set such that $`T\{S\}`$ is authorized, give $`T`$ to Bob, and all the other shares (including $`S`$ and the extra purifying qubits) to Alice. Bob has no information about the secret; $`\psi _i|E|\psi _i`$ is independent of $`i`$. Therefore, as in the proof of theorem 6, Alice can perform, without access to Bob’s qubits, a transformation between $`|\psi _0`$ (the current state) and $`|\psi _i`$ for any $`i`$. Then she sends the share $`S`$ to Bob, who now has an authorized set, and can reconstruct $`i`$. We have sent a secret of dimension $`s`$ using prior entanglement and the share $`S`$, which by the bounds on superdense coding must therefore have dimension at least $`\sqrt{s}`$. Those bounds also show the size of the channel plus preexisting entanglement must be $`s`$, so the size of the full authorized coalition is at least $`s`$. $`\mathrm{}`$ Note that we used an analogue of theorem 6 in the proof. The general case of theorem 6 is clearly not true here: Since the data is classical, we could make two copies of it. Then one copy is sufficient to read it, but both are needed to change it without leaving a trace. In fact, the version of the theorem we have used is just the proof that perfect quantum bit commitment is impossible — Bob has no information about the state, so Alice can change the state to whatever she likes. Besides being an interesting result about secret sharing schemes, this theorem is useful in analyzing other cryptographic concepts. For instance, it shows that there is no useful unconditionally secure cryptographic memory protocol, which can only be unlocked with a key, which we would want to be much smaller than the stored data. Such a protocol would be a $`(2,2)`$ secret sharing scheme, so the theorem requires that the key be at least half the size of the data. Theorem 10 can be easily modified to show that in any purely classical scheme, each important share must be at least dimension $`s`$, not $`\sqrt{s}`$. This follows because if Alice and Bob are just sending classical states back and forth, they need a channel of dimension $`s`$ to send the secret rather than dimension $`\sqrt{s}`$. We have already seen one example where this improvement is achievable using quantum states. When else can we get this factor of 2 improvement in the number of qubits per share? I do not have a full answer to this question. Certainly for a $`(1,n)`$ threshold scheme, no improvement is possible, since each authorized coalition (each single share) must be as large as the secret. For many other threshold schemes, however, an improvement is possible. ###### Theorem 11 A $`(k,n)`$ threshold scheme exists sharing a classical secret of size $`s=p^2`$ with one qupit (a $`p`$-dimensional quantum state) per share whenever $`n2k2`$, $`pn`$, and $`p`$ is prime. Before giving the proof, I will review some basic facts about quantum and classical error-correcting codes which will be needed in the construction. A classical linear $`[n,k,d]`$ code encodes $`k`$ bits in $`n`$ bits and corrects $`d1`$ erasure errors. Classical codes must satisfy the Singleton bound $`dnk+1`$. A code $`C`$ where the bound is met exactly is called an MDS code (for “maximum distance separable”), and has some interesting properties. The dual $`C^{}`$ of $`C`$ (composed of those words which have vanishing inner product with all words of $`C`$) is also an MDS code. When $`C`$ is an $`[n,k,nk+1]`$ code, $`C^{}`$ is an $`[n,nk,k+1]`$ code. The codewords of the dual code form the rows of the parity check matrix. By measuring the parities specified by the parity check matrix, we can detect errors — any parity which is nonzero signals an error. In addition, in an MDS code, there is a codeword with support exactly on the set $`T`$ for any set $`T`$ of size $`d`$. See, for instance, chapter 11 of for a discussion of MDS codes. Quantum codes can frequently be described in terms of a stabilizer . The stabilizer of a code is an Abelian group consisting of those tensor products of Pauli matrices which fix every quantum codeword. That is, the codewords live in an eigenspace of all elements of the stabilizer. If the stabilizer contains $`2^a`$ elements, it is generated by just $`a`$ elements, and if we have $`n`$ qubits, the code encodes $`na`$ qubits. We usually consider the $`+1`$ eigenspace of the stabilizer generators, but we could instead associate an arbitrary sign to each generator. Tensor products of Pauli matrices have eigenvalues $`\pm 1`$, so each set of signs will specify a different coding subspace of the same size. Stabilizer codes can be easily generalized to work over higher dimensional spaces . We replace the regular Pauli matrices with their analogs for $`p`$-dimensional states $`X:|j|j+1`$, $`Z:|j\omega ^j|j`$, and powers and products of $`X`$ and $`Z`$ (arithmetic is now modulo $`p`$, and $`\omega =\mathrm{exp}(2\pi i/p)`$). The eigenvalues of $`X`$, $`Z`$ and their products and tensor products are powers of $`\omega `$, so instead of associating a sign with each generator of the stabilizer, we should instead associate a power of $`\omega `$. There is a standard construction, known as the CSS construction , which takes two binary classical error-correcting codes and produces a quantum code. This construction generalizes easily to qupits. Take the parity check matrix of the first code $`C_1`$ and replace $`j`$ with $`X^j`$, interpreting the rows as generators of the stabilizer. Take the parity check matrix of the second code $`C_2`$ and replace $`j`$ with $`Z^j`$, again interpreting rows as generators of the stabilizer. The stabilizer must be Abelian — this produces a constraint on the two classical codes, namely that $`C_2^{}C_1`$. If $`C_1`$ is an $`[n,k_1,d_1]`$ code and $`C_2`$ is an $`[n,k_2,d_2]`$ code, the corresponding CSS code will be an $`[[n,k_1+k_2n,\mathrm{min}\{d_1,d_2\}]]`$ quantum code. Now consider the classical polynomial code $`D_r`$ whose coordinates are $`(f(\alpha _1),\mathrm{},f(\alpha _n))`$. $`\alpha _1,\mathrm{},\alpha _n`$ are $`n`$ distinct elements of $`_p`$ (recall that $`pn`$), and $`f`$ runs over polynomials of degree up to $`r`$.For an appropriate choice of the $`\alpha _i`$s, $`D_r`$ is a Reed-Solomon code or an extended Reed-Solomon code. There are $`r+1`$ coefficients to specify $`f`$, so $`D_r`$ encodes $`r+1`$ pits. Given the function evaluated at $`r+1`$ locations, we can use polynomial interpolation to reconstruct the polynomial. In other words, even if $`n(r+1)`$ coordinates of the code are missing, we can reconstruct the $`r+1`$ coefficients specifying the polynomial. Thus, this is an $`[n,r+1,nr]`$ classical code — an MDS code. Also note that $`D_rD_{r+1}`$. The codes $`D_r`$ provide good examples of purely classical secret sharing schemes . If we choose the first $`r`$ coefficients of the polynomial at random, any set of just $`r`$ coordinates will contain no information about the remaining coefficient, so we get an $`(r+1,n)`$ threshold scheme. Applying the CSS construction to the codes $`D_r`$ and $`D_{r1}^{}`$ similarly produces good examples of quantum secret sharing schemes . With this background, we are now ready to tackle the construction. Proof of Theorem 11: We will produce a class of secret sharing schemes which use one qupit for each share and encode two classical pits, whereas any purely classical scheme could only encode one pit. We will use the classical codes $`D_r`$ to create $`p^2`$ related CSS quantum codes with certain useful properties. The secret sharing scheme will encode the $`p^2`$ classical states as the mixture of all states in the corresponding code from this family. Lemma: The parity check matrix for the code $`D_{r1}`$ includes a row $`R`$ such that for any set of $`r+1`$ coordinates, there is a linear combination of rows of $`D_{r1}`$ with support exactly on that set of coordinates. $`R`$ appears in the linear combination with coefficient $`1`$. Similarly, the dual code $`D_s^{}`$ has, in its parity check matrix, a row $`S`$ which appears with coefficient $`1`$ in a linear combination with support on any given set of $`ns`$ coordinates. For instance, we can take $`n=4`$, $`r=2`$, $`s=1`$, $`p=5`$. $`D_1`$ has generator matrix $$G=\left(\begin{array}{cccc}1& 1& 1& 1\\ 0& 1& 2& 3\end{array}\right)$$ (7) (generated by polynomials $`1`$ and $`x`$), and $`D_1^{}`$ has generator matrix $$G^{}=\left(\begin{array}{cccc}2& 4& 1& 3\\ 3& 0& 1& 1\end{array}\right).$$ (8) (The parity check matrix of $`D_1`$ is the generator matrix of $`D_1^{}`$ and vice-versa.) By subtracting $`j`$ times the first row of $`G`$ from the second row of $`G`$, we get a vector with support on the three-element set excluding coordinate $`j`$. Similarly, by adding some multiple of the first row of $`G^{}`$ to the second row of $`G^{}`$, we can get a vector with support on any three coordinates. Proof of Lemma: The codes $`D_r`$ and $`D_s^{}`$ are linear, so we only need prove the coefficients of rows $`R`$ and $`S`$ are nonzero — then some rescaling will always give the result with coefficient 1. Since $`D_r`$ is an MDS code of distance $`nr`$, its dual is an MDS code of distance $`r+2`$. Thus, the parity check matrix of $`D_r`$ (which is also the generator matrix of $`D_r^{}`$) has a linear combination of rows with support on any set of $`r+2`$ coordinates, but no linear combination of rows has weight $`r+1`$ or less. Since $`D_{r1}`$ is included in $`D_r`$, but encodes one fewer pit, the parity check matrix of $`D_{r1}`$ is just the parity check matrix of $`D_r`$ with one row $`R`$ added. That parity check matrix has a linear combination of rows with support on any set of $`r+1`$ coordinates. Since no linear combination of rows of $`D_r^{}`$ has weight $`r+1`$, each of the weight $`r+1`$ linear combinations must include a component of row $`R`$. A similar argument gives the result for $`D_s^{}`$. $`\mathrm{}`$ Now suppose we create the CSS code corresponding to the two classical codes $`D_{r1}`$ and $`D_s^{}`$. We require that $`s=nr1`$, $`2rn`$. Then $`s<r`$, so $`D_sD_{r1}`$, and we have a quantum code. We are given two classical pits $`a`$ and $`b`$ to share among $`n`$ parties. Assign a phase $`\omega ^a`$ to the generator $`R`$ corresponding to row $`R`$ of $`D_{r1}`$ and a phase $`\omega ^b`$ to the generator $`S`$ corresponding to row $`S`$ of $`D_s^{}`$. All the other generators have phase $`+1`$. Create the density matrix formed by a uniform mixture over states in the subspace specified by this stabilizer. There are $`p^2`$ of these mixed states. Claim: The set of mixed states described above define a $`(k,n)`$ threshold scheme encoding 2 classical pits, with $`k=r+1=ns`$. For instance, in the case $`n=4`$, $`r=2`$, $`s=1`$, $`p=5`$, we get the stabilizers $$\begin{array}{ccccc}& X^2& X^4& X& X^3\\ \omega ^a& X^3& I& X& X\\ & Z& Z& Z& Z\\ \omega ^b& I& Z& Z^2& Z^3\end{array}$$ (9) with $`\omega =\mathrm{exp}(2\pi i/5)`$. The claim is that this gives a $`(3,4)`$ secret sharing scheme. I now proceed to establish the claim, which will prove Theorem 11. For any set $`T`$ of $`k`$ coordinates, there will be an element $`MR`$ of the stabilizer with support on that set of coordinates, where $`M`$ contains no factors of $`R`$ or $`S`$. This follows from the lemma: There is a linear combination $`M+R`$ of rows of the parity check matrix of $`D_{r1}`$ with support on $`T`$. This linear combination translates to an element of the stabilizer — the rows of the parity check matrix become generators of the stabilizer, addition of two rows becomes multiplication of the corresponding generators, and scalar multiplication of a row becomes taking the corresponding generator to the appropriate power. Since $`MR`$ has support on $`T`$, we can measure its eigenvalue with access only to $`T`$. $`M`$ is a product of generators which are not $`R`$ or $`S`$, so the state has eigenvalue $`+1`$ for $`M`$, and it has eigenvalue $`\omega ^a`$ for $`MR`$. Thus, the eigenvalue of $`MR`$ tells us $`a`$. Similarly, there is an element $`NS`$ of the stabilizer with support on $`T`$, with $`N`$ having no factors of $`R`$ or $`S`$. We can measure the eigenvalue of $`NS`$, and it tells us $`b`$. Thus, any set of at least $`k`$ coordinates is an authorized set. A particular value of the secret is encoded as a uniform distribution over states in the stabilizer code described above. Thus, the density matrix corresponding to the secret is the projection on the subspace which is left fixed by the stabilizer. That is, $`\rho (ab)`$ $`=`$ $`{\displaystyle \underset{i}{}}(I+M_i+M_i^2+\mathrm{}+M_i^{p1})`$ (10) $`=`$ $`{\displaystyle \underset{MS}{}}M`$ (11) (normalized appropriately). The $`M_i`$ are the generators of the stabilizer $`S`$. Assume the appropriate phase is included in $`M`$ in this sum (this means that if we wish $`M`$ to have eigenvalue $`\omega `$, we include it as $`\omega ^1M`$, which has eigenvalue $`+1`$). Suppose $`T`$ is a set of $`k1`$ or fewer coordinates. The density matrix of $`T`$ is the trace of $`\rho (ab)`$ over the complement of $`T`$. Now, $`X`$, $`Z`$, and all nontrivial products of $`X`$ and $`Z`$ have trace 0. Thus, the only terms in the expression for $`\rho (ab)`$ which contribute to the trace are those coming from $`M`$ with weight $`k1`$. But the parity check matrices for $`D_{r1}`$ and $`D_s^{}`$ contain no rows or linear combination of rows of weight less than $`k`$. Thus, the density matrix of $`T`$ is just the identity, regardless of the value of $`ab`$. Thus, $`T`$ is unauthorized, proving the theorem. $`\mathrm{}`$ ## Acknowledgements I would like to thank Richard Cleve, Hoi-Kwong Lo, Michael Nielsen, and Adam Smith for helpful discussions.
no-problem/9910/astro-ph9910172.html
ar5iv
text
# The Distance to the LMC via the Eclipsing Binary HV 2274 ## 1 Introduction The Hubble constant, $`H_0`$, is one of the most important and heavily investigated cosmological parameters. At the moment, the most reliable way to measure $`H_0`$ is to determine the recession velocity and distance of objects with motions dominated by the Hubble linear expansion. Extragalactic distances are determined by constructing a distance ladder, the first rung of which is often occupied by the LMC. Since the uncertainties in the radial recession velocities are typically very small, the main error in $`H_0`$ comes from the error in distance. Therefore, to obtain the true absolute value of $`H_0`$, one requires the true value of the distance to the LMC. Different distance determinations to the LMC result in conflicting values of the distance modulus spanning the range between $`\mu _{LMC}=18.1`$ (Stanek, Zaritsky & Harris 1998) and $`\mu _{LMC}=18.7`$ (Feast & Catchpole 1997), an astounding spread of 27% in distance. One potentially accurate method for determining the distance to the LMC is the derivation of stellar parameters from eclipsing binary stars. This technique dates back to at least 1974 (Dworak 1974). More recently, Bell et al. (1991) and Bell et al. (1993) determined distances to the Magellanic Clouds using ground-based spectroscopy and photometry. Of course, the best experimental method for determining eclipsing binary stellar parameters would be acquisition of accurate, space-based, broad spectral range data, e.g. HST/STIS spectra. The many parameters which must be solved for simultaneously may give rise to regions of degeneracy in the fit if the spectra do not extend over a wide enough range in wavelength. One way to compensate for a narrow spectral range is to include ground-based photometry in the fit. Unfortunately, this method also reintroduces all the uncertainties inherent in ground-based photometry. The first eclipsing binary used to determine the distance to the LMC with space-based spectrophotometry was HV 2274, a system whose first CCD light curve was presented by Watson et al. (1992), along with estimates of maximum light magnitudes and colors of $`V14.16`$ mag and $`(BV)0.18`$ mag. The stellar parameters of this system were determined by Guinan et al. (1998a,b). By splicing together four HST/FOS spectra to form a single spectrum spanning 1150 $`\dot{A}`$ to 4820 $`\dot{A}`$, Guinan et al. (1998a,b) used ATLAS 9 model atmospheres (Kurucz 1991) to simultaneously fit for the emergent flux at the surface of the stars, the reddening, $`E(BV)`$, the normalized interstellar extinction curve, $`k(\lambda V)`$, and the ratios of the stellar radii to the distance of the binary. In their original fit to the spectra, Guinan et al. (1998a) included the photometric points of Watson et al. (1992) and found a reddening of $`E(BV)=0.083\pm 0.006`$ mag and a distance modulus $`\mu _{LMC}=18.42\pm 0.07`$ mag. While generally robust, this simultaneous fitting for both reddening and extinction admits a possible degeneracy in the region $`E(BV)0.080.12`$ mag. This region arises because similar relative extinction corrections, $`E(BV)\times k(\lambda V)`$, may be obtained with varying values of $`E(BV)`$ (Guinan et al. 1998b). The reddening towards HV 2274 was also determined by Udalski et al. (1998; thereafter U98), who obtained $`UBVI`$ photometry of the binary and the surrounding field. Consistent with the stellar parameters derived by Guinan et al. (1998a), U98 found that the color of the system remained nearly constant, independent of the binary phase. In agreement with Watson et al. (1992), U98 found a maximum $`V`$ magnitude of 14.16 mag. However, U98 disagreed with the Watson et al (1992) $`(BV)`$ color, instead finding $`(BV)=0.129`$ mag. In addition, U98 found $`(UB)=0.905`$ mag. This multi-color photometric data for HV 2274 and surrounding O/B stars allowed for an independent determination of the reddening towards HV 2274. By plotting a $`(UB)`$ vs $`(BV)`$ diagram for early spectral type stars, the reddening may be determined by measuring their displacement from a sequence of unreddened stars. Adopting a direction of reddening $`E(UB)/E(BV)=0.76`$ (Fitzpatrick 1985), U98 found a reddening to HV 2274 of $`E(BV)=0.149\pm 0.015`$ mag. This value was consistent with the reddening they found to other $`B`$ stars in the surrounding field. Guinan et al. (1998b) then refit their spectra using only the $`B`$ and $`V`$ photometric points from U98 and found that the multi-parameter fit swung to the other end of the degeneracy region and gave $`E(BV)=0.12`$ mag. This reduced the distance modulus to the LMC to $`\mu _{LMC}=18.30\pm 0.07`$ mag, about 2 sigma less than the original. As we will discuss in detail below, however, the uncertainties in $`U`$ and $`B`$ photometry are numerous. $`U`$-band photometry is quite sensitive to the specific filter used for spectral types O and B, exactly those stars used by U98 to determine the reddening. On the other hand, $`B`$-band photometry is sensitive to the specific filter used for spectral types A and F. The U98 observations were made using a non-standard $`U`$-band filter with a steeper cutoff at short wavelengths ($`\lambda <3500\dot{A}`$) than is typical, and were calibrated using only a few standard fields containing mostly spectral type A and F standard stars. Therefore, we try to verify the reddening results of U98 using a more typical $`U`$ filter and a larger sample of standard stars. ## 2 Observations and Photometric Reductions $`UBV`$ photometry of HV 2274 was carried out on the night of Oct. $`18^{th}`$, 1998 at the 0.9-m telescope at Cerro Tololo Inter-American Observatory (CTIO). We used the Site2K\_6 CCD at Cassegrain focus, a $`2048\times 2048`$ CCD with a plate scale of 0.4 arcseconds/pixel and a total field of view of 13.1 arcminutes on a side. The filters used were CTIO VTek2 5438/1026, BTek2 4200/1050 and U#2 3570/660. The U filter was made of UG1 glass with a CuSO4 liquid solution as a red leak blocker. Conditions were photometric. Table 1 contains the epochs and durations of our observations of HV 2274. All images were bias subtracted and flat-fielded in the standard manner. The $`B`$-band and $`V`$-band observations were flat-fielded using dome-flats corrected by a median of twilight and dawn sky-flats. The $`U`$-band observations were flat-fielded using the median of twilight and dawn sky-flats taken over four nights. In order to obtain transformations from instrumental to standard magnitudes, we also observed 51 standard stars from 6 different Landolt (1992) fields and one E-Region (Graham 1982). Instrumental magnitudes were calculated from aperture photometry performed with a radius of 5.8 arcseconds using DAOPHOT II (Stetson 1987, 1991). Using a standard least squares fitting routine, we performed transformations of several functional forms, fitting both observed colors and magnitudes as various functions of standard color, standard magnitude and airmass. We found that all fits produced comparable results that changed the final colors of our O/B stars by at most 0.01 mag. The adopted magnitude transformations allow for an easy comparison on a filter by filter basis with future observations. In the following expressions we represent observed magnitude as the lower case letter, standard magnitude as the upper case letter and the airmass as $`X`$. $$u=U+4.7340.090(UB)+0.444X$$ (1) $$b=B+3.125+0.107(BV)+0.263X$$ (2) $$v=V+2.9740.021(BV)+0.138X$$ (3) The range of airmass and color of our standard star observations was sufficient to include the value of the atmospheric extinction coefficient as a free parameter in our fit. The atmospheric extinction coefficients determined for this run are 0.444, 0.263, and 0.138, for the $`U`$, $`B`$ and $`V`$ bands, respectively. These compare well to the values of 0.453, 0.277 and 0.150 calculated by Landolt (1992) as the mean value of 13 years of observing runs at CTIO. In Figure 1, we show the residuals of our transformations versus standard color in the sense of observed magnitude minus the fitted magnitude. We find root mean square residuals of 0.024, 0.010 and 0.006 mag for the $`U`$, $`B`$ and $`V`$ passbands respectively. As is typical for this passband, the residuals in $`U`$ are comparatively large. However, we see no systematic increase in the size of residuals in any of our filters as we go to bluer colors. In Figure 2, we show the residuals of our transformations versus standard magnitude. We see no evidence for a dependence of residual size with magnitude. Point spread function (PSF) fitting photometry was performed for all the stars in the HV2274 field. We chose this method instead of aperture photometry since although HV 2274 is well separated from its neighbors, this was not the case for the other O/B stars in the field for which we wish to tabulate reddenings. Thus we performed PSF fitting photometry for all the stars in our program field. Both the PSF and the aperture correction vary across the field of the CTIO 0.9m $`2K\times 2K`$ CCD. We used the photometry package DAOPHOT II which allows for a quadratically varying PSF. The variation of the aperture correction across the field is only of order 0.03 mag. Even small relative shifts, however, can significantly disturb the morphology of a given set of O/B stars in the color-color plot. Thus, in order to properly account for the spatial variation in the aperture corrections, we determined the aperture corrections independently for each of the O/B stars. The aperture correction was determined from a neighbor-subtracted image. For a few of our stars, an imperfect PSF resulted in small regions of oversubtraction, which produced noisy growth curves and unreliable aperture corrections. For these stars we adopted the aperture correction of their nearest bright neighbor. We note that in the case of HV 2274 aperture photometry and PSF fitting photometry produced identical results. Our results for HV 2274 are, $$V=14.203\pm 0.006$$ (4) $$(BV)=0.172\pm 0.013$$ (5) $$(UB)=0.793\pm 0.027$$ (6) We include in our error both the formal photon counting errors and an estimation of the errors in the transformations from instrumental to standard magnitude. We compute the transformation error as the root mean square of the residuals shown in Figure 1. In Table 2 we compare our results with past efforts. Since U98 found the color of the binary to be invariant, we compare our colors calculated at a specific phase between eclipses with those of U98 and Watson et al. (1992) calculated as means across the entire cycle. Our $`V`$-band magnitude is close to the maximum light found by U98, while our $`(BV)`$ color is consistent with the original photometry published by Watson (1992). Table 3 provides a complete listing of our photometry of HV 2274 and ten other B stars in the surrounding field, while Figure 3 is a finding chart. These stars are included in the eleven-star set used by U98 and for ease of comparison, we adopt the same ID numbers. We exclude star 1 from our set as it appeared confused in our images. In Figure 4, we compare our results with that of U98 in a $`(UB)`$ versus $`(BV)`$ plot. We note that although there is a significant shift to redder $`(UB)`$ and bluer $`(BV)`$ in our data, the overall morphology of the set of stars is similar. Although we are able to compare our colors with those of U98, the observed magnitudes were not published and we cannot make comparisons on a filter by filter basis. This makes it difficult to determine for certain where the color shifts originate. We may, however, speculate on a few possible sources of the problem. First, we consider the possibility that at least part of the offset is due to the steep short-wavelength cutoff of the $`U`$ filter used by U98. Differences in short-wavelength cutoff of the $`U`$\- band can introduce systematic deviations, significant mainly for B type stars (Bessell 1986). The sign of this effect is such that a steeper blue cutoff results in smaller $`U`$ magnitudes for stars with $`(UB)<0`$ (Bessell 1990). Assuming for a moment agreement in $`B`$ magnitudes, this implies that spectral type B stars will appear bluer in $`(UB)`$ through a filter with a sharper short-wavelength cutoff. We see evidence for this effect in the expected direction as the $`U`$ filter used by U98 has a sharper short-wavelength cutoff than does the CTIO filter, and the U98 results are bluer in $`(UB)`$. However, the magnitude of the difference in the $`(UB)`$ photometry is larger than is expected from solely a $`U`$ passband mismatch. Therefore, we also consider the possibility that problems in the $`B`$ passband may contribute to the discrepancy in $`(UB)`$ as well as the discrepancy in $`(BV)`$. Because the $`B`$ band is situated near the confluence of the Balmer lines, small shifts in the short wavelength cutoff of the $`B`$ filter will affect the colors of A and F stars (Bessell 1990). Many of the standard stars used both by our group and U98 were spectral type A and F stars. This means that both our transformations were largely driven by those stars most sensitive to $`B`$ passband mismatches. A small $`B`$ passband mismatch could therefore introduce significant shifts in the transformations from observed to standard magnitudes. However, our complete spread of standard stars includes stars of spectral types O through M, including many more red stars than U98. We deem it unlikely that our transformation could be adversely affected by difficulties in the A and F stars and yet emerge with a satisfactorily linear color term over the entire range of stellar characteristics. Conversely, $`V`$-band problems cause significant changes mainly for M stars which are much redder than the majority of standard stars used. Therefore, the discrepancies in $`(BV)`$ are likely to be due to a $`B`$-band mismatch, rather then a problem with the $`V`$-band. Another possible source of discrepancy is nonlinearities in the transformations from observed to standard magnitudes. In Figure 5 we again compare our colors to those of U98 this time by plotting our color residuals versus magnitude, in the sense $`\mathrm{\Delta }Color=Color_{\mathrm{This}\mathrm{\_}\mathrm{work}}Color_{\mathrm{U98}}`$. Most of the errors in $`\mathrm{\Delta }Color`$ are highly correlated between individual points and between our and the U98 study. Therefore, as a first approximation, we assume that the only truly uncorrelated errors are the ones associated with the photon counting noise. A linear least squares fit to the eleven points using only our estimated photon noise (U98 had more observations and so their photon noise should be negligible with respect to ours) yields $`\chi _{\mathrm{\Delta }(UB)}^2=151.85`$ and $`\chi _{\mathrm{\Delta }(BV)}^2=7.26`$. The small $`\chi _{\mathrm{\Delta }(BV)}^2`$ suggests that there is little additional uncertainty due to, say, different transformations or statistical error. The linear relation is, therefore, well established and given by $`\mathrm{\Delta }(BV)=(0.401\pm 0.002)+(0.007\pm 0.003)(B15.19)`$, where the errors in the slope and zero point are uncorrelated. The very high $`\chi _{\mathrm{\Delta }(UB)}^2`$ is a strong evidence that either $`\mathrm{\Delta }(UB)`$ versus $`B`$ relation cannot be represented as a straight line (indicating severe photometric problems) or that there are additional sources of uncorrelated noise, which we did not take into account. To investigate the second possibility, we renormalize $`\chi ^2`$ to the number of degrees of freedom, i.e. we fix $`\chi _{\mathrm{\Delta }(UB)}^2=9.00`$. This procedure suggests the existence of 0.038 mag of additional error which should be added in quadrature to each point. We note that 0.038 is of order of a total error in $`U`$-magnitudes claimed by both our and U98 groups and can be understood as resulting from additional statistical uncertainty due to only 4 U98 epochs, systematic errors from U flat fielding and error in determining the U extinction. In Figure 5 we show the augmented errors for $`\mathrm{\Delta }(UB)`$ and our photon errors for $`\mathrm{\Delta }(BV)`$. The zero points of the linear relations represent an approximate offsets between our and the U98 photometry, but the errors \[especially in $`\mathrm{\Delta }(BV)`$\] are smaller than the actual ones because they were specifically chosen to judge the significance of the slopes. The slope in $`\mathrm{\Delta }(BV)`$ is significant at the 2.3-$`\sigma `$ level and the slope in $`\mathrm{\Delta }(UB)`$ with renormalized $`\chi ^2`$ is significant at 1.5-$`\sigma `$ level. This suggest some non-linearity in the transformations from observed to standard magnitude in either our work or that of U98. ## 3 The Reddening to HV 2274 To determine the reddening to HV 2274 we use the Q parameter method recently used by Harris, Zaritsky & Thompson (1997) where Q is defined as $$Q=(UB)0.76(BV)0.05(BV)^2$$ (7) This is equivalent to a rotation of a $`(UB)`$ versus $`(BV)`$ color-color diagram so that the direction of reddening $`E(UB)/E(BV)=0.76+0.05(BV)`$ is parallel to the $`(BV)`$ axis. As detailed in Harris et al. (1997), the coefficient of 0.76 in the direction of reddening was taken from the ratio of color excesses $`E(UB)/E(BV)`$ evaluated from the average LMC extinction curve outside 30 Dor (Fitzpatrick 1985), while the coefficient of 0.05 was derived directly from Galactic studies (Hiltner & Johnson 1956). In Figure 6 we plot a line of unreddened stars and our program stars in a $`(BV)`$ versus Q diagram. Again, our errors include the estimated error in our transformations. The line of unreddened stars was drawn from Harris et al. (1997) and was determined by a fitting a line in a $`(BV)`$ versus Q diagram to a set of observations of unreddened Galactic O/B stars from Straizys (1992). The equation of this line is given by $$(BV)=0.338\times Q0.036$$ (8) The reddening of a program star in such a diagram is then given by the vertical distance between that star and the unreddened line. We note that our stars fall in a fairly tight distribution, parallel to the unreddened line. We find a median reddening of $`E(BV)=0.083`$ mag, while for HV 2274 we find $$E(BV)=0.088\pm 0.025$$ (9) In Table 4 we compare this result with previous measurements. It is noteworthy that this value of $`E(BV)`$ derived from $`UBV`$ photometry is entirely consistent with the reddening fit by Guinan et al. (1998a) using spectra and Watson (1992) $`B`$ and $`V`$ photometry. It is encouraging that two such different methods produced virtually identical results. This reddening to HV 2274 is consistent with other literature on this subject. Unfortunately, HV 2274 lies outside both the LMC reddening maps of Oestreicher & Schmidt-Kaler (1996) and Harris et al. (1997) and the LMC foreground reddening map of Oestreicher, Gocherman & Schmidt-Kaler (1995). However, we may still make use of their results to estimate the likelihood of obtaining a reddening this low. Oestricher et al. (1995) finds that the foreground reddening is extremely patchy and varies from $`E(BV)_{fg}=0.00`$ mag to $`E(BV)_{fg}=0.15`$ mag, with a mean of $`E(BV)_{fg}=0.06\pm 0.02`$ mag. However, the shape of the frequency distribution of foreground reddenings makes quoting a mean value of their distribution somewhat misleading. We note that 60% of their measurements found a foreground reddening of less then $`0.05`$ mag. Similar conclusions were drawn by Bessell (1991), who concurs that the foreground reddening to the LMC is varied with $`E(BV)_{fg}=0.04`$ to $`0.09`$ mag, and Schlegel, Finkbeiner & Davis (1998) who find that the typical foreground reddening measured from dust emission in surrounding annuli is $`E(BV)_{fg}=0.075`$ mag. Since we are primarily interested in the total reddening to HV 2274, including reddening within the LMC, we turn again to Harris et al. (1997) who present a histogram of total reddening measurements obtained using the Q parameter method described above. This histogram is drawn from measurements of 2069 O/B type main sequence stars in a $`1.9^{}\times 1.5^{}`$ section of the LMC. Reproducing this histogram and integrating gives us a 15% probability of obtaining a total reddening measurement of less then $`E(BV)=0.10`$ mag. Therefore, we conclude that although our derived reddening to HV 2274 is low, it is not impossibly so. Our reddening is comfortably above the mean foreground reddening and in a plausible region of total reddening. ## 4 Conclusions We obtained $`UBV`$ photometry of the eclipsing binary HV 2274. Our principal results are: a reddening, $`E(BV)=0.088\pm 0.025`$, consistent with the original fit of Guinan et al. (1998a) and color, $`(BV)=0.172\pm 0.013`$, consistent with the original photometry of Watson (1992). Our results suggest a return to the original estimate of $`\mu _{LMC}18.4`$ mag. Even with our generous estimate of error in $`E(BV)`$, our result is still $`2\sigma `$ different from that of U98 suggesting a real difference in our photometry. Consequently, our reddening is inconsistent with the later results of Guinan et al. (1998b), emphasizing the sensitivity of the spectrophometric fit to small changes in the photometry. Our discussion has emphasized that the systematic uncertainties and variations in atmospheric transmission, filters, and calibrations inherent in ground-based data are likely to limit the accuracy of resulting distances determined from eclipsing binary systems. These systematic uncertainties affect distance moduli at the 5-10% level and will not be sufficiently reduced with larger samples. The determination of distances by solving eclipsing binary systems will become much more reliable with accurate measurements of interstellar extinction. Ground-based infrared photometry is one way to improve on the current determinations. The best way, however, would be to extend space-based spectra over a wide enough range in wavelength to obviate the need for ground-based spectra altogether. We thank Andrzej Udalski for kindly providing his colors for HV 2274 and neighbors. We also thank Edward Guinan and Ignasi Ribas for sending us their ephemeris for HV 2274 in convenient electronic form. Work at LLNL supported by DOE contract W7405-ENG-48.