no-problem/9911/astro-ph9911075.html | ar5iv | text | # Untitled Document
NEW PUBLICATION
ASTROPHYSICAL PLASMAS AND FLUIDS
By
Vinod Krishan
Indian Institute of Astrophysics, Bangalore, India
Astrophysics and Space Science Library 235
Abstract: This book is a valuable introduction to astrophysical plasmas and fluids for graduate students of astronomy preparing either for a research career in the field or just aspiring to achieve a decent degree of familiarity with 99% of the cosmos.
The contents provide a true representation of the phenomenal diversity of dominant roles that plasmas and fluids play in the near and far reaches of the universe. The breadth of coverage of basic physical processes is a particularly attractive feature of this textbook. By first using the Liouville equation to derive the kinetic, two-fluid, and single-fluid descriptions of a plasma and a fluid, and then demonstrating the use of these descriptions for specific situations in the rest of the book, the author has probably chosen the most efficient way of handling this large technical subject. The two major astrophysical issues, fluid or plasma configurations and their radiative signatures, figure prominently throughout the book. The problems are designed to give the reader a feel for the quantitative properties of celestial objects.
Contents:
1. Plasma - The Universal State of Matter. 2. Statistical Description of a Many-Body System. 3. Particle and Fluid Motions in Gravitational and Electromagnetic Fields. 4. Magnetohydrodynamics of Conducting Fluids. 5. Two-Fluid Descriptions of Plasma. 6. Kinetic Description of Plasmas. 7. Nonconducting Astrophysical Fluids. 8. Physical Constants. 9. Astrophysical Quantities. 10. Differential Operators. 11. Characteristic Numbers for Fluids. 12. Acknowledgement for Figures. Index.
‘This book is an excellent introduction and fills an important niche. This book has an excellent chance to attract a large audience. An added bonus is the wonderful final chapter on neutral fluid dynamics. The emphasis on turbulence is an excellent corrective to the usual laminar approach to fluid dynamics. The tone of this text is unique. The organization is excellent and the coverage very appropriate for an introductory text’.
Paul J. Wiita, Georgia State University, USA.
November 1998, 372 pp.
Hardbound, ISBN 0-7923-5312-9
NLG 295.00/USD 159.00/GBP 100.00
November 1998, 372 pp.
Paperback, ISBN 0-7923-5490-7
NLG 130.00/USD 70.00/GBP 45.00
ORDER FORM:
Please send me Astrophysical Plasmas and Fluids by Vinod Krishan:
-Copy(ies) of Hardbound, ISBN 0-7923-5312-9
NLG 295.00/USD 159.00/GBP 100.00
-Copy(ies) of Paperback, ISBN 0-7923-5490-7
NLG 130.00/USD 70.00/GBP 45.00
Payment enclosed to the amount of…
Please invoice me / Please charge my credit card
Am.Ex Visa Diners club Mastercard Eurocard
Access
Name of Card Holder:
Card No.: Expiry Date:
Delivery address:
Name:
Address:
Date: Signature:
European VAT Registration Number:
To be sent to your supplier of:
KLUWER Academic Publishers Order Department, P.O.Box 322 3300 AH Dordrecht, The Netherlands Fax: +31-78-6546474 Tel: +31-78-6392392 Internet E-mail: orderdept@skap.nl
Orders from individuals accompanied by payment or authorization to charge a credit card account will ensure prompt delivery. Postage and handling on all such orders, delivered by surface mail, will be absorbed by the Publisher. Orders from outside Europe will be sent by airmail, for which the customer will be charged extra. Payment will be accepted in any convertible currency. Please check the rate of exchange at your bank. US Dollar prices apply to deliveries in USA, Canada and Mexico only. Prices are subject to change without notice. All prices are exclusive of value added tax (VAT). Customers in the Netherlands please add 6% VAT. Customers from other countries in the European Community please fill in the VAT number of your institute/company in the appropriate space on the order form, or add 6% VAT (individuals are not charged VAT).
KLUWER ACADEMIC PUBLISHERS
no-problem/9911/astro-ph9911107.html | ar5iv | text | # Low-ionization structures in planetary nebulae
## 1. Introduction
The presence of low-ionization structures (LIS) in PNe is poorly understood. They appear as jets, knots, filaments and tails, attached to or detached from the main shell. They are often labeled with specific acronyms trying to describe some of their characteristics – for instance, FLIERs: fast, low-ionization emission regions (Balick et al. 1993); or BRETs: bipolar, rotating, episodic jets (López, Vázquez, & Rodríguez 1995). Despite these designations, the only characteristic common to all PNe containing LIS is the presence of features much more prominent in low-ionization lines (such as \[NII\] and \[SII\]) than in the more highly ionized ones (typically H$`\alpha `$ and \[OIII\]).
There exist some families of models trying to explain the origin of the LIS. Basically, these are: interacting stellar wind models (Frank, Balick, & Livio 1996; García-Segura et al. 1999); jet interaction with the circumstellar medium (Cliffe et al. 1995); and the interaction of the shell with the interstellar medium (Soker & Zucker 1997). See Mellema (1996) for a review of LIS and their models. In addition, some ingredients – such as stellar magnetic fields, rotation, precession, a binary system in the center, and dynamical (Kelvin-Helmholtz and Rayleigh-Taylor) and/or radiative in situ instabilities – can be considered in these families of models in order to try to match the observations.
## 2. Results for two samples of PNe with LIS
### 2.1. Literature sample
We searched the literature for PNe which present LIS, finding about 50 objects. Of this sample, $`50\%`$ have jets or jet-like structures and $`35\%`$ present some other kind of symmetric LIS (radially symmetrical or point-symmetric pairs). The remaining $`15\%`$ are PNe that show LIS more or less oriented in the radial direction, probably resulting from the ionization front interacting with density or ionization fluctuations in the circumstellar gas (see, for instance, Soker 1998).
Symmetric LIS should be analyzed deeply, since they certainly give us clues to their formation processes. To this end, a precise definition of the structures we are talking about is necessary. Hereafter we refer to as “jets” the highly collimated LIS which: are not isolated knots but are extended in the radial direction from the central star; appear in opposite symmetrical pairs; and move with velocities substantially larger than those of the main shell, whose typical velocities are 20 - 40 km s<sup>-1</sup>. On the contrary, features with the morphological appearance of jets, but which move with velocities similar to those of the main shell, or without available kinematical data, are called jet-like structures. It is clear that projection effects, which are often poorly known, play a fundamental role in distinguishing between genuine jets and jet-like LIS. Detailed spatio-kinematic modeling of both the main nebulae and the LIS is therefore mandatory.
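Since genuine jets are identified by expansion velocities well above the 20 - 40 km s<sup>-1</sup> of the main shell, the projection correction mentioned above can be sketched numerically. This is an illustrative snippet, not from the paper; it assumes the simplest geometry, in which the observed radial velocity is the true outflow velocity times the sine of the inclination of the flow axis to the plane of the sky.

```python
import math

def deproject_velocity(v_obs_km_s, inclination_deg):
    """True outflow speed from the observed radial velocity, assuming the
    flow axis makes `inclination_deg` with the plane of the sky, so that
    v_obs = v_true * sin(i). Diverges as i -> 0 (flow in the sky plane)."""
    i = math.radians(inclination_deg)
    if math.sin(i) == 0.0:
        raise ValueError("flow lies in the plane of the sky; deprojection is undefined")
    return v_obs_km_s / math.sin(i)

# A jet-like feature observed at 20 km/s but inclined only 30 degrees out of
# the sky plane would actually expand at 40 km/s -- illustrating how poorly
# known inclinations blur the jet / jet-like distinction.
```

The snippet also makes the failure mode explicit: for features close to the plane of the sky, small errors in the assumed inclination translate into very large errors in the deprojected velocity.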
### 2.2. Our new data on LIS
We obtained high-quality narrow-band images and long-slit spectra for a sample of PNe: IC 4593 (Corradi et al. 1997); NGC 3918, K 1-2 and Wray 17-1 (Corradi et al. 1999a); NGC 6337, He 2-186 and K 4-47 (Corradi et al. 1999b); IC 2553 and NGC 5882 (Corradi et al., in preparation). In analyzing these data, via spatio-kinematic modeling, our goal is to determine the 3-D geometrical and kinematic parameters which can, in turn, constrain the LIS formation models.
### 2.3. Jets and jet-like LIS
Our preliminary results, in particular concerning those PNe which present jet or jet-like LIS, are puzzling. We follow the morphological classification of Schwarz et al. (1993) and the similar one of Manchado et al. (1996), and find that most of the PNe with LIS are not spherical: ellipticals ($`23\%`$); bipolars and quadrupolars ($`23\%`$); point-symmetric ($`27\%`$); irregulars ($`20\%`$); or spherical ($`7\%`$). Thus LIS seem to appear in all morphological classes, but less frequently in the spherical one.
In addition, some PNe with LIS have no published kinematic data, which makes a deeper analysis impossible. Others, such as IC 4593 (Corradi et al. 1997), NGC 7009 (Balick et al. 1998), K 1-2 (Corradi et al. 1999a), and He 2-429 (Guerrero et al. 1999), do have kinematic measurements, and their jet-like LIS have very low radial velocities. After deprojection for inclination the expansion velocity of NGC 7009, for instance, would put it in the class of jets (adopting the inclination angle of Reay & Atherton 1985), but there are arguments against the jet-like structure of IC 4593 being close to the plane of sky (Corradi et al. 1997). There are PNe which do contain jets, such as Hb 4 (Hajian et al. 1997), NGC 3918 (Corradi et al. 1999a), K 4-47 and He 2-186 (Corradi et al. 1999b).
What kind of features are expected from the jet formation models? If jets are formed by interacting stellar winds (ISW), in the same process in which the main nebulae are formed, then they should have ages similar to that of the main shell and lie along the symmetric axis (or around it, if they are precessing). The case of NGC 3918 is very interesting, since it does have a two-sided polar jet coeval with the highly axisymmetrical shell (Corradi et al. 1999a). However, the jet is much older than the main shell in other PNe such as NGC 6881 (Guerrero & Manchado 1998).
Our observations of NGC 3918, He 2-186 and K 4-47 reveal that the velocity of the collimated gas generally increases along the jets. Therefore, it is possible that the linear increase of the gas expansion velocity is an intrinsic characteristic of the collimation process itself. Note that the collimation processes in the case of K 1-2 (with a close binary system in the center) and NGC 3918, for instance, could not be the same. An increase in the expansion velocity is the expected behavior of the jets formed in the models of García-Segura et al. (1999), in which the magnetic field of the central star is responsible for the jet collimation.
### 2.4. Other symmetrical LIS
Here we include all the features that appear in radially symmetrical pairs with respect to the central star (not only along the major axis, but also along other directions), and those that are point-symmetric, but not those clearly related to jets. One third of the sample presents this kind of symmetry: NGC 6751 (Chu et al. 1991); Cn 1-5, NGC 2553 (Corradi et al. 1996); NGC 3242 (Balick et al. 1998); NGC 5189, NGC 6826 (Phillips & Reay 1983); NGC 6337, Wray 17-1 (Corradi et al. 1999a); and others.
What physical processes could be responsible for these structures? The idea of bullets ejected by the central star is very appealing indeed. However, there are problems with this model, since ballistic motions are clearly difficult to maintain in a hydrodynamical environment. MHD models of García-Segura et al. (1999, and this volume) could apply here too. The latter could explain these symmetrical LIS only if for some reason the jet emission is obscured, except in the jet heads. In fact, this is not an unlikely possibility. New models presented in this meeting – the stagnation knots from partially collimated outflows – appear to reproduce symmetrical LIS which appear unrelated to jets, such as the FLIERs of NGC 3242 and NGC 7662 (Balick et al. 1998); in this model knots are formed in the stagnation zone of bipolar shells (Steffen & López, this volume).
## 3. Conclusions
We have briefly discussed some of the very puzzling characteristics of low-ionization structures in PNe. The need for a good determination of the basic parameters and detailed constraints on the LIS formation models is strongly emphasized. In addition, the predictions of current models, when compared to the observations, are far from satisfactory, and detailed modeling of LIS is clearly a matter that requires urgent attention. To date, the models for jet formation which best agree with observations are those that consider a stellar magnetic field, even though there is no real evidence for the presence of magnetic fields in post-AGB stars or PNe. What kind of direct or even indirect evidence for these fields would one expect? Finally we would like to raise one more question. Since the majority of the LIS lie in non-spherical PNe, are the processes responsible for the formation of LIS the same processes that are causing the asphericity?
#### Acknowledgments.
The work of RLMC, EV, and AM is supported by a grant of the Spanish DGES PB97-1435-C02-01, and that of DRG by a grant from the Brazilian Agency FAPESP (proc 98/7502-0).
## References
Balick, B., Alexander, J., Hajian, A. R., Terzian, Y., Perinotto, M., & Patriarchi, P. 1998, AJ, 116, 371
Balick, B., Rugers, M., Terzian, Y., & Chengalur, J. N. 1993, ApJ, 411, 778
Chu, Y-H., Manchado, A., Jacoby, G. H., & Kwitter, K. B. 1991, ApJ, 376, 150
Cliffe, J. A., Frank, A., Livio, M., & Jones, T. W. 1995, ApJ, 447, L49
Corradi, R. M. L., Gonçalves, D. R., Villaver, E., Mampaso, A., Perinotto, M., Schwarz, H. E., & Zanin, C. 1999b, ApJ submitted
Corradi, R. L. M., Guerrero, M., Manchado, A., & Mampaso, A. 1997, New Astronomy 2, 461
Corradi, R. M. L., Manso, R., Mampaso, A., & Schwarz, H. E. 1996, A&A, 313, 913
Corradi, R. M. L., Perinotto, M., Villaver, E., Mampaso, A., & Gonçalves, D. R. 1999a, ApJ in press
Frank, A., Balick, B., & Livio, M. 1996, ApJ, 471, L53
García-Segura, G., Langer, N., Rózyczka, M., & Franco, J. 1999, ApJ, 517, 767
Guerrero, M. A., & Manchado, A. 1998, ApJ, 508, 262
Guerrero, M. A., Vázquez, R., & López, J. A. 1999, AJ, 117, 967
Hajian, A. R., Balick, B., Terzian, Y., & Perinotto, M. 1997, ApJ, 487, 313
López, J. A., Vázquez, R., & Rodríguez, L.F. 1995, ApJ, 455, L63
Manchado, A., Guerrero, M. A., Stanghellini, L., & Serra-Ricart, M. 1996, The IAC Morphological Catalog of Northern Galactic PNe, IAC Publ.
Mellema, G. 1996, in Jets from Stars and Galactic Nuclei, Springer Lecture Notes, ed. W. R. Kundt. (Springer, Berlin), 149
Phillips, J. P., & Reay, N. K. 1983, A&A, 117, 33
Reay, N. K., & Atherton, D. P. 1985, MNRAS, 215, 233
Schwarz, H. E., Corradi, R. M. L., & Stanghellini, L. 1993, in IAU Symp 155, Planetary Nebulae, eds. R. Weinberger & A. Acker (Dordrecht: Kluwer), 214
Soker, N., 1998, MNRAS, 299, 562
Soker, N., & Zucker, D. B. 1997, MNRAS, 289, 665
no-problem/9911/astro-ph9911310.html | ar5iv | text | # 1 Introduction
## 1 Introduction
In order to compare the evolutionary status, energetics, obscuration, and physical conditions of the nuclear regions of ULIRGs with those of less luminous infrared-bright galaxies with minimal sensitivity to extinction, we have used the grating mode of the ISO Long Wavelength Spectrometer (LWS) \[Clegg et al. (1996)\] to carry out (1) a full far-infrared spectral survey of a small sample of nearby IR-bright galaxies including the ULIRGs Arp 220 and Mkn 231 and (2) a fine-structure line survey of more distant galaxies, including a survey of ULIRGs in the \[C II\]158 $`\mu `$m fine-structure line. The observations allow us to analyze the dust continuum, to put constraints on the ionization parameters and the intensity of the radiation as it impinges upon the surrounding neutral clouds of gas and dust, and ultimately on the nature of the source(s) of luminosity. Detailed analyses of the spectra of the individual galaxies are presented elsewhere (Fischer et al., 1996, 1997; Colbert et al., 1999; Satyapal et al., 1999; Lord et al., 1999; Unger et al., 1999; Bradford et al., 1999; Harvey et al., 1999; and Spinoglio et al., 1999). Here we present a comparative overview of the observational results.
## 2 The LWS full spectra of infrared-bright galaxies
We present full, high signal-to-noise LWS spectra of six infrared-bright galaxies in Figure 1. The LWS aperture is $`\sim `$ 75” Swinyard et al. (1998) and the spectral resolution is $`\lambda /\mathrm{\Delta }\lambda \sim `$ 200 in grating mode. The spectra are presented in sequence extending from strong \[O III\]52,88 $`\mu `$m and \[N III\]57 $`\mu `$m fine-structure line emission in the galaxies Arp 299 and M 82, to only faint \[C II\]158 $`\mu `$m line emission from gas in photo-dissociation regions in the prototypical ultraluminous galaxy Arp 220. The far-infrared spectrum of the ULIRG Arp 220 is dominated by absorption lines of OH, H<sub>2</sub>O, CH, and \[O I\]. Intermediate in the sequence are Cen A, NGC 253, and NGC 4945, showing weak \[O III\] and \[N III\] lines while their PDR emission lines remain moderately strong. Interestingly, the strength and richness of the molecular absorption spectra are anti-correlated with the equivalent widths of the fine-structure emission lines. For example, M 82 shows faint OH absorption from the ground level at 119 $`\mu `$m Colbert et al. (1999), while NGC 253 shows absorption from the ground state in three cross-ladder transitions and an emission line cascade at 79 $`\mu `$m and 163 $`\mu `$m Bradford et al. (1999). In NGC 4945 and Arp 220, OH absorption from both ground and excited rotational levels is present (Lord et al., 1999; Fischer et al., 1997). In Arp 220, although the existence of a downward cascade is suggested by the presence of emission at 163 $`\mu `$m, absorption from rotational levels as high as 416 K and 305 K above the ground state is seen for OH and H<sub>2</sub>O, respectively, and the \[O I\]63 $`\mu `$m line is seen in absorption. Although the location of the excited molecules is not certain, OH and H<sub>2</sub>O are expected to exist in abundance in dense photo-dissociation regions (PDRs) Sternberg et al. (1995), where they could be excited radiatively by the far-infrared emission from warm dust.
It is of interest to compare the far-infrared spectra of the archetypical Seyfert 2 galaxy NGC 1068 Spinoglio et al. (1999) and that of the Galactic Center White et al. (1999) with the spectra presented in Figure 1. The equivalent widths of the far-infrared fine-structure line emission in NGC 1068 resemble those in the starburst galaxy M 82. In addition to its Seyfert 2 nucleus, NGC 1068 hosts a starburst in its circumnuclear ring, that is possibly as young as the youngest clusters in M 82 Davies et al. (1998). This starburst may be responsible for much of the far-infrared emission, as suggested by Telesco et al. Telesco et al. (1984). A notable difference is that in NGC 1068 the OH lines are observed in emission suggesting unique excitation conditions possibly related to its Seyfert 2 nucleus (see discussion in Spinoglio et al., 1999), while in M 82 OH is observed in absorption Colbert et al. (1999). We note here that for Cen A (Unger et al., 1999; Figure 1), also known to harbor an AGN, the weakness of the far-infrared \[O III\] lines may indicate that the AGN is not the dominant source powering the far-infrared luminosity. The far-infrared spectrum of the Galactic Center White et al. (1999) would fall toward the upper end of the sequence shown in Figure 1, with an added emission line component due to warm, perhaps shock-excited, neutral gas.
## 3 Parameterization of the far-infrared spectral sequence of IR-bright galaxies
The sequence shown in Figure 1 may be caused by variation of many parameters, but it is of interest to examine whether a single parameter or evolutionary effect can play the dominant role in the progression, and in particular to try to understand what conditions cause the ultraluminous galaxies to appear at the extreme end of the sequence.
In Figure 2 we plot the temperature-insensitive \[O III\]52/\[O III\]88 line ratio as a function of the \[O III\]88/$`F_{FIR}`$ ratio for the galaxies in which \[O III\] line emission was detected in Figure 1. To within the uncertainties, no clear dependence was found for our small sample, and all of the measured \[O III\] line ratios fall within the range 0.6 - 1.2, consistent with electron densities between 100 - 500 cm<sup>-3</sup>. These results suggest that neither density nor far-infrared differential extinction between 52 and 88 $`\mu `$m appears to be the *single dominant parameter* in the observed sequence Fischer et al. (1999). This is consistent with previous extinction estimates. Despite the inferred high column density of dust corresponding to A<sub>v</sub> $`\sim `$ 1000, the estimated extinction *to the ionized gas* in Arp 220 is A<sub>v</sub> $`\sim `$ 25-50 (Fischer et al., 1997; Genzel et al., 1998).
In Figures 3 and 4 we plot the \[N III\]57/\[N II\]122 and \[O III\]52/\[N III\]57 line ratios as a function of the \[O III\]88/$`F_{FIR}`$ ratio for galaxies from Figure 1 (where these lines were detected). The excitation potentials of \[N II\], \[N III\], and \[O III\] are 14.5, 29.6, and 35.1 eV, respectively. Thus for constant metallicity, both Figures 3 and 4 indicate that accompanying the progression to lower relative emission line strength is a progression to lower excitation. Figure 4 strengthens our conclusion that far-infrared extinction is not responsible for the apparent excitation effects. The ionization parameter, defined as the ratio of ionizing photons to hydrogen atoms at the inner face of the cloud, plays a key role in determining the ionization structure of clouds surrounding a source of ionizing radiation. It is equal to Q(H)/4$`\pi r^2n_Hc`$, where Q(H) is the Lyman continuum rate absorbed by the gas and $`n_H`$ is the hydrogen density at the inner radius, $`r`$, of the cloud. Thus if density effects alone do not explain the sequence, effects such as larger inner cloud radii due to stellar winds or lower Q(H)/$`L_{Bol}`$ due to dust within the HII regions or softer radiation fields may be responsible for the apparent excitation progression. If the latter is the case, and if starbursts are the source of the excitation, then an aging starburst or one with an IMF with a low upper mass limit could be present in the ultraluminous galaxies and other low excitation galaxies. Soft radiation fields or dusty H II regions may explain the presence of ubiquitous molecular material in close proximity to the nuclear regions of these galaxies and the prominent molecular absorption lines. It is difficult, however, to reconcile the aging starburst interpretation with the high luminosity of the ultraluminous galaxies, since older starbursts have lower luminosities than their younger counterparts.
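The ionization parameter defined above is straightforward to evaluate numerically. The sketch below (not from the paper; the input values are purely illustrative) computes $`U=Q(H)/4\pi r^2n_Hc`$ in CGS units.

```python
import math

C_CGS = 2.99792458e10  # speed of light [cm/s]

def ionization_parameter(q_h_per_s, r_cm, n_h_cm3):
    """Dimensionless ionization parameter U = Q(H) / (4 pi r^2 n_H c):
    the ratio of ionizing-photon density to hydrogen density at the
    inner face (radius r) of the cloud."""
    return q_h_per_s / (4.0 * math.pi * r_cm**2 * n_h_cm3 * C_CGS)

# Illustrative numbers only: Q(H) = 1e52 photons/s, r = 1e20 cm, n_H = 100 cm^-3.
u = ionization_parameter(1e52, 1e20, 100.0)
```

The functional form makes the text's point explicit: U can be lowered either by increasing the inner radius r (stellar winds) or by reducing the absorbed Lyman continuum rate Q(H) (dust within the H II regions, or softer radiation fields), without any change in density.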
## 4 The far-infrared spectra of ULIRGs
The far-infrared spectrum of the second brightest ultraluminous galaxy Mkn 231 Harvey et al. (1999) is surprisingly similar to that of Arp 220 (to within the achieved signal-to-noise ratio). It is dominated by OH absorption, with similar OH absorption line ratios, and only faint PDR line emission is present. A single component absorption layer is inconsistent with the observed line ratios in these galaxies and fluorescent components do not alleviate the problem. The observed OH line ratios probably result from independent absorption and emission components Suter et al. (1998). Based on the mid-infrared spectra of a sample of nearby ULIRGs, Genzel et al. Genzel et al. (1998) infer that Mkn 231 has a strong AGN component while the far-infrared luminosity of Arp 220 is powered by a starburst. Thus the similarity of the far-infrared spectra of these two ultraluminous galaxies is somewhat surprising.
The ultraluminous galaxies have \[C II\]158 $`\mu `$m line to far-infrared flux ratios lower than those of normal and less luminous IR-bright galaxies by an order of magnitude (Luhman et al., 1998; 1999). This has been interpreted as an indication of a lower value of the average interstellar radiation field $`\langle G_o\rangle `$ in Arp 220, where the upper limit for the \[O I\]145/\[C II\]158 emission line ratio is unexpectedly low (Fischer et al., 1997; Luhman et al., 1998). Implicit in this interpretation is the assumption that the \[O I\]145 $`\mu `$m upper limit is not affected by self-absorption, a reasonable assumption since the lower level of the \[O I\]145 $`\mu `$m line is 228 K above the ground state. On the other hand, if self-absorption is responsible for the apparent faintness of the \[O I\]145 $`\mu `$m line, then very high values of $`\langle G_o\rangle `$ are possible, as has been suggested by Malhotra et al. Malhotra et al. (1998) for a small percentage of their sample of normal galaxies. A plausible explanation for both low ionization parameters and low values of $`\langle G_o\rangle `$ is dusty H II regions. Low ionization parameters can be consistent with high $`\langle G_o\rangle `$ if molecular clouds surround very compact H II regions.
## Acknowledgments
This work was supported in part by the Office of Naval Research and the NASA ISO grant program. We appreciate the skill and dedication of the LWS instrument and data analysis teams at Vilspa, RAL, and IPAC.
no-problem/9911/astro-ph9911240.html | ar5iv | text | # Recent developments of inverse Compton scattering model of pulsar radio emission
## 1. Introduction
The emission beams of a radio pulsar have been identified as having two (core, inner conal; Lyne & Manchester 1988) or three parts (plus an outer conal; Rankin 1983) through careful studies of the observed profiles and polarization characteristics. Many theoretical models can only explain the hollow cone beam. It is necessary to understand the core emission theoretically. Some theoretical efforts have been made; one of them is the inverse Compton scattering (ICS) model, which can produce both core and conal emission beams (Qiao & Lin 1998; Liu et al. 1999; Qiao et al. 1999b; Xu et al. 1999a).
Up to now, the following issues have been investigated within the model:
(1). Inner gap structure and the explanation of some phenomena, such as mode-changing and nulling (Zhang & Qiao 1996; Qiao & Zhang 1996; Zhang et al. 1997a,b).
(2). Emission beams and emission regions of radio pulsars (Qiao & Lin 1998).
(3). Frequency behaviour of pulse profiles (Liu & Qiao 1999; Qiao et al. 1999b).
(4). The polarization properties of the ICS model in strong magnetic fields (Xu et al. 1999a).
(5). Depolarization and position angle jumps (Xu et al. 1997).
(6). Coherent ICS process in the magnetosphere (Liu et al. 1999; Xu et al. 1999a).
## 2. Basic idea of the ICS model
The basic idea of the model can be found in Qiao & Lin (1998), Qiao et al. (1999b), Xu et al. (1999a) and Liu et al. (1999). In the model, low frequency electromagnetic waves are supposed to be produced near the star surface due to the violent breakdown of RS type vacuum gap (Ruderman & Sutherland 1975). The waves are assumed to propagate freely in pulsar magnetospheres and inverse Compton scattered by the secondary particles produced in gap sparking processes. The upscattered photons are in the radio band, i.e., the observed radio emission. With the simple dipole field, the incident angle of the ICS decreases first, and then starts to increase above a critical height. The Lorentz factor of the secondary particles, however, keeps decreasing due to various energy loss mechanisms (mainly the ICS with the thermal photons near the surface). The combination of the above two effects naturally results in the feature that on a given field line, the emission has the same frequency at three heights, corresponding to one core and two conal emission components.
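The scattered frequency in this picture follows the standard single-scattering inverse Compton relation, $`\nu \simeq 2\gamma ^2\nu _0(1-\beta \mathrm{cos}\theta _i)`$. The following is an illustrative numerical sketch (the input values are assumptions, not taken from the paper):

```python
import math

def ics_frequency(f0_hz, gamma, theta_in_rad):
    """Outgoing frequency for inverse Compton scattering of a low-frequency
    wave (f0) off a particle of Lorentz factor gamma at incidence angle
    theta_in: f ~ 2 * gamma^2 * f0 * (1 - beta * cos(theta_in))."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return 2.0 * gamma**2 * f0_hz * (1.0 - beta * math.cos(theta_in_rad))

# Head-on scattering (theta = pi) gives the full ~4*gamma^2 boost, while
# grazing incidence (theta -> 0) gives almost no boost; the non-monotonic
# run of theta with height, combined with a decreasing gamma, is what lets
# one radio frequency be emitted at three different heights.
```

The two limits in the comment mirror the argument in the text: the competition between the incidence angle (first decreasing, then increasing with height) and the steadily decreasing Lorentz factor sets where a given observing frequency originates.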
One basic ingredient of the ICS model is the vacuum gap, which has been opposed by binding energy calculations. However, the idea that pulsars are bare strange stars can solve the binding energy problem completely (Xu & Qiao 1998; Xu et al. 1999b). If the vacuum gap could be formed, the ICS of the primary particles off the thermal photons actually take an important role in the inner gap physics, both within and above the gap (Zhang & Qiao 1996; Zhang et al. 1997a,b). The energy loss behaviour of the secondary particles is also influenced by ICS process (Zhang et al. 1997b).
## 3. Emission beams and their properties
The central or ‘core’ emission beam, inner cone and outer cone beams have been simulated in the ICS model. We found that:
(1). ‘Core’ emission should in fact be a small hollow cone, which can be identified from decomposed Gaussian components (Qiao et al. 1999a).
(2). Different emission components are emitted at different heights: ‘core’ emission is emitted at a place near the surface, ‘inner cone’ at a higher place, and ‘outer cone’ at the highest. Due to the retardation and aberration effects caused by different heights, polarization position angle can have two or three modes at a given longitude (Xu et al. 1997).
(3). The beam size changes with frequencies. As observing frequency increases, the ‘core’ emission beam becomes narrow, the ‘inner cone’ becomes slightly wider or has little change, and the ‘outer cone’ also becomes narrow (Qiao & Lin 1998).
## 4. Classification and frequency behaviour of pulse profiles
Based on the ICS model and the multi-frequency observations, radio pulsars can be divided into two categories:
Type I: Pulsars with core and inner cone. These pulsars have a shorter period $`P`$, and their polar caps are larger (e.g. Fig.6b in Qiao & Lin 1998). As the impact angle of the line of sight gradually increases, they are grouped into two sub-types, namely Ia (e.g. PSR B1933+16) and Ib (e.g. PSR B1845-01).
Type II: Pulsars with all three parts of the beam. These pulsars have normal periods, and the low frequency waves should be strong enough at high altitudes to produce the radio emission. They can be further grouped into three sub-types as the impact angle gradually increases. Type IIa: pulsars with five or six components at most observing frequencies, when the line of sight cuts the core beam. The prototype is PSR B1237+25. Type IIb: the impact angle is larger, so that at higher frequencies the line of sight misses the core beam. An example is PSR B2045-16. Type IIc: the line of sight has the largest impact angle, so that only the outer conal branch can be observed. A typical pulsar is PSR B0525+21.
The profiles of most of these types or subtypes have been simulated, and typical examples were selected and compared with multi-frequency observations (Qiao et al. 1999b; Liu & Qiao 1999). Here are two examples.
Type Ia: The multi-frequency observations of PSR B1933+16 show that it belongs to Type Ia pulsars. It has a single component at low frequency, but becomes triple at higher frequencies. Such behaviour can be simulated in the model, since the radius of the ‘inner’ cone increases towards higher frequencies. This may be an important feature of the ICS model distinguished from the other models.
Type IIa: For such pulsars, we simulate a typical example, PSR B1237+25. It is worth mentioning that the ICS model can interpret an important characteristic of this pulsar: five components in most frequency bands, but three components at very low frequencies. This can hardly be explained by any other model.
## 5. Polarization
The polarization features of emission scattered by relativistic electrons in the strong magnetic field were calculated from the Stokes parameters of the scattered emission (Xu et al. 1999a). (1). When $`\omega _{\mathrm{in}}\ll \frac{eB}{mc}`$ and both $`\omega _{\mathrm{in}}`$ and $`\omega _{\mathrm{out}}`$ (the angular frequencies of the incident and outgoing photons, respectively) are in the radio band, the scattered photons are completely linearly polarized, and the polarization position angle lies in the co-plane of the out-going photon direction and the magnetic field. (2). For resonant scattering at high energy bands, significant circular polarization appears in the scattered emission. The position angle of linear polarization is perpendicular to the co-plane of the out-going photon and the magnetic field, different from the case in the radio band.
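The polarization fractions quoted here follow directly from the Stokes parameters. As a generic reminder (not the model calculation itself, which requires the full scattering cross sections), the linear fraction, circular fraction, and position angle are:

```python
import math

def polarization_fractions(i, q, u, v):
    """Fractional linear polarization sqrt(Q^2 + U^2)/I, fractional
    circular polarization |V|/I, and the linear polarization position
    angle atan2(U, Q)/2 [rad], from the Stokes parameters I, Q, U, V."""
    p_lin = math.hypot(q, u) / i
    p_circ = abs(v) / i
    psi = 0.5 * math.atan2(u, q)
    return p_lin, p_circ, psi

# Fully linearly polarized radiation, as in the low-frequency scattering
# case above: (I, Q, U, V) = (1, 1, 0, 0) gives p_lin = 1 and p_circ = 0.
```

A nonzero V with Q = U = 0 would conversely describe the purely circular limit relevant to the core mini-beam discussion that follows.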
The inverse Compton scattering of a bunch of particles outflowing in the pulsar magnetosphere should be coherent in order to produce significant circular polarization for beamed radio emission (Xu et al. 1999a). At a given time an observer can only see a small part of an emission beam radiated by a particle bunch, which we call the ‘transient beam’. (1). In the ICS model, at a given frequency the transient beam has three parts (core, inner and outer cones), each of which is called a ‘mini-beam’, and their polarization features are quite different. (2). Circular polarization is very strong (even up to 100%) in the core mini-beam and much weaker in the inner cone mini-beam. (3). If the line of sight sweeps across the center of a core (or inner conal) mini-beam, the circular polarization will experience a central sense reversal; otherwise it will be dominated by one sense, either the left hand or the right hand, according to its traversing line relative to the mini-beam. (4). The position angles at a given longitude of transient ‘sub-pulses’ take diverse values around the projection of the magnetic field. The variation range of position angles is larger for core emission and smaller for the conal beams. When many such ‘sub-pulses’ from one mini-beam are summed up, the mean position angle at the given longitude is averaged to the central value, which is determined by the projection of the magnetic field lines. (5). Stronger circular polarization should be observed in sub-pulses with higher time resolution according to our model.
## 6. Conclusion
Besides the natural appearance of core and conal components, the ICS model can also explain some observational properties of radio pulsars, such as the frequency-dependent pulse profiles and the polarization of mean pulses and individual pulses. We propose here a classification method for grouping pulsar integrated pulses, which may help in understanding the multi-frequency pulse data.
### Acknowledgments.
This work is supported by the National Natural Science Foundation of China, by the Climbing Project of China, and by the Youth Foundation of PKU.
## References
Liu, J.F., & Qiao, G.J. 1999, Chin. Astron. Astrophys., 23, 133

Liu, J.F., Qiao, G.J., & Xu, R.X. 1999, Chin. Phys. Lett., 16, 541

Lyne, A.G., & Manchester, R.N. 1988, MNRAS, 234, 477

Qiao, G.J., & Lin, W.P. 1998, A&A, 333, 172

Qiao, G.J., & Zhang, B. 1996, A&A, 306, L5

Qiao, G.J., Liu, J.F., Wang, Y., Wu, X.J., & Han, J.L. 1999a, in these proceedings

Qiao, G.J., Liu, J.F., Zhang, B., & Han, J.L. 1999b, ApJ, submitted

Rankin, J.M. 1983, ApJ, 274, 333

Ruderman, M.A., & Sutherland, P.G. 1975, ApJ, 196, 51

Xu, R.X., & Qiao, G.J. 1998, Chin. Phys. Lett., 15, 934

Xu, R.X., Qiao, G.J., & Han, J.L. 1997, A&A, 323, 395

Xu, R.X., Liu, J.F., Han, J.L., & Qiao, G.J. 1999a, ApJ, accepted

Xu, R.X., Qiao, G.J., & Zhang, B. 1999b, ApJ, 522, L109

Zhang, B., & Qiao, G.J. 1996, A&A, 310, 135

Zhang, B., Qiao, G.J., & Han, J.L. 1997b, ApJ, 491, 891

Zhang, B., Qiao, G.J., Lin, W.P., & Han, J.L. 1997a, ApJ, 478, 313
no-problem/9911/gr-qc9911104.html | ar5iv | text | # Are pre-big-bang models falsifiable by gravitational wave experiments?
## Motivations
One of the direct phenomenological predictions of inflationary cosmological models is the generation of a stochastic background of gravitational waves (GW’s). For “slow-roll” inflationary models this prediction is hardly testable: since the spectrum is almost flat over the huge frequency band $`10^{-16}\mathrm{Hz}`$–$`1\mathrm{GHz}`$ GWInfl , in the window $`10\mathrm{Hz}`$–$`1\mathrm{kHz}`$, where ground-based GW experiments operate, the maximum value of the energy spectrum $`\mathrm{\Omega }_{\mathrm{gw}}(f)\equiv \rho _c^{-1}d\rho _{\mathrm{gw}}(f)/d\mathrm{ln}(f)`$ compatible with the COBE measurements of the cosmic microwave background (CMB) temperature fluctuations at large scales – $`h_0^2\mathrm{\Omega }_{\mathrm{gw}}\lesssim 10^{-14}`$ KW92 – is well below the sensitivity expected for the third generation of detectors, $`h_0^2\mathrm{\Omega }_{\mathrm{gw}}\gtrsim 10^{-11}`$.
“Pre-big-bang” (PBB) models PRBB represent an alternative to the standard “slow-roll” inflationary scenario. In a minimal PBB model, one assumes that the initial state is the perturbative vacuum of super-string theory, the flat 10-dimensional Minkowski space-time; the Universe goes first through an inflationary phase where the curvature and the string coupling increase, eventually reaches a “stringy epoch” where the curvature scale is of order one in units of the string length, and finally evolves into a typical decelerated radiation/matter dominated era. The inflationary PBB phase has a precise consequence on the structure of $`\mathrm{\Omega }_{\mathrm{gw}}(f)`$, which affects the detectability of the stochastic background by laser interferometers, whose ”science-runs” are expected to begin at the end of 2001: in the low frequency range, the spectrum is characterized by a steep power law, $`\mathrm{\Omega }_{\mathrm{gw}}\propto f^3`$ BGGV ; indeed the COBE bound is easily evaded, and the spectrum can peak at frequencies of interest for GW experiments, while satisfying the existing experimental bounds BGV ; BMU .
For the rather general class of minimal PBB models, $`\mathrm{\Omega }_{\mathrm{gw}}(f)`$ depends on two free parameters: (i) the red-shift $`z_s\equiv f_1/f_s`$ of the high-curvature phase, which fixes the “knee” frequency $`f_s`$ between the low and high frequency regime ($`f_1\sim 10^{10}`$ Hz is the cut-off frequency of the spectrum, whose exact value is irrelevant for the issues discussed here); (ii) the value $`g_s`$ of the string coupling at the onset of the high-curvature phase, which determines the high-frequency slope of the spectrum. In Fig. 1 we show $`\mathrm{\Omega }_{\mathrm{gw}}(f)`$ for two choices of the free parameters; varying $`g_s`$ and $`z_s`$, the maximum value of the spectrum compatible with the constraints due to pulsar timing data and to the abundance of light elements at the nucleosynthesis epoch is $`h_0^2\mathrm{\Omega }_{\mathrm{gw}}\sim 10^{-7}`$ BMU .
$`\mathrm{\Omega }_{\mathrm{gw}}(f)`$ strongly depends on $`z_s`$ and $`g_s`$: they affect both the frequency behaviour and the peak value of the spectrum. Whether GW experiments will be able to detect a signal predicted by PBB models thus depends on the actual values of the ”true” parameters. Indeed, GW experiments represent one of the very few avenues by which these cosmological models can be verified, and could open a new era for studies of the very-early Universe and the structure of fundamental fields at high energies Creighton ; Maggiore . The emphasis of this contribution is not on finding the best theoretical scenario which would guarantee a detection. The goal of our analysis is to address to which extent GW observations can test PBB models, and in more detail to identify the region of the free-parameter space that experiments can probe. A similar analysis, in the context of string cosmology models, was carried out in AB .
## Results
In order to address whether a background characterized by a given energy spectrum $`\mathrm{\Omega }_{\mathrm{gw}}(f;z_s,g_s)`$ is indeed detectable or not, one needs to evaluate the signal-to-noise ratio (SNR) that can be obtained by cross-correlating the output of two detectors as a function of the model parameters, in our case $`z_s`$ and $`g_s`$ (see also AB ). The data analysis issues related to the detection of a stochastic background have been thoroughly discussed in ALRO , where the reader can find full details. Here we recall only the expression of the signal-to-noise ratio:
$$\mathrm{SNR}\simeq \frac{3H_0^2}{10\pi ^2}T_{\mathrm{obs}}^{1/2}\left[\int _{-\infty }^{\infty }df\frac{\gamma ^2(|f|)\mathrm{\Omega }_{\mathrm{gw}}^2(|f|;z_s,g_s)}{f^6S_1(|f|)S_2(|f|)}\right]^{1/2}.$$
(1)
In Eq. (1) $`T_{\mathrm{obs}}`$ is the observation time – which is usually assumed to be a few months long – $`H_0=100h_0`$ km sec<sup>-1</sup> Mpc<sup>-1</sup> is the Hubble constant, $`S_{i=1,2}(f)`$ is the noise spectral density of the $`i`$th detector, and $`\gamma (f)`$ is the overlap reduction function. The spectrum $`\mathrm{\Omega }_{\mathrm{gw}}(f;g_s,f_s)`$ for minimal PBB models is given in BMU .
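As an aside, the scaling of Eq. (1) can be made concrete with a toy numerical evaluation. All inputs below are illustrative assumptions, not values from the paper: a flat spectrum, a unit overlap reduction function, and a white noise power spectral density chosen only to be representative in order of magnitude.

```python
import math

# Toy evaluation of the cross-correlation SNR of Eq. (1).
# Every input here is an illustrative assumption, not a value from the paper.
h0 = 0.7
H0 = h0 * 3.24e-18        # Hubble constant in s^-1 (100 km/s/Mpc ~ 3.24e-18 s^-1)
T_obs = 1.0e7             # observation time in seconds (about 4 months)
Omega = 1.0e-9            # toy flat spectrum Omega_gw (assumption)
S_n = 1.0e-46             # toy one-sided noise PSD of each detector, Hz^-1 (assumption)
gamma = 1.0               # overlap reduction function set to 1 (co-aligned limit)

# Midpoint-rule integration of gamma^2 Omega^2 / (f^6 S1 S2) over 10-300 Hz
f_lo, f_hi, n = 10.0, 300.0, 200000
df = (f_hi - f_lo) / n
integral = sum(gamma**2 * Omega**2 / ((f_lo + (i + 0.5) * df)**6 * S_n**2) * df
               for i in range(n))

snr = (3.0 * H0**2 / (10.0 * math.pi**2)) * math.sqrt(T_obs) * math.sqrt(integral)
print(f"toy SNR ~ {snr:.1f}")
```

With these toy numbers the integral is dominated by the low-frequency end, as expected from the $`f^6`$ weighting in the denominator.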
Fig. 2 summarizes the results: it shows the contour plots of SNR for $`T_{\mathrm{obs}}=10^7`$ sec which can be achieved by cross-correlating the two 4-km LIGO interferometers located in Hanford and Livingston. The contours are drawn in the relevant 2-dimensional parameter space: in order to highlight the physical content, our parameter choice corresponds to $`\mathrm{log}_{10}(z_s)`$ and $`\mathrm{log}_{10}(g_s/g_1)`$; here $`g_1^2/4\pi =\alpha _{GUT}`$, where $`\alpha _{GUT}`$ is the gauge coupling at the unification scale ($`\alpha _{GUT}\simeq 1/20`$). The two free parameters of the models are defined over the following ranges: $`0<g_s/g_1<1`$ and $`1<z_s<10^{16}`$. In our analysis we have considered the estimated noise spectral densities for the initial, enhanced and advanced LIGO configurations (the so-called LIGO I, II, and III, respectively).
For initial LIGO, the minimum value of a detectable stochastic background is $`h_0^2\mathrm{\Omega }_{\mathrm{gw}}^{\mathrm{min}}\simeq 5\times 10^{-6}`$, cf. also ALRO , and therefore the experiments cannot provide any direct hint about PBB models: the maximum value of SNR, for fine-tuned parameters of the model, is just below one. However, the enhanced LIGO configuration will allow us to explore a relatively large parameter region: for SNR $`>5`$, one can probe models where $`g_s`$ differs by one order of magnitude with respect to its present value $`g_1`$ and about eight orders of magnitude in $`f_s`$; the SNR reaches a maximum $`\sim 100`$ when $`g_s\sim g_1`$. The advanced LIGO configuration – which however requires major technological developments – will enlarge the $`g_s`$-range that one can test by more than one order of magnitude; for selected parameter values the experiment could reach SNR $`\sim 1000`$.
Notice that the experiments are very sensitive to $`g_s`$, and achieve the maximum SNR for $`g_s\sim g_1`$.
One might wonder whether other interferometers and/or resonant detectors could provide relevant information for the problem at hand. Unfortunately the location, orientation and sensitivity of VIRGO, GEO600 and TAMA are such that any search involving one of these three instruments will not allow detection of a stochastic background of PBB gravitons; just as a reference, a VIRGO-GEO600 cross-correlation experiment will reach $`h_0^2\mathrm{\Omega }_{\mathrm{gw}}^{\mathrm{min}}\simeq 8\times 10^{-6}`$ for an integration time of 4 months. Nor are resonant antennas suitable: two ”bars” operating at the quantum limit with exactly the same resonance frequency would reach a maximum sensitivity $`h_0^2\mathrm{\Omega }_{\mathrm{gw}}^{\mathrm{min}}\simeq 5\times 10^{-6}`$ (see also Vitale ), which is a factor $`\sim 10`$ worse than the one required for the most optimistic theoretical prediction; experiments involving hollow spheres Coccia , which are currently under study, could reach $`h_0^2\mathrm{\Omega }_{\mathrm{gw}}^{\mathrm{min}}\simeq 10^{-7}`$ for detectors operating at the quantum limit and co-located; this would produce an SNR $`\sim 1`$ for a PBB model whose parameters are such that the peak of $`\mathrm{\Omega }_{\mathrm{gw}}(f)`$ lies right in the middle of the sphere frequency band.
## Conclusions
Minimal PBB models predict a GW stochastic background which could be detectable by cross-correlating the two LIGO instruments – or, more generally, two interferometers at a distance $`<3000`$ km and quasi-optimally oriented – with a sensitivity which is intermediate between the first and second stage. In the time frame 2004–2005, when the detectors are expected to operate in the so-called enhanced configuration, we will be able to place experimental constraints on PBB models over a fairly large portion of the free-parameter space. A more detailed analysis of the issues discussed here is currently in preparation.
no-problem/9911/gr-qc9911054.html | ar5iv | text | # On Wigner’s clock and the detectability of spacetime foam with gravitational-wave interferometers
## I Introduction
In their recent paper ”On the detectability of quantum spacetime foam with gravitational-wave interferometers,” Adler, Nemenman, Overduin, and Santiago claim that the way we use Wigner’s quantum clock in a gedanken timing experiment is not justified, thus casting doubt on the detectability of spacetime foam with gravitational-wave interferometers. In particular, they claim that the quantum uncertainty limit for the position of the quantum clock is actually much smaller than that obtained by Wigner and used by us. Since we were the first to propose using Wigner’s clock to explore the quantum structure of spacetime and to conclude that classical spacetime breaks down into ”quantum foam” in a manner quite different from the canonical picture , we feel a special obligation to respond to the criticism and to clarify the physics behind our proposal. We will show that the arguments by Adler et al. are invalid.
But first, we should make it clear that we merely want to find out what the low-energy limit of quantum gravity can tell us about the structure of spacetime. For that purpose, it suffices to employ the general principles of quantum mechanics and general relativity. We have little to say about, and for this work, have no use for, the correct theory of quantum gravity (be it string theory, Ashtekar variables/loop-gravity formalism, or something else). We have in mind the low-energy limit of quantum gravity as manifested in the low-frequency spectrum of the displacement noise levels registered in the gravitational-wave interferometers.
In the next section, we recapitulate our previous work on spacetime measurements and spacetime foam. In Section III, we respond to each of the four objections against our work raised by Adler et al. In Section IV, we answer some further questions which we think the readers may ask. We offer our conclusions in Section V. We point out that our spacetime foam model is consistent with the holographic principle.
## II Space-time measurements and the foaminess of spacetime
Suppose we want to measure the distance between two separated points A and B. To do this, we put a clock (which also serves as a light-emitter and receiver) at A and a mirror at B. A light signal is sent from A to B where it is reflected to return to A. If the clock reads zero when the light signal is emitted and reads $`t`$ when the signal returns to A, then the distance between A and B is given by $`l=ct/2`$, where $`c`$ stands for the speed of light. The next question is: What is the uncertainty (or error) in the distance measurement? Since the clock at A and the mirror at B are the agents in measuring the distance, the uncertainty of distance $`l`$ is given by the uncertainties in their positions. We will concentrate on the clock, expecting that the mirror contributes a comparable amount to the uncertainty in the measurement of $`l`$. Let us first recall that the clock is not stationary; its spread in speed at time zero is given by the Heisenberg uncertainty principle as
$$\delta v=\frac{\delta p}{m}\gtrsim \frac{\hbar }{2m\delta l},$$
(1)
where $`m`$ is the mass of the clock. This implies an uncertainty in the distance at time $`t`$,
$$\delta l(t)=t\delta v\gtrsim \left(\frac{\hbar }{m\delta l(0)}\right)\left(\frac{l}{c}\right),$$
(2)
where we have used $`t/2=l/c`$ (and we have dropped an additive term $`\delta l(0)`$ from the right hand side since its presence complicates the algebra but does not change any of the results). Minimizing $`(\delta l(0)+\delta l(t))/2`$, we get the quantum mechanical uncertainty relation
$$\delta l^2\gtrsim \frac{\hbar l}{mc}.$$
(3)
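To get a feeling for the size of this bound, the minimization leading to Eq. (3) can be checked numerically; the clock mass and distance below are illustrative choices, not values from the text.

```python
import math

# Order-of-magnitude check of Eq. (3) with illustrative inputs (not from
# the paper): a clock of mass m = 1 kg timing a distance l = 1 m.
hbar, c = 1.055e-34, 3.0e8
m, l = 1.0, 1.0
k = hbar * l / (m * c)           # the combination hbar*l/(m*c) appearing in Eq. (3)

x_star = math.sqrt(k)            # analytic minimizer of f(x) = x + k/x
f = lambda x: x + k / x
# Sanity check that x_star really minimizes the sum of the two contributions
assert f(x_star) <= f(0.5 * x_star) and f(x_star) <= f(2.0 * x_star)
print(f"delta_l >~ {x_star:.1e} m")   # ~6e-22 m: tiny, but mass-dependent
```

The mass dependence of this purely quantum-mechanical bound is exactly what the principle-of-equivalence argument below is designed to remove.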
Next, we make use of the principle of equivalence, by exploiting the equality of the inertial mass and the gravitational charge of the clock, to eliminate the dependence on $`m`$ in the above inequality. This will promote the quantum mechanical uncertainty relation to a quantum gravitational uncertainty relation, making the uncertainty expression useful. Let the clock at A be a light-clock consisting of two parallel mirrors (each of mass $`m/2`$), a distance of $`d`$ apart, between which bounces a beam of light. On the one hand, the clock must tick off time fast enough such that $`d/c\lesssim \delta l/c`$, in order that the distance uncertainty is not greater than $`\delta l`$: $`\delta l\gtrsim d`$. On the other hand, $`d`$ is necessarily larger than the Schwarzschild radius $`Gm/c^2`$ of the mirrors ($`G`$ is Newton’s constant) so that the time registered by the clock can be read off at all: $`d\gtrsim \frac{Gm}{c^2}`$. From these two requirements, it follows that
$$\delta l\gtrsim \frac{Gm}{c^2},$$
(4)
the product of which and Eq. (3) yields the (low-energy) quantum gravitational uncertainty relation
$$\delta l\gtrsim (ll_P^2)^{1/3},$$
(5)
where $`l_P=(\frac{\hbar G}{c^3})^{1/2}`$ is the Planck length. The intrinsic uncertainty in space-time measurements just described can be interpreted as inducing an intrinsic uncertainty in the space-time metric $`g_{\mu \nu }`$. Noting that $`\delta l^2=l^2\delta g`$ and using Eq. (5) we get
$$\delta g_{\mu \nu }\gtrsim (l_P/l)^{2/3}.$$
(6)
The fact that there is an uncertainty in the space-time metric means that space-time is foamy. Eq. (5) and Eq. (6) constitute our model of spacetime foam. We note that even on the size of the whole observable universe ($`10^{10}`$ light-years), Eq. (5) yields a fluctuation of only about $`10^{-15}`$ m. We further note that, according to our spacetime foam model, space-time fluctuations lead to decoherence phenomena. The point is that the metric fluctuation $`\delta g`$ induces a multiplicative phase factor in the wave-function of a particle (of mass $`m`$)
$$\psi \to e^{i\delta \varphi }\psi ,$$
(7)
given by
$$\delta \varphi =\frac{1}{\hbar }\int mc^2\delta g^{00}dt.$$
(8)
One consequence of this additional phase is that a point particle with mass $`m>m_P`$ ($`m_P\equiv \hbar /cl_P`$ is the Planck mass) is a classical particle (i.e., it suffices to treat it classically).
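As a quick numerical illustration of Eq. (5) (not part of the original argument), one can evaluate the fluctuation over the size of the observable universe:

```python
# Numeric check that Eq. (5) gives only a femtometer-scale fluctuation
# across the whole observable universe (an illustration, not from the paper).
l_P = 1.6e-35                      # Planck length, m
l_universe = 1.0e10 * 9.46e15      # 10^10 light-years in meters

delta_l = (l_universe * l_P**2) ** (1.0 / 3.0)
print(f"delta_l ~ {delta_l:.1e} m over {l_universe:.1e} m")
```

The cube-root scaling is what makes the effect both tiny and, as discussed next, potentially within reach of interferometric noise measurements.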
Though the fluctuations that space-time undergoes are extremely small, Amelino-Camelia has recently argued (convincingly, we think) that modern gravitational-wave interferometers may soon be sensitive enough to test our model of space-time foam. The idea is fairly simple. Due to the foaminess of space-time, in any distance measurement that involves an amount of time $`t`$, there is a minute uncertainty $`\delta l\sim (ctl_P^2)^{1/3}`$. But measuring minute changes in the relative distances of the test masses (the mirrors) is exactly what a gravitational-wave interferometer is designed to do. Hence, the intrinsic uncertainty in a distance measurement for a time $`t`$ manifests itself as a displacement noise (in addition to other sources of noise) that infests the interferometers
$$\sigma \sim (ctl_P^2)^{1/3}.$$
(9)
We can write the displacement noise in terms of its Fourier transform, the associated displacement amplitude spectral density $`S(f)`$ of frequency $`f`$. For a frequency band limited from below by the inverse of the observation time $`t`$, $`\sigma `$ is given in terms of $`S(f)`$ by
$$\sigma ^2=\int _{1/t}^{f_{max}}[S(f)]^2df.$$
(10)
For the displacement noise given by Eq. (9), the associated $`S(f)`$ is
$$S(f)\sim f^{-5/6}(cl_P^2)^{1/3}.$$
(11)
Since we are considering only the low-energy limit of quantum gravity, we expect this formula for $`S(f)`$ to hold only for frequencies much smaller than the Planck frequency ($`c/l_P`$).
We can now use the existing noise-level data obtained at the Caltech 40-meter interferometer to put a bound on $`l_P`$. In particular, by comparing Eq. (11) with the observed noise level of $`3\times 10^{-19}\mathrm{m}\mathrm{Hz}^{-1/2}`$ near 450 Hz, which is the lowest noise level reached by the interferometer, we obtain the bound $`l_P\lesssim 10^{-29}`$ m which, of course, is consistent with the known value $`l_P\sim 10^{-35}`$ m. Since $`S(f)`$ goes like $`f^{-5/6}`$ according to Eq. (11), we can look forward to the LISA generation of gravitational-wave interferometers for improvement by optimizing the performance at low frequencies. (We hope that the gain by going to lower frequencies will not be offset by other factors such as a much larger arm length of the interferometers.)
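The bound just quoted can be re-derived with round numbers. The snippet below is only an illustration, taking the quoted noise level at face value and inverting the spectral density of Eq. (11), written here as S(f) ~ f^(-5/6) (c l_P^2)^(1/3), for the Planck length.

```python
# Rough re-derivation of the Caltech 40-meter bound on l_P (illustration).
c = 3.0e8                 # speed of light, m/s
f = 450.0                 # Hz, frequency of the lowest observed noise level
S_obs = 3.0e-19           # m Hz^(-1/2), observed noise floor

# Setting S_obs = f^(-5/6) (c l_P^2)^(1/3) and solving for l_P:
# l_P = sqrt( (S_obs * f^(5/6))^3 / c )
l_P_bound = ((S_obs * f ** (5.0 / 6.0)) ** 3 / c) ** 0.5
print(f"l_P <~ {l_P_bound:.1e} m")
```

The result lands around 10^-29 m, six orders of magnitude above the true Planck length, which is why the existing data constrain but do not yet probe the model.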
## III Reply to the comments by Adler et al
In this section we reply to the four points raised by Adler et al. in their paper ”On the detectability of quantum spacetime foam with gravitational-wave interferometers.”
(1) Ref. claims that if Wigner’s clock is quantum mechanical but not free, then the uncertainty limit becomes much smaller than that (Eq. (3)) obtained by Wigner and used by us. In particular, Adler et al. give the example of a quantum clock bound in a harmonic oscillator potential. These authors err in neglecting the fact that the clock is then bound to something. We can now consider that something to be part of the clock (after all, we have already considered the light emission and detection devices as part of our clock), and proceed with our argument presented in Section II.
(2) Ref. claims that Wigner’s limit is based on another unrealistic assumption: that the clock does not interact with the environment. In particular, Adler et al. point out that, if the clock is sufficiently large or complex, it will interact with its environment in such a way that its wave function decoheres. In addition, these authors claim, such interactions may localize or ”collapse” the wave function, so that the clock wave function does not spread linearly over macroscopic times, as opposed to what we have used in Eq. (2). We admit that Adler et al. have raised a good point. But the question of wave-function decoherence in the context of fundamental spacetime measurements is quite subtle. We think it is much more reasonable that the phenomenon of environment-induced decoherence is an outcome (rather than an input) of quantum spacetime measurements at the fundamental level, with gravity being the universal agent of quantum decoherence, as argued in Section II and as emphasized by us .
(3) Adler et al. observe that the existing noise-level data obtained at the Caltech 40-meter interferometer can be used to place a lower limit on the effective mass of the hypothetical clock in Eq. (3). They find the effective clock mass to be larger than 3 grams, which, according to them, is such a remarkably large mass that it hardly seems plausible as a fundamental property of spacetime. This time, these authors err in forgetting that the length scale involved in the Caltech 40-meter interferometer measurement is macroscopic and has nothing to do with fundamental length scales. To appreciate this point, one can use Eq. (3) and Eq. (5) to show that the optimum mass for Wigner’s clock (optimum in the sense that it yields the smallest uncertainty in distance measurements) is given by
$$m\sim m_P(l/l_P)^{1/3}.$$
(12)
Thus the optimum mass of the quantum clock depends on $`l`$, the distance being measured. If $`l`$ is macroscopic, the optimum mass is much larger than the Planck mass. On the other hand, if we dare (recall that we expect our result to be valid only for the low-energy domain of quantum gravity) to use Eq. (12) in the measurement of a microscopic distance approaching the fundamental length scale $`l_P`$, the optimum mass of the hypothetical clock would approach $`m_P`$, the fundamental mass scale. Therefore, the relatively large mass of the clock found in Ref. is to be expected, since the distance involved in the Caltech interferometer measurement at 450 Hz is huge compared to the Planck length.
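For concreteness, Eq. (12) can be evaluated at the two extremes discussed above; the 40-m arm length is used here only as an illustrative macroscopic scale.

```python
# Illustration of Eq. (12): the optimum clock mass grows with the measured
# distance, reducing to the Planck mass only at the Planck scale.
m_P = 2.2e-8      # Planck mass, kg
l_P = 1.6e-35     # Planck length, m

def optimum_clock_mass(l):
    """Eq. (12): m ~ m_P (l/l_P)^(1/3), an order-of-magnitude estimate."""
    return m_P * (l / l_P) ** (1.0 / 3.0)

print(f"l = 40 m -> m ~ {optimum_clock_mass(40.0):.0e} kg")  # macroscopic: tens of tonnes
print(f"l = l_P  -> m ~ {optimum_clock_mass(l_P):.0e} kg")   # back to the Planck mass
```

This makes explicit why a "large" effective clock mass for a macroscopic baseline carries no information about fundamental scales.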
The above three objections raised by Adler et al. are all directed at the quantum uncertainty limit (Eq. (3)) obtained by Wigner and used by us. There is a way, albeit an indirect one, to show that the uncertainty limit (Eq. (3)) actually should be quite palatable even to those who believe that the intrinsic uncertainty in distance measurements is independent of the distance being measured and is given simply by the Planck length. All it takes is to use Eq. (3) as the starting point. But for the bound on $`m`$, instead of Eq. (4), one uses
$$l\gtrsim \frac{Gm}{c^2},$$
(13)
which is nothing but the mathematical statement of the obvious observation that, to measure the distance from A to B, point B should not be inside the Schwarzschild radius of the clock at A. Then one finds
$$\delta l\gtrsim l_P,$$
(14)
the canonical uncertainty in distance measurements. Thus the only question remaining is whether the more restrictive bound on $`m`$ given by Eq. (4) is also correct. This brings us to the next comment.
(4) Adler et al. note that the presence of the measurement clock system certainly produces a distortion of spacetime, but Eq. (4) tells us that it also produces an uncertainty in spacetime distances of about the same amount. (This fact has not escaped our attention. See Ref. and .) They contend that Eq. (4) must be wrong. In particular, taking the spinning Earth as the quantum clock, they assert that, with Eq. (4), one would conclude that objects in the vicinity of the Earth have a minimum intrinsic position uncertainty of roughly the Schwarzschild radius of the Earth, which is about 1 cm, and this is manifestly false by many orders of magnitude. Our reply is simply that one cannot use the spinning Earth, excellent as it is as a clock for daily life, as Wigner’s quantum clock in the gedanken timing experiment. For one thing, the spinning Earth, by itself, cannot function as a clock. One also needs the Sun or the stars, for example, thus complicating the already huge timing device. As a clock, the spinning Earth is not very accurate. Let us imagine building a telescope with an aperture as large as the Earth. For visible light, its resolving power is about $`10^{-14}`$ radians. The Earth and the telescope rotate by about $`10^{-4}`$ radians per sec. Hence, the spinning Earth, as a clock, cannot be precise beyond the $`10^{-10}`$ sec level, which can be translated to yield a distance measurement accuracy of about 1 cm, hardly the precision needed for spacetime measurements at the fundamental level. Intuitively, it is also clear that a quantum spacetime measurement cannot tolerate the use of a monstrously huge and massive clock like the spinning Earth, which causes such a distortion in the geometry of spacetime that it completely overwhelms the uncertainty in distance measurements.
Recall that even on the size of the observable universe, the end result of our analysis yields a distance fluctuation of only about $`10^{-15}`$ m, which is much smaller than the Schwarzschild radius of the Earth. It is true that we have merely used the light-clock as a model clock and there may be more ideal clocks to use; but due to its simplicity, the light-clock fits the bill of a quantum clock for the gedanken timing experiment at the fundamental level.
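The arithmetic behind the spinning-Earth estimate is simple enough to spell out; the snippet below uses the rounded order-of-magnitude values quoted in the text.

```python
# Order-of-magnitude sketch of the spinning-Earth timing estimate, using
# the rounded values quoted in the text.
c = 3.0e8                    # speed of light, m/s
resolving_power = 1.0e-14    # rad, Earth-sized telescope at visible wavelengths
omega_earth = 1.0e-4         # rad/s, Earth's rotation rate (~7.3e-5 rounded)

t_min = resolving_power / omega_earth        # smallest readable time step
d_min = c * t_min                            # corresponding distance accuracy
print(f"t_min ~ {t_min:.0e} s, d_min ~ {d_min * 100:.0f} cm")
```

The resulting cm-scale accuracy confirms that the spinning Earth, however convenient for everyday timekeeping, is hopeless for fundamental spacetime measurements.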
## IV Comments on some other questions
In this section, we comment on four more questions that we think some of our readers may ask.
(1) In Section II, we require our light-clock to tick off time fast enough such that $`d/c\lesssim \delta l/c`$, implying that $`d/c`$ is the smallest unit of time for our light-clock. Some readers may well ask whether it is not possible to have smaller units of time by taking fractions of $`d/c`$. If it is possible, then the inequality $`d/c\lesssim \delta l/c`$ need not hold. Our reply is that, to be accurate, $`d/c`$ is indeed the smallest unit of time for our light-clock. To make an analogy, one does not use a minute-clock to time a 100-m dash which takes only about 10 sec, a fraction of a minute.
(2) Recall that our use of the light-clock in the gedanken timing experiment yields Eq. (4). One may wonder if the ensuing result (Eq. (5)) is not just an artifact of our model clock. Thus it is logical to ask whether it is not possible to replace our light-clock with some other type of clock such that all those inequalities (including the Schwarzschild bound) no longer hold. In the absence of explicit examples, it is hard to draw any conclusions. But let us consider a clock made of a small object revolving around a black-hole just outside its event-horizon. (And let us ignore the gravitational radiation problem.) Timing is provided by the periodicity of the motion. Then there is no analog of $`d`$, the separation of mirrors in the light-clock, and it follows that those inequalities are no longer valid, so goes the hypothetical argument. The trouble with this argument is that actually there is an analog of $`d`$, given by the size of the orbit around the black-hole; and the mass of the clock here is that of the black-hole. Therefore it follows that those inequalities in Eqs. (4) and (5) (as an order of magnitude estimate) still hold.
(3) In Eq. (10), we have used $`1/t`$ as the lower limit of integration; but what if the lower limit is actually a multiple (call it n) of $`1/t`$? The answer is that, since Eq. (9) holds only up to a multiplicative factor of order 1, a short calculation shows that so long as the multiple n is no more than 2 orders off unity, Eq. (10) stands as it is.
(4) One may worry that the metric fluctuations given by Eq. (6) yield an unacceptably large fluctuation in energy density. Since we asked ourselves this very question and answered it in Ref. already, we will be very brief here. But let us generalize the discussion to metric fluctuations of the form parametrized by $`a`$ with $`0<a\le 1`$
$$\delta g\gtrsim (l_P/l)^a$$
(15)
(corresponding to distance uncertainties of $`\delta l\gtrsim l^{1-a}l_P^a`$ and displacement amplitude spectral densities of $`S(f)\sim c^{1-a}l_P^af^{a-3/2}`$). Models with larger spacetime fluctuations are parametrized by smaller values of $`a`$. We note that, for our model of spacetime foam, $`a=2/3`$, while, for the canonical model, $`a=1`$. The case $`a=1/2`$ corresponds to the model of spacetime foam considered in Ref. . Regarding the metric fluctuation as a gravitational wave quantized in a spatial box of volume $`V`$, one finds that the energy density is given by
$$\rho \sim \left(\frac{m_Pc^2}{V}\right),$$
(16)
for $`1/2<a\le 1`$. Thus the energy density associated with metric fluctuations given by Eq. (15) is obviously small in the large-volume ($`V\gg l_P^3`$) limit which we have assumed. Note that the energy density is of the form given by Eq. (16) and holds, as an order of magnitude estimate (consistent with what we have been using), independent of the parameter $`a`$ so long as $`a`$ is not too close to 1/2. For $`a=1/2`$, one gets
$$\rho \sim \left(\frac{m_Pc^2}{V}\right)\mathrm{ln}\left(\frac{V^{1/3}}{l_P}\right).$$
(17)
For $`0<a<1/2`$, one finds
$$\rho \sim \left(\frac{m_Pc^2}{V}\right)\left(\frac{V^{1/3}}{l_P}\right)^{1-2a}.$$
(18)
The trend is clear: in general, larger spacetime fluctuations cost more energy. Note that the energy density $`\rho `$ associated with metric fluctuations (Eq. (15)) is the smallest for the range of $`a`$ which includes the canonical model and our model of spacetime foam.
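The trend just described can be tabulated directly from Eqs. (16)-(18). The 1-meter region below is an illustrative choice, and all factors are quoted only up to order unity.

```python
import math

# Scaling of the energy cost of metric fluctuations (l_P/l)^a, per
# Eqs. (16)-(18): rho ~ (m_P c^2 / V) times the factor computed below.
l_P = 1.6e-35   # Planck length, m

def energy_density_factor(a, L):
    """rho * V / (m_P c^2) up to O(1) factors; L = V^(1/3)."""
    if a > 0.5:
        return 1.0                        # Eq. (16)
    if a == 0.5:
        return math.log(L / l_P)          # Eq. (17)
    return (L / l_P) ** (1.0 - 2.0 * a)   # Eq. (18)

L = 1.0  # a 1-meter box (illustrative choice)
for a in (1.0, 2.0 / 3.0, 0.5, 0.4):
    print(f"a = {a:.2f}: rho V / (m_P c^2) ~ {energy_density_factor(a, L):.1e}")
```

The sharp jump below a = 1/2 illustrates why the models with 1/2 < a <= 1, including ours, sit at the bottom of the energy cost.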
## V Conclusions
In Ref. , Adler et al. raise four objections to our work on spacetime measurements and spacetime foam. Three of the objections are related to the question whether the quantum uncertainty limit obtained by Wigner and used by us is valid. These authors also criticize the gravitational uncertainty limit (Eq. (4)) obtained by us; they conclude that it is an artifact of our choice of a particular type of hypothetical clock and is, therefore, non-fundamental in nature. While they have raised some good points, we believe their argument is flawed (as shown in Section III). We agree that the question of an ideal quantum clock is not yet settled. But it is just inappropriate to use the spinning Earth as a quantum clock in a fundamental spacetime measurement. History has taught us that fundamental physics is best explored with simple devices; our light-clock is a simple device.
Since all the criticism by Adler et al. is related to issues of clocks, perhaps a better argument for a spacetime foam different from the canonical model is one that does not use clocks. As shown in Section IV, the energy density associated with spacetime quantum fluctuations takes on the smallest (and comparable) values for those spacetime foam models with the parameter $`a`$ in the range $`1/2<a\le 1`$ so long as $`a`$ is not too close to 1/2. So, it is possible that Nature chooses to have a larger spacetime fluctuation (than that predicted by the canonical model) at a comparable cost of energy. This argument is very loose, but hopefully we have made our point. Only future experiments can tell which value of $`a`$ (i.e., which spacetime foam model) Nature picks. At present, if we assume that the distance uncertainty expressions given above are not off by more than an order of magnitude, a short calculation shows that the existing data provided by the Caltech 40-meter interferometer rule out models with $`a<0.54`$. We can expect more stringent bounds on $`a`$ with modern gravitational-wave interferometers.
There is one theoretical consideration which sets our model of spacetime foam ($`a=2/3`$) apart from the others. It is its connection to the holographic principle which asserts that the number of degrees of freedom of a region of space is bounded (not by the volume but) by the area of the region in Planck units. To see that, let us consider a region of space with linear dimension $`l`$. According to the conventional wisdom, the region can be partitioned into cubes as small as $`l_P^3`$. It follows that the number of degrees of freedom of the region is bounded by $`(l/l_P)^3`$, i.e., the volume of the region in Planck units. But according to our spacetime foam model, the smallest cubes inside that region have a linear dimension of order $`(ll_P^2)^{1/3}`$. Accordingly, the number of degrees of freedom of the region is bounded by $`[l/(ll_P^2)^{1/3}]^3`$, i.e., the area of the region in Planck units, as stipulated by the holographic principle. Thus one may even say that the holographic principle has its origin in the quantum fluctuations of spacetime. Evidence for our spacetime foam model would lend experimental support for the holographic principle.
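The counting argument above reduces to a one-line algebraic identity, which the following sketch checks numerically (the 1-meter region is an illustrative choice):

```python
# Numeric check of the counting argument: with foam cells of size
# (l * l_P^2)^(1/3), the number of cells in a region of size l reproduces
# the area law (l/l_P)^2 rather than the volume law (l/l_P)^3.
l_P = 1.6e-35
l = 1.0                                     # a 1-meter region (illustrative)

n_volume = (l / l_P) ** 3                   # conventional counting
n_foam = (l / (l * l_P**2) ** (1.0 / 3.0)) ** 3
n_area = (l / l_P) ** 2                     # holographic bound

print(f"foam/area ratio = {n_foam / n_area:.3f}")
```

Algebraically, [l / (l l_P^2)^(1/3)]^3 = l^2 / l_P^2, so the agreement is exact rather than approximate.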
Finally we recall that spacetime (metric) fluctuations can be regarded as a kind of quantized gravitational waves. It is uncanny that, through future refinements, modern gravitational-wave interferometers like LIGO, VIRGO, and LISA, which are designed to detect gravitational waves from neutron stars, supernovae, black holes, and the like, may also be able to detect, as a by-product, a very different kind of gravitational waves — the kind that encodes the quantum fluctuations of spacetime.
Acknowledgments
One of us (YJN) thanks R. Weiss for a useful discussion. This work was supported in part by the U.S. Department of Energy under #DF-FC02-94ER40818 and #DE-FG05-85ER-40219, and by the Bahnson Fund of the University of North Carolina at Chapel Hill. Part of the work was carried out by YJN while he was on leave of absence at MIT. He thanks the faculty at the Center for Theoretical Physics for their hospitality. |
Figure 1: Experimental (a) and SMM generated (c) Campi scatter plots. Three cuts are introduced to select liquid-like events (Cut 1), gas-like events (Cut 3) and critical events (Cut 2). Panels (b) and (d) show the correlation between temperatures and excitation energies for experimental and SMM generated events. Events belonging to the different cuts are shown by blue (1), red (2) and green (3) squares; the sizes of the squares are proportional to the yields. The solid lines in panels (b) and (d) show the mean temperature of all events at given $`\epsilon ^{*}`$.
UFTP-498/1999
Studying Phase Transitions in Nuclear Collisions
I.N. Mishustin
The Kurchatov Institute, Russian Research Center, 123182 Moscow, Russia;
The Niels Bohr Institute, Blegdamsvej 17, DK-2100 Copenhagen Ø, Denmark;
Institute for Theoretical Physics, J.-W. Goethe University, Robert-Meyer Str. 8-10, D-60054 Frankfurt am Main, Germany
## Abstract
In this talk I discuss three main topics concerning the theoretical description and observable signatures of possible phase transitions in nuclear collisions. The first is the multifragmentation of equilibrated sources and its connection to a liquid-gas phase transition in finite systems. The second deals with the Coulomb excitation of ultrarelativistic heavy ions, resulting in their deep disintegration. The third is devoted to the description of a first order phase transition in rapidly expanding matter. The resulting picture is that a strong collective flow of matter leads to the fragmentation of a metastable phase into droplets. If the transition from quark-gluon plasma to hadron gas is of first order, it will manifest itself through strong nonstatistical fluctuations in observable hadron distributions.
INTRODUCTION
A general goal of present and future experiments with heavy-ion beams is to study the properties of strongly interacting matter away from the nuclear ground state. The main interest is focussed on searching for and studying possible phase transitions. Several phase transitions are predicted in different domains of temperature $`T`$ and baryon density $`\rho _B`$. There is no doubt that there should be a first order phase transition of the liquid-gas type in normal nuclear matter. This follows simply from the existence of the nuclear bound state at the saturation density $`\rho _0\approx 0.15`$ fm<sup>-3</sup>. Therefore, at $`\rho _B<\rho _0`$ and low temperatures, $`T<T_c\approx 10`$ MeV, the matter will organize itself in the form of a mixed phase with droplets of nuclear liquid surrounded by the nucleon gas. The only question is whether the relatively small amount of excited nuclear matter produced in nuclear collisions, with its limited lifetime, is sufficient for this phase transition to be observed. Based on recent data on the nuclear caloric curve and temperature fluctuations, I am tempted to give a positive answer to this question. This topic will be discussed in the first part of the talk after a short description of the Statistical Multifragmentation Model (SMM), which provides a basis for the theoretical analysis.
The situation at high $`T`$ and nonzero baryon chemical potential $`\mu `$ ($`\rho _B>0`$) is not so clear, although everybody is sure that the deconfinement and chiral transitions should occur somewhere. The phase structure of QCD is not yet fully understood. Reliable lattice calculations exist only for $`\mu =0`$ ($`\rho _B=0`$) where they predict a second order phase transition or crossover at $`T160`$ MeV. As model calculations show, the phase diagram in the $`(T,\mu )`$ plane may contain a first order transition line (below called the critical line) terminated at a (tri)critical point . Possible signatures of this point in heavy-ion collisions are discussed in ref. . Under certain non-equilibrium conditions, a first order transition is also predicted for symmetric quark-antiquark matter .
A striking feature of central heavy-ion collisions at high energies, confirmed in many experiments (see e.g. ), is a very strong collective expansion of matter. The applicability of equilibrium concepts for describing phase transitions under such conditions becomes questionable. In the last part of the talk I demonstrate that non-equilibrium phase transitions in rapidly expanding matter can lead to interesting phenomena which, in a certain sense, can be even easier to observe .
In the middle part of the talk I address a question which is closely related to the main topic of this conference. It illustrates how the knowledge accumulated in intermediate-energy heavy-ion physics can be used for ultrarelativistic heavy-ion colliders. Namely, I will discuss the excitation of nuclei by the Lorentz-contracted and strongly enhanced Coulomb fields of ultrarelativistic heavy ions. As is well known, this process can be treated in terms of equivalent photons. Their flux grows as the square of the nuclear charge, and their characteristic energy is proportional to the relative Lorentz factor of the colliding nuclei. This is why Coulomb excitation of nuclei becomes especially important in high-energy heavy-ion colliders such as RHIC and LHC. The calculations show that in such colliders the equivalent photon spectrum extends far above the Giant Resonance region, into the GeV domain. The absorption of such a photon by a nucleus leads to its high excitation and subsequent disintegration. This might be an important factor determining the lifetime of ultrarelativistic heavy-ion beams.
STATISTICAL MULTIFRAGMENTATION AND LIQUID-GAS PHASE TRANSITION
When a nucleus is suddenly heated up to a temperature $`T`$ it starts expanding to adjust to a new equilibrium density $`\rho _0(T)`$, which is lower than the equilibrium density at zero temperature, $`\rho _0`$. If the initial temperature is high enough, the expansion is unlimited. At some stage of expansion the system enters the spinodal region, where the homogeneous distribution of matter becomes thermodynamically unstable. Therefore, the nucleons form smaller and bigger clusters or droplets with density close to $`\rho _0`$. This clusterization process resembles a liquid-gas phase transition in ordinary fluids. In the transition region the matter is very soft in the sense that the sound velocity is close to zero (soft point). This also means that the expansion is slow and the system has enough time to find the most favorable cluster-size distribution maximizing the entropy.
At a later stage of expansion the system reaches a so-called freeze-out state, in which clusters cease to interact with each other. This break-up state of the system can be described within a statistical approach. In 1985 we constructed the Statistical Multifragmentation Model (SMM), which up to now is one of the most successful realizations of this approach for finite nuclear systems. The model and its numerous applications are described in detail in a recent review . A similar model was also constructed by Gross . In this talk I outline only some general features of the SMM and give a few examples of how it works.
It is assumed that at break-up the system consists of primary hot fragments and nucleons in thermal equilibrium. Each break-up channel or partition, $`f`$, is specified by the multiplicities of different species, $`N_{AZ}`$, constrained by the total baryon number $`A_0`$ and charge $`Z_0`$. The total fragment multiplicity is defined as $`M=_{AZ}N_{AZ}`$. The probabilities of different break-up channels are calculated in an approximate microcanonical way according to their statistical weights,
$$W_f\propto \mathrm{exp}\left[S_f(E^{*},V,A_0,Z_0)\right],$$
(1)
where $`S_f`$ is the entropy of a channel $`f`$ at excitation energy $`E^{*}`$ and volume $`V`$.
Translational degrees of freedom of fragments are described by the Boltzmann statistics while the internal excitations of individual fragments with $`A>4`$ are calculated according to the quantum liquid-drop model. An ensemble of microscopic states corresponding to a break-up channel $`f`$ is characterized by a temperature $`T_f`$ which is determined from the energy balance equation
$$\frac{3}{2}T_f(M-1)+\underset{(A,Z)}{\sum }E_{AZ}(T_f)N_{AZ}+E_f^C(V)-Q_f=E^{*}.$$
(2)
Here the first term comes from the translational motion, the second term includes internal excitation energies of individual fragments, the third term is the Coulomb interaction energy and the last one is the Q-value of the channel $`f`$. The excitation energy $`E^{*}`$ is measured with respect to the ground state of the compound nucleus ($`A_0`$,$`Z_0`$). It is fixed for all fragmentation channels while the temperature $`T_f`$ fluctuates from channel to channel.
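To illustrate how Eq. (2) determines the channel temperature, the sketch below solves the energy balance for a toy partition by bisection. The quadratic internal-energy form E_AZ(T) = A T²/ε₀ with ε₀ = 16 MeV is a schematic assumption, not a parameter quoted in this talk, and the Coulomb and Q-value terms are set to zero for simplicity:

```python
# Toy illustration of the energy balance, Eq. (2): solve for the channel
# temperature T_f by bisection.  The internal-energy form
# E_AZ(T) = A*T^2/eps0 with eps0 = 16 MeV is a schematic Fermi-gas
# assumption; fragments with A <= 4 carry no internal excitation,
# as in the text.

EPS0 = 16.0  # MeV, assumed inverse level-density parameter

def lhs(T, partition, e_coulomb=0.0, q_value=0.0):
    """Left-hand side of Eq. (2); partition = [(A, Z, N_AZ), ...]."""
    M = sum(n for (_, _, n) in partition)      # total multiplicity
    translational = 1.5 * T * (M - 1)
    internal = sum(n * A * T ** 2 / EPS0
                   for (A, _, n) in partition if A > 4)
    return translational + internal + e_coulomb - q_value

def solve_temperature(e_star, partition):
    lo, hi = 0.0, 50.0                         # MeV bracket
    for _ in range(60):                        # bisection steps
        mid = 0.5 * (lo + hi)
        if lhs(mid, partition) < e_star:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# toy channel: two A = 50 fragments plus 10 free nucleons, E* = 400 MeV
T_f = solve_temperature(400.0, [(50, 20, 2), (1, 0, 10)])
print(round(T_f, 2))
```

Since every term on the left-hand side grows monotonically with $`T`$, bisection always converges to the unique channel temperature.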
The total break-up volume is parametrized as $`V=(1+\kappa )V_0`$, where $`V_0`$ is the compound nucleus volume at normal density and the model parameter $`\kappa `$ is the same for all channels. The entropy associated with the translational motion of fragments is determined by the “free” volume, $`V_f`$, which is only a fraction of the total break-up volume $`V`$. In the SMM $`V_f(M)`$ is parametrized in such a way that it grows almost linearly with the primary fragment multiplicity $`M`$ or, equivalently, with the excitation energy $`\epsilon ^{*}=E^{*}/A_0`$ of the system .
For given inputs $`A_0`$, $`Z_0`$ and $`\epsilon ^{*}`$ the individual multifragment configurations are generated by the Monte Carlo method. After the break-up the hot primary fragments propagate in a common Coulomb field and lose their excitation. The most important de-excitation mechanisms included in the SMM are the simultaneous Fermi break-up of lighter fragments ($`A\le 16`$) and the evaporation from heavier fragments, including the compound-like residues. In refs. one can find recent examples showing how well the SMM works in describing the multifragmentation of thermalized sources.
In ref. an equation of state of a multifragment system was calculated for the grand canonical version of the SMM. As expected, it shows clear signs of a liquid-gas phase transition with a critical temperature of about 7 MeV. In the transition region the pressure isotherms are very flat indicating that the sound velocity is very small. In this region the compressibility and specific heat have nonmonotonic behaviour.
The most interesting prediction of the statistical model, a plateau in the caloric curve $`T(\epsilon ^{*})`$, was formulated already in 1985 . Since then it has been a challenge for experimentalists to measure the nuclear caloric curve. The first measurements were performed at GSI by the ALADIN collaboration only in 1995 . They showed impressive agreement with the theoretical prediction. These results initiated an avalanche of other measurements and a lively discussion in the community (see the latest ALADIN results in ref. ).
Most temperature measurements are based on the Albergo method, which relates the temperature to a double ratio of isotope yields. The analysis shows (see for instance ) that the temperatures extracted by this method are very sensitive to side-feeding and nuclear structure effects. According to the SMM, the observed light isotopes are produced mainly by the secondary decays of hot primary fragments. This leads to a difference between the isotopic temperatures and the true thermodynamical temperature at freeze-out (see the detailed analysis in ref. ). In particular, isotopic temperatures typically have a less pronounced plateau than the true temperature, which can even show a backbending. In recent years several comparisons have been made (see examples in refs. ) which generally show very good agreement between theory and experiment.
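As an aside, the Albergo double-ratio thermometer can be sketched in a few lines. The He-Li constants used below (B = 13.3 MeV, a = 2.18) are commonly quoted literature values and should be treated as assumptions here, as are the example yields:

```python
import math

# Sketch of the Albergo double-ratio thermometer.  The He-Li constants
# (B = 13.3 MeV, a = 2.18) are commonly quoted literature values and
# are assumptions here; the yields below are purely illustrative.

def albergo_temperature(y_li6, y_li7, y_he3, y_he4, B=13.3, a=2.18):
    """Isotopic temperature from the double yield ratio."""
    R = (y_li6 / y_li7) / (y_he3 / y_he4)
    return B / math.log(a * R)

# example yields (arbitrary units, purely illustrative)
print(round(albergo_temperature(1.0, 2.0, 0.3, 1.8), 2))
```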
One should bear in mind that the ALADIN caloric curves are measured for a wide ensemble of decaying sources associated with the projectile or target spectators produced in peripheral nuclear collisions. The thermodynamical significance of such observations would increase if the measurements were done for a fixed source size with varying excitation energy. Temperature measurements on an event-by-event basis would also make it possible to study its fluctuations and therefore the heat capacity of the nuclear system. Such an analysis was performed recently by the Bologna group in the study of quasi-projectile (QP) fragmentation in peripheral Au+Au collisions at 35 A MeV. In this analysis only the events with reconstructed QP charges $`70<Z_{QP}<88`$ were included. The excitation energy, determined by a calorimetric method, varied for these events from 0.5 to about 8 MeV/nucleon.
Measuring temperatures event by event is of course a nontrivial task. An attempt at estimating the event “temperature” (below denoted by $`\theta `$) was made in ref. . The idea is to apply the energy balance equation (2), which is used in the SMM, but now to the experimental events. Of course, this requires certain assumptions on how the observed partitions, involving cold reaction products, are related to the original partitions consisting of hot primary fragments. It was therefore assumed that the light particles detected in a partition were produced by de-excitation of hot primary fragments. For reconstructing a primary partition these light particles were shared among the detected fragments proportionally to their charges, assuming the same charge-to-mass ratio as in the entrance channel. Applying this procedure to the asymptotic SMM events showed that the correlation between the microcanonical temperature and excitation energy was reproduced within 5$`\%`$.
Fig. 1 shows the scatter plots in the $`(T,\epsilon ^{*})`$ plane for experimental (b) and SMM generated (d) events. The ensemble-averaged temperatures are indicated by the solid lines. Their behaviour is typical for caloric curves measured by other methods. In addition to the flattening of the average temperature at $`T\approx 6`$ MeV, one can clearly see the broadening of the distributions in the transition region at $`\epsilon ^{*}`$ between 4 and 8 MeV/nucleon. The quantity characterizing energy fluctuations is the heat capacity. For a canonical ensemble at constant volume it can be expressed as
$$C_V=\frac{\sigma _E^2}{T^2}=\frac{\langle E^2\rangle -\langle E\rangle ^2}{T^2}.$$
(3)
It is not clear whether the constant-volume condition applies to actual freeze-out configurations, but studying the energy fluctuations nevertheless provides additional and important information compared to the average characteristics. Indeed, applying Eq. (3) to the scatter plots of Fig. 1 reveals a peak in $`C_V`$ at temperatures around 6 MeV. This behaviour was also predicted theoretically a long time ago .
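Eq. (3) translates into a simple event-by-event estimator; the sketch below uses purely illustrative event energies for a single temperature bin:

```python
# Event-by-event estimator corresponding to Eq. (3): the heat capacity
# from the variance of the event energies in a narrow temperature bin.
# The event list is purely illustrative.

def heat_capacity(energies, T):
    """C_V = (<E^2> - <E>^2) / T^2."""
    n = len(energies)
    mean = sum(energies) / n
    var = sum((e - mean) ** 2 for e in energies) / n
    return var / T ** 2

events = [300.0, 320.0, 280.0, 310.0, 290.0]  # MeV, illustrative
print(round(heat_capacity(events, 6.0), 2))
```

A peak of this estimator as a function of the bin temperature signals the broadening of the energy distribution seen in the transition region.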
Another way of characterizing the critical behaviour is to analyze the conditional moments of fragment multiplicity distributions introduced by Campi . Fig. 1 a) shows, for each event $`j`$, the experimental correlation between the logarithm of the charge of the largest fragment, $`\mathrm{ln}(Z_{big}^{(j)})`$, and the logarithm of the corresponding second moment of the multiplicity distribution, $`\mathrm{ln}(m_2^{(j)})`$ (Campi scatter plot). Fig. 1 c) shows the same for events generated by the SMM. As expected for a system undergoing a phase transition, these plots exhibit two branches: an upper branch with an average negative slope, corresponding to under-critical events, and a lower branch with a positive slope that corresponds to super-critical events. The two branches meet in a central region, signalling the approach to a critical point. This trend is nicely reproduced in Fig. 1 by both the experiment and the theory.
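For a single event, the Campi coordinates can be computed directly from the list of fragment charges. In the sketch below the second moment excludes the largest fragment, one common convention (normalizations vary in the literature):

```python
import math

# Campi coordinates for a single event given its fragment charges:
# ln(Z_big) and ln(m2), with the second moment computed over all
# fragments except the largest (one common convention; normalization
# conventions vary in the literature).

def campi_point(charges):
    z_big = max(charges)
    rest = list(charges)
    rest.remove(z_big)                 # exclude the largest fragment
    m2 = sum(z * z for z in rest)      # second conditional moment
    return math.log(z_big), math.log(m2)

# a liquid-like event (one big residue) vs. a gas-like event
print(campi_point([60, 4, 3, 2, 2]))
print(campi_point([8, 7, 6, 6, 5, 5, 4, 3]))
```

Liquid-like events populate large $`\mathrm{ln}(Z_{big})`$ at small $`\mathrm{ln}(m_2)`$, gas-like events the opposite corner, reproducing the two branches of the scatter plot.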
We have made three cuts in these scatter plots selecting the upper branch (Cut 1), the lower branch (Cut 3) and the central region (Cut 2) and analyzed the events falling in each of the three zones. The fragment charge distributions in these three zones exhibit shapes going from a U-shape in Cut 1, characteristic of evaporation events at low excitation energies, to an exponential one in Cut 3, characteristic of vaporization events at high excitations. In Cut 2 a power-law fragment charge distribution $`Z^{-\tau }`$ with $`\tau \approx 2.2`$ is observed, as expected according to Fisher’s droplet model for fragment formation near the critical point of a liquid-gas phase transition (see also the interesting analysis of ref. ).
The contributions of these three types of events to the caloric curves are shown in Fig. 1 for experiment (b) and for theory (d). It is clearly seen, for both the data and the SMM, that in Cuts 1 and 3, besides normal events, there are unusual events (although with low probability) which lie far from the average $`T(\epsilon ^{*})`$ behaviour. These are compound-like states with very high temperatures and vaporization events with low temperatures. For these events one can draw an analogy with, respectively, an overheated liquid and a supercooled gas in the ordinary liquid-gas phase transition. Here we see the advantage of a finite system, where not only the most probable states but also metastable states can be produced with a finite probability. In my opinion, the observation of these metastable states is the best indication that we are dealing here with a first order phase transition of the liquid-gas type. These interesting questions were further studied in ref. .
ELECTROMAGNETIC EXCITATION OF ULTRARELATIVISTIC HEAVY IONS
It has become clear in recent years that high nuclear excitations can be induced by the Coulomb fields of ultrarelativistic heavy ions. Following the famous Weizsäcker-Williams method, the Lorentz-contracted Coulomb field of an ultrarelativistic projectile in the rest frame of a target nucleus (and vice versa) can be represented as a beam of equivalent or virtual photons. The flux of equivalent photons with energy $`E_\gamma `$ in a collision of nuclei with charge $`Z`$ at impact parameter $`b`$ is given by the standard formula
$$N(E_\gamma ,b)=\frac{\alpha Z^2}{\pi ^2}\frac{x^2}{\beta ^2E_\gamma b^2}\left[K_1^2(x)+\frac{1}{\gamma ^2}K_0^2(x)\right],$$
(4)
where $`\alpha `$ is the fine structure constant, $`\beta =v/c`$ and $`\gamma =(1-\beta ^2)^{-1/2}`$ is the relative Lorentz factor. The variable $`x`$ in the modified Bessel functions $`K_{0,1}(x)`$ is defined as $`x=E_\gamma b/(\beta \gamma \hbar c).`$ Since $`K_{0,1}`$ drop exponentially at large arguments, the main contribution to the virtual photon flux comes from the region $`x\lesssim 1`$. Thus the characteristic energy of virtual photons grows linearly with $`\gamma `$. This explains why relativistic Coulomb excitation is very important for ultrarelativistic heavy-ion beams, where both $`\gamma `$ and $`Z`$ are large. For colliding beams $`\gamma =2\gamma _{beam}^2-1`$, which gives $`2\times 10^4`$ and $`10^7`$ for RHIC and LHC respectively. This brings the spectrum of virtual photons into the GeV energy domain, i.e. much above the traditionally studied Giant Resonance (GR) and Delta-resonance regions. The absorption of such a high-energy photon leads to a very high nuclear excitation sufficient for total disintegration of the nucleus.
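A numerical sketch of Eq. (4) may be useful. To keep it self-contained, the modified Bessel functions are evaluated from the standard integral representation K_n(x) = ∫₀^∞ exp(−x cosh t) cosh(nt) dt; the kinematics in the example (Z = 79, γ = 2×10⁴, E_γ = 10 MeV, b = 15 fm) are illustrative choices:

```python
import math

# Numerical sketch of the equivalent-photon flux, Eq. (4).  The modified
# Bessel functions are computed from the integral representation
# K_n(x) = int_0^inf exp(-x cosh t) cosh(n t) dt, so no external
# library is needed.  Example kinematics are illustrative.

ALPHA = 1.0 / 137.036   # fine structure constant
HBARC = 197.327         # MeV fm

def bessel_k(n, x, steps=4000):
    """Trapezoidal evaluation of K_n(x) for moderate accuracy."""
    t_max = math.acosh(max(40.0 / x, 2.0))  # integrand ~ e^-40 beyond
    h = t_max / steps
    f = lambda t: math.exp(-x * math.cosh(t)) * math.cosh(n * t)
    total = 0.5 * (f(0.0) + f(t_max))
    for i in range(1, steps):
        total += f(i * h)
    return total * h

def photon_flux(e_gamma, b, Z, gamma_rel):
    """N(E_gamma, b) in photons per MeV per fm^2, Eq. (4)."""
    beta = math.sqrt(1.0 - 1.0 / gamma_rel ** 2)
    x = e_gamma * b / (beta * gamma_rel * HBARC)
    pref = (ALPHA * Z ** 2 / math.pi ** 2) * x ** 2 / (beta ** 2 * e_gamma * b ** 2)
    return pref * (bessel_k(1, x) ** 2 + bessel_k(0, x) ** 2 / gamma_rel ** 2)

# Au-like charge, RHIC-like relative Lorentz factor, GDR-energy photon
print(photon_flux(10.0, 15.0, 79, 2.0e4))
```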
In ref. the description of nuclear photoabsorption was extended to the photon energies much above the GR region, where the excitation of individual nucleons and multiple pion production are the dominant reaction channels. A model of electromagnetic dissociation (ED) taking into account these high-energy photon absorption channels was constructed in ref. . According to this model, the fast hadrons produced after the photon absorption initiate a cascade of subsequent collisions with the intranuclear nucleons leading to the fast particle emission and heating of a residual nucleus. This stage is described by the Intranuclear Cascade Model (INC). At a later stage the nucleus undergoes de-excitation by means of the evaporation of nucleons and lightest fragments, binary fission or multifragmentation. The latter process becomes important at ultrarelativistic beam energies, when the excitation energy of residual nuclei exceeds 3-4 MeV/nucleon. This stage of the reaction is described by the SMM.
To include all the processes described above, in ref. we have developed a specialized computer code RELDIS aimed at the Monte Carlo simulation of the Relativistic ELectromagnetic DISsociation of nuclei. The simulation begins with generating the single- or double-photon absorption process. Then the INC model is used to calculate the fast particle emission and the characteristics of residual nuclei. Finally, de-excitation of thermalized residual nuclei is simulated by the SMM.
The cross section of the photo-nuclear ($`\gamma A`$) reaction induced by a photon of energy $`E_\gamma `$ is expressed as
$$\frac{d\sigma _{ED}}{dE_\gamma }=\sigma _{\gamma A}(E_\gamma )\int _{b_{min}}^{\infty }N(E_\gamma ,b)\,2\pi b\,db,$$
(5)
where $`b_{min}\approx R_p+R_t`$ is the minimal impact parameter for heavy-ion collisions without nuclear overlap, and $`\sigma _{\gamma A}(E_\gamma )`$ is an appropriate photo-absorption cross section, either measured for the $`A`$-nucleus with real photons or calculated within a model. The total ED cross section, $`\sigma _{ED}`$, is obtained by integrating Eq. (5) over $`E_\gamma `$ from 0 to $`\infty `$. The calculations show that the total ED cross sections for RHIC and LHC are very large, 100 b and 200 b respectively. Accordingly, the ED reaction rates are much higher than those for nuclear interactions, although the ED events are much less violent. For instance, at the expected RHIC luminosity $`L\approx 10^{27}`$ cm<sup>-2</sup>s<sup>-1</sup> the ED reaction rate will be about $`10^5`$ interactions per second. Together with the electron capture reactions, the ED processes will be an important factor reducing the lifetime of ultrarelativistic heavy-ion beams compared with proton beams.
We have applied the RELDIS code to calculate the ED characteristics for several heavy-ion beams. The model is in reasonable agreement with experimental data, where available. We have also made predictions for the reactions 160A GeV Pb+Pb (SPS), 100A+100A GeV Au+Au (RHIC) and 2.75A+2.75A TeV Pb+Pb (LHC). The inclusive (multiplicity weighted) cross sections for emitting nucleons, pions and nuclear fragments in the electromagnetic dissociation of one of the colliding Au nuclei are shown in Fig. 2 as functions of the incident c.m. energy. Nuclear fragments are divided into two groups: fission fragments ($`30<Z\le 50`$) and Intermediate Mass Fragments (IMFs, $`3\le Z\le 30`$), which are associated with multifragmentation. One can clearly see a steep rise in the yields of all species, especially IMFs, when the incident energy grows from the SPS to the RHIC and LHC domain. The inclusive cross section for neutron emission is especially large, above 1000 b at RHIC and LHC. The average neutron multiplicities are predicted to be 4.1, 7.2 and 8.8 at SPS, RHIC and LHC respectively.
The predicted neutron multiplicity distributions are shown in Fig. 3. They have a nontrivial structure. There is a strong peak at the 1n emission channel associated with the GR decay. On the other hand, there is a long tail of multiple neutron emission associated with more violent reaction channels, from direct knock-out and evaporation from the compound nucleus to fission and multifragmentation. This is where our model, which includes all these channels, shows its strength. One can see, for example, that the probability to emit more than 20 neutrons is quite noticeable ($`\approx 5\%`$ at RHIC). These results might be important for designing neutron-sensitive zero-degree calorimeters at RHIC and LHC. One such proposal was made recently in ref. , but only the 1n channel was considered there.
FIRST ORDER PHASE TRANSITION IN FAST DYNAMICS
The implications of a strong collective expansion for the liquid-gas phase transition were discussed in ref. . Here I focus on the consequences of a strong collective flow of matter for a possible first order chiral transition. I assume that the collective velocity field is described locally by the Hubble law, $`v(r)=Hr`$, where the Hubble “constant” $`H`$ may in general depend on time.
To make the discussion more concrete, I adopt a picture of the chiral phase transition predicted by the linear sigma-model with constituent quarks . Then the mean chiral field $`\mathrm{\Phi }=(\sigma ,\pi )`$ serves as an order parameter. The model respects chiral symmetry, which is spontaneously broken in the vacuum where $`\sigma =f_\pi `$, $`\pi =0`$. The effective thermodynamic potential $`\mathrm{\Omega }(T,\mu ;\mathrm{\Phi })`$ depends, besides $`\mathrm{\Phi }`$, on temperature $`T`$ and baryon chemical potential $`\mu `$. The schematic behaviour of $`\mathrm{\Omega }(T,\mu ;\mathrm{\Phi })`$ as a function of the order parameter field $`\sigma `$ at $`\pi =0`$ is shown in Fig. 4. The minima of $`\mathrm{\Omega }`$ correspond to the stable or metastable states of matter under the condition of thermodynamical equilibrium, where the pressure is $`P=-\mathrm{\Omega }_{min}/V`$. The curves from bottom to top correspond to different stages of the isentropic expansion of homogeneous matter. Each curve represents a certain point on the ($`T,\mu `$) trajectory. As one can see from the figure, the model of ref. reveals a rather weak first order phase transition, although some other models predict a stronger transition. The discussion below is quite general.
Assume that at some early stage of the reaction thermal equilibrium is established, and partonic matter is in a “high energy density” phase Q. This state corresponds to the absolute minimum of $`\mathrm{\Omega }`$ with the order parameter close to zero, $`\sigma \approx 0`$, $`\pi \approx 0`$, and chiral symmetry restored (curve 1). Due to a very high internal pressure, Q matter will expand and cool down. At some stage a metastable minimum appears in $`\mathrm{\Omega }`$ at a finite value of $`\sigma `$, corresponding to a “low energy density” phase H, in which chiral symmetry is spontaneously broken. At some later time, the critical line in the ($`T,\mu `$) plane is crossed, where the Q and H minima have equal depths, i.e. $`P_\mathrm{H}=P_\mathrm{Q}`$ (curve 2). At later times the H phase becomes more favorable (curve 3), but the two phases are still separated by a potential barrier. If the expansion of the Q phase continues until the barrier vanishes (curve 4), the system will find itself in an absolutely unstable state at a maximum of the thermodynamic potential. Therefore, it will freely roll down into the lower energy state corresponding to the H phase. This situation is known as a spinodal instability.
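The sequence of shapes described above can be mimicked with a toy quartic potential. The coefficients below are chosen by hand, not taken from the sigma model, so that the two minima have equal depth and are separated by a barrier, i.e. the situation of curve 2 on the critical line:

```python
# Toy quartic potential mimicking the shapes in Fig. 4:
# Omega(s) = a/2 s^2 - b/3 s^3 + c/4 s^4.  The coefficients are chosen
# by hand (not from the sigma model) so that the two minima are
# degenerate and separated by a barrier, as on curve 2.

def omega(s, a=1.0, b=3.0, c=2.0):
    return 0.5 * a * s ** 2 - (b / 3.0) * s ** 3 + 0.25 * c * s ** 4

def local_minima(a=1.0, b=3.0, c=2.0, n=20000):
    """Grid scan for local minima of omega on -0.2 <= sigma <= 2."""
    xs = [-0.2 + 2.2 * i / n for i in range(n + 1)]
    ys = [omega(x, a, b, c) for x in xs]
    return [xs[i] for i in range(1, n)
            if ys[i] < ys[i - 1] and ys[i] < ys[i + 1]]

mins = local_minima()
print([round(m, 3) for m in mins])   # restored (Q) and broken (H) minima
print(omega(0.5))                    # barrier height between them
```

Raising or lowering the quadratic coefficient `a` tilts the balance between the two minima, tracing out the passage from curve 1 to curve 4.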
As is well known, a first order phase transition proceeds through the nucleation process. According to the standard theory of homogeneous nucleation , supercritical bubbles of the H phase appear only below the critical line, when $`P_H>P_Q`$. In rapidly expanding matter the nucleation picture might be very different. As shown in ref. , the phase separation in this case can start as soon as the metastable H state appears in the thermodynamic potential, and a stable interface between the two phases may exist. An appreciable number of nucleation bubbles and even empty cavities may be created already above the critical line.
The bubble formation and growth will also continue below the critical line. Previously formed bubbles will now grow faster due to the increasing pressure difference, $`P_\mathrm{H}-P_\mathrm{Q}>0`$, between the two phases. It is most likely that the conversion of Q matter on the bubble boundary is not fast enough to saturate the H phase. Therefore, a fast expansion may lead to a deeper cooling of the H phase inside the bubbles compared to the surrounding Q matter. Strictly speaking, such a system cannot be characterized by a unique temperature. At some stage the H bubbles will percolate, and the topology of the system will change to isolated regions of the Q phase (Q droplets) surrounded by the undersaturated vapor of the H phase.
The characteristic droplet size can be estimated by applying the energy balance consideration, proposed by Grady in the study of dynamical fragmentation of fluids. The idea is that the fragmentation of expanding matter is a local process minimizing the sum of surface and kinetic (dilational) energies per fragment volume. As shown in ref. , this prescription works fairly well also for multifragmentation of expanding nuclei, where the standard statistical approach fails.
Let us imagine an expanding spherical Q droplet of radius $`R`$, embedded in the background of the dilute H phase. The change of the thermodynamic potential, $`\mathrm{\Delta }\mathrm{\Omega }`$, compared to the uniform H phase can easily be estimated within the thin-wall approximation . According to Grady’s prescription, the quantity to be minimized is $`\mathrm{\Delta }\mathrm{\Omega }`$ per droplet volume, $`V\propto R^3`$, that is
$$\left(\frac{\mathrm{\Delta }\mathrm{\Omega }}{V}\right)_{droplet}=-\left(P_\mathrm{Q}-P_\mathrm{H}\right)+\frac{3\gamma }{R}+\frac{3}{10}\mathrm{\Delta }\mathcal{E}H^2R^2.$$
(6)
Here $`\mathrm{\Delta }\mathcal{E}=\mathcal{E}_\mathrm{Q}-\mathcal{E}_\mathrm{H}`$ is the difference of the bulk energy densities of the two phases, and $`\gamma `$ is the interface energy per unit area. One should notice that the last term, i.e. the change in the collective kinetic energy, is positive because $`\mathcal{E}_\mathrm{Q}>\mathcal{E}_\mathrm{H}`$. This term acts here as an effective long-range potential, similar to the Coulomb potential in nuclei. Since the bulk term does not depend on $`R`$, the minimization condition constitutes the balance between the collective kinetic energy and the interface energy. This leads to an optimum droplet radius
$$R^{*}=\left(\frac{5\gamma }{\mathrm{\Delta }\mathcal{E}H^2}\right)^{1/3}.$$
(7)
One can say that the metastable Q phase is torn apart by a mechanical strain associated with the collective expansion. This phenomenon has a direct analogy with the fragmentation of pressurized fluids leaving nozzles . In a similar way, splashed water forms droplets which have little to do with the equilibrium liquid-gas phase transition.
In the lowest-order approximation the characteristic droplet mass can be calculated as $`M^{*}\approx \mathrm{\Delta }\mathcal{E}V`$. It is natural to think that nucleons and heavy mesons are the smallest droplets of the Q phase. For numerical estimates I take $`\gamma =10`$ MeV/fm<sup>2</sup> and $`\mathrm{\Delta }\mathcal{E}=0.5`$ GeV/fm<sup>3</sup>, i.e. the energy density inside the nucleon. For the Hubble constant I consider two possibilities: $`H^{-1}=20`$ fm/c, representing a slow expansion from a soft point, and $`H^{-1}=6`$ fm/c, typical for a fast expansion. Substituting these values in Eq. (7) one gets $`R^{*}=3.4`$ fm and 1.5 fm for the slow and fast expansion respectively. These two values of $`R^{*}`$ give $`M^{*}`$ of about 100 GeV and 10 GeV, respectively. At ultrarelativistic energies the collective expansion is very anisotropic, with the strongest component along the beam axis. For a predominantly 1-d expansion one should expect the formation of slab-like structures with intermittent layers of Q and H phases.
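These estimates can be checked numerically; the sketch below evaluates Eq. (7) and the corresponding droplet mass with the parameter values quoted above (units with c = 1):

```python
import math

# Numerical check of the droplet-size estimate, Eq. (7), with the
# parameter values quoted in the text: gamma = 10 MeV/fm^2,
# Delta-E = 0.5 GeV/fm^3, and H^-1 = 20 or 6 fm/c (units with c = 1).

GAMMA_SURF = 10.0   # MeV/fm^2, interface energy per unit area
DELTA_E = 500.0     # MeV/fm^3, energy-density difference

def droplet_radius(hubble):
    """R* = (5*gamma / (Delta-E * H^2))^(1/3), in fm."""
    return (5.0 * GAMMA_SURF / (DELTA_E * hubble ** 2)) ** (1.0 / 3.0)

def droplet_mass_gev(radius):
    """M* ~ Delta-E times the droplet volume, in GeV."""
    return DELTA_E * (4.0 / 3.0) * math.pi * radius ** 3 / 1000.0

for h_inv in (20.0, 6.0):            # slow and fast expansion, fm/c
    r = droplet_radius(1.0 / h_inv)
    print(round(r, 2), "fm ->", round(droplet_mass_gev(r), 1), "GeV")
```

The slow-expansion case reproduces the quoted 3.4 fm (mass of order 100 GeV) and the fast case 1.5 fm (mass of order 10 GeV).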
After separation the droplets recede from each other according to the global Hubble expansion, predominantly along the beam direction. Hence their center-of-mass rapidities are in one-to-one correspondence with their spatial positions. Presumably they will be distributed more or less evenly between the target and projectile rapidities. Since rescatterings in the dilute H phase are rare, most hadrons produced from individual droplets will go directly into the detectors. One can guess that the number of produced hadrons is proportional to the droplet mass. Each droplet will give a bump in the hadron rapidity distribution around its center-of-mass rapidity. If the emitted particles have a Boltzmann spectrum, the width of the bump will be $`\delta y\approx 2\sqrt{T/m}`$, where $`T`$ is the droplet temperature and $`m`$ is the particle mass. At $`T\approx 100`$ MeV this gives $`\delta y\approx 2`$ for pions and $`\delta y\approx 1`$ for nucleons. These spectra might be slightly modified by the residual expansion of the droplets and their transverse motion. The resulting rapidity distribution in a single event will be a superposition of contributions from different droplets, and therefore it will exhibit strong non-statistical fluctuations. The fluctuations will be more pronounced if the primordial droplets are big, as expected in the vicinity of the soft point. If droplets as heavy as 100 GeV are formed, each of them will emit up to about 300 pions within a narrow rapidity interval, $`\delta y\approx 1`$. Such bumps can be easily resolved and analyzed. The fluctuations will be less pronounced if many small droplets shine in the same rapidity interval. Critical fluctuations of a similar nature were discussed in ref. .
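The widths quoted here follow from the Boltzmann estimate δy ≈ 2√(T/m); a one-line check with standard pion and nucleon masses (the mass values are assumptions, not taken from the text):

```python
import math

# Width of the rapidity bump from a single droplet, delta_y ~ 2*sqrt(T/m),
# evaluated at T = 100 MeV for pions and nucleons (standard masses).

def bump_width(T, mass):
    return 2.0 * math.sqrt(T / mass)

M_PION, M_NUCLEON = 139.6, 938.9   # MeV, assumed standard values
print(round(bump_width(100.0, M_PION), 2))      # pions
print(round(bump_width(100.0, M_NUCLEON), 2))   # nucleons
```

The results, about 1.7 for pions and 0.65 for nucleons, round to the δy ≈ 2 and δy ≈ 1 quoted in the text.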
Some unusual events produced by high-energy cosmic nuclei have already been seen by the JACEE collaboration . Unfortunately, they are very few and it is difficult to draw definite conclusions by analyzing them. We should be prepared to see plenty of such events in the future RHIC and LHC experiments. It is clear that the nontrivial structure of the hadronic spectra will be washed out to a great extent when averaging over many events. Therefore, more sophisticated methods of event-sample analysis should be used. The simplest one is to search for non-statistical fluctuations in the hadron multiplicity distributions measured in a fixed rapidity bin . One can also study the correlation of multiplicities in neighbouring rapidity bins, bump-bump correlations, etc. Such standard methods as intermittency and cumulant moments , wavelet transforms , HBT interferometry can also be useful. All these studies should be done at different collision energies to identify the phase transition threshold. The predicted dependence on the Hubble constant and the reaction geometry can be checked in collisions with different ion masses and impact parameters.
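As a toy illustration of the simplest such analysis (hypothetical numbers, not data from any experiment): the second normalized factorial moment, F2 = &lt;n(n-1)&gt;/&lt;n&gt;<sup>2</sup>, equals 1 for purely statistical (Poisson) emission in a fixed rapidity bin and exceeds 1 when the multiplicities come from a few droplets of widely varying mass:

```python
import random

def f2(samples):
    """Second normalized factorial moment F2 = <n(n-1)> / <n>^2;
    F2 = 1 for a purely statistical (Poisson) source."""
    mean = sum(samples) / len(samples)
    fact = sum(n * (n - 1) for n in samples) / len(samples)
    return fact / mean ** 2

random.seed(0)
# statistical emission (binomial, nearly Poisson) in a fixed rapidity bin
poisson = [sum(random.random() < 0.1 for _ in range(300)) for _ in range(5000)]
# emission from a few droplets of widely varying mass (clustered multiplicities)
droplets = [30 * max(1, int(random.expovariate(1.0))) for _ in range(5000)]
print(f2(poisson) < f2(droplets))   # True: droplets give excess fluctuations
```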
CONCLUSIONS
* The statistical approach (SMM) works well in situations when thermalized sources are well defined and no significant collective flow is present.
* The quantitative agreement of SMM with recent data on the caloric curve and temperature fluctuations provides a strong indication of the nuclear liquid-gas phase transition. The nuclear heat capacity has a peak at $`T\sim 6`$ MeV.
* A first order phase transition in rapidly expanding matter should proceed through a nonequilibrium stage in which a metastable phase splits into droplets. The primordial droplets should be biggest in the vicinity of a soft point, where the expansion is slowest.
* Hadron emission from droplets of the quark-gluon plasma should lead to large nonstatistical fluctuations in their rapidity spectra and multiplicity distributions. The hadron abundances may reflect directly the chemical composition in the plasma phase.
* Electromagnetic excitation of nuclei in ultrarelativistic heavy-ion colliders is an important reaction mechanism leading to deep nuclear disintegration. The multiple neutron emission associated with this process may be used for monitoring ultrarelativistic heavy-ion beams.
* And finally, we should use the lessons of the liquid-gas phase transition for future studies of the deconfinement-hadronization and chiral phase transitions in relativistic heavy-ion collisions.
ACKNOWLEDGMENTS
The author is grateful to J.P. Bondorf and A.D. Jackson for many fruitful discussions. I thank A.S. Botvina, M. D’Agostino, A. Mocsy, I.A. Pshenichnov, O. Scavenius for cooperation. Discussions with D. Diakonov, A. Dumitru, J.J. Gaardhoje, M.I. Gorenstein, W. Greiner, B. Jakobsson, L. McLerran, R. Mattiello, W.F.J. Müller, W. Reisdorf, L.M. Satarov, H. Stöcker, E.V. Shuryak, W. Trautmann and V. Viola are greatly appreciated. I thank the Niels Bohr Institute, Copenhagen University, and the Institute for Theoretical Physics, Frankfurt University, for kind hospitality. This work was carried out partly within the framework of a Humboldt Award, Germany.
CENTRAL ELEMENTS OF THE ALGEBRAS $`U_q^{}(\mathrm{so}_m)`$ AND $`U_q(\mathrm{iso}_m)`$
M. Havlíček, S. Pošta
Department of Mathematics and Dopler Institute, FNSPE, CTU
Trojanova 13, CZ-120 00 Prague 2, Czech Republic
A. Klimyk
Institute for Theoretical Physics, Kiev 252143, Ukraine
## Abstract
The aim of this paper is to give a set of central elements of the algebras $`U_q^{}(\mathrm{so}_m)`$ and $`U_q(\mathrm{iso}_m)`$ when $`q`$ is a root of unity. Surprisingly, they all arise from a single polynomial Casimir element of the algebra $`U_q^{}(\mathrm{so}_3)`$. It is conjectured that the Casimir elements of these algebras, valid for any value of $`q`$ (not only for $`q`$ a root of unity), together with the central elements derived in this paper for $`q`$ a root of unity, generate the centers of $`U_q^{}(\mathrm{so}_m)`$ and $`U_q(\mathrm{iso}_m)`$ when $`q`$ is a root of unity.
1. The algebra $`U_q^{}(\mathrm{so}_m)`$ is a nonstandard $`q`$-deformation of the universal enveloping algebra $`U(\mathrm{so}_m)`$ of the Lie algebra $`\mathrm{so}_m`$. It was defined in as the associative algebra (with a unit) generated by the elements $`I_{21}`$, $`I_{32},\mathrm{},I_{m,m1}`$ satisfying the defining relations
$$I_{i+1,i}I_{i,i-1}^2-(q+q^{-1})I_{i,i-1}I_{i+1,i}I_{i,i-1}+I_{i,i-1}^2I_{i+1,i}=-I_{i+1,i},$$
$`(1)`$
$$I_{i+1,i}^2I_{i,i-1}-(q+q^{-1})I_{i+1,i}I_{i,i-1}I_{i+1,i}+I_{i,i-1}I_{i+1,i}^2=-I_{i,i-1},$$
$`(2)`$
$$[I_{i,i-1},I_{j,j-1}]=0\quad \mathrm{for}\quad |i-j|>1,$$
$`(3)`$
where \[.,.\] denotes the usual commutator. In the limit $`q\to 1`$ formulas (1)–(3) give the relations defining the universal enveloping algebra $`U(\mathrm{so}_m)`$. Note also that the relations (1) and (2) differ from the $`q`$-deformed Serre relations in the approach of Drinfeld and Jimbo to quantum orthogonal algebras (see, for example, ) by the presence of nonzero right hand sides in (1) and (2).
For the algebra $`U_q^{}(\mathrm{so}_3)`$ the relations (1)–(3) are reduced to the following two relations:
$$I_{32}I_{21}^2-(q+q^{-1})I_{21}I_{32}I_{21}+I_{21}^2I_{32}=-I_{32},$$
$`(4)`$
$$I_{32}^2I_{21}-(q+q^{-1})I_{32}I_{21}I_{32}+I_{21}I_{32}^2=-I_{21}.$$
$`(5)`$
Denoting $`I_{21}`$ and $`I_{32}`$ by $`I_1`$ and $`I_2`$, respectively, and introducing the element $`I_3:=q^{1/2}I_1I_2-q^{-1/2}I_2I_1`$, we find that relations (4) and (5) are equivalent to the three relations
$$q^{1/2}I_1I_2-q^{-1/2}I_2I_1=I_3,$$
$`(6)`$
$$q^{1/2}I_2I_3-q^{-1/2}I_3I_2=I_1,$$
$`(7)`$
$$q^{1/2}I_3I_1-q^{-1/2}I_1I_3=I_2.$$
$`(8)`$
The Inönü–Wigner contraction applied to the algebra $`U_q^{}(\mathrm{so}_m)`$ leads to the algebra $`U_q(\mathrm{iso}_{m-1})`$ which was defined in . The algebra $`U_q(\mathrm{iso}_m)`$ is the associative algebra (with a unit) generated by the elements $`I_{21}`$, $`I_{32},\mathrm{},I_{m,m1}`$, $`T_m`$ such that the elements $`I_{21}`$, $`I_{32},\mathrm{},I_{m,m1}`$ satisfy the defining relations of the algebra $`U_q^{}(\mathrm{so}_m)`$ and the additional defining relations
$$I_{m,m1}^2T_m(q+q^1)I_{m,m1}T_mI_{m,m1}+T_mI_{m,m1}^2=T_m,$$
$$I_{m,m1}T_m^2(q+q^1)T_mI_{m,m1}T_m+T_m^2I_{m,m1}=I_{m,m1},$$
$$[I_{k,k1},T_m]:=I_{k,k1}T_mT_mI_{k,k1}=0\mathrm{if}k<m.$$
If $`q=1`$, then these relations define the universal enveloping algebra $`U(\mathrm{iso}_m)`$ of the Lie algebra $`\mathrm{iso}_m`$ of the Lie group $`ISO(m)`$.
The algebra $`U(\mathrm{iso}_2)`$ is generated by two elements $`I_{21}`$ and $`T_2`$ satisfying the relations
$$I_{21}^2T_2(q+q^1)I_{21}T_2I_{21}+T_2I_{21}^2=T_2,$$
$`(9)`$
$$I_{21}T_2^2(q+q^1)T_2I_{21}T_2+T_2^2I_{21}=I_{21}.$$
$`(10)`$
Denoting $`I_{21}`$ by $`I`$ and introducing the element $`T_1:=[I,T_2]_q\equiv q^{1/2}IT_2-q^{-1/2}T_2I`$, we find that the relations (9) and (10) are equivalent to the relations
$$[I,T_2]_q=T_1,[T_1,I]_q=T_2,[T_2,T_1]_q=0.$$
$`(11)`$
2. In $`U_q^{}(\mathrm{so}_m)`$ we can determine elements analogous to the basis matrices $`I_{ij}`$, $`i>j`$, (defined, for example, in ) of $`\mathrm{so}_m`$. In order to give them we use the notation $`I_{k,k-1}\equiv I_{k,k-1}^+\equiv I_{k,k-1}^{}`$. Then for $`k>l+1`$ we define recursively
$$I_{kl}^\pm :=[I_{l+1,l},I_{k,l+1}]_{q^{\pm 1}}\equiv q^{\pm 1/2}I_{l+1,l}I_{k,l+1}-q^{\mp 1/2}I_{k,l+1}I_{l+1,l}.$$
$`(12)`$
The elements $`I_{kl}^+`$, $`k>l`$, satisfy the commutation relations
$$[I_{ln}^+,I_{kl}^+]_q=I_{kn}^+,[I_{kl}^+,I_{kn}^+]_q=I_{ln}^+,[I_{kn}^+,I_{ln}^+]_q=I_{kl}^+\mathrm{for}k>l>n,$$
$`(13)`$
$$[I_{kl}^+,I_{nr}^+]=0\mathrm{for}k>l>n>r\mathrm{and}k>n>r>l,$$
$`(14)`$
$$[I_{kl}^+,I_{nr}^+]_q=(qq^1)(I_{lr}^+I_{kn}^+I_{kr}^+I_{nl}^+)\mathrm{for}k>n>l>r.$$
$`(15)`$
For $`I_{kl}^{}`$, $`k>l`$, the commutation relations are obtained by replacing $`I_{kl}^+`$ by $`I_{kl}^{}`$ and $`q`$ by $`q^1`$.
Using the diamond lemma (see, for example, Chapter 4 in ), N. Iorgov proved the Poincaré–Birkhoff–Witt theorem for the algebra $`U_q^{}(\mathrm{so}_m)`$ (the proof will be published):
Theorem 1. The elements $`I_{21}^{+}{}_{}{}^{n_{21}}I_{32}^{+}{}_{}{}^{n_{32}}I_{31}^{+}{}_{}{}^{n_{31}}\mathrm{}I_{m1}^{+}{}_{}{}^{n_{m1}}`$, $`n_{ij}=0,1,2,\mathrm{}`$, form a basis of the algebra $`U_q^{}(\mathrm{so}_m)`$.
This theorem is true if the elements $`I_{ij}^+`$ are replaced by the corresponding elements $`I_{ij}^{}`$.
Using the generating elements $`I_{21}`$, $`I_{32},\mathrm{},I_{m,m1}`$ of the algebra $`U_q(\mathrm{iso}_m)`$ we define by formula (12) the elements $`I_{ij}^\pm `$, $`i>j`$, in this algebra. Besides, in $`U_q(\mathrm{iso}_m)`$ we also define recursively the elements
$$T_k^\pm :=[I_{k+1,k},T_{k+1}^\pm ]_{q^\pm },k=1,2,\mathrm{},m1.$$
It is shown in that the elements $`I_{ij}^+`$, $`i>j`$, and $`T_k^+`$, $`1km`$, satisfy the commutation relations (13)–(15) and the relations
$$[I_{ln}^+,T_l^+]_q=T_n^+,[T_n^+,I_{ln}^+]_q=T_l^+\mathrm{for}l>n,$$
$$[T_l^+,I_{np}^+]=0\mathrm{for}l>n>p\mathrm{or}n>p>l,$$
$$[T_l^+,I_{np}^+]=(qq^1)(T_n^+I_{lp}^+T_p^+I_{nl}^+)\mathrm{for}n>l>p,$$
$$[T_l^+,T_n^+]_q=0\mathrm{for}n<l.$$
For $`U_q(\mathrm{iso}_m)`$ the Poincaré–Birkhoff–Witt theorem is formulated as
Theorem 2. The elements $`I_{21}^{+}{}_{}{}^{n_{21}}I_{32}^{+}{}_{}{}^{n_{32}}I_{31}^{+}{}_{}{}^{n_{31}}\mathrm{}I_{m1}^{+}{}_{}{}^{n_{m1}}T_{1}^{+}{}_{}{}^{n_1}T_{2}^{+}{}_{}{}^{n_2}\mathrm{}T_{m}^{+}{}_{}{}^{n_m}`$ with $`n_{ij},n_k=0,1,2,\mathrm{}`$, form a basis of the algebra $`U_q(\mathrm{iso}_m)`$.
3. It is easy to check that for any value of $`q`$ the algebra $`U_q^{}(\mathrm{so}_3)`$ has the Casimir element
$$C_q=q^2I_1^2+I_2^2+q^2I_3^2+q^{1/2}(1q^2)I_1I_2I_3.$$
As in the case of quantum algebras (see, for example, Chapter 6 in ), at $`q`$ a root of unity this algebra has additional central elements.
Theorem 3. Let $`q^n=1`$ for $`n\in N`$ and $`q^j\ne 1`$ for $`0<j<n`$. Then the elements
$$C^{(n)}(I_k)=\underset{j=0}{\overset{[\frac{n-1}{2}]}{}}(\genfrac{}{}{0pt}{}{n-j}{j})\frac{1}{n-j}\left(\frac{i}{q-q^{-1}}\right)^{2j}I_k^{n-2j},k=1,2,3,$$
$`(16)`$
belong to the center of $`U_q^{}(\mathrm{so}_3)`$, where $`[x]`$ for $`xR`$ denotes the integral part of $`x`$.
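For small $`n`$ the polynomial in Eq. (16) is easy to tabulate. The sketch below (illustrative, not part of the paper) computes its rational coefficients, reading the exponents in the standard way — binomial coefficient with upper entry $`n-j`$ and lower entry $`j`$, factor $`1/(n-j)`$, powers $`x^{n-2j}`$ — and abbreviating the $`q`$-dependent prefactor by a symbol $`a`$:

```python
from fractions import Fraction
from math import comb

def central_poly_coeffs(n):
    """Map power -> rational coefficient c_j in
    C^(n)(x) = sum_j c_j * a**j * x**(n - 2*j),
    where a abbreviates the q-dependent prefactor of Eq. (16)."""
    return {n - 2 * j: Fraction(comb(n - j, j), n - j)
            for j in range((n - 1) // 2 + 1)}

# n = 2 (q^2 = 1): C^(2)(x) = x^2 / 2
# n = 3 (q^3 = 1): C^(3)(x) = x^3 / 3 + a * x
print(central_poly_coeffs(2))
print(central_poly_coeffs(3))
```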
The proof of this theorem is rather complicated (see ). First it is proved that $`C^{(n)}(I_1)`$ belongs to the center of $`U_q^{}(\mathrm{so}_3)`$. This proof is based on the formula $`I_3I_1^m=p_m(I_1)I_2+q_m(I_1)I_3`$, where
$$p_m(x)=q^{\frac{1}{2}}\left(\frac{x(q+q^1)}{2}\right)^{m1}\underset{t=0}{\overset{[\frac{m1}{2}]}{}}(\genfrac{}{}{0pt}{}{m}{2t+1})\left(\left(\frac{qq^1}{q+q^1}\right)^2\left(\frac{2}{x(q+q^1)}\right)^2\right)^t,$$
$$q_m(x)=q^{\frac{1}{2}}\frac{x(qq^1)}{2}p_m(x)+\left(\frac{x(q+q^1)}{2}\right)^m\underset{t=0}{\overset{[\frac{m}{2}]}{}}(\genfrac{}{}{0pt}{}{m}{2t})\left(\left(\frac{qq^1}{q+q^1}\right)^2\left(\frac{2}{x(q+q^1)}\right)^2\right)^t.$$
The proof also needs deep combinatorial identities, such as
$$\underset{t=0}{\overset{[\frac{N1}{2}]}{}}(\genfrac{}{}{0pt}{}{N}{2t+1})(\genfrac{}{}{0pt}{}{[\frac{N1}{2}]t}{[\frac{N1}{2}]C})(\genfrac{}{}{0pt}{}{t}{M})=$$
$$=4^{CM}(\genfrac{}{}{0pt}{}{C}{M})(N2C(1N^{}))\frac{(2[\frac{N}{2}])!([\frac{N}{2}]+CM)!([\frac{N}{2}]M)!}{[\frac{N}{2}]!(2C+1)!(2[\frac{N}{2}]2M)!([\frac{N}{2}]C)!},$$
$$\underset{j=0}{\overset{d}{}}(\genfrac{}{}{0pt}{}{nj}{j})\frac{2d2j+1+(n2d1)^n^{}}{nj}(\genfrac{}{}{0pt}{}{[\frac{n1}{2}]d}{cj})\frac{(2[\frac{n}{2}]2j)!(n1cd)!([\frac{n}{2}]c)!}{([\frac{n}{2}]j)!(dj+1n^{})!(2[\frac{n}{2}]2c)!(n2d+n^{}1)!}=$$
$$=\frac{(\genfrac{}{}{0pt}{}{n1d}{d})(\genfrac{}{}{0pt}{}{n1c}{c})}{\left(\frac{n}{2}\right)^{1n^{}}(n2d)^n^{}},$$
where $`N,nN`$, $`0C,M[\frac{N1}{2}]`$, $`0c,d[\frac{n1}{2}]`$ and $`N^{},n^{}=0`$ or 1 such that $`N^{}=N\mathrm{mod}\mathrm{\hspace{0.17em}2}`$ and $`n^{}=n\mathrm{mod}\mathrm{\hspace{0.17em}2}`$. One also needs extensive use of the fact that $`q`$ is a root of unity.
Once it is proved that $`C^{(n)}(I_1)`$ belongs to the center of $`U_q^{}(\mathrm{so}_3)`$, we use the automorphism $`\rho :U_q^{}(\mathrm{so}_3)U_q^{}(\mathrm{so}_3)`$ defined by the relations $`\rho (I_1)=I_2`$, $`\rho (I_2)=I_3`$, $`\rho (I_3)=I_1`$. This automorphism shows that $`C^{(n)}(I_2)`$ and $`C^{(n)}(I_3)`$ also belong to the center of $`U_q^{}(\mathrm{so}_3)`$.
Conjecture 1. If $`q`$ is a root of unity as above, then the elements $`C_q`$ and $`C^{(n)}(I_j)`$, $`j=1,2,3`$, generate the center of $`U_q^{}(\mathrm{so}_3)`$.
4. Central elements of the algebra $`U_q^{}(\mathrm{so}_m)`$ for any value of $`q`$ are found in and . They are given in the form of homogeneous polynomials of elements of $`U_q^{}(\mathrm{so}_m)`$. If $`q`$ is a root of unity, then (as in the case of quantum algebras) there are additional central elements of $`U_q^{}(\mathrm{so}_m)`$.
Theorem 4. Let $`q^n=1`$ for $`n\in N`$ and $`q^j\ne 1`$ for $`0<j<n`$. Then the elements
$$C^{(n)}(I_{kl}^+)=\underset{j=0}{\overset{[\frac{n-1}{2}]}{}}(\genfrac{}{}{0pt}{}{n-j}{j})\frac{1}{n-j}\left(\frac{i}{q-q^{-1}}\right)^{2j}I_{kl}^{+}{}_{}{}^{n-2j},k>l,$$
$`(17)`$
belong to the center of $`U_q^{}(\mathrm{so}_m)`$.
Let us prove this theorem for the algebra $`U_q^{}(\mathrm{so}_4)`$ (the proof in the general case is the same). This algebra is generated by the elements $`I_{43}`$, $`I_{32}`$, $`I_{21}`$. We introduce the elements $`I_{31}\equiv I_{31}^+`$, $`I_{42}\equiv I_{42}^+`$, $`I_{41}\equiv I_{41}^+`$ defined as indicated above. Then the elements $`I_{ij}`$, $`i>j`$, satisfy the relations
$$[I_{43},I_{21}]=0,[I_{32},I_{31}]_q=I_{21},[I_{21},I_{32}]_q=I_{31},$$
$`(18)`$
$$[I_{31},I_{21}]_q=I_{32},[I_{43},I_{42}]_q=I_{32},[I_{32},I_{43}]_q=I_{42},$$
$$[I_{42},I_{32}]_q=I_{43},[I_{31},I_{43}]_q=I_{41},[I_{21},I_{42}]_q=I_{41},$$
$$[I_{41},I_{21}]_q=I_{42},[I_{41},I_{31}]_q=I_{43},[I_{42},I_{41}]_q=I_{21},$$
$$[I_{41},I_{32}]=0,[I_{43},I_{41}]_q=I_{31},[I_{42},I_{31}]=(qq^1)(I_{21}I_{43}I_{41}I_{32}).$$
If one wants to prove that an element $`X`$ belongs to the center of $`U_q^{}(\mathrm{so}_4)`$, it is sufficient to prove that $`[X,I_{21}]=[X,I_{32}]=[X,I_{43}]=0`$.
Let us consider the element $`C^{(n)}(I_{21})`$. It belongs to the subalgebra $`U_q^{}(\mathrm{so}_3)`$ generated by $`I_{21}`$, $`I_{32}`$ and $`I_{31}`$.
It follows from Theorem 3 that $`C^{(n)}(I_{21})`$ commutes with element $`I_{32}`$. Using the first relation in (18) we easily see that $`C^{(n)}(I_{21})`$ commutes with $`I_{43}`$ and therefore $`C^{(n)}(I_{21})`$ belongs to the center of $`U_q^{}(\mathrm{so}_4)`$.
Let us consider the element $`C^{(n)}(I_{32})`$. In $`U_q^{}(\mathrm{so}_4)`$ we separate two subalgebras $`U_q^{}(\mathrm{so}_3)`$, generated by the triplets $`I_{21},I_{32},I_{31}`$ and $`I_{32},I_{43},I_{42}`$.
From Theorem 3 we have $`[C^{(n)}(I_{32}),I_{21}]=[C^{(n)}(I_{32}),I_{43}]=0`$ and $`C^{(n)}(I_{32})`$ belongs to the center of $`U_q^{}(\mathrm{so}_4)`$. A proof that the element $`C^{(n)}(I_{43})`$ belongs to the center is the same as for $`C^{(n)}(I_{21})`$.
The elements $`C^{(n)}(I_{31})`$, $`C^{(n)}(I_{42})`$ and $`C^{(n)}(I_{41})`$ belong to the center of $`U_q^{}(\mathrm{so}_4)`$ since they belong to the subalgebras $`U_q^{}(\mathrm{so}_3)`$ generated by the triplets
$$I_{41},I_{31},I_{43}\mathrm{and}I_{21},I_{41},I_{42}.$$
(Note that $`C^{(n)}(I_{31})`$ and $`C^{(n)}(I_{42})`$ commute with $`I_{42}`$ and $`I_{31}`$, respectively, since $`I_{42}=[I_{32},I_{43}]_q`$ and $`I_{31}=[I_{21},I_{32}]_q`$.) The theorem is proved.
Conjecture 2. If $`q`$ is a root of unity as above, then the central elements of and of Theorem 4 generate the center of $`U_q^{}(\mathrm{so}_m)`$.
5. Let us consider the associative algebra $`U_{q,\epsilon }^{}(\mathrm{so}_3)`$ (where $`\epsilon 0`$) generated by three generators $`J_1`$, $`J_2`$, $`J_3`$ satisfying the relations:
$$[J_1,J_2]_q:=q^{1/2}J_1J_2-q^{-1/2}J_2J_1=J_3,[J_2,J_3]_q=J_1,[J_3,J_1]_q=\epsilon ^2J_2.$$
It is easily proved that this algebra is isomorphic to the algebra $`U_q^{}(\mathrm{so}_3)`$ and the corresponding isomorphism is uniquely defined by $`J_1\to \epsilon I_1`$, $`J_3\to \epsilon I_3`$, $`J_2\to I_2`$. Therefore, the elements
$$\stackrel{~}{C}^{(n)}(J_i,\epsilon ):=n\epsilon ^nC^{(n)}(J_i/\epsilon ),i=1,3,\stackrel{~}{C}^{(n)}(J_2,\epsilon ):=C^{(n)}(J_2)$$
belong to the center of $`U_{q,\epsilon }^{}(\mathrm{so}_3)`$ if $`q^n=1`$. By means of the contraction $`\epsilon \to 0`$ we transform the algebra $`U_{q,\epsilon }^{}(\mathrm{so}_3)`$ into the algebra $`U_q^{}(\mathrm{iso}_2)`$. Under this contraction the commutation relations $`[\stackrel{~}{C}^{(n)}(J_i,\epsilon ),J_k]=0`$ transform into the relations $`[\stackrel{~}{C}^{(n)}(J_i,0),J_k]=0`$. In other words, we have proved the following
Theorem 5. Let $`q^n=1`$ for $`n\in N`$ and $`q^j\ne 1`$ for $`0<j<n`$. Then the elements $`T_1^n`$, $`T_2^n`$ and $`C^{(n)}(I)`$ belong to the center of the algebra $`U_q^{}(\mathrm{iso}_2)`$.
It was shown in that the element
$$C_q=q^1T_1^2+qT_2^2+q^{3/2}(1q^2)T_1T_2I$$
is central in $`U_q^{}(\mathrm{iso}_2)`$.
Conjecture 3. If $`q`$ is a root of unity as above, then the elements $`C_q`$, $`T_1^n`$, $`T_2^n`$ and $`C^{(n)}(I)`$ generate the center of $`U_q^{}(\mathrm{iso}_2)`$.
Using Theorem 5 and repeating the proof of Theorem 4 we prove the following theorem:
Theorem 6. Let $`q^n=1`$ for $`n\in N`$ and $`q^j\ne 1`$ for $`0<j<n`$. Then the elements
$$C^{(n)}(I_{ij}),i>j,T_j^n,j=1,2,\mathrm{},m,$$
belong to the center of the algebra $`U_q^{}(\mathrm{iso}_m)`$.
## Abstract
The stability and dynamics of a free magnetic polaron are studied by Monte Carlo simulation of a classical two-dimensional Heisenberg model coupled to a single electron. We compare our results to the earlier mean-field analysis of the stability of the polaron, finding qualitative similarity but quantitative differences. The dynamical simulations give estimates of the temperature dependence of the polaron diffusion, as well as a crossover to a tunnelling regime.
75.70.Pa
Recently, interest in free magnetic polarons (FMP) has been renewed due to their relation to colossal magnetoresistance in manganese pyrochlores and in double-exchange models of magnetism in the manganite perovskites .
Free magnetic polarons are magnetically self-trapped carriers in contrast to the more common bound magnetic polarons which become trapped by an impurity. These were extensively studied in diluted magnetic semiconductors and rare earth chalcogenides. Golnik et al. found experimental evidence of the existence of not only bound but free magnetic polarons in $`Cd_{1x}Mn_xTe`$ and $`Pb_{1x}Mn_xTe`$. Their existence leads to an activated behavior of the resistivity above the Curie temperature that is found in materials with negative magnetoresistance, as well as a temperature- dependent spin-splitting seen in magneto-optical experiments. Theoretical models of FMP have been developed within a mean-field approach and generalized to a fluctuation-dominated regime . In most of these systems, the underlying (super)-exchange interaction between the localized spins is antiferromagnetic in nature; we shall however be concerned with the ferromagnetic case.
We study the model of a single electron interacting with a spin background that itself is ordering ferromagnetically . We consider the Hamiltonian:
$$H=-t\underset{i,j\sigma }{}c_{i\sigma }^{}c_{j\sigma }-J\underset{i,j}{}\{(1-\alpha )(S_i^xS_j^x+S_i^yS_j^y)+S_i^zS_j^z\}-J^{}\underset{i}{}\stackrel{}{\sigma }_i\stackrel{}{S}_i$$
$`(1)`$
where $`\stackrel{}{S}_i`$ refer to the spins of the magnetic ions in the system. $`c_{i\sigma }^{}`$ creates an electron with spin $`\sigma `$ on the site $`i`$, $`\stackrel{}{\sigma }_i=c_{i\alpha }^{}\stackrel{}{\sigma }_{\alpha \beta }c_{i\beta }`$ is the conduction spin operator and $`i,j`$ denotes a sum over nearest-neighbor pairs. We have added to the Heisenberg term a small Ising anisotropy $`\alpha =0.1`$ to improve convergence at low temperatures by enforcing a non-zero transition temperature $`T_c`$ . $`J^{}`$ is the coupling between a localized spin and a conduction electron. The qualitative behavior is well understood from previous mean-field analyses . Below a temperature $`T_p`$ a ferromagnetic polaron forms by self-trapping in a ferromagnetically aligned cluster of spins. As the temperature is lowered toward the Curie temperature $`T_c`$ the polaron grows in size and becomes more stable, because the small-q magnetic susceptibility is growing. Near and below $`T_c`$ the polaron will again become unstable because of the ease of motion in the background ferromagnetic spin alignment. Notice that this is quite different from the case of antiferromagnetic coupling of spins, where the polaron may remain stable well below the magnetic ordering temperature.
There are several deficiencies of the mean field treatment. The most pronounced is a continuum treatment of the spin background in which fluctuations are neglected. This approximation is such that the paramagnetic state leads to a vanishing exchange coupling, so that the electron is bound in a potential of depth $`J^{}\overline{S}`$, with $`\overline{S}`$ the average magnetization inside the polaron. As $`J^{}/t\to \mathrm{}`$, the potential well becomes arbitrarily deep. This is undoubtedly a severe overestimate.
In the paramagnet there will always be low energy states localized in the band tail with energies $`O(t)`$ above the ferromagnetic ground state, even in the strong coupling limit. Such low energy states are produced by random fluctuations of a few neighboring spins into near-alignment. But now one must distinguish between a self-trapped polaron and a localized band-tail state, if indeed such a distinction is appropriate.
In this paper we address the topic by a dynamical simulation of the Hamiltonian of Eq.(1). We show that polarons may be distinguished (when they exist) by a spectroscopic gap to band-like states, and that they move diffusively. As temperature is raised, the polaron level moves toward the band edge, and begins to resonate with states in the band tail, leading to a crossover to hopping conductivity.
We perform a classical Monte Carlo simulation (MC) in two dimensions on a square lattice of localized spins $`\stackrel{}{S}_i`$ which are treated as classical rotors characterized by the angles $`\theta _i`$ and $`\varphi _i`$. We use periodic boundary conditions on a two dimensional lattice of size up to $`30\times 30`$.
We place a single electron in the system in the lowest energy eigenstate of the Hamiltonian consisting of the first and third terms of Eq (1), using the instantaneous spin configuration for the classical spins $`\stackrel{}{S}_i`$. The resulting wave function leads to a local magnetic field proportional to $`|\psi (x,y)|^2`$, used in the next step of the MC spin simulation.
The standard Metropolis algorithm is used. Randomly chosen sites suffer a random change of spin orientation. Changes are allowed if the increment in energy $`\mathrm{\Delta }E`$ is such that the quantity $`\mathrm{exp}(-\mathrm{\Delta }E/KT)`$ is larger than a random number between $`0`$ and $`1`$. $`4000`$ reorientations per spin were made for an initial equilibration and $`3000`$ to calculate averages after each diagonalization. Each diagonalization defines our time step. Changing the number of spin reorientations between each diagonalization led to no significant change in either the magnetization or the binding energy. All quantities are given in units of $`J`$, the Heisenberg parameter. The hopping parameter $`t`$ is fixed at $`100`$ (estimated with the mean-field relation $`T_c\sim zJS^2`$ and the values of the parameters expected for the pyrochlores ), as we are interested in the behavior versus $`J^{}/t`$ and the temperature $`T`$. $`T_c=1.8`$ in these units.
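A minimal single-site Metropolis sketch of the spin update described above is given below. It is illustrative only, not the authors' code: the lattice size, temperature, step counts, and the choice of coupling the local field (proportional to the electron density) to the $`z`$ component of the spins are all assumptions made for the sketch.

```python
import math
import random

def metropolis_block(spins, L, T, J=1.0, alpha=0.1, h=None, nsteps=1000):
    """Single-site Metropolis updates for the anisotropic classical
    Heisenberg model on an L x L torus.  spins is a flat list of unit
    3-vectors; h, if given, is a local field along z for each site."""
    def neighbors(i):
        x, y = i % L, i // L
        return (((x + 1) % L) + y * L, ((x - 1) % L) + y * L,
                x + ((y + 1) % L) * L, x + ((y - 1) % L) * L)

    def bond(s1, s2):
        # -J * [(1 - alpha) * (Sx*Sx' + Sy*Sy') + Sz*Sz']
        return -J * ((1 - alpha) * (s1[0] * s2[0] + s1[1] * s2[1])
                     + s1[2] * s2[2])

    for _ in range(nsteps):
        i = random.randrange(L * L)
        z = random.uniform(-1.0, 1.0)              # propose a new orientation,
        phi = random.uniform(0.0, 2.0 * math.pi)   # uniform on the unit sphere
        r = math.sqrt(1.0 - z * z)
        new, old = (r * math.cos(phi), r * math.sin(phi), z), spins[i]
        dE = sum(bond(new, spins[j]) - bond(old, spins[j]) for j in neighbors(i))
        if h is not None:
            dE -= h[i] * (new[2] - old[2])
        if dE <= 0.0 or random.random() < math.exp(-dE / T):
            spins[i] = new                         # Metropolis acceptance
    return spins

random.seed(1)
L = 10
spins = [(0.0, 0.0, 1.0)] * (L * L)
metropolis_block(spins, L, T=0.2, nsteps=2000)
mz = sum(s[2] for s in spins) / (L * L)            # stays large well below T_c
```

In the full simulation the field `h` would be rebuilt from $`|\psi |^2`$ after each diagonalization step.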
This new approach allows us to calculate in a self-consistent way the wave function and the magnetic polarization over a large range of temperature. In particular, we can explore the region around $`T_c`$ where the mean-field treatment fails.
In Fig. 1 we plot $`|\psi (x,y)|^2`$ and the averaged local magnetization close to $`T_c`$. Visually, the existence of a magnetic polaron is clear, and there is substantial alignment of the moments in the vicinity of the carrier. Note that far from the influence of the wave-function the average magnetization is close to 0, so a large part of the spin-configuration space is explored, while for the spins close to the center of the wave-function there are few accepted spin flips.
Pictorial evidence is purely qualitative, and does not allow one to extract reliable estimates for the polaron size or binding energy, especially at higher temperatures and lower $`J^{}/t`$, when the polarons are smaller and fluctuating in time. More reliable evidence comes from the time-averaged electronic density of states (DOS), and of the excitation spectrum shown in Fig. 2. In the density of states (inset) a sharp low energy feature is pulled from the bottom of the band (only the lowest 1% of the spectrum is shown) that contains exactly one state. This is the bound polaron level. The level width comes from thermal fluctuations in the energy of the bound state, and the stability of the bound polaron is seen more clearly in the excitation spectrum (main figure) that demonstrates a clear gap corresponding to the electronic part of the binding energy $`E_p`$ of the polaron.
The continuum of excited states can be characterized as the band tail formed by the fluctuating paramagnetic background; the lowest energy states are produced by rare fluctuations of nearby spins into near-alignment. Consequently, the “gap” in the excitation spectrum is soft, and indeed statistically very rare states may occur at energies below the bound state of the polaron. We will discuss this below.
We estimate the electronic binding energy $`E_p`$ by the configurationally averaged gap $`\mathrm{\Delta }=E_1E_0`$ to the lowest excited state in our simulations (we have checked that the separation between excited states scales as $`1/N^2`$ so that these are true continuum states). In Fig. 3 we show the dependence of $`\mathrm{\Delta }`$ and the absolute value of the local magnetization $`M`$ (weighted with the wave-function) on $`T`$ and $`J^{}/t`$. $`M`$ is defined as
$$M=|\underset{i}{}\stackrel{}{S}_i^{}|$$
where $`\stackrel{}{S}_i^{}=|\psi (i)|^2\frac{\stackrel{}{S}_i}{|\stackrel{}{S}_i|}`$.
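The weighted magnetization defined above can be computed directly from the sampled spins and electron density; a small sketch (illustrative, assuming unit-length classical spins):

```python
def weighted_magnetization(psi2, spins):
    """M = |sum_i |psi_i|^2 * S_i / |S_i||, for unit-length spins."""
    mx = sum(p * s[0] for p, s in zip(psi2, spins))
    my = sum(p * s[1] for p, s in zip(psi2, spins))
    mz = sum(p * s[2] for p, s in zip(psi2, spins))
    return (mx * mx + my * my + mz * mz) ** 0.5

# a fully aligned region under a normalized |psi|^2 gives M = 1,
# while an alternating region averages to M = 0
aligned = weighted_magnetization([0.25] * 4, [(0.0, 0.0, 1.0)] * 4)
opposed = weighted_magnetization([0.25] * 4,
                                 [(0.0, 0.0, 1.0), (0.0, 0.0, -1.0)] * 2)
print(aligned, opposed)   # 1.0 0.0
```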
We are not taking into account thermal excitations of the quasiparticle, so our results are valid only when $`\mathrm{\Delta }`$ is bigger than $`T`$. This condition is fulfilled for all the values of $`\mathrm{\Delta }`$ shown in Fig. 3. As expected from previous analyses, the polaron binding energy increases as the temperature is lowered from high temperatures, as the thermal spin fluctuations are reduced. For large $`J^{}/t`$ we find a new behavior of $`\mathrm{\Delta }`$ not found within mean-field theory, namely, that it has a maximum at some temperature above $`T_c`$. The existence of a maximum can be understood in terms of the correlation length $`\xi `$. This quantity increases as we decrease the temperature above $`T_c`$. For very small $`\xi `$ (large $`T`$) it is very difficult to have a FM cluster for the spin-polaron to sit in, while for large $`\xi `$ the electron would rather spread out. This leads to an intermediate optimum $`\xi `$ for the existence of the polaron, which occurs close to $`T_c`$ but not necessarily at $`T_c`$.
The size of the polaron may be estimated from the separation between the first eigenvalue $`E_o`$ and the bottom of the band of the uniform ferromagnet. The bottom of the band in this case is given by $`\frac{3}{4}J^{}4t`$ and the separation should go roughly as $`\frac{1}{L_p^2}`$, where $`L_p`$ is the size of the polaron, if we assume saturation of the local magnetization. The general trend is that the size decreases as $`T`$ (above $`T_c`$) or $`J^{}`$ increases. From Fig. 3 we can also deduce that the ’window’ above $`T_c`$ where the spin-polaron is stable increases with $`J^{}/t`$. These two results are consistent with previous mean-field calculations .
Although the qualitative comparison is satisfactory, there are large quantitative differences that point to a strong reduction in the stability of the spin-polarons when fluctuations are taken into account. To be precise, we compare the binding energy at $`T=T_c`$. From the mean-field calculations of ref. the maximum possible value for $`\mathrm{\Delta }`$ is $`J^{}S`$, but it is not reached due to the kinetic energy that is lost with the formation of a polaron. In the present work $`\mathrm{\Delta }/J\sim 1`$ while $`J^{}S/J\sim 100`$. So binding energies are reduced by two orders of magnitude compared with the mean-field results because the loss of kinetic energy is not well taken into account in the latter.
The study of the stability conditions for a free magnetic polaron is interesting by itself, but the MC simulation also opens up the possibility of learning about its dynamics in a spin-fluctuating landscape. In Fig. 4 the probability of moving a distance $`r`$ (defined as the change in the expectation value of the electron position) for different MC times is shown. For time $`\stackrel{~}{t}=1`$ one observes dominant short distance motion with occasional rare hops over long distances. For longer times, the peak of the distribution moves out approximately with $`\sqrt{\stackrel{~}{t}}`$ as expected. This is the behavior expected of a diffusing object. The long-distance hops occur when unoccupied band tail states (which may be localized anywhere in the system) temporarily drop below the bound polaron level. In our algorithm - which automatically populates the lowest energy level - the electron moves to occupy this new state and restabilises the polaron there. These rare events eventually dominate the long-time behavior in our simulations. Of course, very long range hops are unphysical because the tunnelling probability will be exponentially small with distance, and the band-tail states survive in one place for only a short time. Hops to band-tail states will then be limited to some finite range. As temperature is raised, and $`E_p`$ is reduced, hops to band tail states become more frequent; we cross over to a regime of “passive advection” of the wavefunction in the fluctuating spin background .
Our results are fitted to a Gaussian in two dimensions plus a constant (to approximately take account of hops to band-tail states). The Gaussian dominates for the parameters of interest when a spin-polaron is well formed. The distribution scales with $`\sqrt{\stackrel{~}{t}}`$ as expected for diffusive motion. Hence we calculate the diffusion constant as $`D=_iP(r,\stackrel{~}{t}=1)r^2`$ and the mobility ($`\mu =\frac{D}{T}`$) of the spin polaron for different couplings and temperatures (see Fig. 3). The mobility decreases with temperature, and also with $`J^{}/t`$. The latter is reasonable, because larger polarons should diffuse more slowly. The temperature dependence is more surprising, and arises because $`D`$ itself is weakly T-dependent. Although the polaron size is decreasing with temperature (tending to increase $`D`$), this is counterbalanced by a reduced probability of favorable FM spin configurations near its boundary as $`T/T_c`$ is increased.
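The estimator for $`D`$ and the mobility $`\mu =D/T`$ can be illustrated on toy displacement samples (hypothetical numbers, not the simulation data):

```python
import random

def diffusion_constant(displacements):
    """D = sum_r P(r, t=1) r^2, estimated as the mean squared
    displacement per unit MC time from the sampled hops."""
    return sum(r * r for r in displacements) / len(displacements)

def mobility(D, T):
    return D / T

random.seed(2)
# toy hop lengths for one MC time step: Gaussian with rms 0.5 lattice units
hops = [abs(random.gauss(0.0, 0.5)) for _ in range(10000)]
D = diffusion_constant(hops)          # close to 0.5**2 = 0.25
mu = mobility(D, T=2.0)               # at fixed D, the mobility drops as T grows
```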
The Heisenberg term has been considered ferromagnetic to compare with the pyrochlores. The change to antiferromagnetic coupling is straightforward and is in fact the more common case in manganite perovskites , rare earth chalcogenides , or magnetic semiconductors . We find that in the case of an antiferromagnetic background the stability of a free magnetic polaron is enhanced, and will report these results elsewhere. These results are consistent with ref. , where a pseudogap in the DOS is associated with phase separation, that is, the large scale effect corresponding to spin-polarons.
In conclusion, our dynamical simulations have revealed a picture of the FMP in a ferromagnet above $`T_c`$ which is considerably more complex than that given by mean-field pictures. Provided the exchange coupling is large enough, FMPs are stable above $`T_c`$, but considerably more weakly bound than found by mean-field calculations. This by itself raises some doubts about the interpretation given earlier for the Mn pyrochlores, because we require an exchange coupling comparable to the bandwidth for a well-formed polaron with nearly saturated magnetization, whereas in the Mn pyrochlores this coupling is not expected to be large. We find that the motion of the polaron is diffusive, but as the temperature is raised the electron fluctuates out of the self-trapped configuration into band-tail states formed by opportunistic fluctuations of the moments.
Acknowledgments. M.J.C. would like to thank all the TCM group at the Cavendish, where the greater part of this work was done, for fruitful discussions on physics and for computing advice. Special thanks to M. Côté and L. Wegener. P.B.L. acknowledges financial support from EPSRC GR/L55346. L.B. and M.J.C. also acknowledge financial support from the MEC of Spain under Contract No. PB96-0085.
# Cosmic dust grains strike again
## Abstract
A detailed simulation of air showers produced by dust grains has been performed by means of the aires Monte Carlo code with the aim of comparing with experimental data. Our analysis indicates that extensive dust grain air showers must yet be regarded as highly speculative but they cannot be completely ruled out.
PACS number(s): 98.70.Sa, 98.70.-f
It has long been known that small solid particles (or dust grains) may be accelerated effectively at strong collisionless shock waves, notably those of supernovae. In addition, it was suggested years later that, even upon destruction, a substantial fraction of the super-thermal debris (those created in the pre-shock region) might re-enter the shock waves to undergo additional acceleration. These ideas have been recently strengthened by astronomical observations and by the analysis of primitive meteorites.
Despite some still open questions concerning the timescale for grain destruction in the source environment and, later on, during the trip to Earth, dust grains have from time to time been considered as possible progenitors of giant air showers. The earliest significant contribution we are aware of is due to Herlofson. He argued that relativistic dust grains could indeed produce extensive air showers, provided that in the first interaction each constituent nucleon contributes an energy above $`10^{14}`$ eV. The field then lay fallow for eighteen years, until Hayakawa tried to explain the exceptional event reported by Suga et al. Prompted by this proposal, several plausible suggestions came out in the 1970s.
Unfortunately, the values of the depth of shower maximum registered by the Haverah Park and Volcano Ranch experiments do not provide an exact picture of showers initiated by dust grains. Nonetheless, in view of the low statistics at the end of the spectrum and the wide variety of uncertainties in these experiments, one may be excused for reserving judgement. In order to increase the statistics significantly, the Southern Auger Observatory is currently under construction: a surface array (that will record the lateral and temporal distribution of shower particles) combined with an optical air fluorescence detector (which will observe the air shower development in the atmosphere). A major advantage of the optical device is precisely its capability of measuring the depth of maximum shower development.
Whatever the source(s) of the highest energy cosmic rays, the end of the spectrum remains unexplained by a unique consistent model . This may be due to the present lack of precision of our knowledge of these rare events, or perhaps, may imply that the origin and nature of ultrahigh energy cosmic rays have more than one explanation. If one assumes the latter hypothesis, relativistic specks of dust are likely to generate some of the events.
The above considerations have motivated us to re-examine the effects of giant dust grain air showers. Before proceeding to discuss in detail how these extensive showers evolve, it is useful to review some key characteristics of the mechanisms that could lead to the destruction of dust grains. Notice that the opacity of the interstellar medium will impose lower and upper limits on the value of the Lorentz factor $`\gamma `$ of the primaries impacting on the Earth's atmosphere.
An early investigation by Berezinsky and Prilutsky indicates that the grains turn out to be unstable with respect to the development of a fracture. On the one hand, subrelativistic dust grains disintegrate in collisions with heavy nuclei of the interstellar gas. On the other hand, electrical stress induced by the photoelectric effect in the light field of the Sun results in the mechanical disruption and subdivision of relativistic grains. Doubts have also been expressed about the prospects of surviving the heating arising from photoionization within the solar radiation field. The evaporation of the surface atoms and even the capture of electrons from the interstellar medium have been suggested as possible ways to reduce the accumulation of charge.
All in all, for the most favorable figures ($`\mathrm{log}\gamma \lesssim 2`$ and initial radii between 300 and 600 Å), the path length up to the first break-up turns out to be a few parsecs, i.e., much less than the characteristic size of the Milky Way. This entails that only RX J0852.0-4622, the closest young supernova remnant to Earth (distance $`\approx 200`$ pc), could be considered scarcely far away.
Let us now turn to the discussion of dust grain air showers (DGASs). Relativistic dust grains encountering the atmosphere will produce a composite nuclear cascade. Strictly speaking, each grain evaporates at an altitude of about 100 km and forms a shower of nuclei, which in turn produces many small showers spreading over a radius of several tens of meters, whose superposition is observed as an extensive air shower. For $`\gamma \gg 1`$, the internal forces between the atoms will be negligible. What is more, the nucleons in each incident nucleus will interact almost independently. Consequently, a shower produced by a dust grain containing $`n`$ nucleons may be modelled by the collection of $`n`$ nucleon showers, each with $`1/n^{\mathrm{th}}`$ of the grain energy. Thus, recalling that muon production in proton showers increases with energy as $`E^{0.85}`$, the total number of muons produced by the superposition of $`n`$ individual nucleon showers is $`N_\mu ^{\mathrm{DG}}\propto n(E/n)^{0.85}`$, or, comparing to proton showers, $`N_\mu ^{\mathrm{DG}}=n^{0.15}N_\mu ^p`$. Of course, these estimates are approximations, but without cumbersome numerical calculation one could naively select the event recorded at the Yakutsk array (May 7, 1989) as the best giant DGAS candidate in the whole cosmic ray data sample.
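The superposition estimate is easily put into numbers (a minimal sketch; the nucleon count used below is purely illustrative, not a value from the text):

```python
def muon_enhancement(n):
    # A grain of n nucleons at total energy E behaves as n nucleon showers
    # of energy E/n each, so N_mu^DG ~ n (E/n)^0.85 = n^0.15 N_mu^p(E).
    return n ** 0.15

n = 1.0e10                       # illustrative nucleon number for a dust grain
factor = muon_enhancement(n)     # ~ 31.6: muon-rich relative to a proton shower
```

The shallow exponent 0.15 is what makes the signature mild: even an enormous nucleon number only boosts the muon count by one to two orders of magnitude.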
In order to go a step further and test these qualitative considerations, we have performed several atmospheric cascade development simulations by means of the aires Monte Carlo code. Several sets of showers were generated, each one for different specks of metallic nature, i.e. with different Lorentz factors. The sample was distributed in the energy range of $`10^{18}`$ eV up to $`10^{20}`$ eV and was equally spread over zenith angles of 0° to 60° at the top of the atmosphere. All shower particles with energies above the following thresholds were tracked: 750 keV for gammas, 900 keV for electrons and positrons, 10 MeV for muons, 60 MeV for mesons and 120 MeV for nucleons and nuclei. The particles were injected at the top of the atmosphere (100 km a.s.l.), and the surface detector array was placed beneath different atmospheric densities selected from the altitudes of cosmic ray observatories (see Table I). sibyll routines were used to generate hadronic interactions above 200 GeV. Notice that while at c.m. energies around 14 TeV the kinematics and particle production of minijets might need further attention, for DGASs the energy of the first interaction is reduced to levels where the algorithms of sibyll accurately match experimental data. Results of these simulations were processed with the help of the aires analysis program. Secondary particles of different types, and all charged particles, in individual showers were sorted according to their distance $`R`$ from the shower axis.
In an attempt to examine our qualitative considerations we first restrict attention to the highest energy event reported by the group at Yakutsk. Next, details of the most relevant observables of the showers are given from the analysis of both the particles at ground level and those generated during the evolution of the cascade.
The Yakutsk experiment determines the shower energy by interpolating and/or extrapolating the measurements of $`\rho _{600}`$ (the charged particle density at 600 m from the shower core), a single quantity which is known from shower simulations to correlate well with the total energy for all primary particle types. In the case of the event detected in May 1989, the triggering of 50 ground-based scintillation detectors at 200 - 2000 m from the shower core allowed a reliable estimate of $`\rho _{600}=54`$ m<sup>-2</sup>, with the shower axis inclination given by cos $`\theta `$ = 0.52. Examining the lateral distribution of our simulation sample, we note that showers initiated by relativistic particles (log $`\gamma `$ = 4 - 3.8) with energies in the range 36 - 38 EeV and orientation $`\theta =59^{\circ }`$ are compatible with such a value of $`\rho _{600}`$. It is important to stress that the lower and upper energy bounds are compatible with the event of interest within 2 $`\sigma `$; however, for these high Lorentz factors the interstellar medium is extremely opaque to dust grains (again the reader is referred to Ref. ). In Fig. 1 we show the lateral distributions of muons and charged particles. It is easily seen that the predicted fluxes at ground level are partially consistent with those detected at the giant array (see Fig. 2).
In what follows we shall discuss the main properties of DGASs. The atmospheric depth $`X_{\mathrm{max}}`$ at which the shower reaches its maximum number of secondary particles is the standard observable used to describe the speed of the shower development. In view of the superposition principle this quantity is practically independent of $`n`$, depending only upon $`\gamma `$. For this reason the altitude of $`X_{\mathrm{max}}`$ is generally used to find the most likely mass of each primary. The behavior of $`X_{\mathrm{max}}`$ with increasing $`\gamma `$ in vertical showers has already been discussed in detail elsewhere. Here we shall extend Linsley’s analyses by studying the dependence of the shower evolution on the incident angle. In the last panel of Fig. 3 we show the numerical results from $`10^{19}`$ eV cascades initiated by nucleons with log $`\gamma `$ = 3 and different primary zenith angles. It can be observed that, at the same total energy, an air shower induced by particles with oblique incidence develops faster than a vertical shower. Since muons are typically leading particles in the cascade, the position of $`X_{\mathrm{max}}`$ is also related to the relative proportion of muons in the shower. To illustrate this last point, the resulting fluxes at ground level from the same events are shown in the first two panels of Fig. 3. As expected, the radial distribution of the shower particles of inclined primaries (dominated mainly by muons) is flatter than the distribution of a vertical shower. The opposite behavior points to the dominance of electrons and positrons near the core. The density of these charged particles, however, falls off rapidly with increasing core distance, dimming the electromagnetic cascade.
To analyse the sensitivity of ground arrays to the shower parameters, the radial variation of different groups of secondary particles (we have considered separately $`\gamma `$, $`e^+e^{-}`$, and $`\mu ^+\mu ^{-}`$) was studied at two observation levels. In Fig. 4 we show the last steps of the evolution of the lateral distribution along the longitudinal shower path. It can be seen that there is practically no change in the radial variation. Thus, we conclude that the flux of particles has no intrinsic sensitivity to the observation altitude up to 1400 m.
Coming back to the general features of the longitudinal development, the last exercise is to analyse the dependence of the atmospheric shower profile on $`\gamma `$. To this end we carried out numerical simulations of vertical showers induced by primaries with the same mass and different Lorentz factors, namely log $`\gamma `$ = 2.5 to 3.2. The results are shown in the last panel of Fig. 4. As expected, for the same primary mass the number of secondary particles increases with rising $`\gamma `$, but it is interesting to note that both cosmic ray cascades present similar shapes and peak around the same atmospheric depth, i.e., $`X_{\mathrm{max}}=350\pm 47`$ g/cm<sup>2</sup> (consistent with the analysis of ).
Putting all this together one can draw the following tentative conclusions:
(i) The Fly’s Eye collaboration has presented evidence indicating that typical extensive air showers above $`\sim `$ 1 EeV develop at a rate which is consistent with a steep power law spectrum of heavy nuclei that is overtaken at higher energies by a flatter spectrum of protons. The group of the Akeno Giant Air Shower Array has reported 7 events above $`10^{20}`$ eV up to August 1998. In general, the muon component agrees with the expectation extrapolated from lower energies. However, the highest energy event recorded at Yakutsk ($`E=1.1\pm 0.4\times 10^{20}`$ eV) seems to be a rare exception. It can be readily noticed from Fig. 1 that the almost completely muonic nature of the particles detected may be explained by a 36 - 38 EeV DGAS. Besides, if this speculation is true, the lack of obvious candidate sources at close distances still leaves the origin of this event a mystery.
(ii) The dependence of the DGAS longitudinal profile on $`\gamma `$ in the studied range is rather weak. The shower disc becomes slightly flatter and thicker with decreasing primary energy and/or increasing zenith angle. This is mainly due to the rising fraction of muons among all charged particles in inclined showers.
Dust is a very widespread component of diffuse matter in the Galaxy, with apparently active participation in charged particle acceleration. If such acceleration is possible, dust grains may play an important role in determining the composition of galactic cosmic rays. The question of whether the opacity of the interstellar medium could prevent relativistic dust grains from reaching the Earth is as yet undecided. Observation of giant DGASs would give an experimental and definitive answer to this question. We recommend that Auger data be analysed for evidence of DGASs.
I am grateful to D. Gómez Dumm for helpful comments on the original manuscript and to F. Zyserman for computational aid. I am also indebted to the Yakutsk Collaboration, and particularly to M. I. Pravdin, for permission to reproduce Fig. 2. This work has been partially supported by CONICET. The author is on leave of absence from UNLP.
# X-ray observations of the ’composite’ Seyfert/star-forming galaxy IRAS00317-2142
## 1 INTRODUCTION
The ROSAT All-Sky Survey, RASS (Voges et al. 1996), provided a wealth of information on the X-ray properties of nearby AGN. The cross-correlation of the RASS with the IRAS Point Source Catalogue (Boller et al. 1995, 1998) revealed an interesting new class of objects, named ’composites’ by Moran et al. (1996), with high luminosities $`L_x\approx 10^{42-43}`$ $`\mathrm{erg}\,\mathrm{s}^{-1}`$. Their main characteristic is that their \[OIII\] lines are broader than all other narrow lines, forbidden or permitted (Moran et al. 1996). The diagnostic emission line ratio diagrams (Veilleux & Osterbrock 1987) classify these objects as star-forming galaxies. Yet, some of the ’composites’ present broad $`H_\alpha `$ wings, suggesting the presence of a weak or obscured AGN. The origin of the powerful X-ray emission in these objects remains unknown. Interestingly, the ’composites’ show optical spectra very similar to those of the narrow-line galaxies (NLXGs) detected in deep ROSAT surveys (eg Boyle et al. 1995). The NLXGs have narrow line optical spectra, some with weak broad wings. Yet, their powerful X-ray emission, $`L_x\approx 10^{42-43}`$ $`\mathrm{erg}\,\mathrm{s}^{-1}`$, strongly suggests the presence of a relatively weak or a heavily obscured AGN.
The most luminous object in the ’composite’ sample, IRAS00317-2142, has a luminosity of $`L_x\approx 10^{43}`$ $`\mathrm{erg}\,\mathrm{s}^{-1}`$ (0.1-2 keV) at a redshift of z=0.0268 (Moran et al. 1996). This galaxy belongs to a small group of galaxies, Hickson 4 (Ebeling et al. 1994). Its optical spectrum has been discussed in detail by Coziol et al. (1993). The diagnostic emission line ratios (eg $`H_\alpha /[NII]`$ vs $`H_\beta /[OIII]`$) classify it as an HII galaxy. Nevertheless, its high X-ray luminosity, as well as the presence of a faint broad $`H_\alpha `$ wing in its optical spectrum (Coziol et al. 1993), would clearly assign an AGN classification to this object. Here, I present an analysis of the X-ray properties of IRAS00317-2142 using data from both the ASCA and ROSAT missions. The goals are to understand the origin of the X-ray emission in this enigmatic class of objects, its possible relation to the NLXGs present in the deep ROSAT surveys and, finally, to constrain the ’composite’ galaxy contribution to the X-ray background.
## 2 OBSERVATIONS AND DATA REDUCTION
IRAS00317-2142 was observed with ASCA (Tanaka, Inoue & Holt 1994) between the 11th and 12th of December 1995. I have used the “Rev2” processed data from the HEASARC database at the Goddard Space Flight Center. For the selection criteria applied to Rev2 data, see the ASCA Data ABC guide (Yaqoob 1997). Data reduction was performed using FTOOLS v4.2. The net exposure time is about 40 ksec and 37 ksec for the GIS and the SIS detectors respectively. The two GIS and the two SIS detectors on board ASCA cover an energy range of roughly 0.8-10 keV and 0.5-10 keV respectively. The energy resolution of the SIS CCD detectors is 2 per cent at 6 keV while that of the GIS detectors is 8 per cent at the same energy. For more details on the ASCA detectors see Tanaka et al. (1994). A circular extraction cell of 2 arcminute radius has been used for the source. Background counts were estimated from source-free regions on the same images. The observed flux in the 2-10 keV band is $`f_{2-10\mathrm{keV}}\approx 8\times 10^{-13}`$ $`\mathrm{erg}\,\mathrm{s}^{-1}\,\mathrm{cm}^{-2}`$, while that in the 1-2 keV band is $`f_{1-2\mathrm{keV}}=2.5\times 10^{-13}`$ $`\mathrm{erg}\,\mathrm{s}^{-1}\,\mathrm{cm}^{-2}`$; the fluxes are estimated using the best-fit power law model below ($`\mathrm{\Gamma }=1.8`$).
ROSAT observed IRAS00317-2142 on two occasions. It was first detected during the RASS (exposure time 340 s). Its RASS flux is $`2.7\pm 0.46\times 10^{-12}`$ $`\mathrm{erg}\,\mathrm{s}^{-1}\,\mathrm{cm}^{-2}`$ (0.1-2 keV) (Moran et al. 1996). It was also observed by the ROSAT PSPC as a target during a pointed observation between the 22nd and 23rd of June 1992 (exposure time 9.3 ksec). The derived flux was $`2.7\pm 0.05\times 10^{-12}`$ $`\mathrm{erg}\,\mathrm{s}^{-1}\,\mathrm{cm}^{-2}`$ in the 0.1-2 keV band, in excellent agreement with the RASS flux. In the 1-2 keV band the flux is $`7.3\times 10^{-13}`$ $`\mathrm{erg}\,\mathrm{s}^{-1}\,\mathrm{cm}^{-2}`$, a factor of three above that of ASCA in the same band. There is no evidence for extension in the pointed ROSAT PSPC image (eg Pildis, Bregman & Evrard 1995), suggesting that the bulk of the X-ray emission originates in IRAS00317-2142 rather than being diffuse emission from hot intergalactic gas in the galaxy group. Here, I re-analyse the ROSAT data in order to make comparisons with the ASCA spectral fits and to perform joint fits with the ASCA data.
Throughout this paper I adopt $`H_0=50\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$ and $`q_o=0`$. For the spectral fitting I use XSPEC v.10. I bin the data so that there are at least 20 counts per bin (source and background). Quoted errors on the best-fitting spectral parameters are 90 per cent confidence regions for one parameter of interest.
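With these conventions, the quoted luminosities follow from the observed fluxes via the $`q_o=0`$ luminosity distance, $`d_L=(c/H_0)\,z\,(1+z/2)`$. A quick sketch (the constants are standard values; the flux is the ROSAT 0.1-2 keV value from Section 2):

```python
import math

C_KM_S = 299792.458        # speed of light, km/s
H0 = 50.0                  # km/s/Mpc, as adopted in the text
MPC_CM = 3.0857e24         # cm per Mpc

def lum_distance_cm(z):
    # Luminosity distance for q0 = 0: d_L = (c/H0) z (1 + z/2).
    return (C_KM_S / H0) * z * (1.0 + 0.5 * z) * MPC_CM

def luminosity(flux_cgs, z):
    d = lum_distance_cm(z)
    return 4.0 * math.pi * d * d * flux_cgs

# ROSAT 0.1-2 keV flux of IRAS00317-2142: 2.7e-12 erg/s/cm^2 at z = 0.0268
L_x = luminosity(2.7e-12, 0.0268)
```

This gives $`L_x\approx 9\times 10^{42}`$ $`\mathrm{erg}\,\mathrm{s}^{-1}`$, i.e. the $`L_x\approx 10^{43}`$ $`\mathrm{erg}\,\mathrm{s}^{-1}`$ quoted in the Introduction.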
## 3 SPECTRAL ANALYSIS
### 3.1 The ASCA spectral fits
I first fit a single power-law to the ASCA data. These are consistent with zero intrinsic hydrogen column density, $`N_H`$. Hence, hereafter, I have fixed the column to the Galactic column density ($`N_H\approx 1.5\times 10^{20}`$ $`\mathrm{cm}^{-2}`$). The results of the ASCA spectral fits are given in Table 1. Parameters listed without error bars were held fixed during the fit. The power-law slope, $`\mathrm{\Gamma }\approx 1.8`$, is consistent with the canonical spectral index of AGN (eg Nandra & Pounds 1994). Although this simple model provides an acceptable fit, $`\chi ^2=118.2/114`$ degrees of freedom (dof), I have also added a Gaussian line component to the fit (the energy and the line width fixed at 6.4 keV and 0.01 keV respectively), as this is a common feature in AGN spectra. The additional component marginally improves the fit ($`\mathrm{\Delta }\chi ^2=2.4`$ for one additional parameter); this is statistically significant at only the 90 per cent confidence level. The 90 per cent upper limit on the equivalent width is 0.9 keV. In Fig. 1 the ASCA spectrum together with the best fit power-law model is shown; the data residuals from the model are also plotted. The data have been rebinned in the plot for clarity. A Raymond-Smith (RS) thermal model results in a worse fit ($`\chi ^2=128.0/113`$). The temperature derived (kT=5.8 keV) is reminiscent of nearby normal galaxies. Next, I fit an ionised warm absorber model (eg Brandt, Fabian & Pounds 1996) in addition to the Galactic column density. Indeed, warm absorbers are detected in more than 50 per cent of Seyfert 1s (Brandt et al. 1999). The temperature of the absorber is fixed at $`T=10^5\mathrm{K}`$ (Brandt et al. 1999). The best fit ’warm’ column density is $`N_H\approx 10^{22}`$ $`\mathrm{cm}^{-2}`$, while the ionisation parameter is practically unconstrained. However, $`\mathrm{\Delta }\chi ^2\approx 2.4`$ for two additional parameters and thus the warm absorber model does not represent a statistically significant improvement.
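The significance quoted for the line ($`\mathrm{\Delta }\chi ^2=2.4`$ for one extra parameter) can be checked against the $`\chi ^2`$ distribution with one degree of freedom, whose cumulative distribution is $`\mathrm{erf}(\sqrt{x/2})`$. A quick sketch, independent of XSPEC:

```python
import math

def confidence_one_param(delta_chi2):
    # Chi-square CDF with 1 dof: P(X < x) = erf(sqrt(x / 2)).
    return math.erf(math.sqrt(delta_chi2 / 2.0))

c_line = confidence_one_param(2.4)    # ~ 0.88, i.e. roughly the 90 per cent level
c_90 = confidence_one_param(2.71)     # 2.71 is the standard 90 per cent threshold
```

This single-parameter criterion is the same convention used for the quoted 90 per cent confidence regions on the fitted parameters.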
I also attempt to fit a more complicated model with both a power-law and a RS component. This is because the optical spectrum strongly suggests the presence of a strong star-forming component. The spectral fit yields $`\chi ^2=115.3/112`$; the inclusion of the additional RS component is not statistically significant ($`<1\sigma `$). I derive a spectral index of $`\mathrm{\Gamma }=1.7_{-0.10}^{+0.10}`$. The RS component has a temperature of kT$`\approx `$ 0.2 keV, lower than but consistent with those of the star-forming regions in nearby galaxies (eg Read & Ponman 1997). The abundance remains practically unconstrained and was thus fixed to the solar value ($`Z=1`$). The luminosity of the RS component is $`5\times 10^{41}`$ $`\mathrm{erg}\,\mathrm{s}^{-1}`$, or about 25 per cent of the total luminosity in the 0.5-2 keV band. Finally, a power-law plus RS model is fit in which the obscuring column densities of the two components are allowed to differ. For example, in many Seyfert-2s the power-law component is heavily obscured while the star-forming component lies outside the obscuring screen and is relatively unobscured. However, both best fit columns are close to the Galactic value, disfavouring the above scenario.
### 3.2 ASCA and ROSAT fits
Due to the low energy coverage of the ROSAT PSPC (0.1-2 keV) and its high effective area, it is quite instructive to fit the ROSAT data as well. Nevertheless, the energy resolution of the ROSAT PSPC is very limited ($`\mathrm{\Delta }E/E\approx `$ 50 per cent at 1 keV). RS spectra have been fit to the ROSAT data by Saracco & Ciliegi (1995) and Pildis et al. (1994). They derive a temperature of kT$`\approx `$ 1 keV. Instead, I fit a power-law spectrum, which is the standard model for AGN spectra, at least in the narrow ROSAT band. I obtain a spectral index of $`\mathrm{\Gamma }=2.63_{-0.16}^{+0.04}`$, much steeper than the ASCA fit. The column density is $`N_H=3.4_{-0.4}^{+1.0}\times 10^{20}`$ $`\mathrm{cm}^{-2}`$, higher than the Galactic column ($`\chi ^2=82.1/82`$ dof). Finally, for the sake of completeness, I perform joint fits to the ASCA and the ROSAT data. However, bear in mind that this joint analysis has to be viewed with great caution. Recently, Iwasawa, Nandra & Fabian (1999) made spectral fits to simultaneous ASCA and ROSAT data of NGC5548. They demonstrate that the power-law fits may differ by as much as $`\mathrm{\Delta }\mathrm{\Gamma }\approx 0.4`$ even in the common ROSAT/ASCA 0.5-2 keV band. The reason for this large discrepancy may be related to uncertainties in the calibration of both the ASCA and ROSAT detectors. A single power-law fit to the combined ASCA and ROSAT data of IRAS00317-2142 yields $`\mathrm{\Gamma }=2.00_{-0.07}^{+0.07}`$, $`N_H=1.9\pm 0.2\times 10^{20}`$ $`\mathrm{cm}^{-2}`$ ($`\chi ^2=227.2/197`$ dof). The power-law normalization is allowed to vary freely between the ROSAT and the ASCA observation epochs. Next, a RS component is added to the model. The temperature is constrained to have an upper limit of 1 keV, as otherwise the resulting temperature becomes unrealistically high ($`\approx `$ 15 keV). The best fit temperature is $`\mathrm{kT}=0.07_{-0.01}^{+0.02}`$ keV while the power-law slope is $`\mathrm{\Gamma }=1.9_{-0.03}^{+0.12}`$.
Despite the inclusion of the additional RS component, the fit did not improve ($`\chi ^2=231.4/195`$ dof).
## 4 DISCUSSION
The detection of variability between the ASCA and the ROSAT data clearly suggests an AGN origin for the X-ray emission. The ASCA data are well represented by a single power-law, $`\mathrm{\Gamma }\approx 1.8`$, with no absorption above the Galactic $`N_H\approx 1.5\times 10^{20}`$ $`\mathrm{cm}^{-2}`$. Hence, the X-ray spectrum alone suggests a Seyfert-1 type AGN. This is again compatible with the high X-ray luminosity of this object, $`L_x\approx 10^{43}`$ $`\mathrm{erg}\,\mathrm{s}^{-1}`$, during the ROSAT observation. However, this interpretation stands in stark contrast with the optical spectrum, which is indicative of a low luminosity or obscured AGN.
Then a few possibilities arise for the nature of the AGN in IRAS00317-2142. We could be viewing a Seyfert-1 nucleus overwhelmed in the optical by the emission of a powerful star-forming galaxy. In the X-ray band the emission from the Seyfert-1 nucleus should dominate over that arising from star-forming processes. Still, according to the ASCA spectral fits, the luminosity of the RS component alone is $`5\times 10^{41}`$ $`\mathrm{erg}\,\mathrm{s}^{-1}`$; this classifies IRAS00317-2142 as one of the most powerful X-ray star-forming galaxies known, with a luminosity more than an order of magnitude above that of M82 (Ptak et al. 1997). The above scenario for the composites, which was originally proposed by Moran et al. (1996), can be tested by comparing the level of the $`H_\alpha `$ emission with the hard X-ray luminosity. Indeed, Ward et al. (1988) found a strong correlation between the two quantities in a sample of IRAS selected Seyfert-1 galaxies. According to the scenario above, the composites should then follow the same relation between $`L_x`$ and broad $`L(H_\alpha )`$. The luminosity of the broad $`H_\alpha `$ component in our object is about half of the total $`H_\alpha `$ luminosity (Moran et al. 1996). The observed broad $`H_\alpha `$ luminosity is then $`L(H_\alpha )\approx 2\times 10^{41}`$ $`\mathrm{erg}\,\mathrm{s}^{-1}`$ (Coziol et al. 1993). This roughly translates to an X-ray luminosity of $`L_x\approx 4\times 10^{42}`$ $`\mathrm{erg}\,\mathrm{s}^{-1}`$ according to Ward et al. (1988), not far off the observed X-ray luminosity of $`L_x\approx 2\times 10^{42}`$ $`\mathrm{erg}\,\mathrm{s}^{-1}`$. Of course, the long term X-ray variability observed introduces some level of uncertainty into the above test. Note that Bassani et al. (1999) reported the detection of a few Seyfert galaxies which possibly have a weak or absent Broad Line Region. Our object could in principle belong to this category. However, the $`L_x/L(H_\alpha )`$ ratio rather argues against this hypothesis.
Again, the possibility that the X-ray emission remains relatively unabsorbed while the optical suffers from additional obscuration cannot be ruled out. The Balmer decrement in our object (using the fluxes of the narrow $`H_\alpha `$ and $`H_\beta `$ lines from Moran et al. 1996) is about 5, corresponding to a column of $`N_H\approx 10^{21}`$ $`\mathrm{cm}^{-2}`$ (Bohlin et al. 1978), i.e. an order of magnitude higher than the column derived above in the case of cold absorption. The reason for this discrepancy is not obvious. One possibility is that the absorbing column is ionised: indeed, in the case of a warm absorber the derived column is $`N_H\approx 10^{22}`$ $`\mathrm{cm}^{-2}`$. Then some fraction of the ’warm’ column should be located further away from the nucleus, where the narrow $`H_\alpha `$ and $`H_\beta `$ lines originate.
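The quoted column can be reproduced to order of magnitude from the Balmer decrement. A sketch follows; the extinction-curve coefficients, the case-B intrinsic ratio of 2.86, and the Bohlin et al. gas-to-dust ratio are standard textbook values assumed here, not numbers taken from this paper:

```python
import math

K_HBETA, K_HALPHA = 3.61, 2.53   # Galactic extinction-curve values (assumed)
R_INTRINSIC = 2.86               # case-B intrinsic Halpha/Hbeta ratio
NH_PER_EBV = 5.8e21              # Bohlin et al. (1978) N_H / E(B-V), cm^-2 mag^-1

def column_from_balmer(ratio_obs):
    # Reddening from the observed decrement, then gas column from E(B-V).
    ebv = 2.5 / (K_HBETA - K_HALPHA) * math.log10(ratio_obs / R_INTRINSIC)
    return NH_PER_EBV * ebv

N_H = column_from_balmer(5.0)    # a few times 10^21 cm^-2, of the order quoted
```

The precise value depends on the extinction curve adopted, but any reasonable choice lands well above the Galactic column, which is the point of the comparison.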
Finally, IRAS00317-2142 could be a heavily obscured (Compton thick) AGN like eg NGC1068. Then a large fraction of the X-ray emission could be due to X-rays scattered off a warm electron medium situated well above the obscuring torus. The broad $`H_\alpha `$ wing observed could arise from scattered radiation. However, the narrow and broad $`H_\alpha `$ components have comparable fluxes (Moran et al. 1996), arguing against this interpretation. Note that the peculiar combination of an unabsorbed X-ray spectrum with a narrow-line dominated, obscured optical spectrum was also encountered in NGC3147 (Ptak et al. 1996). This object has a type-2 nucleus according to its optical spectrum, which presents a relatively broad \[NII\] line (FWHM$`\approx `$400 $`\mathrm{km}\,\mathrm{s}^{-1}`$). The X-ray spectrum of NGC3147 is again very similar to that of our object, as it shows no intrinsic absorption and a steep spectral index, $`\mathrm{\Gamma }\approx 1.8`$. Its observed X-ray luminosity is far below ($`L_x\approx 10^{41}`$ $`\mathrm{erg}\,\mathrm{s}^{-1}`$) that of our object; an Fe line at 6.4 keV (rest-frame) was clearly detected in NGC3147 (Ptak et al. 1996) with an equivalent width greater than 130 eV. Ptak et al. (1996) favoured a scenario where NGC3147 harbours an obscured AGN. If indeed IRAS00317-2142 is a heavily obscured AGN, the intrinsic unabsorbed luminosity of this object should exceed $`10^{45}`$ $`\mathrm{erg}\,\mathrm{s}^{-1}`$ (assuming that we observe only a few per cent of light scattered along our line of sight). The column density should be $`N_H>10^{24}`$ $`\mathrm{cm}^{-2}`$ in order to absorb all transmitted photons with energies up to 7 keV. Such a large column should produce a high equivalent width Fe line, as the cold obscuring material is photoionised by the incident hard X-ray photons. For an obscuring column of $`10^{24}`$ $`\mathrm{cm}^{-2}`$ we expect an Fe line equivalent width of $`\approx `$ 1 keV (see Fig. 3 of Ptak et al. 1996).
This value is consistent with our 90 per cent upper limit on the Fe line equivalent width (0.9 keV). Probably the most stringent constraints on the AGN nature come from the X-ray to optical emission line ratios. Maiolino et al. (1998) argue that, as the \[OIII\] emission comes from scales much larger than the obscuring torus, the $`f_x/f_{[OIII]}`$ ratio provides a powerful diagnostic of the nuclear activity. Hence, an obscured AGN should have a very low $`f_x/f_{[OIII]}`$ ratio, as the nuclear X-ray emission is obscured while the \[OIII\] is not. The ratio of the X-ray (2-10 keV) to the \[OIII\] flux (corrected for absorption using the formula of Bassani et al. 1999) is about 2.5 for our object. This is more typical of unobscured AGN (Maiolino et al. 1998), arguing against the obscured AGN scenario.
Additional constraints on the nature of the AGN could, in principle, be provided by the observed variability. Seyfert-1 objects should present rapid variability, with the amplitude increasing as we go to lower luminosities (eg Nandra et al. 1997, Ptak et al. 1998). In contrast, obscured AGN show little or no variability, as a large fraction of the X-ray emission comes from re-processed radiation far away from the nucleus. However, in our case the ASCA light curves have poor photon statistics. Both the GIS and the SIS data (4 ksec bins) cannot rule out a constant count rate even at the 68 per cent confidence level. Therefore it is difficult to differentiate, on the basis of variability alone, between a Seyfert-1 and an obscured AGN scenario.
Regardless of the origin of the X-ray emission, the enigmatic composite objects present many similarities with the NLXGs detected in abundance in deep ROSAT surveys (eg Boyle et al. 1995). Many of these present clear-cut star-forming galaxy spectra. However, their high X-ray luminosities immediately rule out a star-forming galaxy origin for the X-ray emission. Only ROSAT data exist for these faint objects and therefore their X-ray spectra still remain largely unconstrained (eg Almaini et al. 1996). Thus it is difficult to compare the X-ray properties of NLXGs with those of the ’composites’. However, it is interesting to note that if the ROSAT NLXGs have X-ray spectra similar to that of IRAS00317-2142, they would make only a small contribution to the X-ray background. Indeed, the X-ray spectrum of IRAS00317-2142 in the 2-10 keV band is much steeper than that of the X-ray background in the same band (Gendreau et al. 1995).
## 5 CONCLUSIONS
The X-ray observations shed more light on the nature of the composite objects. The detection of strong variability (a factor of three) between the ASCA and the ROSAT observations clearly suggests an AGN origin for most of the X-ray emission. Some fraction of the soft 0.1-2 keV X-ray emission can still be attributed to a strong star-forming component ($`L_x\sim 10^{41}`$-$`10^{42}`$ $`\mathrm{erg}\mathrm{s}^{-1}`$). Nevertheless, the exact AGN classification remains uncertain. The X-ray spectrum has a steep power-law slope ($`\mathrm{\Gamma }=1.8`$) and presents no absorption above the Galactic value. Hence the X-ray spectrum is clearly suggestive of a Seyfert-1 interpretation. However, the optical spectrum shows only a weak $`H_\alpha `$ component and is therefore more reminiscent of an obscured Seyfert galaxy. The discrepancy between the optical and the X-ray spectrum can be alleviated if we assume that, while the optical spectrum is influenced by the strong star-forming component, the AGN component produces most of the X-ray emission; alternatively, it is possible that our object has a dusty ionised absorber which selectively obscures the optical emission. Finally, the possibility that our object is Compton thick (possibly like NGC3147) is disfavoured by the large value of the $`f_x/f_{[OIII]}`$ ratio, which is more typical of unobscured AGN.
Future imaging observations with Chandra and high throughput spectroscopic observations with the XMM mission will provide a conclusive test of the above hypotheses.
## 6 Acknowledgments
I am grateful to the referee R. Maiolino for many useful comments and suggestions. I would like to thank A. Zezas for extracting the ASCA light curves. It is a pleasure to thank A. Ptak, A. Zezas, I. Papadakis and G.C. Stewart for many useful discussions. This research has made use of data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center. |
# On non-projective normal surfaces
## 1. Introduction
The aim of this note is to construct some simple examples of non-projective normal surfaces, and discuss the degeneration of the Neron-Severi group and its intersection form. Here the word surface refers to a 2-dimensional proper algebraic scheme.
The criterion of Zariski \[3, Cor. 4, p. 328\] tells us that a normal surface $`Z`$ is projective if and only if the set of points $`z\in Z`$ whose local ring $`𝒪_{Z,z}`$ is not $`\mathbb{Q}`$-factorial allows an affine open neighborhood. In particular, every resolution of singularities $`X\to Z`$ is projective. In order to construct $`Z`$, we therefore have to start with a regular surface $`X`$ and contract at least two suitable connected curves $`R_i\subset X`$.
Our surfaces will be modifications of $`Y=P^1\times C`$, where $`C`$ is a smooth curve of genus $`g>0`$; the modifications will replace some fibres $`F_i\subset Y`$ over $`P^1`$ with rational curves, thereby introducing non-rational singularities and turning lots of Cartier divisors into Weil divisors.
The Neron-Severi group $`\mathrm{NS}(Z)=\mathrm{Pic}(Z)/\mathrm{Pic}^{0}(Z)`$ of a non-projective surface might become rather small, and its intersection form might degenerate. Our first example has $`\mathrm{NS}(Z)=\mathbb{Z}`$ and trivial intersection form. Our second example even has $`\mathrm{Pic}(Z)=0`$. Our third example allows a birational morphism $`Z\to S`$ to a projective surface.
These examples provide answers for two questions concerning surfaces posed by Kleiman \[2, XII, rem. 7.2, p. 664\]. He has asked whether or not the intersection form on the group $`\mathrm{N}(X)=\mathrm{Pic}(X)/\mathrm{Pic}^\tau (X)`$ of numerical classes is always non-degenerate, and the first example shows that the answer is negative. Here $`\mathrm{Pic}^\tau (X)`$ is the subgroup of all invertible sheaves $`\mathcal{L}`$ with $`\mathrm{deg}(\mathcal{L}_A)=0`$ for all curves $`A\subset X`$. He also has asked whether or not a normal surface with an invertible sheaf $`\mathcal{L}`$ satisfying $`c_1^2(\mathcal{L})>0`$ is necessarily projective, and the third example gives a negative answer. This should be compared with a result on smooth complex analytic surfaces \[1, IV, 5.2, p. 126\], which says that such a surface allowing an invertible sheaf with $`c_1^2(\mathcal{L})>0`$ is necessarily a projective scheme.
In the following we will work over an arbitrary ground field $`k`$ with uncountably many elements. It is not difficult to see that a normal surface over a finite ground field is always projective. It would be interesting to extend our constructions to countable fields.
## 2. A surface without ample divisors
In this section we will construct a normal surface $`Z`$ which is not embeddable into any projective space. The idea is to choose a suitable smooth curve $`C`$ of genus $`g>0`$ and perform certain modifications on $`Y=P^1\times C`$ called mutations, thereby destroying many Cartier divisors.
###### (2.1)
We start by choosing a smooth curve $`C`$ such that $`\mathrm{Pic}(C)`$ contains uncountably many different classes of rational points $`c\in C`$. For example, let $`C`$ be an elliptic curve with at least two rational points. We obtain a Galois covering $`C\to P^1`$ of degree 2 such that the corresponding involution $`i:C\to C`$ interchanges the two rational points. Considering its graph we conclude that $`i`$ has at most finitely many fixed points; since there are uncountably many rational points on the projective line, the set $`C(k)`$ of rational points is also uncountable.
Since the group scheme of $`n`$-torsion points in the Picard scheme $`\mathrm{Pic}_{C/k}`$ is finite, the torsion subgroup of $`\mathrm{Pic}(C)`$ must be countable. Since $`C`$ is a curve of genus $`g>0`$, any two different rational points $`c_1,c_2\in C`$ are not linearly equivalent, otherwise there would be a morphism $`C\to P^1`$ of degree 1. We conclude that $`\mathrm{Pic}(C)`$ contains uncountably many classes of rational points.
###### (2.2)
We will examine the product ruled surface $`Y=P^1\times C`$, and the corresponding projections $`\mathrm{pr}_1:Y\to P^1`$ and $`\mathrm{pr}_2:Y\to C`$. Let $`y\in Y`$ be a rational point, $`f:X\to Y`$ the blow-up of this point, $`E\subset X`$ the exceptional divisor, and $`R\subset X`$ the strict transform of $`F=\mathrm{pr}_1^{-1}(\mathrm{pr}_1(y))`$. Then we can view $`f`$ as the contraction of the curve $`E\subset X`$, and I claim that there is also a contraction of the curve $`R\subset X`$. Let $`D\subset X`$ be the strict transform of $`\mathrm{pr}_2^{-1}(\mathrm{pr}_2(y))`$ and $`\mathcal{L}=𝒪_X(D)`$ the corresponding invertible sheaf. Obviously, the restriction $`\mathcal{L}_D`$ is relatively ample with respect to the projection $`\mathrm{pr}_1\circ f:X\to P^1`$; according to , some $`\mathcal{L}^n`$ with $`n>0`$ is relatively base point free, hence the homogeneous spectrum of $`(\mathrm{pr}_1\circ f)_{*}(\mathrm{Sym}\mathcal{L})`$ is a normal projective surface $`Z`$, and the canonical morphism $`g:X\to Z`$ is the contraction of $`R`$, which is the only relative curve disjoint to $`D`$. We call $`Z`$ the mutation of $`Y`$ with respect to the center $`y\in Y`$.
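The contractibility of $`R`$ can be double-checked with standard blow-up intersection arithmetic (this verification is ours, not part of the original argument); write $`F`$ and $`G=\mathrm{pr}_2^{-1}(\mathrm{pr}_2(y))`$ for the two rulings through $`y`$:

```latex
% f : X -> Y blows up y; E is the exceptional curve; F, G are the two
% rulings through y; R, D are their strict transforms on X.
\begin{align*}
  R &= f^*F - E, \qquad D = f^*G - E, \qquad E^2 = -1,\\
  R^2 &= (f^*F)^2 + E^2 = 0 - 1 = -1,\\
  D \cdot R &= f^*G \cdot f^*F + E^2 = G \cdot F - 1 = 1 - 1 = 0,
\end{align*}
% using f^*F . E = f^*G . E = 0 (projection formula).
```

So $`R`$ is a rational curve with self-intersection $`-1`$ that misses $`D`$, consistent with $`R`$ being the only relative curve disjoint to $`D`$.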
###### (2.3)
We observe that the existence of the contraction $`g:X\to Z`$ is local over $`P^1`$; hence we can do the same thing simultaneously for finitely many rational points $`y_1,\dots ,y_n`$ in pairwise different closed fibres $`F_i=\mathrm{pr}_1^{-1}(\mathrm{pr}_1(y_i))`$. If $`f:X\to Y`$ is the blow-up of the points $`y_i`$, and $`E_i\subset X`$, $`R_i\subset X`$ are the corresponding exceptional curves and strict transforms respectively, we can construct a normal proper surface $`Z`$ and a contraction $`g:X\to Z`$ of the union $`R=R_1\cup \dots \cup R_n`$ by patching together quasi-affine pieces over $`P^1`$. Since $`Z`$ is obtained by patching, there is no reason that the resulting proper surface should be projective. We will also call $`Z`$ the mutation of $`Y`$ with respect to the centers $`y_1,\dots ,y_n`$.
###### (2.4)
Let us determine the effect of mutations on the Picard group. One easily sees that the maps
$$H^1(C,𝒪_C)\to H^1(Y,𝒪_Y)\to H^1(X,𝒪_X)$$
are bijective. Let $`𝔛`$ be the formal completion of $`X`$ along $`R=\bigcup R_i`$; since the composition
$$H^1(C,𝒪_C)\to H^1(𝔛,𝒪_𝔛)\to H^1(R,𝒪_R)$$
is injective, the same holds for the map on the left. Hence the right-hand map in the exact sequence
$$0\to H^1(Z,𝒪_Z)\to H^1(X,𝒪_X)\to H^1(𝔛,𝒪_𝔛)$$
is injective, and $`H^1(Z,𝒪_Z)`$ must vanish. We deduce that the group scheme $`\mathrm{Pic}_{Z/k}^{0}`$, the connected component of the Picard scheme, is zero. Since the Neron-Severi group of $`Y`$ is torsion free, the same holds true for $`Z`$, and we conclude $`\mathrm{Pic}^\tau (Z)=0`$.
###### (2.5)
Now let $`F_1,F_2\subset Y`$ be two different closed fibres over rational points of $`P^1`$ and $`y_1\in F_1`$ a rational point. The idea is to choose a second rational point $`y_2\in F_2`$ in a generic fashion in order to eliminate all ample divisors on the resulting mutation. Let $`Z^{\prime }`$ be the mutation with respect to $`y_1`$. By finiteness of the base number, $`\mathrm{Pic}(Z^{\prime })`$ is a countable group, in fact isomorphic to $`\mathbb{Z}^2`$. On the other hand, $`\mathrm{Pic}(F_2)`$ is uncountable, and there is a rational point $`y_2\in F_2`$ such that the classes of the divisors $`ny_2`$ in $`\mathrm{Pic}(F_2)`$ for $`n\ne 0`$ are not contained in the image of $`\mathrm{Pic}(Z^{\prime })`$. Let $`Z`$ be the mutation of $`Y`$ with respect to the centers $`y_1,y_2`$.
I claim that there is no ample Cartier divisor on $`Z`$. Assuming the contrary, we find an ample effective divisor $`D\subset Z`$ disjoint to the two singular points $`z_1=g(R_1)`$ and $`z_2=g(R_2)`$ of the surface. Hence the strict transform $`D^{\prime }\subset Z^{\prime }`$ is a divisor with
$$D^{\prime }\cap F_2=\left\{y_2\right\},$$
contrary to the choice of $`y_2\in F_2`$. We conclude that $`Z`$ is a non-projective normal surface. More precisely, there is no divisor $`D\in \mathrm{Div}(Z)`$ with $`D\cdot F>0`$, where $`F\subset Z`$ is a fibre over $`P^1`$, since otherwise $`D+nF`$ would be ample for $`n`$ sufficiently large. Hence the canonical map $`\mathrm{Pic}(P^1)\to \mathrm{Pic}(Z)`$ is bijective, $`\mathrm{Pic}(Z)/\mathrm{Pic}^\tau (Z)=\mathbb{Z}`$ holds, and the intersection form on $`N(Z)`$ is zero.
## 3. A surface without invertible sheaves
In this section we will construct a normal surface $`S`$ with $`\mathrm{Pic}(S)=0`$. We start with $`Y=P^1\times C`$, pass to a suitable mutation $`Z`$, and obtain the desired surface as a contraction of $`Z`$.
###### (3.1)
Let $`y_1,y_2\in Y`$ be two closed points in two different closed fibres $`F_1,F_2\subset Y`$ as in (2.5) such that the mutation with respect to the centers $`y_1,y_2`$ is non-projective. Let $`y_0\in Y`$ be another rational point in $`\mathrm{pr}_2^{-1}(\mathrm{pr}_2(y_1))`$, and consider the mutation
$$Y\stackrel{f}{\leftarrow }X\stackrel{g}{\rightarrow }Z$$
with respect to the centers $`y_0,y_1,y_2`$. We obtain a configuration of curves on $`X`$ with the following intersection graph:
Here $`A`$ is the strict transform of $`\mathrm{pr}_2^{-1}(\mathrm{pr}_2(y_1))`$ and $`B`$ is the strict transform of $`\mathrm{pr}_2^{-1}(\mathrm{pr}_2(y_2))`$. Consider the effective divisor $`D=3B+2R_0+2R_1`$; one easily calculates
$$D\cdot B=1,\quad D\cdot R_0=1,\quad \text{and}\quad D\cdot R_1=1,$$
hence the associated invertible sheaf $`\mathcal{L}=𝒪_X(D)`$ is ample on $`D\subset X`$. According to , the homogeneous spectrum of $`\mathrm{\Gamma }(X,\mathrm{Sym}\mathcal{L})`$ yields a normal projective surface and a contraction of $`A\cup R_2`$. On the other hand, the curves $`R_0`$ and $`R_1`$ are also contractible. Since the curves $`R_0`$, $`R_1`$ and $`A\cup R_2`$ are disjoint, we obtain a normal surface $`S`$ and a contraction $`h:Z\to S`$ of $`A`$ by patching.
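As a sanity check (ours, not in the original), the displayed intersection numbers follow from the same blow-up arithmetic, writing each strict transform as a pullback minus the exceptional curves over the blown-up points; since $`y_0,y_1`$ lie on one ruling and $`y_2`$ on another, $`B`$ meets $`R_0`$ and $`R_1`$ once each:

```latex
% B = f^*G_2 - E_2 (ruling through y_2); R_i = f^*F_i - E_i (fibres).
\begin{gather*}
  B^2 = -1, \qquad R_i^2 = -1, \qquad
  B\cdot R_0 = G_2\cdot F_0 = 1, \qquad B\cdot R_1 = G_2\cdot F_1 = 1,
  \qquad R_0\cdot R_1 = 0,\\
  D\cdot B = 3B^2 + 2R_0\cdot B + 2R_1\cdot B = -3 + 2 + 2 = 1,\\
  D\cdot R_0 = 3B\cdot R_0 + 2R_0^2 + 2R_1\cdot R_0 = 3 - 2 + 0 = 1
  = D\cdot R_1 .
\end{gather*}
```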
###### (3.2)
Let $`\mathcal{L}`$ be an invertible $`𝒪_S`$-module; then $`h^{*}(\mathcal{L})`$ is an invertible $`𝒪_Z`$-module which is trivial in a neighborhood of $`A\subset Z`$. Since the maps in
$$\mathrm{Pic}(P^1)\to \mathrm{Pic}(Z)\to \mathrm{Pic}(A)$$
are injective, we conclude that $`\mathcal{L}`$ is trivial. Hence $`S`$ is a normal surface such that $`\mathrm{Pic}(S)=0`$ holds.
## 4. A counterexample to a question of Kleiman
In this section we construct a non-projective normal surface $`Z`$ containing an integral Cartier divisor $`D\subset Z`$ with $`D^2>0`$. We obtain such a surface by constructing a non-projective normal surface $`Z`$ which allows a birational morphism $`h:Z\to S`$ to a projective surface $`S`$; then we can find an integral ample divisor $`D\subset S`$ disjoint to the image of the exceptional curves $`E\subset Z`$.
###### (4.1)
Again we start with $`Y=P^1\times C`$ and choose two closed points $`y_1,y_2\in Y`$ as in (2.5) such that the resulting mutation is non-projective. Let $`y_2^{\prime }`$ be the intersection of $`F_2=\mathrm{pr}_1^{-1}(\mathrm{pr}_1(y_2))`$ with $`\mathrm{pr}_2^{-1}(\mathrm{pr}_2(y_1))`$, and $`f:X\to Y`$ the blow-up of $`y_1`$, $`y_2`$ and $`y_2^{\prime }`$. We obtain a configuration of curves on $`X`$ with intersection graph
Here $`A`$ is the strict transform of $`\mathrm{pr}_2^{-1}(\mathrm{pr}_2(y_2))`$, and $`A^{\prime }`$ is the strict transform of $`\mathrm{pr}_2^{-1}(\mathrm{pr}_2(y_2^{\prime }))`$. One easily sees that there is a contraction $`X\to S`$ of the curve $`R_1\cup R_2\cup E_2`$ and another contraction $`X\to Z`$ of the curve $`R_1\cup R_2`$. The divisor $`A^{\prime }`$ is relatively ample on $`S`$ and shows that this surface is projective. On the other hand, I claim that there is no ample divisor on $`Z`$. Assuming the contrary, we can pick an integral divisor $`E\subset Z`$ disjoint to the singularities; its strict transform $`D\subset Y`$ satisfies
$$D\cap F_1=\left\{y_1\right\}\quad \text{and}\quad D\cap F_2=\{y_2,y_2^{\prime }\},$$
where $`F_i`$ are the fibres containing $`y_i`$. Since $`A^{\prime }\cap F_2=\left\{y_2^{\prime }\right\}`$ holds, the class of some multiple $`ny_2\in \mathrm{Div}(F_2)`$ is the restriction of an invertible $`𝒪_Y`$-module, contrary to the choice of $`y_2`$. Hence the surface $`Z`$ and the morphism $`Z\to S`$ are non-projective.
Mathematisches Institut
Ruhr-Universität
44780 Bochum
Germany
E-mail: s.schroeer@ruhr-uni-bochum.de |
# Charged stripes from alternating static magnetic field
## Abstract
We motivate and perform a calculation of the energy of a cold fluid of charged fermions in the presence of a striped magnetic background. We find that a non-trivial value for the doping density on the walls is preferred.
IASSNS-HEP-99/100
The concept of “fictitious” or “statistical” gauge fields has proved quite fruitful in the study of the quantum Hall effect. Gauge fields have also figured prominently in theoretical attempts to understand other highly correlated electron systems in two dimensions, especially the doped cuprate planes famous for supporting high-temperature superconductivity. In the mean-field treatment of such theories, one is often involved in expanding around non-zero background values for the fictitious magnetic field. This background field spontaneously breaks P and T symmetry, and is therefore naturally associated with a $`Z_2`$ order parameter. Unfortunately, experiments to detect macroscopic P and T violation in real materials have produced negative results.
On the other hand, recently a number of experiments have discovered evidence for inhomogeneous electronic structure in the doped cuprate planes. Existing evidence is consistent with the idea that this structure is roughly describable in terms of antiferromagnetic domains separated by narrow antiphase walls, with the dopants concentrated on the walls, at least for small doping. In view of this observed inhomogeneity, it is interesting and natural to consider the possibility of stripe structure in the context of fictitious gauge fields. Indeed, this concept has several advantages that are evident prior to any detailed calculation. The empirical difficulty of macroscopic P and T violation is ameliorated, and the $`Z_2`$ nature of the order provides a raison d’etre for stable domain walls.
On deeper consideration it appears that there can be, in the context of these ideas, a compelling connection between doping and stripe structure. Indeed, let us suppose that the preferred bulk phase, realized in the absence of doping, prefers values of the magnetic field $`\pm B`$, with one sign chosen uniformly throughout the plane, at which the electrons just fill an integer number of Landau levels. As one dopes away from this preferred filling, two things might happen. It might be that one just has to live with an unfilled band. Alternatively, it might be preferable to produce regions with both $`\pm B`$, separated by narrow walls. One could then have the filled band in bulk, accommodating dopants on the domain walls.
To determine the viability of this concept, we have performed the following calculation.
Consider ideal fermions in 2 dimensions in a uniform magnetic field $`B`$. Initially, exactly the $`N`$ first Landau levels ($`n=0,1,\dots ,N-1`$) are filled. The energy density is $`\mathcal{E}_0=\frac{1}{2}N\hbar \omega _c\rho `$, where $`\rho =NeB/2\pi \hbar c`$ is the fermion density and $`\omega _c=eB/mc`$ is the cyclotron frequency.
Add or remove $`\mathrm{\Delta }\rho `$ fermions per unit area. If the magnetic field stays commensurate with the density, keeping $`N`$ filled Landau levels, $`\mathrm{\Delta }\mathcal{E}=N\hbar \omega _c\mathrm{\Delta }\rho `$. Otherwise, if $`B`$ remains fixed,
$$\mathrm{\Delta }\mathcal{E}=\{\begin{array}{cc}(N+1/2)\hbar \omega _c\mathrm{\Delta }\rho \hfill & \text{if }\mathrm{\Delta }\rho >0,\hfill \\ (N-1/2)\hbar \omega _c\mathrm{\Delta }\rho \hfill & \text{if }\mathrm{\Delta }\rho <0.\hfill \end{array}$$
(1)
In our alternative scenario, magnetic field is allowed to alternate between $`B`$ and $`B`$, thus producing magnetic domains (Fig. 1). Landau levels are distorted in the vicinity of domain walls. Midgap states induced by the domain wall accommodate newly doped fermions (or holes), so that doped charges form stripes along the domain walls. We wish to compare the energy of this state of a doped system with that in a fixed uniform magnetic field.
The linear density of charge on a stripe $`\nu `$ and the stripe spacing $`\lambda `$ are related to the average density of added charge according to $`\mathrm{\Delta }\rho =\nu /\lambda `$. For small $`\mathrm{\Delta }\rho `$, stripes are far apart compared to the width of the wave functions, and they can be considered independently of one another. In units of the magnetic length $`(\hbar c/eB)^{1/2}`$, this requires $`\lambda \gg (N-1/2)^{1/2}`$.
For a single domain wall of length $`L`$ with $`𝒩=\nu L`$ added particles, the energy is $`E(𝒩)=E_0+ϵ(\nu )L`$, where $`E_0`$ is the energy of the undoped system without the domain wall and $`ϵ(\nu )`$ is the energy density of a domain wall (per unit length). The relative energy of the system per unit area is
$$\mathrm{\Delta }\mathcal{E}=\frac{ϵ(\nu )}{\lambda }=\frac{ϵ(\nu )}{\nu }\mathrm{\Delta }\rho =\frac{E(𝒩)-E_0}{𝒩}\mathrm{\Delta }\rho .$$
(2)
Thus the preferred linear charge density $`\nu _0`$ is found by minimizing $`ϵ(\nu )/|\nu |`$, the energy per doped particle (or hole). Formation of stripes is advantageous if this energy is lower than that in a uniform field $`B`$:
$$\frac{ϵ(\nu _0)}{|\nu _0|}<\{\begin{array}{cc}\hfill (N+1/2)\hbar \omega _c& \text{for electrons},\hfill \\ \hfill -(N-1/2)\hbar \omega _c& \text{for holes.}\hfill \end{array}$$
(3)
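The bookkeeping in Eqs. (1)-(3) is easy to check numerically; the sketch below is ours, in units where the cyclotron energy is 1, and the minus sign on the hole-side threshold is our reading of the condition (removing carriers lowers the total energy):

```python
HBAR_OMEGA_C = 1.0  # work in units where the cyclotron energy is 1

def delta_E_fixed_B(N, drho):
    """Eq. (1): energy change per unit area at fixed field B,
    with N Landau levels filled before doping."""
    if drho > 0:                                    # extra electrons enter level N
        return (N + 0.5) * HBAR_OMEGA_C * drho
    return (N - 0.5) * HBAR_OMEGA_C * drho          # holes leave level N - 1

def delta_E_commensurate(N, drho):
    """Field readjusts with the density so exactly N levels stay filled."""
    return N * HBAR_OMEGA_C * drho

def stripes_favoured(eps_per_carrier, N, holes=True):
    """Eq. (3): stripes win when the energy per doped carrier on the wall
    undercuts the uniform-field cost per carrier."""
    threshold = -(N - 0.5) if holes else (N + 0.5)
    return eps_per_carrier < threshold * HBAR_OMEGA_C
```

For either sign of $`\mathrm{\Delta }\rho `$ the commensurate-field cost $`N\hbar \omega _c\mathrm{\Delta }\rho `$ lies below the fixed-field cost of Eq. (1), which is why the $`\pm B`$ domain construction is worth considering at all.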
Energy spectrum of a single domain wall: For a stripe along the $`x`$ axis, $`B(y)=B\mathrm{sgn}(y)`$. It is convenient to work in the Landau gauge $`A_x=B|y|`$, $`A_y=0`$, so that translation symmetry in $`x`$ is manifest. Wavefunctions $`\psi (y)\mathrm{exp}(ikx)`$ then satisfy the Schroedinger equation
$$-\psi ^{\prime \prime }(y)+(|y|-k)^2\psi (y)=2E\psi (y),$$
(4)
in appropriate units of length and energy.
As a function of momentum $`k`$ along the stripe, there are 3 simple regimes. If $`k>0`$ and large one has two well-separated parabolic wells. Then there are doublets near the levels of the harmonic oscillator, with an exponentially small splitting between the symmetric and antisymmetric states. If $`k<0`$ and large one has a single wedge-like well. Energy levels start (at least) at $`|k|`$ and thus do not contribute to low Landau levels. Finally if $`k=0`$, one has the simple harmonic oscillator.
Thus as $`k`$ varies from $`+\infty `$ to 0, the $`n`$-th Landau level splits into symmetric and antisymmetric branches. The energy of the symmetric branch $`E_+(k)`$ is lower than $`n+1/2`$ for large $`k`$. At $`k=0`$, $`E_+=2n+1/2`$ and $`E_{-}=2n+3/2`$.
Solutions of the Schroedinger equation (4) regular at $`y=\infty `$ are parabolic cylinder functions
$$\psi (y)=\{\begin{array}{cc}D_{E-1/2}\left((y-k)\sqrt{2}\right)& \text{ if }y\ge 0,\hfill \\ \pm \psi (|y|)& \text{ if }y<0.\hfill \end{array}$$
(5)
The spectra $`E_\pm (k)`$ are determined by a boundary condition at $`y=0`$: $`\psi ^{\prime }(+0)=0`$ for symmetric states and $`\psi (+0)=0`$ for antisymmetric ones. All the necessary calculations are now fast computations for MAPLE. The results are shown in Fig. 2.
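The same boundary-value problem can be solved outside MAPLE. The sketch below (ours) uses SciPy's parabolic cylinder routine `scipy.special.pbdv`, which returns $`D_v(x)`$ and its derivative, and finds a branch energy by bracketing the root of the boundary condition:

```python
import numpy as np
from scipy.special import pbdv
from scipy.optimize import brentq

SQRT2 = np.sqrt(2.0)

def bc(E, k, parity):
    """Boundary condition at y = 0 for psi(y) = D_{E-1/2}((y - k)*sqrt(2)).

    parity=+1 (symmetric):   psi'(0) = 0, i.e. D'_{E-1/2}(-k*sqrt(2)) = 0
    parity=-1 (antisymmetric): psi(0) = 0, i.e. D_{E-1/2}(-k*sqrt(2)) = 0
    """
    d, dp = pbdv(E - 0.5, -k * SQRT2)   # D_v and dD_v/dx at x = (0 - k)*sqrt(2)
    return dp if parity > 0 else d

def branch_energy(k, parity, bracket):
    """Energy of one branch: root of the boundary condition in the bracket."""
    return brentq(bc, *bracket, args=(k, parity))

# At k = 0 the problem reduces to the harmonic oscillator, so the n = 0
# branches should come out at E_+ = 1/2 and E_- = 3/2.
E_sym = branch_energy(0.0, +1, (0.1, 0.9))
E_asym = branch_energy(0.0, -1, (1.1, 1.9))
```

Scanning `branch_energy` over a grid of $`k`$ values (with brackets that follow each level) reproduces the branch structure described above.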
Energy of a single stripe: Further analysis of the energy competition is greatly simplified by the existence of a one-to-one correspondence between states with and without a domain wall. Since $`k`$ is quantized in units $`\frac{2\pi }{L}`$ in both cases, there are no phase shifts to worry about. We will consider in detail the case of $`N=1`$ filled Landau level.
In a uniform magnetic field, all states of the lowest Landau level ($`n=0`$, $`-\infty <k<+\infty `$) are occupied. To obtain the same number of particles in the system with a single domain wall in the most economical way, we must partially fill the two lowest branches ($`n=0`$), as follows. First fill all the symmetric states with $`k\ge 0`$ and all antisymmetric states with $`k>0`$; then slightly rearrange occupied and empty states within the two bands to achieve the state with lowest energy. The Fermi energy and momentum are determined by the equation $`E_+(k_F)=E_{-}(k_F)=E_F`$. Note that the Fermi level lies below the bottom of the symmetric $`n=1`$ band (Figure 2).
Without doping, a stripe has a positive linear density of energy. At large doping levels, however, $`E(𝒩)-E_0\approx 𝒩/2+C`$, as holes are doped into unperturbed Landau states away from the domain wall. The constant $`C`$ is negative. This becomes evident from Figure 2 if one considers $`E_F`$ just above $`1/2`$, for then most of the remaining electrons are in the symmetric band, i.e., below $`1/2`$. The energy per doped hole is
$$\frac{ϵ(\nu )}{|\nu |}=\frac{E(𝒩)-E_0}{|𝒩|}\approx -\frac{1}{2}+\frac{c}{|\nu |},\qquad c=C/L<0.$$
(6)
Because $`ϵ(\nu )/|\nu |>0`$ for $`\nu \to 0`$, this function has a minimum at a finite $`\nu _0`$. Moreover, condition (3) is satisfied. Thus for $`N=1`$ the stripe phase has lower energy than the phase with a fixed uniform magnetic field, at least for sufficiently low doping levels, such that stripe wavefunctions do not overlap.
In Figure 3, we have displayed the energy per added hole $`ϵ(\nu )/|\nu |`$ for $`N=1`$ and 2 filled Landau levels in bulk. For $`N=1`$, we find $`|\nu _0|\approx 0.13`$ particles per magnetic length.
On a lattice with flux $`\varphi `$ piercing an elementary plaquette, the lattice constant is $`a=\varphi ^{\frac{1}{2}}`$. The preferred doping density in lattice units is therefore
$$|\nu _0|\approx \frac{0.13\text{ particles}}{\text{magn. length}}=\frac{0.13\varphi ^{1/2}\text{ particles}}{\text{lattice constant}}.$$
(7)
If $`\varphi =\pi `$, as suggested theoretically , we find $`|\nu _0|\approx 0.23`$ particles per lattice constant.
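The unit conversion is a one-liner; this sketch (ours) reproduces the quoted numbers from $`a=\varphi ^{1/2}`$ magnetic lengths per lattice constant:

```python
import math

def nu0_per_lattice_constant(nu0_per_magnetic_length=0.13, flux_per_plaquette=math.pi):
    """Convert the preferred stripe density from magnetic-length units to
    lattice units: a = phi**0.5 magnetic lengths, so the density per lattice
    constant gains a factor phi**0.5."""
    return nu0_per_magnetic_length * math.sqrt(flux_per_plaquette)
```

With the default $`\varphi =\pi `$ this gives 0.13·√π ≈ 0.23 particles per lattice constant, matching the value quoted above.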
Discussion: In our picture, the domain walls resemble quantum Hall edges. They support currents, alternating in direction from wall to wall, in the ground state.
Our calculation has been highly idealized, of course. For one thing, we have ignored any possibility of energy intrinsically associated with the fictitious field (other than the implicit constraint enforcing values $`\pm B`$). While this approximation is in the spirit of fictitious Chern-Simons fields, or of fields implementing constraints, which have no independent dynamics, in the absence of a detailed microscopic model we cannot assess its accuracy. Similarly, we have been shamelessly opportunistic in maneuvering back and forth between lattice and continuum. With all due reserve, it nevertheless seems appropriate to mention that the sort of model discussed here is remarkable and virtually unique, as far as we are aware, in predicting a non-trivial preferred dopant density along the stripes. Moreover, the preferred numerical value, which has emerged from a dynamical calculation containing no disposable continuous parameters, is consistent with the observed one.
Acknowledgments: We thank M. M. Fogler and A. Zee for helpful discussions. Research supported in part by DOE grant DE-FG02-90ER40542. |
## 1 Introduction
Disks are quite common structures in the Universe. According to the classical theory of White & Rees (1978), spiral galaxies, formed by gas cooling inside spinning dark matter halos, are considered the seeds of galaxy assembly. Apart from spirals, disks are also seen in the centers of giant ellipticals; damped Lyman $`\alpha `$ clouds are commonly believed to be gaseous disks. Finally, S0 galaxies are expected to be disk galaxies deprived of their gas.
However, galactic disks are fragile structures. N-body simulations convincingly show that the merging of two equal-mass galaxies is fatal for disks: they are simply destroyed. The observational counterpart is found (to mention just one source) in the IRAS database, which shows many disks in the process of being destroyed by major mergers.
Since our disk appears almost undamaged, it cannot have suffered strong mergers since its formation, and its past life must have been relatively quiet. Of course we cannot forget that our disk has a warp, and that presently a dwarf galaxy - Sagittarius - is going to merge with our Galaxy.
Just how quiet the past life was can be gauged by the ability of small galaxies to heat or thicken the disk. Quinn et al (1993) pointed out that even satellites with masses of order $`5\%`$-$`10\%`$ of the disk mass can thicken the disk by a factor of two or more if they dump all of their available orbital kinetic energy into the disk. Hence the presence of disks with scale heights less than a few hundred parsecs (thin disks) implies the absence of even minor encounters over the age of the thin disk. Nonetheless, it may be possible for disks formed early in the lives of spirals to have been heated by satellites, forming a thick disk within which a new thin disk was allowed to form. If this was the case, an estimate of the thin disk age can fix the time at which the last encounter occurred.
The acute fragility of disks is a very important constraint on the evolution of the environment of protogalaxies that develop into current spirals. Either they were born in isolation (low density environment) or they managed to remove all potentially disk-disturbing debris early enough that a normal disk had time to form. It is clearly very interesting to look from our favorite observational point, the solar system, at the disk population to try to establish its age. This is important to constrain the zero point of chemical evolution models, the relationship between the disk and the other galaxy components, and to fix a rough chronology for the disk development and origin.
## 2 Age indicators
In the following I shall discuss five different methods devised to obtain an estimate of the age of the galactic disk, discussing their feasibility, robustness and limitations.
I shall start by making a crucial point. Most methods to infer the age of the galactic disk try to give an age estimate for the entire disk (or even for the Galaxy as a whole) by using age indicators located in the near solar vicinity (see Fig. 1). Whether the solar neighborhood is really representative of the whole disk is an open question, which holds not only for this particular issue (the age of the disk), but more generally for determining the global chemical evolution and the SF history of the galactic disk from nearby indicators (Carraro et al 1998). At present only old open clusters (although the sample is rather poor) can be used to derive an age estimate for a significant portion of the galactic disk. Moreover, all the methods devised to address this topic make use of indicators located well inside the thin disk. Therefore I am going to discuss the age of the galactic thin disk (Gilmore et al 1989).
### 2.1 Cool White Dwarfs Luminosity Function
The idea that the coolest white dwarfs (WD) can be used to determine the age of the local galactic disk dates back to Schmidt (1959). His idea was the following: if we find the coolest white dwarfs (by looking at the downturn of their luminosity function) and measure their cooling time, we can find the age of the disk by adding to this time the lifetime of the WD progenitors. Moreover, WDs are easy to model, much easier than Main Sequence (MS) stars (Mestel 1952). At that time it was not possible to detect the turn-down, and the idea failed. Moreover, some uncertainties in the models arose, in particular crystallization. In detail, at decreasing temperature ions start to crystallize, and the energy necessary to maintain the ion lattice causes the WD to cool to invisibility very rapidly.
More recently Winget et al (1987) succeeded in finding the turn-down, and since then many authors have built up CWDLFs (Leggett et al 1998). In addition, theorists now agree that the onset of rapid cooling is not reached for the dominant stellar mass in the WD population ($`0.6M_{\odot }`$) until well below the luminosity of the observed shortfall. The most recent work is from Knox et al (1999). They find 52 CWDs using common proper motion binaries. Their sample is the only one complete both in luminosity and in proper motion. The resulting LF (see their Figs. 19, 20 and 22) shows a clear shortfall at $`\mathrm{log}L/L_{\odot }=-4.2`$, and predicts a larger number of WDs per unit volume compared with previous determinations of the LF. Knox et al (1999) compared their LF with two sets of stellar models, by Wood (1992) and Garcia-Berro et al (1997). The best fit is obtained using Wood (1992) models, and provides a disk age of $`9.0\pm 1.0`$ Gyr. Garcia-Berro et al (1997) models do not fit the LF maximum, and provide a somewhat greater age.
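The arithmetic behind such an estimate can be sketched with a Mestel-type cooling law. This is a deliberately crude illustration of ours, not the calculation of Knox et al: the $`t\propto L^{-5/7}`$ scaling is standard, but the normalization constant, the assumed $`0.6M_{\odot }`$ mass, carbon composition, and 2 Gyr progenitor lifetime are all illustrative assumptions:

```python
def mestel_cooling_time(logL, mass=0.6, A=12.0, mu=2.0):
    """Cooling age in years from a Mestel-law scaling,
    t ~ 8.8e6 * (12/A) * M**(5/7) * (mu/2)**(-2/7) * L**(-5/7),
    with M in solar masses and L in solar luminosities (assumed normalization).
    """
    L = 10.0 ** logL
    return (8.8e6 * (12.0 / A) * mass ** (5.0 / 7.0)
            * (mu / 2.0) ** (-2.0 / 7.0) * L ** (-5.0 / 7.0))

def disk_age(logL_cutoff=-4.2, progenitor_lifetime=2.0e9):
    """Schmidt's recipe: cooling time of the faintest WDs plus an assumed
    progenitor lifetime."""
    return mestel_cooling_time(logL_cutoff) + progenitor_lifetime
```

With the quoted cutoff log L/L☉ = -4.2 this toy model gives roughly 8 Gyr, in the same ballpark as the 9.0 ± 1.0 Gyr obtained from detailed cooling models, though real analyses (crystallization, envelope physics, star-formation history) are far more involved.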
### 2.2 The Hipparcos CMD
The CMD provided by the Hipparcos satellite includes stars within about 150 parsecs of the Sun (see Fig. 2). It shows a well defined MS, a prominent clump of He-burning stars and a subgiant region. It represents of course a mix of stellar populations with spreads in age and in metallicity. The dating feature used by Jimenez et al (1998) is the red envelope of the sub-giant region, and the method adopted is isochrone fitting. Since the metallicity of the stars is not known (say, from observations), the authors assume that the spread in color of the clump is a good indicator of a spread in metallicity. This is a rather crude approximation, as discussed by Girardi (this meeting). Nevertheless, it is possible to obtain a minimum age for this sample by assuming that the star populating the lower red envelope of the subgiant region has the maximum metallicity, since this metallicity provides the minimum age. In this way they obtain a minimum age for the disk of $`8`$ Gyr.
### 2.3 Nearby $`F`$ & $`G`$ stars
Another method - although rather difficult - is the direct age estimate of single stars, as in the 187-star sample of Edvardsson et al (1993). To get an age estimate one needs photometry, spectroscopy and a distance for each star, in order to place it in the $`M_V-\mathrm{log}T_e`$ plane. In the case of the Edvardsson et al sample, ages were inferred on the VandenBerg (1985) scale. Removing from the sample the presumed thick disk stars, i.e. stars whose orbital motions are not those of thin disk stars, a reasonable estimate for the age of the local galactic disk is $`9-11`$ Gyr. Recently the Edvardsson et al sample has been revised by Ng & Bertelli (1997), who re-computed the ages of these stars taking into account new distances from Hipparcos, correcting for the Lutz-Kelker effect, and placing them on the Bertelli et al (1994) scale. At the old end the new ages are slightly older, but the conclusion on the disk age is roughly the same.
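The distance step of this procedure is elementary photometric bookkeeping; a sketch of converting an apparent magnitude and parallax into $`M_V`$ (the numbers are illustrative, and the Lutz-Kelker correction applied by Ng & Bertelli is omitted here):

```python
import math

def absolute_magnitude(V, parallax_mas):
    """M_V from apparent magnitude V and parallax in milliarcseconds,
    via the distance modulus (no Lutz-Kelker correction applied)."""
    d_pc = 1000.0 / parallax_mas        # distance in parsecs
    return V - 5.0 * math.log10(d_pc) + 5.0

M_V = absolute_magnitude(V=8.0, parallax_mas=20.0)   # d = 50 pc -> M_V = 4.505
```

With $`M_V`$ and a spectroscopic $`T_e`$ in hand, the star can be compared against isochrones of the adopted scale.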
### 2.4 The Radioactive Clock
Butcher (1987) proposed to derive the age of the Galaxy by observing the radioactive nuclide $`{}_{}{}^{232}Th`$ in stars of different ages, relating the nucleosynthesis timescale to stellar and galactic evolution. He considered the evolution of Nd, a stable nuclide, and of Th (half-life 14 Gyr). The first point to stress is the extreme weakness of the spectral features in the measured stellar spectra: in particular, the Th line falls in a blend with Co. The errors on the derived abundances are around 0.1 dex. The idea underlying this method is that after its formation a star does not modify its envelope abundances of Th and Nd except through radioactive decay. By measuring the abundance ratio $`[Th/Nd]`$ in stars of different ages, it is possible to reconstruct the decay evolution of Th. The basic assumption is that the growth rate of the two elements is the same, although Th is an r-process element while Nd is partly of r- and partly of s-process origin. On this basis, Butcher concluded that no reliable chemical evolution model can account for the observed distribution without assuming an age less than $`9`$ Gyr. The same result was obtained by Morell et al (1992), who performed a new abundance analysis of the same sample.
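The decay arithmetic of this clock is simple in the idealized limit where all the Th and Nd in the star's envelope were produced at its birth, ignoring ongoing nucleosynthesis. A sketch of that limit (the abundance drop used below is purely illustrative, not Butcher's measurement):

```python
import math

TH232_HALF_LIFE = 14.0  # Gyr, as quoted in the text

def age_from_th_nd(ratio_now, ratio_initial):
    """Age implied by the decayed fraction of Th relative to stable Nd,
    assuming both were produced at birth with ratio_initial ('sudden synthesis')."""
    decay_const = math.log(2.0) / TH232_HALF_LIFE
    return math.log(ratio_initial / ratio_now) / decay_const

# a star whose Th/Nd ratio has dropped by 0.15 dex since birth
age = age_from_th_nd(10.0 ** -0.15, 1.0)   # about 7 Gyr
```

The real analysis must model how the birth ratio itself evolved with galactic chemical evolution, which is exactly the point of contention discussed next.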
Clayton (1988) criticized these results, stressing that although the precise nature of the r- and s-processes is not sufficiently clear, in principle one has to take their different evolution into account. In particular, assuming that the contribution to the Nd abundance is about half from r- and half from s-processes, he showed that a simple model of chemical evolution can account for the Butcher distribution with ages greater than 12 Gyr, concluding: ”An unbiased look at all methods together favours an age greater than 12 Gyr, although no single method is reliable. I point out that each nuclear method is still amenable to further improvements, but they alone will not be able to determine the Galaxy’s age. Only a detailed and specific and correct model for the growth and chemical evolution of the solar neighbourhood can enable the galactic age to be inferred from radioactivity.”
### 2.5 Old Open Clusters
Old open clusters are well suited to address many issues concerning our disk (Friel 1995, Carraro et al 1998). For the present topic they are, in principle, more suitable than the other indicators for deriving a lower limit to the age of the galactic disk, since they are distributed over a larger portion of the disk. Good data have been obtained recently for clusters older than M 67, so it is actually feasible to use this sample to determine the age of the disk. However, there is still debate on the role of the oldest open clusters. NGC 6791, often quoted as the oldest cluster (8-9 Gyr, Carraro et al 1999c), is a rather special object. Its nature is not completely clear, and it has been suggested that it could be a bulge globular pushed away by the bar, or the core of a dwarf spheroidal tidally stripped by our Galaxy (see Carraro et al 1999a for a detailed analysis of this cluster). Recently another cluster, Berkeley 17 (Fig. 3), has turned out to be very old, with an age around 9 Gyr (Carraro et al 1999b), although the optical photometry is not very good yet and should be improved, given the importance of this cluster. If Berkeley 17 marks the age of the disk, the disk's minimum age is around 9 Gyr.
However, the main drawback of open clusters is that their average lifetime is of the order of some $`10^8`$ yr (Grenon 1990), so many old clusters may have been destroyed. The statistics of old open clusters are therefore rather poor, and in principle they can provide only a lower limit to the age of the thin disk (Carraro et al 1999c). Nevertheless, their dating is rather simple and robust.
## 3 Conclusions
Several reviews have been dedicated to the disk age issue in the past; I would like to recall those by Sandage (1990), van den Bergh (1990) and Grenon (1990).
At the 1989 Kingston Conference in Canada, Sandage concluded that the disk overlaps the halo in age, whereas van den Bergh, at the same meeting, suggested that the disk is somewhat younger than the halo.
Obviously the question of a possible star formation delay at the end of the halo assembly depends on the ages of both the halo and the disk. The age of the halo comes from the mean globular cluster age ($`12-15`$ Gyr at that time), whereas the age of the disk comes from a variety of methods, ranging from old open clusters to white dwarfs, from radioactive decay to stars in the solar neighbourhood. At that time the oldest open cluster was NGC 6791, with an age around 12 Gyr according to Sandage, who quoted Janes (1988), and of 7 Gyr according to van den Bergh, who quoted a preliminary work by Demarque et al (1992).
Van den Bergh's conclusion is also supported by Grenon (1990), who in addition stressed that in the solar vicinity there is a group of metal-rich stars which seem to be older than the open clusters, but whose birthplaces might be inside the bulge.
Ten years later, the situation is not much different.
Gratton et al (1997) reported a mean age of around $`13`$ Gyr for the bulk of the halo globulars, which holds for all the halo clusters. There is indeed evidence for a population of very young globulars (Pal 12, for instance), but their membership in the halo is controversial.
Summarizing all the data discussed above (see also Fig. 4), a plausible age for the disk is in the range $`8-10`$ Gyr. Note that the age scale for the F & G stars and open clusters is the same as for the globulars.
This seems to suggest the occurrence of a hiatus, or minimum, in the star formation history of the Galaxy, which might reflect the end of the halo/bulge formation. Afterwards the Galaxy started to acquire material to form the disk in an inside-out scenario.
Acknowledgements
I thank L. Girardi and M. Grenon for useful discussions, and Francesca Matteucci for her invitation. |
# Sleep-Wake Differences in Scaling Behavior of the Human Heartbeat: Analysis of Terrestrial and Long-Term Space Flight Data
## Abstract
We compare scaling properties of the cardiac dynamics during sleep and wake periods for healthy individuals, cosmonauts during orbital flight, and subjects with severe heart disease. For all three groups, we find a greater degree of anticorrelation in the heartbeat fluctuations during sleep compared to wake periods. The sleep-wake difference in the scaling exponents for the three groups is comparable to the difference between healthy and diseased individuals. The observed scaling differences are not accounted for simply by different levels of activity, but appear related to intrinsic changes in the neuroautonomic control of the heartbeat.
The normal electrical activity of the heart is usually described as a “regular sinus rhythm” . However, cardiac interbeat intervals fluctuate in an irregular manner in healthy subjects \[fig. 1\] — even at rest or during sleep . The complex behavior of the heartbeat manifests itself through the nonstationarity and nonlinearity of interbeat interval sequences . In recent years, the intriguing statistical properties of interbeat interval sequences have attracted the attention of researchers from different fields .
Analysis of heartbeat fluctuations focused initially on short time oscillations associated with breathing, blood pressure and neuroautonomic control . Studies of longer heartbeat records, however, revealed 1/f-like behavior . Recent analyses of very long time series (up to 24h: $`n\approx 10^5`$ beats) show that under healthy conditions, interbeat interval increments exhibit power-law anticorrelations , follow a universal scaling form in their distributions , and are characterized by a broad multifractal spectrum . These scaling features change with disease and advanced age . The emergence of scale-invariant properties in the seemingly “noisy” heartbeat fluctuations is believed to be a result of highly complex, nonlinear mechanisms of physiologic control .
It is known that circadian rhythms are associated with periodic changes in key physiological processes . Here, we ask whether there are characteristic differences in the scaling behavior between sleep and wake cardiac dynamics<sup>2</sup><sup>2</sup>2Typically the differences in the cardiac dynamics during sleep and wake phases are reflected in the average (higher in sleep) and standard deviation (lower in sleep) of the interbeat intervals . Such differences can be systematically observed in plots of the interbeat intervals recorded from subjects during sleep and wake periods \[fig. 1\].. We hypothesize that sleep-wake changes in cardiac control may occur on all time scales and thus could lead to systematic changes in the scaling properties of the heartbeat dynamics. Elucidating the nature of these sleep-wake rhythms could lead to a better understanding of the neuroautonomic mechanisms of cardiac regulation.
We analyze 30 datasets — each with 24h of interbeat intervals — from 18 healthy subjects and 12 patients with congestive heart failure . We analyze the nocturnal and diurnal fractions of the dataset of each subject, which correspond to the 6h periods ($`n\approx 22,000`$ beats) from midnight to 6am and from noon to 6pm.
We apply the detrended fluctuation analysis (DFA) method to quantify long-range correlations embedded in nonstationary heartbeat time series. This method avoids spurious detection of correlations that are artifacts of nonstationarity. Briefly, we first integrate the interbeat-interval time series. We then divide the time series into boxes of length $`n`$ and perform, in each box, a least-squares linear fit to the integrated signal. The linear fit represents the local trend in each box. Next, we calculate in each box the root-mean-square deviations $`F(n)`$ of the integrated signal from the local trend. We repeat this procedure for different box sizes (time scales) $`n`$. A power law relation between the average fluctuation $`F(n)`$ and the number of beats $`n`$ in a box indicates the presence of scaling; the correlations in the heartbeat fluctuations can be characterized by the scaling exponent $`\alpha `$, defined as $`F(n)\sim n^\alpha `$.
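The procedure just described can be sketched as follows (an illustrative implementation, not the authors' code; the box sizes and the synthetic white-noise input are arbitrary choices made here for demonstration):

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis: returns F(n) for each box size n.

    The slope of log F(n) vs log n estimates the scaling exponent alpha."""
    y = np.cumsum(x - np.mean(x))            # integrate the series
    F = []
    for n in scales:
        n_boxes = len(y) // n
        segments = y[:n_boxes * n].reshape(n_boxes, n)
        t = np.arange(n)
        sq_dev = []
        for seg in segments:
            coef = np.polyfit(t, seg, 1)     # local linear trend in the box
            sq_dev.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(sq_dev)))
    return np.array(F)

# white noise has uncorrelated values, for which DFA gives alpha close to 0.5
rng = np.random.default_rng(0)
x = rng.standard_normal(20000)
scales = np.array([16, 32, 64, 128, 256, 512])
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
```

Applied to an interbeat-interval series as in the paper, an exponent near 1 signals 1/f-like behavior, while values below it indicate stronger anticorrelations.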
We find that at scales above $`1`$min ($`n>60`$) the data during wake hours display long-range correlations over two decades with average exponents $`\alpha _W\approx 1.05`$ for the healthy group and $`\alpha _W\approx 1.2`$ for the heart failure patients. For the sleep data we find a systematic crossover at scale $`n\approx 60`$ beats followed by a scaling regime extending over two decades characterized by a smaller exponent: $`\alpha _S\approx 0.85`$ for the healthy group and $`\alpha _S\approx 0.95`$ for the heart failure group \[fig. 2a,c\]. Although the values of the sleep and wake exponents vary from subject to subject, we find that for all individuals studied, the heartbeat dynamics during sleep are characterized by a smaller exponent \[Table I and fig. 3\].
As a control, we also perform an identical analysis on two surrogate data sets obtained by reshuffling and integrating the increments in the interbeat intervals of the sleep and wake records from the same healthy subject presented in fig. 2a. Both surrogate sets display uncorrelated random walk fluctuations with a scaling exponent of $`1.5`$ (Brownian noise) \[fig. 2d\]. The value $`1.5`$ arises from the fact that we analyze the integral of the signal, leading to an increase by 1 of the usual random-walk exponent of $`1/2`$. A scaling exponent larger than $`3/2`$ would indicate persistent correlated behavior, while exponents with values smaller than $`3/2`$ characterize anticorrelations (a perfectly anticorrelated signal would have an exponent close to zero). Our results therefore suggest that the interbeat fluctuations during sleep and wake phases are long-range anticorrelated but with a significantly greater degree of anticorrelation (smaller exponent) during sleep.
An important question is whether the observed scaling differences between sleep and wake cardiac dynamics arise trivially from changes in the environmental conditions (different daily activities are reflected in the strong nonstationarity of the heartbeat time series). Environmental “noise”, however, can be treated as a “trend” and distinguished from the more subtle fluctuations that may reveal intrinsic correlation properties of the dynamics. Alternatively, the interbeat fluctuations may arise from nonlinear dynamical control of the neuroautonomic system rather than being an epiphenomenon of environmental stimuli, in which case only the fluctuations arising from the intrinsic dynamics of the neuroautonomic system should show long-range scaling behavior.
A possible explanation of the results from our analysis is that the observed sleep-wake scaling differences are due to intrinsic changes in the cardiac control mechanisms for the following reasons: (i) The DFA method removes the “noise” due to activity by detrending the nonstationarities in the interbeat interval signal related to polynomial trends and analyzing the fluctuations along the trends. (ii) Responses to external stimuli should give rise to a different type of fluctuations having characteristic time scales, i.e. frequencies related to the stimuli. However, fluctuations in both diurnal and nocturnal cardiac dynamics exhibit scale-free behavior. (iii) The weaker anticorrelated behavior observed for all wake phase records cannot be simply explained as a superposition of stronger anticorrelated sleep dynamics and random noise of day activity. Such noise would dominate at large scales and should lead to a crossover with an exponent of $`1.5`$. However, such crossover behavior is not observed in any of the wake phase datasets \[fig. 2\]. Rather, the wake dynamics are typically characterized by a stable scaling regime up to $`n=5\times 10^3`$ beats.
To test the robustness of our results, we analyze 17 datasets from 6 cosmonauts during long-term orbital flight on the Mir space station . Each dataset contains continuous periods of 6h data under both sleep and wake conditions. We find that for all cosmonauts the heartbeat fluctuations exhibit an anticorrelated behavior with average scaling exponents consistent with those found for the healthy terrestrial group: $`\alpha _W\approx 1.04`$ for the wake phase and $`\alpha _S\approx 0.82`$ for the sleep phase \[Table I\]. This sleep-wake scaling difference is observed not only for the group averaged exponents but for each individual cosmonaut dataset \[fig. 2b and fig. 3\]. Moreover, the scaling differences are persistent in time, since records of the same cosmonaut taken on different days (ranging from the 3rd to the 158th day in orbit), exhibit a higher degree of anticorrelation in sleep.
We find that even under the extreme conditions of zero gravity and high-stress activity, the sleep and wake scaling exponents for the cosmonauts are statistically consistent ($`p=0.7`$ by Student’s t-test) with those of the terrestrial healthy group<sup>3</sup><sup>3</sup>3Those findings are not inconsistent with the presence of other manifestations of altered autonomic control during long-term spaceflight (T. Brown et al., preprint).. Thus, the larger values of the wake phase scaling exponents cannot be a trivial artifact of activity. Furthermore, the larger value of the average wake exponent for the heart failure group compared with the other two groups \[Table I\] cannot be attributed to external stimuli either, since patients with severe cardiac disease are strongly restricted in their physical activity. Instead, our results suggest that the observed scaling characteristics in the heartbeat fluctuations during sleep and wake phases are related to intrinsic mechanisms of neuroautonomic control.
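A group comparison of this kind is a standard two-sample t-test; a sketch of how it would be run (the exponent lists below are hypothetical placeholders, not the study's per-subject values):

```python
import numpy as np
from scipy import stats

# hypothetical per-subject wake exponents (placeholders, NOT the measured data)
alpha_w_terrestrial = np.array([1.02, 1.08, 1.01, 1.07, 1.05, 1.09])
alpha_w_cosmonauts = np.array([1.00, 1.06, 1.03, 1.08, 1.04])

t_stat, p_value = stats.ttest_ind(alpha_w_terrestrial, alpha_w_cosmonauts)
# a large p_value means the two groups' exponents are statistically consistent
```

The quoted $`p=0.7`$ corresponds to exactly this situation: no significant difference between the terrestrial and orbital wake exponents.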
The mechanism underlying heartbeat fluctuations may be related to countervailing neuroautonomic inputs. Parasympathetic stimulation decreases the heart rate, while sympathetic stimulation has the opposite effect. The nonlinear interaction between the two branches of the nervous system is the postulated mechanism for the type of complex heart rate variability recorded in healthy subjects . The fact that during sleep the scaling exponents differ more from the value $`\alpha =1.5`$ (indicating “stronger” anticorrelations) may be interpreted as a result of stronger neuroautonomic control. Conversely, values of the scaling exponents closer to $`1.5`$ (indicating “weaker” anticorrelations) for both sleep and wake activity for the heart failure group are consistent with previously reported pathologic changes in cardiac dynamics . However, the average sleep-wake exponent difference remains the same ($`\approx 0.2`$) for all three groups. The observed sleep-wake changes in the scaling characteristics may indicate different regimes of intrinsic neuroautonomic regulation of the cardiac dynamics, which may “switch” on and off in association with circadian rhythms.
Surprisingly, we note that for the regime of large time scales ($`n>60`$) the average sleep-wake scaling difference is comparable to the scaling difference between health and disease; cf. Table I and<sup>4</sup><sup>4</sup>4At small time scales ($`n<60`$), we do not observe systematic sleep-wake differences. The scaling exponents obtained from 24h records of healthy and heart failure subjects in the asymptotic region of large time scales are in agreement with the results for the healthy and heart failure groups during the wake phase only. Since the weaker anticorrelations associated with the wake phase are characterized by a larger exponent while the stronger anticorrelated behavior during sleep has a smaller exponent, at large scales the superposition of the two phases (in 24h records) will exhibit behavior dominated by the larger exponent of the wake phase.. We also note that the scaling exponents for the heart failure group during sleep are close to the exponents observed for the healthy group \[Table I\]. Since heart failure occurs when the cardiac output is not adequate to meet the metabolic demands of the body, one would anticipate that the manifestations of heart failure would be most severe during physical stress when metabolic demands are greatest, and least severe when metabolic demands are minimal, i.e., during rest or sleep. The scaling results we obtain are consistent with these physiological considerations: the heart failure subjects should be closer to normal during minimal activity. Of related interest, recent studies indicate that sudden death in individuals with underlying heart disease is most likely to occur in the hours just after awakening . Our findings raise the intriguing possibility that the transition between the sleep and wake phases is a period of potentially increased neuroautonomic instability because it requires a transition from strongly to weakly anticorrelated regulation of the heart.
Finally, the finding of stronger heartbeat anticorrelations during sleep is of interest from a physiological viewpoint, since it may motivate new modeling approaches and supports a reassessment of the sleep phase as a surprisingly active dynamical state. Perhaps the restorative function of sleep may relate to an increased reflexive-type responsiveness of neuroautonomic control, not just at one characteristic frequency, but over a broad range of time scales.
\***
We thank NIH/National Center for Research Resources (P41 RR13622), NASA, NSF, Israel-US Binational Science Foundation, The Mathers Charitable Foundation and FCT/Portugal for support. |
# Dissipative tunneling in the presence of classical chaos in a mixed quantum-classical system
Bidhan Chandra Bag<sup>1</sup>, Bikash Chandra Gupta<sup>2</sup> and Debshankar Ray<sup>1</sup>
<sup>1</sup>Indian Association for the Cultivation of Science
Jadavpur, Calcutta 700 032, INDIA.
<sup>2</sup>Institute of Physics, Sachivalaya Marg,
Bhubaneswar 751 005, INDIA.
## Abstract
We consider the tunneling of a wave packet through a potential barrier which is coupled to a nonintegrable classical system and study the interplay of classical chaos and dissipation in the tunneling dynamics. We show that chaos-assisted tunneling is further enhanced by dissipation, while tunneling is suppressed by dissipation when the classical subsystem is regular.
PACS numbers: 05.45.+b, 03.65.Bz
I. Introduction
Systems with a mixed quantum-classical description have been the subject of considerable interest in recent years \[1-9\]. The validity of this description essentially rests on whether the quantum effects of one subsystem are negligible compared with those of the other. A well-known example is the Maxwell-Bloch equations, which describe a model two-level system (quantum mechanical) interacting with a strong single-mode classical electromagnetic field. Other examples include models of nuclear collective motion , and a one-dimensional molecule in which the motion of an electron is described by an effective potential provided by the nuclei and the other electrons within an adiabatic approximation scheme . The mixed description has also been employed by Bonilla and Guinea for measurement processes. Taking recourse to an average description in terms of an effective classical Hamiltonian, Pattanayak and Schieve have studied some interesting aspects of semi-quantal chaos.
Very recently the interplay of classical and quantum degrees of freedom in a special class of systems with system-operators pertaining to a closed Lie algebra with respect to system Hamiltonian has been demonstrated to have acquired special relevance in connection with dissipation in quantum evolution. While the quantum degree of freedom is responsible for the generic quantum features the implication of classicality is two-fold. First, it has been shown that if the classical degree of freedom is assigned the type of evolution as prescribed by the classical treatment of dissipative systems it is possible to realize dissipation for such composite systems via an indirect route without any violation of quantum rule. Second, if the classical subsystem is nonintegrable then the quantum motion by virtue of this nonintegrability may be profoundly affected. Our object here is to study this dissipative evolution of a quantum system coupled to a nonintegrable classical system.
Before going into the further details let us elaborate this issue a little further in somewhat general terms.
We first consider the coupling of a quantum system to a classical system in terms of the following Hamiltonian
$$\widehat{H}=\widehat{H}_q+H_{cl}+\widehat{H}_{qcl},$$
(1)
where $`\widehat{H}_q`$ and $`H_{cl}`$ refer to quantum and classical subsystems of the total Hamiltonian, respectively. $`\widehat{H}_{qcl}`$ is the interaction potential which contains both classical and quantum degrees of freedom. If the quantum operators $`\left\{\widehat{R}_i\right\}`$ form a closed algebra with respect to the Hamiltonian $`\widehat{H}`$ of the system, i. e. , if we have relations of the type
$$[\widehat{H}(t),\widehat{R}_i]=i\hbar \underset{j=1}{\overset{n}{\sum }}g_{ji}(t)\widehat{R}_j,\qquad i=1,2,\ldots ,n,$$
(2)
where $`g_{ji}`$ are the elements of a $`nn`$ matrix, then one can have, by virtue of the equations of motion
$$\frac{d\widehat{R}_i}{dt}=\frac{i}{\hbar }[\widehat{H},\widehat{R}_i],\qquad i=1,2,\ldots ,n,$$
(3)
a set of linear first order differential equations
$$\frac{d\widehat{R}_i}{dt}=\underset{j=1}{\overset{n}{\sum }}g_{ji}\widehat{R}_j,$$
(4)
which describes the temporal evolution of the expectation values.
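For time-independent $`g_{ji}`$, the linear system (4) is solved by a matrix exponential acting on the vector of expectation values. A toy illustration for the pair $`(\widehat{x},\widehat{p})`$ of a unit-frequency harmonic oscillator (not the system studied in this paper; the matrix below simply collects the coupling coefficients of the corresponding closed algebra):

```python
import numpy as np
from scipy.linalg import expm

# Linear system d<R>/dt = G <R> for <R> = (<x>, <p>) of a unit-frequency
# harmonic oscillator: d<x>/dt = <p>, d<p>/dt = -<x>.
G = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

r0 = np.array([1.0, 0.0])            # initial <x>, <p>
r = expm(G * np.pi / 2.0) @ r0       # quarter period: (1, 0) rotates to (0, -1)
```

When the $`g_{ji}`$ depend on time through the classical variables, as below, the same system must instead be integrated numerically alongside the classical equations.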
It is important to note that Eq. (4) depends crucially on the elements $`g_{ji}`$ which are determined by the classical dynamical variables as appeared in the Hamiltonian $`\widehat{H}`$ (Eq.1). In other words, $`g_{ji}`$ is dependent on classical subsystem $`H_{cl}`$. If the energy is taken to be the quantum expectation value of the Hamiltonian $`\widehat{H}`$, then one can easily generate the temporal evolution of classical conjugate variables, coordinate $`x`$ and momentum $`p`$ using $`\widehat{H}`$ as follows;
$`{\displaystyle \frac{dx}{dt}}`$ $`=`$ $`{\displaystyle \frac{\partial \widehat{H}}{\partial p}},`$ (5)
$`{\displaystyle \frac{dp}{dt}}`$ $`=`$ $`{\displaystyle -\frac{\partial \widehat{H}}{\partial x}}.`$ (6)
Eqs. (4-6) comprise the complete dynamical evolution corresponding to Hamiltonian $`\widehat{H}`$ in the dissipation-free case.
We now take into consideration two different aspects of classical motion of the subsystem $`H_{cl}`$.
First, the classical dynamical variable is made dissipative in an ad hoc fashion by adding a damping term $`\gamma p`$ to Eq. (6), i.e.,
$$\frac{dp}{dt}=-\left[\frac{\partial \widehat{H}}{\partial x}+\gamma p\right].$$
(7)
This allows an indirect route to dissipation in the quantum evolution because of the $`\widehat{H}_{qcl}`$ term in Eq. (1). It is easy to check that this quantum description is quite valid, since no quantum rule is violated in the process. Thus Eqs. (4), (5) and (7) govern the dissipative dynamical evolution. We note, in passing, that this method has a distinct advantage over some other approaches \[10-12\] to the quantum theory of damping which rely on explicitly time-dependent Hamiltonians, such as the Kanai Hamiltonian, since those approaches lead to contradiction with the uncertainty principle.
The second aspect of classicality of the subsystem concerns the nature of motion generated by $`H_{cl}`$. If the classical subsystem is non-integrable then the quantum evolution is affected through the properties of $`g_{ji}`$ matrix. It is important to emphasize a pertinent point at this stage. There are well-known cases where exponential instability occurs in a system where the overall dynamical system is non-integrable. For example, we refer to the case of optical chaos described by Maxwell-Bloch equations . Similar consideration of quantum-classical mixed description has been given to explore quantum chaos within Born-Oppenheimer approximation in a typical molecular system where the fast electronic motion and the slow nuclear motion (classical) separate in a very natural way. In contrast to these cases the present paper concerns the nonintegrability due to the subsystem $`H_{cl}`$ only.
Studies of dissipative effects in quantum systems are usually based on a traditional system-reservoir linear coupling model. There are two standard ways of formulating the problem \[13-15, 21, 25\]. The first is based on the density operator approach and leads to the so-called master equation, while the second is based on the Heisenberg picture and leads to noise operators. In the latter case one replaces the reservoir by damping terms in the Heisenberg equations of motion for a dissipation-free system and adds fluctuating forces as driving terms which impart fluctuations to the system. The operator forces are such that, first, the system has the correct statistical properties in the classical limit and, second, they maintain the commutation properties of the Boson operators to ensure that the uncertainty principle is not violated. These considerations are taken care of in the treatments of density-based or operator-based theories (e.g., see Leggett and co-workers , Krive and co-workers , and others ). The spiritual root of the quantum statistical approach to damping lies in the fluctuation-dissipation theorem, which expresses a dynamical balance between the inward flow of energy due to fluctuations from the reservoir into the system and the outward flow of energy from the system to the reservoir due to dissipation of the system mode. Such a dynamical balance is automatically maintained in the present treatment (and one need not add a stochastic term in Eq. (7)) through a feedback from the quantum to the classical subsystem, since here one deals with a finite number of degrees of freedom in the quantum plus classical subsystems, as compared with the infinite number of degrees of freedom of the reservoir in the traditional system-reservoir description. The closure property of the algebra pertaining to the quantum system thus plays an important role in the feedback mechanism.
Apart from being simple to implement, a decisive advantage of the approach is that it may be carried over to nonlinear systems (e.g., a Morse oscillator described by an SU(2) or SU(1,1) Lie algebra ) in a straightforward manner, while the master equations are strictly valid only for linear models.
The consideration of the two aspects of classicality mentioned earlier therefore leads us to dissipative chaotic evolution of the classical degrees of freedom. Our object is to explore the influence of classical chaos on quantum evolution, with dissipation realized indirectly in the overall dynamics through classical friction. The generic quantum feature that we study here is tunneling. It is extremely important to assess what influence, if any, classical chaos has on it, particularly in the presence of dissipation. Although the interplay of chaos and tunneling \[17-19\] or of tunneling and dissipation has been studied by many researchers separately over the last decade, it is not clear how dissipative tunneling is influenced by classical chaos. We are therefore concerned here with all three interplaying aspects of the evolution, i.e., classical chaos, dissipation and tunneling, in a typical model with a mixed quantum-classical description.
II. The model and the mixed quantum-classical dynamics
We consider a particle with fixed energy $`E_q<0`$ penetrating an inverted potential barrier. $`E_q`$ is the energy measured from the top of the barrier (Fig. 1). The quantum subsystem $`\widehat{H}_q`$ which describes the penetration of the particle through the barrier is given by
$$\widehat{H}_q=\frac{\widehat{p}_q^2}{2m}-\frac{1}{2}m\omega _0^2\widehat{x}_q^2.$$
(8)
$`\widehat{x}_q`$ and $`\widehat{p}_q`$ are the quantum mechanical operators corresponding to position and momentum of the particle respectively. m is the mass of the particle and $`\omega _0`$ refers to the frequency of the inverted well.
The Hamiltonian for the classical subsystem is given by
$$H_{cl}=\frac{p_{cl}^2}{2M}+ax_{cl}^4-bx_{cl}^2+gx_{cl}\mathrm{cos}\omega _1t,$$
(9)
which governs the motion of a classical system with mass $`M`$, characterized by its position $`x_{cl}`$ and momentum $`p_{cl}`$, in a double-well potential driven by an external field with frequency $`\omega _1`$. $`a`$ and $`b`$ are the parameters of the double-well oscillator. $`g`$ includes the effect of the coupling of the external field to the oscillator and the strength of the field.
$`H_{cl}`$ is nonintegrable and has been widely employed by various workers \[17-19, 22-23\] over the last few years in a variety of situations related to classical and quantum chaos.
The description of the Hamiltonian with mixed degrees of freedom is now made complete by considering the coupling of classical and quantum degrees of freedom in terms of the interaction potential $`\widehat{H}_{qcl}(=\lambda x_{cl}\widehat{x}_q)`$ so that we have
$$\widehat{H}=\widehat{H}_q+H_{cl}+\lambda x_{cl}\widehat{x}_q,$$
(10)
where $`\lambda `$ represents the coupling constant.
Making use of the following rescaled dimensionless quantities
$$\begin{array}{ccccccc}\widehat{x}& =& (m\omega _0)^{\frac{1}{2}}\widehat{x}_q& ,& \chi & =& \frac{\lambda }{(mM\omega \omega _0)^{\frac{1}{2}}}\\ & & & & & & \\ \widehat{p}& =& (m\omega _0)^{-\frac{1}{2}}\widehat{p}_q& ,& A& =& \frac{a}{M^2\omega ^3}\\ & & & & & & \\ s& =& (M\omega )^{\frac{1}{2}}x_{cl}& ,& B& =& \frac{b}{M\omega ^2}\\ & & & & & & \\ p_s& =& (M\omega )^{-\frac{1}{2}}p_{cl}& ,& G& =& \frac{g}{(M\omega ^3)^{\frac{1}{2}}}\end{array}\},$$
(11)
we rewrite the Hamiltonian (10) as
$$\widehat{H}=\omega _0\frac{\widehat{p}^2}{2}-\omega _0\frac{\widehat{x}^2}{2}+\frac{\omega p_s^2}{2}+\omega As^4-B\omega s^2+G\omega s\mathrm{cos}\omega _1t+\chi \omega _0s\widehat{x},$$
(12)
where $`\omega `$ is the linearized frequency of the double-well potential.
By choosing the relevant operators belonging to the set $`\{\widehat{1},\widehat{x},\widehat{p},\widehat{x^2},\widehat{p^2},\widehat{L}=\widehat{x}\widehat{p}+\widehat{p}\widehat{x}\}`$ we may construct a partial Lie algebra ($`\widehat{1}`$ is the unity operator). The temporal evolution of the expectation values of these operators (see Eq.4) is given by the following set of coupled differential equations
$`{\displaystyle \frac{d\widehat{x}}{d\tau }}`$ $`=`$ $`\widehat{p},`$
$`{\displaystyle \frac{d\widehat{p}}{d\tau }}`$ $`=`$ $`\widehat{x}-\chi s,`$
$`{\displaystyle \frac{d\widehat{x^2}}{d\tau }}`$ $`=`$ $`\widehat{L},`$
$`{\displaystyle \frac{d\widehat{p^2}}{d\tau }}`$ $`=`$ $`\widehat{L}-2\chi s\widehat{p},`$
$`{\displaystyle \frac{d\widehat{L}}{d\tau }}`$ $`=`$ $`2(\widehat{p^2}+\widehat{x^2}\chi s\widehat{x}),`$ (13)
where $`\tau `$ denotes the scaled dimensionless time $`\tau =\omega _0t`$.
Since $`H`$ coincides with the energy of the system, the classical equations of motion are
$`{\displaystyle \frac{ds}{d\tau }}`$ $`=`$ $`\mathrm{\Omega }p_s,`$
$`{\displaystyle \frac{dp_s}{d\tau }}`$ $`=`$ $`-\left[\chi \widehat{x}+G\mathrm{\Omega }\mathrm{cos}\mathrm{\Omega }_1\tau +4A\mathrm{\Omega }s^3-2B\mathrm{\Omega }s+\mathrm{\Gamma }p_s\right],`$ (14)
where,
$`\mathrm{\Omega }`$ $`=`$ $`{\displaystyle \frac{\omega }{\omega _0}},`$
$`\mathrm{\Omega }_1`$ $`=`$ $`{\displaystyle \frac{\omega _1}{\omega _0}},`$
and
$$\mathrm{\Gamma }=\frac{\gamma }{\omega _0}.$$
(15)
Here $`\mathrm{\Gamma }`$ as defined above is the rescaled dimensionless damping constant introduced in the classical equations of motion (14) in an ad hoc fashion. We have already pointed out that this ad hoc introduction of classical friction leads to dissipation in the overall dynamics without any contradiction with the uncertainty principle. Eqs. (13) and (14) thus govern the complete dynamics of the mixed system.
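Although no numerics are given at this point, the closed set (13)-(14) can be integrated directly; the following is a minimal pure-Python fourth-order Runge-Kutta sketch for the seven coupled variables. It is an illustration only: the parameter values in the example call (and the choice $`\mathrm{\Omega }_1=1`$) are placeholders, not the ones used for the figures.

```python
import math

def derivs(tau, y, chi, Om, Om1, A, B, G, Gam):
    """Right-hand sides of Eqs. (13)-(14); y = (<x>, <p>, <x^2>, <p^2>, <L>, s, p_s)."""
    x, p, x2, p2, L, s, ps = y
    return (
        p,                                       # d<x>/dtau   = <p>
        x - chi * s,                             # d<p>/dtau   = <x> - chi*s
        L,                                       # d<x^2>/dtau = <L>
        L - 2.0 * chi * s * p,                   # d<p^2>/dtau = <L> - 2*chi*s*<p>
        2.0 * (p2 + x2 - chi * s * x),           # d<L>/dtau
        Om * ps,                                 # ds/dtau     = Omega*p_s
        -(chi * x + G * Om * math.cos(Om1 * tau)
          + 4.0 * A * Om * s ** 3 - 2.0 * B * Om * s + Gam * ps),  # dp_s/dtau
    )

def rk4(y, tau, dtau, nsteps, *args):
    """Classical fourth-order Runge-Kutta integrator for the mixed system."""
    for _ in range(nsteps):
        k1 = derivs(tau, y, *args)
        k2 = derivs(tau + dtau / 2, [a + dtau / 2 * b for a, b in zip(y, k1)], *args)
        k3 = derivs(tau + dtau / 2, [a + dtau / 2 * b for a, b in zip(y, k2)], *args)
        k4 = derivs(tau + dtau, [a + dtau * b for a, b in zip(y, k3)], *args)
        y = [a + dtau / 6 * (b + 2 * c + 2 * d + e)
             for a, b, c, d, e in zip(y, k1, k2, k3, k4)]
        tau += dtau
    return y

# Example call with placeholder parameters (chi, Omega, Omega_1, A, B, G, Gamma):
eps = 1.0
y0 = [-math.sqrt(2.0 * eps), 0.0, 0.5 * (1.0 + 4.0 * eps), 0.5, 0.0, -5.02, 0.0]
y_final = rk4(y0, 0.0, 1e-3, 1000, 0.1, 6.32, 1.0, 0.002, 0.25, 0.63, 0.0)
```

For $`\chi =0`$ the quantum block decouples and $`\widehat{x}(\tau )=\widehat{x}_0\mathrm{cosh}\tau `$, which provides a convenient consistency check on the integrator.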
We now turn to the motion of the wave packet. In order that the motion of a wave packet comes close to the motion of a classical particle it is necessary that its average position and momentum follow the laws of classical mechanics. However, this condition is automatically satisfied for the inverted harmonic potential barrier we consider here.
We now describe the wave function of a particle by a Gaussian wave packet of the form
$$\psi (x,t)=N(t)\mathrm{exp}\left[-\beta (t)(x-\alpha (t))^2\right],$$
(16)
where $`\alpha (t)`$ and $`\beta (t)`$ are two complex, time-dependent parameters to be determined. $`N(t)`$ is a normalization factor which is not of much interest here. The Gaussian wave packets have the decisive advantage that the ansatz (16) is a solution of the Schrödinger equation,
$$\left(\widehat{H}-i\frac{\partial }{\partial t}\right)\psi (x,t)=0$$
(17)
if $`\alpha `$ and $`\beta `$ satisfy the following two equations
$`i{\displaystyle \frac{d\beta }{dt}}=2\omega _0\beta ^2+{\displaystyle \frac{\omega _0}{2}}`$
and
$$i\beta \frac{d\alpha }{dt}=\chi \frac{\omega _0s(t)}{2}-\frac{\omega _0\alpha }{2}$$
(18)
or their scaled version (using $`\tau =\omega _0t`$)
$`i{\displaystyle \frac{d\beta }{d\tau }}`$ $`=`$ $`2\beta ^2+{\displaystyle \frac{1}{2}},`$
$`i\beta {\displaystyle \frac{d\alpha }{d\tau }}`$ $`=`$ $`\chi {\displaystyle \frac{s(\tau )}{2}}-{\displaystyle \frac{\alpha }{2}}.`$ (19)
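Equations (19) close among themselves, so $`\beta (\tau )`$ and $`\alpha (\tau )`$ can be propagated directly in the complex plane. The fragment below is a bare Euler sketch; the step size and the frozen classical input $`s(\tau )=0`$, $`\chi =0`$ are illustrative assumptions, not the coupled dynamics of the full model.

```python
def step_beta_alpha(beta, alpha, s, chi, dtau):
    """One Euler step of Eqs. (19): i dbeta/dtau = 2*beta**2 + 1/2 and
    i*beta*dalpha/dtau = chi*s/2 - alpha/2 (hbar = 1, tau = omega_0*t)."""
    dbeta = (2.0 * beta ** 2 + 0.5) / 1j
    dalpha = (chi * s / 2.0 - alpha / 2.0) / (1j * beta)
    return beta + dbeta * dtau, alpha + dalpha * dtau

beta, alpha = 0.5 + 0j, -2.0 + 0j   # minimum-uncertainty packet at a turning point
for _ in range(100000):
    beta, alpha = step_beta_alpha(beta, alpha, 0.0, 0.0, 1e-5)
```

For the uncoupled barrier the Riccati equation for $`\beta `$ integrates in closed form, $`\beta (\tau )=\frac{1}{2}(1-i\mathrm{tanh}\tau )/(1+i\mathrm{tanh}\tau )`$, so $`\mathrm{Re}\beta `$ decays from $`1/2`$ and the packet spreads as it crosses the barrier.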
It is easy to express the expectation values of $`\widehat{x},\widehat{p}`$ and other operators in terms of $`\alpha `$ and $`\beta `$ as follows:
$`\widehat{x}`$ $`=`$ $`{\displaystyle \frac{\alpha \beta +\alpha ^{*}\beta ^{*}}{\beta +\beta ^{*}}},`$
$`\widehat{p}`$ $`=`$ $`{\displaystyle \frac{-2i\beta \beta ^{*}(\alpha -\alpha ^{*})}{\beta +\beta ^{*}}},`$
$`\widehat{x^2}`$ $`=`$ $`{\displaystyle \frac{1}{2(\beta +\beta ^{*})}}+\widehat{x}^2,`$
$`\widehat{p^2}`$ $`=`$ $`{\displaystyle \frac{2\beta \beta ^{*}}{\beta +\beta ^{*}}}+\widehat{p}^2,`$
$`\widehat{L}`$ $`=`$ $`{\displaystyle \frac{d\widehat{x^2}}{d\tau }}.`$ (20)
The wave function of Gaussian form (16) satisfies the minimum uncertainty condition
$$\mathrm{\Delta }p\mathrm{\Delta }x=\frac{1}{2}.$$
(21)
with $`\mathrm{\Delta }x^2=\widehat{x^2}-\widehat{x}^2`$ and $`\mathrm{\Delta }p^2=\widehat{p^2}-\widehat{p}^2`$. For such a wave packet it is easy to see that
$$\beta =\beta ^{*}.$$
(22)
Since for a parabolic barrier the energy of the classical particle is equal to the average energy of the quantum particle, one can show that the energy spread for the coupling-free parabolic barrier is
$`\mathrm{\Delta }\widehat{H}_q^2`$ $`=`$ $`\widehat{H_q^2}-\widehat{H}_q^2`$ (23)
$`=`$ $`\omega _0^2\left[\widehat{p}^2\mathrm{\Delta }p^2+\widehat{x}^2\mathrm{\Delta }x^2\right].`$
It is required that the minimum wave packet should have a minimum energy spread $`\mathrm{\Delta }\widehat{H}_q^2`$ as well. We then infer from Eq. (23) that the wave packet must be prepared at the classical turning point $`x_0`$, where $`\widehat{p}=0`$ and $`\widehat{x}^2`$ has a minimum. Let us therefore assume that the center of the wave packet reaches the classical turning point at $`t=0`$ with $`\widehat{p}=p_0=0`$. We then have \[from $`\widehat{H}_q=\omega _0\frac{\widehat{p^2}}{2}-\frac{\omega _0\widehat{x}^2}{2}`$, Eq. (12)\]
$$-ϵ\omega _0=-\frac{\omega _0x_0^2}{2}$$
(24)
so that $`x_0=\pm (2ϵ)^{\frac{1}{2}}`$, where $`ϵ(=-\frac{E_q}{\omega _0})`$ is a dimensionless parameter (see Fig. 1) denoting the energy of the particle measured from the top of the barrier. We obtain
$`\alpha (t=0)=\alpha _0=-x_0=-(2ϵ)^{\frac{1}{2}}`$
$`\beta (t=0)=\beta _0={\displaystyle \frac{1}{2}}.`$ (25)
The initial conditions (25) imply that the particle’s wave packet is as compact as possible when it arrives at the turning point. By choosing $`\beta =\frac{1}{2}`$, the wave packet satisfies the quantum-classical correspondence that the energy of the quantum particle is equal to that of the classical particle for parabolic barrier.
The above two initial conditions for $`\alpha `$ and $`\beta `$ also set the initial conditions for the others. Thus we further have from Eqs. (20),
$`\widehat{x}_{t=0}`$ $`=`$ $`-(2ϵ)^{\frac{1}{2}},`$
$`\widehat{p}_{t=0}`$ $`=`$ $`0,`$
$`\widehat{x}^2_{t=0}`$ $`=`$ $`{\displaystyle \frac{1}{2}}(1+4ϵ),`$
$`\widehat{p^2}_{t=0}`$ $`=`$ $`{\displaystyle \frac{1}{2}},`$
$`\widehat{L}_{t=0}`$ $`=`$ $`0.`$ (26)
Eqs. (26) suggest that the initial conditions required for the dynamics can be manipulated by controlling a single parameter $`ϵ`$, which denotes the average energy of the wave packet with which the particle impinges on the barrier at the left turning point.
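The consistency between (25) and (26) is easy to check mechanically through the relations (20). The helper below assumes $`\mathrm{}=1`$ and the minimum-uncertainty width $`\mathrm{\Delta }x^2=1/[2(\beta +\beta ^{*})]`$, i.e. the normalization that reproduces the values in (26):

```python
def expectations(alpha, beta):
    """Expectation values of the Gaussian packet (16), following Eqs. (20)."""
    ac, bc = alpha.conjugate(), beta.conjugate()
    den = beta + bc
    x = (alpha * beta + ac * bc) / den          # <x>
    p = -2j * beta * bc * (alpha - ac) / den    # <p>
    x2 = 1.0 / (2.0 * den) + x ** 2             # <x^2>
    p2 = 2.0 * beta * bc / den + p ** 2         # <p^2>
    return x, p, x2, p2

# Initial data (25): alpha_0 = -(2*eps)^(1/2), beta_0 = 1/2
eps = 3.0
x, p, x2, p2 = expectations(complex(-(2.0 * eps) ** 0.5), complex(0.5))
```

With these inputs the four returned values reproduce the initial conditions (26) for any $`ϵ`$, which is the single-parameter control described in the text.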
Since the classical subsystem is coupled to the quantum degrees of freedom, it is also necessary to specify the initial conditions for the classical variables for its evolution. To this end we fix the parameter set $`\omega =6.32,A=0.002,B=0.25`$ and $`G=0.63`$ for the present study and choose the initial conditions for $`s`$ and $`p_s`$ as $`s=-4.02,-4.52,-5.02`$ for three regular and $`s=-8.80,-9.30,-11.31`$ for three chaotic trajectories, $`p_s`$ being chosen to be zero always. We refer to Lin and Ballentine for a typical illustration of the phase space.
The initial conditions (25) and (26), along with those for the classical variables, then allow us to follow the evolution of the expectation values and of the wave packet in terms of Eqs. (13)-(14) and Eq. (19). The relevant physical quantities of interest are determined in the next section.
Before closing this section a relevant point needs to be discussed here. We have considered the regular and chaotic trajectories of the classical subsystem and referred to its phase space as if this subsystem were not coupled to the quantum subsystem. Since the quantum subsystem considered here has a potential which is not bounded from below (a prototype model for barrier penetration), the phase space of the mixed system (quantum plus classical) is unbounded, and therefore the very notion of classical chaos is far from clear in such a situation. Based on this consideration we have selected the chaotic and regular parts of the phase space corresponding to the classical subsystem only.
III. The tunneling probability and the current
We are now in a position to calculate the wave function and the corresponding probability that the incident particle penetrates beyond the position $`x=\zeta `$. We write
$$T(x\ge \zeta ,\tau )=_\zeta ^{\infty }𝑑x\left|\psi (x,\tau )\right|^2.$$
(27)
Normalization of the wave function (16) leads us to
$$\left|\psi (x,\tau )\right|^2=\sqrt{\frac{a(\tau )}{\pi }}\mathrm{exp}[-a(\tau )(x-\widehat{x})^2],$$
(28)
where $`a(\tau )=\beta (\tau )+\beta ^{*}(\tau )`$.
Since a portion of the wave packet has already tunneled through the barrier at $`\tau =0`$, the tunneling contribution due to it should be subtracted from the tunneling probability given by (27). Thus between $`\tau =0`$ and $`\tau =\mathrm{}`$ the wave packet tunnels with the probability
$`p`$ $`=`$ $`T(x\ge x_0,\tau \to \infty )-T(x\ge x_0,\tau =0)`$ (29)
$`=`$ $`\sqrt{{\displaystyle \frac{a(0)}{\pi }}}{\displaystyle _0^{2x_0}}\mathrm{exp}[-a(0)y^2]𝑑y-\sqrt{{\displaystyle \frac{a(\infty )}{\pi }}}{\displaystyle _0^{x_0-x_{\infty }}}\mathrm{exp}[-a(\infty )y^2]𝑑y.`$
Here $`\widehat{x}`$ at $`\tau =0`$ is defined by $`\widehat{x}_0=-x_0`$ and $`\widehat{x}`$ at $`\tau =\infty `$ by $`x_{\infty }`$.
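Both terms of (29) are Gaussian integrals, so $`p`$ can be evaluated with the error function, using $`\sqrt{a/\pi }_0^c\mathrm{exp}[-ay^2]𝑑y=\mathrm{erf}(\sqrt{a}c)/2`$. The inputs below ($`a(0)`$, $`a(\infty )`$, $`x_0`$, $`x_{\infty }`$) are whatever the packet evolution delivers; the helper itself is just a transcription of (29):

```python
import math

def tunneling_probability(a0, a_inf, x0, x_inf):
    """Eq. (29) in closed form via the error function."""
    return (0.5 * math.erf(math.sqrt(a0) * 2.0 * x0)
            - 0.5 * math.erf(math.sqrt(a_inf) * (x0 - x_inf)))
```

In the fully transmitted limit $`x_{\infty }\to +\infty `$ the second term tends to $`-1/2`$, so $`p`$ approaches $`\frac{1}{2}+\frac{1}{2}\mathrm{erf}(2\sqrt{a(0)}x_0)\le 1`$, as it should for a probability.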
Another important quantity of interest is the tunneling current. The time evolution of the tunneling current
$$j(x,\tau )=\frac{1}{2i}[\psi ^{*}\frac{\partial \psi }{\partial x}-(\frac{\partial \psi }{\partial x})^{*}\psi ],$$
(30)
can be calculated by making use of the wave packet (16) after normalization, together with the solutions for $`\alpha (t)`$ and $`\beta (t)`$ in terms of Eqs. (18) and (19). It is easy to see that the particle reaches the end of the tunnel at $`x=x_0`$ where it produces the following current:
$$j(x_0,\tau )=\frac{1}{i}[\beta \alpha -(\beta \alpha )^{*}-x_0(\beta -\beta ^{*})]\left|\psi (x_0,\tau )\right|^2.$$
(31)
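A direct transcription of (31) is given below. One sanity check it makes explicit: for purely real $`\alpha `$ and $`\beta `$ — e.g. the initial minimum-uncertainty packet (25) — the bracket vanishes, so a momentumless real-parameter packet carries no current.

```python
def current_at_exit(alpha, beta, x0, psi2):
    """Eq. (31): tunneling current at x = x0; psi2 is |psi(x0, tau)|^2."""
    bracket = (beta * alpha - (beta * alpha).conjugate()
               - x0 * (beta - beta.conjugate()))
    return (bracket / 1j) * psi2
```

Once $`\alpha `$ or $`\beta `$ picks up an imaginary part during the evolution (19), the bracket is purely imaginary, so the current itself comes out real.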
We would now like to discuss here the following questions:
(i) how the tunneling probability (29) depends on classical chaos due to the nonintegrability of the classical subsystem.
(ii) how long it takes a particle to tunnel through the barrier and how it depends on classical chaos.
(iii) since the present model incorporates dissipation through classical friction, it is of interest to consider how the dissipative tunneling probability and the current depend on the nature of the classical motion of the subsystem.
Let us first discuss the case of dissipation-free tunneling ($`\mathrm{\Gamma }=0`$). Fig. 2 depicts the tunneling probability for two sets of regular and chaotic trajectories, when the incident wave packet carries the average energy $`ϵ=1`$. It is evident that, irrespective of the initial energy of the classical subsystem, the tunneling probability is substantially higher for the chaotic trajectories than for the regular ones. The calculation is repeated for a lower incident energy (i.e., higher $`ϵ`$) of the wave packet ($`ϵ=3`$) and the result is shown in Fig. 3. The difference is more marked in the region of higher values of the coupling constant.
The tunneling current (31) is shown in Fig. 3 as a function of time for two trajectories (one regular and the other chaotic). For a relative comparison we have also plotted the current for the coupling-free case. We observe that a particle needs more time to tunnel through the barrier when the classical subsystem is regular.
We now turn to the case of dissipative tunneling ($`\mathrm{\Gamma }\ne 0`$). In Fig. 4 we display the effect of dissipation on the tunneling probability when the classical subsystem is chaotic. A set of three chaotic trajectories corresponding to three different initial conditions ($`p_s=0.0;s=-11.31,-9.30,-8.80`$), which refer to widely different energies of the undriven oscillator, was studied. Similarly, the initial conditions for the set of three regular trajectories corresponding to widely separated turning points of the undriven well ($`p_s=0.0;s=-5.02,-4.52,-4.02`$) were examined. A variation of $`\mathrm{\Gamma }`$ was also carried out. For the sake of brevity we have plotted here only the representative variation for one chaotic ($`s=-11.31,p_s=0.0`$) and one regular ($`s=-5.02,p_s=0.0`$) trajectory with and without dissipation. It is interesting to observe that dissipation increases the tunneling probability quite significantly when the classical subsystem is chaotic (solid curves), but for the regular classical subsystem (dotted curves) the tunneling probability is suppressed by dissipation. Such a differential behavior of the dissipative tunneling of wave packets for the chaotic and regular trajectories (which remains qualitatively the same for other sets) is also apparent in the peak height of the tunneling current (Fig. 5), although the tunneling time does not differ much in the two cases.
Summarizing the above numerical results we observe that (i) classical chaos of the subsystem increases the tunneling probability and decreases the tunneling time quite significantly. (ii) Dissipation enhances the tunneling when the classical subsystem is chaotic in contrast to the case when the subsystem behaves regularly.
The problem of chaos-assisted tunneling has been addressed by various workers over many years. For example, while studying model two-dimensional autonomous systems it has been observed that the energy splitting can increase dramatically with the chaos of the intervening chaotic layer, in which tunneling takes place between distinct but symmetry-related regular phase-space regions separated by a chaotic layer. Lin and Ballentine carried out a numerical study on a driven double-well oscillator to show that, as the separating phase-space layer grows more chaotic with increasing driving strength, the tunneling rate is enhanced by orders of magnitude over the rate of the undriven system. Utermann et al. also investigated the same system to point out the role of classical chaotic diffusion as a mediator for barrier tunneling.
The enhancement of the tunneling of the wave packet by classical chaos in the present model of mixed quantum-classical description can be understood in the light of this classical chaotic diffusion. Instead of considering tunneling between two regions (tori) mediated by a chaotic transport between them, we consider here a single tunneling process which starts at one of the turning points of the barrier. As the wave packet evolves through the barrier, its mean motion, by virtue of the coupling of the inverted potential with the classical subsystem, is affected by the classical chaotic diffusive motion of the latter. While in the former cases tunneling is reversible, the process considered here is irreversible in nature.
The role of dissipation is somewhat more intriguing, since one is concerned here with three interplaying aspects of evolution, namely, tunneling, classical chaos and dissipation. The mechanism of enhancement of the tunneling probability by dissipation when the subsystem is chaotic can be understood qualitatively in the following way. It has been shown recently that tunneling may get enhanced due to virtual mixing of the excited states by the dissipative interaction, in contrast to ground-state tunneling, which is suppressed by dissipation. More relevant to the present work is an earlier observation by Ford, Lewis and O’Connell, who employed a quantum Langevin equation and found an increased tunneling rate for a particle tunneling through a parabolic barrier in a black-body radiation (incoherent) field which behaves as a standard harmonic bath. It is important to realize that in the present model the classical chaotic subsystem, although a system with only a few degrees of freedom, mimics the behaviour of a typical heat bath, which in the course of energy exchange with the quantum subsystem acquires a partial quantum character and brings decoherence into the dynamics. The classical chaotic subsystem is thus reminiscent of the background of a black-body radiation field reservoir, and the enhancement of the resulting tunneling of the wave packet through a parabolic barrier can be understood qualitatively in the spirit of Ford, Lewis and O’Connell.
IV. Conclusion
In this study we have considered the tunneling of a Gaussian wave packet through a potential barrier which in turn is coupled to a nonintegrable classical system. The operators pertaining to this system with mixed quantum-classical description close a partial Lie algebra with respect to the Hamiltonian operator of the system. By introducing a phenomenological classical friction one realizes a mechanism of dissipation in the overall dynamical evolution in this model without violating any quantum rule. Because of its nonintegrability the classical subsystem admits chaotic behavior. We have studied the interplay of classical chaos and dissipation in the tunneling of a wave packet through the barrier and shown that chaos-assisted tunneling is further enhanced by dissipation, while tunneling is suppressed by dissipation when the subsystem behaves regularly. Dissipation thus plays a significant role in the evolution of a tunneling process in the presence of classical chaos.
Acknowledgements: BCB is indebted to the Council of Scientific and Industrial Research for partial financial support. DSR is thankful to the Department of Science and Technology for a research grant.
References
1. P. M. Stevenson, Phys. Rev. D30 1712(1984); D32 1389(1985).
2. A. K. Pattanayak and W. C. Schieve, Phys. Rev. A46 1821 (1992); H. Schanz and B. Esser, Phys. Rev. A55 3375 (1997).
3. P. W. Milonni, J. R. Ackerhalt and H. W. Galbraith, Phys. Rev. Letts. 50 966 (1983); J. R. Ackerhalt and P. W. Milonni, J. Opt. Soc. Am. B1 116 (1984); P. W. Milonni, J. R. Ackerhalt and H. W. Galbraith, Phys. Rev. A28 887 (1983); P. I. Belobrov, G. P. Berman and G. M. Zaslavski, JETP 49 993 (1979).
4. A. Nath and D. S. Ray, Phys. Rev. A36 431 (1987); Phys. Letts. A117 341 (1986); Phys. Letts. A116 104 (1986); G. Gangopadhyay and D. S. Ray, Phys. Rev. A40 3750 (1989).
5. A. Bulgac, Phys. Rev. Letts. 67 965 (1991).
6. R. Blumel and B. Esser, Phys. Rev. Letts. 72 3658 (1994).
7. L. L. Bonilla and F. Guinea, Phys. Letts. B271 196 (1991); Phys. Rev. A45 7718 (1992).
8. A. K. Pattanayak and W. C. Schieve, Phys. Rev. Letts 72 2855 (1994).
9. A. M. Kowalaski, A. Plastino and A. N. Proto, Phys. Rev. E52 165 (1995).
10. E. Kanai, Prog. Theo. Phys. 3 440 (1948).
11. M. D. Kostin, J. Stat. Phys. 12 145 (1975).
12. D. Greenberger, J. Math. Phys. 20 762 (1979).
13. A. O. Caldeira and A. J. Leggett, Physica 121A 587 (1983).
14. I. V. Krive and A. S. Rozhavskii, Theo. and Math. Phys. 89 1069 (1991); I. V. Krive and S. M. Latinsky, Ann. Phys. 221 204 (1993).
15. W. H. Louisell, Quantum Statistical Properties of Radiation (Wiley, NY 1973).
16. D. S. Ray, Phys. Letts. A122 479 (1987) ; J. Chem. Phys. 92 1145 (1990) ; R. D. Levine and C. E. Wulfman Chem. Phys. Letts. 60 372 (1974).
17. W. A. Lin and L. E. Ballentine, Phys. Rev. Letts. 65 2927 (1990).
18. J. Plata and J. M. Gomez Llorente J. Phys. A25 L303 (1992).
19. R. Utermann, T. Dittrich and P. Hänggi, Phys. Rev. E49 273 (1994).
20. K. Fujikawa, S. Iso, M. Sasaki and H. Suzuki, Phys. Rev. Letts. 68 1093 (1992). See the references therein.
21. A. J. Leggett and A. O. Caldeira Phys. Rev. Letts. 46 211 (1981).
22. S. Chaudhuri, G. Gangopadhyay and D. S. Ray, Phys. Rev. E52 2262 (1995).
23. S. Chaudhuri, D. Majumdar and D. S. Ray, Phys. Rev. E53 5816 (1996).
24. O. Bohigas, S. Tomsovic and D. Ullmo, Phys. Rep 223 43 (1993).
25. G. W. Ford, J. T. Lewis and R. T. O’Connell, Phys. Letts. A158 367 (1991).
Figure Captions
Fig. 1. A Gaussian wave packet is shown at the left classical turning point, $`-x_0`$, of an inverted parabolic barrier.
Fig.2. The tunneling probability ($`p`$) is plotted as a function of the coupling strength ($`\chi `$) for the wave packet’s average energy $`E_q=-3\omega _0`$ ($`ϵ=3.0`$) and for different initial positions of the classical subsystem. The initial positions for the three classical chaotic trajectories are ($`a`$) $`s`$=-11.31, ($`b`$) $`s`$=-9.30, ($`c`$) $`s`$=-8.80, and for the three regular trajectories ($`d`$) $`s`$=-5.02, ($`e`$) $`s`$=-4.52 and ($`f`$) $`s`$=-4.02; $`p_s=0`$. (The quantities are dimensionless; scale arbitrary.)
Fig.3. The tunneling current (T) is plotted as a function of time ($`\tau `$) for the wave packet’s average energy $`ϵ=1.0`$: ($`a`$) for the classical chaotic subsystem, ($`b`$) for the classical regular subsystem and ($`c`$) for the uncoupled system ($`\chi =0`$).
Fig.4. The tunneling probability ($`p`$) is plotted as a function of the coupling strength ($`\chi `$) for the wave packet’s average energy $`ϵ=3`$ and for different damping values: ($`a`$) $`\mathrm{\Gamma }`$=2.0 and ($`b`$) $`\mathrm{\Gamma }`$=0.0 when the classical subsystem is chaotic. Similarly, curves ($`c`$) $`\mathrm{\Gamma }`$=2.0 and ($`d`$) $`\mathrm{\Gamma }`$=0.0 are plotted for the regular classical subsystem.
Fig.5. The tunneling current (T) is plotted as a function of time ($`\tau `$) for the wave packet’s average energy $`ϵ=3`$ and for different damping values: ($`a`$) $`\mathrm{\Gamma }`$=2.0 and ($`b`$) $`\mathrm{\Gamma }`$=0.0 when the classical subsystem is chaotic. Similarly, curves ($`c`$) $`\mathrm{\Gamma }`$=2.0 and ($`d`$) $`\mathrm{\Gamma }`$=0.0 are plotted for the regular classical subsystem.
Fig. 1. The mass bounds of the lightest CP-even Higgs boson mass as a function of $`M`$ for various $`\mathrm{\Lambda }`$ in the Model I 2HDM at $`m_t=175`$ GeV.
Upper and lower bounds of the lightest CP-even Higgs boson
in the two-Higgs-doublet model
(Talk given by Shinya Kanemura (kanemu@particle.physik.uni-karlsruhe.de) at the 2nd ECFA/DESY Linear Collider Workshop in Obernai, France, 16.-19. October 1999, under the title “Mass bounds of the lightest CP-even Higgs boson in the two-Higgs-doublet model”.)
Shinya KANEMURA, Takashi KASAI, and Yasuhiro OKADA
Institut für Theoretische Physik, Universität Karlsruhe
D-76128 Karlsruhe, Germany
Theory Group, KEK
Tsukuba, Ibaraki 305-0801, Japan
By imposing the validity of perturbation theory and the stability of the vacuum up to an energy scale $`\mathrm{\Lambda }`$ (up to $`10^{19}`$ GeV), we evaluate bounds on the lightest CP-even Higgs boson mass ($`m_h`$) in the two-Higgs-doublet model (2HDM) with a softly-broken discrete symmetry. In the standard model (SM), both the upper and the lower bounds have been analyzed from this kind of requirement as a function of $`\mathrm{\Lambda }`$. There have already been several works on the Higgs mass bounds in the 2HDM without the soft-breaking term. Our analysis is a generalization of these works to the case with the soft-breaking term. Because the introduction of the soft-breaking scale changes the properties of the 2HDM, it is very interesting to see what happens to the mass bounds in this case. Our results are qualitatively different from the previous works in the region of large soft-breaking mass, where only one neutral Higgs boson remains light. We find that, while the upper bound is almost the same as in the SM, the lower bound is significantly reduced. In the decoupling regime, where the model behaves like the SM at low energy, the lower bound is given, for example, by about 100 GeV for $`\mathrm{\Lambda }=10^{19}`$ GeV and $`m_t=175`$ GeV, which is smaller by about 40 GeV than the corresponding lower bound in the SM. In the general case, $`m_h`$ is no longer bounded from below by these conditions. If we consider the experimental $`b\to s\gamma `$ constraint, small values of $`m_h`$ are excluded in Model II of the 2HDM.
The Higgs potential of the 2HDM is given for both Model I and Model II as
$`V_{2\mathrm{H}\mathrm{D}\mathrm{M}}`$ $`=`$ $`m_1^2\left|\phi _1\right|^2+m_2^2\left|\phi _2\right|^2-m_3^2\left(\phi _1^{\dagger }\phi _2+\phi _2^{\dagger }\phi _1\right)+{\displaystyle \frac{\lambda _1}{2}}\left|\phi _1\right|^4+{\displaystyle \frac{\lambda _2}{2}}\left|\phi _2\right|^4`$ (1)
$`+\lambda _3\left|\phi _1\right|^2\left|\phi _2\right|^2+\lambda _4\left|\phi _1^{\dagger }\phi _2\right|^2+{\displaystyle \frac{\lambda _5}{2}}\left\{\left(\phi _1^{\dagger }\phi _2\right)^2+\left(\phi _2^{\dagger }\phi _1\right)^2\right\}.`$
We here take all the self-coupling constants and the mass parameters in (1) to be real. In Model II, $`\phi _1`$ has couplings with down-type quarks and leptons and $`\phi _2`$ with up-type quarks. Only $`\phi _2`$ has couplings with fermions in Model I.
The masses of the charged Higgs bosons $`(\chi ^\pm )`$ and the CP-odd Higgs boson $`(\chi _2)`$ are expressed as $`m_{\chi ^\pm }^2=M^2-(\lambda _4+\lambda _5)v^2/2`$ and $`m_{\chi _2}^2=M^2-\lambda _5v^2`$, respectively, where $`M=m_3/\sqrt{\mathrm{cos}\beta \mathrm{sin}\beta }`$, $`\mathrm{tan}\beta =\phi _2/\phi _1`$ and $`v=\sqrt{2}\sqrt{\phi _1^2+\phi _2^2}\simeq 246`$ GeV. The two CP-even Higgs boson masses are obtained by diagonalizing the $`2\times 2`$ matrix, where each component is given by $`M_{11}^2=v^2\left(\lambda _1\mathrm{cos}^4\beta +\lambda _2\mathrm{sin}^4\beta +\frac{\lambda }{2}\mathrm{sin}^22\beta \right)`$, $`M_{12}^2=M_{21}^2=v^2\mathrm{sin}2\beta \left(-\lambda _1\mathrm{cos}^2\beta +\lambda _2\mathrm{sin}^2\beta +\lambda \mathrm{cos}2\beta \right)/2`$ and $`M_{22}^2=v^2\left(\lambda _1+\lambda _2-2\lambda \right)\mathrm{sin}^2\beta \mathrm{cos}^2\beta +M^2`$, where $`\lambda \equiv \lambda _3+\lambda _4+\lambda _5`$. The mass of the lighter (heavier) CP-even Higgs boson $`h`$ ($`H`$) is then given by $`m_{h,H}^2=\left\{M_{11}^2+M_{22}^2\mp \sqrt{(M_{11}^2-M_{22}^2)^2+4M_{12}^4}\right\}/2`$. For the case of $`v^2\ll M^2`$, they can be expressed by
$`m_h^2`$ $`=`$ $`v^2\left(\lambda _1\mathrm{cos}^4\beta +\lambda _2\mathrm{sin}^4\beta +{\displaystyle \frac{\lambda }{2}}\mathrm{sin}^22\beta \right)+𝒪({\displaystyle \frac{v^4}{M^2}}),`$ (2)
$`m_H^2`$ $`=`$ $`M^2+v^2\left(\lambda _1+\lambda _2-2\lambda \right)\mathrm{sin}^2\beta \mathrm{cos}^2\beta +𝒪({\displaystyle \frac{v^4}{M^2}}).`$ (3)
Notice that the soft-breaking parameter $`M`$ characterizes the model. In the case of $`M^2\gg \lambda _iv^2`$, the heavy Higgs bosons other than the lightest one decouple from the low-energy observables, and below the scale $`M`$ the effective theory is the SM with one Higgs doublet. On the other hand, if $`M^2\lesssim \lambda _iv^2`$, the masses are controlled by the self-coupling constants, and thus the heavy Higgs bosons do not decouple and the lightest CP-even Higgs boson can have properties different from those of the SM Higgs boson.
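The decoupling statement can be made concrete by diagonalizing the 2×2 CP-even mass matrix exactly and comparing with the truncated formula (2). The sketch below does this for arbitrary illustrative inputs (the coupling values and $`\mathrm{tan}\beta `$ used in the example are not taken from any fit):

```python
import math

def cp_even_masses(l1, l2, lam, tanb, M, v=246.0):
    """Exact (m_h^2, m_H^2) from the 2x2 CP-even mass matrix; lam = l3 + l4 + l5."""
    b = math.atan(tanb)
    c, s = math.cos(b), math.sin(b)
    M11 = v * v * (l1 * c ** 4 + l2 * s ** 4 + 0.5 * lam * math.sin(2 * b) ** 2)
    M22 = M * M + v * v * (l1 + l2 - 2.0 * lam) * s * s * c * c
    M12 = 0.5 * v * v * math.sin(2 * b) * (-l1 * c * c + l2 * s * s
                                           + lam * math.cos(2 * b))
    root = math.sqrt((M11 - M22) ** 2 + 4.0 * M12 ** 2)
    return 0.5 * (M11 + M22 - root), 0.5 * (M11 + M22 + root)
```

For $`M`$ well above $`\sqrt{\lambda _i}v`$ the exact $`m_h^2`$ agrees with Eq. (2) up to the stated $`𝒪(v^4/M^2)`$ correction, while $`m_H^2`$ is dominated by $`M^2`$ — the decoupling regime.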
As the condition of validity of perturbation theory, we here require that the running coupling constants of the Higgs self-couplings and the Yukawa couplings do not blow up below a certain energy scale $`\mathrm{\Lambda }`$: this leads the constraints on the coupling constants;
$`\lambda _i(\mu )<8\pi ,y_t^2(\mu )<4\pi ,(\mu <\mathrm{\Lambda }).`$ (4)
Next, from the condition of the vacuum stability we obtain constraints;
$`\lambda _1(\mu )>0,\lambda _2(\mu )>0,`$
$`\sqrt{\lambda _1(\mu )\lambda _2(\mu )}+\lambda _3(\mu )+\mathrm{min}[0,\lambda _4(\mu )+\lambda _5(\mu ),\lambda _4(\mu )-\lambda _5(\mu )]>\mathrm{\hspace{0.17em}0},(\mu <\mathrm{\Lambda }).`$ (5)
We assume that the tree-level Higgs potential at the weak scale does not have any global minimum other than the one we consider: there is neither CP nor charge breaking at the minimum. The conditions (4) and (5) constrain the low-energy coupling constants through the renormalization group equations (RGE's). Thus the mass bounds of the Higgs boson are obtained.
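For a parameter scan it is convenient to bundle (4) and (5) into one predicate evaluated at each scale $`\mu `$ along the running; a minimal version (the $`8\pi `$ and $`4\pi `$ thresholds are the ones quoted above) might look like:

```python
import math

def allowed(l1, l2, l3, l4, l5, yt2):
    """Perturbativity (4) and tree-level vacuum stability (5) at a single scale mu."""
    perturbative = (all(l < 8 * math.pi for l in (l1, l2, l3, l4, l5))
                    and yt2 < 4 * math.pi)
    stable = (l1 > 0 and l2 > 0 and
              math.sqrt(l1 * l2) + l3 + min(0.0, l4 + l5, l4 - l5) > 0)
    return perturbative and stable
```

Short-circuit evaluation ensures the square root is only taken once both $`\lambda _1`$ and $`\lambda _2`$ are known to be positive.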
In the decoupling regime ($`M^2\gg \lambda _iv^2`$), the 2HDM effectively becomes the SM with one Higgs doublet below $`M`$. In order to include this effect, we use the one-loop SM RGE below $`M`$ and the one-loop 2HDM RGE above $`M`$. They are connected at $`M`$ by identifying the lightest CP-even Higgs boson of the 2HDM with the SM one in the mass formulas of both models. We use this procedure for the case $`M^2\lesssim \lambda _iv^2`$ as well, because the correction from the SM RGE is numerically very small in this case, although this procedure is not really justified there.
The 2HDM receives rather strong experimental constraints from the low-energy precision data, especially on the $`\rho `$ parameter. The extra contribution of the 2HDM to the $`\rho `$ parameter should satisfy $`\mathrm{\Delta }\rho _{2\mathrm{H}\mathrm{D}\mathrm{M}}=0.0020-0.00049\frac{m_t-175\mathrm{G}\mathrm{e}\mathrm{V}}{5\mathrm{G}\mathrm{e}\mathrm{V}}\pm 0.0027`$. Another important experimental constraint comes from the $`b\to s\gamma `$ measurement: there is a strong lower bound on the charged-Higgs boson mass from this process in Model II, while Model I is not strongly constrained. We examine the general mass bounds of $`h`$ as a function of $`\mathrm{\Lambda }`$, varying all the free parameters under these experimental constraints.
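Treating the quoted numbers as a flat allowed band (an assumption of this illustration, not a statistical prescription), the $`\rho `$-parameter constraint becomes a simple $`m_t`$-dependent window:

```python
def delta_rho_window(mt):
    """Allowed band for the extra 2HDM contribution to rho: central +/- 0.0027."""
    central = 0.0020 - 0.00049 * (mt - 175.0) / 5.0
    return central - 0.0027, central + 0.0027
```

Raising $`m_t`$ lowers the central value, so both edges of the window shift downward together.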
By looking at the RGE's the qualitative result may be understood. In the decoupling regime, from Eq. (2) we have $`m_h^2\simeq \lambda _2v^2`$ for $`\mathrm{tan}\beta \gg 1`$. The RGE for $`\lambda _2`$ is given by
$$16\pi ^2\mu \frac{d\lambda _2}{d\mu }=12\lambda _2^2-3\lambda _2(3g^2+g^{\prime 2})+\frac{3}{2}g^4+\frac{3}{4}(g^2+g^{\prime 2})^2+12\lambda _2y_t^2-12y_t^4+A,$$
(6)
where $`A=2\lambda _3^2+2(\lambda _3+\lambda _4)^2+2\lambda _5^2>0`$. The SM RGE for $`\lambda _{SM}(\equiv m_{H^{SM}}^2/v^2)`$ takes the same form as Eq. (6), with $`\lambda _{SM}`$ and $`y_t^{SM}`$ substituted for $`\lambda _2`$ and $`y_t`$ and with the $`A`$ term in the RHS neglected. Hence the only difference from the SM RGE is the existence of the positive $`A`$ term, which works to keep the stability of the vacuum. Thus the lower bound is expected to be reduced in the 2HDM in comparison with the SM results.
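The stabilizing effect of the positive $`A`$ term can be seen in a toy integration of Eq. (6) on its own, freezing $`g`$, $`g^{\prime }`$ and $`y_t`$ at fixed values — a crude assumption of this sketch (the full analysis runs all couplings simultaneously), with all numbers chosen only for illustration:

```python
import math

def run_lambda2(lam0, A, t_end, g2=0.42, gp2=0.13, yt=0.7, n=10000):
    """Euler integration of Eq. (6) with frozen g, g', y_t; t = ln(mu/mu_0).
    Returns lambda_2 at t_end, or None if it runs negative (unstable vacuum)."""
    lam, dt = lam0, t_end / n
    for _ in range(n):
        beta = (12 * lam ** 2 - 3 * lam * (3 * g2 + gp2)
                + 1.5 * g2 ** 2 + 0.75 * (g2 + gp2) ** 2
                + 12 * lam * yt ** 2 - 12 * yt ** 4 + A)
        lam += beta / (16 * math.pi ** 2) * dt
        if lam < 0:
            return None
    return lam

lam_sm_like = run_lambda2(0.2, 0.0, 10.0)  # A = 0: SM-like running
lam_2hdm    = run_lambda2(0.2, 1.0, 10.0)  # A > 0 from the extra quartic couplings
```

Since the two runnings differ pointwise by the non-negative $`A/(16\pi ^2)`$, the $`A>0`$ solution stays above the $`A=0`$ one, i.e. the vacuum-stability constraint on the weak-scale $`\lambda _2`$ is relaxed, in line with the argument above.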
In Fig. 1 and 2, the upper and lower bounds of $`m_h`$ are shown as a function of $`M`$ for various cut-off scales $`\mathrm{\Lambda }`$ for Model I and II, respectively. In Fig. 1, the allowed region of $`m_h`$ lies around $`m_h\simeq M`$ for $`M^2\lesssim \lambda _2v^2`$, where $`m_h`$ comes from $`M_{22}\simeq M`$ and the heavier Higgs boson has the mass $`M_{11}\simeq \sqrt{\lambda _2}v`$. At $`M=0`$, though there are upper bounds of $`m_h`$ for each $`\mathrm{\Lambda }`$, $`m_h`$ is not bounded from below by our conditions. Our results at $`M=0`$ are consistent with Ref. . On the other hand, in the decoupling regime ($`M^2\gg \lambda _2v^2`$), the situation is reversed: $`m_h\simeq M_{11}\simeq \sqrt{\lambda _2}v`$, and the bounds no longer depend on $`M`$. If we take account of the experimental result on $`b\to s\gamma `$, $`m_h`$ is bounded from below in Model II, as seen in Fig. 2, because the small-$`M`$ region necessarily corresponds to small $`m_{\chi ^\pm }`$, which is excluded by the $`b\to s\gamma `$ constraint.
Finally, we combine the results in the SM and the 2HDM (Model I and II) (Fig. 3). We here choose, as an example, $`\mathrm{\Lambda }=10^{19}`$ GeV for the comparison of the results in the SM and the 2HDM at $`m_t=175`$ GeV. For reference, the bounds on the lightest CP-even Higgs mass in the MSSM are also given for a 1 TeV stop mass. (In the MSSM, $`M`$ corresponds exactly to the CP-odd Higgs boson mass.) From Fig. 3 it is easy to observe the difference of the bounds among the SM, the 2HDM(I) and the 2HDM(II). While the upper bounds are all around 175 GeV in these models, the lower bounds are completely different, as we expect: about 145 GeV in the SM, about 100 GeV in Model II (with respect to the $`b\to s\gamma `$ constraint) and no lower bound in Model I. (If we use a more conservative way to add theoretical uncertainties in the $`b\to s\gamma `$ evaluation, the bound on the charged Higgs boson mass, or on $`M`$ in Model II, becomes rather weaker; the lower bound of $`m_h`$ due to the $`b\to s\gamma `$ constraint is then reduced by a few GeV according to the changed allowed region of $`M`$.) Although we have shown figures in which $`m_t=175`$ GeV is taken, the top mass dependence cannot be neglected, especially for the lower bounds. For example, the lower line for $`\mathrm{\Lambda }=10^{19}`$ GeV in the 2HDM shown in Fig. 3 shifts down (up) by 9 GeV for $`m_t=170`$ $`(180)`$ GeV at $`M=1000`$ GeV.
In the SM, the next-to-leading order analysis of the effective potential shows that the lower bound is reduced by about 10 GeV ($`\mathrm{\Lambda }=10^{19}`$ GeV) . It may then be expected that a similar reduction of the lower bound would occur in the 2HDM in such a higher order analysis.
In the decoupling regime, the properties of the lightest Higgs boson, such as the production cross section and the decay branching ratios, are almost the same as those of the SM Higgs boson. We have not explicitly considered the constraint from the Higgs boson search at LEP II , but if a Higgs boson is discovered with a mass around $`100`$ GeV at LEP II or the Tevatron in the near future and its properties are quite similar to those of the SM Higgs boson, the 2HDM with a very high cut-off scale is, along with the MSSM and its extensions, another candidate model that predicts such a light Higgs boson.
In summary, we have discussed the mass bounds on $`h`$ as a function of the cutoff $`\mathrm{\Lambda }`$, imposed by the requirements of perturbativity and vacuum stability in the non-SUSY 2HDM with a softly-broken discrete symmetry. The upper bounds are almost the same as the SM results, while the lower bounds are significantly reduced, even in the decoupling regime. In the general case, the mass is no longer bounded from below. If we take the experimental $`b\to s\gamma `$ constraint into account, a very light $`h`$ is excluded.
Elastic waves of short wavelength propagating through the upper layer of the Earth appear to move faster at large separations of source and receiver than at short separations. This scale dependent velocity is a manifestation of Fermat’s principle of least time in a medium with random velocity fluctuations. Existing perturbation theories predict a linear increase of the velocity shift with increasing separation, and cannot describe the saturation of the velocity shift at large separations that is seen in computer simulations. Here we show that this long-standing problem in seismology can be solved using a model developed originally in the context of polymer physics. We find that the saturation velocity scales with the four-thirds power of the root-mean-square amplitude of the velocity fluctuations, in good agreement with the computer simulations.
Seismologists probe the internal structure of the Earth by recording the arrival times of waves created by a distant earthquake or explosion. Systematic differences between studies based on long and short wavelengths $`\lambda `$ have been explained in terms of a scale dependence of the velocity at short wavelengths. The velocity obtained by dividing the separation $`L`$ of source and receiver by the travel time $`T`$ increases with increasing $`L`$, because — following Fermat’s principle — the wave seeks out the fastest path through the medium (see Fig. 1). This search for an optimal path is more effective for large separations, hence the apparent increase in velocity on long length scales. It is a short-wavelength effect, since Fermat’s principle breaks down if the width $`\sqrt{L\lambda }`$ of the first Fresnel zone becomes comparable to the size $`a`$ of the heterogeneities. The scale-dependent velocity of seismic waves was noted by Wielandt more than a decade ago, and has been studied extensively by geophysicists.
A rather complete solution of the problem for small $`L`$ was given by Roth, Müller, and Snieder, by means of a perturbation expansion around the straight path. The velocity shift $`\delta v=v_0(1-v_0T/L)`$ (with $`v_0`$ the velocity along the straight path) was averaged over spatially fluctuating velocity perturbations with a Gaussian correlation function (having correlation length $`a`$ and variance $`\epsilon ^2v_0^2`$, with $`\epsilon \ll 1`$). It was found that $`\delta v\sim v_0\epsilon ^2L/a`$ increases linearly with $`L`$. Clearly, this increase in velocity can not continue indefinitely. The perturbation theory should break down when the root-mean-square deviation $`\delta x\sim \epsilon a(L/a)^{3/2}`$ of the fastest path from the straight path becomes comparable to $`a`$. Numerical simulations show that the velocity shift saturates on length scales greater than the critical length $`L_\mathrm{c}\sim a\epsilon ^{-2/3}`$ for the validity of perturbation theory. A theory for this saturation does not exist.
It is the purpose of this article to present a non-perturbative theory for this seismological problem, by making the analogy with a problem from polymer physics. The problem of the velocity shift in a random medium belongs to the class of optimal path problems that has a formal equivalence to the directed polymer problem. The mapping between these two problems relates a wave propagating through a medium with velocity fluctuations to a polymer moving in a medium with fluctuations in pinning energy. The travel time of the wave between source and receiver corresponds to the energy of the polymer with fixed end points. At zero temperature the configuration of the polymer corresponds to the path selected by Fermat’s principle. (The restriction to directed polymers, those which do not turn backwards, becomes important for higher temperatures.) There exists a simple solvable model for directed polymers, due to Derrida and Griffiths, that has remained unnoticed in the seismological context. Using that model we can go beyond the breakdown of perturbation theory and describe the saturation of the velocity shift on large length scales.
Hierarchical model
We follow a recursive procedure, by which the probability distribution of travel times is constructed at larger and larger distances, starting from the perturbative result at short distances. At each iteration we compare travel times from source to receiver along two branches, choosing the smallest time. A branch consists of two bonds, each bond representing the length scale of the previous step. This recursive procedure produces the lattice of Fig. 2, called a hierarchical lattice. The lattice in this example represents a two-dimensional system, since at each step the length is doubled while the number of bonds is increased by a factor of four. For the three-dimensional version one would compare four branches at each step (each branch containing two bonds), so that the total number of bonds would grow as the third power of the length. Since most of the simulations have been done for two-dimensional systems, we will consider that case in what follows.
To cast this procedure in the form of a recursion relation, we denote by $`p_k(T)`$ the distribution of travel times at step $`k`$. One branch, consisting of two bonds in series, has travel time distribution
$$q_k(T)=\int _0^{\infty }𝑑T^{\prime }p_k(T^{\prime })p_k(T-T^{\prime }),$$
(1)
assuming that different bonds have uncorrelated distributions. To get the probability distribution at step $`k+1`$ we compare travel times of two branches,
$`p_{k+1}(T)`$ $`=`$ $`{\displaystyle \int _0^{\infty }}𝑑T^{\prime }{\displaystyle \int _0^{\infty }}𝑑T^{\prime \prime }\delta \mathbf{\left(}T-\mathrm{min}(T^{\prime },T^{\prime \prime })\mathbf{\right)}q_k(T^{\prime })q_k(T^{\prime \prime })`$ (2)
$`=`$ $`2q_k(T){\displaystyle \int _T^{\infty }}𝑑T^{\prime }q_k(T^{\prime }).`$ (3)
We start the recursion relation at step $`0`$ with the distribution $`p_0(T)`$ calculated from perturbation theory at length $`L_c`$. Iteration of eq. (3) then produces the travel time distribution $`p_k(T)`$ at length $`L=2^kL_c`$.
Equation (3) is a rather complicated non-linear integral equation. Fortunately, it has several simplifying properties. One can separate out the mean $`T_0`$ and standard deviation $`\sigma _0\ne 0`$ of the starting probability distribution, by means of the $`k`$-dependent rescaling $`\tau =(T-2^kT_0)/\sigma _0`$. The recursion relation (3) is invariant under this rescaling, which means that we can restrict ourselves to starting distributions $`\stackrel{~}{p}_0(\tau )=\sigma _0p_0(\sigma _0\tau +T_0)`$ having zero mean and unit variance. This is the first simplification. After $`k`$ iterations the mean $`\stackrel{~}{m}_k`$ and standard deviation $`\stackrel{~}{\sigma }_k`$ of the rescaled distribution $`\stackrel{~}{p}_k(\tau )`$ yield the mean $`T_k`$ and standard deviation $`\sigma _k`$ of the original $`p_k(T)`$ by means of
$$T_k=\sigma _0\stackrel{~}{m}_k+2^kT_0,\sigma _k=\sigma _0\stackrel{~}{\sigma }_k.$$
(4)
The second simplification is that for large $`k`$, the recursion relation for $`\stackrel{~}{p}_k(\tau )`$ reduces to
$$\stackrel{~}{p}_{k+1}(\tau )=\frac{1}{2}\alpha \stackrel{~}{p}_k\left(\frac{1}{2}\alpha \tau +\beta \stackrel{~}{\sigma }_k+(1-\alpha )\stackrel{~}{m}_k\right),$$
(5)
with universal constants $`\alpha =1.627`$ and $`\beta =0.647`$. Under the mapping (5), the mean and standard deviation evolve according to
$$\stackrel{~}{m}_{k+1}=2\stackrel{~}{m}_k-2\beta \stackrel{~}{\sigma }_k/\alpha ,\stackrel{~}{\sigma }_{k+1}=2\stackrel{~}{\sigma }_k/\alpha .$$
(6)
The solution of this simplified recursion relation is
$$\stackrel{~}{m}_k=\frac{2^k\beta }{\alpha -1}(A\alpha ^{-k}-B),\stackrel{~}{\sigma }_k=2^kA\alpha ^{-k}.$$
(7)
The coefficients $`A`$ and $`B`$ are non-universal, depending on the shape of the starting distribution $`\stackrel{~}{p}_0`$. For a Gaussian $`\stackrel{~}{p}_0`$ we find $`A=0.90`$, $`B=0.95`$, close to the values $`A=1`$, $`B=1`$ that would apply if eq. (6) holds down to $`k=0`$. For a highly distorted bimodal $`\stackrel{~}{p}_0`$ we find $`A=0.71`$, $`B=0.88`$. We conclude that $`A`$ and $`B`$ depend only weakly on the shape of the starting distribution.
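It is easy to check that eq. (7) indeed solves the recursion (6). The snippet below (our own consistency check, using the idealized values $`A=B=1`$ for which eq. (6) holds from $`k=0`$) iterates eq. (6) directly and compares with the closed form at every step:

```python
alpha, beta = 1.627, 0.647   # universal constants of the map (5)
A, B = 1.0, 1.0              # idealized starting values: m_0 = 0, sigma_0 = 1

m, s = 0.0, 1.0              # rescaled mean and standard deviation at k = 0
for k in range(1, 11):
    m = 2.0 * m - 2.0 * beta * s / alpha   # recursion (6) for the mean
    s = 2.0 * s / alpha                    # recursion (6) for the std
    m_cf = 2.0 ** k * beta / (alpha - 1.0) * (A * alpha ** -k - B)  # eq. (7)
    s_cf = 2.0 ** k * A * alpha ** -k                               # eq. (7)
    assert abs(m - m_cf) < 1e-9 * abs(m_cf)
    assert abs(s - s_cf) < 1e-9 * s_cf
```

Both the mean and the standard deviation grow faster than any power of $`k`$ but slower than $`2^k`$, which is what produces the sub-linear approach to saturation in eqs. (8) and (9).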
Scaling laws
Given the result (7) we return to the mean and standard deviation of $`p_k(T)`$ using eq. (4). Substituting $`k=\mathrm{log}_2(L/L_c)`$ one finds the large-$`L`$ scaling laws
$`{\displaystyle \frac{T}{L}}`$ $`=`$ $`{\displaystyle \frac{T_0}{L_c}}-{\displaystyle \frac{\beta }{\alpha -1}}{\displaystyle \frac{\sigma _0}{L_c}}\left[B-A\left({\displaystyle \frac{L_c}{L}}\right)^p\right],`$ (8)
$`{\displaystyle \frac{\sigma }{L}}`$ $`=`$ $`{\displaystyle \frac{\sigma _0}{L_c}}A\left({\displaystyle \frac{L_c}{L}}\right)^p.`$ (9)
The mean travel time $`T`$ and standard deviation $`\sigma `$ scale with $`L`$ with an exponent $`p=\mathrm{log}_2\alpha =0.702`$. This scaling exponent has been studied intensively for the directed polymer problem.
For the seismic problem the primary interest is not the scaling with $`L`$, but the scaling with the strength $`\epsilon `$ of the fluctuations. Perturbation theory gives the $`\epsilon `$-dependence at length $`L_c`$,
$$1-v_0T_0/L_c\sim \epsilon ^2L_c/a,v_0\sigma _0/L_c\sim \epsilon \sqrt{a/L_c},$$
(10)
where $`\sim `$ indicates that coefficients of order unity have been omitted. (We will fill these in later.) Since $`L_c\sim a\epsilon ^{-2/3}`$ (as mentioned in the introduction), we find upon substitution into eq. (8) the scaling of the mean velocity shift at length $`L\gg L_c`$:
$$\delta v/v_0=1-v_0T/L\sim \epsilon ^{4/3}\left[1+𝒪(L_c/L)^p\right].$$
(11)
The mean velocity shift saturates at a value of order $`v_0\epsilon ^{4/3}`$. The exponent $`\frac{4}{3}`$ was anticipated in ref. and is close to the value $`1.33\pm 0.01`$ resulting from simulations.
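The origin of the $`\epsilon ^{4/3}`$ law can be confirmed numerically from the order-of-magnitude relations (10): setting the order-one prefactors to unity (an assumption for illustration only), the saturation value $`v_0\sigma _0/L_c`$ follows a pure power law in $`\epsilon `$ whose log-log slope is exactly $`4/3`$:

```python
import numpy as np

# With L_c/a = eps**(-2/3) and v0*sigma_0/L_c = eps*sqrt(a/L_c), eq. (10)
# (order-one coefficients set to 1), the saturated shift is a power law.
eps = np.logspace(-3, -1, 20)
L_c_over_a = eps ** (-2.0 / 3.0)
dv_sat = eps * np.sqrt(1.0 / L_c_over_a)   # saturation value, up to O(1) factors
slope = np.polyfit(np.log(eps), np.log(dv_sat), 1)[0]   # expect 4/3
```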
Comparison with simulations
For a more quantitative description we need to know the coefficients omitted in eq. (10). These are model dependent. To make contact with the simulations we consider the case of an incoming plane wave instead of a point source. The perturbation theory for the mean velocity shift at length $`L_c`$ gives
$$\delta v_0=v_0\epsilon ^2\frac{L_c}{a}\frac{\sqrt{\pi }}{2}\left(1-\frac{2}{\sqrt{\pi }}\frac{a}{L_c}\right).$$
(12)
The variance at length $`L_c`$ is
$$\langle \delta v^2\rangle _0=v_0^2\epsilon ^2\frac{a}{L_c}\sqrt{\pi }\left(1-\sqrt{\pi }\epsilon ^2\frac{L_c^3}{a^3}\right).$$
(13)
We quantify the criterion for the breakdown of perturbation theory by $`L_c=\kappa a\epsilon ^{-2/3}`$, with $`\kappa =0.765`$ our single fit parameter. For the non-universal constants $`A`$ and $`B`$ we can use, to good approximation, $`A=1`$, $`B=1`$. The mean velocity shift in the non-perturbative regime ($`L>L_c`$) is then expressed as:
$$\delta v=\frac{\beta }{\alpha -1}\sqrt{\langle \delta v^2\rangle _0}\left[1-\left(\frac{L_c}{L}\right)^p\right]+\delta v_0.$$
(14)
For $`L<L_c`$ we use the perturbative result (12) (with $`L_c`$ replaced by $`L`$). As shown in Fig. 3, the agreement with the computer simulations is quite satisfactory, in particular in view of the fact that there is a single fit parameter $`\kappa `$ for all curves.
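For concreteness, the piecewise prediction — the perturbative eq. (12) below $`L_c`$ and eq. (14) above — can be evaluated directly. The sketch below is our own numerical evaluation (the value $`\epsilon =0.05`$ and the sampling points in $`L/L_c`$ are illustrative):

```python
import math

v0, a, eps, kappa = 1.0, 1.0, 0.05, 0.765      # illustrative units and strength
alpha, beta, p = 1.627, 0.647, 0.702
L_c = kappa * a * eps ** (-2.0 / 3.0)
rtpi = math.sqrt(math.pi)

def dv_pert(L):
    """Perturbative mean velocity shift, eq. (12) with L_c -> L."""
    return v0 * eps ** 2 * (L / a) * (rtpi / 2.0) * (1.0 - (2.0 / rtpi) * a / L)

def dv(L):
    """Mean velocity shift: eq. (12) below L_c, eq. (14) above."""
    if L <= L_c:
        return dv_pert(L)
    var0 = v0 ** 2 * eps ** 2 * (a / L_c) * rtpi * (1.0 - rtpi * eps ** 2 * (L_c / a) ** 3)
    return beta / (alpha - 1.0) * math.sqrt(var0) * (1.0 - (L_c / L) ** p) + dv_pert(L_c)

shifts = [dv(x * L_c) for x in (0.5, 1.0, 10.0, 1e4)]
```

The shift grows roughly linearly up to $`L_c`$ and then levels off at a value of order $`v_0\epsilon ^{4/3}`$, reproducing the saturation behaviour compared with the simulations in Fig. 3.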
Conclusion
We have presented a non-perturbative theory of the scale-dependent seismic velocity in heterogeneous media. The saturation of the velocity shift at large length scales, observed in computer simulations, is well described by the hierarchical model — including the $`\epsilon ^{4/3}`$ scaling of the saturation velocity. We have concentrated on the case of two-dimensional propagation (for comparison with the simulations), but the $`\epsilon ^{4/3}`$ scaling holds in three dimensions as well. (The coefficients $`\alpha =1.74`$, $`\beta =1.30`$ are different in 3D.)
Our solution of the seismic problem relies on the mapping onto the problem of directed polymers. This mapping holds in the short-wavelength limit, $`\lambda \ll a^2/L`$. To observe the saturation at $`L\gg L_c`$ thus requires $`\lambda \ll a\epsilon ^{2/3}`$. For $`L\gg a^2/\lambda `$ the velocity shift will decrease because the velocity fluctuations are averaged out over a Fresnel zone. There exists a perturbation theory for the velocity shift that includes the effects of a finite wavelength. It is a challenging problem to see if these effects can be included into the non-perturbative hierarchical model as well.
Acknowledgements. We are indebted to R. Snieder for suggesting this problem to us, and to X. Leyronas for valuable discussions throughout this work. We acknowledge the support of the Dutch Science Foundation NWO/FOM.
## 1. Introduction
The Unconventional Stellar Aspect (USA) Experiment is a low-cost X-ray timing experiment with the dual purpose of timing X-ray binary systems and exploration of applications of X-ray sensor technology. USA was launched on February 23, 1999 on the Advanced Research and Global Observation Satellite (ARGOS). It is a reflight of two proportional counter X-ray detectors that performed excellently on the NASA Spartan-1 mission (Kowalski et al. 1993). The primary targets are bright Galactic X-ray binaries that are used simultaneously for both scientific and applied objectives. X-ray photon event times are measured to high precision using the GPS receiver on ARGOS. USA has the effective area, precise timing ability, and data throughput capability to probe these sources at the timescales of processes near neutron star surfaces or the innermost stable orbits around black holes. A second objective of the experiment is to conduct experiments involving applied uses of X-ray detectors in space and with reliable computing in space. These will not be discussed here but descriptions are available elsewhere (Wood 1993).
Key characteristics of the experiment and mission that facilitate this overall program include (i) a mission concept that allows long observing times on bright X-ray objects, (ii) large-area detectors with high time resolution capability (effective area: 2000 cm<sup>2</sup>; telemetry: 40 kbps, with 128 kbps available for short periods; 2 $`\mu `$s time resolution), (iii) good low energy response (down to 1 keV), and (iv) a high flexibility in data handling. Other special features include absolute time-tagging (to 2 $`\mu `$s) using a GPS receiver.
## 2. Scientific Program
The principal targets for USA are X-ray binaries whose X-ray emitting members are neutron stars, black holes, or white dwarfs. Study of the physical processes in these systems has been among the main thrusts of X-ray astronomy since the founding of the field. Today it remains true that many of the most important results on these systems are found by studying their X-ray variability, and the push to shorter (millisecond) timescales is proving highly fruitful. If the source is bright ($`>`$ milliCrabs) such short timescales are more readily reached with non-imaging instruments having large collecting apertures than with imaging instruments. Physics issues studied in these sources are generally related to the fact that parameters such as magnetic field strength, mass and energy densities, and gravitational fields reach extreme values, hence providing the preferred testing grounds for physical theories. X-ray timing is a cornerstone of relativistic astrophysics.
USA, in turn, is one of the two main resources at the present epoch for X-ray timing experiments, the other being the Proportional Counter Array (PCA) on RXTE. USA has its own special areas of emphasis, one of which is its observing plan. Present plans call for the observation of about 30 primary targets, with each being observed for about 1 month over a nominal mission life of 3 years; selected targets will be observed for shorter periods of time. Sources observed to date (through 31 August 1999) include Cyg X-1 (700 ks on target), Aql X-1 (100 ks), Cen X-3 (65 ks), X1630-472 (60 ks), Cyg X-2 (50 ks), X1636-536 (45 ks), GX 1+4 (40 ks), 1E2259+586 (40 ks), X1820-30 (40 ks), X1630-472 (35 ks), 1E1048.1-5937 (30 ks), and GRS 1915+105 (25 ks). The total time on each source is typically scheduled as a number of $`\sim 1`$ ks observations distributed over weeks or months. Simultaneous observations with other observatories, such as the Compton Gamma Ray Observatory and the Rossi X-ray Timing Explorer, and with ground-based telescopes are also being undertaken.
Figure 1 shows two sample light curves taken with USA. The first is an X-ray burst from the burster X1735-444, and the second is an observation of a flaring state of the Galactic microquasar GRS1915+105. In 1735-44 the instrument is on the source throughout the interval displayed while in GRS 1915+105 the steep rise at the beginning of the plot is the instrument slewing onto the source; the earliest seconds represent the background for this observation.
### 2.1. Low-Mass X-ray Binaries
The special importance of the low mass X-ray binaries (LMXBs) arises from their comparatively weak magnetic fields, which allow the disk to penetrate very close to the star. This gives rise to fast timing effects that can be used to probe the extreme conditions in the neutron star vicinity. Major gains in the understanding of these phenomena have been made since the launch of the Rossi X-ray Timing Explorer (RXTE) in late 1995. High frequency quasiperiodic oscillations (QPOs) and short strings of coherent pulsations during bursts have been used to argue convincingly that effects associated with inner disk edges and the innermost stable orbits predicted by General Relativity are being seen (van der Klis 1998). Another milestone is the establishment of the spinup evolution of neutron stars through the discovery of the first accretion-powered millisecond pulsar (SAX J1808.4-3658).
USA will make further contributions to the study of LMXBs with the application of its unique strengths. In some cases this will mean exploiting the ability to dedicate large blocks of time to a key source, e.g., to refine understanding of SAX J1808.4-3658 or to observe transitions between modes or states. Significant time is being devoted to searches for coherent periods, both on and off bursts. Off burst work is carried out using coherence recovery searches for periods. Observations can also be carried out in various ways to detect or refine orbital periods in LMXBs. Overall, LMXBs are sources that stand to bring major rewards including advances in understanding the role of General Relativity in the dynamics of inner disk regions, but past experience has also shown that these rewards are achieved only through major investments of observing time and analysis, chiefly because of the elusiveness and short timescale of the spin periods.
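Period searches of this kind ultimately reduce to looking for a significant peak in the Fourier power of binned photon arrival times. A minimal sketch of the idea follows (ours, not USA pipeline code; the 401 Hz signal frequency, count rate, and pulsed fraction are illustrative values loosely patterned on SAX J1808.4-3658, and a real coherence-recovery search must also correct for orbital Doppler smearing):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate time-tagged photons carrying a weak coherent pulsation.
f_spin, rate, frac, t_obs = 401.0, 500.0, 0.3, 50.0   # Hz, ct/s, pulsed fraction, s
n = rng.poisson(rate * t_obs)
t = rng.uniform(0.0, t_obs, n)
# Accept-reject thinning imprints a sinusoidal modulation on the arrivals.
keep = rng.uniform(0.0, 1.0, n) < 0.5 * (1.0 + frac * np.sin(2.0 * np.pi * f_spin * t))
t = np.sort(t[keep])

# Bin at 1 ms (Nyquist frequency 500 Hz, above the 401 Hz signal) and FFT.
dt = 0.001
counts, _ = np.histogram(t, bins=np.arange(0.0, t_obs + dt, dt))
power = np.abs(np.fft.rfft(counts - counts.mean())) ** 2
freqs = np.fft.rfftfreq(len(counts), dt)
f_best = freqs[np.argmax(power)]
```

With these parameters the pulsation stands far above the Poisson noise floor, and the recovered frequency agrees with the injected one to within the $`1/t_{obs}`$ Fourier resolution.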
### 2.2. High-Mass X-ray Binaries
USA will also accumulate significant time on a number of high-mass X-ray binaries. Many of these systems have accretion rates that far exceed the Eddington limit locally in the accretion column. This means that radiation pressure has a significant influence on the flow. Recently Jernigan et al. (1999) reported the discovery of Photon Bubble Oscillations (PBOs) in Cen X-3. USA will observe this source and other bright HMXBs to help characterize their high frequency power spectra independently from RXTE. Outstanding puzzles in these systems are the details of angular momentum transfer from the disk to the star (including possible reversals of the sense of disk rotation) and understanding in detail the photohydrodynamics of the accretion column in which the super-Eddington funneled accretion flow is converted to the observed X-ray emission. Bright binary pulsars such as Her X-1, Cen X-3, and Vela X-1 will be observed to gain insight into these issues. Monitoring over both short and long time periods allows the correlation between period, period derivative, and luminosity to be probed, which addresses the angular momentum transfer issue. USA observes at significantly lower energies than the BATSE instrument on CGRO, which has gathered much of the data on this topic in recent years.
### 2.3. Black Hole Candidates
USA will pursue several investigations into the nature of black holes. Chief among these are: (1) the characterization of high frequency ($`\nu >1`$ kHz) variability in Cyg X-1. USA has begun a long term study of Cyg X-1 in which more than 1 Ms of exposure will be accumulated on Cyg X-1 in each of the spectral states it exhibits during the USA three year mission. Using techniques developed for calibrating high frequency systematic effects in HEAO A-1 and RXTE (Chaput et al. 1999), we will constrain the sub-millisecond variability of Cyg X-1. (2) Simultaneous X-ray and infrared observations of the galactic microquasar GRS 1915+105. USA will, for the first time, determine the 1–3 keV behavior of this interesting source. Figure 1 does not begin to convey the range of modulation patterns seen in this object. A new window at lower energies from USA, gathered in simultaneity with other space-based and ground facilities, may contribute to modeling the fluid dynamical processes near the black hole. The model that can account for the wealth of variability effects seen in GRS 1915+105 may bring us closer to understanding how plasma behaves near black holes, including the relativistic effects on orbits.
### 2.4. Other Sources
The Anomalous X-ray Pulsars (AXPs) appear to be a distinct subpopulation whose observed emission is powered by spin-down. Timing these pulsars over a period of years will place strong constraints on their nature (whether they are magnetars or accreting neutron stars) and emission mechanism. Subtle effects of (currently unsuspected) binary companions may show up or else derivatives that provide insight into source dynamics may be measured. There are several good candidates and the result will not necessarily be the same in all instances. Long-term monitoring of AXPs is made feasible by the soft response of USA and the good absolute timing. Rotation-powered (radio) pulsars are also observed by USA to validate the time transfer between USA and RXTE, and to measure the radio to X-ray offset of the pulses.
Cataclysmic variables (CVs) exhibit a wide range of timing phenomena, including QPOs, X-ray transients, and complex light curves. While CVs are typically $`\sim 100`$ times fainter than LMXBs, their dynamical time scales are $`\sim 1000`$ times longer. Moreover the magnetic CVs, which will be USA’s prime targets among CVs, are distinguished by having the largest magnetic moments among known stellar populations, including even magnetars. Curiously, accretion-induced QPOs were predicted historically to result from this flow before they were observed. The QPOs have been seen repeatedly in optical wavelengths but never in X-rays, despite searches. Highly correlated optical and X-ray luminosity variations are predicted in current hydrodynamic models (Wolff, Wood & Imamura 1991).
Finally, USA can exploit its great flexibility to observe targets of opportunity that are deemed important by the science working group. Already, USA has observed Aql X-1, the Rapid Burster, and X1630-472 during outbursts. Of course, the accreting millisecond pulsar SAX J1808.4-3658 would be of particular interest if it becomes active during the USA mission.
## 3. Instrument Description
### 3.1. Proportional Counter X-ray Detectors
The detector (Table 1) consists of two multiwire constant flow proportional counters equipped with a 5.0 $`\mu `$m Mylar window and an additional 1.9 $`\mu `$m thick aluminized Mylar heat shield. The detector is filled with a mixture of 90% argon and 10% methane (P-10) at a pressure of 16.1 psia (at room temperature). The detector interior contains an array of wires which provides two layers of nine 2.8 cm square cells, each containing one anode wire, running the length of the counter. An additional wire runs around the periphery of the array as part of the cosmic ray veto system. The electronics are designed to accept primarily X-ray events, which arise in one cell only. Events registered in two or more wires by a cosmic ray track are vetoed with an efficiency of about 99%.
The high voltage on the anode wires is adjusted continuously to stabilize the gain, using a feed-back loop which monitors the pulse-height distribution of X-ray events in a small separate proportional counter. Two discriminators provide a normalized value independent of the absolute source intensity of the <sup>55</sup>Fe source used in the feed-back counter.
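The stabilization loop described above can be caricatured as proportional feedback: the monitored <sup>55</sup>Fe pulse-height centroid is compared with a set point and the anode voltage nudged to cancel the error. The toy model below is our own illustration — the exponential gain-voltage law, set point, and loop gain are made-up numbers, not USA calibration values:

```python
import math

def centroid(hv):
    """Toy 55Fe pulse-height centroid vs. anode voltage: exponential
    gain law with purely illustrative numbers."""
    return math.exp((hv - 2000.0) / 100.0)

def stabilize(hv, setpoint=100.0, k_p=0.5, steps=60):
    """Proportional feedback: nudge the high voltage until the monitored
    centroid sits at the set point."""
    for _ in range(steps):
        hv += k_p * (setpoint - centroid(hv))
    return hv

hv_final = stabilize(2400.0)   # converges to where centroid(hv) = setpoint
```

The loop gain must be chosen small enough for stability; here the error shrinks by roughly half per update near the operating point.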
The collimators serve to support the window as well as to define the field of view. To place reasonable requirements on the pointing system the collimator was constructed with a field of view of approximately 1.2 $`\times `$ 1.2 degrees and a flat top of approximately 0.05 degrees. Each collimator consists of 8 modules 7.5 cm $`\times `$ 28 cm $`\times `$ 11 cm high filled with copper hexcell formed from 25 $`\mu `$m sheet stock with a 2.5 mm altitude for each hexagon. The sides and ends of each module are formed from 1 mm Cu sheet which provides stiffness across the width of the collimator to support a 98% transmission nickel mesh that in turn supports the Mylar window. The response function of each collimator module was measured with X-rays before the modules were assembled into the collimator frames.
### 3.2. Support Hardware
The ARGOS spacecraft is three-axis-stabilized and nadir-pointed. The X-ray detectors are mounted on a 2-axis gimballed platform to permit inertial pointing at celestial objects. The pointer is configured as an equatorial mount looking aft (away from the velocity vector of the spacecraft). Pointing is accomplished by a yaw rotation to acquire the target followed by a continuous slew in the pitch to track the target as it rotates about the orbital pole.
The primary tasks of the central command and control electronics (CE) are command and data interface to the ARGOS spacecraft MIL-1553 bus, data acquisition from the detector modules, control of the pointer system, and interface with USA’s RH3000 and IDT3081 processors. The command and control processor is a Harris radiation hardened 80C86 microprocessor.
The structural elements of the pointer (Table 2), the support pylons, and the yoke which serves as the inner gimbal form the primary structure of the experiment. Each axis has a drive unit which forms the pivot on one end of the axis and a position encoder unit which supports the opposite end. Actual alignment is measured by rastering through sources in flight.
The detector interface board (DIB) in the CE performs time tagging and data formatting for the X-ray science data as well as formatting detector housekeeping data. The microprocessor used is an Analog Devices ADSP2100. The DIB receives a fast photon arrival signal from each detector which enables the timing to $`1`$ microsecond accuracy. A 1 Hz clock (with corresponding GPS time tag) is received directly from the spacecraft to synchronize the event time tagging clock. Pulse height data for each photon are transmitted from the detector to the DIB upon completion of the analog to digital conversion. There are two standard telemetry modes: event and spectral. Event mode is the “workhorse” telemetry mode for USA; for moderate count rates, it allows the maximal amount of information to be preserved on each photon. In event mode, the arrival time and some energy information are stored for each photon detected. There are two submodes of event mode, providing 32 $`\mu `$s time resolution with 16 pulse height channels in a 12 bit word and 2 $`\mu `$s time resolution with 8 pulse height channels in a 15 bit word, respectively. Data may be output in event mode at either 40 or 128 kbps, providing maximum count rates of 3060 or 9940 events per second for 32 $`\mu `$s time or 2448 or 7952 events per second for 2 $`\mu `$s time. In spectral mode, a full resolution energy spectrum (48 channels) is generated every 10 milliseconds for each detector.
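One consistent reading of the quoted bit budgets (our assumption for illustration — the actual USA word layout is not specified here) is 4 bits of pulse height plus 8 bits of time tag at 32 $`\mu `$s resolution in the 12 bit word, and 3 bits plus 12 bits at 2 $`\mu `$s in the 15 bit word; both time counters would then roll over every 8.192 ms, to be reconciled against the GPS-synchronized 1 Hz clock. A hypothetical packing:

```python
MODES = {  # (word bits, pulse-height bits, time resolution in microseconds)
    "coarse": (12, 4, 32),   # 16 channels, 32 us ticks
    "fine":   (15, 3, 2),    # 8 channels,   2 us ticks
}

def pack(mode, t_us, channel):
    """Pack an event word: time counter in the high bits, channel in the low bits."""
    bits, pha_bits, res = MODES[mode]
    tick = (t_us // res) % (1 << (bits - pha_bits))   # time counter rolls over
    return (tick << pha_bits) | channel

def unpack(mode, word):
    """Recover (arrival time in us modulo the rollover, pulse-height channel)."""
    bits, pha_bits, res = MODES[mode]
    return (word >> pha_bits) * res, word & ((1 << pha_bits) - 1)
```

Under this layout both submodes share the same 8.192 ms rollover cadence, which is convenient for a single ambiguity-resolution scheme on the ground.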
The USA experiment also provides space for two “ride-along” processor boards, the RH3000 and the IDT3081. The RH3000 board is built around a pair of radiation hardened Harris Semiconductor versions of the MIPS R3000, configured as a shadow pair with 2 MB of memory. The IDT3081 board incorporates the commercial-off-the-shelf IDT3081 processor and 2 MB of DRAM without any special error correcting hardware. Both computer boards have access to the downlink science telemetry stream. These processors will be used to conduct experiments in fault-tolerant computing, autonomous spacecraft navigation, and to perform special data analysis functions which are beyond the scope of the normal science telemetry modes, or which require bandwidths greater than 128 kbps.
### 3.3. Instrument Status
The USA instrument has been performing well since activation began on 30 April 1999, but the USA mission has not been without its difficulties. Approximately two weeks after launch the detector heat shields suffered degradation which has imposed additional constraints on USA pointing with respect to the Sun. On 8 June 1999 Detector 2 suffered an event which increased the gas leak rate to a very high level and exhausted the P-10 supply, leaving only Detector 1 to complete the mission and halving the effective area. Two spacecraft performance issues, which are described in more detail below, have also impacted USA operations.
## 4. ARGOS Mission Description
USA exploits the flight opportunity provided by the ARGOS mission under the DoD Space Test Program (STP). STP was established in 1965 as an activity under the executive management of the Air Force Systems Command with the objective of providing spaceflight for DoD research and development experiments which are not authorized to fund their own flight. Both engineering/technology development and scientific payloads have been flown with great frequency under this program. ARGOS is the only Delta-class STP free-flyer mission to be launched in the 1990s.
The 5000 lb ARGOS satellite was launched from Vandenberg AFB at 10:30 UT on 23 February 1999 aboard a Boeing Delta-II rocket. The prime satellite contractor was Boeing who built and tested the satellite at their Seal Beach, CA facility. ARGOS carries a complement of 9 experiments which address such topics as ionospheric remote sensing, space dust, advanced electric propulsion, and high temperature superconductivity.
Spacecraft telemetry is downlinked at 1, 4, or 5 Mbps. Data are stored in a 2.4 Gbit solid state recorder and downlinked during station passes to AFSCN ground stations. The spacecraft is operated in a 3-axis stabilized mode, with the Z-axis of the spacecraft always pointed to nadir. Attitude control is based on a system of gyros and horizon sensors feeding into reaction wheels and CO<sub>2</sub> thrusters. The orbit is nearly circular with an 830 km altitude and a 98.7 degree inclination. It is Sun-synchronous with a beta-angle of 25–45 degrees, i.e., it crosses the equator at approximate local times of 14:00 on the day side and 02:00 on the night side. This nearly polar orbit means that USA encounters a high radiation environment multiple times per orbit as it passes through the Earth’s radiation belts at latitudes above 50 degrees. This forces USA to take data at a lower duty cycle, turning off the detectors in the radiation belts and the South Atlantic Anomaly to prevent detector breakdowns.
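For reference, the quoted 830 km circular orbit corresponds to a period of roughly 101 minutes, so the instrument transits the high-latitude belt regions on every one of its $`\sim `$14 daily orbits. A quick back-of-envelope check from Kepler's third law (ours, using standard constants):

```python
import math

MU_EARTH = 3.986004418e14   # m^3 s^-2, Earth's gravitational parameter
R_EARTH = 6.371e6           # m, mean Earth radius
altitude = 830.0e3          # m, ARGOS orbit altitude

a = R_EARTH + altitude
period = 2.0 * math.pi * math.sqrt(a ** 3 / MU_EARTH)   # orbital period, s
orbits_per_day = 86400.0 / period
```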
### 4.1. Mission Operations and Data Processing
The satellite mission operations are handled by the Air Force SMC/TEO at Kirtland AFB, NM. They are responsible for uplinking commands to and for receiving data from the satellite. Individual experiment command uploads are delivered to TEO via FTP and uploaded during ground contacts. Data downlinked during a pass are recorded at the ground station and mailed to TEO because the AFSCN does not support real-time links of $`>1`$ Mbps. This results in a delay of 7–21 days in getting science data back to the experimenters.
USA operation is largely automatic. Twice daily command uploads contain timed execution commands to slew the instrument, command it to track the source, switch on the high voltage, select the telemetry rate and perform calibrations. These command sequences are generated by a highly automated observation scheduling system which optimizes source selection, manages solid state recorder space, and builds the command uploads.
The USA data processing system is also highly automated. As data appear at Kirtland, files are automatically retrieved via FTP and the first several processing steps are performed. Quicklook data are checked for anomalous conditions and the USA team is alerted by e-mail if problems occur. Subsequently, observations are extracted from the Level 0 archive, converted to FITS, and distributed to the scientific analysis centers, including NRL and SLAC.
A Science Working Group (SWG) has been established to help optimize the scientific potential of USA. The SWG determines scientific priorities for observing targets, subject to certain constraints. Scheduling of targets during the USA mission will be consistent with experiment science objectives, priorities, and mission operations capabilities. Telemetry formats will be selected to support overall objectives. USA has the flexibility to respond quickly to some targets of opportunity with approximately a 1–3 day turnaround after the decision to revise the observing plan. The SWG decides whether to respond to potential targets of opportunity and also identifies instances when coordinated observations with ground-based observers, CGRO, RXTE, or other ARGOS instruments are scientifically advantageous. The USA team is receptive to collaborations to make better use of the data, but the small size of the group does not allow us to operate a conventional guest observer facility.
### 4.2. ARGOS Mission Anomalies and Events
The ARGOS launch and deployment went flawlessly, but since then several problems have surfaced with various subsystems. Generally, the spacecraft has been very robust and has autonomously safed itself when presented with dramatic disturbances, such as a battery exploding on the electric propulsion experiment and during the USA Detector 2 gas leak. Here we will just summarize the issues and describe how they affect the operation of USA.
Shortly after launch, it was discovered that the GPS receiver is unable to stay locked on to the GPS solution and provide good navigation information. This problem was traced to an unexpectedly large input level to the receiver which causes cross correlation errors which disrupt the solution. Generally the receiver will lock on, then oscillate between navigation and acquisition mode for a period of a few minutes to a few hours before losing the solution completely. To recover the time resolution required for many of the USA objectives, new software was uploaded to the satellite to make it safer to initialize the receiver repeatedly. Currently the receiver is initialized 4 times per day. Software is being developed to be able to interpolate times using the onboard clock to recover precise absolute times between periods when the receiver is locked on to a good solution.
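A minimal sketch of the clock-recovery idea described above: fit the onboard clock against GPS time while the receiver holds a good solution, then interpolate absolute times across the gaps. This is illustrative only; the function names and numbers are invented for the example, not taken from the actual flight or ground software.

```python
# Illustrative sketch only (invented names/numbers, not flight software):
# fit gps_time = t0 + rate * tick to pairs logged while the receiver is
# locked, then use the fit to assign absolute times inside lock gaps.
def fit_clock_model(ticks, gps_times):
    n = len(ticks)
    mean_t = sum(ticks) / n
    mean_g = sum(gps_times) / n
    var = sum((t - mean_t) ** 2 for t in ticks)
    cov = sum((t - mean_t) * (g - mean_g) for t, g in zip(ticks, gps_times))
    rate = cov / var              # GPS seconds per onboard clock tick
    t0 = mean_g - rate * mean_t   # clock offset
    return t0, rate

def interpolate_time(t0, rate, tick):
    return t0 + rate * tick

# a 1 MHz onboard clock running fast by 1 part per million
ticks = [0, 1_000_000, 2_000_000, 3_000_000]
gps = [100.0, 101.000001, 102.000002, 103.000003]
t0, rate = fit_clock_model(ticks, gps)
print(interpolate_time(t0, rate, 1_500_000))
```

In practice a higher-order clock model and outlier rejection would be needed, but the linear fit already captures the drift that dominates between lock periods.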
A problem was discovered with the Scanning Horizon Sensors which are used to control the pitch and roll of the satellite. They are more radiation sensitive than expected and experience data dropouts or return incorrect data during most passages through the South Atlantic Anomaly (SAA). This causes the spacecraft to respond and produces attitude disturbances in the satellite when it is in the SAA. This does not affect USA because USA never operates in the SAA.
At the time of the software upload to work around the GPS receiver problem, a problem with the offset pointing of USA from the satellite appeared: the navigation message sent to USA every second no longer represents the true attitude of the spacecraft. Numerous scanning observations using USA indicate that the satellite is out of alignment in the roll direction by about 1°. The cause of this is currently unknown, and work is ongoing to troubleshoot this problem and design a workaround.
## 1 Introduction
Recently there has been much discussion of the measurement of the pion structure function $`F_{2\pi }(x,Q^2)`$ at HERA using deep inelastic scattering (DIS) of high-energy leptons on the virtual pion of the $`\pi N`$ Fock state of the proton and selecting semi-inclusive events
$$ep\to e^{\prime }nX$$
(1)
tagged by a leading neutron ; for earlier works see . The underlying DIS
$$\gamma (q)+\pi (k)\to X$$
(2)
is considered in the high-energy regime of very large Regge parameter
$$\frac{1}{x}=\frac{W^2+Q^2}{Q^2}=\frac{1}{x_{Bj}}\gg 1,$$
(3)
where $`W^2=(q+k)^2`$ is the $`\gamma ^{}\pi `$ c.m.s. collision energy squared, $`Q^2=-q^2`$ is the virtuality of the photon and $`x`$ is the conventional Bjorken variable. For $`Q^2`$ much larger than the pion virtuality ($`K^2=-k^2`$) the pion can be considered as the non-perturbative on-shell parton in the proton. For the light-cone derivation of the flux of pions in the $`\pi N`$ Fock state see , a similar flux is found in the recent Regge theory analysis . This makes it possible to measure $`F_{2\pi }(x,Q^2)`$ down to $`x\sim 10^{-4}`$, way beyond the reach of $`\pi N`$ Drell-Yan experiments (hereafter in discussion of DIS we do not distinguish between $`x`$ and $`x_{Bj}`$). Very recently the first experimental data on leading neutron production at HERA and their interpretation in terms of $`F_{2\pi }`$ have been reported .
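As a quick numerical illustration of eq. (3) (our sketch, not part of the original analysis): inverting the Regge parameter gives $`x=Q^2/(W^2+Q^2)`$, so HERA-like semi-inclusive kinematics indeed reach $`x\sim 10^{-4}`$ while fixed-target Drell-Yan stays two to three orders of magnitude higher.

```python
# Numerical check of eq. (3): 1/x = (W^2 + Q^2)/Q^2, i.e. x = Q^2/(W^2 + Q^2).
def bjorken_x(W, Q2):
    """Bjorken x for photon virtuality Q2 (GeV^2) and c.m.s. energy W (GeV)."""
    return Q2 / (W**2 + Q2)

print(bjorken_x(200.0, 4.0))   # HERA-like: about 1e-4
print(bjorken_x(15.0, 20.0))   # fixed-target Drell-Yan scale: about 8e-2
```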
Besides giving access to the $`F_{2\pi }`$, the semi-inclusive reaction (1) affects via unitarity the flavor content of the proton sea, for the quantitative interpretation of the recent E866 data see , for the review of earlier works see .
In the literature there exist several parameterizations of parton distributions in the pion based on the $`\pi N`$ Drell-Yan data taken at relatively large $`x\gtrsim 5\cdot 10^{-2}`$. Extrapolations of these parameterizations to $`x\ll 1`$ diverge quite strongly, which is not surprising if one recalls that the conventional DGLAP phenomenology lacks predictive power at small $`x`$, and the small-$`x`$ extrapolations of the pre-HERA fits to the proton structure functions generally missed the small-$`x`$ HERA results .
In this communication we address the issue of the small-$`x`$ behavior of the pion structure function in the BFKL-Regge framework. As noticed by Fadin, Kuraev and Lipatov and discussed in detail by Lipatov , the incorporation of asymptotic freedom, i.e. the running QCD coupling, into the BFKL equation makes the QCD pomeron a series of Regge poles. The contribution of the each pole to scattering amplitudes satisfies the standard Regge-factorization . Recently we reformulated the BFKL-Regge expansion in the color dipole (CD) basis which we apply here to calculation of the small-$`x`$ behavior of the pion structure function. As a by-product of our analysis we present evaluations of the hard scattering contribution to the observed rise of the pion-nucleon, nucleon-nucleon and real photo-absorption $`\gamma N`$ and $`\gamma \pi `$ total cross sections.
## 2 The color dipole BFKL-Regge factorization
In the color dipole basis the beam-target interaction amplitude is expanded in terms of amplitudes of scattering of color dipoles $`𝐫`$ and $`𝐫^{}`$ in both the beam ($`b`$) and target ($`t`$) particles (here $`𝐫`$ and $`𝐫^{}`$ are the two-dimensional vectors in the impact parameter plane). Invoking the optical theorem for forward scattering as a fundamental quantity one can use the dipole-dipole cross section $`\sigma (x,𝐫,𝐫^{})`$. The $`\mathrm{log}\frac{1}{x}`$ BFKL evolution in the color dipole basis (CD BFKL) has been studied in detail in . Once $`\sigma (x,𝐫,𝐫^{})`$ is known one can calculate the total cross section of $`bt`$ scattering $`\sigma ^{bt}(x)`$ making use of the color dipole factorization
$$\sigma ^{bt}(x)=\int 𝑑zd^2𝐫\int 𝑑z^{}d^2𝐫^{}|\mathrm{\Psi }_b(z,𝐫)|^2|\mathrm{\Psi }_t(z^{},𝐫^{})|^2\sigma (x,𝐫,𝐫^{}),$$
(4)
where $`|\mathrm{\Psi }_b(z,𝐫)|^2`$ and $`|\mathrm{\Psi }_t(z^{},𝐫^{})|^2`$ are probabilities to find a color dipole, $`𝐫`$ and $`𝐫^{}`$ in the beam and target, respectively. Here we emphasize that the dipole-dipole cross section is beam-target symmetric and universal for all beams and targets, all the beam and target dependence is contained in the color dipole distributions $`|\mathrm{\Psi }_b(z,𝐫)|^2`$ and $`|\mathrm{\Psi }_t(z,𝐫)|^2`$.
We start with the minor technical point that $`\sigma (x,𝐫,𝐫^{})`$ can depend on the orientation of the target and beam dipoles and can be expanded into the Fourier series
$$\sigma (x,𝐫,𝐫^{})=\sum _{n=0}^{\infty }\sigma _n(x,r,r^{})\mathrm{exp}(in\phi )$$
(5)
where $`\phi `$ is an azimuthal angle between $`𝐫`$ and $`𝐫^{}`$. By the nature of calculation of the beam-target total cross section $`\sigma ^{bt}(x)`$ only the term $`n=0`$ contributes in (4).
The CD BFKL-Regge factorization uniquely prescribes that the $`1/x`$ dependence of the dipole-dipole total cross section $`\sigma (x,r,r^{})`$ is of the form
$$\sigma (x,r,r^{})=\sum _mC_m\sigma _m(r)\sigma _m(r^{})\left(\frac{x_0}{x}\right)^{\mathrm{\Delta }_m}.$$
(6)
Here the dipole cross section $`\sigma _m(r)`$ is an eigen-function of the CD BFKL equation
$$\frac{\partial \sigma _m(x,r)}{\partial \mathrm{log}(1/x)}=𝒦\sigma _m(x,r)=\mathrm{\Delta }_m\sigma _m(x,r),$$
(7)
with eigen value (intercept) $`\mathrm{\Delta }_m`$ and $`\sigma _m(x,r)`$ being of the Regge form
$$\sigma _m(x,r)=\sigma _m(r)\left(\frac{x_0}{x}\right)^{\mathrm{\Delta }_m}.$$
(8)
The running strong coupling exacerbates the well known infrared sensitivity of the CD BFKL equation and infrared regularization is called upon: infrared freezing of $`\alpha _S`$ and finite propagation radius $`R_c`$ of perturbative gluons were consistently used in our color dipole approach to BFKL equation since 1994 . The past years the both concepts have become widely accepted: for a review of recent works on freezing $`\alpha _S`$ see , our choice $`R_c=0.27\mathrm{fm}`$ has been confirmed by the recent determination of $`R_c`$ from the lattice QCD data on the field strength correlators .
## 3 Summary on BFKL eigen-functions in the CD basis
Here we recapitulate the principal findings on the eigen-functions $`\sigma _m(r)`$ and eigenvalues $`\mathrm{\Delta }_m`$ of the CD BFKL equation . There is a useful similarity to solutions of the Schrödinger equation, in which the intercept plays the role of a binding energy. The leading eigen-function $`\sigma _0(r)`$ for the ground state with the largest binding energy $`\mathrm{\Delta }_0\equiv \mathrm{\Delta }_{𝐈𝐏}`$ is node free. The eigen-function $`\sigma _m(r)`$ for an excited state has $`m`$ radial nodes. With our infrared regulator the intercept of the leading pole trajectory is found to be $`\mathrm{\Delta }_{𝐈𝐏}=0.4`$ . The intercepts $`\mathrm{\Delta }_m`$ follow closely the law $`\mathrm{\Delta }_m=\mathrm{\Delta }_0/(m+1)`$ suggested first by Lipatov from the quasi-classical approximation to the running BFKL equation in a related basis. The sub-leading eigen-functions $`\sigma _m(r)`$ are also similar to Lipatov’s quasi-classical eigen-functions for $`m\gg 1`$. For our specific choice of the infrared regulator the node of $`\sigma _1(r)`$ is located at $`r=r_1\simeq 0.05`$–0.06 fm; for larger $`m`$ the first nodes move to a somewhat larger $`r`$ and accumulate at $`r\simeq 0.1`$ fm; for a more detailed description of the nodal structure of $`\sigma _m(r)`$ see . Here we only emphasize that for solutions with $`m\ge 4`$ the higher nodes are located at a very small $`r`$, way beyond the resolution scale $`1/\sqrt{Q^2}`$ of foreseeable DIS experiments. Because for these higher solutions all intercepts satisfy $`\mathrm{\Delta }_m\ll 1`$, in the practical evaluation of $`\sigma ^{bt}`$ we can truncate the expansion (6) at $`m=3`$, lumping into the $`m=3`$ term the contributions of all singularities with $`m\ge 3`$. Such a truncation is justified a posteriori if the contribution from $`m\ge 3`$ turns out to be a small correction, which will indeed be the case at very small $`x`$.
The practical calculation of $`\sigma (x,r,r^{})`$ by running the CD BFKL evolution requires the boundary condition $`\sigma (x_0,r,r^{})`$ at certain $`x_0\ll 1`$. The expansion coefficients $`C_m`$ in eq.(6) are fully determined by the boundary condition $`\sigma (x_0,r,r^{})`$ and by the choice of the normalization of eigen-functions $`\sigma _m(r)`$. To this end recall that the CD BFKL evolution sums the leading $`\mathrm{log}(1/x)`$ diagrams for production of $`s`$-channel gluons via exchange of $`t`$-channel perturbative gluons. It is tempting then, although not compulsory for any fundamental reasons, to take for boundary condition at $`x=x_0`$ the Born approximation, i.e. evaluate dipole-dipole scattering via the two-gluon exchange. This leaves the starting point $`x_0`$ the sole parameter. We follow the choice $`x_0=0.03`$ made in . The very ambitious program of description of $`F_2^p(x,Q^2)`$ starting from the, perhaps excessively restrictive, but appealingly natural, two-gluon approximation has been launched by us in and met with remarkable phenomenological success .
The exchange by perturbative gluons is a dominant mechanism for small dipoles $`r\lesssim R_c`$. In Ref. interaction of large dipoles has been modeled by the non-perturbative, soft mechanism which we approximate here by a factorizable soft pomeron with intercept $`\alpha _{\mathrm{soft}}(0)1=\mathrm{\Delta }_{\mathrm{soft}}=0`$, i.e., flat vs. $`x`$ at small $`x`$. Then the extra term $`C_{\mathrm{soft}}\sigma _{\mathrm{soft}}(r)\sigma _{\mathrm{soft}}(r^{})`$ must be added in the r.h.s. of expansion (6). The exchange by two non-perturbative gluons has been behind the parameterization of $`\sigma _{\mathrm{soft}}(r)`$ suggested in and used later on in and here, see also Appendix. More recently several related models for $`\sigma _{\mathrm{soft}}(r)`$ have appeared in the literature, see for instance models for dipole-dipole scattering via polarization of non-perturbative QCD vacuum and the model of soft-hard two-component pomeron .
## 4 CD BFKL-Regge expansion for structure function
Now we recall briefly the formalism for calculation of the target (t) structure function ($`t=p,\pi ,\gamma ^{},\mathrm{}`$). It is convenient to introduce the eigen structure functions $`(m=\mathrm{soft},0,1,2,\mathrm{})`$
$$f_m(Q^2)=\frac{Q^2}{4\pi ^2\alpha _{em}}\sigma _m^\gamma ^{}(Q^2),$$
(9)
where
$$\sigma _m^\gamma ^{}(Q^2)=\langle \gamma _T^{}|\sigma _m(r)|\gamma _T^{}\rangle +\langle \gamma _L^{}|\sigma _m(r)|\gamma _L^{}\rangle .$$
(10)
Then the virtual $`\gamma ^{}t`$ photo-absorption cross section and the target structure function $`F_{2t}(x,Q^2)`$ take the form ($`m=\mathrm{soft},0,1,2,\mathrm{}`$)
$`\sigma ^{\gamma ^{}t}(x,Q^2)={\displaystyle \sum _m}C_m\sigma _m^\gamma ^{}(Q^2)\sigma _m^t\left({\displaystyle \frac{x_0}{x}}\right)^{\mathrm{\Delta }_m}+\sigma _{\mathrm{val}}^{\gamma ^{}t}(x,Q^2)`$ (11)
$`={\displaystyle \sum _m}A_m^t\sigma _m^\gamma ^{}(Q^2)\left({\displaystyle \frac{x_0}{x}}\right)^{\mathrm{\Delta }_m}+\sigma _{\mathrm{val}}^{\gamma ^{}t}(x,Q^2),`$
$$F_{2t}(x,Q^2)=\sum _mA_m^tf_m(Q^2)\left(\frac{x_0}{x}\right)^{\mathrm{\Delta }_m}+F_{2t}^{\mathrm{val}}(x,Q^2),$$
(12)
where
$$\sigma _m^t=\langle t|\sigma _m(r)|t\rangle =\int 𝑑zd^2𝐫|\mathrm{\Psi }_t(z,r)|^2\sigma _m(r).$$
(13)
The color dipole distributions in the transverse (T) and longitudinal (L) photon of virtuality $`Q^2`$ derived in read
$$|\mathrm{\Psi }_T(z,r)|^2=\frac{6\alpha _{em}}{(2\pi )^2}\sum _{f=1}^{N_f}e_f^2\{[z^2+(1-z)^2]\epsilon ^2K_1(\epsilon r)^2+m_f^2K_0(\epsilon r)^2\},$$
(14)
$$|\mathrm{\Psi }_L(z,r)|^2=\frac{6\alpha _{em}}{(2\pi )^2}\sum _{f=1}^{N_f}4e_f^2Q^2z^2(1-z)^2K_0(\epsilon r)^2,$$
(15)
where
$$\epsilon ^2=z(1-z)Q^2+m_f^2.$$
(16)
In Eqs. (14)-(16) $`K_0`$ and $`K_1`$ are the modified Bessel functions, $`e_f`$ is the quark charge, $`m_f`$ is the quark mass, $`\alpha _{em}`$ is the fine structure constant and $`z`$ is the Sudakov variable, i.e. the fraction of photon’s light-cone momentum carried by one of the quarks of the pair ($`0<z<1`$). The functional form of $`f_m(Q^2)`$ convenient in applications was presented in (see also Appendix). For the practical phenomenology at moderately small $`x`$ we include a contribution from DIS on valence quarks in the target $`F_{2\pi }^{\mathrm{val}}(x,Q^2)`$ which is customarily associated with the non-vacuum reggeon exchange.
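The photon wave functions (14)-(16) are straightforward to evaluate numerically. The sketch below (ours, for illustration) treats a single quark flavor and computes the Bessel functions from their integral representation so that it is self-contained; $`r`$ is in GeV<sup>-1</sup>, $`Q^2`$ and $`m_f^2`$ in GeV<sup>2</sup>, and the flavor charge and mass used are illustrative inputs rather than a fit.

```python
import math

def bessel_k(nu, x, steps=2000, tmax=12.0):
    # K_nu(x) = int_0^inf exp(-x cosh t) cosh(nu t) dt, trapezoid rule;
    # adequate for the x >~ 1e-3 arguments used here
    h = tmax / steps
    end = math.exp(-x * math.cosh(tmax)) * math.cosh(nu * tmax)
    total = 0.5 * (math.exp(-x) + end)
    for i in range(1, steps):
        t = i * h
        total += math.exp(-x * math.cosh(t)) * math.cosh(nu * t)
    return total * h

def psi_T_sq(z, r, Q2, ef=2.0 / 3.0, mf=0.15, alpha_em=1.0 / 137.036):
    """|Psi_T(z, r)|^2 of eqs. (14), (16) for a single flavor of charge ef.

    One illustrative flavor only, not the full sum over N_f flavors."""
    eps = math.sqrt(z * (1.0 - z) * Q2 + mf**2)   # eq. (16)
    k0 = bessel_k(0, eps * r)
    k1 = bessel_k(1, eps * r)
    return (6.0 * alpha_em / (2.0 * math.pi)**2) * ef**2 * (
        (z**2 + (1.0 - z)**2) * eps**2 * k1**2 + mf**2 * k0**2)

print(psi_T_sq(0.5, 1.0, 10.0))
```

The $`K_1(\epsilon r)^2`$ term makes the density grow towards small $`r`$, which is the pQCD singularity of the photon wave function exploited in Section 6.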
In the evaluation of the proton structure function $`F_{2p}(x,Q^2)`$ we use the symmetric oscillator wave function of the $`3`$-quark proton. We recall that in this approximation the proton looks like $`3/2`$ color dipoles spanned between quark pairs. The distribution of sizes of dipoles $`𝐫`$ spanned between quark pairs in the proton reads
$$|\mathrm{\Psi }_p(r)|^2=\frac{1}{2\pi r_p^2}\mathrm{exp}\left(-\frac{r^2}{2r_p^2}\right)$$
(17)
where $`r_p^2=0.658\mathrm{fm}^2`$ as suggested by the standard dipole form factor of the proton. In we introduced normalization of $`\sigma _m(r)`$ such that for the proton target $`(m=\mathrm{soft},0,1,2,\mathrm{})`$
$$A_m^p=C_m\sigma _m^p=1.$$
(18)
Within this convention we have the proton expectation values $`\sigma _m^p`$ and parameters $`C_m`$ of the truncated, $`m=\mathrm{soft},0,1,2,3`$, dipole-dipole expansion cited in the Table 1.
We recall that because of the diffusion in color dipole space, exchange by perturbative gluons contributes also to interaction of large dipoles $`r>R_c`$ . However at moderately large Regge parameter this hard interaction driven effect is still small. For this reason in what follows we refer to terms $`m=0,1,2,3`$ as hard contribution as opposed to the genuine soft interaction.
Table 1. CD BFKL-Regge expansion parameters.
| $`m`$ | $`\mathrm{\Delta }_m`$ | $`\sigma _m^p,\mathrm{mb}`$ | $`\sigma _m^\pi ,\mathrm{mb}`$ | $`C_m,\mathrm{mb}^1`$ | $`A_m^\pi `$ | $`A_m^p`$ | $`\sigma _m^\gamma ^{}(0),\mu \mathrm{b}`$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 0.40 | 1.243 | 0.822 | 0.804 | 0.661 | 1. | 6.767 |
| 1 | 0.220 | 0.462 | 0.303 | 2.166 | 0.656 | 1. | 1.885 |
| 2 | 0.148 | 0.374 | 0.244 | 2.674 | 0.653 | 1. | 1.320 |
| 3 | 0.111 | 0.993 | 0.647 | 1.007 | 0.651 | 1. | 3.186 |
| soft | 0. | 31.19 | 18.91 | 0.0321 | 0.606 | 1. | 79.81 |
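Within the normalization (18) the Table 1 columns are tied together: $`C_m=1/\sigma _m^p`$ and $`A_m^\pi =\sigma _m^\pi /\sigma _m^p`$. A short script (our own sanity check, not part of the paper) verifies this internal consistency at the sub-percent level:

```python
# Internal consistency of Table 1 under the normalization (18):
# C_m * sigma_m^p = 1 and A_m^pi = sigma_m^pi / sigma_m^p.
TABLE1 = {  # m: (Delta_m, sigma_m^p [mb], sigma_m^pi [mb], C_m [1/mb], A_m^pi)
    "0":    (0.40, 1.243, 0.822, 0.804, 0.661),
    "1":    (0.220, 0.462, 0.303, 2.166, 0.656),
    "2":    (0.148, 0.374, 0.244, 2.674, 0.653),
    "3":    (0.111, 0.993, 0.647, 1.007, 0.651),
    "soft": (0.0, 31.19, 18.91, 0.0321, 0.606),
}

for m, (_, sp, spi, cm, am_pi) in TABLE1.items():
    assert abs(1.0 / sp - cm) / cm < 0.01        # eq. (18): C_m sigma_m^p = 1
    assert abs(spi / sp - am_pi) / am_pi < 0.01  # A_m^pi = sigma_m^pi / sigma_m^p
print("Table 1 internally consistent to <1%")
```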
## 5 CD BFKL-Regge predictions for $`F_{2\pi }(x,Q^2)`$
The extension to small-$`x`$ DIS off pions is quite straightforward. In the normalization (18)
$$A_m^\pi =\frac{\sigma _m^\pi }{\sigma _m^p}.$$
(19)
We evaluate $`\sigma _m^\pi =\langle \pi |\sigma _m(r)|\pi \rangle `$ with the oscillator approximation for the $`q\overline{q}`$ wave function of the pion
$$|\mathrm{\Psi }_\pi (r)|^2=\frac{3}{8\pi r_\pi ^2}\mathrm{exp}\left(-\frac{3r^2}{8r_\pi ^2}\right),$$
(20)
where the charge radius of the pion suggests $`r_\pi ^2=0.433\mathrm{fm}^2`$. Then the calculation of $`\sigma _m^\pi `$ is parameter-free, the results for $`\sigma _m^\pi `$ and $`A_m^\pi `$ are cited in Table 1.
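As a sanity check (ours, not the paper's), the oscillator density (20) integrates to unity over the transverse plane, and its mean squared dipole size comes out as $`8r_\pi ^2/3`$; both follow from a direct numerical integration:

```python
import math

R_PI2 = 0.433   # fm^2, r_pi^2 from the pion charge radius (text after eq. (20))

def psi_pi_sq(r):
    # |Psi_pi(r)|^2 of eq. (20), r in fm
    return 3.0 / (8.0 * math.pi * R_PI2) * math.exp(-3.0 * r**2 / (8.0 * R_PI2))

def radial_integral(f, rmax=8.0, n=40000):
    # int d^2r f(r) = int_0^inf 2 pi r f(r) dr, simple Riemann sum
    h = rmax / n
    return sum(2.0 * math.pi * (i * h) * f(i * h) * h for i in range(1, n + 1))

norm = radial_integral(psi_pi_sq)                          # unit normalization
mean_r2 = radial_integral(lambda r: r**2 * psi_pi_sq(r))   # = 8 r_pi^2 / 3
print(norm, mean_r2)
```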
The minor change is that in a scattering of color dipole on the pion the effective dipole-dipole collision energy is $`3/2`$ of that in the scattering of color dipole on the three-quark nucleon which is our reference process at the same total c.m.s. energy $`W`$. Then the further evaluation of $`\sigma ^{\gamma ^{}\pi }(x,Q^2)`$ and of the pion structure function
$$F_{2\pi }(x,Q^2)=\sum _{m=0}^{3}A_m^\pi f_m(Q^2)\left(\frac{3}{2}\frac{x_0}{x}\right)^{\mathrm{\Delta }_m}+A_{\mathrm{soft}}^\pi f_{\mathrm{soft}}(Q^2)+F_{2\pi }^{\mathrm{val}}(x,Q^2)$$
(21)
shown by the solid curve in fig. 1 does not involve any adjustable parameters. The valence component of the pion structure function (the dot-dashed curve) is quite substantial at $`x\gtrsim 10^{-2}`$, but it is reasonably constrained by the $`\pi N`$ Drell-Yan data and can be regarded as weakly model dependent. In our calculations we employ the recent parameterization . We find a good agreement with the recent H1 determination of $`F_{2\pi }(x,Q^2)`$ . We do not see any need to modify the above specified natural, perturbative two-gluon exchange, boundary condition to CD BFKL evolution.
Although the agreement with the experimental data is good, testing the complete CD BFKL-Regge expansion in all its complexity is not a trivial task. As we mentioned in the Introduction, small-$`x`$ extrapolations of existing DGLAP fits to the $`\pi N`$ Drell-Yan data diverge quite strongly, but the proper fine tuning of the input can produce the DGLAP fits which in a limited range of moderately small $`x`$ will be very similar to our results. To this end, several aspects of the predictive power of CD BFKL approach are noteworthy. First, there is an obvious prediction that the leading hard pole contribution to single-pomeron exchange will dominate at sufficiently small $`x`$ for all $`Q^2`$, but it is not easy to check it within the reach of the HERA experiments and better understanding of the unitarity corrections to single-pomeron exchange is needed at extremely small $`x`$ beyond the HERA range. Second, as noticed in because of the short propagation radius for perturbative gluons, $`R_c^2r_p^2,r_\pi ^2`$, in our CD BFKL approach we have an approximate additive quark counting for hard components, $`m=0,1,2,3`$, i.e.,
$$\frac{\sigma _m^p}{\sigma _m^\pi }\approx \frac{3}{2},$$
(22)
see entries in Table 1. Third, according to the conventional estimates for the proton and pion radii based on the experimental data on charge radii, the dipoles spanned between quark pairs in the proton, and the quark and antiquark in the pion, have approximately similar sizes, cf. (17) and (20). Consequently, for a reason quite different from that for the hard BFKL exchange, an approximate additive quark counting holds also for the soft-pomeron exchange, see Table 1. A good approximation to the CD BFKL-Regge factorization prediction is
$$F_{2\pi }(x,Q^2)\approx \frac{2}{3}F_{2p}(\frac{2}{3}x,Q^2).$$
(23)
This is an arguably natural, but far from trivial, relationship. Given that the origin of additive quark counting for hard and soft components is so different, it is not surprising that (23) has not necessarily been borne out by the diversity of the pre- and post-HERA DGLAP fits to the pion and proton structure functions. Fourth, the finding of a substantial soft contribution, shown by the short-dashed line, must not surprise anyone in view of the familiar strong sensitivity of the results of DGLAP evolution to the input structure function at a semi-hard starting point. Fifth, the CD BFKL approach predicts uniquely that the sub-leading eigen structure functions $`f_{m\ge 1}(Q^2)`$ have their first node at $`Q^2\simeq 20`$–60 GeV<sup>2</sup>, see the Appendix, and we would like to comment on this node effect in more detail.
The point is that in the vicinity of the node of sub-leading contributions the pion structure function is well reproduced by the Leading Hard$`+`$ Soft$`+`$Valence Approximation (LHSVA)
$$F_{2\pi }^{\mathrm{LHSVA}}(x,Q^2)\approx \frac{2}{3}\left[f_0(Q^2)\left(\frac{3}{2}\frac{x_0}{x}\right)^{\mathrm{\Delta }_0}+f_{\mathrm{soft}}(Q^2)\right]+F_{2\pi }^{\mathrm{val}}(x,Q^2),$$
(24)
which gives a unique handle on the intercept $`\alpha _{𝐈𝐏}(0)=1+\mathrm{\Delta }_0`$ of the leading hard BFKL pole. This point is illustrated in fig. 1, where the dotted curve represents the LHSVA. A comparison with the solid curve for the complete BFKL-Regge expansion shows that, as a matter of fact, the contribution from sub-leading BFKL-Regge poles is marginal at all values of $`Q^2`$ of practical interest. In order to delineate the impact of the valence component we show by the long-dashed curve the sum of the soft and leading hard pole contributions without the valence component. Even such a crude two-pole model does a good job at $`x\lesssim (2\text{–}5)\cdot 10^{-3}`$, although at larger $`x`$ the impact of the valence component on the $`x`$-dependence of the pion structure function is substantial. The accuracy of the LHSVA improves rapidly at larger $`Q^2`$, see the boxes for $`Q^2=7.5`$, 13.3, 28.6 and 100 GeV<sup>2</sup> in fig. 1. As a manifestation of the nodal structure of $`f_{m\ge 1}(Q^2)`$, the LHSVA slightly overshoots the result of the complete BFKL-Regge expansion at $`Q^2=28.6`$ and $`100\mathrm{GeV}^2`$, while it slightly undershoots it at $`Q^2\lesssim 13.3\mathrm{GeV}^2`$.
As eq. (23) suggests, this discussion of the LHSVA is fully applicable to the proton structure function as well. This point is demonstrated by the decomposition into different components of our earlier CD BFKL-Regge predictions for the proton structure function shown in Fig. 2. Because the LHSVA formula (24) is sufficiently simple and the soft contribution is small for $`Q^2\simeq 10`$–100 GeV<sup>2</sup>, a reliable determination of $`\alpha _{𝐈𝐏}(0)`$ for the rightmost BFKL singularity is feasible even from the pion data. An independent determination of $`\alpha _{𝐈𝐏}(0)`$ and, consequently, a test of the CD BFKL approach is possible from the charm structure function of the proton: as we argued elsewhere , it receives negligible soft and sub-leading contributions and is entirely dominated by the contribution from the rightmost BFKL singularity (for a related recent discussion of the charm structure function see also ).
## 6 Extending CD BFKL-Regge expansion to real photons and hadrons
Finally, with certain reservations on absorption corrections we can extend the BFKL-Regge factorization from DIS to real photo-production and even hadron-hadron scattering. Take for instance pion-nucleon scattering. There is always a contribution from small-size dipoles in the pion to the color dipole factorization formula (6). For example, a probability $`w_\pi (r<r_0)`$ to find dipoles of size $`r\text{ }<r_0`$ can be estimated as
$$w_\pi (r<r_0)\approx \frac{3}{8}\frac{r_0^2}{r_\pi ^2},$$
(25)
which gives quite a substantial fraction of the pion,
$$w_\pi (r<0.2\mathrm{fm})\approx 3\cdot 10^{-2}$$
(26)
the interaction of which with the target nucleon proceeds in the legitimate hard regime typical of DIS. The corresponding contribution to $`\sigma _{\mathrm{tot}}^{\pi N}`$ must exhibit the same rapid rise with energy as the proton structure function. Furthermore, in the BFKL approach there is always a diffusion in the dipole size by which there is a feedback from hard region to interaction of large dipoles and vice versa. How substantial is such a hard contribution to the hadronic and real photon total cross sections?
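The arithmetic behind eqs. (25)-(26) is immediate: with $`r_0=0.2`$ fm and $`r_\pi ^2=0.433`$ fm<sup>2</sup> quoted after eq. (20), the estimate indeed gives a few percent:

```python
# Arithmetic check of eqs. (25)-(26): w_pi(r < r0) ~ (3/8) r0^2 / r_pi^2.
def w_pi(r0, r_pi2=0.433):   # r0 in fm, r_pi2 in fm^2
    return 3.0 / 8.0 * r0**2 / r_pi2

print(w_pi(0.2))   # about 0.035, i.e. ~3e-2 as in eq. (26)
```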
The explicit realization of this idea within the CD BFKL-Regge factorization is embodied in expansion for the vacuum exchange contributions to the total cross sections
$$\sigma ^{pp}=\sum _mA_m^p\sigma _m^p\left(\frac{2}{3}\frac{x_0}{x}\right)^{\mathrm{\Delta }_m},\sigma ^{\gamma p}=\sum _mA_m^p\sigma _m^\gamma ^{}(0)\left(\frac{x_0}{x}\right)^{\mathrm{\Delta }_m},$$
(27)
$$\sigma ^{\pi p}=\sum _mA_m^\pi \sigma _m^p\left(\frac{x_0}{x}\right)^{\mathrm{\Delta }_m},\sigma ^{\gamma \pi }=\sum _mA_m^\pi \sigma _m^\gamma ^{}(0)\left(\frac{3}{2}\frac{x_0}{x}\right)^{\mathrm{\Delta }_m},$$
(28)
where $`m=\mathrm{soft},0,1,2,3`$. In the real photo-production limit the light quark masses $`m_{u,d}=0.135\mathrm{GeV},m_s=0.285\mathrm{GeV}`$ serve to define the transverse size of the hadronic component of the photon, which it is reasonable to expect to be close to the transverse size of vector mesons and/or pions. The real photo-absorption cross sections $`\sigma _m^\gamma ^{}(0)`$ and $`\sigma _{\mathrm{soft}}^\gamma ^{}(0)`$ as given by eqs.(9,32) extended to $`Q^2=0`$ are cited in Table 1. Regarding the similarity of the transverse size of the real photon and the $`\rho /\pi `$-meson, notice that the ratio $`\sigma _{\mathrm{soft}}^{\gamma ^{}p}/\sigma _{\mathrm{soft}}^{\pi p}\approx 1/230`$ is very close to the standard vector meson dominance estimate .
In eqs. (27), (28) the plausible choice of the Regge parameter is $`1/x=W^2/m_\rho ^2`$; in the $`NN`$ and $`\gamma \pi `$ scattering we introduce the aforementioned energy rescaling factors $`2/3`$ and $`3/2`$, respectively. In our approach $`\mathrm{\Delta }_{\mathrm{soft}}=0`$ and $`\sigma _{\mathrm{soft}}=\mathrm{const}(W^2)`$, so that the intrusion of the hard regime into soft scattering is the sole source of the rise of the total cross sections. The CD BFKL-Regge factorization based evaluation of the vacuum exchange contribution to $`pp,\pi p,\gamma p`$ and $`\gamma \pi `$ total cross sections is presented in Fig. 3. At low energies the vacuum exchange contribution underestimates the observed total cross section, which can evidently be attributed to the non-vacuum Regge exchanges not considered here.
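Expansion (28) can be evaluated directly from the Table 1 entries. The following sketch is our own illustration (the published curves of Fig. 3 come from the full analysis): it sums the five poles for the $`\pi p`$ cross section with $`1/x=W^2/m_\rho ^2`$ and $`x_0=0.03`$, and shows the soft plateau of about 18.9 mb topped by the rising hard terms.

```python
# Sketch: vacuum-exchange contribution to sigma_tot(pi p), eq. (28),
# summed over the five CD BFKL-Regge poles of Table 1.
M_RHO2 = 0.77**2   # GeV^2, rho-meson mass squared
X0 = 0.03          # starting point of the BFKL evolution

# (Delta_m, A_m^pi, sigma_m^p in mb) for m = 0, 1, 2, 3, soft
POLES = [(0.40, 0.661, 1.243),
         (0.220, 0.656, 0.462),
         (0.148, 0.653, 0.374),
         (0.111, 0.651, 0.993),
         (0.0, 0.606, 31.19)]

def sigma_pi_p(W):
    """Vacuum-exchange sigma(pi p) in mb at c.m.s. energy W (GeV)."""
    regge = X0 * W**2 / M_RHO2   # x0 / x with 1/x = W^2 / m_rho^2
    return sum(A * s * regge**D for D, A, s in POLES)

for W in (20.0, 100.0, 500.0):
    print(W, round(sigma_pi_p(W), 1))
```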
Although the hard pomeron component is important in both the purely hadronic ($`NN,\pi N`$) and photo-absorption reactions, there is an important distinction between the two cases. Namely, in contrast to the proton/pion wave function (17,20) which is smooth at $`r0`$, the real photon wave function squared $`|\mathrm{\Psi }_T(r)|^2`$ is singular (14). Furthermore, this singularity is a legitimate pQCD effect and makes (i) evaluation of hard contributions, $`m=0,1,2,3`$, to $`\gamma \pi ,\gamma p`$ cross sections more reliable and (ii) uniquely predicts that hard contribution to $`\sigma _{\mathrm{tot}}^{\gamma \pi },\sigma _{\mathrm{tot}}^{\gamma p}`$ is relatively stronger than to $`\sigma _{\mathrm{tot}}^{\pi p},\sigma _{\mathrm{tot}}^{pp}`$. Indeed, a closer inspection of the Table 1 shows that in $`\gamma N`$ the relative weight of hard component $`_0^3\sigma _m/\sigma _{\mathrm{soft}}`$ is almost twice as large as in the $`\pi N`$ scattering.
We observe that hard effects in both the $`pp,\pi p`$ and $`\gamma \pi ,\gamma p`$ channels exhaust to a large extent, or even completely, the observed rise of $`\sigma _{tot}(W^2)`$ at moderately large $`W^2`$. Furthermore, there arises an interesting scenario, discussed first in a simplified version in , in which soft interactions at moderately high energies can be described by the soft pomeron with $`\mathrm{\Delta }_{\mathrm{soft}}=0`$ plus a small contribution from hard scattering, the combined effect of which mimics the usually discussed phenomenological pomeron pole endowed with the intercept $`\mathrm{\Delta }\approx 0.08`$ . In this scenario the real issue is not an explanation of the rise of hadronic $`\sigma _{tot}(W^2)`$ at moderately high energies; rather it is whether there exists a mechanism to tame the too rapid rise of the extrapolation of the hard component of the CD BFKL-Regge expansion to very high energies. To this end recall that we considered only the non-unitarized running CD BFKL amplitudes, the too rapid rise of which must be tamed by unitarity absorption corrections. The discussion of the issue of unitarization goes beyond the scope of the present communication.
## 7 Conclusions
We have explored the consequences of the color dipole BFKL-Regge factorization for small-$`x`$ structure functions and high-energy total cross sections. We use the very restrictive perturbative two-gluon exchange as a parameter-free boundary condition for BFKL evolution in the color dipole basis. Under plausible assumptions on the color dipole structure of the pion, our parameter-free description of the pion structure function agrees well with the H1 determinations. The relationship found between $`F_{2p}(x,Q^2)`$ and $`F_{2\pi }(x,Q^2)`$ is very close to $`F_{2\pi }(x,Q^2)\approx \frac{2}{3}F_{2p}(\frac{2}{3}x,Q^2)`$. The contribution of sub-leading BFKL-Regge poles to the pion and proton structure functions is found to be small in a broad range of $`Q^2`$ of practical interest, which makes feasible the determination of the intercept $`\alpha _{𝐈𝐏}(0)`$ for the rightmost BFKL singularity from the structure function data.
Because of the diffusion in color dipole space, the hard scattering which is a dominant feature of $`F_{2\pi }(x,Q^2)`$ and $`F_{2p}(x,Q^2)`$ at large $`Q^2`$ contributes also to real photo-absorption and hadron-hadron scattering. This makes plausible a scenario in which the rising hard component and the genuine soft component with $`\mathrm{\Delta }_{\mathrm{soft}}=0`$ together mimic the effective vacuum pole with intercept $`\mathrm{\Delta }\simeq 0.08`$.
Acknowledgments: This work was partly supported by the grants INTAS-96-597 and INTAS-97-30494 and DFG 436RUS17/11/99.
Appendix
Although we have certain ideas on the shape of the eigen-functions $`\sigma _m(r)`$ as a function of $`r`$ and/or the eigen structure functions $`f_m(Q^2)`$ as a function of $`Q^2`$ , they are only available as numerical solutions to the running color dipole BFKL equation. On the other hand, for practical applications it is convenient to represent the results of the numerical solutions for $`f_m(Q^2)`$ in an analytical form
$$f_0(Q^2)=a_0\frac{R_0^2Q^2}{1+R_0^2Q^2}\left[1+c_0\mathrm{log}(1+r_0^2Q^2)\right]^{\gamma _0},$$
(29)
$$f_m(Q^2)=a_mf_0(Q^2)\frac{1+R_0^2Q^2}{1+R_m^2Q^2}\prod _{i=1}^{n_{max}}\left(1-\frac{z}{z_m^{(i)}}\right),m\ge 1,$$
(30)
where $`\gamma _0=\frac{4}{3\mathrm{\Delta }_0}=\frac{10}{3}`$ and
$$z=\left[1+c_m\mathrm{log}(1+r_m^2Q^2)\right]^{\gamma _m}-1,\gamma _m=\gamma _0\delta _m$$
(31)
and $`n_{max}=\mathrm{min}\{m,2\}`$.
The nodes of $`f_m(Q^2)`$ are spaced by 2-3 orders of magnitude on the $`Q^2`$-scale. The first nodes of the sub-leading $`f_m(Q^2)`$ are located at $`Q^2\simeq 20`$–$`60\mathrm{GeV}^2`$, the second nodes of $`f_2(Q^2)`$ and $`f_3(Q^2)`$ are at $`Q^2\simeq 5\times 10^3\mathrm{GeV}^2`$ and $`Q^2\simeq 2\times 10^4\mathrm{GeV}^2`$, respectively. The third node of $`f_3(Q^2)`$ is at $`2\times 10^7\mathrm{GeV}^2`$, way beyond the reach of accelerator experiments at small $`x`$. The parameters tuned to reproduce the numerical results for $`f_m(Q^2)`$ at $`Q^2\lesssim 10^5\mathrm{GeV}^2`$ are listed in Table 2. For $`m=3`$ in this limited range of $`Q^2`$ we take a simplified form with only the first two nodes; it must not be used for $`Q^2\gtrsim 10^5`$ GeV<sup>2</sup>.
Table 2. CD BFKL-Regge structure functions parameters.
| $`m`$ | $`a_m`$ | $`c_m`$ | $`r_m^2,`$ $`\mathrm{GeV}^{-2}`$ | $`R_m^2,`$ $`\mathrm{GeV}^{-2}`$ | $`z_m^{(1)}`$ | $`z_m^{(2)}`$ | $`\delta _m`$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 0.0232 | 0.3261 | 1.1204 | 2.6018 | | | 1. |
| 1 | 0.2788 | 0.1113 | 0.8755 | 3.4648 | 2.4773 | | 1.0915 |
| 2 | 0.1953 | 0.0833 | 1.5682 | 3.4824 | 1.7706 | 12.991 | 1.2450 |
| 3 | 0.4713 | 0.0653 | 3.9567 | 2.7756 | 1.4963 | 6.9160 | 1.2284 |
| soft | 0.1077 | 0.0673 | 7.0332 | 6.6447 | | | |
The soft component of the proton structure function derived from eq.(9), with $`\sigma _{\mathrm{soft}}(r)`$ taken from , is parameterized as follows
$$f_{\mathrm{soft}}(Q^2)=\frac{a_{\mathrm{soft}}R_{\mathrm{soft}}^2Q^2}{1+R_{\mathrm{soft}}^2Q^2}\left[1+c_{\mathrm{soft}}\mathrm{log}(1+r_{\mathrm{soft}}^2Q^2)\right],$$
(32)
with parameters listed in Table 2.
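For numerical work, the parameterizations of eqs. (29)-(32) with the Table 2 parameters can be coded directly. The Python sketch below is ours, not part of the original fits; it assumes that log denotes the natural logarithm (with this choice the first node of $`f_1`$ falls near $`Q^240\mathrm{GeV}^2`$, inside the quoted 20–60 GeV<sup>2</sup> range).

```python
import math

# Table 2 parameters: m -> (a, c, r2 [GeV^-2], R2 [GeV^-2], nodes z_m^(i), delta)
PARAMS = {
    0: dict(a=0.0232, c=0.3261, r2=1.1204, R2=2.6018, nodes=[],               delta=1.0),
    1: dict(a=0.2788, c=0.1113, r2=0.8755, R2=3.4648, nodes=[2.4773],         delta=1.0915),
    2: dict(a=0.1953, c=0.0833, r2=1.5682, R2=3.4824, nodes=[1.7706, 12.991], delta=1.2450),
    3: dict(a=0.4713, c=0.0653, r2=3.9567, R2=2.7756, nodes=[1.4963, 6.9160], delta=1.2284),
}
SOFT = dict(a=0.1077, c=0.0673, r2=7.0332, R2=6.6447)

GAMMA0 = 10.0 / 3.0   # gamma_0 = 4/(3*Delta_0)

def f0(Q2):
    """Leading eigen structure function, eq. (29)."""
    p = PARAMS[0]
    return (p['a'] * p['R2'] * Q2 / (1 + p['R2'] * Q2)
            * (1 + p['c'] * math.log(1 + p['r2'] * Q2)) ** GAMMA0)

def fm(m, Q2):
    """Sub-leading eigen structure functions, eqs. (30)-(31)."""
    if m == 0:
        return f0(Q2)
    p = PARAMS[m]
    z = (1 + p['c'] * math.log(1 + p['r2'] * Q2)) ** (GAMMA0 * p['delta']) - 1
    prod = 1.0
    for zi in p['nodes']:              # n_max = min(m, 2) nodes
        prod *= 1 - z / zi
    return p['a'] * f0(Q2) * (1 + PARAMS[0]['R2'] * Q2) / (1 + p['R2'] * Q2) * prod

def f_soft(Q2):
    """Soft component, eq. (32)."""
    return (SOFT['a'] * SOFT['R2'] * Q2 / (1 + SOFT['R2'] * Q2)
            * (1 + SOFT['c'] * math.log(1 + SOFT['r2'] * Q2)))
```

With these functions the sign change of $`f_1`$ across its first node can be checked numerically.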
Figure Captions
1. Predictions from the CD BFKL-Regge factorization for the pion structure function $`F_{2\pi }(x_{Bj},Q^2)`$ (solid lines). The experimental data from the H1 Collaboration are shown by full circles. The different components of $`F_{2\pi }(x_{Bj},Q^2)`$ are shown: the valence contribution (dashed-dotted), the non-perturbative soft contribution (dashed), the Leading Hard+Soft+Valence Approximation $`F_{2\pi }^{\mathrm{LHSVA}}(x_{Bj},Q^2)`$ (dotted) and the sum of the soft and leading hard pole contributions without valence component (long dashed).
2. The decomposition into different components of earlier predictions from the CD BFKL-Regge factorization for the proton structure function $`F_{2p}(x,Q^2)`$ (solid lines). The experimental data from the H1 and ZEUS Collaborations are shown by circles and triangles, respectively. The different components of $`F_{2p}(x,Q^2)`$ are shown: the valence contribution (dashed-dotted), the non-perturbative soft contribution (dashed), the Leading Hard+Soft+Valence Approximation (LHSVA) $`F_{2p}^{\mathrm{LHSVA}}(x,Q^2)`$ (dotted) and the sum of the soft and leading hard pole contributions without valence component (long dashed).
3. The CD BFKL-Regge factorization evaluation of the vacuum exchange contribution to $`\sigma _{\mathrm{tot}}^{NN}`$ (solid line in box (a)), $`\sigma _{\mathrm{tot}}^{\pi N}`$ (solid line in box (b)), $`\sigma _{\mathrm{tot}}^{\gamma N}`$ (solid) and $`\sigma _{\mathrm{tot}}^{\gamma \pi }`$ (dashed line in box (c)). The data points are from . |
Can Heavy WIMPs Be Captured by the Earth?
## 1 Introduction
If weakly interacting massive particles (WIMPs) comprise the dark matter, they would have a local density $`\rho _x\simeq 0.3\mathrm{GeV}\mathrm{cm}^{-3}`$. They would be potentially detectable in several ways, including directly in underground detectors, from WIMP-anti-WIMP annihilations in the Galactic halo, and indirectly from neutrinos produced by annihilations of WIMPs captured by the Earth and the Sun (e.g., Jungman, Kamionkowski, & Griest 1996).
WIMP capture by the Sun was first discussed in the context of a proposed WIMP solution to the solar-neutrino problem (Press & Spergel 1985; Faulkner & Gilliland 1985). Subsequently, several workers realized that if both WIMPs and anti-WIMPs were captured by the Sun (Silk, Olive, & Srednicki 1985; Gaisser, Steigman, & Tilav 1986; Srednicki, Olive, & Silk 1987; Griest & Seckel 1987) or Earth (Freese 1986; Krauss, Srednicki, & Wilczek 1986; Gaisser et al. 1986), they might annihilate into neutrinos which would then pass directly through these bodies and into neutrino detectors near the surface of the Earth. The failure to detect neutrinos coming from the centers of the Sun and Earth could then place constraints on the WIMP mass $`M_x`$ and cross section $`\sigma _x`$.
Because the Sun’s gravitational potential is much deeper than the Earth’s, it initially appeared as though the Sun would be so much more effective at capture that the annihilation signal from the Sun would always be much larger than from the Earth. However, Gould (1987) showed that there were “resonant enhancements” in capture by the Earth whenever the WIMP mass $`M_x`$ was close to the mass $`M_A`$ of an element $`A`$ that is common in the Earth. Specifically, for hard-sphere cross sections $`\sigma _{xA}`$, the capture rate is given by (see also Gould 1992)
$$C=\frac{N_A\sigma _{xA}}{\beta _{-}}\int _0^{v_{\mathrm{cut}}}d^3u\,(v_{\mathrm{cut}}^2-u^2)\frac{f(𝐮)}{u}$$
(1)
where
$$v_{\mathrm{cut}}^2\equiv \beta _{-}v_{\mathrm{esc}}^2,\beta _\pm \equiv \frac{4M_xM_A}{(M_x\pm M_A)^2},$$
(2)
$`f(𝐮)`$ is the distribution of velocities $`𝐮`$ of WIMPs relative to the Earth in the terrestrial neighborhood but away from the Earth’s potential, $`N_A`$ is the total number of atoms $`A`$ in the Earth, and $`v_{\mathrm{esc}}\simeq 13\mathrm{km}\mathrm{s}^{-1}`$ is the escape velocity at the position of the atoms. In principle, one should allow for a range of escape velocities of the atoms, but for the case of iron in the Earth (of interest here), Gould (1987) showed that the error induced by simply using the average escape velocity is small.
In the present context, it is very important to note that $`v_{\mathrm{cut}}`$ is the maximum velocity for which an ambient WIMP can be captured by the Earth. If $`M_A\simeq M_x`$, then $`\beta _{-}\gg 1`$ and so $`v_{\mathrm{cut}}\gg v_{\mathrm{esc}}`$. In this case, capture is dominated by WIMPs that have velocities typical of the halo, $`v_h\simeq 300\mathrm{km}\mathrm{s}^{-1}`$. This is the source of the “resonant enhancements” mentioned above: over the whole mass range $`12\mathrm{GeV}\lesssim M_x\lesssim 65\mathrm{GeV}`$, WIMP capture is dominated by resonances with elements common in the Earth: oxygen, magnesium, silicon, and iron. See Figure 2 from Gould (1987). For masses $`M_x\gtrsim 65\mathrm{GeV}`$, WIMP capture is overwhelmingly due to the “tail” of the iron resonance. If the velocity distribution at low velocities could be approximated as a constant $`f(𝐮)\simeq f(0)`$, then the capture rate would take the form
$$C_0=\frac{\pi N_A\sigma _{xA}f(0)v_{\mathrm{cut}}^4}{\beta _{-}}\rightarrow 4\pi N_A\sigma _{xA}f(0)v_{\mathrm{esc}}^4\frac{M_A}{M_x}\ [\mathrm{assuming}\ f(𝐮)=f(0)\ \mathrm{for}\ u<v_{\mathrm{cut}}].$$
(3)
Thus in the high-mass limit (evaluated after the arrow), the capture rate would fall as $`C_0\propto M_x^{-2}`$: one power of $`M_x`$ appears explicitly and the other is implicit in $`f(𝐮)`$ for fixed $`\rho _x`$.
Gould (1988) showed that, unfortunately, the high-mass capture rate might not be so simple. If one restricts attention to WIMPs that are not bound to the Sun, then
$$f_{\mathrm{unbound}}(𝐮)=0\ \mathrm{for}\ |𝐮+𝐯_{\oplus }|<2^{1/2}v_{\oplus },$$
(4)
where $`𝐯_{\oplus }`$ is the velocity of the Earth. To properly evaluate capture in the high-mass regime it is therefore necessary to evaluate the velocity distribution of bound as well as unbound WIMPs. In particular, if there were no bound WIMPs, then there would be no WIMP capture at all for WIMPs with $`v_{\mathrm{cut}}<(2^{1/2}-1)v_{\oplus }`$, and WIMP capture would be highly suppressed for WIMPs with $`v_{\mathrm{cut}}\lesssim v_{\oplus }`$. These thresholds correspond to $`M_x\simeq 320\mathrm{GeV}`$ and $`M_x\simeq 120\mathrm{GeV}`$ respectively.
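These thresholds follow directly from eq. (2) applied to scattering on iron. The short Python sketch below is ours; the iron mass $`M_{\mathrm{Fe}}\simeq 52\mathrm{GeV}`$ and the Earth’s orbital speed $`v_{\oplus }\simeq 29.8\mathrm{km}\mathrm{s}^{-1}`$ are assumed standard values not quoted in the text.

```python
import math

M_FE    = 52.0   # GeV, iron mass (assumed value, ~56 nucleons)
V_ESC   = 13.0   # km/s, escape velocity at the position of the iron atoms (sec. 1)
V_EARTH = 29.8   # km/s, orbital velocity of the Earth (assumed value)

def v_cut(mx):
    """Maximum velocity of a capturable WIMP, eq. (2), for scattering on iron."""
    beta_minus = 4.0 * M_FE * mx / (mx - M_FE) ** 2
    return math.sqrt(beta_minus) * V_ESC

# v_cut(120) ~ v_earth              : capture of unbound WIMPs strongly suppressed
# v_cut(320) ~ (sqrt(2)-1)*v_earth  : capture of unbound WIMPs impossible
```

Evaluating `v_cut` at 120 and 320 GeV reproduces the two quoted thresholds to within a few percent.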
Gould (1991) argued from detailed balance that regardless of the initial WIMP velocity distribution at the time of the formation of the solar system, $`f_{\mathrm{bound}}(𝐮)`$ would be driven toward $`f_{\odot }(0)`$, the low-velocity limit of the velocity distribution in the solar neighborhood but away from the solar potential,
$$f_{\mathrm{bound}}(𝐮)\rightarrow f_{\odot }(0)\qquad (\mathrm{Gould}1991).$$
(5)
He found that the characteristic time for the evolution of the velocity distribution is less than the age of the solar system for $`u\lesssim v_{\oplus }`$ (Fig. 3 from Gould 1991). While these arguments still left indeterminate the bound WIMP distribution for $`u\gtrsim v_{\oplus }`$, in practice the capture rate would not be seriously affected for any mass even if all these orbits were empty. Hence Gould’s (1991) arguments appeared to resolve the problem of WIMP capture for any $`M_x`$, $`\sigma _{xA}`$ and Galactic WIMP distribution.
## 2 New Developments
There have been two new developments since 1991 that challenge this seemingly closed case. First, Farinella et al. (1994) have shown by direct numerical integration that a large fraction of Earth-crossing asteroids are systematically driven into the Sun by various solar-system resonances. Gladman et al. (1997) and Migliorini et al. (1998) have further studied this problem and generally confirmed the initial results. These solar collisions occur on Myr time scales, several orders of magnitude faster than the characteristic diffusion times evaluated by Gould (1991). If WIMPs were, like asteroids, also driven into the Sun, they would be captured by the Sun and hence would be unavailable for capture by the Earth.
Second, Damour & Krauss (1998, 1999) have shown that WIMPs captured by the outer layers of the Sun into highly eccentric orbits could evolve into non-Sun-crossing orbits before they again collided with solar nuclei. If so, WIMPs on these eccentric orbits could substantially increase the number of low-velocity WIMPs in the solar neighborhood and so dramatically increase the capture rate in the mass range $`60\mathrm{GeV}\lesssim M_x\lesssim 130\mathrm{GeV}`$ (Bergstrom et al. 1999). Below 60 GeV, WIMP capture is dominated by the iron resonance, while above 130 GeV, $`v_{\mathrm{cut}}`$ falls below $`v_{\oplus }`$, the minimum speed relative to the Earth for WIMPs on highly eccentric orbits.
Here we assess the problem of interpreting neutrino-detection experiments in light of these two conflicting developments.
## 3 Basic Approach
The central experimental fact that must guide our analysis is that to date no neutrinos have been detected from the center of either the Earth or the Sun. The experiments therefore place upper limits on WIMPs. Thus, a conservative interpretation of these experiments requires one to assume that only those parts of velocity space are populated whose occupation can be justified by very secure theoretical arguments. This perspective implies that we must assume that all bound WIMPs on Earth-crossing orbits are driven into the Sun within a few Myr. This includes primordial WIMPs, the gravitationally diffused WIMPs of Gould (1991), and the solar-collision WIMPs of Damour & Krauss (1998, 1999).
Before continuing, we wish to emphasize that it is by no means proven that all bound WIMPs are in fact driven into the Sun. The numerical integrations carried out to date (Farinella et al. 1994; Gladman et al. 1997; Migliorini et al. 1998) apply to a very special subclass of Earth-crossing orbits, namely those of existing minor bodies. Near-Earth asteroids are believed to be transported from their reservoir in the asteroid belt by means of resonances. It therefore may not be surprising that their continued orbital evolution is dominated by resonances. The main population of Earth-crossing WIMPs, which acquire their orbits by quite different, non-resonant paths (Gould 1991; Damour & Krauss 1998,1999), could be virtually unaffected by the resonances that drive asteroids into the Sun. Our viewpoint is simply that in the absence of proof that Earth-crossing WIMPs are not depopulated, one cannot place reliable upper limits on WIMPs from the failure to detect the annihilation signal that would be triggered by the capture of these Earth-crossing WIMPs.
## 4 Assured WIMP Capture
We begin by adopting an ultra-conservative view and assume that all bound WIMPs acquired by the solar system are immediately driven into the Sun. Then, for $`v_{\mathrm{cut}}<(2^{1/2}+1)v_{\oplus }`$, it is straightforward to show from equation (1) that
$$C_{\mathrm{ultra}}=\frac{2\pi N_A\sigma _{xA}f_{\odot }(0)}{\beta _{-}}\int _{u=v_{\mathrm{hole}}-v_{\oplus }}^{v_{\mathrm{cut}}}u\,du\,(v_{\mathrm{cut}}^2-u^2)\left(1-\frac{v_{\mathrm{hole}}^2-v_{\oplus }^2-u^2}{2uv_{\oplus }}\right).$$
(6)
where
$$v_{\mathrm{hole}}=2^{1/2}v_{\oplus }.$$
(7)
Hence, the ratio of the number of WIMPs captured under ultra-conservative assumptions to the naive formula (3) is
$$\frac{C_{\mathrm{ultra}}}{C_0}=\left\{\frac{1}{2}(1-s^2)^2+\frac{v_{\mathrm{cut}}}{v_{\oplus }}\left[-t^2(1-s)+\frac{(1+t^2)(1-s^3)}{3}-\frac{1-s^5}{5}\right]\right\}\mathrm{\Theta }(1-s),$$
(8)
where
$$s=\frac{v_{\mathrm{hole}}-v_{\oplus }}{v_{\mathrm{cut}}}=0.41\frac{v_{\oplus }}{v_{\mathrm{cut}}},t=\frac{\sqrt{v_{\mathrm{hole}}^2-v_{\oplus }^2}}{v_{\mathrm{cut}}}=\frac{v_{\oplus }}{v_{\mathrm{cut}}},$$
(9)
and $`\mathrm{\Theta }`$ is a step function. The ratio $`C_{\mathrm{ultra}}/C_0`$ as a function of WIMP mass $`M_x`$ is shown as a solid line in Figure 1.
Equation (6) is in fact too conservative. Gould (1991) showed that Jupiter-crossing orbits (including those that also cross the Earth’s orbit) are populated from the reservoir of Galactic WIMPs on very short time scales. To an adequate approximation, the Jupiter-crossing orbits can be described as those with Earth-crossing velocities $`𝐮`$ constrained by $`|𝐮+𝐯_{\oplus }|>[2(1-a_{\oplus }/a_J)]^{1/2}v_{\oplus }`$, where $`a_J/a_{\oplus }\simeq 5.2`$ is the ratio of the orbital radii of Jupiter and the Earth. Hence the ratio of capture rates in this conservative (but not ultra-conservative) framework is still given by equation (8) but with $`v_{\mathrm{hole}}^2\rightarrow 2(1-a_{\oplus }/a_J)v_{\oplus }^2`$ and consequently
$$s=0.27\frac{v_{\oplus }}{v_{\mathrm{cut}}},t=0.78\frac{v_{\oplus }}{v_{\mathrm{cut}}}.$$
(10)
This result is shown as a bold curve in Figure 1.
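Equation (8) can be checked against a direct numerical integration of equation (6). The Python sketch below is ours (the function names are hypothetical); it works in the dimensionless variable $`x=u/v_{\mathrm{cut}}`$, with $`s`$ and $`t`$ as in equations (9) and (10) and $`w=v_{\oplus }/v_{\mathrm{cut}}`$.

```python
import numpy as np

def ratio_analytic(s, t, w):
    """C_ultra/C_0 of eq. (8); w = v_earth/v_cut, so v_cut/v_earth = 1/w."""
    if s >= 1.0:
        return 0.0
    return (0.5 * (1 - s**2)**2
            + (1.0 / w) * (-t**2 * (1 - s)
                           + (1 + t**2) * (1 - s**3) / 3.0
                           - (1 - s**5) / 5.0))

def ratio_numeric(s, t, w, n=200001):
    """Same ratio by direct integration of eq. (6): the factor
    1 - (t**2 - x**2)/(2*x*w) is the solid-angle fraction of populated
    directions at fixed speed x = u/v_cut."""
    if s >= 1.0:
        return 0.0
    x = np.linspace(s, 1.0, n)
    g = 2 * x * (1 - x**2) * (1 - (t**2 - x**2) / (2 * x * w))
    dx = x[1] - x[0]
    return float((g.sum() - 0.5 * (g[0] + g[-1])) * dx)   # trapezoid rule

# ultra-conservative hole, eq. (9): s = 0.41*w, t = w
# Jupiter-crossing hole, eq. (10): s = 0.27*w, t = 0.78*w
```

For example, at $`v_{\mathrm{cut}}=v_{\oplus }`$ (i.e. $`w=1`$) the Jupiter-crossing case gives a larger ratio than the ultra-conservative one, as in Figure 1.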
In principle one should take into account the loss of coherence in WIMP-nucleon interactions that occurs when the momentum transfer, $`q`$, becomes comparable to (or larger than) the inverse radius of the nucleus, $`R\simeq 3.7`$ fm for iron. The suppression due to loss of coherence is $`\mathrm{exp}(-q^2R^2/3\hbar ^2)`$ (Gould 1987). However, for WIMPs of mass $`M_x`$, the maximum momentum transfer that leads to capture is $`q_{\mathrm{max}}=(M_{\mathrm{Fe}}M_x)^{1/2}v_{\mathrm{cut}}`$. In the high mass limit, $`q_{\mathrm{max}}\rightarrow 2M_{\mathrm{Fe}}v_{\mathrm{esc}}`$. Hence, the most extreme suppression factor is $`\mathrm{exp}[-(2M_{\mathrm{Fe}}v_{\mathrm{esc}}R_{\mathrm{Fe}}/\hbar )^2/3]\simeq 0.997`$. We conclude that loss of coherence can be ignored. See also Gould (1987).
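The quoted suppression factor is easy to verify numerically; in this sketch of ours, $`\mathrm{}c`$ and the iron mass are assumed standard values.

```python
import math

HBARC = 0.19733          # GeV fm (assumed standard value of hbar*c)
M_FE  = 52.0             # GeV, iron mass (assumed value, ~56 nucleons)
R_FE  = 3.7              # fm, iron nuclear radius (quoted above)
V_ESC = 13.0 / 2.9979e5  # escape velocity, converted to units of c

q_max = 2 * M_FE * V_ESC                                # GeV/c, high-mass limit
suppression = math.exp(-(q_max * R_FE / HBARC) ** 2 / 3)
# suppression comes out ~0.997, i.e. loss of coherence is negligible
```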
## 5 Discussion
From Figure 1, we see that under the conservative assumption that WIMPs do not populate bound orbits (unless they are Jupiter-crossing), WIMP capture is highly suppressed for WIMP masses $`M_x\gtrsim 150\mathrm{GeV}`$ and completely eliminated for $`M_x>630\mathrm{GeV}`$. These results imply that, at present, the non-detection of neutrinos coming from the Earth’s center cannot be used to place limits on WIMPs of mass $`M_x>630\mathrm{GeV}`$. Moreover, for masses $`75\mathrm{GeV}\lesssim M_x\lesssim 630\mathrm{GeV}`$, the limits must be softened relative to what would be obtained from equation (3).
It is quite possible that WIMPs are not generically driven into the Sun. The evidence that they are comes from numerical integration of asteroid orbits, and these latter could occupy a very special locus in parameter space. It will be necessary to integrate typical WIMP orbits to find out if WIMPs survive longer than asteroids. In particular, these integrations should focus on the highly eccentric orbits for which Damour & Krauss (1998,1999) predict a huge enhancement for $`M_x130\mathrm{GeV}`$. If typical WIMP orbits are found to survive substantially longer than asteroid orbits, then the limits derived from the naive calculations of Gould (1987, 1991) would become valid. It is even possible that the much stronger limits derived by Bergstrom et al. (1999) would apply.
Acknowledgements: We thank A. Quillen for pointing out the importance of recent work on asteroid orbits. This work was supported in part by grant AST 97-27520 from the NSF. |
Analyzing fragmentation of simple fluids with percolation theory
## 1 Introduction
Fragmentation phenomena concern a wide diversity of objects in nature, at many scales of distance and time. A natural question is to ask what is generic and what is specific in these phenomena. Most theoretical efforts made to understand fragmentation apply only to specific objects or, on the contrary, concern simple models with few links to reality . Moreover, experimental data are rather sparse and often the experimental conditions are ill defined. As a consequence, the question of the possible existence of universal fragmentation mechanisms remains an open problem .
The arguments in favour of the existence of universal mechanisms are of various orders. For instance, in aggregation phenomena, which can be considered as the opposite of fragmentation, it is possible to define universal classes (Diffusion Limited Aggregation, Clustering of clusters…) in terms of the initial conditions (number of seeds) and the motion of the aggregating particles (ballistic, Brownian). The fractal structure (dimension) of the aggregation cluster is the fingerprint of these universal classes. In many fragmentation processes the experimental observation, over many orders of magnitude, of power law (scale invariant) fragment size distributions is another possible indication of universal classes. In this case, the value of the power law exponent would be the corresponding fingerprint .
Collision induced fragmentation of small fluid drops (atomic nuclei , atomic aggregates , liquid droplets ) is a particularly active field of experimental research because it offers the best possibilities of complete identification of the fragmentation products. We present in this paper an analysis of the fragmentation of atomic nuclei in high energy collisions. We show that random percolation theory accounts for experimental data without any adjustable parameter and we discuss two possible explanations for this agreement, depending on whether or not one assumes that thermal equilibrium is reached before fragmentation occurs. It is then suggested that this percolation type fragmentation mechanism could be universal for simple fluids, i.e. fluids made of structureless particles interacting with short range potentials.
The structure of the paper is the following. In section 2, a brief description of the experiment precedes the analysis of data. Section 3 is devoted to the interpretation of the results. Some final remarks and conclusions are in section 4.
## 2 Analysis of fragmentation data
### 2.1 The experiment
The experiment was performed by the Aladin collaboration at the Gesellschaft für Schwerionenforschung (G.S.I.), Darmstadt . Beams of gold ions (Au, $`Z=79`$) at 600 and 1000 MeV/nucleon incident energies were used to bombard targets made of thin copper foils. Data for more than $`3\times 10^5`$ events were collected. At these high energies<sup>3</sup><sup>3</sup>3High energies compared with the 8 MeV/nucleon binding energies or the 35 MeV/nucleon Fermi energies of nuclei., the commonly admitted scenario of the collision is the following: The part of the projectile that does not geometrically overlap with the target at the point of closest approach is thought to decouple from the rest and form a sub-system called the projectile spectator (PS). The size of this system and its excitation energy $`E^{*}`$ are therefore dependent on the impact parameter. The overlapping part of the system is completely vaporized and therefore does not contribute to the formation of fragments with nuclear charge $`(z)`$ greater than one.
In the present experiment, the Aladin device detected with very high efficiency all spectator fragments (those resulting from the decay of the unstable PS) with nuclear charge $`z>1`$. Neither hydrogen isotopes nor neutrons were detected, event by event, with any significant efficiency. The initial size of the spectator system ($`Z_{PS}`$) is not experimentally measured and can only be inferred from the comparison with a nuclear reaction model. Empirically, this size can be estimated, on average, from the following relation:
$$<Z_{PS}>=25+Z_{bound}-0.004Z_{bound}^2$$
(1)
where $`Z_{bound}`$ is equal to the sum of all the charges of products with $`z\ge 2`$. In all the studies of these experiments, the parameter $`Z_{bound}`$ is used as control parameter. Notice that $`Z_{bound}`$, which is closely related to the impact parameter, decreases when the violence of the collision increases.
### 2.2 Analysis of data
To analyze the Aladin data, the following very simple procedure has been used. For a given value of $`Z_{bound}`$, the size of the PS was deduced from eq.1 and percolation calculations on a lattice of corresponding size were performed by varying the bond breaking parameter (random-bond percolation). Only those events with the proper $`Z_{bound}`$ were kept and the fragment size distribution of these compared to the corresponding experimental data. In this manner the comparison is parameter free. Notice that by fixing a value of $`Z_{bound}`$, the maximum value $`z_{max}`$ of $`z`$ is also constrained by this value.
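The essential ingredients of this procedure can be illustrated with a minimal random-bond percolation sketch. The Python code below is ours: it uses a cubic lattice and union-find clustering for simplicity, whereas the actual analysis used a lattice matched to the PS size inferred from eq. (1).

```python
import random

def percolate(L, p_bond, seed=0):
    """One random-bond percolation event on an L*L*L cubic lattice: each
    nearest-neighbour bond is kept with probability p_bond; returns the
    sorted cluster (fragment) sizes."""
    rng = random.Random(seed)
    n = L * L * L
    parent = list(range(n))

    def find(i):                       # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    def idx(x, y, z):
        return (x * L + y) * L + z

    for x in range(L):
        for y in range(L):
            for z in range(L):
                # bonds to the +x, +y, +z neighbours
                for dx, dy, dz in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
                    if x + dx < L and y + dy < L and z + dz < L \
                            and rng.random() < p_bond:
                        union(idx(x, y, z), idx(x + dx, y + dy, z + dz))

    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return sorted(sizes.values(), reverse=True)

def z_bound(sizes):
    """Aladin control parameter: summed charge of fragments with z >= 2."""
    return sum(z for z in sizes if z >= 2)
```

Scanning `p_bond` and keeping only events with the desired `z_bound` mimics the event selection described above.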
Figure 1 shows, for $`Z_{bound}`$ values of 20, 48, 60 and 70, the experimental fragment size distributions. One observes three different regimes of fragmentation. For large $`Z_{bound}`$, large impact parameter and low excitation energy, only a heavy residue and light particles are produced. This is the “evaporation” regime. For small $`Z_{bound}`$ (most violent collisions) only small fragments are produced and no heavy residue is left. The distribution in this “vaporization” regime decreases exponentially. Remarkably, at $`Z_{bound}`$ around $`48`$ an intermediate regime exists, for which the distribution can be fitted by a power law $`n(z)\propto z^{-2.2}`$. The mean excitation energies per particle can be estimated for the fragmenting systems as a function of $`Z_{bound}`$ : they correspond to about $`<E^{*}>\simeq 14,6,3,1`$ MeV per particle for $`Z_{bound}=20,48,60,70`$ respectively.
The full lines in figure 1 represent the corresponding distributions obtained from the percolation calculation. The calculation reproduces quantitatively, and over several orders of magnitude, the data in the three regimes<sup>4</sup><sup>4</sup>4A close inspection of the data for $`Z_{bound}=70`$ shows an excess of fragments produced around $`z=30`$ due to the presence of fission events. This is understandably beyond the scope of percolation theory.. We are naturally led to associate the experimental power law behaviour at $`Z_{bound}48`$ to the percolation critical behaviour.
Close to the critical regime, percolation theory predicts typical scaling properties of the cluster size distributions . The size distributions can be reduced to a universal function, $`f(\frac{z}{\overline{z}})`$, by the following formula :
$$n(z)=n_c(z)f(\frac{z}{\overline{z}})$$
(2)
where $`n_c(z)`$ is the size distribution at the critical point, taken here as the size distribution observed for $`Z_{bound}=48`$. The “characteristic size” $`\overline{z}`$ is defined by :
$$\overline{z}=m_3/m_2$$
(3)
with $`m_k=\sum _{z\ge 2}z^kn^{\prime }(z)`$, where $`n^{\prime }(z)`$ is the mean fragment size distribution obtained excluding, event by event, the largest fragment. One observes in figure 2 that, within the scattering of the data, this rule is rather well satisfied, keeping in mind that this corresponds to the ratio of two quantities varying over more than three orders of magnitude.
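Operationally, the moments and the characteristic size can be computed from an event sample as follows. This is a sketch of ours; `events` is a hypothetical list of per-event fragment charge lists.

```python
def reduced_moments(events, k_list=(2, 3)):
    """Moments m_k = sum_{z>=2} z**k n'(z), where n'(z) is the mean
    fragment size distribution excluding, event by event, the largest
    fragment."""
    counts = {}
    for frags in events:
        largest = max(frags)
        skipped = False
        for z in frags:
            if z == largest and not skipped:
                skipped = True          # drop exactly one copy of the largest
                continue
            if z >= 2:
                counts[z] = counts.get(z, 0) + 1
    n_events = len(events)
    return {k: sum(z**k * c / n_events for z, c in counts.items()) for k in k_list}

def zbar(events):
    """Characteristic size of eq. (3): zbar = m_3 / m_2."""
    m = reduced_moments(events)
    return m[3] / m[2]
```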
The size of the largest fragment plays in percolation theory the role of the order parameter. For an infinite system, it is of infinite size in the percolating phase and finite in the non-percolating one. In a finite system, this transition is smooth, as illustrated in figure 3. This figure on the left compares, again as a function of $`Z_{bound}`$, the experimentally measured size of the heaviest fragment $`z_{max}`$ to the one obtained from the percolation calculations. The figure on the right shows the fluctuations of $`z_{max}`$,
$$\sigma _{zmax}^2=\frac{<z_{max}^2><z_{max}>^2}{<z_{max}>}.$$
(4)
As expected , these functions show a maximum around the “critical” value of $`Z_{bound}`$. The agreement between experiment and theory is good, except for the highest values of $`Z_{bound}`$, where the fluctuations should vanish for $`z=79`$. Very probably, either contributions of the target have been included in the data set or misidentification of fragments is present. Naturally, the percolation model used here cannot account for either of these features. In figure 4 one can see that even the full distribution $`P(z_{max})`$ is well accounted for by the calculation.
By any standard, the agreement observed in figures 1 to 4, between the experiment and the calculations is very good. In the analysis presented here, as already stressed, no adjustable parameters are used.
## 3 Interpretation of the results
The choice of percolation theory to analyze the experimental results is motivated by the following reasoning. A fully microscopic description of nuclear fragmentation is, a priori, beyond the reach of theory. Atomic nuclei behave in their ground state as small drops of Fermi liquids composed of particles strongly interacting mainly through a two-body, short range force. This interaction is ill defined at short distances and the techniques used to solve this complicated many-body problem are not fully under control. The description of collisions, using transport equations for example, is even more difficult. In view of the above remarks and with the stated goal of understanding the universal features of the data, we believe that a more fruitful point of view is to tackle the problem with the minimum number of assumptions.
The simplest hypothesis would be to assume the equiprobability of all partitions of the integer number $`Z=79`$. This is equivalent to a “maximum entropy principle” . However, the resulting $`n(z)`$ are always exponentially decaying functions, in contradiction with experiments.
A step further is to consider topological constraints, by considering fragments in 3-dimensional space. Among the infinity of models one can imagine for making fragments, random-bond percolation seems a good candidate because of its simplicity and because it retains only the essential constraints. For example, the shape of the fragments is not assumed a priori. It turns out that, as shown in section 2, this suffices to reproduce the experimental data very well. How does one understand this agreement? One can imagine at least two scenarios.
In the first scenario, one idealizes the nuclear fluid as an ensemble of particles connected by bonds. The bond between a pair of particles is active as long as the magnitude of their potential energy is greater than the relative kinetic energy. During the collision some of these bonds break because of the change in the position and/or the velocity of the particles. The simplest assumption is that bonds are broken randomly, which corresponds to the usual uncorrelated bond percolation model . Particles connected by unbroken bonds form the fragments. These fragments separate rapidly, pushed away by the long range Coulomb force between protons.
In a second scenario, one assumes that after the collision phase, equilibrium thermodynamics applies. The system expands until a “freeze-out” density is reached, at which the fragments cease to interact by the strong nuclear attractive force and their size distribution is “frozen”. The Coulomb force accelerates the fragments, as before. In order to calculate the fragment size distribution, we first have to define what a “fragment” is. In the present context, it seems natural to call “fragment” a self-bound ensemble of particles <sup>5</sup><sup>5</sup>5Another possibility is to impose the stability by particle evaporation only . At the present level of accuracy, both definitions are operationally very similar. This definition is equivalent to the ones proposed by Hill and by Coniglio and Klein . With these definitions, one finds in the $`\rho T`$ diagram a percolation line (sometimes called the Kertész line ) that separates a percolating and a non-percolating phase. The line joins the thermodynamical critical point to the random bond percolation critical point. On it the fragment size distribution is a power law $`n(z)\propto z^{-\tau }`$, with $`\tau \simeq 2.2`$ (see figs. 1 and 2 of ref. ). In a small system like an atomic nucleus, rather than a sharp critical line, one finds a “critical zone” on which the $`n(z)`$ approaches this behaviour (see figure 4 of ref. ). The similarity of these calculated $`n(z)`$ with the experimental ones shown in figure 1 is very obvious. Therefore, by inspection of the $`n(z)`$ alone, it is not possible to distinguish between these two scenarios. However, one could hope that extra information on, for example, the fragment kinetic energies could indicate whether or not thermalization is present.
More generally, the main difficulty in analyzing nuclear fragmentation data is due to the very small size of the system, which fundamentally limits the extraction of the universal critical exponents. Indeed, a proper characterization of the physical process requires, apart from the $`\tau `$ exponent, the determination of the other critical exponents associated with the moments $`m_k`$ of the fragment size distribution. Such measurements are, in principle, possible for simple fluid systems (i.e. made of structureless particles subject to short range forces) of larger size, such as atomic aggregates or macroscopic pieces of matter like liquid drops .
We consider as very plausible the possibility of observing this percolation type fragmentation in simple fluids. Indeed, the arguments that we have developed to explain the success of percolation theory should apply without restriction to any system of structureless particles interacting with a short range potential. In fact, attempts have been made to demonstrate this behaviour experimentally in the fragmentation of hydrogen aggregates . In these experiments, the multiplicity of fragments $`m_0`$ is used as control parameter. As a function of $`m_0`$, the mass of the largest fragment and its fluctuation evolve qualitatively as expected in percolation theory. However, in ref. , the data are compared with a percolation system of improper size and no definite conclusions can be drawn.
On the theoretical side, we are performing large-scale classical molecular dynamics simulations of a Lennard-Jones fluid, both for the sudden disassembly of an equilibrated system and for collisions of drops. In these calculations we clearly find the fragment size distributions predicted by percolation theory .
## 4 Final remarks
Other experiments of nuclear fragmentation at high bombarding energies have been successfully interpreted with percolation theory . However, at lower bombarding energies (30-50 MeV/nucleon) the agreement is less satisfactory . The shape of the $`n(z)`$ evolves qualitatively as in figure 1, but a closer examination shows a systematic over-production of intermediate size fragments. The origin of these discrepancies is still unclear. Different explanations can be considered:
a) the identification of the source of the fragments is more difficult at lower collisional energies, where the separation between the projectile, the target, and the fused system is not as clear as in the case of high energy collisions, thus leading to possible contamination between these different sub-systems.
b) different fragmentation mechanisms could result from the smaller relative velocities between projectile and target, particularly in the case of fusion at small impact parameter, inducing possible compression effects.
These discrepancies, more generally, could just signal the fact that the assumption that fragmenting nuclei behave as “simple fluids” breaks down at these lower incident energies.
Nuclear fragmentation experiments are often analyzed with the so-called Statistical Multifragmentation Models . In brief, these models deal with the equilibrium thermodynamics of ensembles of spherical drops of nuclear matter confined in a “freeze-out” volume. Drops interact with each other only through the long-range Coulomb force, and their internal partition function is taken from empirical mass formulas or from experiment. These models (which have also been implemented to study the fragmentation of atomic clusters ) are successful in describing the above-mentioned low bombarding energy experiments when the size, the density, and the excitation energy of the fragmenting source are fixed. For the high energy Aladin data, similar analyses have been performed, but with a larger set of input parameters.
The use of percolation theory concepts is, however, more comprehensive and much easier to handle: it provides excellent agreement with the data without requiring any adjustable parameter.
We hope that the present results will encourage both theoretical and experimental studies of the fragmentation of simple fluids. The fragmentation by collisions of very large aggregates seems particularly promising, because it combines the experimental possibility to detect fragments with a reduction of finite-size and surface-correction effects.
We thank the Aladin collaboration at G.S.I. for allowing us to use their experimental data. |
# ROTATION, PLANETS, AND THE “SECOND PARAMETER” OF THE HORIZONTAL BRANCH
## 1 INTRODUCTION
In recent years $`\sim 20`$ Jupiter-like planets have been discovered around main sequence stars. This has further increased the great interest in the mechanisms of planet formation, in particular at close orbital separations. Another aspect of the presence of planets is their influence on the central star and its evolution. The processes, some of which were studied before the discovery of extrasolar planets, include the transfer of a massive planet into a star through accretion from the envelope of a giant star (Eggleton 1978; Livio 1982; Livio & Soker 1984; Soker, Harpaz, & Livio 1984), formation of SiO masers in the magnetospheres of several gas-giant planets around Mira stars (Struck-Marcell 1988), formation of R Coronae Borealis stars through an enhanced mass loss rate caused by a planet inside the envelope of an evolved star (Whitney, Soker, & Clayton 1991), the formation of elliptical planetary nebulae by planets spiraling-in inside the envelope of AGB stars (Soker 1992; 1996; 1997), evaporation of planets deep in the envelopes of giants (Harpaz & Soker 1994; Siess & Livio 1999a,b), and the enhancement of metallicity by planet evaporation inside main sequence stars (Sandquist et al. 1998).
In this paper we examine the evolution of the angular momentum of stars evolving from the RGB to the HB, after being spun-up on the RGB by planets. To account for fast-rotating HB stars, Peterson, Tarbell & Carney (1983) already mentioned the possibility that planets can spin-up RGB stars, though they later abandoned this idea. Following the reasons given by Soker (1998), we think that planets are required to account for the fast-rotating HB stars. Pinsonneault, Deliyannis, & Demarque (1991) argue that single stars can account for the fast HB rotators. We disagree with them on two points. First, they require that the cores of main sequence stars rotate much faster than the envelopes. Soker (1998; also Behr et al. 1999) gives reasons to question this assumption. Second, they require the core of the RGB star to possess higher angular momentum than the envelope. Since the core of RGB stars is very small, this seems to us an unreasonable assumption. Possible support for our claim is the rotation velocity plot of the globular cluster M13 presented by Peterson, Rood, & Crocker (1995) and Behr et al. (1999), which shows that hotter HB stars ($`10,500\lesssim T_{\mathrm{eff}}\lesssim 20,000\mathrm{K}`$) have lower rotation velocities than cooler ($`8,000\lesssim T_{\mathrm{eff}}\lesssim 10,000\mathrm{K}`$) stars. Not only do the hotter HB stars in these samples rotate more slowly, they are also smaller and less massive than the cooler stars; hence they have much less angular momentum than the cooler HB stars. If the angular momentum comes from the core after most of the envelope has been lost on the RGB, we would expect to find the same angular momentum distribution along the HB. This problem with single-star evolution was noted by Behr et al. (1999) as well. We attribute the much lower angular momentum of hotter HB stars to angular momentum loss on the RGB, and possibly during the contraction toward the HB and on the HB.
In our proposed scenario, stars which do not interact with any massive planet, brown dwarf, or a low-mass main sequence star rotate very slowly and do not lose much mass on the RGB. Such are, for example, the slowly rotating RR Lyrae variables (Peterson, Carney, & Latham 1996).
We also accept the scenario proposed by Soker (1998) to account for some anomalies on the HB, and the view that the presence of planets and brown dwarfs is one of the factors which determine the “second parameter” (see also Siess & Livio 1999b). The HB morphologies, i.e., the distribution of stars on the HB of a stellar system, differ substantially from one globular cluster to another. It has long been known that metallicity is the main factor which determines the location of HB stars on the HR diagram. For more than 30 years, though, it has been clear that another factor is required to explain the variation in HB morphologies among globular clusters with similar metallicity (Sandage & Wildey 1967; van den Bergh 1967; see reviews by Fusi Pecci & Bellazzini 1997; Rood, Whitney, & D’Cruz 1997; Rood 1998; de Boer 1998). This factor is termed the second parameter of the HB. It seems that stellar companions alone cannot be the second parameter (e.g., Rich et al. 1997), nor any other single factor which has been examined (e.g., Ferraro et al. 1997 and references therein). We think that the presence of low mass stars and of planets (or brown dwarfs) is the main second parameter factor (but probably not the only one), with planets occurring more frequently.
The main role, but not the only one, of the planets is the spinning-up of the RGB envelope. It is commonly accepted now that rotation has a connection with the second parameter, probably through its role in determining the mass loss on the RGB, directly or indirectly. We agree with this assertion, and further claim that the source of angular momentum in many cases is the interaction with a planet. The arguments to support this claim, the scenario proposed by Soker (1998), and the aim of the present work are summarized in $`\mathrm{\S }2`$. In $`\mathrm{\S }3`$ we follow the evolution of angular momentum from the RGB to the HB and along the HB, and apply the results to the stars of the globular cluster M13. Our summary is in $`\mathrm{\S }4`$.
## 2 PLANETS, ROTATION, AND MASS LOSS
In recent years it has become a common view that the second parameter determines the HB morphology by regulating the mass loss on the RGB (e.g., Dorman, Rood, & O’Connell 1993; D’Cruz et al. 1996; Whitney et al. 1998; D’Cruz et al. 2000; Catelan 2000). According to this view, the extreme HB (EHB) stars, for example, lose almost all their envelope while still on the RGB (Dorman et al. 1993; D’Cruz et al. 1996). On the RGB, which is the stage prior to the HB and before core helium ignition, the star is large and luminous and has a high mass loss rate. Sweigart & Catelan (1998; also Sweigart 2000 and Moehler, Sweigart, & Catelan 1999) claim that mass loss on the RGB by itself cannot be the second parameter, and that it must be supplemented by another process, e.g., rotation, or helium mixing, which itself requires rotation. They term the addition of such a process a “noncanonical scenario”. Behr et al. (1999) find the second parameter problem to be connected with rotation, and note that single star evolution cannot explain the observed rotation of HB stars, even when fast core rotations are considered. The rich variety of HB morphologies (e.g., Catelan et al. 1998) suggests that there is a rich spectrum in the factor(s) behind the second parameter.
We argue, following Soker (1998), that these factors are the masses and orbital separations of the companions, which in most cases are planets and brown dwarfs, and in the remaining cases low-mass main sequence stars. That is, we claim, like Siess & Livio (1999b), that the “noncanonical scenarios” (e.g., Sweigart & Catelan 1998; Sweigart 2000) involve interaction with planets. There is no reason to reject the idea that planets similar to those discovered recently in the solar neighborhood exist in old star clusters (Ford et al. 1999). Ford et al. (1999) argue for the presence of a primordial star-planet system in the PSR B1620-26 system in the globular cluster M4, holding that this finding may hint at the presence of many planets in globular clusters. The companions can influence the mass loss rate in several ways (Soker 1998; Siess & Livio 1999b). First, as the companion spirals-in inside the RGB stellar envelope it deposits (positive) gravitational energy which can remove part of the envelope. Second, the companion spins-up the envelope. Angular momentum may enhance the mass loss rate, mainly near the equator, by, e.g., enhancing magnetic activity. Third, the rotation may mix helium into the envelope (Sweigart 1997a,b; Sweigart & Catelan 1998). The higher helium abundance increases the RGB tip luminosity, hence the total mass loss on the RGB, leading to the formation of blue HB stars. Sweigart (1997a,b) suggests that this can explain the second parameter, though he does not mention the required angular velocity or how his model accounts for the different groups on the HB. Whitney et al. (1998) show that the mixing mechanism of Sweigart cannot explain the data presented in their paper, while Ferraro et al. (1997), Grundahl et al. (2000), and Charbonnel, Denissenkov, & Weiss (2000) discuss other difficulties with the assumption that the mixing mechanism proposed by Sweigart is always important.
In the scenario proposed by Soker (1998), helium mixing is only one of several processes caused by planets, so the comments raised by these works do not contradict the model.
In the scenario proposed by Soker (1998), the planets which enter the envelope of RGB stars can end in one of three ways (Livio & Soker 1984; also Siess & Livio 1999b): ($`i`$) evaporation of the planet in the envelope, before the entire envelope is lost; ($`ii`$) collision of the planet with the core, i.e., the planet overflows its Roche lobe, before the entire envelope is lost (a planet of radius $`0.1R_{\odot }`$ and a mass of $`0.01M_{\odot }`$ will overflow its Roche lobe when at $`1R_{\odot }`$ from the core); and ($`iii`$) expulsion of the envelope while the planet survives the common envelope evolution. Soker suggests that these three routes may explain the three subgroups found by Sosin et al. (1997) in the blue HB of the globular cluster NGC 2808. In the original scenario, the three routes were determined mainly by the secondary mass $`M_s`$. The first route, evaporation, occurs for $`M_s\lesssim 1M_J`$, where $`M_J`$ is Jupiter’s mass; the second route, of core collision, occurs for $`1M_J\lesssim M_s\lesssim 10M_J`$; while the third route, of surviving companions, occurs for $`M_s\gtrsim 10M_J`$.
Soker claims also that: “It is clear that not only planets play a role in the proposed second parameter mechanism due to companions, since, for example, stellar binary companions and occasional collisions with passing stars can also influence the mass loss on the RGB”. Soker (1998) estimates that in $`40\%`$ of the cases the interaction of the RGB star is with a stellar companion rather than with a planet. When publishing the model, Soker (1997; 1998) was not aware of a new class of objects named EC14026 stars (e.g., Kilkenny et al. 1997; Koen et al. 1997; Koen et al. 1998, and references therein) and their relation to EHB stars (Bono & Cassisi 1999). The EC14026 stars are sdB stars which show very rapid light variations, resulting from pulsations in several modes. Low-mass binary companions have been detected in several of these stars. PG 1336-018, for example, has a secondary of mass $`0.15M_{\odot }`$ with an orbital period of $`0.1`$ days (Kilkenny et al. 1998). Maxted et al. (2000) find the orbital periods and minimum companion masses of two sdB stars: $`0.63M_{\odot }`$ and $`8.33`$ days for PG 0940+068, and $`0.09M_{\odot }`$ and $`0.599`$ days for PG 1247+554. For others, the companion, if it exists, is limited to spectral type M0 or later (e.g., PG 1605+072, Koen et al. 1998; PG 1047+003, O’Donoghue et al. 1998). For these systems, we suggest that the companion may be a brown dwarf or a massive planet as well. Allard et al. (1994) estimate that $`60\%`$ of hot B subdwarfs have binary stellar companions. Here again, the stars with no stellar companion may have a substellar companion. Green et al. (1998) argue that their “investigations in open clusters and the field strongly suggest that most metal-rich BHB \[blue HB\] stars are in binaries”. The EC14026 members with surviving low-mass stellar companions support the scenario proposed by Soker (1997, 1998), and suggest that this scenario can be extended to include low-mass main sequence stars.
Therefore, the third route, where the companions survive at small orbital separations, should include low-mass stars as well as massive planets and brown dwarfs. When low-mass stellar companions are included, the lower limit on the mass of surviving secondaries can be raised, without causing difficulties for the model, from $`10M_J`$ to several $`\times 10M_J`$. This enriches the varieties of HB stars that can be formed through binary interactions.
The main effects of a binary companion as it spirals-in inside the envelope, or at an earlier phase outside the envelope through tidal interaction, are the deposition of gravitational energy and angular momentum (Soker 1998). The RGB star, now rotating faster, is likely to increase its mass loss rate, lose more mass on the RGB, and hence turn into a bluer HB star. But many blue HB stars retain some of the envelope mass, which means that the RGB star does not lose its entire envelope. In this paper we examine the evolution of the angular momentum in these stars, which possess high angular momentum, as they contract and evolve toward and on the HB.
Before turning to the calculation of angular momentum evolution we estimate the angular momentum deposited by a low-mass companion (Soker 1998). When the star evolves along the RGB it expands slowly. When its radius $`R`$ becomes $`20\%`$ of the orbital separation $`a_0`$, tidal forces will cause the substellar companion orbit to decay in a time shorter than the evolutionary time (Soker 1996), thus forming a common envelope phase. The angular velocity of the envelope $`\omega `$ can be estimated as follows. The envelope’s moment of inertia is $`I_{\mathrm{env}}=\alpha M_{\mathrm{env}}R^2`$, where $`M_{\mathrm{env}}`$ is the envelope’s mass and $`R`$ is the stellar radius (we neglect the core’s moment of inertia since it is very small), and $`\alpha 0.1`$ for upper RGB stars (see next section). The final envelope angular momentum $`I\omega `$ is equal to the planet’s initial orbital angular momentum $`J_p=M_p(GM_1a_0)^{1/2}=M_p\omega _{\mathrm{Kep}}(a_0R^3)^{1/2}`$, where $`\omega _{\mathrm{Kep}}`$ is the Keplerian angular velocity on the RGB star’s surface, $`M_p`$ is the planet’s mass and $`M_1`$ is the primary’s total mass. The envelope angular velocity is
$`{\displaystyle \frac{\omega }{\omega _{\mathrm{Kep}}}}=0.1\left({\displaystyle \frac{M_p}{0.01M_{\mathrm{env}}}}\right)\left({\displaystyle \frac{a_0}{R}}\right)^{1/2}\left({\displaystyle \frac{\alpha }{0.1}}\right)^{-1}.`$ (1)
Wide stellar companions ($`a_0\lesssim 5\mathrm{AU}`$) can deposit angular momentum via tidal interaction, leading to effects similar to those of planets.
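As a quick numerical check of equation (1), the spin-up can be evaluated directly from $`J_p=M_p\omega _{\mathrm{Kep}}(a_0R^3)^{1/2}`$ and $`I=\alpha M_{\mathrm{env}}R^2`$ (a sketch with the fiducial values of the text; the function name is ours):

```python
def spin_up_ratio(mp_over_menv, a0_over_r, alpha=0.1):
    """omega/omega_Kep of the envelope after it absorbs the planet's orbital
    angular momentum: J_p = M_p * omega_Kep * (a0 * R^3)^(1/2) together with
    I = alpha * M_env * R^2 gives omega/omega_Kep = (M_p/M_env)(a0/R)^(1/2)/alpha.
    """
    return mp_over_menv * a0_over_r ** 0.5 / alpha

# fiducial normalization of equation (1): M_p = 0.01 M_env, a0 = R, alpha = 0.1
ratio = spin_up_ratio(0.01, 1.0)
```

For the tidal-capture condition $`R\sim 0.2a_0`$ quoted above, the same formula shows the envelope reaching a few per cent of the Keplerian angular velocity even for planetary-mass companions.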
## 3 ANGULAR MOMENTUM EVOLUTION
### 3.1 Angular Momentum Loss
Not considering magnetic influence beyond the stellar surface, and assuming solid-body rotation throughout the stellar envelope, the angular momentum loss rate from stars is
$`\dot{J}_{\mathrm{wind}}=\beta \omega R^2\dot{M},`$ (2)
where $`\omega ,J,R,M`$ are the stellar angular velocity, angular momentum, radius, and mass, respectively, and $`\beta `$ depends on the mass loss geometry. For a constant mass loss rate per unit area on the surface $`\beta =2/3`$, while for an equatorial mass loss $`\beta =1`$. The angular momentum of the star is $`J_{\mathrm{env}}=I\omega `$, where $`I`$ is the moment of inertia given by
$`I=\alpha M_{\mathrm{env}}R^2,`$ (3)
where $`M_{\mathrm{env}}`$ is the envelope mass; we neglect the core’s moment of inertia relative to that of the envelope, and the change in the core mass at late RGB stages.
$`\dot{J}_{\mathrm{env}}=\dot{I}\omega +I\dot{\omega }={\displaystyle \frac{dI}{dM_{\mathrm{env}}}}\dot{M}\omega +I\dot{\omega }.`$ (4)
Dividing equation (2) by equation (3) multiplied by $`\omega `$, gives also
$`{\displaystyle \frac{d\mathrm{ln}J_{\mathrm{env}}}{d\mathrm{ln}M_{\mathrm{env}}}}={\displaystyle \frac{d\mathrm{ln}\omega }{d\mathrm{ln}M_{\mathrm{env}}}}+{\displaystyle \frac{d\mathrm{ln}I}{d\mathrm{ln}M_{\mathrm{env}}}}={\displaystyle \frac{\beta }{\alpha (M_{\mathrm{env}})}}\equiv \delta .`$ (5)
If the structure of the atmosphere does not change much with mass loss, and the density is given by $`\rho \propto r^{-2}`$, then $`\alpha =2/9`$ and $`d\mathrm{ln}I/d\mathrm{ln}M_{\mathrm{env}}=1`$, for which we find $`d\mathrm{ln}\omega /d\mathrm{ln}M_{\mathrm{env}}=2`$, for $`\beta =2/3`$ (Harpaz & Soker 1994). This case is appropriate for the upper AGB. On the RGB, if the envelope mass is not too low, we can take $`\alpha \simeq 0.1`$ (see below), for which we find from equation (5) that the total envelope angular momentum decreases as (neglecting changes in the core mass) $`J_{\mathrm{env}}\propto M_{\mathrm{env}}^\delta `$, where $`\delta =\beta /\alpha \simeq 6`$–$`7`$.
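For constant $`\alpha `$, equation (5) integrates to $`J_{\mathrm{env}}\propto M_{\mathrm{env}}^{\beta /\alpha }`$. A short sketch (illustrative numbers only; the helper names are ours) verifies the closed form against a direct integration of $`dJ/dM=(\beta /\alpha )J/M`$:

```python
def spin_down_factor(m_final, m_init, beta=2.0 / 3.0, alpha=0.103):
    """J_final/J_init for a wind obeying d ln J / d ln M_env = beta/alpha."""
    return (m_final / m_init) ** (beta / alpha)

def spin_down_numeric(m_final, m_init, beta=2.0 / 3.0, alpha=0.103, n=100000):
    """Euler integration of dJ/dM = (beta/alpha) * J / M as a cross-check."""
    m, j = m_init, 1.0
    dm = (m_final - m_init) / n
    for _ in range(n):
        j += (beta / alpha) * j / m * dm
        m += dm
    return j

# halving the envelope mass removes ~99% of its angular momentum for delta ~ 6.5
print(spin_down_factor(0.15, 0.3))
```

This steep dependence is the reason a star that loses most of its envelope on the RGB reaches the HB with almost no angular momentum unless it was strongly spun-up.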
### 3.2 The Angular Momentum of Stars in M13
We now analyze the rotation velocities of HB stars in the globular cluster M13, as presented by Behr et al. (1999), with the goal of finding the total angular momentum each star had on the RGB. For the purpose of this simple analysis, we can make the following simplifying assumptions:
(1) We assume that the angular momentum was deposited to the RGB star before it had lost much of its envelope (or, if no interaction with a stellar or substellar companion occurred, the angular momentum is that which the star had on the main sequence). We take the envelope mass prior to the intensive mass loss episode to be $`M_0=0.3M_{\odot }`$, as appropriate for a well developed core on the upper RGB. For the situation where the deposition of angular momentum occurred after the envelope mass was already reduced to, for example, $`M_0=0.2M_{\odot }`$, the angular momentum in each case will be reduced by $`(2/3)^\delta =0.2`$ (0.07), for $`\delta =4`$ (6.5).
(2) We neglect core evolution during the intense mass loss rate. This is reasonable, as most of the mass is lost on the upper RGB.
(3) We assume that the angular momentum evolves according to equation (5), with a constant value of $`\alpha `$, and with $`\beta =2/3`$.
(4) For the envelope mass and luminosity of the HB stars in the interesting region $`7,000<T_{\mathrm{eff}}<20,000\mathrm{K}`$, we approximate the models of Dorman et al. (1993) and D’Cruz et al. (1996) for \[Fe/H\]=-1.5, by taking for the envelope mass (in $`M_{\odot }`$)
$`M_{\mathrm{HBe}}=1.370.31\mathrm{log}T_{\mathrm{eff}},`$ (6)
and for the luminosity (in $`L_{\odot }`$)
$`\mathrm{log}L=5.140.9\mathrm{log}T_{\mathrm{eff}},`$ (7)
where $`T_{\mathrm{eff}}`$ is the effective temperature (in K). The stellar radius is calculated from $`L`$ and $`T_{\mathrm{eff}}`$.
(5) Like Pinsonneault et al. (1991) we neglect structural changes along the considered segment of the HB, and take the moment of inertia of all HB models to be $`I_{\mathrm{HB}}=0.01M_{\mathrm{HBe}}R^2`$, i.e., $`\alpha _{\mathrm{HB}}=0.01`$. Any relative change in the value of $`\alpha _{\mathrm{HB}}`$ is much smaller than the uncertainties in the values of $`\alpha `$ on the RGB, hence of $`\delta `$.
These assumptions allow us to find the present angular momentum of the HB stars,
$`J_{\mathrm{HB}}=0.01M_{\mathrm{HBe}}Rv_r,`$ (8)
where $`v_r`$ is the rotation velocity. The rotation velocity is taken to be the value of $`v\mathrm{sin}i`$, as given for M13 by Behr et al. (13 stars) and Peterson et al. (1995) (25 stars; for the 4 stars common to these two works we take values from Behr et al.). From equation (5), the total angular momentum that the RGB progenitor had before the intensive mass loss started is, with assumption (1) above,
$`J_0=J_{\mathrm{HB}}(M_{\mathrm{HBe}}/M_0)^{-\delta }.`$ (9)
To find the value of $`\alpha `$ on the RGB we evolved an RGB model. The model was based on a previous work by one of us (Harpaz & Shaviv 1992), hence we used a solar composition. However, since we are interested in the value of $`\alpha `$, which is the ratio of the envelope moment of inertia to $`M_{\mathrm{env}}R^2`$, the exact values of the RGB radius, luminosity, and core mass are not so important. Indeed, the value of $`\alpha `$ changes only slightly for different radii in the range $`90`$–$`120R_{\odot }`$, core masses of $`>0.4M_{\odot }`$, and envelope masses in the range $`0.05`$–$`0.3M_{\odot }`$, and its value on the upper RGB is $`\alpha \simeq 0.1`$. More accurately, we can approximate the value of $`\alpha `$ in the range $`0.03M_{\odot }\lesssim M_{\mathrm{env}}\lesssim 0.3M_{\odot }`$ by $`\alpha =0.1+1.1\times 10^{-4}(M_{\mathrm{env}}/M_{\odot })^{-1.6}`$. As the envelope mass decreases below $`0.03M_{\odot }`$, the RGB envelope shrinks, and $`\alpha `$ increases at a slower rate, becoming only $`\alpha \simeq 0.16`$ for $`M_{\mathrm{env}}=0.01M_{\odot }`$. We find that an appropriate average value is $`\alpha =0.103`$, which gives $`\delta (\mathrm{RGB})=\beta /\alpha \simeq 6.5`$ (for $`\beta =2/3`$). We use this value for $`\delta `$ even though it decreases (since $`\alpha `$ increases) for very low envelope masses, e.g., $`\delta \simeq 5.9`$ for $`M_{\mathrm{env}}=0.05M_{\odot }`$, since for the blue HB stars in the sample used here the required angular momentum is very large, and the evolution is more complicated. That is, other uncertainties in the interaction process of the more massive companion with the RGB envelope dominate, as we discuss later. For example, angular momentum loss may be even larger, since $`\beta >2/3`$, at early stages for the RGB progenitors of the blue HB stars in the sample.
In Figure 1 we plot the RGB angular momentum for $`M_0=0.3M_{\odot }`$, and for $`\delta =4`$ (Fig. 1a; upper panel) and $`\delta =6.5`$ (Fig. 1b; lower panel). We present the results for $`\delta =4`$ as well as for the expected value of $`\delta \simeq 6.5`$, since we find a more or less uniform distribution in $`J_0`$ (see Fig. 1a). The large symbol on the right-hand side of Figure 1b stands for a star which had $`J_0=6.05\times 10^{51}\mathrm{g}\mathrm{cm}^2\mathrm{s}^{-1}`$; its value was reduced by a factor of 10 to fit into the graph. The value of $`J_0`$ will be reduced by a factor of 10 if we take $`M_0=0.17M_{\odot }`$ and $`0.21M_{\odot }`$, for $`\delta =4`$ and $`\delta =6.5`$, respectively. Of course, the mass of the envelope when angular momentum was deposited to it changes from one star to another, but we have no way of telling the value of $`M_0`$ from the properties of the star on the HB.
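Assumptions (1)–(5) can be collected into a short script (a sketch: the physical constants and the sample star are illustrative choices of ours, not entries from the Behr et al. or Peterson et al. tables):

```python
import math

# cgs constants (standard values; any recent compilation will do)
MSUN = 1.989e33       # g
LSUN = 3.839e33       # erg/s
SIGMA = 5.6704e-5     # Stefan-Boltzmann constant

def rgb_angular_momentum(teff, vsini_kms, m0=0.3, delta=6.5):
    """J_0 (g cm^2 s^-1) of the RGB progenitor, from the HB star's T_eff (K)
    and rotation velocity (km/s), following eqs. (6)-(9)."""
    m_env = 1.37 - 0.31 * math.log10(teff)              # eq. (6), in Msun
    lum = 10.0 ** (5.14 - 0.9 * math.log10(teff))       # eq. (7), in Lsun
    # stellar radius from L = 4 pi R^2 sigma T^4
    radius = math.sqrt(lum * LSUN / (4.0 * math.pi * SIGMA * teff ** 4))
    j_hb = 0.01 * m_env * MSUN * radius * vsini_kms * 1e5    # eq. (8)
    return j_hb * (m_env / m0) ** (-delta)              # eq. (9)

# an illustrative cool, fast-rotating HB star (not a catalogued object)
print("J_0 = %.2e g cm^2 s^-1" % rgb_angular_momentum(9000.0, 35.0))
```

For such a star the result is of order $`10^{50}\mathrm{g}\mathrm{cm}^2\mathrm{s}^{-1}`$, i.e., on the scale of Figure 1b.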
Although the angular momentum on the HB is $`J_{\mathrm{HB}}\sim J_{\odot }`$, where $`J_{\odot }`$ is the angular momentum of the Sun, as noted already by Pinsonneault et al. (1991), the angular momentum required on the RGB in our preferred scenario ($`\delta =6.5`$ and $`M_0=0.3M_{\odot }`$) is up to three orders of magnitude higher. Even if we take $`M_0=0.2M_{\odot }`$ and $`\delta =4`$, the required angular momentum is up to $`\sim 5J_{\odot }`$. As noted by Peterson et al. (1983), such angular momenta require a planet to spin-up the RGB envelopes. The orbital angular momentum of a planet (or any other companion of mass $`M_p`$) is
$`J_p=8\times 10^{49}\left({\displaystyle \frac{M_p}{M_J}}\right)\left({\displaystyle \frac{M_{10}}{0.9M_{\odot }}}\right)^{1/2}\left({\displaystyle \frac{a}{1\mathrm{AU}}}\right)^{1/2}\mathrm{g}\mathrm{cm}^2\mathrm{s}^{-1},`$ (10)
where $`M_J`$ is Jupiter’s mass, $`M_{10}`$ is the initial stellar mass, and $`a`$ is the initial orbital separation. Jupiter-like planets or lighter planets at orbital separations of $`\sim 1\mathrm{AU}`$ can account for the angular momentum of $`\sim 30\%`$ of the stars in Figure 1b. Allowing planet masses of up to $`5M_J`$ and orbital separations of up to $`2\mathrm{AU}`$, 34 out of the 38 stars in the sample can be accounted for.
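Equation (10) can be evaluated and inverted numerically (a sketch; the helper names and cgs constants are ours) to reproduce the fiducial value $`8\times 10^{49}\mathrm{g}\mathrm{cm}^2\mathrm{s}^{-1}`$ and the companion mass needed for the most extreme star of Figure 1b:

```python
# cgs constants (standard values)
G = 6.674e-8
MSUN = 1.989e33
MJUP = 1.898e30
AU = 1.496e13

def planet_j(mp_mjup, a_au, m1_msun=0.9):
    """Orbital angular momentum J_p = M_p (G M_1 a)^(1/2), in g cm^2 s^-1."""
    return mp_mjup * MJUP * (G * m1_msun * MSUN * a_au * AU) ** 0.5

def required_mass_mjup(j0, a_au, m1_msun=0.9):
    """Companion mass (Jupiter masses) needed to supply j0 at separation a_au."""
    return j0 / (MJUP * (G * m1_msun * MSUN * a_au * AU) ** 0.5)

print("Jupiter at 1 AU: %.1e g cm^2 s^-1" % planet_j(1.0, 1.0))
print("extreme star needs %.3f Msun at 1 AU"
      % (required_mass_mjup(6.05e51, 1.0) * MJUP / MSUN))
```

The second print recovers the $`\sim 0.07M_{\odot }`$ companion quoted in the text for the star with $`J_0=6.05\times 10^{51}\mathrm{g}\mathrm{cm}^2\mathrm{s}^{-1}`$.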
The most extreme case, with $`J_0=6\times 10^{51}\mathrm{g}\mathrm{cm}^2\mathrm{s}^{-1}`$, requires, according to our calculations, a companion of mass $`\sim 0.07M_{\odot }`$ ($`0.22M_{\odot }`$) at an orbital separation of $`1\mathrm{AU}`$ ($`0.1\mathrm{AU}`$). However, in our calculations we assumed that the deposition of angular momentum occurs within a short period, after which mass loss is a continuous process. The situation is more complicated when the angular momentum is large. This is seen by calculating the angular velocity of the RGB star after an angular momentum of $`J_p`$ is deposited to its envelope
$`{\displaystyle \frac{\omega }{\omega _{\mathrm{Kep}}}}=0.06\left({\displaystyle \frac{M_1}{0.8M_{\odot }}}\right)^{-1/2}\left({\displaystyle \frac{M_{\mathrm{env}}}{0.3M_{\odot }}}\right)^{-1}\left({\displaystyle \frac{R}{100R_{\odot }}}\right)^{-1/2}\left({\displaystyle \frac{\alpha }{0.1}}\right)^{-1}\left({\displaystyle \frac{J_p}{10^{50}\mathrm{g}\mathrm{cm}^2\mathrm{s}^{-1}}}\right).`$ (11)
Several things should be noted. First, when $`J_p\gtrsim 3\times 10^{50}\mathrm{g}\mathrm{cm}^2\mathrm{s}^{-1}`$ the angular velocity is $`\omega \gtrsim 0.2\omega _{\mathrm{Kep}}`$, and mass loss is expected to be concentrated toward the equatorial plane. This means that $`\beta >2/3`$ (see eq. 2), and $`\delta >6.5`$ at early stages. Second, for a required angular momentum of $`J_0\gtrsim 10^{51}\mathrm{g}\mathrm{cm}^2\mathrm{s}^{-1}`$, as for two stars in Figure 1b, the companion may bring the envelope to synchronization with the orbital motion before it enters the envelope. This means that the wind may carry even more angular momentum, since the companion keeps the envelope at a more or less constant angular velocity even after substantial mass has been lost. Third, if the companion keeps the envelope rotation synchronized with the orbital motion until the star goes through the helium flash and starts its contraction toward the HB, the star will reach the HB with a very high angular velocity (because of the contraction). This must result in substantial mass loss during the contraction phase, which will slow down the star by orders of magnitude (see next subsection). These three points show that the evolution of the precursors of rotating blue HB stars may be very complicated. In any case, we conclude that low-mass main sequence stars, or even brown dwarfs and planets, can explain the angular momentum of all stars in this sample. Models based on single stars cannot account for the fast rotators on the HB.
The main factor that determines the mass loss during a common envelope evolution is the mass of the companion, while the angular momentum is determined by the orbital separation as well. Assuming that the distributions of initial orbital separations and initial masses of the companions are independent, we expect the most massive companions to cause a higher mass loss rate as well as to deposit more angular momentum. This is compatible with the distribution in Figure 1b, where the RGB progenitors of HB stars which have less envelope mass had more angular momentum.
### 3.3 Angular Velocity Evolution along the HB
An HB star slows down very quickly with mass loss on the HB, since $`\alpha \simeq 0.01`$ on the HB. For example, for $`\alpha =0.01`$ the star will slow down by a factor of 2 (30) by losing only $`1\%`$ ($`5\%`$) of its envelope mass. Therefore, envelope rotation on the HB, even if it enhances the mass loss rate, will not change the evolution on the HB much, since the star will slow down before losing much mass. As an example let us examine the rotation velocities in M13 (Fig. 7b of Peterson et al. 1995). Hot stars ($`10,500\lesssim T_{\mathrm{eff}}\lesssim 13,000\mathrm{K}`$) have rotation velocities of $`\sim 10\mathrm{km}\mathrm{s}^{-1}`$. They cannot be the descendants of the cooler ($`8,000\lesssim T_{\mathrm{eff}}\lesssim 10,000\mathrm{K}`$), fast-rotating ($`\sim 40\mathrm{km}\mathrm{s}^{-1}`$) stars. To move by $`\sim 2000\mathrm{K}`$ to the left on the HR diagram (becoming hotter) on this part of the HB requires the envelope to lose $`\sim 20\%`$ of its mass \[e.g., D’Cruz et al. 1996; eq. (6) above\]. For $`\alpha \simeq 0.01`$ on this part of the HB (Pinsonneault et al. 1991), such a mass loss will reduce the angular velocity almost to zero.
There are no observations to indicate the mass loss rate of HB stars. The Reimers (1975) formula is not a good estimate of the mass loss of HB stars (Koopmann et al. 1994). Instead, Koopmann et al., as well as Demarque & Eder (1985), take a mass loss rate of up to $`10^{-9}M_{\odot }\mathrm{yr}^{-1}`$. A higher upper mass loss rate of $`3\times 10^{-9}M_{\odot }\mathrm{yr}^{-1}`$ was assumed for HB stars by Yong & Demarque (1997). Given these uncertainties, we stay with the Reimers (1975) mass loss rate
$`\dot{M}\simeq 2\times 10^{-11}\eta \left({\displaystyle \frac{L}{30L_{\odot }}}\right)\left({\displaystyle \frac{R}{R_{\odot }}}\right)\left({\displaystyle \frac{M}{0.5M_{\odot }}}\right)^{-1}M_{\odot }\mathrm{yr}^{-1},`$ (12)
where $`L,R,M`$ are the stellar luminosity, radius, and mass, respectively, and $`\eta `$ is a constant of order unity. The lifetime on the HB is $`\sim 10^8\mathrm{yr}`$. According to the above mass loss rate, HB stars will lose $`\sim 2\%`$ of their envelope mass during their lifetime on the HB. This will not change the location on the HB much, but taking $`\alpha =0.01`$ in equation (5), we find that the mass loss will slow down the stellar rotation by a factor of $`\sim 4`$.
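The numbers in this subsection follow from combining equation (12) with equation (5) for $`\alpha _{\mathrm{HB}}=0.01`$. A short sketch (the $`0.1M_{\odot }`$ envelope mass is an illustrative choice of ours):

```python
def reimers_mdot(lum_lsun, r_rsun, m_msun, eta=1.0):
    """Reimers (1975) mass loss rate of equation (12), in Msun/yr."""
    return 2e-11 * eta * (lum_lsun / 30.0) * r_rsun * (0.5 / m_msun)

def hb_spin_down(frac_lost, alpha=0.01, beta=2.0 / 3.0):
    """omega_initial/omega_final after losing a fraction frac_lost of the
    envelope, with d ln(omega)/d ln(M) = beta/alpha - 1
    (from I = alpha*M*R^2, with alpha and R taken constant on the HB)."""
    return (1.0 - frac_lost) ** (1.0 - beta / alpha)

# ~2e-11 Msun/yr over 1e8 yr removes ~2% of an (assumed) 0.1 Msun envelope,
# slowing the rotation by a factor of ~4, as quoted in the text
frac = reimers_mdot(30.0, 1.0, 0.5) * 1e8 / 0.1
print(frac, hb_spin_down(frac))
```

The same function reproduces the factor-of-2 (30) slow-down for a $`1\%`$ ($`5\%`$) envelope mass loss quoted at the start of this subsection.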
## 4 SUMMARY
Rotation along the RGB may increase the total mass loss, increase the core mass at the helium flash (Mengel & Gross 1976), and mix helium into the envelope (Sweigart & Catelan 1998, and references therein). Mengel & Gross (1976) find that the rotation velocity of the core should be $`\omega _{\mathrm{core}}>2\times 10^{-4}\mathrm{s}^{-1}`$, i.e., two orders of magnitude faster than the solar rotation velocity, in order to substantially influence the evolution along the RGB. We think that such high rotation velocities, even of the core alone, can be attained only via the interaction with a gas-giant planet, a brown dwarf, or a low-mass main sequence star. This led us, following Soker (1998; also Siess & Livio 1999b), to propose that planets are the main “second parameter” factor of the HB, although planets alone cannot explain all blue HB stars. Extreme HB stars seem to require interaction with a low-mass stellar companion, through a common envelope evolution or via tidal interaction with the companion outside the envelope. That stellar companions can lead to the formation of sdB stars was suggested by Mengel, Norris, & Gross (1976), through mass transfer. We here invoke different processes, namely tidal spin-up or common envelope evolution. Support for the common envelope evolution is the sdB star PG 1336-018, which has a secondary of mass $`0.15M_{\odot }`$ with an orbital period of $`0.1`$ days (Kilkenny et al. 1998; see also Maxted et al. 2000 for the sdB binary system PG 1247+554).
In this paper we analyze the angular momentum evolution from the RGB to the HB and along the HB. Using rotation velocities for stars in the globular cluster M13, we find that the required angular momentum for the fast rotators is up to 1–3 orders of magnitude (depending on some assumptions) larger than that of the sun. On the other hand, planets of masses up to five times Jupiter’s mass and with initial orbital separations up to $`2\mathrm{AU}`$ are sufficient to spin up the RGB progenitors of most of these fast rotators (others were spun up by brown dwarfs or low-mass main sequence stars). Our results show that the fast rotating HB stars must have been spun up by companions (planets, brown dwarfs, or low-mass main sequence stars) while they evolved on the RGB.
We argue that the angular momentum considerations presented in this paper further support the “planet second parameter” model. In this model, the “second parameter” process of the HB is interaction with low-mass companions, in most cases gas giant planets, and in a minority of the cases with brown dwarfs or low-mass main sequence stars. The masses and initial orbital separations of the planets (or brown dwarfs or low-mass main sequence stars) form a rich spectrum of different physical parameters, which manifests itself in the rich varieties of HB morphologies observed in the different globular clusters and elliptical galaxies.
###### Acknowledgements.
This research was supported in part by grants from the Israel Science Foundation and the US-Israel Binational Science Foundation.
FIGURE CAPTIONS
Figure 1: The total angular momentum of the progenitor RGB star prior to the intensive mass loss vs. effective temperature of the descendant HB star. The total angular momentum for each RGB star is calculated from equation (9) with $`M_0=0.3M_{\odot }`$ and a constant value of $`\delta `$. (a) For $`\delta =4`$ (upper panel). (b) For $`\delta =6.5`$ (lower panel). The large symbol on the right-hand side represents a star with $`J_0=6.05\times 10^{50}\mathrm{g}\mathrm{cm}^2\mathrm{s}^{-1}`$; its plotted value was reduced by a factor of 10 to fit into the graph.
# On the Reliability of Cross Correlation Function Lag Determinations in Active Galactic Nuclei
## 1 Introduction
Active galactic nuclei (AGN) often exhibit variable luminosity. In several highly variable AGN, the observed UV/optical emission–line fluxes are well correlated with the continuum variations, but with a time delay (e.g. Peterson 1988; Ulrich, Maraschi & Urry 1997). In effect, the line emission is a light echo of the photoionizing continuum. For Seyfert 1 galaxies, the well–measured time delays (“lags”) range from $`\sim `$1–80 d, and depend on which emission line is observed (the higher ionization lines respond more quickly to continuum variations than do lower ionization lines; see e.g. Clavel et al. 1991; Peterson et al. 1998b). The observed time delay gives a characteristic timescale that, under reasonable assumptions (i.e. the lines are responding to photoionization, the observed continuum closely mimics the photoionizing continuum, and the light–travel timescale is the most relevant timescale), provides a characteristic lengthscale (see e.g. Peterson 1988, 1993). Thus the observed lag between AGN continuum and emission–line light curves gives a measure of the size of the line–producing region, i.e., the broad–line region (BLR). Note that these lengthscales correspond to angular scales of microarcseconds on the sky, so “echo mapping” studies offer the potential for extremely high spatial resolution studies of AGN (e.g. the proceedings by Peterson and by Horne in Gondhalekar, Horne & Peterson 1994).
The Seyfert galaxy NGC 5548 has been the target of several variability studies and has been intensely monitored for several years (e.g. Peterson et al. 1999). Of particular interest are the time lag determinations for the H$`\beta `$ emission line with respect to the optical continuum. The lag (as defined by the peak of the cross correlation function) ranges from $`\sim `$8.7 d to $`\sim `$22.9 d over a span of 8 years (1989-1996) with an rms scatter of 4.5 d (Peterson et al. 1999). These variations have been interpreted as evidence for real structural changes in the BLR, either due to physical changes of the ensemble of BLR clouds or to changes in the photoionizing radiation field (e.g. Wanders & Peterson 1996). The dynamical timescale for the BLR ($`\frac{\mathrm{BLR}\;\mathrm{size}}{\mathrm{BLR}\;\mathrm{cloud}\;\mathrm{velocities}}\approx \frac{c\times \mathrm{lag}}{\mathrm{line}\;\mathrm{width}}`$) is on the order of a few years, so changes in the BLR structure on this timescale are a realistic possibility.
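As a quick arithmetic check of that estimate (the 20 d lag and 5000 km s$`^{-1}`$ line width are illustrative round numbers, not measurements from this paper):

```python
# Back-of-the-envelope BLR dynamical timescale, t_dyn ~ (c * lag) / (line width).
C_KMS = 2.998e5                    # speed of light, km/s
lag_days = 20.0                    # illustrative H-beta lag
width_kms = 5000.0                 # illustrative broad-line width
t_dyn_days = (C_KMS / width_kms) * lag_days
t_dyn_yr = t_dyn_days / 365.25     # of order a few years, as stated in the text
```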
However, there is considerable difficulty in determining the lag from AGN time series, in particular, because (i) the data are short in duration compared to the timescales of interest and (ii) the data are usually not equally sampled. Previous studies have investigated the issue of noisy and poorly sampled data, e.g., see the summary in Koen (1994). Several different methods for computing the CCF have been constructed: the interpolated CCF (“ICCF” — e.g. Gaskell & Peterson 1987); the discrete CCF method (“DCF” — Edelson & Krolik 1988); the inverse Fourier transform of the discrete power spectrum (Scargle 1989); and the Z–transform CCF (Alexander 1997). In addition, methods other than the CCF have been used to measure time lags, e.g., optimal reconstruction via minimizing $`\chi ^2`$ (Press, Rybicki & Hewitt 1992) or a parametric approach (i.e., modeling the light curves as random walks; Koen 1994). For comparisons of the DCF and ICCF methods, see e.g., Litchfield, Robson & Hughes (1995), White & Peterson (1994), and Rodriguez–Pascual, Santos–Lleo & Clavel (1989). Simulations have shown that these methods can provide reasonably accurate determinations of the lag under sampling and noise conditions similar to the actual observations (e.g. Peterson et al. 1998a; White & Peterson 1994, Koratkar & Gaskell 1991b). Hence the changing H$`\beta `$ lag in NGC 5548 seems quite real.
In this paper we consider two additional sources of “error” in the CCF lag determinations, which to our knowledge have not been fully addressed in the astronomical literature. The first is a bias inherent in the definition of the CCF such that, on average, the computed CCF lag is not the true lag. The second error is concerned with gross changes in the CCF due to changes in the ACF of the continuum. These changes in the ACF are not real, in that the underlying physical process generating the continuum variations have not changed; they are simply statistical fluctuations inherent in any finite realization of a stochastic process.
In §2 we define the autocorrelation function (ACF), cross correlation function (CCF) and transfer function and examine the relationships between them in the AGN echo mapping context. Several aspects of the CCF are examined in §3, and at the risk of being overly pedagogical, we present this material in detail because it is crucial to the interpretation and usefulness of the CCF. In particular, the bias inherent in the definition of the CCF and the error in the lag determination are highlighted. To illustrate and quantify these “problems” with the CCF, simulations tailored to match the characteristics of the H$`\beta `$ observations of NGC 5548 are presented in §4. The simulations clearly show the bias and large variance present in the CCF. Additional simulations are also presented to: (1) explore a wider range in parameter space, (2) help quantify the amount of bias and variance, and (3) illustrate how the bias and variance can be reduced by removing low frequency trends from the light curves. We conclude with a discussion of our results in §5.
## 2 The ACF, CCF and Transfer Function
### 2.1 The Standard Definitions of the ACF and CCF
The auto– and cross–correlation functions are standard tools of time series analysis (e.g. Jenkins & Watts 1969; Box, Jenkins & Reinsel 1994; Chatfield 1996; Wall 1996). The cross–correlation function, sometimes called the serial correlation function, quantifies the amount of similarity or correlation between two time series as a function of the time shift (i.e., the delay or “lag”) between the two time series. The auto–correlation function (ACF) measures the similarity of a single time series to a delayed version of itself.
The standard definition of the cross–correlation function of two time series $`x_i`$ and $`y_i`$ sampled at discrete times $`t_i`$ ($`i=1,\dots ,N`$) with equal sampling ($`\mathrm{\Delta }t=t_{i+1}-t_i`$) is:
$`CCF(\tau _k)\equiv {\displaystyle \frac{\frac{1}{N}{\displaystyle \underset{i=1}{\overset{N-k}{\sum }}}(x_i-\overline{x})(y_{i+k}-\overline{y})}{\left[\frac{1}{N}{\displaystyle \underset{i=1}{\overset{N}{\sum }}}(x_i-\overline{x})^2\right]^{1/2}\left[\frac{1}{N}{\displaystyle \underset{i=1}{\overset{N}{\sum }}}(y_i-\overline{y})^2\right]^{1/2}}}`$ (1)
where the lag $`\tau _k`$ is the size of the time shift: $`\tau _k=k\mathrm{\Delta }t`$, $`k=0,\dots ,N-1`$ and $`\overline{x}`$, $`\overline{y}`$ are the means of $`x_i`$ and $`y_i`$ (see e.g. Jenkins & Watts (1969), Chatfield (1996)). The ACF is similarly defined, with $`x_i`$ itself in place of $`y_i`$. \[NB: for negative lags, simply interchange $`x`$ and $`y`$.\]
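As an illustration, equation (1) transcribes directly into code (a sketch, not the implementation used in any of the studies cited; positive lags mean $`y`$ lags behind $`x`$):

```python
import numpy as np

def standard_ccf(x, y, max_lag):
    """Equation (1): 1/N normalization with global means and variances.

    Positive lags correspond to y lagging behind x.
    """
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    N = len(x)
    xm, ym = x - x.mean(), y - y.mean()
    denom = N * xm.std() * ym.std()       # N * sigma_x * sigma_y
    lags = np.arange(max_lag + 1)
    ccf = np.array([(xm[:N - k] * ym[k:]).sum() / denom for k in lags])
    return lags, ccf
```

At zero lag this reduces to the ordinary correlation coefficient, while at larger lags the sum runs over only $`N-k`$ terms yet is still divided by $`N`$, which is the source of the bias discussed in §3.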
It will be helpful to express the CCF more succinctly, and we will use the continuous definition to do so:
$`CCF(\tau )={\displaystyle \int x(t)y(t+\tau )𝑑t}`$ (2)
$`ACF(\tau )={\displaystyle \int x(t)x(t+\tau )𝑑t}`$ (3)
It should be explicitly stated that we use equations (2) & (3) only as shorthand representations of equation (1), as the discrete and continuous CCF are not the same. Also note that in this nomenclature $`\overline{x}`$ and $`\overline{y}`$ are by definition zero and the light curves have been normalized to unity variance.
### 2.2 The transfer function $`\mathrm{\Psi }`$
AGN broad emission–line light curves are not simply delayed copies of the continuum light curves. This can be understood as a geometrical effect, as first pointed out by Blandford & McKee (1982): the line–emitting region extends over a large volume of space, so the light travel time across the BLR is significant. The integrated line light curve thus appears as a delayed and blurred version of the continuum light curve.
Blandford & McKee (1982) expressed the relationship between the line emission $`L(t)`$ and the continuum emission $`C(t)`$ as:
$$L(t)=\int C(t-\tau )\mathrm{\Psi }(\tau )𝑑\tau .$$
(4)
The geometry and responsivity of the gas are contained in the “transfer function” $`\mathrm{\Psi }`$, and equation (4) simply states that the line light curve is equal to the continuum light curve convolved with the transfer function. The transfer function can be thought of as a “point spread function”, “impulse–response function”, or as a filter of a linear moving average process with the continuum as the driver (but note that in this interpretation the continuum is not an uncorrelated white noise process). Only if $`\mathrm{\Psi }`$ is a delta function will $`L(t)`$ be an identical (but lagged) version of $`C(t)`$ \[NB: we use the term “identical” loosely here: we mean $`L(t)`$ is identical to $`C(t)`$ within a scale factor and constant, allowing for $`\int \mathrm{\Psi }𝑑\tau \ne 1`$ and a background contribution to both the line and continuum. We also implicitly assume $`\mathrm{\Psi }(\tau <0)=0`$, i.e., $`\mathrm{\Psi }`$ is purely causal.\] Recovering the transfer function is a major goal of variability studies in AGN, as it contains information on the geometry and kinematics of the BLR. The reader is referred to Horne (1994), Pijpers & Wanders (1994), Krolik & Done (1995), Vio, Horne & Wamsteker (1994), and the proceedings in Gondhalekar, Horne & Peterson (1994) for a discussion of the transfer function and inverting equation (4) to solve for $`\mathrm{\Psi }`$.
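A minimal numerical sketch of equation (4), with an illustrative random-walk continuum and a top-hat transfer function (neither taken from this paper):

```python
import numpy as np

# L(t) = sum_tau C(t - tau) * Psi(tau): the line light curve is the
# continuum convolved with a causal transfer function. The red-noise
# continuum and top-hat Psi below are illustrative choices only.
rng = np.random.default_rng(0)
cont = np.cumsum(rng.normal(size=500))     # random-walk "continuum", 1 d sampling
psi = np.zeros(40)
psi[10:30] = 1.0                           # echoes delayed by 10-29 d
psi /= psi.sum()                           # normalize so sum(Psi) = 1
line = np.convolve(cont, psi)[:len(cont)]  # delayed, blurred copy of cont;
                                           # first ~40 samples are a warm-up
                                           # transient (implicit zeros at t<0)
centroid = (np.arange(len(psi)) * psi).sum()   # responsivity-weighted delay
```

Because the top-hat spreads the response over 20 days, `line` is a smoothed as well as delayed version of `cont`, which is exactly the blurring described in the text.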
### 2.3 The relationship between the ACF, CCF and transfer function
From the definitions it can be shown (e.g. Koratkar & Gaskell 1991a; Penston 1991; Sparke 1993) that
$$CCF(\tau )=\int _{-\mathrm{\infty }}^{\mathrm{\infty }}\mathrm{\Psi }(\tau ^{\prime })ACF_c(\tau -\tau ^{\prime })𝑑\tau ^{\prime },$$
(5)
i.e., the cross–correlation function is equal to the transfer function convolved with the auto–correlation function of the continuum. In this representation it becomes clear that the theoretical CCF is identical to a blurred version of the transfer function.
If the continuum light curve consists of a well–isolated sharp pulse, or equivalently, its power spectrum is white, then its ACF<sub>c</sub> is a delta function and the CCF is then identical to $`\mathrm{\Psi }`$. In this circumstance the peak of the CCF will occur at the same lag as the peak of $`\mathrm{\Psi }`$. More generally, the ACF is a broad and even function, so the peak of the CCF will not necessarily occur at the same lag as the peak in $`\mathrm{\Psi }`$. The lag determined from the CCF should be considered only a characteristic time scale.
Equation (5) concisely describes the fundamental issue we address in this paper: the determination of the lag between line and continuum light curves depends on both the shape of the transfer function and the ACF of the continuum. However, it is crucial to understand that eq. (5) is valid only in the infinite duration limit — for finite limits, it is straightforward to show the equality is not true.
## 3 Practical Issues
Just as with Fourier analysis, the difference between the mathematical CCF and the experimental (discrete and finite–sampled) CCF can be large. Thus it is useful to examine some of the details and practical issues concerning the computation of the CCF, with emphasis on the AGN echo mapping problem. Several of the issues discussed in this section will be illustrated with simulations presented in §4. Problems concerning unequal sampling have been discussed in the literature (see the references in the introduction) and will not be repeated here.
### 3.1 General aspects of the CCF
#### 3.1.1 “self correlation”
The individual points in the ACF and CCF are highly correlated with one another, i.e., neighboring points are not independent (a derivation can be found in Jenkins & Watts (1969)). In general, neighboring values in the ACF/CCF are more strongly correlated with one another than neighboring values in the original time series (Jenkins & Watts 1969). It is important to be aware of this “self–correlation” in the interpretation of the ACF/CCF because trends in the ACF/CCF are long–lived, e.g. it can take a surprisingly long time for the ACF/CCF to decay from a peak. This can also lead to spurious large values of the CCF, especially for time series whose ACFs contain intrinsically broad peaks — see e.g. Figs. 1–4 in Koen (1994).
#### 3.1.2 bias
The CCF as defined in equation (1) has the peculiar property that the summation in the numerator goes from $`i=1`$ to $`N-k`$, but the normalization is $`1/N`$, not $`1/(N-k)`$. This normalization is chosen primarily because the variance of the CCF is then reduced, i.e., the sample CCF is a better estimator of the true CCF in the mean square error sense when the $`\frac{1}{N}`$ normalization is used (Jenkins & Watts 1969). This normalization also guarantees that the CCF is always bounded by $`\pm 1`$, and the autocorrelation matrix is positive semi–definite so that the ACF and power spectrum are Fourier transform pairs — the Wiener–Khinchin theorem (e.g. Jenkins & Watts 1969). However, the $`\frac{1}{N}`$ normalization means that eq. (1) is only an asymptotically unbiased estimator of the correlation function, and its use introduces a well–known bias towards zero (e.g. Kendall 1954; Marriott & Pope 1954; Otnes & Enochson 1978; Chatfield 1996). This bias grows worse with increasing lag, and results in a triangular–shaped reduction of the CCF and hence underestimates the lag of the peak of the CCF. The trade–off between the bias and variance in an estimator is a common problem in statistics: often one must choose between precision (low variance) and accuracy (low bias). For the CCF, the bias goes as $`\frac{1}{N}`$ while the variance goes as $`\frac{1}{\sqrt{N}}`$; this is why reducing the bias has not been treated with equal importance as reducing the variance (Tjostheim & Paulsen 1983). It is argued that in most cases $`N\gg 1`$ and $`k\ll N`$, so one can tolerate a small bias to achieve a large reduction in variance. Furthermore, eq. (1) is extremely simple to compute. For these reasons the $`\frac{1}{N}`$ normalization has gained predominance.
In the case of AGN variability, however, the ACFs are intrinsically broad and the lags of interest are usually a substantial fraction of the duration of the observations, hence this bias can lead to a significant underestimation of the lag. As Scargle (1989) points out, this bias can be devastating. One could simply replace the $`\frac{1}{N}`$ term with $`\frac{1}{N-k}`$, but then the CCF has the very undesirable property that it can exceed unity. Indeed, it is likely to exceed unity because of the self–correlation in the CCF; the ACF at $`k`$=0 is forced by definition to be exactly 1, so due to self–correlation the value of the AGN ACF at $`k`$=1 will tend to be close to 1 as well, and so on for many lags. As shown by Marriott & Pope (1954) and by Kendall (1954), the bias in the estimated ACF at lag $`\tau `$ depends, in general, on the values of the true ACF at lags $`\tau `$ and earlier, a consequence of the strong self–correlation. Because of this, the bias cannot be removed a priori and the simple $`\frac{1}{N-k}`$ attempt at bias correction will in general fail.
In his method for coping with unequally sampled data, Scargle (1989) suggests renormalizing the ACF with the ACF of the sampling pattern, which will remove both the effects of this bias as well as the effects of irregular sampling. Scargle notes that this correction is on average equivalent to replacing the $`\frac{1}{N}`$ term with $`\frac{1}{N-k}`$, and so it also allows values of the CCF to exceed unity.
We find that in practice, the $`\frac{1}{N-k}`$ normalization has the side–effect of boosting noise at large lags, but even worse, the shape of the ACF can be modified, and peaks in the CCF at $`\tau \ne 0`$ can be shifted to longer lags. This is totally unacceptable for our purposes and so this proposed renormalization is rejected.
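The relation between the two normalizations can be made explicit with a short sketch (an illustrative random-walk series, not data from the paper): the $`1/N`$ estimator equals the $`1/(N-k)`$ estimator multiplied by the triangular taper $`(N-k)/N`$, which is exactly the triangular-shaped reduction toward zero described above.

```python
import numpy as np

# Compare the 1/N and 1/(N-k) ACF normalizations on a strongly
# autocorrelated (random-walk) series. They differ only by the
# deterministic triangular factor (N-k)/N.
rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=200))
x -= x.mean()
N = len(x)
var = (x ** 2).mean()
ks = np.arange(N // 2)
acf_overN  = np.array([(x[:N - k] * x[k:]).sum() / (N * var) for k in ks])
acf_overNk = np.array([(x[:N - k] * x[k:]).sum() / ((N - k) * var) for k in ks])
taper = (N - ks) / N          # acf_overN = acf_overNk * taper, exactly
```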
To the best of our knowledge, most of the effort in reducing the bias in the CCF as given in equation (1) has been motivated by the desire to accurately determine the Yule–Walker filter coefficients of a stochastic moving average (or autoregressive) process. These filter coefficients can be determined from the ACF, and in particular, the first few lags of the ACF. Since this usually corresponds to $`k\ll N`$, a “better” definition of the ACF has not been sought and instead bias–corrections have been developed to treat the specific problem of determining the filter coefficients (e.g. Kendall 1954; Tjostheim & Paulsen 1983; Marriott & Pope 1954). In their noteworthy analysis, Sutherland, Weisskopf & Kahn (1978) address the question of bias specifically for the ACF in the case of a noisy and finite length shot–noise light curve, and in particular, they derive a “partially unbiased” definition for the ACF. They show that in addition to the bias that is present in the noise–free case, there is an additional reduction of the ACF due to noise; this comes about because the normalization of the ACF depends on the variance of the light curve, and in the presence of noise, the variance itself is biased too high. Thus the value of the CCF depends on the signal–to–noise ratio (S/N) of the data. The motivation for their work was to deduce the correlation time constant for the shot noise in Cyg X-1. They show that this can be deduced in an unbiased fashion from the ratio of the ACF at lags $`k`$=1 and $`k`$=2. It is not obvious that their partially unbiased ACF, suitable for small lags, is applicable to the CCF at large lags but this is a topic worth pursuing.
Another possible method to reduce the bias in the ACF is to use the “jackknife” or Quenouille method (see e.g. Chatfield 1996): $`ACF^{\prime }=2ACF-\frac{1}{2}(ACF_1+ACF_2)`$, where $`ACF_1`$ is the ACF of the first half of the data set and $`ACF_2`$ is the ACF of the latter half. The bias in $`ACF^{\prime }`$ is reduced from order $`\frac{1}{N}`$ to $`\frac{1}{N^2}`$. Although this reduces the length of the already too short AGN time series by a factor of two, it does allow one to check the stationarity assumption between the two halves. While we have not investigated this bias correction method, it is worth consideration in future studies.
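A sketch of that jackknife recipe (assuming the half-series are long enough for the lags of interest; the helper names are ours, not from the literature):

```python
import numpy as np

def acf_biased(x, max_lag):
    """Standard 1/N-normalized sample ACF (equation (1) with y = x)."""
    x = np.asarray(x, float) - np.mean(x)
    N = len(x)
    var = (x ** 2).mean()
    return np.array([(x[:N - k] * x[k:]).sum() / (N * var)
                     for k in range(max_lag + 1)])

def acf_jackknife(x, max_lag):
    """Quenouille/jackknife correction: ACF' = 2*ACF - (ACF1 + ACF2)/2,
    with ACF1, ACF2 computed from the two halves of the series."""
    x = np.asarray(x, float)
    h = len(x) // 2
    full = acf_biased(x, max_lag)
    a1 = acf_biased(x[:h], max_lag)
    a2 = acf_biased(x[h:], max_lag)
    return 2.0 * full - 0.5 * (a1 + a2)
```

Comparing `a1` and `a2` along the way also provides the stationarity check between the two halves mentioned in the text.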
#### 3.1.3 peak or centroid?
The peak of the CCF gives the lag where the two time series are most highly linearly correlated. Yet the peak of the CCF need not correspond to the peak of the transfer function $`\mathrm{\Psi }`$ — indeed, the transfer function may not have a well–defined peak at all. The peak of the AGN line–continuum CCF tends to correspond to where the line echo response is most coherent, i.e. the innermost region of the BLR, and hence can underestimate the BLR size (e.g. Gaskell & Sparke 1986; Edelson & Krolik 1988; Robinson & Perez 1990; and especially Pérez et al. 1992).
To avoid the uncertainty in the interpretation of the peak of the CCF, the centroid (center of gravity) of the CCF is often quoted. In the infinite duration limit, the centroid of the CCF corresponds to the centroid of $`\mathrm{\Psi }`$ (e.g. Penston 1991; Koratkar & Gaskell 1991a<sup>1</sup><sup>1</sup>1The end result is correct, but the derivation contains an error.). As Penston points out, this is intuitively obvious because the ACF is an even function and the CCF is the convolution of $`\mathrm{\Psi }`$ and the ACF. The lag determined from the centroid of the CCF is therefore sometimes preferred over the peak lag (e.g. Peterson et al. 1998a) because it is more easily interpreted as the responsivity–weighted radius of the BLR. Thus it has become common practice to quote both the peak and centroid of the CCF when making lag estimations.
However, the centroid suffers from three serious flaws. First, the centroid of the CCF based on finite–duration light curves is not identically the centroid of $`\mathrm{\Psi }`$. This follows directly from the fact that the sample CCF is not identically the sample ACF convolved with $`\mathrm{\Psi }`$, i.e., eq. (5) is only true in the infinite duration case. The best that can be hoped for is that the centroid of the CCF is approximately the same as the centroid of $`\mathrm{\Psi }`$ if the durations of the light curves are much longer than the lag. Second, the centroid is poorly defined. Obviously not all points in the CCF should be used to define the centroid, only those near the peak should be included. So some threshold is arbitrarily set, and the value of the centroid can depend very strongly on what threshold is chosen (see Koratkar & Gaskell 1991a,b). Third, because the centroid integrates over a range of values of the CCF, it is more sensitive to the bias inherent in the CCF than the peak. Thus the centroid underestimates the lag more than the peak. This bias in the centroid then negates the argument that the centroid gives a more characteristic BLR size than the peak. The problems with using the centroid to estimate the lag will be illustrated via the simulations in §4, where it will be seen that the centroid of the CCF generally fares worse than the peak (see also Pérez et al. 1992). <sup>2</sup><sup>2</sup>2 A similar preference for the use of the CCF peak has been previously shown by Wade & Horne (1998). In using spectroscopy to measure radial velocities, they found that fitting a very narrow Gaussian to the peak of the CCF gives a more reliable velocity estimate than fitting a broad Gaussian to the CCF.
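The two estimators, including the arbitrary centroid threshold criticized above, can be sketched as follows (the 80% cutoff is one common choice, not a recommendation):

```python
import numpy as np

def ccf_peak_and_centroid(lags, ccf, frac=0.8):
    """Lag of the CCF peak, and the centroid computed over points with
    CCF >= frac * max(CCF). The threshold frac is the arbitrary cutoff
    discussed in the text."""
    lags = np.asarray(lags, float)
    ccf = np.asarray(ccf, float)
    peak = lags[np.argmax(ccf)]
    sel = ccf >= frac * ccf.max()
    centroid = (lags[sel] * ccf[sel]).sum() / ccf[sel].sum()
    return peak, centroid
```

For a symmetric CCF the two estimates coincide; for the asymmetric CCFs produced by realistic transfer functions they generally do not, and the centroid also shifts as `frac` is varied.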
#### 3.1.4 the removal of the mean
The terms $`\overline{x}`$ and $`\overline{y}`$ in eq. (1) are the mean values of the entire light curves. For a stationary process, the sample mean of all the data is clearly the best estimate for the mean. But for a finite duration realization of a stochastic process, this may not be the case. Scargle (1989) makes the point that for positive definite quantities (e.g. fluxes), removing the sample mean is not always desirable. Instead, removing a fraction of the mean may provide a better CCF estimate. The question of what fraction to use depends upon the data themselves, and experimentation (and judgment) is required to find what fraction is optimal. One could in fact solve for the mean as a free parameter (e.g. Press, Rybicki & Hewitt 1992).
The standard definition of the CCF (eq. (1)) implicitly assumes the time series are stationary in their mean and variance. To help fulfill this requirement, one can “pre–whiten” the data by: (a) removing a linear (or higher order) fit, or (b) applying a differencing operator to the data. In the latter case, the data to be analyzed are derived from differences of successive pairs of the light curve: $`x_i^{\prime }\equiv x_i-x_{i-1}`$. The differencing operator is in fact a high–pass filter, and under high S/N conditions it is the preferred technique to remove trends — see e.g. Koen (1993, 1994). Unfortunately, it fails in practice because the S/N of currently available UV/optical AGN variability data is too low, and one is left mostly with noise.
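The two pre-whitening options can be sketched as (hypothetical helper names):

```python
import numpy as np

def detrend_linear(t, x):
    """Option (a): subtract a least-squares linear fit."""
    slope, intercept = np.polyfit(t, x, 1)
    return x - (slope * t + intercept)

def first_difference(x):
    """Option (b): x'_i = x_i - x_{i-1}, a high-pass filter that is
    practical only for high-S/N data."""
    return np.diff(x)
```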
#### 3.1.5 the “local” CCF
An alternative definition of the CCF in which only those $`(Nk)`$ points that overlap at a given lag ($`\tau _k=k\mathrm{\Delta }t`$) are used to determine the mean and standard deviations is:
$`\mathrm{local}\;CCF(\tau _k)\equiv {\displaystyle \frac{\frac{1}{N-k}{\displaystyle \underset{i=1}{\overset{N-k}{\sum }}}(x_i-\overline{x_{}})(y_{i+k}-\overline{y_{}})}{\left[\frac{1}{N-k}{\displaystyle \underset{i=1}{\overset{N-k}{\sum }}}(x_i-\overline{x_{}})^2\right]^{1/2}\left[\frac{1}{N-k}{\displaystyle \underset{i=k+1}{\overset{N}{\sum }}}(y_i-\overline{y_{}})^2\right]^{1/2}}}`$ (6)
where $`\overline{x_{}}=\frac{1}{N-k}\sum _{i=1}^{N-k}x_i`$ and $`\overline{y_{}}=\frac{1}{N-k}\sum _{i=k+1}^{N}y_i`$ are the means of $`x_i`$ and $`y_i`$ in the overlap interval. We refer to this method of computing the CCF as the “local CCF” method. The local CCF naturally accounts for the $`1/N`$ versus $`1/(N-k)`$ problem because only those values of the light curves which overlap at a lag $`k\mathrm{\Delta }t`$ are included in the determination of the mean and standard deviations. The value of the CCF determined in this fashion is then identical to the product–moment correlation coefficient, also known as Pearson’s $`r`$ statistic (see e.g. Press et al. 1996 for a discussion) and this is the method used by White & Peterson (1994). The simulations in §4 will demonstrate that while the bias is not completely removed, this definition produces a CCF with far more desirable qualities. Notice that in effect the local CCF method applies a varying high–pass filter to the data whose cutoff frequency depends on the number of points in the time series at a particular lag (i.e. $`N-k`$). Thus the local CCF handles data containing low frequency trends far better than the standard CCF. For these reasons we prefer the local CCF over the standard definition (eq. (1)), but we caution that CCFs constructed in this manner are for lag determinations only, as to our knowledge the statistical properties and the relationship between the local CCF and the (a) inverse Fourier transform of the power spectrum or (b) coefficients of autoregressive (or moving average) processes has not been investigated. Jenkins & Watts (1969) do not recommend the local CCF for these reasons; however, recovery of ARMA coefficients and the power spectrum is not the goal of CCF analysis in the context of AGN echo studies — the determination of an unbiased lag estimate is.
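A direct transcription of equation (6) (a sketch; equivalent to computing Pearson's $`r`$ in the overlap region at each lag, as in White & Peterson 1994):

```python
import numpy as np

def local_ccf(x, y, max_lag):
    """Equation (6): at each lag k, Pearson's r over the N-k overlapping
    points only, with means and variances taken in the overlap."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    N = len(x)
    vals = []
    for k in range(max_lag + 1):
        xs = x[:N - k] - x[:N - k].mean()
        ys = y[k:] - y[k:].mean()
        vals.append((xs * ys).sum() / np.sqrt((xs ** 2).sum() * (ys ** 2).sum()))
    return np.arange(max_lag + 1), np.array(vals)
```

Because the means are recomputed in each overlap window, this estimator acts as the lag-dependent high-pass filter described above.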
#### 3.1.6 the CCF: an intuitive statistic?
As Jenkins & Watts (1969) point out, it should be kept in mind that the standard CCF as defined by eq. (1) has not in any way been proven to be the best possible estimator. It is used primarily because it is an efficient, consistent estimator with intuitive appeal. However, our intuition can be grossly incorrect in cases when $`k`$ is not $`\ll N`$. So we should not consider equation (1) to be sacrosanct, and in particular, the “local CCF” implementation largely reduces the bias problems mentioned above, and in our simulations, yielded results more akin to our expectations than the standard method.
The local CCF has an easy to understand interpretation (see e.g. Bevington & Robinson 1992): For a given lag $`\tau _k`$, there are $`N-k`$ overlapping points. A least–squares straight–line fit to the mean–subtracted values of $`y`$ versus $`x`$ will yield some slope $`b`$. A non–zero slope suggests a correlation. However, the value of $`b`$ cannot be used as a measure of the strength of the correlation because $`x`$ and $`y`$ could be strongly correlated and yet have a small slope (e.g., if $`x`$ spanned many orders of magnitude more than $`y`$ the slope would be small even if there were a perfect correlation). Reversing the dependent and independent variables and fitting a line to $`x`$ versus $`y`$ will give a slope $`b^{\prime }`$. As with $`b`$, a non–zero slope $`b^{\prime }`$ implies a possible correlation. The product $`bb^{\prime }`$ is then a measure of the correlation strength, independent of the slope of the relationship. The quantity $`\sqrt{bb^{\prime }}\equiv r`$ = local CCF($`\tau _k`$).
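This identity is easy to verify numerically (random illustrative data; $`S_{xy}`$ and similar names are our shorthand for the sums of products): with mean-subtracted data the least-squares slopes are $`b=S_{xy}/S_{xx}`$ and $`b^{\prime }=S_{xy}/S_{yy}`$, so $`\sqrt{bb^{\prime }}=|S_{xy}|/\sqrt{S_{xx}S_{yy}}=|r|`$.

```python
import numpy as np

# Check that sqrt(b * b') recovers |r| for the two regression slopes.
rng = np.random.default_rng(4)
x = rng.normal(size=50)
y = 0.7 * x + rng.normal(size=50)        # illustrative correlated pair
x -= x.mean()
y -= y.mean()
Sxy = (x * y).sum()
b = Sxy / (x * x).sum()                  # slope of y on x
bp = Sxy / (y * y).sum()                 # slope of x on y
r = Sxy / np.sqrt((x * x).sum() * (y * y).sum())
```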
Despite the apparent intuitive appeal of using the CCF to detect a correlation, a few caveats and comments are in order: (i) equation (1) defines a linear correlation coefficient between two series. The restriction to a linear correlation is arbitrary, and a non–linear analysis may prove fruitful; (ii) the use of non–parametric correlation tests (such as Spearman’s rank–order correlation) may be of value (see Press et al. 1996) since they tend to be more robust; (iii) our intuition is based on the abstract mathematical case of infinite duration time series, and this can be grossly misleading.
#### 3.1.7 finite duration sampling
Flagrant violations of our intuition about the ACF/CCF can occur if the durations of the light curves are finite. The problem is particularly serious in the AGN context because the lags of interest are often a sizeable fraction of the total duration of the light curves.
In theory, the CCF should be the convolution of the ACF and the transfer function $`\mathrm{\Psi }`$. But even in the absence of noise and with ideal sampling this will often not even be approximately true. Leaving aside the effects of bias in the standard CCF definition, the major cause of the difference between the expected and measured CCF is due to changes in the observed ACF. For any finite realizations of a stochastic process, the ACFs will not be exactly the same, even if the underlying generating process is unchanged. The usual interpretation of the CCF demands stationarity in the mean and variance, but this is in general unlikely for finite duration observations generated from a stochastic process, and in particular, AGN light curves have red power spectra and are far from being stationary on year–to–year timescales. As a consequence, the ACFs can vary from epoch to epoch, and with it the CCF lags, despite no real physical changes in the emission producing mechanism (either the continuum source or BLR). To help mitigate this effect, low frequency power in the data should be removed, or if the data S/N allow, a differencing operator should be applied. In §4.2 simulations will show the improvements in the CCF lag determination that can result by removing low–frequency trends and softening the edge effects in the finite light curves.
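The realization-to-realization scatter of the sample ACF, with the generating process held fixed, can be demonstrated directly (an AR(1) red-noise stand-in, not the paper's continuum model):

```python
import numpy as np

# Many finite realizations of ONE fixed stochastic process: the sample
# ACF at a given lag scatters widely even though nothing physical changes.
rng = np.random.default_rng(5)

def ar1(n, phi=0.9):
    """AR(1) red-noise series, an illustrative continuum stand-in."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal()
    return x

def sample_acf_at(x, k):
    """1/N-normalized sample ACF at a single lag k."""
    x = x - x.mean()
    return (x[:len(x) - k] * x[k:]).sum() / (x * x).sum()

vals = np.array([sample_acf_at(ar1(100), 10) for _ in range(200)])
scatter = vals.std()     # sizeable spread despite an unchanged process
```

The spread in `vals` is internal to the stochastic process, independent of observational noise or sampling, which is exactly the second source of error identified in this paper.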
### 3.2 The errors in the lag determination
Real data have finite time resolution, contain noise, may be unequally sampled, and have finite duration. The effects of finite sampling rate are not a problem provided the data are not undersampled, and the effects of noise and unequal sampling have been discussed extensively in the astronomical literature, e.g., White & Peterson (1994), Maoz & Netzer (1989), Peterson et al. (1998a), and references therein. Simulations have shown that the CCF can be satisfactorily recovered under a wide range of realistic sampling and noise conditions. However, the effects of bias and finite duration observations have not been properly appreciated when considering the errors in the lag determination.
Ignoring bias for the moment, there are two distinct sources of error: (i) external errors due to observational noise and irregular sampling, and (ii) internal errors due to the random nature of the light curves. Finite duration sampling of the light curves brings about the latter source of error. It is independent of observational noise or the sampling pattern.
Maoz & Netzer (1989) estimated the errors on the CCF by producing a set of simulated line and continuum light curves with random sampling and noise, then constructed a “cross–correlation peak distribution” (CCPD) histogram showing the spread in lags of the peak. From the CCPD one can measure the mean (or median or mode) and the associated uncertainty on the lag for the chosen model. For their continuum time series they used either an interpolated version of the Peterson et al. (1985) Akn 120 light curve or the Clavel et al. (1987) NGC 4151 light curve. So in fact identical parent continuum light curves were used for each of their two sets of simulations. The CCPD clearly shows a large spread in the determinations of the lag, but this spread only shows the effects of the sampling and observational noise.
The White & Peterson (1994) analysis is more general in that the continuum light curves are not identical; instead they have a power–law power density spectrum (PDS) of either $`P(f)\propto f^{-2.5}`$ or $`f^{-1.8}`$, motivated by the power spectra of NGC 5548 (Clavel et al. 1991) and NGC 3783 (Reichert et al. 1994), respectively. However, although the continuum light curves are different in each realization, they all have very similar ACFs (since by definition they have identical PDS and the Wiener–Khinchin theorem states that the PDS and ACF are Fourier pairs). The scatter in their CCPD is therefore dominated by observational noise and irregular and sparse sampling, and does not realistically include finite sampling–induced changes in the ACF.<sup>3</sup><sup>3</sup>3Technical note: The construction scheme used by White & Peterson (1994) to add backgrounds to the light curves (such that the fractional rms “$`F_{var}`$” matches the observed value of 0.32 for NGC 5548) results in a correlation between the intrinsic rms of the light curve and the amount of observational noise added. The simulated noise is therefore not constant, nor is it dependent only on the simulated fluxes; it also depends on the amount of intrinsic variability. This correlation may result in a slight overestimate of the reliability of the lag determinations, since for continuum light curves with small intrinsic fluctuations the observational noise is reduced.
In their pioneering work, Gaskell & Peterson (1987) do indeed take into consideration the changes in the ACF, since their simulated continuum light curves were generated using a first–order autoregressive model (e.g. see Jenkins & Watts 1968). In fact their figure 12 shows the problems mentioned in the previous section: bias in the correlation coefficient (height of the peak of the CCF) and bias in the position of the peak of the CCF (towards too small lags). However, the emphasis of their work was on their interpolation method for coping with irregular, sparse sampling and observational noise, and they neglected the issues we investigate here.
The more recent analysis of CCF uncertainties by Peterson et al. (1998a) attempts to generate a realistic CCPD using a combined Monte Carlo and bootstrap method (see e.g. Press et al. 1996 for a discussion of the bootstrap method). The Monte Carlo “flux randomization” jitters the observed data values by a random amount consistent with the observational noise, while the bootstrap “random subset selection” picks subsets of the time series at random. The authors conclude that the method produces estimates of the errors that are generous, i.e., slightly larger than the errors in the parent distribution. While their detection of a wavelength–dependent lag in NGC 7469 seems irrefutable, the analysis of the errors is only partially complete. In their simulations, the method used to generate the light curves was nearly identical to that in White & Peterson (1994), so they do not precisely mimic reality in the construction of the parent CCPD distribution. The fact that the CCPD generated via a Monte Carlo and bootstrap treatment of a single light curve realization is larger than the parent CCPD is comforting, but this may still underestimate the true uncertainty in the lag.
Note that the Peterson et al. (1998a) implementation of the bootstrap omits roughly 37% of the observed continuum and line data pairs; this can be avoided if one reverses the order of the Monte Carlo flux and the bootstrap sampling. By bootstrap sampling first, one can keep track of the data pairs that are selected more than once and the error bars on those points can be reduced accordingly, before the data are jittered by the Monte Carlo method. This brings the method more in line with the standard bootstrap technique (Efron 1983; Diaconis & Efron 1983), where in essence the data are not omitted, but rather, a random weighting is given to the points — in the analysis, a weight of zero is assigned to a datum if it is not chosen, and a weight of $`n`$ is assigned if that datum is selected $`n`$ times. The process is repeated many times and in all cases the total number of input values used is constant. Note that for data whose error bars are roughly constant, it is the bootstrapping, not the Monte Carlo step, that gives the correct error distribution, and for this reason the modification of the Peterson et al. (1998a) technique is important. For the equal sampling case with equal size error bars, Monte Carlo flux randomization alone can grossly underestimate the CCPD width.
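For equally weighted data the reordered scheme is straightforward to sketch. The function below is our illustrative NumPy implementation of the idea, not code from Peterson et al. (1998a); the function and variable names are ours:

```python
import numpy as np

def bootstrap_then_jitter(t, y, sigma, rng=None):
    """One realization of the modified FR/RSS scheme described above:
    bootstrap-resample the epochs first, shrink the error bar of a point
    selected n times by sqrt(n), and only then apply the Monte Carlo
    flux randomization."""
    rng = np.random.default_rng(rng)
    n = t.size
    idx = rng.integers(0, n, size=n)             # random subset selection, with replacement
    counts = np.bincount(idx, minlength=n)
    keep = counts > 0                            # roughly 1 - 1/e of the epochs survive
    t_b = t[keep]
    sig_b = sigma[keep] / np.sqrt(counts[keep])  # a point chosen n times gets sigma/sqrt(n)
    y_b = y[keep] + rng.normal(0.0, sig_b)       # flux randomization comes last
    return t_b, y_b, sig_b
```

Each call yields one resampled light curve; building a CCPD then amounts to repeating this for the continuum and line light curves and recording the CCF peak (or centroid) of each pair.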
The Monte Carlo plus bootstrap method as suggested by Peterson et al. (1998a), modified as described above, appears to be the best way to estimate the CCPD and hence the uncertainty of the lag for a given time series. Yet if the continuum light curve is not at least weakly stationary, knowing the uncertainty for a given realization does not give a reliable estimate for the full range of scatter in the determination of the lag. Even if the continuum generating process is stationary on long timescales, short observations may mimic non–stationarity. The fact that the observed yearly mean fluxes from NGC 5548 are not consistent with each other is evidence that the process is not stationary over the timescales of interest, and so comparison of CCFs from different years is problematic. Only if the ACFs from year to year are identical can the changes in the lag be ascribed to changes in the transfer function and hence changes in the BLR. In summary, even if large and apparently significant changes in the lag are observed, these do not necessarily imply changes in the AGN/BLR — the changes may simply be due to finite–length observations.
Returning to the issue of bias, the standard CCF will underestimate the lag on average, and this bias will not be included in the uncertainty estimates — it is a systematic error. Even with sufficient sampling and no noise, the CCPD is skewed toward smaller lags, and this bias becomes worse as the duration of the light curves decreases. The CCPDs shown by Maoz & Netzer (1989), and by Netzer & Peterson (1997) do not show this skew because they used a specific continuum ACF shape and then resampled it; there is a bias in their CCPD, but it is nearly the same for each of their simulations. The local CCF method reduces the bias, but it is still present.
There are a number of examples in the literature that show the presence of the bias, e.g., the CCPDs in Litchfield et al. (1995) clearly show a bias toward underestimating the lag in both the (local) interpolated CCF method and in the discrete correlation function. Litchfield et al. (1995) attributed this bias to the asymmetric shape of their simulated blazar single–flare light curves (rise time much shorter than decay time), but it is in fact a general property of the CCF. The bias can also be seen in Table 1 (column 6) of White & Peterson (1994) in which the results of Monte Carlo simulations show that the peak of the CCF usually occurs at slightly smaller lags than the true lag.<sup>4</sup><sup>4</sup>4Technical note: For the cases in White & Peterson’s Table 1 with anisotropic BLR cloud emission, i.e., the anisotropy factor $`A=1`$, the values quoted for the true expected lag refer to the centroid of the CCF. However, for the simulations it was the peak of the CCF that was measured. For these asymmetric right triangle–shaped transfer functions, the centroid and peak values are very different. So the comparisons for cases with $`A=1`$ are only approximately valid, and the bias cannot be readily noticed. Another example of the presence of the bias can be seen in the Monte Carlo tests listed in Table 2 (column 2) of Peterson et al. (1998a) and the corresponding skewed CCPD shown in their Fig. 1. This figure also illustrates that while the CCF centroid distribution can be more strongly peaked than the CCF peak distribution, it is also more heavily skewed — it has a higher precision, but lower accuracy. Unfortunately, it is difficult to assess the amount of bias present in real data since it depends on the true shape of the CCF, i.e., we need to know the true ACF and the true $`\mathrm{\Psi }`$ — the broader either of these, the worse the bias. Simulations are required to estimate the bias present in the lag determination, and this then becomes model dependent.
## 4 Simulations
There is no doubt that the lag of the centroid (or peak) of the CCF for the NGC 5548 H$`\beta `$ light curve is changing from year to year. The question is, do these changes indicate that the transfer function $`\mathrm{\Psi }`$ is changing, or do the changes merely reflect the changing ACF due to finite duration sampling? To answer this question and to illustrate the points made in §3, we performed the following simulations.
### 4.1 Construction of the simulations
In this investigation we consider only equally sampled data; this way we cleanly separate the effects due to unequal sampling and those due to changes in the ACF. We construct the simulations such that the changes in the observed ACFs are due to the finite length of the light curve, not due to intrinsic changes in the AGN continuum–generating process, although this would have the same effect.
To simulate the observed continuum, we created a time series from a simple power–law power density spectrum (PDS with $`P(f)\propto f^\alpha `$) with index $`\alpha `$=–2 and random phases. A value of $`\alpha `$ between –2 and –3 for the UV continuum in NGC 5548 has been determined by Krolik et al. (1991), so $`\alpha `$=–2 is a reasonable choice, though we emphasize that this is a convenient parameterization for pedagogical purposes, not to be over–interpreted as a true representation of the AGN light curve. This artificial time series has zero mean and spans 10 years in time with 1–day sampling.
To simulate the observed continuum light curve $`C`$, the time series is normalized to give an intrinsic rms variability of 2.0 (in units of $`10^{15}`$ erg s<sup>-1</sup> cm<sup>-2</sup> Å<sup>-1</sup>), and a constant value is added so the average continuum level is 10.0. Gaussian distributed white noise with a standard deviation of 0.333 is then added to mimic observational noise.
A Gaussian centered at $`\tau `$=20 d with width $`\sigma `$=6 d was chosen for the transfer function $`\mathrm{\Psi }`$. This form of the transfer function is motivated by the appearance of the observed H$`\beta `$ transfer function in NGC 5548 (Horne, Welsh & Peterson 1991; Peterson et al. 1994), though we stress that the conclusions derived from this choice of $`\mathrm{\Psi }`$ are true in general. In fact, because this $`\mathrm{\Psi }`$ has a well–defined peak, unlike, say a thick spherical BLR transfer function, our simulations present a somewhat optimistic case because the resultant CCF should have a relatively sharp peak.
The line light curve $`L`$ is generated by convolving the original noise–free and zero–mean $`C`$ with $`\mathrm{\Psi }`$. The line light curve is then scaled to give an rms value of 1.5 (in units of $`10^{13}`$ erg s<sup>-1</sup> cm<sup>-2</sup>) and a constant of 7.5 is added. Gaussian distributed white noise with a standard deviation of 0.30 is then added to the line light curves to mimic observational noise. The parameters are summarized in Table 1, along with the actual observed values for NGC 5548 from Peterson et al. (1999). Figure 1 shows a typical 10–year long simulated continuum light curve, along with its local ACF and PDS.<sup>5</sup><sup>5</sup>5Technical note: all PDS were computed using linearly detrended and Welch tapered light curves. The two lowest frequency bins were not used in the fit to the PDS. The local CCF between the simulated continuum and line light curve for the entire 10 yr period is also shown.
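The construction above can be condensed into a short script. What follows is our reimplementation of the recipe (random phases on a power-law PDS, a Gaussian $`\mathrm{\Psi }`$, and the Table 1 normalizations), not the code actually used for the paper; the function names are ours:

```python
import numpy as np

def rednoise_lightcurve(n, alpha=-2.0, rng=None):
    """Zero-mean series with a power-law PDS, P(f) ~ f**alpha, and random phases."""
    rng = np.random.default_rng(rng)
    freqs = np.fft.rfftfreq(n, d=1.0)                 # 1-day sampling
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (alpha / 2.0)              # |FT| ~ f^(alpha/2)  =>  PDS ~ f^alpha
    ft = amp * np.exp(2j * np.pi * rng.random(freqs.size))
    lc = np.fft.irfft(ft, n=n)
    return lc - lc.mean()

def simulate_pair(n=3650, lag=20.0, width=6.0, rng=None):
    """Continuum and line light curves following the recipe of Sec. 4.1
    (Table 1 normalizations); 10 yr at 1-day sampling by default."""
    rng = np.random.default_rng(rng)
    c = rednoise_lightcurve(n, alpha=-2.0, rng=rng)
    c *= 2.0 / c.std()                                # intrinsic continuum rms = 2.0
    tau = np.arange(0.0, lag + 5.0 * width)
    psi = np.exp(-0.5 * ((tau - lag) / width) ** 2)   # Gaussian Psi: tau = 20 d, sigma = 6 d
    psi /= psi.sum()
    l = np.convolve(c, psi, mode="full")[:n]          # line = Psi (*) continuum
    l *= 1.5 / l.std()                                # intrinsic line rms = 1.5
    cont = 10.0 + c + rng.normal(0.0, 0.333, n)       # mean 10.0, noise sigma 0.333
    line = 7.5 + l + rng.normal(0.0, 0.30, n)         # mean 7.5,  noise sigma 0.30
    return cont, line
```

Note that truncating the full convolution to the first $`n`$ points leaves a short ramp-up at the start of the line light curve; this edge effect is harmless for a 10 yr series but is exactly the kind of finite-duration artifact discussed in the text.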
The light curve is then broken into 10 isolated segments, each 300 days long. Each of these segments corresponds to a season’s worth of simulated AGN data. Note that each continuum light curve is independent — the mean, the rms variability, the ACF, and the PDS are not fixed. Figure 2 shows the PDS and CCFs for five seasons extracted from the light curve in Fig. 1. Notice the large changes in shape and lag of the CCFs in these examples, all of which were taken from the same parent light curve. Also notice the differences between the standard and local CCFs.
To build up a statistical number of realizations, the above construction process was repeated 100 times, yielding 1000 seasons of independent simulated continuum and line light curves. The standard and local methods were used to compute the CCF for each of the 1000 pairs of light curves. Both the peak of the CCF and the centroid position were recorded; as in Peterson et al. (1999), only values of the CCF that lie above 0.8 times the maximum r value were used in the centroid determination.
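The centroid rule just described amounts to an r-weighted mean over the points near the peak; a minimal sketch (our naming):

```python
import numpy as np

def ccf_centroid(lags, r, frac=0.8):
    """r-weighted centroid of the CCF, using only points with
    r >= frac * max(r); frac = 0.8 as in the text."""
    lags = np.asarray(lags, float)
    r = np.asarray(r, float)
    mask = r >= frac * r.max()
    return np.sum(lags[mask] * r[mask]) / np.sum(r[mask])
```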
### 4.2 Simulation Results
#### 4.2.1 The local vs. standard CCF
Figures 3 and 4 show the peak lag values for each of the 1000 trials, along with their CCPD, i.e., a histogram of the lag values. Results from the local CCF method (Fig. 3) and the standard CCF definition (Fig. 4) are shown. The superiority of the local method is immediately evident. These figures also illustrate two points: (1) there is a downward bias in the CCF, so that the average or median value underestimates the true lag of 20 d; (2) the scatter of the CCF peaks is very large.
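For equally sampled data the two estimators differ only in where the means and variances are evaluated: globally for the standard CCF, within the overlapping segments at each lag for the local CCF. The following sketch uses our own notation and a simple loop over integer lags; it is illustrative, not the implementation used for the figures:

```python
import numpy as np

def standard_ccf(x, y, max_lag):
    """Standard CCF: global means and sigmas, with only the overlap
    entering the numerator (the definition whose bias grows as the
    overlap shrinks)."""
    x = np.asarray(x, float); y = np.asarray(y, float)
    n = x.size
    mx, my, sx, sy = x.mean(), y.mean(), x.std(), y.std()
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.empty(lags.size)
    for i, k in enumerate(lags):
        if k >= 0:
            num = np.sum((x[:n - k] - mx) * (y[k:] - my))
        else:
            num = np.sum((x[-k:] - mx) * (y[:n + k] - my))
        r[i] = num / (n * sx * sy)
    return lags, r

def local_ccf(x, y, max_lag):
    """Local CCF: means and sigmas recomputed from the overlapping
    segments at each lag (this acts as a built-in high-pass filter)."""
    x = np.asarray(x, float); y = np.asarray(y, float)
    n = x.size
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.empty(lags.size)
    for i, k in enumerate(lags):
        if k >= 0:
            xs, ys = x[:n - k], y[k:]
        else:
            xs, ys = x[-k:], y[:n + k]
        xs = xs - xs.mean(); ys = ys - ys.mean()
        r[i] = np.sum(xs * ys) / np.sqrt(np.sum(xs**2) * np.sum(ys**2))
    return lags, r
```

A positive lag means the line (second argument) follows the continuum, matching the reverberation convention used throughout.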
In Fig. 5, eight different CCPDs are shown, each resulting from a different method of computing the lag, but all from the same 1000 pairs of light curves. On the left, the peak and centroid lags are shown for the local and standard CCF methods. The right panels show the same but for light curves that have had a linear fit subtracted prior to computing the CCF. This figure reveals several features: all CCPDs show a downward bias, and this bias is very severe for the standard CCF method; the centroid distributions are more susceptible to bias than the peak distributions; the linear detrending greatly improves the standard method, but only slightly improves the local method (because the local method inherently contains a high–pass filter). The large number of CCFs that peak at zero delay in the standard CCF are due to trends in those seasons’ light curves: a strong linear trend in both continuum and line light curves will dominate the CCF. Linear detrending of each season’s light curves is therefore highly beneficial.
The simulations described above are optimistic in several regards: (i) the light curves are equally sampled with no gaps, (ii) the observational noise–induced scatter in fluxes is purely independent and Gaussian, (iii) the transfer function has a well–defined peak, and (iv) the duration of the light curves is 15 times longer than the lag of the peak of the transfer function. As a result, the conclusions drawn from these simulations are robust — more realistic simulations would show a larger scatter.
It was noticed that on occasion, the line ACF was narrower than the continuum ACF. This has sometimes been seen in AGN light curves and suggests that $`\mathrm{\Psi }`$ contains negative values or is non–linear (see the discussion by Sparke 1992). However, it can simply be a side–effect of finite duration sampling.
#### 4.2.2 The effects of the continuum power spectrum “color”
To test the sensitivity to the assumed PDS power–law exponent $`\alpha `$, we carried out simulations using parent continuum light curves with a $`1/f`$ and $`1/f^3`$ PDS. The peaks of the local CCFs were determined, after the light curves had linear trends removed. The results are shown in Figs. 6 and 7, where it is clear that the scatter in the lags depends strongly on the value of $`\alpha `$. This is because the width of the ACF is sensitive to $`\alpha `$: redder PDS (more negative $`\alpha `$) have broader ACFs. As stated in eq. (5), the ideal CCF is identical to the transfer function $`\mathrm{\Psi }`$ convolved with the ACF, so a broad ACF yields a broad CCF whose peak is poorly defined. The consequence is that the redder the continuum fluctuations, the more the ability to infer properties of $`\mathrm{\Psi }`$ from the measured CCF degrades. These simulations can be compared with the three model CCFs shown in Fig. 4 of Edelson & Krolik (1988), where redder continuum light curves yield a stronger correlation, but contain less structure and hence less information.
The above claim that the CCF peak should be more easily measured for whiter PDS leads to an apparent contradiction. The transfer function acts similarly to a low–pass filter, hence high frequency variations present in the continuum are averaged out and are not seen in the line light curve. The thicker the BLR, the more the high frequencies are lost. This suggests that a continuum light curve dominated by low frequencies, i.e., a very red PDS, would provide a better “driver” for echo mapping. Indeed, this effect is seen by White & Peterson (1994) in their CCPD comparisons using $`1/f^{1.8}`$ and $`1/f^{2.5}`$ simulated continuum light curves. The contradiction is resolved by noting that the signal–to–noise ratio of typical AGN variability data is rather low, hence noise in the light curves is important. For continuum light curves with equal intrinsic rms variability but different PDS power–law slopes, the deleterious effect of white noise is more pronounced for whiter PDS. In other words, the S/N is timescale dependent: low frequency variations have effectively a higher S/N than high frequency variations. Since convolution with the transfer function preserves low–frequency power, continuum light curves with redder PDS yield CCFs that are less sensitive to noise. However, one cannot escape the fact that a very red PDS continuum will have a very broad ACF and CCF, making its peak and centroid determinations difficult. For high S/N data, a whiter PDS will enable a richer echo mapping analysis.
#### 4.2.3 The duration of the light curves
As the duration of the time series increases, one expects the reliability of CCF lag determination to improve. To quantify this behavior, Figure 8 shows the mean and median lag values plotted as a function of the length of the hypothetical observing campaign. Five curves are drawn, corresponding to the mean and median for the peaks and centroids of the local CCFs, and the median of the peaks using the standard CCF method. As before, 1000 simulations were used to determine the mean and median, and these simulations were identical in all respects except for the number of points. While all the distributions encompass the true value within $`\pm 1\sigma `$, they are all biased too low. For simulations that match the characteristics of the observations of NGC 5548, the lag is underestimated by $`\sim `$5% using the local CCF method.
From this figure it is clear that the median is a far better statistic than the mean. This is because large outliers are not rare. For light curves of 150–300 days duration, the median bias in the local CCFs is roughly 5–10%. For shorter duration light curves, the bias and variance increase rapidly: for 100 day long light curves the bias is $`\sim `$10% and for 50 day long light curves the bias grows to $`\sim `$25%. For light curves this short the huge uncertainty in the lag makes a single estimate almost meaningless. For comparison, the standard CCF method produces significantly more biased values: even with 300–day long light curves the bias is $`\sim `$25%.
The commonly held belief that the time series used to compute the CCF should be at least 4 times longer than the lags of interest, and preferably $`\sim `$10 times longer, is illustrated in this figure. As the light curves lengthen, the lags only asymptotically approach the unbiased value. Extending a 200 day long light curve by 100 days does not decrease the bias nearly as much as extending a 100 day long light curve by the same amount. Once the light curves exceed about 10 times the lag, increasing the S/N of the data leads to more significant improvements than extending the duration of the light curves.
#### 4.2.4 The signal–to–noise ratio
The previous simulations were all based on a line S/N of 5, mimicking the NGC 5548 H$`\beta `$ observations. S/N is defined here as the ratio of intrinsic line rms variations to the simulated observational noise per datum — see Table 1 for the numerical values. To quantify the effects of a change in S/N, Fig. 9 shows the median of the CCPD for S/N values of 2.5, 5 and 10, plotted as functions of the duration of the light curves. For this figure, lag is defined as the peak of the local CCF. Both the line S/N and the continuum S/N were boosted by the same factors, achieved in each realization by reducing the amount of added simulated observational noise.
As expected, the higher the S/N, the smaller the bias, and more importantly, the smaller the variance about the median. From this figure it can be deduced that under certain conditions, doubling the S/N of the individual observations can be as significant as doubling the duration of the light curves.
#### 4.2.5 The effects of detrending
In §3.1.4, 3.1.7, and 4.2.1 it was stated that “detrending” or removing low frequency power from the light curves can improve CCF lag determinations. Figure 10 explicitly shows this effect. Plotted are the median values of the 1000 CCF simulations versus the order of the polynomial used to remove trends from the light curves (order 0 = mean, order 1 = linear, order 2 = quadratic, etc.). The detrending was accomplished by least–squares fitting a polynomial to the entire light curve for the observing season (300 days in all cases) then subtracting off the polynomial prior to computing the CCF. In all cases identical light curves were used (with a line S/N = 5). For clarity, only the error bars for the median lag computed with the local CCF are shown. In general, as the order of the polynomial increases, the bias in the CCF decreases.
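The detrending step itself is a short least-squares fit; a minimal sketch (our naming, using NumPy's polynomial module):

```python
import numpy as np

def poly_detrend(t, y, order=1):
    """Subtract a least-squares polynomial of the given order from a
    light curve (order 0 = mean, 1 = linear, 2 = quadratic, ...)."""
    t = np.asarray(t, float)
    y = np.asarray(y, float)
    coeffs = np.polynomial.polynomial.polyfit(t, y, order)
    return y - np.polynomial.polynomial.polyval(t, coeffs)
```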
For the standard method, simply removing a linear trend can result in a substantially better estimate of the lag. Significant additional improvements can be made by going to a 3rd or 4th order polynomial. However, for higher orders, the variance increases, offsetting the benefits of detrending.
The beneficial effect of detrending is more pronounced for the standard CCF method than for the local method because the local method intrinsically contains a detrending–like filter (see §3.1.5). Nevertheless, the local method also benefits from low–order polynomial detrending of the light curves. (The apparent degrading of the local CCF median when going from no detrending to linear detrending is a statistical fluctuation. More typically, the median value with no detrending is equal to or worse than with linear detrending.) Figure 10 again illustrates that the local CCF outperforms the standard CCF, and that the peak is a better (less biased) estimator than the centroid. However, the local CCF is more noisy than the standard CCF, particularly at large lags, and this grows worse with higher order detrending.<sup>6</sup><sup>6</sup>6In the situation where outliers become common at the largest lags the median is no longer a good statistic to use to characterize the CCPD, since it depends on the limits of where the lags are computed (i.e., the median becomes sensitive to the endpoints of the interval over which the CCF is computed). A better statistic would be the mode or the center of a narrow Gaussian fit to the distribution.
Conceptually, detrending works for the following reason: AGN light curves have a red power spectrum, so most of the signal in the light curves is contained in the lowest frequencies. However, these low frequencies are the most poorly sampled in the light curve: there is only one measurement of the lowest Fourier frequency, two of the second lowest frequency, and so on. Due to the small number of samples, statistical fluctuations are very important. This is in contrast to the high frequency power, where there are many samples, but the observational noise is large (or even dominant). Detrending the light curves with polynomials applies a smooth and gently rolling high–pass filter.<sup>7</sup><sup>7</sup>7For equally sampled data a sharp, well–defined high-pass filter could be applied in the Fourier domain, but for unequally sampled data working in the Fourier domain is problematic. The CCF will no longer be dominated by poorly sampled low frequencies, and hence will be less prone to random fluctuations. The scatter in the CCF is therefore reduced. Another way to think of it is that the detrending sharpens the ACFs, and since the CCF is approximately the ACF convolved with $`\mathrm{\Psi }`$, the CCF sharpens up.
To understand why detrending reduces bias, one must realize that the CCF is extremely efficient at finding the lag if the time series PDS are white; otherwise the CCF can give poor results. For example, if the time series contains a trend then for a long time interval the data will tend to be above (or below) the mean. Thus the data values are not randomly distributed about the mean; instead they are highly correlated on long timescales. This correlation will dominate the CCF and the peak of the CCF will occur at (or near) zero lag. Thus there is a bias towards small lags if there is any low frequency power in the time series. As an extreme case, the peak of the standard CCF will occur at zero lag for any two linear light curves; unless the deviations from the straight lines are large, the CCF will tend to peak at zero lag.
AGN light curves are dominated by low frequency power hence the CCF will be biased towards too small lags unless the data are “pre–whitened”.<sup>8</sup><sup>8</sup>8Determining radial velocity shifts of lines in a flux spectrum does not suffer as much bias because the power spectrum of the flux spectrum is mostly white. Nevertheless, any trends in the continuum must be removed or the resulting radial velocities will be biased. Detrending the light curves via polynomials is one way of removing low frequency power; other methods include subtracting off splines or a moving average, applying a differencing operation (see e.g. Chatfield 1996), or directly multiplying by a high–pass filter in the Fourier domain. Since AGN light curves are red, it is clear that some form of pre–whitening should be carried out.
What order polynomial should be used to detrend the light curves? The answer depends on the characteristics of the data: the redder (more negative $`\alpha `$) the power spectra, the more detrending is required; the lower the S/N, the less detrending can be tolerated. AGN light curves have power at all observed frequencies so there will always be linear and other low–frequency trends, independent of the duration of the light curve. The minimum order of the detrending polynomial is therefore insensitive to the length of the light curve: a linear trend should always be removed. A crude estimate for the maximum order can be made as follows. A polynomial of order $`M`$ has at most $`M`$ zeroes, so it removes power on timescales greater than $`2T/(M-1)`$ where $`T`$ is the duration of the time series. For a reliable CCF estimate, a light curve with a duration of 5–10 times the lag timescale is necessary. This gives a polynomial of order $`M\sim `$ 0.4–0.5 $`\times T/\tau _\mathrm{\Psi }`$, where $`\tau _\mathrm{\Psi }`$ is the lag expected for the given transfer function. If the S/N is poor, the maximum order polynomial for detrending will be less than this. When the light curves are heavily detrended, much of the intrinsic signal in the data is removed, leaving lower and lower S/N data for the CCF to work with. The lag of the peaks of the CCFs will therefore not necessarily converge with increasing detrending. In the extreme limit where the filtering leaves only the observational white noise, the ACF again will peak at zero lag because of correlated noise between the continuum and line observations (since they are both measured from the same spectrum). So while the detrending removes bias, it also increases the variance, and in extreme limits it re–introduces a bias. For this reason, large amounts of detrending are not beneficial.
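The rule of thumb at the end of this argument reduces to simple arithmetic; the helper below (our naming) just encodes $`M\sim `$ 0.4–0.5 $`\times T/\tau _\mathrm{\Psi }`$ together with the linear-trend floor:

```python
def max_detrend_order(duration, expected_lag, factor=0.45):
    """Rough ceiling on the detrending polynomial order,
    M ~ (0.4-0.5) * T / tau_Psi; a linear trend (order 1) is
    always removed, so the result is never below 1."""
    return max(1, int(factor * duration / expected_lag))
```

For a 300-day season and an expected lag of 20 d this gives an order-6 polynomial at most; poor S/N would argue for less.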
The technique of differencing the data (see §3.1.4) removes all low–frequency trends, but because it strongly amplifies high–frequency noise it is only a viable option for data of exceptionally high S/N.
The strong recommendation that results from this work is that removal of low–frequency trends in the light curves can significantly improve the reliability of the CCF lag determinations. Removal of a linear trend is essential; removal of a cubic or quartic trend is recommended; higher orders may be useful if the correlation remains strong enough to provide an unambiguous determination of the peak. In practice, one should compute and compare the CCF for progressively larger amounts of detrending.
#### 4.2.6 The effects of tapering
Tapering (also called “windowing”) a time series is common practice in Fourier analysis (see, e.g., Jenkins & Watts 1968; Press et al. 1996). Tapering helps compensate for “end effects” of a finite duration time series: a sampled time series can be thought of as the product of two time series, the “true” infinite duration time series and a time series whose value is unity during the data acquisition interval and zero elsewhere. The multiplication of the true time series with this sampling function is identical to convolution of the Fourier transform of the true time series with the Fourier transform of a boxcar. The result is that the Fourier transform is broadened by convolution with a sinc function. The broadening results in “leakage” of power from one frequency into other frequency bins. Tapering the light curves consists of multiplying the light curves with a function that slowly goes from zero to unity and back over the duration of the experiment. The new time series has “softer” edges that produce a Fourier transform with less leakage. For time series that have red power spectra, the leakage of power from low frequencies to higher frequencies is significant, and possibly even dominant if the intrinsic PDS is redder than $`1/f^2`$. Because of the equivalence of the CCF with its discrete Fourier transform counterpart, reducing leakage from low to high frequencies should improve the CCF lag estimate. The red PDS of AGN light curves means leakage is significant and suggests that tapering the light curves can have a beneficial effect.
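A Welch taper (the window already used for the PDS estimates in footnote 5) can be applied as follows; this is a generic sketch, and as noted below the exact taper shape matters little:

```python
import numpy as np

def welch_taper(n):
    """Welch (parabolic) window: zero at the ends, unity at the center."""
    i = np.arange(n)
    half = 0.5 * (n - 1)
    return 1.0 - ((i - half) / half) ** 2

def taper_lightcurve(y):
    """Subtract the mean, then soften the edges of the light curve
    before computing the CCF (a "global" taper in the text's terms)."""
    y = np.asarray(y, float)
    return (y - y.mean()) * welch_taper(y.size)
```

Subtracting the mean first is essential: tapering a light curve with a large constant offset would imprint the window shape itself on the data.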
To explore the possible benefits of tapering, simulations were carried out using light curves that were detrended and tapered prior to computing the CCF. The taper was applied in two ways: (1) a global taper, applied once to the entire light curve; (2) a local taper, applied to each overlapping segment pair. The global taper is more akin to what is used to reduce spectral leakage; the local taper forces the same taper to be applied independent of lag.
The results indicate that tapering does on average improve the reliability of the CCF lag estimate. However, the effect is small compared to the effect produced by detrending. As expected, the greater the amount of detrending, the less effect the tapering had. The two different methods of tapering (global and local) produced similar results for small lags; the differences were much less than the variance in the estimates of the lag. For globally tapered light curves, the local and standard methods of computing the CCF gave very similar results for small lags. For large lags, the global taper significantly reduces the amplitude of the CCF. This has two effects: (1) it greatly reduces the number of outliers in the CCPD; (2) it introduces a strong bias against finding a correlation at a large lag. Provided the light curves are long compared to the true lag (something that is not known a priori), the latter effect is not serious. In summary, tapering does have a beneficial effect and the benefits are not very sensitive to the specific method of tapering (or taper shape), but the effect of detrending is far more important. In practice, one should compute the CCF several ways: using the standard and local methods, different amounts of detrending, and with and without tapering.
## 5 Discussion and Conclusion
We have discussed some properties of the cross correlation function, specifically in the AGN echo mapping context. The two main issues we address are (1) the bias in the CCF and (2) the uncertainties in the CCF lag determinations. Both of these stem from finite duration sampling of the light curves, not irregular/sparse sampling or observational noise. Bias can also be introduced if low frequency power dominates the light curves. Since AGN light curves have a red power spectrum, this second source of bias is also present.
Because of the bias problem the CCF fails on average to reproduce the correct lag. The bias is inherent in the definition of the standard CCF itself and depends strongly on the ratio of the intrinsic (true) lag to the duration of the observed light curves and also the sharpness of the continuum autocorrelation function and transfer function. Unfortunately the amount of bias cannot be determined from the data themselves, i.e., one needs to know the true CCF in order to calculate the bias. As a result, exact corrections are impossible and simulations are required to estimate the statistical size of the bias. However, much of the bias can be removed by simply detrending (and to a lesser extent tapering) the light curves.
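Detrending of the kind discussed here amounts to subtracting a low-order least-squares polynomial fit before cross-correlating. A minimal sketch (illustrative only, not the authors' code; the synthetic light curve is our own):

```python
import numpy as np

def detrend(t, y, order=1):
    """Remove a least-squares polynomial of the given order from y(t).
    order=1 is linear detrending; higher orders need high S/N."""
    coeffs = np.polyfit(t, y, order)
    return y - np.polyval(coeffs, t)

# A noisy light curve with a linear secular trend:
t = np.linspace(0.0, 1.0, 200)
rng = np.random.default_rng(1)
y = 3.0 + 2.0 * t + rng.normal(scale=0.1, size=t.size)
resid = detrend(t, y)   # mean and slope removed before computing the CCF
```

Removing the fitted trend suppresses the lowest-frequency power, which is the component most prone to leaking and biasing the lag estimate.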
The impact on AGN variability studies is that the standard CCF tends to underestimate the true time lag, therefore the derived characteristic radius for the BLR is underestimated. From simulations designed to mimic the well–sampled light curves of NGC 5548, the estimated lag is too low by $`\sim `$ 5–10%; for more poorly sampled light curves the bias can be much larger. This bias amplitude is based on using the “local” CCF method, in which the means and standard deviations used to calculate the CCF are determined using only those parts of the light curves that overlap for a given lag. We find that the local CCF gives superior results compared to the standard definition of the CCF, where the bias can be 3 times larger. Although the size of the bias is relatively small compared to the intrinsic uncertainty in the measured lags of many AGN, as the quality of the data continues to improve, the bias will not be negligible and its effect should not be ignored.
We also find that the lag of the centroid of the CCF does not yield a more accurate representation of the BLR size because (1) it is more heavily biased than the peak of the CCF; (2) unlike the infinite case, the centroid of the sample CCF does not necessarily correspond to the centroid of the transfer function.
It has been observed that the H$`\beta `$ lag in NGC 5548 changes from year to year (Peterson et al. 1999) and this can be interpreted in several ways. The variations can be attributed to the AGN itself, e.g., the BLR structure may be evolving, or the illumination of the BLR by the photoionizing source may be changing (Wanders & Peterson 1996), or the engine producing the continuum variability is changing such that the continuum ACF is variable. However, an alternate explanation is simply that the changing CCF lag is due to finite duration sampling of the light curves. Simulations that mimic the optical continuum and H$`\beta `$ observations of NGC 5548 demonstrate that, even with perfect sampling and with a transfer function that has a well–defined peak, the scatter in measured CCF lags is large. Thus the scatter in the observed H$`\beta `$ lags in NGC 5548 can be attributed to finite duration sampling of a random process.
Observations have shown that AGN flux time series are not stationary on timescales spanning several observing seasons, i.e., the means and variances of the light curves do not remain constant from year to year. Since the continuum variability properties are not constant (in particular the ACFs), one cannot use the observed CCFs to unambiguously deduce changes in the transfer function.
It is of course possible that the changing lag is intrinsic to the AGN, but we have shown that the scatter in the lags are also consistent with the interpretation of being a consequence of finite duration sampling of a random walk–like process. Our simulations produce a distribution of lags that is as wide as the observed scatter. Given that the artificial data were oversampled, equally sampled, had perfectly known noise characteristics and a transfer function with a well–defined peak, the results of the simulations are highly robust.
To determine if the observed lag variations are intrinsic to the AGN, one needs to show that a realistic simulation produces a narrower scatter in the lag distribution than what is observed, or that yearly changes in the lags are not random. Given that much longer observing runs than what has already been obtained for NGC 5548 are not feasible, the resolution of the question of the significance of the changing lags will demand new data with much higher S/N. This would substantially tighten the scatter in the simulated lag distributions, while its effect on the observed scatter depends on whether the variations are intrinsic. Also, a better understanding of the continuum variability characteristics such as the power spectrum power–law exponent $`\alpha `$ would allow more realistic simulations. As we have shown, the scatter in the simulated lag distribution depends strongly on the power law slope of the power spectrum. In this regard, a Fourier analysis of the long–term NGC 5548 continuum light curve is warranted.
Finally, enumerated below are some practical suggestions that can improve the reliability of CCF lag determinations in AGN: (1) Detrending the light curves produces far more reliable CCFs. Linear detrending is required, and higher order detrending can be beneficial if the S/N is high; (2) The peak of the CCF gives a more reliable lag estimate than the centroid; (3) Tapering the light curves also has a beneficial effect, though not as significant as detrending; (4) The “local CCF” is less biased and therefore gives better results than the standard CCF, especially for small lags. However, if the light curves are detrended and tapered, the advantage the local CCF has over the standard CCF is small. If simulations are used to estimate uncertainties in the lag estimates, then: (5) The median of the CCPD is more reliable than the mean or mode for light curves that are not too heavily detrended; (6) An improvement of the bootstrap + Monte Carlo method (Peterson et al. 1999a) as described in §3.2 should be used. However, simulations of this nature can only yield an estimate of the uncertainty of the lag for that particular sample of light curves. Without an understanding of the true ACF itself (not the sample ACF), estimates based on resampling or perturbing the observed sample light curves can underestimate the variability of the lag.
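Suggestion (4), the “local CCF”, can be sketched as follows for evenly sampled light curves. This is a simplified illustration: the lag grid, the synthetic light curves, and the even sampling are our assumptions; real AGN data would additionally require handling irregular sampling.

```python
import numpy as np

def local_ccf(x, y, max_lag):
    """Cross-correlation in which the means and standard deviations are
    recomputed from only the overlapping parts of x and y at each lag."""
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.empty(lags.size)
    for i, k in enumerate(lags):
        if k >= 0:
            a, b = x[:len(x) - k], y[k:]
        else:
            a, b = x[-k:], y[:len(y) + k]
        a = a - a.mean()           # local mean of overlapping segment
        b = b - b.mean()
        r[i] = np.sum(a * b) / np.sqrt(np.sum(a**2) * np.sum(b**2))
    return lags, r

# A lagged copy of a red-noise "continuum" peaks at the true lag:
rng = np.random.default_rng(2)
x = np.cumsum(rng.normal(size=400))   # red-noise continuum light curve
y = np.roll(x, 5)                     # "line" light curve lagging by 5 bins
lags, r = local_ccf(x[50:-50], y[50:-50], max_lag=20)
best = lags[np.argmax(r)]
```

Because the normalization is recomputed at every trial lag from the overlapping segments only, the correlation coefficient is not diluted by non-overlapping data, which is the source of the reduced bias at small lags.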
The author thanks an anonymous referee whose comments led to a significant improvement in the depth and presentation of this work. The author wishes to express appreciation to E.L. Robinson for extremely valuable discussions and to Chris Koen for inspiring this investigation and for comments on a draft of this paper. The author also thanks Divas Sanwal for helpful comments throughout the various stages of this work. The author acknowledges with gratitude the work of the many members of the AGN Watch and the availability of the data and papers on their web site: “http://www.astronomy.ohio-state.edu/~agnwatch/”. This work was supported through NASA ADP grant NAG5-7002. |
# Evolution of Neutron-Star, Carbon-Oxygen White-Dwarf Binaries
## 1 Introduction
We consider the evolution of neutron-star, carbon-oxygen white-dwarf binaries using both the Bethe & Brown (1998) schematic analytic evolutions and the Portegies Zwart & Yungelson (1998) numerical population syntheses.
The scenario in which the circular neutron-star, carbon-oxygen white-dwarf binaries (which we denote as $`(ns,co)_𝒞`$ hereafter) have gone through common envelope evolution is considered. In conventional common envelope evolution for the circular binaries it is easy to see that the observed ratio of these to eccentric binaries (hereafter $`(ns,co)_e`$) should be $`\sim 50`$ because: (i) The formation rate of the two types of binaries is, within a factor 2, the same. (ii) The magnetic fields in the circular binaries will be brought down by a factor of $`\sim 100`$ by He accretion in the neutron-star, He-star phase following common envelope evolution, just as the inferred pulsar magnetic field strengths in the double neutron star binaries are brought down (Brown 1995). In the eccentric binaries the neutron star is formed last, after the white dwarf, so there is nothing to circularize its orbit. More important, its magnetic field will behave like that of a single star and will not be brought down from the $`B\sim 10^{12}`$ G with which it is born. (At least empirically, neutron star magnetic fields are brought down only in binaries, by accreting matter from the companion star, Taam & Van den Heuvel 1986, although Wijers 1997 shows the situation to be more complex.) Neutron stars with higher magnetic fields can be observed only for shorter times, because of more rapid spin down from magnetic dipole radiation. The time of possible observation goes inversely with the magnetic field $`B`$. We use the observability premium
$`\mathrm{\Pi }=10^{12}\mathrm{G}/B`$ (1)
(Wettig & Brown 1996) which gives the relative time a neutron star can be observed. Given our above point (ii), the circular binaries should have an observability premium $`\mathrm{\Pi }\sim 100`$ as compared with $`\mathrm{\Pi }\sim 1`$ for the higher magnetic field neutron star in an eccentric orbit. Correcting for the factor 2 higher formation rate of the eccentric binaries (point (i) above), this predicts the factor $`\sim 50`$ ratio of circular to eccentric binaries.
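The arithmetic behind this factor can be made explicit (illustrative only; the field values are the representative ones quoted in the text):

```python
# Observability premium Pi = 1e12 G / B (eq. 1): the relative time a
# pulsar can be seen before magnetic-dipole spin-down hides it.
def premium(B_gauss):
    return 1.0e12 / B_gauss

pi_recycled = premium(1.0e10)   # field lowered by He accretion (circular)
pi_fresh = premium(1.0e12)      # freshly born, high-field pulsar (eccentric)

# Eccentric binaries form ~2x as often, so the predicted ratio of
# observed circular to eccentric systems is (Pi_c / Pi_e) / 2.
ratio = (pi_recycled / pi_fresh) / 2.0
```

With these representative fields the predicted circular-to-eccentric ratio comes out at 50, as stated in the text.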
In our paper we cite one firm eccentric neutron-star, carbon-oxygen white-dwarf binary $`(ns,co)_e`$, B2303$`+`$46, and argue for a recently observed second one, J1141$`-`$65. Portegies Zwart & Yungelson (1999) suggest PSR 1820$`-`$11 may also be in this class, but cannot exclude the possibility that the neutron star companion is a main sequence star (Phinney & Verbunt 1991). This would imply that $`\gtrsim 100`$ such binaries with circular orbits should be observed. But, in fact, only one <sup>1</sup><sup>1</sup>1 We have a special scenario for evolving it; see section 3.2. B0655$`+`$64 is observed if we accept the developing consensus (Section 2.3) that those observed $`(ns,co)_𝒞`$ are evolved with avoidance of common envelope evolution. We are thus confronted by a big discrepancy, for which we suggest a solution.
In order to understand our solution, we need to review three past works. In the earlier literature the observed circular $`(ns,co)_𝒞`$ were evolved through common envelope, e.g., see Van den Heuvel (1994) and Phinney & Kulkarni (1994). Accretion from the evolving giant progenitor of the white dwarf was neglected, since it was thought that the accretion would be held to the Eddington rate of $`\dot{M}_{\mathrm{Edd}}\simeq 1.5\times 10^{-8}\text{ }M_{\odot }`$ yr<sup>-1</sup>, and in the $`\sim 1`$ year long common envelope evolution a negligible amount of matter would be accreted. We term this the standard scenario. However, Chevalier (1993) showed that once $`\dot{M}`$ exceeded $`10^4\dot{M}_{\mathrm{Edd}}`$, the inflow was no longer held up by the radiation pressure of the X rays from the neutron star, but swept the photons inwards in an adiabatic inflow. Bethe & Brown (1998) employed this hypercritical accretion in their evolution of double neutron star $`(ns,ns)`$ and neutron-star, low-mass black-hole binaries $`(ns,lmbh)`$ and we shall use the same techniques in binary evolution here. In particular, these authors found that including hypercritical accretion in the standard scenario for double-neutron-star binary evolution, the first born neutron star went into a low-mass black hole. To avoid the neutron star going through the companion’s envelope, a new scenario was introduced beginning with a double He star binary. It gives about the right number of double neutron star binaries.
A new development has been that most of the circular $`(ns,co)_𝒞`$ are currently evolved with avoidance of common envelope evolution. In Section 2.3, we shall summarize this work, carried out independently by King & Ritter (1999) and Tauris, Van den Heuvel & Savonije (2000). If we accept the new scenario, at most one or two circular $`(ns,co)_𝒞`$ that went through common envelope evolution have been observed. Yet, in the standard scenario at least $`50`$ of them should be seen.
In this paper we find that in $`(ns,co)`$ which do go through common envelope evolution, the neutron star goes into a black hole. The $`(ns,co)`$ binaries observed to date have been identified through radio emission from the neutron star. Thus, binaries containing a low-mass black hole would not have been seen. We discuss masses for which neutron stars go into black holes.
Although the main point of our paper relies only on relative formation rates, we shall show in Appendix A that the Bethe & Brown (1998) schematic, analytic analysis agrees well with the detailed numerical population synthesis of Portegies Zwart, once both are normalized to the same supernova rate.
## 2 The Problem
### 2.1 Standard Scenario vs Observations
Portegies Zwart & Yungelson (1998), in a very careful population synthesis, have calculated the expected number of newly created binaries of compact stars (neutron stars or black holes) and white dwarfs. Among the latter, they distinguish between those consisting of helium and those consisting of carbon-oxygen (denoted $`co`$). To make an eccentric binary containing a neutron star, the supernova must occur after the carbon-oxygen star has formed (Portegies Zwart & Yungelson 1999). To make a circular binary containing a neutron star it is necessary that its companion be close so that at some earlier stage in evolution (but after formation of the neutron star) there was mass transfer or strong tidal interaction, which requires the companion to (nearly) fill its Roche Lobe.
Since the Bethe & Brown (1998) schematic calculations did not include mass exchange, which is very important in evolving $`(ns,co)_e`$ binaries, we need the more complete calculations of Portegies Zwart & Yungelson, which are listed in Table 1. We discuss this in more detail later. These do not include hypercritical accretion; i.e., they follow the standard scenario. In this case the formation ratio of $`(ns,co)_𝒞`$ to $`(ns,co)_e`$ is $`17.7/32.1=0.55`$. We now make the case that if the $`(ns,co)_𝒞`$ were to be formed through common envelope evolution (Phinney & Kulkarni 1994; Van den Heuvel 1994) in the standard scenario, their pulsar magnetic fields would be brought down to $`B\sim 10^{10}`$ G because of the similarity to binary neutron star systems in which this occurs. In detail, this results from helium accretion during the neutron-star, He-star stage which precedes the final binary (Brown 1995, Wettig & Brown 1996).
Detailed calculations of Iben & Tutukov (1993) for original donor masses $`4`$–$`6\text{ }M_{\odot }`$ of the white dwarf progenitor show that following common envelope evolution the remnant stars fill their Roche lobes and continue to transfer mass to their companion neutron star. These remnants consist of a degenerate carbon-oxygen core and an evolving envelope undergoing helium shell burning. The mass transfer to the neutron star is at a rate $`\dot{M}<10^4\dot{M}_{\mathrm{Edd}}`$, the lower limit for hypercritical accretion, so it is limited by Eddington. Van den Heuvel (1994) estimates that the neutron star accretes about $`0.045\text{ }M_{\odot }`$ and $`0.024\text{ }M_{\odot }`$ in the case of the ZAMS $`5\text{ }M_{\odot }`$ and $`6\text{ }M_{\odot }`$ stars, and $`0.014\text{ }M_{\odot }`$ for a $`4\text{ }M_{\odot }`$ star, where these ZAMS masses refer to the progenitors of the white dwarfs. The accretion here is of the same order, roughly double,<sup>2</sup><sup>2</sup>2 The He burning time to be used for the progenitor of the white dwarf is $`10^6`$ years, whereas for the relativistic binary pulsars the average time of $`5\times 10^5`$ years is more appropriate, so one would expect a factor $`2`$ greater accretion. the wind accretion used by Wettig & Brown (1996) in the evolution of the relativistic binary pulsars B1534$`+`$12 and B1913$`+`$16. There the magnetic fields were brought down by a factor $`\sim 100`$ from $`B\sim 10^{12}`$ G to $`10^{10}`$ G, increasing the observability premium $`\mathrm{\Pi }`$ by a factor of $`\sim 100`$. Thus, the scenario in which the $`(ns,co)_𝒞`$ are produced through common envelope evolution without hypercritical accretion should furnish them with $`\mathrm{\Pi }\sim 100`$, by helium accretion following the common envelope. Although the detailed description may not be correct, the similarity of evolution of $`(ns,co)_𝒞`$ with that of binary neutron stars in the older works (Phinney & Kulkarni 1994; Van den Heuvel 1994) should furnish these with about the same $`\mathrm{\Pi }`$.
There is one confirmed $`(ns,co)_e`$, namely B2303$`+`$46, see Table 2, so there should be about 50 circular ones which went through common envelope evolution. Indeed, several circular ones have been observed (see Table 2), and one or two of these may have gone through common envelope evolution. Thus we have a big discrepancy between the standard scenario and the observations. In the next section, we discuss the possibility of PSR J1141$`-`$6545 being $`(ns,co)_e`$, which enhances the discrepancy.
### 2.2 Is PSR J1141$`-`$6545 $`(ns,co)_e`$ ?
Not only is the eccentric B2303$`+`$46 quite certain, but a relativistic counterpart, PSR J1141$`-`$6545, has recently been observed (Kaspi et al. 2000), in an eccentric orbit. The inferred magnetic dipole strength is $`1.3\times 10^{12}`$ G, and the total mass is $`2.300\pm 0.012\text{ }M_{\odot }`$. Kaspi et al. argue that the companion of the neutron star can only be a white dwarf or neutron star. With a total mass of $`2.3\text{ }M_{\odot }`$, if J1141$`-`$65 were to contain two neutron stars, each would have to have a mass of $`1.15\text{ }M_{\odot }`$, well below the 19 accurately measured neutron star masses, see Fig. 1 (Thorsett & Chakrabarty 1999).
We can understand the absence of binary neutron stars with masses below $`1.3\text{ }M_{\odot }`$, although neutron stars of this mass are expected to result from the relatively copious main sequence stars of ZAMS mass $`10`$–$`13\text{ }M_{\odot }`$, from the argument of Brown (1997). The He stars in the progenitor He-star, pulsar binary of mass $`\lesssim 4\text{ }M_{\odot }`$ (Habets 1986) expand substantially during He shell burning. Accretion onto the nearby pulsar sends it into a black hole. Indeed, with inclusion of mass loss by helium wind, He stars of masses up to 6 or $`7\text{ }M_{\odot }`$ expand in this stage (Woosley, Langer & Weaver 1995). Fryer & Kalogera (1997) find that special kick velocities need to be selected in order to prevent the pulsars in PSR 1913$`+`$16 and PSR 1534$`+`$12 from going into black holes by reverse Case C mass transfer (mass transfer from the evolving He star companion onto the pulsar in the He-star, neutron-star stage which precedes that of the binary of compact objects).
Our above argument says that the first neutron star formed in these would be sent into a black hole when its companion He star evolved and poured mass onto it. Therefore, we believe the companion in J1141$`-`$65 must be a white dwarf. Earlier, Tauris & Sennels (2000) developed the case that J1141$`-`$65 was an eccentric neutron-star, white-dwarf binary. Given the high magnetic field of J1141$`-`$65 ($`1.3\times 10^{12}`$ G) with low observability premium of 0.77, this would increase the predicted observed number of circular $`(ns,co)_𝒞`$ which had gone through common envelope evolution to $`\sim 130`$ in the standard scenario.
### 2.3 Evolution of Neutron-Star, Carbon-Oxygen White-Dwarf Binaries with Avoidance of Common Envelope Evolution
Our discussion of the common envelope evolution in the last section applied to convective donors. In case the donor is radiative or semiconvective, common envelope evolution can be avoided. Starting from the work of Savonije (1983), Van den Heuvel (1995) proposed that most low mass X-ray binaries would evolve through a Her X-1 type scenario, where the radiative donor, more massive than the neutron star, poured matter onto its accretion disk at a super-Eddington rate, during which time almost all of the matter was flung off. This involved Roche Lobe overflow. Although Van den Heuvel limited the ZAMS mass of the radiative donor to $`2.25\text{ }M_{\odot }`$ in order to evolve helium white-dwarf, neutron-star binaries, his scenario has been extended to higher ZAMS mass donors in order to evolve the carbon-oxygen white-dwarf, neutron-star binaries. The advection dominated inflow-outflow solutions (ADIOS) of Blandford & Begelman (1999) suggest that the binding energy released at the neutron star can carry away mass, angular momentum and energy from the gas accreting onto the accretion disk provided the latter does not cool too much. In this way the binding energy of gas at the neutron star can carry off $`10^3`$ grams of gas at the accretion disk for each gram accreting onto the neutron star. King & Begelman (1999) suggest that such radiatively-driven outflows allow the binary to avoid common envelope evolution.
As noted above, for helium white dwarf companions, Van den Heuvel (1995) had suggested Cyg X-2 as an example following the Her X-1 scenario. King & Ritter (1999) calculated the evolution of Cyg X-2 in the ADIOS scenario in detail. These authors also evolved the $`(ns,co)_𝒞`$ binaries in this way, using donor stars of ZAMS masses $`4`$–$`7\text{ }M_{\odot }`$. Tauris, Van den Heuvel, & Savonije (1999) have carried out similar calculations, with stable mass transfer. These authors find that even for extremely high mass-transfer rates, up to $`\dot{M}\sim 10^4\dot{M}_{\mathrm{Edd}}`$, the system will be able to avoid a common envelope and spiral-in evolution.
Tauris, Van den Heuvel & Savonije (2000) evolve J1453$`-`$58, J1435$`-`$60 and J1756$`-`$5322, the three lowest entries in our Table 2, through common envelope. We obtained the eccentricities and $`\dot{P}`$’s for the first two of these (Fernando Camilo, private communication). The binary J1453$`-`$58, quite similar to J0621$`+`$1002, has a substantial eccentricity and clearly should be evolved with a convective donor as Tauris et al. did for J0621$`+`$1002. The spin periods of J1435$`-`$60 and J1756$`-`$5322 are short, indicating greater recycling than the other listed pulsars. It would seem difficult to get the inferred magnetic field down to the $`4.7\times 10^8`$ G of J1435$`-`$60 by the Iben & Tutukov or Wettig & Brown accretion scenarios following common envelope evolution as discussed in Section 2.1. If, however, one does believe that J1435$`-`$60 and J1756$`-`$5322 have gone through common envelope, the discrepancy between predicted and observed circular binaries in the standard scenario is only slightly relieved.
### 2.4 Are There Observational Selection Effects ?
In Table 3 we have tabulated $`S_{400}\times d^2`$ in order to see whether the normalized intensity gives strong selection effects. Note that the 35.95 for B2303$`+`$46 is not so different from the 43.56 and 203.35 for B1534$`+`$12 and B1913$`+`$16, respectively. For the circular binaries $`(ns,co)_𝒞`$ the intensities are less, but their empirical observability premium $`\mathrm{\Pi }`$ is much larger. There may be other observational selection effects, but we believe that none are strong enough to compensate for the factor 100 discrepancy between the observed population and the one expected from the standard model. So the problem remains the same.
## 3 The Answer
### 3.1 Black Hole Formation in Common Envelope Evolution
We believe the answer to the missing binaries is that the neutron star goes into a black hole in common envelope evolution, as we now describe. We label the mass of the neutron star as $`M_A`$ and that of the giant progenitor of the white dwarf as $`M_B`$. Following Bethe & Brown (1998) we choose as variables the neutron star mass $`M_A`$ and $`Y\equiv M_B/a`$, where $`a`$ is the orbital radius. From their eq. (5.12) we find
$`{\displaystyle \frac{M_{A,f}}{M_{A,i}}}=\left({\displaystyle \frac{Y_f}{Y_i}}\right)^{1/(c_d-1)}`$ (2)
where $`c_d`$ is the drag coefficient. From Shima et al. (1985) we take
$`c_d=6.`$ (3)
We furnish the energy to remove the hydrogen envelope of the giant B (multiplied by $`\alpha _{ce}^1`$, where $`\alpha _{ce}`$ is the efficiency of coupling of the orbital motion of the neutron star to the envelope of B) by the drop in orbital energy of the neutron star; i.e.,
$`{\displaystyle \frac{0.6GM_{B,i}Y_i}{\alpha _{ce}}}={\displaystyle \frac{1}{2}}GM_{A,i}Y_i\left({\displaystyle \frac{Y_f}{Y_i}}\right)^{6/5}.`$ (4)
Here the $`0.6GM_{B,i}Y_i`$ is just the binding energy of the initial giant envelope, found by Applegate (1997) to be $`0.6GM_{B,i}^2a_i^{-1}`$, and the right hand side of the equation is the final gravitational binding energy $`\frac{1}{2}GM_{A,f}M_{B,f}a_f^{-1}`$ in our variables. Using eqs. (2) and (3) in eq. (4) one finds
$`{\displaystyle \frac{M_{A,f}}{M_{A,i}}}=\left({\displaystyle \frac{1.2M_{B,i}}{\alpha _{ce}M_{A,i}}}\right)^{1/c_d}.`$ (5)
For the sake of argument, we take the possible range of initial neutron star mass to be $`1.2`$–$`1.5\text{ }M_{\odot }`$ (the upper bound is the Brown & Bethe (1994) mass at which a neutron star goes into a low-mass black hole), and the main sequence progenitor masses of the carbon-oxygen white dwarf to be $`M_{B,i}=2.25`$–$`10\text{ }M_{\odot }`$. As we show in Appendix C, in the Bethe & Brown (1998) schematic model, mass transfer was assumed to take place when the evolving giant reached the neutron star, whereas more correctly it begins when the envelope of the giant comes to its Roche Lobe. For the masses we employ, main sequence progenitors of the carbon-oxygen white dwarf of $`2.25`$–$`10\text{ }M_{\odot }`$, the fractional Roche Lobe radius is
$`r_L\simeq 0.5.`$ (6)
The binding energy of the progenitor giant at its Roche Lobe is, thus, double what it would be at $`a_i`$, the separation of giant and neutron star. Therefore, a Bethe & Brown $`\alpha _{ce}=0.5`$ corresponds to a true efficiency $`\widehat{\alpha }_{ce}1`$, if the latter is defined as the value for which the envelope removal energy, at its Roche Lobe, is equal to the drop in neutron star orbital energy as it moves from $`a_i`$ to $`a_f`$. If we take $`\alpha _{ce}=0.5`$ in eq. (5) we find, given our assumed possible intervals
$`1.54\text{ }M_{\odot }\lesssim M_{A,f}\lesssim 2.38\text{ }M_{\odot }.`$ (7)
These are above the neutron star mass limit of $`1.5\text{ }M_{\odot }`$ (Brown & Bethe 1994) beyond which a neutron star goes into a low-mass black hole. Thus, all neutron stars with common envelope evolution in our scenario evolve into black holes. This solves the big discrepancy between the standard scenario and observation. The only remaining problem is the evolution of B0655$`+`$64, which survived the common envelope evolution, and we suggest a special scenario for it in the next section.
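Equations (5)–(7) are easy to check numerically. A short sketch with the parameter choices stated in the text ($`\alpha _{ce}=0.5`$, $`c_d=6`$):

```python
def final_ns_mass(M_A_i, M_B_i, alpha_ce=0.5, c_d=6.0):
    """Final neutron-star mass after common envelope spiral-in, eq. (5):
    M_Af = M_Ai * (1.2 * M_Bi / (alpha_ce * M_Ai))**(1/c_d)."""
    return M_A_i * (1.2 * M_B_i / (alpha_ce * M_A_i)) ** (1.0 / c_d)

# Extremes of the assumed intervals: lightest neutron star with the
# lightest donor, heaviest neutron star with the heaviest donor.
low = final_ns_mass(1.2, 2.25)    # ~1.54 Msun, the lower bound of eq. (7)
high = final_ns_mass(1.5, 10.0)   # ~2.38 Msun, the upper bound of eq. (7)
```

Both extremes exceed the $`1.5\text{ }M_{\odot }`$ Brown & Bethe limit, which is the numerical content of the statement that every neutron star surviving this spiral-in collapses to a low-mass black hole.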
### 3.2 Is B0655$`+`$64 a problem?
Van den Heuvel & Taam (1984) were the first to notice that the $`(ns,co)_𝒞`$ system B0655$`+`$64 might have been formed in a similar way as the double neutron stars. The short period of 1.03 days, magnetic field $`\sim 10^{10}`$ G, and the high companion mass of $`1\text{ }M_{\odot }`$ make this binary most similar to a binary neutron star, but with a carbon-oxygen white-dwarf companion, resulting from probable ZAMS masses of $`5`$–$`8\text{ }M_{\odot }`$. For a $`1.4\text{ }M_{\odot }`$ neutron star with a $`1\text{ }M_{\odot }`$ white-dwarf companion, $`a_f=5.7R_{\odot }`$.
The similarity of B0655$`+`$64 to the close neutron star binaries suggests the double helium star scenario (Brown 1995) to calculate the evolution. The ZAMS mass of the primary is chosen to be just above the limit for going into a neutron star, that of the secondary just below. For the double He star scenario the ZAMS masses of primary and secondary cannot be more than $`5\%`$ different. However, in this case the ratio $`q`$ of masses is so close to unity that the secondary will not be rejuvenated (Braun & Langer 1995: If the core burning of hydrogen to helium in the companion star is nearly complete, the accreted matter would have to cross a molecular weight barrier in order to burn and if $`q`$ is near unity there is not time enough to do so. Thus He cores of both stars will evolve as if the progenitors never had more than their initial ZAMS mass.)
What we have learned recently about effects of mass loss (Wellstein & Langer 1999) will change the Brown (1995) scenario in detail, but not in general concept. A $`\sim 10\text{ }M_{\odot }`$ ZAMS star which loses mass in RLOF to a lower mass companion will burn helium as a lower-mass star due to subsequent mass loss by helium winds, roughly as an $`8\text{ }M_{\odot }`$ star (Wellstein & Langer, in preparation). Thus, the primary must have ZAMS mass $`\gtrsim 10\text{ }M_{\odot }`$ in this case in order to evolve into a neutron star following mass loss. Although the secondary will not be rejuvenated as mass is transferred to it, it will burn helium without helium wind loss because it is clothed with a hydrogen envelope. Thus, a secondary of ZAMS $`8\text{ }M_{\odot }`$ will burn He roughly as the primary of $`10\text{ }M_{\odot }`$ in the situation considered. Given these estimates, a primary of ZAMS mass $`M\lesssim 10\text{ }M_{\odot }`$ will evolve into a white dwarf, whereas a secondary of mass $`\gtrsim 8\text{ }M_{\odot }`$ will end up as a neutron star. Of course, the former must be more massive than the latter, but stars in this mass range are copious because this is the lowest mass range from which neutron stars can be evolved, so there will be many such cases.
This scenario might not be as special as outlined, because the fate of stars of ZAMS mass $`8`$–$`10\text{ }M_{\odot }`$, which do not form iron cores but do burn in quite different ways from more massive stars, is somewhat uncertain in the literature. Whereas it is generally thought that single stars in this range end up as neutron stars, it has also been suggested that some of them evolve as AGB stars ending in white dwarfs. In terms of these discussions it does not seem unlikely that with two stars in the binary of roughly the same mass, the first to evolve will end up as a neutron star and the second as a white dwarf, especially if the matter transferred in RLOF cannot rejuvenate the companion.
Van den Heuvel & Taam (1984) evolved B0655$`+`$64 by common envelope evolution. In taking up the problem again, Tauris, Van den Heuvel & Savonije (2000) in agreement with King & Ritter (1999) find that B0655$`+`$64 cannot be satisfactorily evolved with their convective donor scenario. Tauris et al. suggest a spiral-in phase is the most plausible scenario for the formation of this system, but we find that the neutron star would go into a black hole in this scenario, unless the two progenitors burn He at the same time.
### 3.3 Neutron Star Masses
There is by no means agreement about maximum and minimum neutron star masses in the literature. The mass determinations of Vela X-1 have been consistently higher than the Brown & Bethe $`1.5\,M_{\odot }`$, which is consistent with the well measured neutron star masses in Fig. 1. In a recent careful study at ESO, Barziv et al. (2000), as reported by Van Kerkwijk (2000), obtain
$`M_{NS}=1.87_{-0.17}^{+0.23}\,M_{\odot }.`$ (8)
Even at 99% confidence level, $`M_{NS}>1.6\,M_{\odot }`$. Taking the maximum mass to be $`1.87\,M_{\odot }`$ and $`\alpha _{ce}=0.5`$, $`(M_{NS})_{min}=1.2\,M_{\odot }`$, one finds from eq. (5) that the maximum carbon-oxygen white dwarf progenitor mass of $`(ns,co)_𝒞`$ is
$`\left(M_{B,i}\right)_{max}=\alpha _{ce}\left({\displaystyle \frac{M_{A,f}}{M_{A,i}}}\right)^6{\displaystyle \frac{M_{A,i}}{1.2}}\simeq 7.2\,M_{\odot }.`$ (9)
Although there is some uncertainty in the efficiency $`\alpha _{ce}`$, the ratio $`M_{B,i}/M_{\odot }`$ is much more sensitive to $`M_{A,f}`$ because of the 6th power of the ratio in eq. (9).<sup>3</sup><sup>3</sup>3 With $`M_{NS}=1.5\,M_{\odot }`$ we get $`(M_{B,i})_{max}=1.9\,M_{\odot }`$, which is below the minimum $`M_B`$ ($`2.25\,M_{\odot }`$) for forming a carbon-oxygen white dwarf, so no $`(ns,co)_𝒞`$ survive the common envelope evolution. But then one cannot explain why no $`(ns,co)_𝒞`$ (except B0655$`+`$64) which survived the common envelope evolution are seen, since this mass is high enough to give those of the white dwarf companions.
Distortion of the $`20\,M_{\odot }`$ B-star companion by the neutron star in Vela X-1 brings in large corrections (Zuiderwijk et al. 1977, van Paradijs et al. 1977a, van Paradijs et al. 1977b), making measurement of neutron star masses in high-mass X-ray binaries much more difficult than those with degenerate companions.
Given the $`Y_e\simeq 0.43`$ at the collapse of the core of a large star (Aufderheide et al. 1990) one finds the cold Chandrasekhar mass to be
$`M_{CS}=5.76Y_e^2\,M_{\odot }\simeq 1.06\,M_{\odot }`$ (10)
where $`Y_e`$ is the ratio of the number of electrons to the number of nucleons. Thermal corrections increase this a bit, whereas lattice corrections on the electrons decrease it, so that when all is said and done, $`M_{CS}\gtrsim 1.1\,M_{\odot }`$ (Shapiro & Teukolsky 1983). The major dynamical correction to this is from fallback following the supernova explosion. We believe that fallback in supernova explosions will add at least $`0.1\,M_{\odot }`$ to the neutron star, since bifurcation of the matter going out and in happens at about 4000 km (Bethe & Brown 1995). Thus our lower limit of $`1.2\,M_{\odot }`$ is reasonable.
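As a quick arithmetic check of eq. (10) (a sketch, not the authors' code):

```python
# Cold Chandrasekhar mass M_CS = 5.76 Y_e^2 M_sun, evaluated at Y_e = 0.43.
def chandrasekhar_mass(Y_e):
    """Return M_CS in solar masses."""
    return 5.76 * Y_e**2

M_CS = chandrasekhar_mass(0.43)
print(f"M_CS = {M_CS:.3f} M_sun")   # ~1.06 M_sun, as quoted in eq. (10)
```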
## 4 Discussion and Conclusions
At least one, but more likely two or more, $`(ns,co)_{}`$ binaries with an unrecycled pulsar have been observed. According to the standard scenario for evolving neutron stars which are recycled in a common envelope evolution we then expect to observe $`\gtrsim 50`$ $`(ns,co)_𝒞`$. We only observe B0655+64 (which we evolve in our double He-star way) and possibly one or two binaries that went through common envelope evolution, and from that we conclude that the standard scenario must be revised. Introducing hypercritical accretion into common envelope evolution (Brown 1995; Bethe & Brown 1998) removes the discrepancy.
We believe that the evolution of the other $`(ns,co)_𝒞`$ binaries may originate from systems with a neutron star with a radiative or semi-convective companion. The accretion rate in these systems can be as high as $`10^4\dot{M}_{\mathrm{Edd}}`$ but common envelope evolution is avoided. This possibility, however, does not affect our conclusion concerning hypercritical accretion.
It is difficult to see “fresh” (unrecycled) neutron stars in binaries because they don’t shine for long. B2303$`+`$46 (Table 2) is the firmest example of a $`(ns,co)_{}`$ binary with a fresh neutron star. Although binaries where a “fresh” neutron star is accompanied by a black hole have similar birthrates ($`\sim 10^{-4}`$ yr<sup>-1</sup> for both types; Bethe & Brown, 1999, and Portegies Zwart & Yungelson 1999) and lifetimes, none are observed. In Appendix A we quote results of Ramachandran & Portegies Zwart (1998) that show there is an observational penalty which disfavors the observation of neutron stars with black holes as companions, because of the difficulty in identifying the pulsar due to the Doppler shift which smears out the signal in these short-period objects. Because of the longer orbital period and lower companion mass of $`(ns,co)_{}`$, such binaries are less severely plagued by this effect, although the recently discovered J1141$`-`$6545 is a relativistic binary with 5 hr period. We therefore argue that it is not unreasonable that no $`(lmbh,ns)`$ binaries have yet been observed, but that they should be actively searched for, since the probability of seeing them is not far down from that of seeing the $`(ns,co)_{}`$’s.
We are grateful to Fernando Camilo for information in Table 2. We would like to thank Brad Hansen, Marten van Kerkwijk, Ralph Wijers, Thomas Tauris and Lev Yungelson for useful discussions and advice, and thank Justin Holmer for providing us with the convective envelopes for stars in the giant phase. GEB would like to thank Brad Hansen also for correspondence which started him on this problem. GEB & CHL were supported by the U.S. Department of Energy under grant DE-FG02-88ER40388. SPZ was supported by NASA through Hubble Fellowship grant HF-01112.01-98A by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA under contract NAS 5-26555. SPZ is grateful for the hospitality of the State University of New York at Stony Brook.
## Appendix A Comparison of Population Syntheses
In this Appendix we first compare results that the Bethe & Brown (1998) schematic analytic evolution would have given without hypercritical accretion with the Portegies Zwart & Yungelson (1998, 1999) results of Table 1, which do not include hypercritical accretion. We can then illustrate how hypercritical accretion changes the results.
Without hypercritical accretion the $`(lmbh,ns)`$ binaries of Bethe & Brown would end up rather as neutron star binaries $`(ns,ns)`$, giving a summed formation rate of $`1.1\times 10^{-4}`$ yr<sup>-1</sup>, to compare with $`1.1\times 10^{-4}`$ yr<sup>-1</sup> from the numerically driven population synthesis results of Portegies Zwart presented in Table 1. This good agreement indicates that the kicks that the neutron star receives in formation were implemented in the same way in the two syntheses. Introduction of hypercritical accretion leaves only those neutron stars which do not go through a common envelope; i.e., those in the double He star scenario of Brown (1995), with formation rate $`\sim 10^{-5}`$ yr<sup>-1</sup>. This is much closer to the estimated empirical rate of $`8\times 10^{-6}`$ yr<sup>-1</sup> of Van den Heuvel & Lorimer (1996), which equals the rate derived independently by Narayan et al. (1991) and Phinney (1991). Large poorly known factors are introduced in arriving at these “empirical” figures, so it is useful that our theoretical estimates end up close to them. In our theoretical estimates the possibility described earlier that the pulsar in the lower mass binary pulsars goes into a black hole in the He shell burning stage of the progenitor He-star, neutron-star binary (Brown 1997) was not taken into account, and this process may change $`\sim `$ half of the remaining neutron star binaries in our evolution into $`(lmbh,ns)`$ binaries.
Bethe & Brown (1998) had a numerical symmetry between high-mass binaries in which both massive stars go supernova and those in which the more massive one goes supernova and the other, below the assumed dividing mass of $`10\,M_{\odot }`$, did not; i.e., the number of binaries was equal in the two cases. Taking ZAMS mass progenitors of 2.3–10 $`M_{\odot }`$ for carbon-oxygen white dwarfs, we then find a rate of
$`R=2\times {\displaystyle \frac{10-2.3}{10}}\times 1.1\times 10^{-4}\mathrm{yr}^{-1}=16.9\times 10^{-5}\mathrm{yr}^{-1}`$ (A1)
for the formation rate of $`(ns,co)_𝒞`$ binaries. The $`1.1\times 10^{-4}`$ yr<sup>-1</sup> is taken from the last paragraph and applies here because of the numerical symmetry mentioned above. The factor 2 results because there is no final explosion of the white dwarf to disrupt the binary, as there was above in the formation of the neutron star. This rate $`R`$ is to be compared with $`(ns,co)_𝒞=17.7\times 10^{-5}`$ yr<sup>-1</sup> in Table 1. These $`(ns,co)_𝒞`$ binaries are just the ones in which the neutron star goes into a black hole in common envelope evolution, unless the masses of the two initial progenitors are so close that they burn He at the same time. Then a binary such as B0655$`+`$64 can result, since the two helium stars then go through a common envelope, rather than the neutron star and main sequence star.
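The arithmetic of eq. (A1) can be checked directly (a sketch; rates from the text):

```python
# R = 2 x (10 - 2.3)/10 x 1.1e-4 /yr: the 2.3-10 M_sun white-dwarf
# progenitor window, normalized to the 1.1e-4 /yr rate quoted above.
ns_ns_rate = 1.1e-4                        # yr^-1
R = 2.0 * (10.0 - 2.3) / 10.0 * ns_ns_rate
print(f"R = {R:.3e} yr^-1")                # ~16.9e-5 yr^-1
```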
It is of interest to compare the populations of $`(ns,ns)`$ binaries with the $`(ns,co)_{}`$ binaries. We must rely on the Portegies Zwart result for the latter, which cannot be evolved without mass transfer, which is not included in the Bethe & Brown evolution. The $`(ns,ns)`$ binaries involve common envelope evolution, whereas the $`(ns,co)_{}`$ do not. Thus, results for the rates should differ substantially in the standard scenario, which does not include hypercritical accretion, and our scenario which does. The rates for $`(ns,ns)`$ and $`(ns,co)_{}`$ from Table 1 are $`10.6\times 10^{-5}`$ yr<sup>-1</sup> and $`32.1\times 10^{-5}`$ yr<sup>-1</sup>. The $`(ns,ns)`$ are recycled in the He-star, pulsar stage by the He wind, giving an observability premium of $`\mathrm{\Pi }\sim 100`$ (Brown 1995). The pulsar in the $`(ns,co)_{}`$ is not recycled. Thus, the expected observational ratio is
$`{\displaystyle \frac{(ns,ns)}{(ns,co)_{}}}\sim {\displaystyle \frac{100\times 10.6}{32.1}}\simeq 33.`$ (A2)
Now B2303$`+`$46, and possibly B1820$`-`$11 and J1141$`-`$6545, lie in the $`(ns,co)_{}`$ class, whereas B1534$`+`$12 and B1913$`+`$16 are relativistic binary neutron stars with recycled pulsars. We do not include the neutron star binary 2127$`+`$11C, although it has the same $`B`$ as the other two. It is naturally explained as resulting from an exchange reaction between a neutron star and a binary which took place $`<10^8`$ years ago in the cluster core of M15 (Phinney & Sigurdsson 1991). Thus, the empirical ratio eq. (A2) is not much different from unity. In Bethe & Brown (1998) common envelope evolution cuts the $`(ns,ns)`$ rate down by a factor of 11, only the 1/11 of the binaries which burn He at the same time surviving. The remaining factor 3 is much closer to observation. Furthermore, Ramachandran & Portegies Zwart (1998) point out that there is an observational penalty of a factor of several disfavoring the relativistic binary neutron stars because of the difficulty in identifying them due to the Doppler shift which smears out the signal in these short-period objects. Some observational penalty should, however, also be applied to J1141$`-`$6545, which is a relativistic binary. We estimate that the combination of neutron stars going into black holes in common envelope evolution and the greater difficulty in seeing them will bring the ratio of 33 in eq. (A2) down to $`\sim `$ 1 or 2, close to observation.
We have shown that there is remarkable agreement between the Bethe & Brown (1998) schematic analytic population synthesis and the computer driven numerical synthesis of Portegies Zwart & Yungelson (1998). This agreement can be understood by the scale invariance in the assumed logarithmic distribution of binary separations. In general we are interested in the fraction of binaries which end up in a given interval of $`a`$. E.g., in Bethe & Brown (1998) that fraction was
$`d\varphi ={\displaystyle \frac{d(\mathrm{ln}a)}{7}}`$ (A3)
where $`d(\mathrm{ln}a)`$ was the logarithmic interval between the $`a_i`$ below which the stars in the binary would merge in common envelope evolution and $`a_f`$, the largest radius for which they would merge in a Hubble time. Here 7 was the assumed initial logarithmic interval over which the binaries were distributed. Thus, the desired fraction
$`d(\mathrm{ln}a)=\mathrm{\Delta }a/a`$ (A4)
is scale invariant. Mass exchange in the evolution of the binary will change the values of $`a_i`$ and $`a_f`$ delineating the favorable logarithmic interval, but will not change the favorable $`d(\mathrm{ln}a)`$. Of course, when He stars go into neutron stars, the probability of the binary surviving the kick velocity does depend on the actual value of $`a`$, violating the scale invariance. But this does not seem to be a large effect in the calculations. In the case of the formation of $`(ns,co)_{}`$ binaries, the neutron star is formed last, out of the more massive progenitor. Mass transfer is required for this, because otherwise the more massive progenitor would explode first. The mass must not only be transferred, but must be accepted, so that the companion star is rejuvenated (unless $`q\simeq 1`$ as discussed). We need the Portegies Zwart & Yungelson detailed numerical program for this. In fact, in calculations with this program (see Table 1) the formation of $`(ns,co)_{}`$ binaries is nearly double that of $`(ns,co)_𝒞`$ ones. However, for $`q\gtrsim 0.75`$, where $`q`$ is the mass ratio of the original ZAMS progenitors, Braun & Langer (1995) showed that the transferred hydrogen has trouble passing the molecular weight barrier in the companion, so that the latter would not be rejuvenated. We have not included this effect here, but roughly estimate that it will lower the predicted numbers of $`(ns,co)_{}`$ by a factor $`>2`$, bringing it down below the number of $`(ns,co)_𝒞`$, exacerbating the problems of the standard model of binary evolution.
In the literature one sees statements such as “Population syntheses are plagued by uncertainties”. It is, therefore, important to show that when the same assumptions about binary evolution are made and when the syntheses are normalized to the same supernova rates, similar results are obtained. The evolution in the Bethe & Brown (1998) schematic way is simple, so that the effects of changes in assumptions are easily followed.
## Appendix B Hypercritical Accretion
We develop here a simple criterion for the presence of hypercritical accretion. We further show that if it holds in common envelope evolution for one separation $`a`$ of the compact object met by the expanding red giant or supergiant, it will also hold for other separations and for other times during the spiral in. We assume the envelope of the giant to be convective.
In the rest frame of the compact object, Bondi-Hoyle-Lyttleton accretion of the envelope matter (hydrogen) of density $`\rho _{\infty }`$ and velocity $`V`$ is (for $`\mathrm{\Gamma }=5/3`$ matter)
$`\dot{M}=2.23\times 10^{29}(M_{co}/M_{\odot })^2V_8^{-3}\rho _{\infty }\mathrm{g}\mathrm{s}^{-1}`$ (B1)
where $`M_{co}`$ is the mass of the compact object, $`V_8`$ is the velocity in units of 1000 km s<sup>-1</sup>, and $`\rho _{\infty }`$ is given in g cm<sup>-3</sup>. From Brown (1995) the minimum rate for hypercritical accretion is
$`{\displaystyle \frac{\dot{M}_{cr}}{\dot{M}_{\mathrm{Edd}}}}=1.09\times 10^4.`$ (B2)
For hydrogen
$`\dot{M}_{cr}=0.99\times 10^{22}\mathrm{g}\mathrm{s}^{-1}.`$ (B3)
Using eqs.(B1) and (B2) we obtain
$`(\rho _{\infty })_{cr}=0.44\times 10^{-7}(M_{\odot }/M_{co})^2V_8^3\mathrm{g}\mathrm{cm}^{-3}.`$ (B4)
Using Kepler for circular orbits
$`V^2={\displaystyle \frac{GM_{\mathrm{tot}}}{a}}`$ (B5)
where $`M_{\mathrm{tot}}`$ is the mass of the compact object plus the mass of the helium core of the companion plus the envelope mass interior to the orbit of the compact object. One finds
$`(\rho _{\infty })_{cr}=2.1\times 10^{-9}(M_{\odot }/M_{co})^2\left({\displaystyle \frac{M_{\mathrm{tot}}/10\,M_{\odot }}{a_{12}}}\right)^{3/2}\mathrm{g}\mathrm{cm}^{-3}.`$ (B6)
The $`a`$-dependence of $`(\rho _{\infty })_{cr}`$ is the same as the asymptotic density for the $`n=3/2`$ polytrope which describes the convective envelope. Thus, if the criterion for hypercritical accretion is satisfied at one time and at one radius it will tend to be satisfied for other times and for other radii. The change of $`M_{tot}`$ with $`a`$ is unimportant because from Table 4 it can be seen that $`\rho >(\rho _{\infty })_{cr}`$ already near the surface of the star.
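The chain of estimates in eqs. (B1)-(B6) can be verified directly; the sketch below (not the authors' code; cgs constants and the helper names are ours) reproduces the $`2.1\times 10^{-9}`$ coefficient of eq. (B6):

```python
import math

G = 6.674e-8       # gravitational constant, cgs
M_SUN = 1.989e33   # g

def rho_crit_b4(m_co, v8):
    """Eq. (B4): critical density (g/cm^3); m_co in M_sun, v8 in 1000 km/s."""
    # 0.99e22 / 2.23e29 = 0.44e-7 combines eqs. (B1) and (B3).
    return 0.44e-7 * m_co**-2 * v8**3

def rho_crit_b6(m_co, m_tot, a12):
    """Eq. (B6): the same criterion with V from Kepler, a in 10^12 cm."""
    v = math.sqrt(G * m_tot * M_SUN / (a12 * 1e12))   # cm/s, eq. (B5)
    return rho_crit_b4(m_co, v / 1e8)

rho = rho_crit_b6(m_co=1.0, m_tot=10.0, a12=1.0)
print(f"{rho:.2e} g/cm^3")   # ~2.1e-9, the coefficient in eq. (B6)
```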
In order to check the applicability of hypercritical accretion to the compact object in the relatively low-mass stars we consider in this paper, we make application to a $`4\,M_{\odot }`$ red giant of radius $`R=100R_{\odot }`$, evolved as pure hydrogen but with inclusion of dissociation, by Justin Holmer (1998). In Table 4 we compare the densities in the outer part of the hydrogen envelope with those needed for hypercritical accretion. From the table it can be seen that hypercritical accretion sets in quickly, once the compact object enters the envelope of the evolving giant.
Note that the accretion through most of the envelope will be $`>1\,M_{\odot }`$ yr<sup>-1</sup>. Since the total mass accreted by the neutron star is $`\sim 1\,M_{\odot }`$ this gives a dynamical time of $`\lesssim 1`$ year, although the major part of the accretion takes place in less time. This is in agreement with the dynamical time found, without inclusion of accretion, by Terman, Taam & Hernquist (1995).
## Appendix C Efficiency
We discuss the definition of the efficiency of the hydrodynamical coupling of the orbital motion of the neutron star to the envelope of the main sequence star.
Van den Heuvel (1994) starts from the Webbink (1984) energetics in which the gravitational binding energy of the hydrogen envelope of the giant is taken to be
$`E_{env}={\displaystyle \frac{G(M_{\mathrm{core}}+M_{\mathrm{env}})M_{\mathrm{env}}}{R}}`$ (C1)
which results in the envelope gravitational energy
$`E_{env}={\displaystyle \frac{0.7GM^2}{R}},`$ (C2)
where $`M=M_{\mathrm{core}}+M_{\mathrm{env}}`$ is the total stellar mass and the Bethe & Brown (1998) approximation $`M_{\mathrm{core}}\simeq 0.3M`$ has been used.
Applegate (1997) has calculated the binding energy of a convective giant envelope, obtaining
$`E_B=0.6GM^2/R={\displaystyle \frac{1}{2}}E_{env}.`$ (C3)
Note that $`M`$ is the total stellar mass, also that $`E_B`$ is just 1/2 of the gravitational potential energy, the kinetic energy being included in $`E_B`$. Eq. (C3) was checked independently by Holmer (1998). Unfortunately, this work was never published. Van den Heuvel and others have introduced an additional parameter $`\lambda `$ that both takes into account the kinetic energy and the density distribution of the star, $`R`$ in eqs. (C1) & (C2) being replaced by $`R\lambda `$. They use
$`E_B={\displaystyle \frac{0.7GM^2}{\lambda R}}.`$ (C4)
With $`\lambda =7/6`$ this is the same as $`E_B`$ in eq. (C3). Van den Heuvel (1994) chooses $`\lambda =1/2`$. The result is that his efficiency $`\eta `$ is a factor of 7/3 too high. Thus, his suggested efficiency $`\eta =4`$ is more like $`\eta \simeq 12/7=\widehat{\alpha }_{ce}`$.
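The $`\lambda `$ bookkeeping is simple enough to check in a few lines (a sketch; numbers from the text):

```python
# lambda = 7/6 in E_B = 0.7 G M^2 / (lambda R) reproduces Applegate's
# E_B = 0.6 G M^2 / R; van den Heuvel's lambda = 1/2 then overstates
# the efficiency by (7/6)/(1/2) = 7/3, so eta = 4 becomes ~ 12/7.
lam = 7.0 / 6.0
coeff = 0.7 / lam              # = 0.6
factor = lam / 0.5             # = 7/3
eta_corrected = 4.0 / factor   # = 12/7 ~ 1.71
print(coeff, factor, eta_corrected)
```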
Bethe & Brown (1998) used the Applegate result but incorrectly took the necessary energy to expel the giant envelope as
$`E_g=0.6GM^2/a_1`$ (C5)
rather than
$`E_g=0.6GM^2/(a_1r_L),`$ (C6)
the latter being the correct energy needed to remove the giant envelope at its Roche Lobe. The correct efficiency is
$`\widehat{\alpha }_{ce}=(\alpha _{ce})_{BB}/r_L.`$ (C7)
and since for the binaries considered here with $`q\simeq 4`$ the fractional Roche Lobe is $`r_L\simeq 0.5`$,
$`\widehat{\alpha }_{ce}\simeq 2(\alpha _{ce})_{BB}=1,`$ (C8)
with the Bethe & Brown (1998) $`\alpha _{ce}=0.5`$.
Of course, $`\widehat{\alpha }_{ce}`$ should not vary with Roche Lobe, the Bethe & Brown (1998) usage of $`\alpha _{ce}`$ being in error.
For $`\widehat{\alpha }_{ce}=1`$, the envelope would be removed from the giant, but would end up with zero kinetic energy, which is unreasonable. Thus, without additional energy sources, as discussed earlier in our note, one would expect $`\widehat{\alpha }_{ce}\simeq 0.5`$, in which case the kinetic energy of the envelope would remain unchanged in its expulsion. The Bethe & Brown (1998) results were insensitive to changes in $`\widehat{\alpha }_{ce}`$, which changed the location but not the magnitude of the favored logarithmic intervals, as noted by those authors.
In the present case Van den Heuvel’s $`\widehat{\alpha }_{ce}=1.2`$ definitely indicated the presence of energy sources additional to the drop in orbital energy, although they are not as large as he indicated. We have checked that with $`\widehat{\alpha }_{ce}=1.2`$ and $`c_d\simeq 1`$ in the Bethe & Brown (1998) formulation we obtain the numerical results of Table 1 of Van den Heuvel (1994).
# Linking numbers for self-avoiding loops and percolation: application to the spin quantum Hall transition
## Abstract
Non-local twist operators are introduced for the $`\mathrm{O}(n)`$ and $`Q`$-state Potts models in two dimensions which, in the limits $`n\to 0`$ (resp. $`Q\to 1`$), count the numbers of self-avoiding loops (resp. percolation clusters) surrounding a given point. Their scaling dimensions are computed exactly. This yields many results, for example the distribution of the number of percolation clusters which must be crossed to connect a given point to an infinitely distant boundary. Its mean behaves as $`(1/3\sqrt{3}\pi )|\mathrm{ln}(p_c-p)|`$ as $`p\to p_c`$. These twist operators correspond to $`(r,s)=(1,2)`$ in the Kac classification of conformal field theory, so that their higher-order correlation functions, which describe linking numbers around multiple points, may be computed exactly. As an application we compute the exact value $`\sqrt{3}/2`$ for the dimensionless conductivity at the spin Hall transition, as well as the shape dependence of the mean conductance in an arbitrary simply connected geometry with two extended edge contacts.
preprint: XXXX
The conformal field theory/Coulomb gas approach to two-dimensional percolation and self-avoiding walk problems has been extraordinarily fruitful. In addition to values for many of the critical exponents, other universal scaling functions such as percolation crossing probabilities have been obtained exactly. In this Letter a set of correlation functions of non-local operators is introduced, which describe topological properties of percolation clusters and self-avoiding walks, and count the number of clusters or loops which must be crossed in order to connect two or more points.
It turns out that these exponents may easily be computed using standard Coulomb gas methods, for general $`n`$ or $`Q`$ in the $`\mathrm{O}(n)`$ or $`Q`$-state Potts model respectively. In the limits $`n\to 0`$ or $`Q\to 1`$, corresponding to self-avoiding walks or to percolation respectively, the scaling dimensions of these operators vanish, so that some of their correlations are trivial. However, it is their derivatives with respect to $`n`$ or $`Q`$ which give physical information, thereby giving rise to a variety of logarithmic behavior. The frequent occurrence of logarithmic correlations in such conformal field theories (CFTs) with vanishing central charge has recently been pointed out in several contexts.
In certain cases these topological operators may be recognized as twist operators which have already been identified in $`c=1`$ theories and the 3-state Potts model. From the CFT point of view, they turn out to correspond to degenerate Virasoro representations labeled by $`(r,s)=(1,2)`$ in the Kac classification. These bulk operators have not previously been identified for general $`n`$ and $`Q`$. The fact that they are degenerate means that their higher-point correlations may be computed exactly.
In addition to bulk operators of the above type it is also possible to define operators which count the number of loops or clusters surrounding a point near the boundary of a system. They may be used to compute the universal mean conductance at the spin quantum Hall transition, which has recently been shown to map onto the percolation problem.
$`\mathrm{O}(n)`$ model. Let us recall the elements of the Coulomb gas approach. $`n`$-component spins $`𝐬(r)`$ with $`𝐬^2(r)=1`$ are placed at the sites $`r`$ of a lattice, with a nearest neighbor interaction. The partition function $`\mathrm{Tr}\prod _{rr^{\prime }}\left(1+y𝐬(r)𝐬(r^{\prime })\right)`$, when expanded in powers of $`y`$, gives a sum over self-avoiding loops with a factor $`y`$ for each bond and $`n`$ for each loop. Each loop may be replaced by a sum over its orientations, with a weight $`e^{\pm i\pi \chi }`$ for each, with $`\chi `$ chosen so that the sum gives $`n=2\mathrm{cos}\pi \chi `$. This gas of oriented loops is then mapped onto a height model with variables $`\varphi (R)\in \pi 𝐙`$ on the dual lattice, such that on the dual bond $`RR^{\prime }`$, $`\varphi (R)-\varphi (R^{\prime })=0,\pm \pi `$ according to whether the bond it crosses is empty or is occupied by a loop segment of one orientation or the other. At the critical fugacity $`y_c`$ this is supposed to renormalize onto a free field with reduced free energy functional $`(g/(4\pi ))\int (\nabla \varphi )^2d^2r`$, with respect to which the long-distance behavior of all correlations may be computed as long as phase factors associated with non-contractible loops are correctly accounted for. $`g`$ may be fixed by a variety of methods: for the dilute regime of interest here, $`g=1+\chi `$. On an infinitely long cylinder of perimeter $`L`$, in order to correctly count loops which wrap around the cylinder with weight $`n`$ it is necessary to insert factors $`e^{\pm i\chi \varphi (\pm \infty )}`$ at either end: these modify the free energy per unit length to $`-(\pi /6L)(1-(6/g)\chi ^2)`$, from which the value of the central charge $`c=1-6(g-1)^2/g`$ follows. For self-avoiding walks, $`n=0`$, $`\chi =\frac{1}{2}`$, and $`g=\frac{3}{2}`$, so that $`c=0`$.
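As a quick numerical check of the Coulomb gas bookkeeping just described (a sketch, not part of the Letter):

```python
import math

# Dilute O(n): n = 2 cos(pi chi), g = 1 + chi, c = 1 - 6 (g - 1)^2 / g.
n = 0.0                                 # self-avoiding walks
chi = math.acos(n / 2.0) / math.pi      # = 1/2
g = 1.0 + chi                           # = 3/2
c = 1.0 - 6.0 * (g - 1.0)**2 / g        # = 0
print(chi, g, c)
```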
Now suppose that loops which wrap around the cylinder are counted with a different weight $`n^{\prime }=2\mathrm{cos}\pi \chi ^{\prime }`$. (A similar construction was used in Ref. to count the winding angle of open walks). The cylinder free energy per unit length will be modified by a term $`(\pi /g)(\chi ^{\prime 2}-\chi ^2)`$. In the plane, this may be interpreted as the insertion of a non-local operator $`\sigma _{n^{\prime }}`$ whose scaling dimension is
$$x(n;n^{\prime })=(1/2g(n))\left(\chi ^{\prime 2}-\chi ^2\right).$$
(1)
so that the two-point function $`\sigma _{n^{\prime }}(r_1)\sigma _{n^{\prime }}(r_2)`$ decays like $`|r_1-r_2|^{-2x(n;n^{\prime })}`$ at criticality. The interpretation of this is as a defect line joining $`r_1`$ and $`r_2`$: loops which cross this defect line an odd number of times, that is, which surround either but not both of the points, now carry a factor $`n^{\prime }`$ instead of $`n`$. A particular example is $`n^{\prime }=-n`$, which is equivalent to making the identification $`𝐬(r)\to -𝐬(r)`$ as the defect line is crossed. Such twist operators have been identified in other models: like disorder operators, they reflect the existence of non-trivial homotopy. From (1) it follows that the dimension of this operator is $`x(n;-n)=(3/2g)-1`$, which is $`x_{1,2}`$ in the Kac classification $`x_{r,s}=\left((rg-s)^2-(g-1)^2\right)/2g`$. However, for small $`n`$ and $`n^{\prime }`$, $`x(n;n^{\prime })\approx (1/6\pi )(n-n^{\prime })`$, so that, to first order, it does not matter whether $`x(n;-n)`$ or $`x(0;-n)`$ is used apart from factors of 2. Note that if $`n^{\prime }=-n`$ the fusion rules of CFT imply the OPE $`\sigma \sigma =1+ϵ+\cdots `$, where $`ϵ`$ is the $`(1,3)`$ operator which has been identified as the local energy density of the $`\mathrm{O}(n)`$ model. $`ϵ(r)`$ counts whether a given bond $`r`$ is occupied or not – consistent with the interpretation that, as $`a\to 0`$, the $`O(n)`$ term in $`\sigma (r+a)\sigma (r-a)`$ counts whether the single loop is trapped between the points $`r\pm a`$.
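The identification $`x(n;-n)=x_{1,2}`$ is straightforward to verify numerically; the following sketch (our construction, not from the Letter) evaluates eq. (1) against the Kac formula:

```python
import math

def chi_of(n):
    """Solve n = 2 cos(pi chi) for chi in [0, 1]."""
    return math.acos(n / 2.0) / math.pi

def x_twist(n, n_prime):
    """Eq. (1): twist dimension in the dilute O(n) model, g = 1 + chi."""
    chi, chi_p = chi_of(n), chi_of(n_prime)
    g = 1.0 + chi
    return (chi_p**2 - chi**2) / (2.0 * g)

def x_kac(r, s, g):
    """Kac dimension x_{r,s} = ((r g - s)^2 - (g - 1)^2) / (2 g)."""
    return ((r * g - s)**2 - (g - 1.0)**2) / (2.0 * g)

n = 0.3                          # arbitrary test value
g = 1.0 + chi_of(n)
print(x_twist(n, -n), x_kac(1, 2, g), 3.0 / (2.0 * g) - 1.0)  # all agree
```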
Here are a few examples of the application of this formula. Away from criticality it implies that $`\sigma _{n^{\prime }}(R)\sim C(n;n^{\prime })\xi ^{-x(n;n^{\prime })}`$ where the correlation length $`\xi `$ behaves as $`(y_c-y)^{-\nu }`$, and $`C(0;0)=1`$. When $`n=0`$, the coefficient of $`n^{\prime M}`$ counts configurations of $`M`$ nested loops surrounding the point $`R`$ of the dual lattice. Denoting the number of such loops of total length $`N`$ by $`b_N^{(M)}`$, the most singular term in the generating function $`\sum _Nb_N^{(M)}y^N`$ behaves as $`(1/M!)((1/8\pi )|\mathrm{ln}(y_c-y)|)^M`$ as $`y\to y_c`$, using $`\nu =\frac{3}{4}`$. This yields the asymptotic behavior
$$b_N^{(M)}\sim (\sigma _0/(M-1)!)(1/8\pi )^M(1/N)(\mathrm{ln}N)^{M-1}\mu ^N,$$
where $`\mu =y_c^{-1}`$ is the usual lattice-dependent connective constant, and $`\sigma _0`$ is a positive integer taking account of the fact that, on a loose-packed lattice, there are $`\sigma _0`$ equivalent singularities on the circle $`|y|=y_c`$. For $`M=1`$, one may sum over the position of the marked point $`R`$, instead of summing over the position of the loop, so that $`b_N^{(1)}=A_Np_N`$, where $`p_N`$ is the number of single loops per lattice site and $`A_N`$ is their average area. The amplitude $`1/8\pi `$ in this case agrees with an earlier result found by a different (more involved) method. It is interesting also to calculate higher point functions. For example the $`O(n)`$ term in the connected 4-point function $`\langle \sigma \sigma \sigma \sigma \rangle _c`$ counts the number of single loops which have non-trivial winding around all four points. Examples are shown in Fig. 1. Since $`\sigma `$ is a (1,2) operator this 4-point function may be computed exactly at criticality, in terms of hypergeometric functions. The details will be given elsewhere.
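The $`1/8\pi `$ amplitude itself is just $`\nu `$ times the slope of $`x(0;n^{\prime })`$ at $`n^{\prime }=0`$; a numerical sketch (our construction, using a finite-difference derivative):

```python
import math

def x0(n_prime):
    """x(0; n') from eq. (1) at the SAW point chi = 1/2, g = 3/2."""
    chi_p = math.acos(n_prime / 2.0) / math.pi
    return (chi_p**2 - 0.25) / 3.0        # (chi'^2 - chi^2) / (2 g)

h = 1e-6
slope = (x0(h) - x0(-h)) / (2.0 * h)       # dx/dn' at n' = 0, ~ -1/(6 pi)
amp = -0.75 * slope                        # nu * |slope| = 1/(8 pi)
print(amp, 1.0 / (8.0 * math.pi))
```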
Percolation. The Coulomb gas formulation of the $`Q`$-state Potts model is similar to the above, except that it is valid only at the critical point. The model is first mapped onto the random cluster model, in which every cluster configuration is weighted by a factor of $`p/(1-p)`$ for each bond and $`Q`$ for each cluster. Each cluster may be identified by its outer and inner hulls, which form a dense set of closed loops. At criticality, each hull then carries a weight $`\sqrt{Q}`$. Once again this may be mapped onto a gas of oriented loops with phase factors $`e^{\pm i\pi \chi }`$, where $`\sqrt{Q}=2\mathrm{cos}\pi \chi `$, and then to a free field theory, where, however, this time $`g=1-\chi `$. The central charge then vanishes for $`g=\frac{2}{3}`$, corresponding to $`\chi =\frac{1}{3}`$ or $`Q=1`$, the percolation limit. Once again one may define a non-local operator which counts the hulls which surround a marked point with a different weight $`\sqrt{Q^{\prime }}=2\mathrm{cos}\pi \chi ^{\prime }`$, and standard Coulomb gas methods then lead to a result identical in form to (1) for its dimension $`x(Q;Q^{\prime })`$. For this to be given by $`x_{1,2}=(3g/2)-1`$, $`Q^{\prime }=(2-Q)^2`$. For $`Q=3`$, for example, the clusters which wrap around the marked point are counted with weight $`Q^{\prime }=1`$. This is consistent with the identification of the $`𝐙_2`$ twist operator for the 3-state model made in Ref. For the Ising model ($`Q=2`$), it is the same as the disorder operator. Note that as $`Q\to 1`$, the $`O(Q-1)`$ term in $`\sigma (r+a)\sigma (r-a)`$ counts the number of clusters separating $`r\pm a`$, as compared with the $`\mathrm{O}(n)`$ model where it simply counted whether or not the single loop passed between them. For this reason the lattice interpretation of the (1,3) operator in the OPE $`\sigma \sigma `$ is problematic in this case.
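These identifications are easy to check numerically. The sketch below (our construction, not from the Letter) verifies that the weight $`\sqrt{Q^{\prime }}=2-Q`$, taken with its sign, gives $`x_{1,2}=(3g/2)-1`$, and that the percolation limit reproduces $`x(1;0)=5/48`$:

```python
import math

def chi_from_weight(w):
    """Solve w = 2 cos(pi chi) for chi in [0, 1]; w may be negative."""
    return math.acos(w / 2.0) / math.pi

def x_twist(Q, w_prime):
    """Twist dimension (chi'^2 - chi^2)/(2g), with g = 1 - chi for Potts."""
    chi = chi_from_weight(math.sqrt(Q))
    chi_p = chi_from_weight(w_prime)
    g = 1.0 - chi
    return (chi_p**2 - chi**2) / (2.0 * g)

# 2 cos(pi chi') = 2 - Q, i.e. Q' = (2 - Q)^2, gives x_{1,2} = 3g/2 - 1:
Q = 3.0
g = 1.0 - chi_from_weight(math.sqrt(Q))
print(x_twist(Q, 2.0 - Q), 1.5 * g - 1.0)   # both 1/4

# Percolation: x(1; 0) = 5/48, so beta = nu * x = 5/36 with nu = 4/3.
print(x_twist(1.0, 0.0), 5.0 / 48.0)
```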
For $`Q=1+\delta `$ and $`Q^{\prime }=1+\delta ^{\prime }`$ ($`|\delta |,|\delta ^{\prime }|\ll 1`$) note that $`x(Q;Q^{\prime })\approx (1/4\sqrt{3}\pi )(\delta -\delta ^{\prime })`$ (with $`\delta ^{\prime }\approx -2\delta `$ corresponding to $`x_{1,2}`$), while for small $`Q^{\prime }`$, $`x(1;Q^{\prime })=\frac{5}{48}-\frac{3}{8\pi }\sqrt{Q^{\prime }}+O(Q^{\prime })`$. As before, one application of these results is to count the number of clusters which surround a given point, that is, which have to be crossed to escape to infinity. If $`P(M)`$ is the probability of there being exactly $`M`$ such clusters,
$$\sum _MP(M)(Q^{})^M\simeq C(Q^{})(p_c-p)^{\nu x(1;Q^{})}$$
where now $`\nu =\frac{4}{3}`$. For $`Q^{}=0`$ this gives $`P(0)`$, which is the probability that the dual site $`R`$ is connected to the boundary by a set of dual bonds: this gives the usual exponent $`\beta =\frac{5}{36}`$ of percolation. Expanding in powers of $`\sqrt{Q^{}}`$ now gives
$$P(M)\simeq P(0)\left(|\mathrm{ln}(p_c-p)|/2\pi \right)^{2M}/(2M)!$$
Note that although the amplitude in $`P(0)`$ is not universal, the ratios $`P(M)/P(0)`$ are.
However, this is valid only for $`M\ll |\mathrm{ln}(p_c-p)|`$. To find the behavior near the average value, expand around $`Q^{}=1`$. The mean value $`\overline{M}\simeq (1/3\sqrt{3}\pi )|\mathrm{ln}(p_c-p)|`$, and the variance $`\overline{M^2}-\overline{M}^2\sim |\mathrm{ln}(p_c-p)|`$ also. This implies that, as $`p\to p_c`$, the distribution of $`M`$ becomes peaked about $`\overline{M}`$. Alternatively, one could work at the percolation threshold in a finite system of size $`L`$, in which case $`\overline{M}\simeq (1/4\sqrt{3}\pi )\mathrm{ln}L`$.
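The numerical coefficients quoted above are mutually consistent and can be checked directly. Equation (1) itself is not reproduced in this excerpt; the sketch below assumes the standard Coulomb-gas electric-charge form $`x(Q;Q^{})=(\chi ^2-\chi ^2)/2g`$ (primed minus unprimed charge squared), with $`\sqrt{Q}=2\mathrm{cos}\pi \chi `$, $`\sqrt{Q^{}}=2\mathrm{cos}\pi \chi ^{}`$ and $`g=1-\chi `$ as in the text; this assumed form reproduces every coefficient quoted here.

```python
import math

# Numerical check of the quoted coefficients. Equation (1) is not reproduced
# in this excerpt; this sketch ASSUMES the standard Coulomb-gas electric-charge
# form x(Q;Q') = (chi'^2 - chi^2)/(2g), with sqrt(Q) = 2 cos(pi chi),
# sqrt(Q') = 2 cos(pi chi') and g = 1 - chi as in the text.

def chi_of(s):
    """chi such that 2 cos(pi chi) = s (principal branch)."""
    return math.acos(s / 2.0) / math.pi

def x(Q, Qp):
    chi, chip = chi_of(math.sqrt(Q)), chi_of(math.sqrt(Qp))
    g = 1.0 - chi
    return (chip ** 2 - chi ** 2) / (2.0 * g)

# x(1;0) = 5/48, so beta = nu * x = (4/3)(5/48) = 5/36; and x(2;0) = 1/8,
# the dimension of the Ising disorder operator mentioned in the text
print(x(1.0, 0.0), x(2.0, 0.0))

# small-Q' slope: x(1;Q') = 5/48 - (3/8pi) sqrt(Q') + O(Q')
sq = 1e-4
slope = (5.0 / 48.0 - x(1.0, sq ** 2)) / sq
print(slope, 3.0 / (8.0 * math.pi))

# nu * |dx(1;Q')/dQ'| at Q' = 1 gives the coefficient 1/(3 sqrt(3) pi)
# of |ln(p_c - p)| in the mean number of surrounding clusters
h = 1e-6
dxdQp = (x(1.0, 1.0 + h) - x(1.0, 1.0 - h)) / (2.0 * h)
print((4.0 / 3.0) * abs(dxdQp), 1.0 / (3.0 * math.sqrt(3.0) * math.pi))
```

All three printed pairs agree, which is why the restored signs in the expansion above can be trusted.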
Spin quantum Hall transition. Recently Gruzberg et al. have shown that certain properties of a model of non-interacting quasiparticles for the spin Hall transition (a 2D metal-insulator transition in a disordered system in which time-reversal symmetry is broken but $`\mathrm{SU}(2)`$ spin-rotation symmetry is not ) may be mapped exactly onto percolation. In particular, the mean conductance between two extended contacts on the boundary of a finite system is (apart from a factor of 2 for the spin sum) equal to the mean number of distinct clusters whose outer hulls connect the two contacts. As argued above, such quantities are related to the derivative with respect to $`Q`$ at $`Q=1`$ of correlation functions of a twist operator. This will however now be a boundary twist operator. While it is possible to adapt the above Coulomb gas arguments to account for the boundary, since in other examples such methods are known to fail for boundary operators, we shall instead give a more direct argument.
First consider the example of an annulus of width $`L`$ and circumference $`W`$. The geometry is shown in Fig. 2. Apart from the clusters whose outer hulls cross the sample, there are those which touch the lower edge but not the upper, those which do the opposite, and those which touch neither edge. There may also be one cluster which crosses the sample but which also wraps around the annulus, so that its outer hulls do not connect the contacts. Denote the numbers of such clusters in a given configuration of the random cluster version of the Potts model by $`N_c`$, $`N_1`$, $`N_2`$, $`N_b`$ and $`N_w`$ respectively. (Note that $`N_c=0`$ if $`N_w=1`$). Let $`Z_{ij}(Q)`$ denote the Potts model partition function with boundary condition of type $`i`$ on the lower edge and $`j`$ on the upper edge. The cases of interest are where $`i`$ or $`j`$ correspond to either free boundary conditions, denoted by $`f`$, or to fixed, in which the Potts spins on the boundary are frozen into a given state, say $`1`$. Then
$`Z_{ff}`$ $`=`$ $`\langle Q^{N_c+N_w+N_1+N_2+N_b}\rangle ,Z_{11}=\langle Q^{N_b}\rangle `$

$`Z_{1f}`$ $`=`$ $`\langle Q^{N_2+N_b}\rangle ,Z_{f1}=\langle Q^{N_1+N_b}\rangle `$
so that
$$\overline{N_c}+\overline{N_w}=(\partial /\partial Q)|_{Q=1}\left(Z_{ff}Z_{11}/Z_{f1}Z_{1f}\right)$$
(2)
According to the theory of boundary CFT , $`Z_{ij}\sim \mathrm{exp}\left[\pi ((c/24)-\mathrm{\Delta }_{ij})(W/L)\right]`$ as $`W/L\to \mathrm{}`$, where $`\mathrm{\Delta }_{ij}`$ is the lowest scaling dimension out of all the conformal blocks which can propagate around the annulus with the given boundary conditions. When $`i=j`$ this corresponds to the identity operator, so that $`\mathrm{\Delta }_{ii}=0`$, but for the mixed case $`(ij)=(f1)`$ it corresponds to the (1,2) Kac operator, so that $`\mathrm{\Delta }_{f1}=\mathrm{\Delta }_{1,2}=\frac{1}{2}x_{1,2}(Q)`$ in the previous notation. This identification was previously used at $`Q=1`$ in Ref. to compute crossing probabilities, i.e. the probability that $`N_c>0`$, in simply connected regions. Substituting into (2) gives $`\overline{N_c}\simeq 2\pi \mathrm{\Delta }_{1,2}^{\prime }(1)(W/L)`$ as $`W/L\to \mathrm{}`$, since $`\overline{N_w}\le 1`$. From this follows the universal critical conductivity $`\sqrt{3}/2`$.
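The conductivity value can be checked numerically from the identifications above: with $`g(Q)`$ inverted from $`\sqrt{Q}=2\mathrm{cos}\pi (1-g)`$ and $`\mathrm{\Delta }_{1,2}=\frac{1}{2}x_{1,2}=((3g/2)-1)/2`$, the conductance per unit aspect ratio is twice (for the spin sum) the coefficient $`2\pi \mathrm{\Delta }_{1,2}^{\prime }(1)`$. A minimal sketch:

```python
import math

# Check of the universal conductivity sqrt(3)/2: g(Q) is inverted from
# sqrt(Q) = 2 cos(pi(1-g)), Delta_{1,2}(Q) = x_{1,2}/2 = ((3g/2)-1)/2,
# and the conductance per unit aspect ratio is 2 * 2*pi*Delta'_{1,2}(1)
# (the leading 2 being the spin sum mentioned in the text).

def Delta12(Q):
    g = 1.0 - math.acos(math.sqrt(Q) / 2.0) / math.pi
    return (1.5 * g - 1.0) / 2.0

h = 1e-6
dDelta = (Delta12(1.0 + h) - Delta12(1.0 - h)) / (2.0 * h)  # Delta'_{1,2}(1)
sigma = 2.0 * (2.0 * math.pi * dDelta)
print(sigma, math.sqrt(3.0) / 2.0)
```

The numerical derivative gives $`\mathrm{\Delta }_{1,2}^{\prime }(1)=\sqrt{3}/8\pi `$, hence conductivity $`\sqrt{3}/2`$.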
At finite $`W/L`$ the corrections to the mean conductance are expected to be of the order of $`e^{-\pi \mathrm{\Delta }_{2,2}W/L}`$ where $`\mathrm{\Delta }_{2,2}=\frac{1}{8}`$ at $`Q=1`$, but the full dependence requires knowledge of the entire operator content of the model for the different boundary conditions. This seems to be beyond the reach of current methods. However, for a simply connected finite sample the arguments of Ref. may be adapted. Consider a simply connected region with contacts $`C_1C_2`$ and $`C_3C_4`$ on its boundary, as shown in Fig. 3. The remainder of the boundary has hard wall conditions on the quasiparticle wave functions, corresponding to free boundary conditions on the Potts spins. The mean number of clusters crossing between the contacts is still given by (2) (with $`N_w=0`$), where the different boundary conditions are placed on the segments $`C_1C_2`$ or $`C_3C_4`$, with the remaining boundary Potts spins being free. This may then be written in terms of correlation functions of boundary condition changing operators
$$\overline{N_c}=\frac{\partial }{\partial Q}|_{Q=1}\left(\frac{\langle \varphi _{f1}(C_1)\varphi _{1f}(C_2)\varphi _{f1}(C_3)\varphi _{1f}(C_4)\rangle }{\langle \varphi _{f1}(C_1)\varphi _{1f}(C_2)\rangle \langle \varphi _{f1}(C_3)\varphi _{1f}(C_4)\rangle }\right)$$
These correlation functions are computed by conformally mapping the interior of the region to the upper half plane. Any conformal rescaling factors for $`Q1`$ cancel in the ratio, which then depends only on the cross-ratio $`\eta =(z_1-z_2)(z_3-z_4)/(z_1-z_3)(z_2-z_4)`$ of the images $`z_i`$ of the points $`C_i`$ under this mapping. For a rectangle with $`|C_1C_2|=W`$ and $`|C_2C_3|=L`$, $`\eta =(1-k)^2/(1+k)^2`$ where $`W/L=K(1-k^2)/2K(k^2)`$ and $`K`$ is the complete elliptic integral of the first kind. Since $`\varphi _{f1}`$ is a degenerate (1,2) operator, its four-point function satisfies a hypergeometric equation. The details of this calculation will be given elsewhere . The result for the mean conductance is
$$\overline{g}=1-\frac{\sqrt{3}}{2\pi }\left(\mathrm{ln}(1-\eta )+2\sum _{m=1}^{\mathrm{}}\frac{(\frac{1}{3})_m}{(\frac{2}{3})_m}\frac{(1-\eta )^m}{m}\right)$$
For $`W/L\gg 1`$ this reproduces the above result for the conductivity. In the opposite limit $`\overline{g}\simeq Ae^{-(\pi /3)(L/W)}`$, in agreement with Ref. , but now with a definite prefactor $`A=3\mathrm{\Gamma }(\frac{2}{3})/2\mathrm{\Gamma }(\frac{1}{3})^2`$.
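The series for $`\overline{g}`$ can be evaluated directly. In the sketch below the modulus $`k`$ is obtained from $`W/L`$ by bisection, with $`K`$ computed by the arithmetic-geometric mean; the wide-sample asymptote used as a check, $`\overline{g}\simeq (\sqrt{3}/2)(W/L)+1-\sqrt{3}\mathrm{ln}4/\pi `$, is our own expansion of the series above, not a number quoted in the text.

```python
import math

# Sketch evaluation of the mean-conductance formula. K(m) is the complete
# elliptic integral with parameter m = k^2 (AGM method); k is solved from
# W/L by bisection; the Pochhammer series is summed term by term.

def ellipK(m):
    a, b = 1.0, math.sqrt(1.0 - m)
    while abs(a - b) > 1e-15:
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return math.pi / (2.0 * a)

def eta_of_aspect(WL):
    """Solve W/L = K(1-k^2)/(2K(k^2)) for k, return eta = (1-k)^2/(1+k)^2."""
    lo, hi = 1e-12, 1.0 - 1e-12
    for _ in range(200):
        k = 0.5 * (lo + hi)
        # the ratio decreases as k grows, so a too-large ratio means k too small
        if ellipK(1.0 - k * k) / (2.0 * ellipK(k * k)) > WL:
            lo = k
        else:
            hi = k
    k = 0.5 * (lo + hi)
    return (1.0 - k) ** 2 / (1.0 + k) ** 2

def gbar(eta, nterms=100000):
    s, poch = 0.0, 1.0
    for m in range(1, nterms + 1):
        poch *= (m - 2.0 / 3.0) / (m - 1.0 / 3.0)   # (1/3)_m / (2/3)_m
        s += poch * (1.0 - eta) ** m / m
    return 1.0 - (math.sqrt(3.0) / (2.0 * math.pi)) * (math.log(1.0 - eta) + 2.0 * s)

# wide-sample check at W/L = 4 against our expansion of the series
WL = 4.0
wide = gbar(eta_of_aspect(WL))
print(wide, math.sqrt(3.0) / 2.0 * WL + 1.0 - math.sqrt(3.0) * math.log(4.0) / math.pi)
```

The two printed numbers agree to better than $`10^4`$ of a percent, confirming that the reconstructed series reproduces the conductivity $`\sqrt{3}/2`$ per unit aspect ratio in the wide-sample limit.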
The author thanks J. Chalker, A. Ludwig, V. Gurarie and H. Saleur for useful correspondence and discussions. This research was supported in part by the Engineering and Physical Sciences Research Council under Grant GR/J78327. |
## I Introduction
Shrinkage crack patterns are common in both natural and man-made systems, including dried mud layers and the glaze on a ceramic mug. They are in many instances undesirable, as in the case of paint or other protective coatings. Crack patterns have long been of interest in geology in connection with the formation of columnar joints and ancient crack patterns preserved in the geological record . They arise in materials which contract on cooling or drying. This contraction, coupled with adhesion to a substrate, leads to a buildup of stress in the material, and when the stress exceeds the local tensile strength, the material fractures. This relieves the stress locally along the sides of the crack, but concentrates stress at the crack tip. As a result the crack tip propagates until the stress there is reduced below the local strength of the material . A crack pattern forms as multiple cracks grow and interconnect. In a homogeneous medium a crack will form and grow perpendicular to the direction of maximum stress. Since the stress near an existing crack face is parallel to the face, any crack nucleating at or growing to meet the edge of a preexisting crack will meet that crack perpendicularly. Non-perpendicular junctions can form due to the influence of nearby cracks on the local stress field, by the nucleation of multiple cracks at a point, or by the splitting of a crack tip into multiple cracks. The lateral length scales of crack patterns range from millimeters in thin layers of ceramic glaze to tens of meters in the case of ice-wedge polygons found in the Arctic .
Shrinkage results from the removal of a diffusing quantity — water in the case of desiccation, or heat in the case of cooling. In thin layers, crack development can be slow enough compared to diffusion that the layer is homogeneous over its thickness and the resulting crack pattern is effectively two-dimensional. In contrast, in the drying or cooling of thick layers, gradients of the diffusing quantity — which can only be removed from exterior surfaces — become important. Cracks tend to propagate in the direction of the gradient, following it as it moves through the material. This leads to three-dimensional crack patterns. The crack pattern in turn affects the rate of removal of the diffusing quantity. Remarkably, an initially disordered network can become more ordered as it propagates into the material. This leads, for example, to the formation of hexagonal columnar joints in cooling basaltic lavas . This ordering process is currently poorly understood .
In this paper we report on experiments on crack patterns formed by the drying of slurries of 130 Å alumina (Al<sub>2</sub>O<sub>3</sub>) particles in water. We study two-dimensional patterns formed in a uniformly dried thin layer. A typical crack pattern formed in one of these experiments is shown in Fig. 1. We also performed experiments on “directional drying” , in which a one-dimensional pattern of cracks propagates due to a moving drying front. A pattern formed by directional drying is shown in Fig. 2.
Although qualitative observations of crack patterns were reported several decades ago , there have been rather few well-controlled experimental studies of the phenomenon. Skjeltorp and Meakin investigated the drying of monolayers of close-packed polystyrene beads. Cracks formed at grain boundaries and propagated in straight lines defined by the hexagonal lattice of the beads. Later cracks were wavy. Groisman and Kaplan studied crack patterns in layers of coffee-water mixtures. They found that the length scale of the pattern was proportional to the layer thickness, $`d`$, and increased when the bottom friction was reduced. For thick layers they observed a polygonal pattern consisting mostly of straight cracks meeting at 90° junctions. As $`d`$ decreased, they observed a transition below which a large fraction of the junctions were at 120° and wavy cracks appeared. A similar transition was observed in drying corn flour-water mixtures by Webb and Beddoe . Pauchard et al. studied the desiccation of colloidal silica sols; in this case the fluid properties are complex and the nature of the crack patterns was a function of the ionic strength of the suspension. Korneta et al. studied the geometrical properties of shrinkage crack patterns formed by sudden thermal quenches.
There have been several theoretical and computational studies of two-dimensional crack patterns using spring-block models . Andersen and coworkers observed a transition in the crack pattern morphology as the strain and the coupling to the substrate were varied. Hornig, Sokolov and Blumen, in a similar model, observed a change in the way in which fragmentation occurred as the amount of disorder in the system was varied: for small disorder, fragments (i.e., polygons) formed through the propagation of cracks in straight lines defined by the lattice structure. Cracks tended to appear in the middle of existing segments, so the length scale of the pattern decreases by a factor of two with each generation of cracks. For somewhat larger disorder, cracks propagated along wavy paths. For large disorder, cracks did not propagate but rather formed by the coalescence of independent point defects. Crosby and Bradley found similar regimes in their simulations, but as a function of the applied stress due to a sudden temperature quench: low stress led to polygonal patterns and high stress to a complicated pattern of strongly interacting cracks. Recently Kitsunezaki has done theoretical studies of crack pattern formation, along with numerical simulations using a random two-dimensional lattice, and reproduced many of the qualitative features seen in experiments. Kitsunezaki found analytically that the pattern length scale should be proportional to layer thickness, as observed experimentally, if the fracture occurs due to a critical stress criterion, but would be nonlinear if fracture obeyed the Griffith criterion .
There have been fewer studies of directional cracking in the presence of moving gradients in two or three dimensions. Yuse and Sano examined the morphology of single cracks which formed in thin glass plates with a moving thermal gradient and observed an instability of the crack tip as the speed of the cooling front was increased . Fracture patterns in thin layers of a directionally dried colloidal suspension between two glass plates have been studied by Allain and Limat . Komatsu and Sasa explained the observed regular crack spacing by again noting that cracks will form in the middle of existing segments where the stress is highest. Their theory predicted a wavelength of the crack pattern that varied as $`d^{2/3}`$, in good agreement with the experimental results of Ref. , but different from the linear relation found for the isotropic case.
Recently Müller has studied three-dimensional shrinkage crack patterns in drying cornstarch-water mixtures . The resulting “starch columns” are strikingly similar to basalt columns formed by the cooling of basaltic lava flows. Müller showed that the in-plane length-scale of the pattern (i.e., the column width) was dependent on $`dc/dz`$, where $`c`$ is the water concentration at the propagating crack front and $`z`$ is the coordinate along the propagation direction. From this he inferred that the column width in basalt is similarly determined by $`dT/dz`$, where $`T`$ is the rock temperature at the crack front.
There has been relatively little theoretical work on three-dimensional gradient problems. Motivated by the experiments of Yuse and Sano , Hayakawa considered two- and three-dimensional breakable spring models of directional cracking. In three dimensions, he found columnar structures reminiscent of those found in basalt and evidence of scaling behavior in the crack spacing. He did not include adhesion to a substrate, however.
The remainder of this paper is organized as follows. In Section II, we describe our experimental apparatus. In Section III, we discuss our results, first on the temporal development of uniform and directionally dried crack patterns, and then on the geometric characteristics of the final state. Sections IV and V contain a discussion and a brief conclusion.
## II Experiment
Shrinkage crack patterns were produced by drying layers of a slurry of 130Å Al<sub>2</sub>O<sub>3</sub> particles in water. These particles were very uniform and inspection of the dried layers indicated that the composition of the layers was quite homogeneous over their depth. Two types of experiments were performed: “isotropic drying,” in which the entire layer was dried uniformly, and “directional drying,” in which the layer was dried from one end.
The isotropic drying experiments were performed by pouring a quantity of the Al<sub>2</sub>O<sub>3</sub>-water mixture into a Plexiglas pan 62 cm square. The pan was housed in an insulated enclosure. Four 15 Watt light bulbs mounted 1.6 m above the pan served as a heat source to dry the films, and also provided illumination for video imaging. The temperature of the layer was between 25 C and 28 C for all runs, and was constant to within 1 C for each run. Time-lapse video recordings of the cracking process were made using a charge coupled device video camera mounted near the top of the enclosure. Frames from the video record were digitized for later analysis, and photographs of the dried layer taken at the end of each run were also analyzed. Experiments were performed with the Plexiglas substrate untreated, as well as with the substrate coated with a thin transparent coating of teflon to reduce the friction between the drying layer and the substrate. Experiments were also done with the teflon-coated surface and “impurities,” in the form of 10 cm<sup>3</sup> of sand grains, 0.425–0.500 mm in size, sprinkled evenly over the top of the slurry before the drying began. The sand floated on top of the layers during drying. These impurity particles were $`3\times 10^3`$ times the size of the alumina particles and of the same order as the layer thickness. The analysis discussed in Section III was done on 8 cm square sample areas within the pan away from the edges.
Directional drying experiments were performed on layers of Al<sub>2</sub>O<sub>3</sub>-water mixture placed in a 12 cm $`\times `$ 8 cm rectangular pan. The pan was covered with a sealed window, leaving a 3 mm air space above the layer. A very slow laminar breeze of ultradry N<sub>2</sub> gas, 20 C above ambient temperature, was passed unidirectionally through the air space. The N<sub>2</sub> became saturated with water as it passed through the cell, resulting in a drying front that moved downwind. By controlling the rate of flow, the front speed could be varied between 0.01 mm/hr and 10 mm/hr. The progress of the front was monitored with time-lapse video. The roughness of the substrate was varied by putting teflon and various grades of sandpaper on the bottom of the pan. Our apparatus differs from that of Allain and Limat in that here the upper surface of the drying layer is free and only the lower surface is constrained by adhesion to the substrate. We found that a considerable transport of material occurred over the course of an experiment, so that the final dry layer could be as much as 50% thicker at the downwind end than at the upwind end. This could be compensated for by tilting the apparatus.
## III Results
### A Dynamics
The alumina-water mixture is initially fluid, but becomes gel-like a few hours after being poured into the experimental container. Cracks first appear in the layer after a wait of several hours to a few days, depending on the thickness and composition of the layer. The drying process in the “isotropic” experiments is, of course, neither perfectly isotropic nor perfectly uniform as a result of nonuniformities in thickness and in the heating of the layer by the light bulbs. Because of this, cracks tend to start in one or more separate regions of the layer and move into the rest of the system, so that different parts of the system are at different stages of crack-pattern evolution at any given time. The formation of the crack pattern and complete drying of the layer took up to eight days, again varying with layer thickness.
The dynamics of crack pattern formation were observed on time-lapse video recordings. Although some characteristic behavior was observed, in some cases paralleling predictions of the theoretical models discussed above, there was no single clear-cut path followed during the development of the patterns.
In the first stage of crack pattern development, cracks nucleate at a small number of points in the layer. In the absence of sand impurities, these cracks then propagate in fairly straight lines, presumably in a direction determined by the local drying conditions. In runs with sand added to the layer, cracks tend to nucleate at the sand grains and in many cases do not propagate long distances across the layer, but rather move in short steps from one sand grain to the next, occasionally meeting another crack moving in a similar fashion. In the runs with no sand, other defects in the layer structure presumably serve as nucleation sites. Typically, symmetric arrangements of two or three cracks nucleate at a site and propagate away from it. Images taken from time-lapse video recordings showing a number of triplets of cracks in the early and later stages of the development of a crack pattern are shown in Fig. 3. More examples can be found in Fig. 4, which shows a pattern from a run in a thin layer with sand impurities. In runs with no sand impurities, the number of these nucleations is small. After the initial nucleation of a few cracks, most new cracks start at the sides of existing cracks and propagate away from their parent crack at right angles.
The longest cracks in Fig. 1 are an order of magnitude longer than the final length scale of the pattern. These long, straight cracks are referred to as primary cracks. Although these may not be the very first cracks to form in the layer, they are usually the oldest cracks locally. As the slurry dries further, successive generations of cracks form, often joining older cracks and forming a complicated array of polygons. The resulting pattern has a characteristic length scale, as can be seen in Fig. 1. The polygonal fragments bounded by the cracks are predominantly four-sided. Most of the junctions between cracks are at right angles, with a few non-perpendicular junctions. In runs with no sand impurities, the faces of the cracks are smooth and the cracks, while they can curve smoothly, are usually locally straight.
In thinner layers, or layers with sand impurities added, the patterns are more disordered in appearance. An example is shown in Fig. 4. A few long primary cracks are still present, but are much less dominant than in Fig. 1. In runs with sand added, cracks can change direction suddenly, and in thin layers the crack faces appear less smooth. Despite the more disordered appearance of the pattern in Fig. 4, the fragments of dried mud are still predominantly four-sided and crack junction angles remain primarily at 90°, as discussed below.
A common process in the isotropic system we call “arching.” In this process, a new crack propagates away from an existing pattern of cracks. It then curves back on itself in a roughly parabolic path, eventually stopping as it runs into another crack. The region enclosed by the arch is fragmented by later cracks as it dries further. The stages in this process are illustrated in Fig. 5.
Another very commonly observed mode of pattern formation is shown in Fig. 6; we refer to this process as “laddering.” In this process, two or more cracks propagate more or less parallel to each other, following a local drying front. As the layer behind the moving crack tips dries further, perpendicular cracks form, joining two of the parallel cracks like the rungs of a ladder. Fig. 6 shows the stages in this process, and regions in which extensive laddering have taken place can be seen in Fig. 1. This process is dominant in the directional drying case, as can be seen in Fig. 2, and clearly involves a local directionality of the drying process in an essential way.
Using the directional drying technique, we could examine the laddering process in a more controlled way. The directional experiments involved much sharper fronts than those that emerged spontaneously in the isotropic version of the experiment. Fronts produced alignment and a uniform spacing similar to that observed by Allain and Limat, although on a larger scale. By varying the rate of drying, we found that the spacing of the primary cracks was independent of the speed of front advance over 3 orders of magnitude. Once the spacing of the primary cracks was established perpendicular to a planar front, we did not observe it to change subsequently. In particular, we do not normally observe the sequential bisection proposed in Ref. . The scale of the primary cracks in turn determined that of the secondary, “rung” cracks of the ladder.
The primary crack spacing in the directional experiments does not have a simple relationship to the local depth, because the propagation of the front introduces a form of history-dependence. We constructed a cell which contained a 0.5 mm downward step and examined the primary crack pattern as it passed over the step. The subsequent behavior of the primary crack spacing was strongly influenced by the roughness of the substrate. For the teflon-covered substrates, no new primary cracks appeared as the front passed the step, as illustrated in Fig. 2. For substrates covered with the roughest grades of sandpaper, under otherwise identical conditions, the primary crack spacing doubled as it passed over the step, with a primary crack disappearing between each of the original ones. This is shown in Fig. 7. This demonstrates that the primary crack spacing is, in a sense, hysteretic on rough substrates, with a range of spacings which can stably propagate under given conditions.
The theories of Refs. and predict that cracks will tend to form in the middle of existing fragments of the layer, since that is ideally where the stress in the fragment will be largest. In our isotropic experiments new cracks are observed to form at almost any location in an existing fragment. They usually start at one edge, rather than in the interior, and propagate more or less in a straight line across the fragment. It is rare that the new crack will exactly bisect the fragment, presumably due to nonuniformities in the adhesion of the fragment to the substrate or in its dryness or composition. This is evident from the range of sizes of the fragments seen, for example, in Fig. 1. However on occasion near-perfect bisections do occur; one example is shown in Fig. 8. In this example the regions between propagating parallel cracks are bisected by new cracks, as proposed for the directional drying case in Ref. .
In the very late stages of crack pattern formation, new cracks tend to simply cut off the corners of existing fragments, breaking off a piece much smaller than the original fragment. The results of this process can be seen in many places in Fig. 1.
### B Geometrical characteristics
The characteristic length scale of the final crack pattern was determined in real space as well as from a Fourier analysis of the patterns. We estimate the pattern “wavelength” $`\lambda _r`$ by $`l/\sqrt{N_p}`$ where $`N_p`$ is the number of polygons in a square region of the pattern of side $`l`$. For a perfect pattern of equal-sized squares, this would be equal to the true wavelength. $`\lambda _r`$ was measured in a number of 8 cm by 8 cm square regions, neglecting the parts of the layer close to the edges, and is plotted as a function of the mean local layer depth $`d`$ in Fig. 9. In all three sets of isotropic drying experiments (untreated substrate, teflon-coated substrate, coated substrate plus added impurities) $`\lambda _r`$ is proportional to depth, as found previously by Groisman and Kaplan . The slope is a function of the experimental conditions, with the runs having lower friction between the layer and the substrate (Fig. 9(b)) having the highest slope. Both increased friction (Fig. 9(a)) and the addition of impurities (Fig. 9(c)) cause the slope to decrease. Equivalently, at a given layer depth, increased friction and the addition of impurities lead to a decrease in the length scale of the pattern. The scatter in the data increases for depths greater than about 2.75 mm, particularly for the runs with reduced friction and no impurities. This may be a result of the decreased effect of bottom friction with thickening layers, or of three-dimensional effects.
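The estimator and the through-origin proportionality fit can be sketched as follows; the depths and polygon counts below are made-up illustrative numbers, not data from this paper.

```python
import math

# Sketch of the real-space estimate lambda_r = l / sqrt(N_p) for an l x l
# sample containing N_p polygons, followed by a least-squares fit of
# lambda = a * d through the origin. HYPOTHETICAL example numbers only.
def wavelength(l_cm, n_polygons):
    return l_cm / math.sqrt(n_polygons)

depths = [1.0, 2.0, 3.0]            # layer depths d in mm (hypothetical)
counts = [6400, 1600, 711]          # polygons counted in an 8 cm x 8 cm sample
lams = [wavelength(8.0, n) for n in counts]

# through-origin least-squares slope: a = sum(lam*d) / sum(d*d)
a = sum(l * d for l, d in zip(lams, depths)) / sum(d * d for d in depths)
print(a)   # ~0.1 cm of wavelength per mm of depth for these numbers
```

For a perfect pattern of equal squares the estimator returns the true wavelength exactly; for real patterns it gives an effective mean polygon size.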
The distribution of length scales in the pattern was also studied by performing two-dimensional FFTs of video images of the dried layer. This analysis does not require any assumptions about the shape or orientation of the polygons making up the pattern. The two-dimensional Fourier power spectrum $`P_1(k)`$ of an image, where $`k`$ is the wavenumber, was calculated. A large low-$`k`$ peak, due to spatial nonuniformities in the illumination, was removed and the power spectrum was azimuthally averaged. The resulting spectrum is peaked about a wavenumber $`k_c`$, and $`\lambda _c=2\pi /k_c`$ can be taken as the characteristic length scale of the crack pattern. The spectra were asymmetric, with the skewness of the distribution being close to zero for the thinnest layers studied and increasing with depth. Fig. 9 also shows the values of $`\lambda _c`$ found in this way. The two methods of finding the length scale of the pattern are consistent.
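The Fourier measurement described above can be sketched in a few lines (numpy assumed); a synthetic striped image of known period stands in for a real pattern image and serves to check the method.

```python
import numpy as np

# Sketch of the Fourier length-scale measurement: 2D power spectrum,
# azimuthal average, peak wavenumber k_c, and lambda_c = 2*pi/k_c.
N = 256
x = np.arange(N)
img = np.sin(2 * np.pi * x[None, :] / 16.0) * np.ones((N, 1))   # period 16 px

P = np.abs(np.fft.fft2(img)) ** 2
P[0, 0] = 0.0                        # discard the uniform/illumination peak

k = np.fft.fftfreq(N) * N            # integer wavenumber indices
r = np.hypot(k[:, None], k[None, :])
bins = np.rint(r).astype(int)
power = np.bincount(bins.ravel(), weights=P.ravel())
nbin = np.bincount(bins.ravel())
radial = power / np.maximum(nbin, 1)            # azimuthally averaged spectrum

kc_index = radial[1:].argmax() + 1   # peak wavenumber index k_c * N / (2*pi)
lambda_c = N / kc_index              # wavelength in pixels, = 2*pi/k_c
print(lambda_c)   # 16.0
```

Because the measurement reduces the full two-dimensional spectrum to a radial profile, it needs no assumption about polygon shape or orientation, as noted above.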
As can be seen in Figs. 1 and 4, most polygons in the final pattern are four-sided, and most crack junction angles are at 90°. Even non-four-sided polygons tend to have 90° corners; this is possible because the cracks which define their sides are curved. Examples of this can also be seen in Fig. 1. Fig. 10 shows the distribution of $`n`$-sided polygons $`P_2(n)`$ averaged over all depths studied, for runs with the untreated substrate, the reduced friction substrate, and the treated substrate with added sand impurities. The distributions were essentially independent of depth over the range studied ($`0.13`$ mm $`\le d\le 4.12`$ mm), except as noted below, and the distributions in the three cases are the same within the experimental uncertainties. With both the treated and untreated substrates (but without impurities), there is some indication of a drop in the fraction of three-sided polygons and a corresponding jump in the fraction of four-sided polygons as the depth increases above 1.5 mm. If this variation (which, although persistent, is within the statistical uncertainty of the data) is real, it is probably a result of cracks breaking off the corner of an existing polygon in the late stages of pattern formation, as described above. This process can turn, for example, a four-sided polygon into a three- and a five-sided polygon. This phenomenon is less common in thicker layers and in runs with added impurities.
The distribution $`P_3(\theta )`$ of crack junction angles $`\theta `$ was sharply peaked at 90° for all runs. Sample distributions, using 5° bins for $`\theta `$, are shown in Fig. 11. Fig. 11(a) shows $`P_3(\theta )`$ for runs with the untreated substrate, for a relatively thin layer and a relatively thick layer. The distributions are essentially identical. In particular, the mean is 90° within the experimental uncertainties and the width of the distribution does not change with depth. Fig. 11(b) shows distributions for runs with the reduced-friction substrate. Again the mean is $`90^{\circ }`$ for these runs, but in this case the distribution becomes much broader for thinner layers than for thicker layers. When sand was added to the layer, the mean remained at 90° and again the distribution tended to broaden as $`d`$ decreased. None of our runs had clear peaks at angles other than 90°; the broadening of the distribution for small $`d`$ was due to a general spreading of the distribution and not to the emergence of peaks at, for example, 120°. Although symmetric triplets of cracks with junction angles of 120° were seen to nucleate in the early stages of drying, as discussed above, the number of such events was always too small to show up as a significant peak in the distribution functions, even in runs with sand added.
## IV Discussion
Cracks develop in a drying layer when the shrinkage of the layer induces sufficient stress that the layer fractures. Our observations indicate that cracks initially appear by nucleation at a few points. When sand impurities were added to the layer, sand grains were frequent nucleation sites. Triplet junctions, at which three cracks meet with junction angles of 120°, are formed by such nucleations in the early stages of pattern development. At later times, however, cracks meet predominantly at 90° junctions. This is due to the fact that cracks propagate in the direction which most efficiently relieves the stress. Since the stress near a given crack is parallel to its surface, other cracks will tend to approach and meet it at right angles.
The formation of crack patterns is strongly influenced by drying gradients, which in turn lead to stress gradients. Cracks tend to propagate in the direction of a dryness gradient. This is most obvious in the patterns produced in the directional drying experiments, but is also the cause of the laddering patterns observed in the isotropic experiments. The arching illustrated in Fig. 5 presumably also results from the interaction between the propagating crack and the local dryness or stress field.
Theories of crack pattern formation indicate that each generation of cracks should appear in the middle of existing fragments, leading to successive halvings of the length scale of the pattern. This follows from the fact that ideally the stress in a fragment of the drying layer will be a maximum midway between two cracks. Near-perfect halving of existing fragments, as in Fig. 8, was seen only rarely in our experiments. This is presumably due to unevenness in the local stress distributions, in turn resulting from nonuniform adhesion to the substrate and nonuniformities in the dryness and composition of the layer.
The spacing of the primary cracks in the directional drying experiments did not exhibit the halving effect either. Instead, the spacing that was quickly established near the beginning of the run propagated stably, as if we only observed the endpoint of the halving process. Even when sudden changes in stress were forced by propagating fronts over steps in the depth, spacing changes only occurred for rougher substrates.
The length scale of the final crack pattern in the isotropic experiments was found to be proportional to the layer depth, as shown in Fig. 9, with the constant of proportionality being largest for reduced friction between the layer and the substrate. This is in agreement with the experiments of Groisman and Kaplan and also with the theoretical work of Kitsunezaki under the assumption that the cracks develop as a result of a critical stress mechanism. This is easily understood. With reduced substrate adhesion, the stress in the layer will grow more slowly with distance away from an existing crack, and so a larger region is needed for the layer to reach the critical stress for fracture in that case.
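The proportionality constant between pattern length scale and layer depth can be extracted by a least-squares fit through the origin; the depth/spacing pairs below are hypothetical stand-ins for the data of Fig. 9.

```python
import numpy as np

# Hypothetical (depth, crack spacing) pairs in mm, standing in for Fig. 9 data.
d = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
spacing = np.array([1.6, 3.1, 4.4, 6.2, 7.4])

# Least-squares line through the origin: c = sum(spacing*d) / sum(d*d)
c = np.dot(spacing, d) / np.dot(d, d)
print(f"spacing/depth = {c:.2f}")
```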
The data in Fig. 10 show that the distributions of the number of sides of the polygons in the final pattern are essentially the same for the experimental conditions studied. The distribution of junction angles was sharply peaked at $`90^{\circ }`$ in all runs. For experiments with the untreated substrate, there was no significant variation in the distribution of angles with depth, as indicated in Fig. 11(a). The standard deviation of the distribution was about 15 for all depths, and no peaks were observed at any other angles. With reduced bottom friction, both with and without added sand, the distribution broadened as the layer depth decreased, as indicated in Fig. 11(b). The standard deviation of the distribution increased from approximately 5 for layers 2.5 mm thick to 20 for layers 0.7 mm thick. Although nucleation of triple cracks was observed in the early stages of pattern formation, particularly with added sand, there was no sign of significant peaks in the distribution at angles other than 90.
For comparison, Groisman and Kaplan showed a distribution of junction angles for thick layers of a dried coffee-water mixture, which had a mean of 93.1 and a standard deviation of 8.0. Groisman and Kaplan observed a transition in the distribution of junction angles as layer depth was decreased: for layers thinner than 4 mm, they observed a marked increase in the number of 120 junctions. In their thinnest layers, they found up to 30% 120 junctions. They explained this as being due to an increased likelihood of nucleation of cracks from inhomogeneities in the material when the layer thickness became roughly the same as the size of local inhomogeneities. While we do see a broadening of the distribution of angles when the substrate is coated with teflon, we observe no sharp transition down to layer depths of 0.5 mm, which is the size of the sand grains added to our mixture. We also see no evidence for a peak in the distribution at angles of $`120^{\circ }`$. It is likely that our alumina slurries were much more homogeneous than the coffee-water mixtures used in Ref. , but this does not explain the absence of an increase in the number of triplet junctions in thin layers with added sand.
The theoretical models of Refs. and predict that the appearance of the crack pattern should change as the amount of disorder in the system increases. With no sand added, our layers are very homogeneous and patterns are formed by cracks which propagate in straight lines and interact to form an array of predominantly four-sided polygons. These patterns are consistent with the model predictions in the case of weak disorder . With sand added, and to some extent in thinner layers, the cracks tended to propagate shorter distances and the patterns became closer to the type predicted for medium disorder. Thus our results provide a qualitative confirmation of the validity of the models.
## V Conclusions
Our isotropic drying experiments resulted in crack patterns which were similar in appearance to those predicted by theoretical models in the case of weak disorder. This is consistent with our expectation that the alumina-water mixtures used in our experiments were quite uniform in composition down to rather small length scales. The development of the crack patterns in our experiments did not proceed by a single well-characterized process. Initially, cracks nucleate at points. In layers with sand added, sand grains often serve as nucleation sites. The distribution of crack junction angles is strongly peaked at 90 for all experimental conditions studied. 120 triplet junctions can result from the symmetric nucleation of three cracks at a point, but most crack junctions form later in the development of the pattern when a propagating crack meets an older crack at 90. We do not observe a dramatic increase in the fraction of triplet junctions nucleated as the layer gets thinner, in contrast to the observations of Ref. . Models predict that new cracks will form midway between existing cracks , leading to a halving of the pattern length scale with each successive generation of cracks. Although this process does sometimes occur, it is not predominant, at least partly because of nonuniformities in layer thickness, substrate adhesion, and drying. Other processes such as the laddering and arching described above are much more common. The laddering effect is most pronounced in the directional drying experiments. We found that the primary crack spacing in the directional experiments did not show the halving effect either, but rather that the spacing tended to become established early and then propagate stably thereafter, even over sudden changes in depth. The spacing in the directional experiments depended on the substrate adhesion, but not on the speed of front propagation.
In summary, we have performed an extensive experimental study of crack pattern formation in both the isotropic and directional drying cases. Our results provide some confirmation of the predictions of theoretical models, but also underline the complexity of the pattern formation process in this system.
## Acknowledgements
This research was supported by NSERC of Canada. We acknowledge helpful discussions with N. Rivier and P.-Y. Robin, and development work by Eamonn McKernan.
## I INTRODUCTION
The effects of random pinning on systems of charge-density waves (CDW) or spin-density waves (SDW), and related problems like the pinning of the Abrikosov vortex lattice, have been studied for a long time. In real laboratory samples, there are always defects which create such pinning forces. Nevertheless, fundamental issues remain controversial. In 1970, Larkin presented an argument which shows that, if the unpinned system is translation-invariant (which means that we are ignoring any effects of a periodic crystal-lattice potential), then weak random pinning forces will destroy the long-range order (LRO) of a CDW in four or fewer spatial dimensions. A simpler domain wall energy argument was later presented by Imry and Ma, and under some conditions it can be made mathematically rigorous. One unresolved issue is whether these arguments can be extended to the SDW case, where there are experiments which indicate the stability of LRO in the presence of pinning. Another controversy involves the existence of the proposed “Bragg glass” phase, which has quasi-long-range order (QLRO).
If we ignore amplitude fluctuations, we can transform the pinned CDW problem into an XY model in a random field, whose Hamiltonian is usually taken to have the form
$$H_{RFXY}=-J\underset{<ij>}{\sum }\mathrm{cos}(\theta _i-\theta _j)-G\underset{i}{\sum }\mathrm{cos}(\theta _i-\varphi _i).$$
(1)
Site $`i`$ is at position $`𝐫_i`$, the sites form a lattice, and $`<ij>`$ indicates a sum over nearest neighbors. Each $`\theta _i`$ is a dynamical variable representing the phase of the CDW at site $`i`$, and can take on values in the interval $`[-\pi ,\pi )`$. Each $`\varphi _i`$ is a representation of the random pinning energy arising from lattice defects. Since the defect sites are assumed to be immobile, the $`\varphi _i`$ do not change with time. We also assume that the $`\varphi _i`$ on different sites are uncorrelated, and that the probability distribution for each $`\varphi _i`$ is uniform on $`[-\pi ,\pi )`$.
We can generalize Eq. (1) to study XY models in random $`p`$-fold fields, where $`p`$ is any positive integer:
$$H_{rp}=-J\underset{<ij>}{\sum }\mathrm{cos}(\theta _i-\theta _j)-G\underset{i}{\sum }\mathrm{cos}(p(\theta _i-\varphi _i)).$$
(2)
For Eq. (2), each $`\varphi _i`$ can be chosen to be in the interval $`[-\pi /p,\pi /p)`$, but $`\theta _i`$ still takes on values in $`[-\pi ,\pi )`$. The $`p`$ = 2 case, which is often simply referred to as the XY model with random anisotropy, is related to a linearly polarized SDW pinned by local moment impurities in the same way that the $`p`$ = 1 case is related to a pinned CDW. Note, in particular, that for $`p`$ = 2 the Hamiltonian preserves a two-fold inversion symmetry, in contrast to the $`p`$ = 1 case. One consequence of this is that for $`p`$ = 2 (or more), unlike $`p`$ = 1, the time average of the local magnetization, $`𝐌(𝐫_i,t)_t=(\mathrm{cos}(\theta _i(t)),\mathrm{sin}(\theta _i(t)))_t`$, must be zero for every $`i`$ in the paramagnetic phase.
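For concreteness, the energy of Eq. (2) on an $`L\times L\times L`$ periodic simple cubic lattice can be evaluated as in the following sketch; the function name and parameter values are illustrative, not taken from any code used in this work.

```python
import numpy as np

def random_p_fold_energy(theta, phi, J=1.0, G=0.5, p=2):
    """Energy of Eq. (2) on an (L, L, L) lattice with periodic boundaries.

    theta are the spin angles; phi are the quenched random field/anisotropy
    angles (illustrative names).
    """
    # Nearest-neighbor exchange: each bond counted once via a roll per axis.
    exchange = 0.0
    for axis in range(3):
        exchange += np.sum(np.cos(theta - np.roll(theta, 1, axis=axis)))
    # Random p-fold pinning term.
    pinning = np.sum(np.cos(p * (theta - phi)))
    return -J * exchange - G * pinning

rng = np.random.default_rng(0)
L = 8
theta = rng.uniform(-np.pi, np.pi, (L, L, L))
phi = rng.uniform(-np.pi / 2, np.pi / 2, (L, L, L))  # p = 2: phi in [-pi/2, pi/2)
print(random_p_fold_energy(theta, phi))
```

A fully aligned configuration with all angles at the field direction gives the ground-state energy $`-3JL^3-GL^3`$ for $`p`$ = 1, which is a quick sanity check on the bond counting.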
Another model which is often considered is the elastic glass,
$$H_{eg}=J\underset{<ij>}{\sum }(\theta _i-\theta _j)^2-G\underset{i}{\sum }\mathrm{cos}(p(\theta _i-\varphi _i)).$$
(3)
For the elastic glass, the $`\varphi _i`$ again have values in the interval $`[-\pi /p,\pi /p)`$, but the $`\theta _i`$ are now defined on $`(-\mathrm{\infty },\mathrm{\infty })`$. The dependence of Eq. (3) on $`p`$ is trivial, since it can be removed by a rescaling of the variables. Therefore, after making the appropriate scaling, the behavior of the elastic glass must be the same for all $`p`$. Giamarchi and Le Doussal have performed an analytical calculation which shows that in three dimensions at zero temperature this model has a structure factor,
$$S_\theta (𝐤)=\frac{1}{L^3}|\underset{j}{\overset{L^3}{\sum }}\theta _j\mathrm{exp}(i𝐤𝐫_j)|^2,$$
(4)
which diverges like $`1/|𝐤|^3`$ at small $`|𝐤|`$. This result has recently been confirmed by a numerical simulation.
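As an illustration (not the code used for the simulations cited here), the structure factor of Eq. (4) can be evaluated with a fast Fourier transform:

```python
import numpy as np

def structure_factor(field):
    """S(k) = |sum_j field_j exp(i k.r_j)|^2 / L^3, evaluated by FFT."""
    return np.abs(np.fft.fftn(field)) ** 2 / field.size

rng = np.random.default_rng(1)
theta = rng.uniform(-np.pi, np.pi, (16, 16, 16))
S = structure_factor(theta)
# The k = 0 component equals L^3 times the squared mean of the field.
print(np.isclose(S[0, 0, 0], theta.size * theta.mean() ** 2))  # True
```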
It is argued that in the absence of topological defects (i.e. vortex lines), Eq. (1) should have the same continuum limit as Eq. (3), and that this should be true for Eq. (2) as well. The implication is that the behavior of Eq. (2) should be essentially the same for all $`p`$, just as is the case for Eq. (3). However, the phase space for Eq. (3) is simply connected, while that of Eq. (2) is not, even in the absence of defects. Since it is known that the numerical simulation results for the $`p`$ = 3 case of Eq. (2) do not show the same behavior as the numerical simulation of Eq. (3), it appears that this difference in the topology of the phase space for the two models invalidates the mapping. One can map the energies from Eq. (2) into Eq. (3), but the entropies are different, even in the absence of vortex lines.
The difficulty is most transparent in the large $`G/J`$ limit considered by Fisher. In this limit, for the elastic glass, Eq. (3), each $`\theta _i`$ can still assume a countably infinite number of values, for any $`p`$. However, for the random field model, Eq. (2), each $`\theta _i`$ has only $`p`$ distinct allowed values in this limit. The $`1/|𝐤|^3`$ divergence of $`S_\theta (𝐤)`$ for the elastic glass arises from the unbounded variations of the $`\theta _i`$. This cannot occur in the random field model, where $`\theta _i`$ is defined on a compact manifold.
## II RANDOM $`p`$-FOLD FIELDS
In the remainder of this work we will consider a version of the model of Eq. (2), in the $`p`$ = 1 and $`p`$ = 2 cases. In a ferromagnetic phase, where time-reversal symmetry is spontaneously broken, the configuration averaged value of $`𝐌(𝐫,t)`$ in a single sample, $`𝐌(𝐫,t)_𝐫`$, is not zero. Thus, naively, one would expect that if LRO is destroyed by a weak random $`p`$ = 1 field, then it would also be destroyed by a weak random $`p`$-fold field for any $`p`$. This argument can be made explicit within a perturbation theory for small $`G/J`$ which is exact to leading order in $`1/N`$, where $`N`$ is the number of spin components.
The cause of the Larkin-Imry-Ma instability in the random field model may be seen by studying the magnetic structure factor, whose form in three dimensions is
$$S(𝐤,t)=\frac{1}{L^3}|\underset{j}{\overset{L^3}{\sum }}𝐌(𝐫_j,t)\mathrm{exp}(i𝐤𝐫_j)|^2.$$
(5)
In equilibrium, $`S(𝐤,t)`$ becomes independent of the time $`t`$ when $`L`$ becomes infinite. If the system is not ergodic, however, there may be multiple equilibrium states, each with a different $`S(𝐤)`$. In a ferromagnetic phase $`S(𝐤)`$ shows a $`\delta `$-function peak at $`𝐤`$ = 0. For the random $`p`$ = 1 field one shows that the presence of this $`\delta `$-function induces a $`1/|𝐤|^4`$ peak in $`S(𝐤)`$ at small $`|𝐤|`$. Such a peak is impossible in four spatial dimensions or less. Due to the norm-preserving property of the Fourier transform,
$$\underset{𝐤}{\sum }S(𝐤)=L^3,$$
(6)
where the sum over k runs over the Brillouin Zone. Since the square of the length of each spin is one in this model, Eq. (6) merely states that the total cross-section in a scattering experiment is equal to the number of spins in the scattering volume times the cross-section of one spin. There is no corresponding sum rule for $`S_\theta (𝐤)`$.
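The sum rule of Eq. (6) is simply Parseval's theorem applied to unit-length spins, and is easy to check numerically; this is an illustrative sketch, not production simulation code.

```python
import numpy as np

rng = np.random.default_rng(2)
L = 8
theta = rng.uniform(-np.pi, np.pi, (L, L, L))

# Two-component unit spins M = (cos theta, sin theta); S(k) as in Eq. (5).
S = (np.abs(np.fft.fftn(np.cos(theta))) ** 2 +
     np.abs(np.fft.fftn(np.sin(theta))) ** 2) / L**3

# Summing S(k) over the Brillouin zone recovers the number of spins, Eq. (6).
print(np.isclose(S.sum(), L**3))  # True
```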
We proceed by separating the time-dependent and time-independent parts of M. Without loss of generality, we can rewrite Eq. (5) in the form
$$S(𝐤,t)=\frac{1}{L^3}|\underset{j}{\overset{L^3}{\sum }}(𝐌(𝐫_j,t)_t+\delta 𝐌(𝐫_j,t))\mathrm{exp}(i𝐤𝐫_j)|^2,$$
(7)
where $`\delta 𝐌(𝐫_j,t)_t=0`$. Performing the Fourier transform and taking a time average yields
$$S(𝐤,t)_t=|𝐌(𝐤,t)_t|^2+|\delta 𝐌(𝐤,t)|^2_t.$$
(8)
If the system is not ergodic, then we will find a different $`S(𝐤,t)_t`$ for each equilibrium state.
To evaluate Eq. (8), Imry and Ma ignore the fixed-length-spin constraint, and assume that for small $`G/J`$ they can use a linear response spin-wave perturbation theory. The second term of Eq. (8) is the standard contribution from dynamical fluctuations of the spins. In linear response theory it gives a contribution to $`S(𝐤)`$ of Lorentzian form, proportional to $`1/(|𝐤|^2+1/\xi ^2)`$, where $`\xi `$ is the correlation length. When $`G`$ = 0, then $`\xi `$ is infinite in the ferromagnetic phase, and within perturbation theory this remains true for small $`G/J`$.
In a ferromagnetic phase, the linear response spin-wave theory for the first term of Eq. (8) generates a $`1/|𝐤|^4`$ peak whose amplitude is proportional to $`G^2`$ times the square of the order parameter, $`|𝐌(𝐫_i,t)|^2_𝐫_t`$. Such a peak is impossible in four dimensions or less, due to the sum rule on $`S(𝐤)`$. If the decoupling of the different Fourier modes assumed in the spin-wave approximation were valid, this would indicate the instability of ferromagnetism in four dimensions or less in the presence of the random $`p`$-fold field. As discussed by Fisher, this decoupling is adequate when the number of dimensions is large, but it breaks down in four dimensions or less.
For $`p`$ = 1 the random fields cause each $`𝐌(𝐫_i,t)_t`$ to become nonzero even in the paramagnetic phase. Thus, for $`p`$ = 1 the spin-wave theory result in the paramagnetic phase for the first term of Eq. (8) is a “Lorentzian-squared” peak, of the form $`G^2/(|𝐤|^2+1/\xi ^2)^2`$. The sum rule on $`S(𝐤)`$ then implies that $`\xi `$ must be finite. This Lorentzian-squared peak also occurs for the random field Ising model, and is not related to the existence of massless spin waves.
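The incompatibility between the Lorentzian-squared peak and the sum rule can be made quantitative: in three dimensions the weight under a peak $`G^2/(|𝐤|^2+1/\xi ^2)^2`$ grows linearly with $`\xi `$, so a bounded total weight forces $`\xi `$ to stay finite. A numerical sketch of this scaling, with illustrative parameter values:

```python
import numpy as np

def lorentzian_sq_weight(xi, G=1.0, kmax=50.0, n=200001):
    """Riemann-sum estimate of int_0^kmax 4*pi*k^2 * G^2/(k^2 + xi^-2)^2 dk."""
    k = np.linspace(0.0, kmax, n)
    dk = k[1] - k[0]
    integrand = 4 * np.pi * k**2 * G**2 / (k**2 + xi**-2) ** 2
    return integrand.sum() * dk

# The infinite-cutoff result is pi^2 * G^2 * xi, so doubling xi roughly
# doubles the weight under the peak.
for xi in (2.0, 4.0, 8.0):
    print(xi, lorentzian_sq_weight(xi))
```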
There is no Lorentzian-squared peak in the paramagnetic phase for $`p`$ = 2, since in this case each $`𝐌(𝐫_i,t)_t`$ = 0, so that the first term of Eq. (8) then makes no contribution to $`S(𝐤)`$. Therefore, the sum rule on $`S(𝐤)`$ cannot prevent $`\xi `$ from diverging for $`p`$ = 2, and the existence of a QLRO phase in this case was proposed in 1980 by Aharony and Pytte.
The domain wall energy scaling argument given by Imry and Ma, which is nonperturbative, compares the relative strengths of the exchange energy term and the random pinning term as a function of length scale. If the effective value of the coupling $`G/J`$ scales to infinity at large length scales, then we know that for $`p`$ = 1 the model cannot be ferromagnetic.
An analogous argument does not suffice for $`p>`$ 1, however, because even for strong random anisotropy the mean-field theory has a ferromagnetic phase. The domain wall energy argument does not account for the exact $`p`$-fold symmetry of the Hamiltonian which exists for $`p>`$ 1. For $`p>`$ 1 one cannot show that the random term uniquely determines the large-scale structure of the low energy states. Thus the rigorous proof which works for $`p`$ = 1 cannot be applied for larger $`p`$.
Because the spin-wave argument assumes replica symmetry, its lack of rigor has long been recognized. More recently, Mézard and Young have shown explicitly that when one calculates beyond the leading order in $`1/N`$ for $`p`$ = 1, the replica symmetry is broken in the ferromagnetic phase, and that, therefore, the randomness should cause $`\xi `$ to be finite. Since the randomness destroys translation invariance, it is not surprising that it should also cause the long-wavelength spin-waves to become massive. Presumably this replica-symmetry breaking will also occur for $`p>`$ 1.
Within spin-wave perturbation theory, the effect of a random anisotropy on the ferromagnetic phase appears to be the same as the effect of a random field. It seems reasonable that one should be able to study the properties of a single minimum of the free energy, and that, at least for small $`G/J`$, the behavior of the system in this local minimum should not depend on $`p`$. Because the replica symmetry is broken for finite $`N`$, however, we know that this can fail.
There are a number of results which support the existence of LRO for XY models (i.e. N = 2) with random anisotropy in four dimensions or less. The first are the experiments on SDW alloys which appear to have LRO. The second is the high temperature susceptibility series for the random anisotropy XY model, which gives no indication of an instability of ferromagnetism in four dimensions. The third is the computer simulations for the $`p`$ = 3 case in three dimensions, which show that the $`p`$ = 3 random anisotropy does not destroy the transition to ferromagnetism, but the transverse correlation length in the ferromagnetic phase becomes finite.
In this work we present the results of a computer simulation study of a toy model which we believe preserves the essential features of Eq. (2). For this model we find that in three dimensions a $`p`$ = 1 random field perturbs the structure factor of finite lattices in a manner consistent with the destruction of ferromagnetism for any strength of the random field, as predicted by the domain wall energy scaling argument. In the corresponding $`p`$ = 2 case, however, we find no evidence for the destruction of ferromagnetism. Instead, we find that for this model the $`p`$ = 2 random anisotropy causes $`\xi `$ to become finite without destroying the LRO.
## III TOY MODEL FOR RANDOM FIELDS
Large-scale computer simulations of the random field and random anisotropy XY models have been performed in the last few years. While the results of these simulations are quite instructive, it has been difficult to study the behavior at weak randomness and low temperature. This is due to the limited size of the lattices which can be studied, and the difficulty of making transitions over energy barriers. In order to improve the effectiveness of the simulations, one may either try to develop new techniques for studying Eq. (2), or else one may try to find a modification of the Hamiltonian which preserves the essential features, but is easier to study. In this work we adopt the second approach. We will describe and study a model in which there are no energy barriers.
It was shown by Kohring, Shrock and Wills that if one adds a large vortex fugacity term to the XY model on a simple cubic lattice, then the model retains a ferromagnetic equilibrium state even in the absence of any explicit exchange energy. It was later shown that this “vortex-free” XY model, in which all allowed spin configurations have the same energy, behaves in most respects like a normal XY model at some finite temperature within the ferromagnetic phase. Here, we will study the effects of adding random fields and random anisotropies to the vortex-free model. In order to retain the property that all allowed states of the model have the same energy, we replace the random term of Eq. (2) by constraints on each $`\theta _i`$.
To obtain a random-field-type model, for each $`i`$ we choose a random arc of the circle of some fixed size, and declare that $`\theta _i`$ cannot take on values within that arc. The fraction of the circle which is removed at each site is then a parameter which measures the strength of the random field. In order to maintain the vortex-free constraint everywhere, it is sufficient that the fraction removed, $`R_1`$, be less than 1/2. To see this, note that any state in which all spins have values on the same half of the circle, so that there is some axis for which the projection of all spins is in the same direction, is vortex-free. We will refer to such a state as a “semicircle state”.
For a random-anisotropy-type model we perform the same procedure, except that we symmetrically remove two arcs from opposite sides of the circle at each site. In this case it is possible to satisfy the vortex-free constraint even if at each site we only allow two points. For the random anisotropy model it is clear that the allowed states come in pairs, so that time-reversal symmetry is not explicitly broken by the Hamiltonian.
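A compact way to implement both types of constraint is to fold the spin angle by the $`p`$-fold symmetry and test it against a single arc; the helper below is a hypothetical sketch of this bookkeeping, not the routine used in our program.

```python
import numpy as np

def allowed(theta, centre, R, p):
    """True if theta avoids the p forbidden arcs (total fraction R of the
    circle) placed symmetrically about `centre`."""
    # Folding by the p-fold symmetry maps the p arcs onto a single arc of
    # folded half-width pi*R about zero.
    d = np.angle(np.exp(1j * p * (theta - centre)))
    return bool(np.abs(d) > np.pi * R)

# p = 1, R = 1/2: the forbidden arc is the half-circle |theta - centre| < pi/2.
print(allowed(0.3, 0.0, 0.5, 1), allowed(2.0, 0.0, 0.5, 1))  # False True

# p = 2: allowed states come in pairs, theta and theta + pi.
print(allowed(1.0, 0.2, 0.375, 2) == allowed(1.0 + np.pi, 0.2, 0.375, 2))  # True
```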
We expect the qualitative behavior of the constraint-type random fields and random anisotropies to be the same as the corresponding $`p`$ = 1 or $`p`$ = 2 random terms of Eq. (2). It is somewhat less clear that our replacement of the exchange term in Eq. (2) by the vortex-free constraint will not make any qualitative difference. One can argue that for small $`G/J`$ the low energy states of Eq. (2) should be vortex-free, but it is difficult to prove this. In the simulation of Gingras and Huse it was observed that the vortex loops disappeared rapidly as the random field was made weaker at low temperature. It was suggested by them that in the absence of vortex loops an XY model with a random field would possess a QLRO phase, in which two-point correlations have a power-law decay as a function of distance. For the vortex-free model we can test this conjecture in a straightforward manner.
## IV MONTE CARLO CALCULATION
The Monte Carlo program was a modified version of one used earlier to study the vortex-free model without randomness. It approximates the circle by a 256 state discretization, and uses a simple cubic lattice with periodic boundary conditions. Two linear congruential pseudorandom number generators are used, one for assigning the random fields, and a different one for flipping the spins. The initial state of each lattice is chosen to be a semicircle state. Moves are rejected if they would violate the vortex-free constraint or the local random-field constraint.
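With the 256-state discretization, the vortex-free constraint amounts to requiring zero winding number around every plaquette. The following is an illustrative sketch of such a check, not the actual program:

```python
N_STATES = 256  # discretization of the circle used in the simulations

def wrap(d):
    """Wrap an integer angle difference into [-N_STATES/2, N_STATES/2)."""
    return (d + N_STATES // 2) % N_STATES - N_STATES // 2

def plaquette_vorticity(a, b, c, d):
    """Winding number of discrete angles around one plaquette a->b->c->d->a."""
    return (wrap(b - a) + wrap(c - b) + wrap(d - c) + wrap(a - d)) // N_STATES

print(plaquette_vorticity(0, 10, 20, 10))    # 0: smooth variation, no vortex
print(plaquette_vorticity(0, 64, 128, 192))  # 1: a +2*pi winding
```

A move is then rejected whenever the new angle would make any of the four plaquettes containing that site carry a nonzero winding number.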
A brief study of $`L\times L\times L`$ lattices as a function of size $`L`$ and the strength of the randomness showed that for the $`p`$ = 1 case increasing the strength of the randomness caused a progressive decrease of the equilibrium magnetization, $`|𝐌(L)|_𝐫_t`$, as expected, with $`|𝐌|_𝐫_t`$ extrapolating to zero for large $`L`$. For the $`p`$ = 2 case, however, there was no evidence of a decrease of $`|𝐌|_𝐫`$ as the randomness was turned on. In order to investigate this unexpected result carefully, it was decided to expend most of the computing effort on the computation of the structure factor for lattices with $`L`$ = 64.
Starting from a semicircle state, each $`R_1>`$ 0 lattice was run for 40,960 passes, which is several times the apparent longitudinal relaxation time. Some of the $`R_1`$ = 0 lattices were run for only half this time, because the longitudinal relaxation time is shorter in this case, and the transverse relaxation is given by spin-wave theory. The values of $`|𝐌|_𝐫_t`$ were obtained by averaging over the last half of each run, sampling every 20 passes. For $`L`$ = 64, the magnetization was found by this procedure to be 0.43516, 0.4018, 0.313 and 0.244 for $`R_1`$ = 0, 1/8, 1/4, and 3/8, respectively. The fluctuations in $`|𝐌|_𝐫_t`$ between runs become larger as $`R_1`$ increases, as does the time-averaged longitudinal susceptibility for a single run. Because only one initial state was used for each $`p`$ = 1 sample, we do not know if the variations in the time averages for different initial states of the same sample are as large as the variations between samples. The transverse susceptibility, obtained from the time-dependent fluctuations of $`𝐌_𝐫`$ averaged over the last half of each run, becomes smaller as $`R_1`$ increases. For $`R_1`$ = 3/8 and $`L`$ = 64, $`𝐌_𝐫`$ remains close to its initial direction for the duration of the run, and the transverse susceptibility is not much larger than the longitudinal susceptibility. This naturally implies that there are many local minima of the free energy, at least on the time scale of the simulation. For $`R_1`$ = 1/8 and 1/4 the direction of $`𝐌_𝐫`$ may change substantially at first, but then it seems to settle into some local minimum of the free energy, although the transverse susceptibility remains large.
Results for the angle-averaged $`S(𝐤)`$ for $`L`$ = 64 lattices with these strengths $`R_1`$ of the random $`p`$ = 1 field are shown on a log-log plot in Fig. 1. Each data set is an average of eight samples of the randomness, with one final state used for each sample. The data for the samples with no random fields approximately follow a $`1/|𝐤|^2`$ law, with an additional $`\delta `$-function at $`𝐤`$ = 0, which does not appear on the log-log plot, as predicted by spin-wave theory. As $`R_1`$ is increased, the weight of the peak is progressively pushed out to larger values of $`|𝐤|`$, with the sum rule on the integral over $`𝐤`$ of $`S(𝐤)`$ being preserved. Because of the sum rule, the fluctuations in the different small $`𝐤`$ modes are strongly coupled. This makes it difficult to estimate the statistical error for a single mode. Suffice it to say that the fluctuations of $`S(𝐤)`$ for a single $`𝐤0`$ mode of a single sample are of about the same size as the average value for that mode.
For $`R_1`$ = 1/8, the slope on the log-log plot of $`S(𝐤)`$ in the accessible small $`|𝐤|`$ region is approaching $`-3`$. Due to the sum rule, this indicates that there is no evidence for any LRO at this value of $`R_1`$, even though the value of $`|𝐌|_𝐫_t`$ is still not much reduced from its $`R_1`$ = 0 value at $`L`$ = 64. At $`R_1`$ = 1/4, $`S(𝐤)`$ shows an apparent slope of $`-2.85\pm 0.05`$ for small $`|𝐤|`$ on the log-log plot. By $`R_1`$ = 3/8 the $`L`$ = 64 samples are showing multiple local minima of the free energy. This may be an indication that some correlation length has become comparable to the sample size, but $`S(𝐤)`$ at $`R_1`$ = 3/8 can be fit at small $`|𝐤|`$ by a power law in $`|𝐤|`$ with an exponent, $`-(2+\eta )`$, of $`-2.63\pm 0.07`$. A QLRO phase with a continuously varying value of $`\eta `$ has recently been found in a similar model by Emig, Bogner and Nattermann.
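The exponents quoted above are slopes of log-log fits; a minimal version of such a fit is sketched below, validated on an exact power law rather than on our data.

```python
import numpy as np

def loglog_slope(k, S):
    """Least-squares slope of log S versus log k."""
    slope, _intercept = np.polyfit(np.log(k), np.log(S), 1)
    return slope

# Validate on an exact power law S ~ |k|^(-2.85): the fit recovers the exponent.
k = np.linspace(0.1, 1.0, 50)
print(round(loglog_slope(k, k ** -2.85), 6))  # -2.85
```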
In order to distinguish clearly between an infinite $`\xi `$ with a continuously varying exponent $`\eta `$ and a finite $`\xi `$, we would need data for larger $`L`$ or $`R_1`$ closer to 1/2. Either of these approaches would require a substantial increase in computing effort. If there really is an infinite $`\xi `$ and an $`\eta `$ which varies continuously as a function of $`R_1`$, then we would like to know if this behavior continues out to the maximum allowed value of $`R_1`$, and, if so, how $`\eta `$ behaves near that point. If it were practical to perform simulations for larger values of $`L`$, we believe that we would see the appearance of many local minima for any nonzero allowed value of $`R_1`$.
To study the $`p`$ = 2 case, we concentrated on samples with $`R_2`$ = 3/8, which means that only 1/4 of the states were allowed at each site. Smaller values of $`R_2`$ give an $`S(𝐤)`$ almost indistinguishable from the result for $`R`$ = 0, the model without randomness. Note that $`R_2`$ = 3/8 can be obtained from $`R_1`$ = 3/8 by removing an additional 3/8 of the allowed states at each site, and thus restoring a two-fold symmetry. Because the random anisotropy constraints now cause most of the attempted moves to be immediately rejected, each sample was run twice as long as for $`p`$ = 1. Also, two semicircle states, differing in average orientation by $`\pi /2`$ from each other, were used for each sample as initial states.
The average magnetization for $`L`$ = 64 and $`R_2`$ = 3/8 was obtained by averaging over the last quarter of each run. The result was $`|𝐌|_𝐫_t`$ = 0.43613, slightly *larger* than the result for the $`L`$ = 64 system with $`R`$ = 0. The results obtained using the two different initial states for a given sample did not appear to be more similar to each other than results from two different samples. The direction of the magnetization rotated significantly during most runs, indicating that $`\xi `$ is at least as large as $`L`$, and that the observed behavior is unlikely to be due to a failure of the system to relax. For one of these eight samples the final states of the two runs appeared to be in the same local minimum. Studying smaller samples for much longer running times also gave no indication that the results were caused by insufficient relaxation.
Eight samples with maximal random anisotropy (only two allowed states at each site, labeled $`W_2`$ = 1 in Fig. 2) were also studied. In this case, with only two allowed states per spin, a Metropolis-type algorithm, for which spin flips allowed by the vortex-free constraint were made with probability 3/4, was used to improve the efficiency of the program. For $`W_2`$ = 1, the $`L`$ = 64 value of $`|𝐌|_𝐫_t`$ was 0.4662, and the orientation of $`𝐌_𝐫`$ always remained close to its initial direction.
The structure factor for these $`p`$ = 2 cases, again averaged over eight samples, is shown in Fig. 2, along with the data for $`R`$ = 0. We see that at $`L`$ = 64 the structure factor for $`R_2`$ = 3/8 is not distinguishable from that of $`R`$ = 0. Although the data for $`R_2`$ = 3/8 appears to be slightly above the data for $`R`$ = 0 at small $`|𝐤|`$, this is a sampling artifact. The actual average of $`|𝐌|_𝐫`$ for the 16 $`R_2`$ = 3/8 states used in constructing the figure is 0.4349, while for the 8 $`R`$ = 0 states it is 0.4357.
For $`W_2`$ = 1, $`\xi `$ is approximately 8 lattice spacings, and the lineshape appears to be Lorentzian (plus the $`\delta `$-function at $`𝐤`$ = 0). In this case it appears that large $`L`$ behavior is seen already for $`L`$ = 16. A sample with $`W_2`$ = 1 and $`L`$ = 16 was run for approximately $`5\times 10^5`$ steps per spin. The system appeared to relax to equilibrium within the first 50 steps per spin, which was essentially the same as the relaxation time for the $`L`$ = 64 samples, and no transitions were seen out of the local minimum, which retained an $`𝐌_𝐫`$ almost parallel to that of the initial semicircle state. The transverse susceptibility is approximately 15 times the longitudinal susceptibility.
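Although the analysis code behind Figs. 1 and 2 is not part of the paper, the basic operation of turning a spin configuration into a structure factor can be sketched with an FFT. The snippet below is an illustrative reduction to a 1D chain (the paper works on an $`L^3`$ lattice) with the normalization convention that a fully aligned chain gives $`S(0)=N`$; none of the names or parameters come from the original study.

```python
import numpy as np

def structure_factor(theta):
    """S(k) for a 1D chain of XY spins with angles theta.

    The two spin components (cos, sin) are packed into one complex
    number, so |FFT|^2 / N sums both components at once; a fully
    aligned chain puts all of the weight into the k = 0 peak.
    """
    m = np.exp(1j * theta)          # spin (cos t, sin t) as a complex number
    amp = np.fft.fft(m)             # sum_r m_r exp(-2*pi*i*k*r/N)
    return np.abs(amp) ** 2 / len(theta)

N = 64
S = structure_factor(np.zeros(N))   # fully magnetized chain
```

For disordered samples one would average $`S(𝐤)`$ over runs and samples, as is done for the curves shown in the figures.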
## V DISCUSSION
The model we are using for our computer simulations is one in which all allowed states of the spins have the same energy. If the dynamical behavior is ergodic, then, by definition, it must be possible to get from any initial state to any final state. One might imagine, therefore, that this model, with single-spin flip dynamics, could have an ergodicity-breaking transition as a function of the strength of the randomness. That is, there could be a transition between having a connected phase space to having a phase space which is broken into many disconnected pieces.
It is easy to see, however, that any semicircle state can be connected, by single spin flips which do not violate the constraints, to another semicircle state with an arbitrary choice of the semicircle. This remains true as long as $`R_1`$ or $`R_2`$ is less than 1/2. Therefore, the fact that we find many local minima for large values of $`R_1`$ or $`R_2`$ when $`L`$ becomes large cannot be explained by a percolation transition in phase space. The breakdown of ergodicity is a true phase transition, because the transition rates between different local minima only go to zero in the infinite volume limit.
One should remember that the phase space available to an individual spin depends on how well aligned its neighbors are. In a semicircle state, the magnetization is $`2/\pi `$, about 0.63662. Thus, when the system relaxes into a state with $`|𝐌|_𝐫<`$ 0.5, the ability of individual spins to reorient by single-spin flips is greatly reduced, even though the entropy of the system as a whole has increased.
Entropy barriers are just as effective as energy barriers in suppressing transitions between different minima. If the paths in phase space between different local minima must pass through intermediate states in which the value of $`|𝐌|_𝐫`$ in a volume $`\xi ^3`$ is close to 0.6, then the probability of making such transitions is suppressed by a factor exponential in the correlation volume.
The above estimate may be unduly pessimistic. For example, it may be enough to increase the local magnetization in a surface layer, so that the entropy barrier is only proportional to $`\xi ^2`$. Nevertheless, the basic principle, that uncorrelated single-spin flips are not an efficient way to achieve large-scale reorientation of $`𝐌`$, is correct.
It would not be surprising if an alternative dynamics could be developed which flipped large clusters of spins simultaneously, and was thus more effective in moving through phase space. Therefore, we would like to check our results to see if they reflect true equilibrium behavior by developing such an algorithm. However, the results as they stand seem internally consistent, and they are also consistent with the other related results cited earlier.
For $`p`$ = 2, there is no instability of the LRO when the randomness is too weak to induce the creation of vortex lines. We remind the reader that when vortex lines are allowed, as for the strong random anisotropy limit of Eq. (2), the LRO appears to be unstable in three dimensions, and the low temperature phase seems to have only QLRO. The nature of the transitions between the LRO, QLRO and paramagnetic phases are clearly of great interest, but they cannot be explored within the vortex-free model.
It should be noted that the infinite vortex fugacity used in our model does not satisfy the smoothness conditions used in the proof of Aizenman and Wehr. Therefore, our finding that the random $`p`$ = 1 field destroys the LRO in our model is an indication that the smoothness conditions can be relaxed in three dimensions. Of course, it does not follow from this that the smoothness conditions can be relaxed in four dimensions.
An alternative method of removing the vortices is by placing a lower bound on the allowed values of $`\mathrm{cos}(\theta _i\theta _j)`$ for all nearest neighbor pairs of $`i`$ and $`j`$. This method directly violates the smoothness condition (4.5) of Aizenman and Wehr, and is a more severe constraint than the vortex fugacity method used here. It is likely that this alternative method would produce results in qualitative agreement with those found here using the Kohring-Shrock-Wills method.
## VI CONCLUSION
In this work we have used Monte Carlo simulations to study a vortex-free XY model in three dimensions with random $`p`$ = 1 and $`p`$ = 2 fields. This toy model is intended to represent the effects of random pinning on uniaxial CDWs and SDWs. We have found that for CDWs the LRO should be destabilized by weak random pinning, but that for SDWs the LRO should survive. These conclusions are consistent with experiment. Our results for the $`p`$ = 1 case are consistent with the existence of a QLRO of the type discussed by Emig, Bogner and Nattermann.
###### Acknowledgements.
The author is grateful to Michael Aizenman and David Huse for helpful discussions during the course of this work, and to the physics departments of Princeton University and Washington University for providing computing facilities.
Self-localization of directed polymers and oppressive population control
## Abstract
We construct a phenomenological theory of self-localization of directed polymers in $`d+1`$ dimensions. In $`d=1`$ we show that the polymer is always self-localized, whereas in $`d=2`$ there is a phase transition between localized and free states. We also map this system to a model of population dynamics with fixed total population. Our previous results translate to static and expanding population clusters, depending on the birth and death rates. A novel “pseudo-travelling wave” is observed in some sectors of parameter space.
Directed polymers are important topological objects in many condensed matter systems. Examples are flux lines in superconductors, domain walls in ferromagnets, and atomic steps on vicinal surfaces. In the case of an isolated directed polymer, theoretical studies have concentrated on the effects of quenched disorder, which may be either point-like or columnar. Using disorder to suppress the wandering of flux lines is a central concept in designing useful high-temperature superconductors.
Another mechanism by which the wandering of directed polymers may be suppressed is self-localization, mediated by elastic distortions of the medium in which the polymer is embedded. This has been studied previously via a variational free energy functional, which is the analogue of the quantum action for the polaron problem. In the first part of this Letter we study the phenomenon from an alternative viewpoint, using the statistical mechanics of directed polymers. Such an approach leads us to a simple phenomenological equation for the probability density $`P`$ of the directed polymer. We show that in $`d=1`$ the directed polymer is always self-localized, while in $`d=2`$ the polymer undergoes a transition to self-localization above a critical value of the coupling constant $`\lambda `$ (which measures the strength of interaction between the polymer and the embedding environment). The details of this transition are investigated by numerical integration.
In the latter half of this Letter we show that the phenomenological equation for $`P`$ may be re-interpreted in the language of population dynamics. In this case $`P`$ corresponds to the local population density. The precise form of the dynamics is unusual, since along with the birth and death terms, we have a global constraint that fixes the total population. \[Similar constraints have been studied recently in the context of plant population models, and a link has been made to self-organized criticality. Also, interesting links have been made between “non-Hermitian quantum mechanics” and population biology.\] The issue in the present work is not whether the population thrives or becomes extinct, but rather how the individuals spatially distribute themselves. Different phases are observed corresponding to self-localized and expanding populations. We also test the predictions of the theory by simulations of a discrete reaction-diffusion model with a global constraint. Finally, within the context of population dynamics we are at liberty to reverse the sign of the coupling constant $`\lambda `$. In this case, we find that in $`d=1`$ the density is delocalized via a pseudo-travelling wave – an unusual scenario which interpolates between dynamical scaling and travelling waves. We end the Letter by noting the general mapping between directed polymers and constrained population dynamics.
We consider a directed polymer with elastic constant $`\kappa `$ in a space with $`d`$ transverse dimensions $`𝐫`$, and one longitudinal dimension $`t`$. The polymer experiences thermal fluctuations from a reservoir at temperature $`T`$, and a potential $`V(𝐫,t)`$. One can write the partition function $`Z(𝐫,t)`$ for such a polymer with ends fixed at $`(\mathrm{𝟎},0)`$ and $`(𝐫,t)`$ in the form of a path integral. Following standard methods, this path integral may be rewritten as a partial differential equation of the form
$$\partial _tZ=\nu \nabla ^2Z-VZ,$$
(1)
where $`\nu =T/2\kappa `$ and we have absorbed a factor of $`T`$ into the potential. The probability density of the polymer is constructed by normalizing $`Z`$: $`P(𝐫,t)=Z(𝐫,t)/\int d^dr^{\prime }Z(𝐫^{\prime },t)`$. Inserting this definition into the equation for $`Z`$ yields a closed equation for $`P`$:
$$\partial _tP=\nu \nabla ^2P-VP+\left[\int d^dr^{\prime }V(𝐫^{\prime },t)P(𝐫^{\prime },t)\right]P.$$
(2)
To our knowledge, this description of the directed polymer has not been studied previously. Note this equation does not have the form of a Fokker-Planck equation – conservation of probability is not achieved via an equation of continuity. The physics of directed polymers is intrinsically non-local.
We want to consider the potential $`V`$ to have been caused purely by elastic distortions of the embedding environment, due to the presence of the directed polymer itself. In principle we could formulate a detailed microscopic model which includes these elastic degrees of freedom explicitly, and then attempt to integrate them out to find the effective potential. However, we are content to take a phenomenological viewpoint: the elastic distortions caused by the polymer will provide an attractive potential which is strongest in the immediate vicinity of the polymer, and negligible far from the polymer. If we try to construct such a potential from the probability density $`P`$, we come at once upon the particularly simple form $`V(𝐫,t)=-\lambda P(𝐫,t)`$, where $`\lambda `$ is a non-negative coupling constant. Inserting this relation into Eq.(2) we have a closed equation for $`P`$ of the form
$$\partial _tP=\nu \nabla ^2P+\lambda P^2-\lambda \sigma (t)P,$$
(3)
where $`\sigma (t)\equiv \int d^drP(𝐫,t)^2`$.
Given we have a non-linear integro-differential equation for $`P`$ there is little hope of a complete analytic solution. However we will uncover much of the physics from scaling arguments, and numerical integration. First, let us consider the possibility of simple scaling, and make a similarity Ansatz, which, based on the radial symmetry of the problem, we take as $`P(𝐫,t)=\xi (t)^{-d}F(z)`$, with $`z=r/\xi `$. Note, the prefactor is fixed by normalization of $`P`$. Inserting this Ansatz into Eq.(3) yields
$`\dot{\xi }(dF+zF^{\prime })+(\nu /\xi )(F^{\prime \prime }+(d-1)F^{\prime }/z)`$ $`+`$ (4)
$`(\lambda /\xi ^{d-1})(F-\sigma _0)F`$ $`=`$ $`0,`$ (5)
where $`\sigma _0=S_d\int dz\,z^{d-1}F(z)^2`$, and $`S_d`$ is the area of the unit hypersphere. We need to balance the terms arising from the derivative with respect to $`t`$ and the Laplacian. This implies $`\dot{\xi }=(\nu /\xi )`$ which in turn gives $`\xi =(2\nu t)^{1/2}`$. So for large $`t`$ the correlation length $`\xi `$ is large, and we see on comparing the Laplacian terms to the interaction terms (i.e. those proportional to $`\lambda `$) that for $`d>2`$ the interaction terms are negligible. In other words, for $`d>2`$ the behaviour of $`P`$ corresponds to thermal diffusion (at least for modest values of $`\lambda `$).
With the assumption of a $`t`$-dependent correlation length, we see that the interaction terms dominate for $`d<2`$, and there is no self-consistent solution. We therefore demand that $`\xi `$ is $`t`$-independent, and denote it as $`\xi _0`$. Focusing on $`d=1`$, we find
$$F^{\prime \prime }+(\lambda \xi _0/\nu )(F-\sigma _0)F=0.$$
(6)
This equation is simple enough to yield an exact solution. Expressed in terms of the original spatial variable $`x`$, we find the $`t`$-independent solution
$$P(x)=(1/4\xi _0)\mathrm{sech}^2(x/2\xi _0),$$
(7)
where the localization length is self-consistently found to be $`\xi _0=6\nu /\lambda `$. We have confirmed the stability of this solution by direct numerical integration of Eq.(3) using a variety of initial conditions. In all cases we find rapid convergence to the form given above. It is remarkable that our solution coincides exactly with the well-known form for the polaron probability density, first found by Rashba. This agreement is satisfying from a physical viewpoint, but by no means obvious given the mathematical difference between Eq.(3) and the non-linear Schrödinger equation governing polaron physics.
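As a concrete illustration, Eq. (3) in $`d=1`$ can be integrated with an explicit finite-difference scheme and compared against the sech$`{}^{2}`$ solution. The parameter values below ($`\nu =1`$, $`\lambda =6`$, so that $`\xi _0=6\nu /\lambda =1`$), the grid, and the time step are our own illustrative choices, not taken from the paper.

```python
import numpy as np

nu, lam = 1.0, 6.0
L, dx, dt = 40.0, 0.1, 0.002                 # dt < dx**2/(2*nu) for stability
x = np.arange(-L / 2, L / 2, dx)
P = np.exp(-x**2)                            # arbitrary localized start
P /= P.sum() * dx                            # unit total probability

for _ in range(15000):                       # integrate to t = 30
    lap = (np.roll(P, 1) - 2 * P + np.roll(P, -1)) / dx**2
    sigma = (P**2).sum() * dx                # global term of Eq. (3)
    P += dt * (nu * lap + lam * P**2 - lam * sigma * P)
    P /= P.sum() * dx                        # guard against round-off drift

xi0 = 6 * nu / lam
P_exact = (1 / (4 * xi0)) / np.cosh(x / (2 * xi0))**2
```

Starting from a Gaussian, the profile relaxes onto `P_exact` to within the spatial discretization error, consistent with the rapid convergence reported above.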
We can achieve a perfect balance of all the terms in Eq.(4) in exactly two dimensions. In this case, the correlation length drops out of the equation and we have an ordinary differential equation for $`F(z)`$ of the form
$$F^{\prime \prime }+(z+1/z)F^{\prime }+2F+\stackrel{~}{\lambda }(F-\sigma _0)F=0,$$
(8)
where $`\stackrel{~}{\lambda }=\lambda /\nu `$ is a dimensionless coupling constant. We have been unable to derive an analytic solution to this equation. We have studied its properties using a straightforward numerical integration scheme, starting with a localized initial condition, generally taken to be Gaussian. For modest values of $`\stackrel{~}{\lambda }`$ the probability density diffuses outwards, and we find good data collapse in accordance with the similarity Ansatz defined in terms of $`z=r/\sqrt{t}`$. As we increase $`\stackrel{~}{\lambda }`$ we find a sudden transition at $`\stackrel{~}{\lambda }_c\simeq 30.5`$, above which the probability density shrinks to a microscopically collapsed state. Details of this transition will be given in a longer paper.
We now move away from the physics of directed polymers, and re-interpret our central equation (3) within a very different context, namely reaction kinetics or population dynamics. Consider a species $`A`$ (either chemical, biological or homo sapiens) which undergoes the following birth and death processes: $`A\to 0`$ and $`2A\to 3A`$, with rates $`k`$ and $`k^{\prime }`$ respectively, which are in general time dependent. We denote the relative density of $`A`$ by $`\rho (𝐱,t)`$, and stress that the symbol $`t`$ now represents real time. We may write a reaction-diffusion equation to describe these processes:
$$\partial _t\rho =D\nabla ^2\rho +k^{\prime }(t)\rho ^2-k(t)\rho ,$$
(9)
where $`D`$ is the effective diffusion constant of individual $`A`$ ‘particles’. The above equation is only an approximate description of this process, and a priori is only expected to be valid above some critical dimension $`d_u`$. For lower dimensions, it is often necessary to explicitly include the appropriate (microscopically derived) noise term in the above equation. For a qualitative understanding of the model (which is further refined below), such a noise term is not required.
At this point, we shall introduce a somewhat unusual feature – a global constraint on the population. In other words we fix the normalization of the relative density to unity: $`\int d^dr\,\rho =1`$. From Eq.(9) we see this imposes $`k^{\prime }(t)/k(t)=\int d^dr\,\rho ^2`$. We can imagine two special cases of the above. First, we can fix the death rate $`k`$ to be constant, which implies that the total population is controlled by tuning the birth rate $`k^{\prime }`$ (which is not unknown). Alternatively, we have the chilling mechanism of total population control in which the birth rate is fixed, equal to $`k_0`$ say, and the death rate is adjusted according to $`k(t)=k_0\int d^dx\,\rho ^2`$. At the level of our phenomenological description, these two choices lead to similar behaviour, so we shall concentrate on the second. Substituting the appropriately tuned rates into Eq.(9) we have
$$\partial _t\rho =D\nabla ^2\rho +k_0\rho ^2-k_0\left[\int d^dr^{\prime }\rho (𝐫^{\prime },t)^2\right]\rho .$$
(10)
This equation is identical in form to the self-consistent equation (3) for the probability density of the directed polymer. Thus all of our results may be taken over and immediately applied to the population system outlined above. We state the gross features here. In $`d=1`$ the population will distribute itself in a stationary localized structure, with a spatial scale $`\xi _0\sim D/k_0`$. In $`d=2`$ the population will attempt to spread itself as widely as possible, with a diffusive dynamics, but only for $`k_0`$ less than some critical value $`k_c`$. Above this value, the system will undergo collapse and exist in a stationary high-density phase. For $`d=3`$ the population will generally exist in the diffusive spreading phase (although there is a possibility for different behaviour for very high values of $`k_0`$).
As mentioned before, it is not obvious a priori whether a phenomenological description such as Eq.(10) is an accurate model of the true population dynamics for low dimensions. Intuitively, we expect the microscopic fluctuations to be suppressed in our model due to the long-range interactions which are built-in via the global population constraint. In this case, our qualitative predictions from Eq.(10) should hold true. We have tested this hypothesis by performing microscopic simulations of the birth/death process described above. Our model consists of $`N`$ random walkers on a $`d`$-dimensional hypercubic lattice. The walkers obey exclusion statistics. If two walkers are at adjacent lattice sites, there is a probability $`q`$ for them to spawn a new walker at a neighbouring unoccupied site. If this occurs, a walker is randomly selected from the entire population and is killed - thus conserving the overall population. We have studied this model in various dimensions. In $`d=1`$, and starting from a compact cluster of 100 particles, we find that the cluster spreads out somewhat, but that the mean cluster size asymptotes to a constant. On averaging over realizations we find that the density profile of the cluster (in the center of mass frame) is in approximate agreement with the solution of the continuum model. We refer the reader to Fig.1 where a direct comparison is made.
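A minimal version of such a constrained walker model can be written in a few lines. The sketch below is not the authors' code: the lattice size, spawning-rule details, and update order are illustrative assumptions. The one feature it must share with the model described above is that every birth is immediately paid for by a randomly selected death, so the total population never changes.

```python
import random

def sweep(occ, walkers, q, rng):
    """One sweep: exclusion hops, then 2A -> 3A births with probability q,
    each birth compensated by killing a randomly chosen walker."""
    L = len(occ)
    for i in rng.sample(range(len(walkers)), len(walkers)):    # hop phase
        x = walkers[i]
        y = (x + rng.choice((-1, 1))) % L
        if not occ[y]:                                         # move if target empty
            occ[x], occ[y], walkers[i] = False, True, y
    for x in list(walkers):                                    # birth/death phase
        if not occ[x]:
            continue                                           # killed earlier this sweep
        left, right = (x - 1) % L, (x + 1) % L
        if (occ[left] or occ[right]) and rng.random() < q:     # adjacent pair spawns
            for y in (left, right):
                if not occ[y]:
                    occ[y] = True
                    walkers.append(y)                          # birth ...
                    z = walkers.pop(rng.randrange(len(walkers)))
                    occ[z] = False                             # ... compensating death
                    break

rng = random.Random(1)
L, N, q = 200, 50, 0.05
occ = [False] * L
walkers = list(range(75, 75 + N))                              # compact initial cluster
for x in walkers:
    occ[x] = True
for _ in range(500):
    sweep(occ, walkers, q, rng)
```

Histogramming the walker positions over many such runs (in the center of mass frame) produces density profiles of the kind compared with the continuum solution in Fig. 1.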
In $`d=2`$ we find a phase transition on varying the spawning probability $`q`$. For $`q<q_c`$ the cluster asymptotically diffuses outwards, with a cluster size growing as $`\sqrt{t}`$. For $`q>q_c`$ the cluster remains compact. The value of $`q_c`$ is generally very small. For example, for $`N=400`$, we find $`q_c\simeq 0.004`$. The precise $`N`$ dependence of $`q_c`$, along with details of the simulations, will be given in a longer work. At a qualitative level, we see that the continuum formulation (10) provides a good description of the underlying microscopic model.
In the directed polymer context there is no compelling reason to study the system for $`\lambda <0`$. However, within the context of population dynamics, this is a perfectly acceptable sector of parameter space to explore. In order to motivate such a model, we shall consider a different birth/death process to the one described above. Consider now that the individuals $`A`$ interact with the rules: $`A\to 2A`$ and $`2A\to 0`$, with rates $`r`$ and $`r^{\prime }`$ respectively, which are in general time dependent. In other words, we have rules corresponding to mutual annihilation and single-sex reproduction. We choose to take $`r^{\prime }=r_0`$ (constant) and tune $`r`$ to maintain a constant total population. The phenomenological reaction-diffusion equation for this process takes the form
$$\partial _t\rho =D\nabla ^2\rho -r_0\rho ^2+r_0\left[\int d^dr^{\prime }\rho (𝐫^{\prime },t)^2\right]\rho .$$
(11)
As desired, this corresponds to our original directed polymer equation (3) but with a negative coupling constant.
This equation has some remarkable properties, which may be uncovered from an approximate analytic treatment. We shall restrict our attention to $`d=1`$. We note that spatial coupling enters the model in two distinct ways: locally via the Laplacian, and globally via the normalization constraint. This means that there can be non-trivial dynamics even if we set $`D=0`$ – a limiting case which allows an exact solution of the equation. We shall confine ourselves to stating a few pertinent results. Details will be given in a longer work. It is obvious that in this limit the dynamics will be extremely sensitive to the initial condition. One static solution corresponds to choosing the initial condition to be a normalized top-hat function. More interesting solutions are obtained with initial conditions with infinite support. Consider $`\rho (x,0)=(A/\xi _0)\mathrm{exp}(-(|x|/\xi _0)^\beta )`$. For long times, one finds that this profile spreads outwards, with a flat central region of height $`1/\xi (t)`$, where $`\xi (t)`$ is the lateral size of the profile. We find that this length scale grows in time as $`\xi (t)\sim t^\alpha `$ with $`\alpha =1/(1+\beta )`$. The solution for the density profile is expressed as
$$\rho (x,t)=(1/\xi (t))F\left((|x|/\xi _0)^\beta -(\xi (t)/\xi _0)^\beta \right),$$
(12)
where $`F(y)`$ is simply related to the initial profile. Since this solution does not have a canonical dynamical scaling form, there may be other relevant scales; and indeed we find that there is a second important length scale, which is the width $`W(t)`$ of the interface separating the region of “flat” nonzero density and the region of vanishing density. This quantity changes in time according to $`W(t)\sim \xi (t)(\xi _0/\xi (t))^\beta `$. Therefore the interface width shrinks (grows) with time for $`\beta >1`$ ($`\beta <1`$). The above solution is rather unusual as it, in some way, interpolates between a travelling wave solution $`F(x-vt)`$ and a scaling (or similarity) solution $`F(x/\xi (t))`$. We term it a “pseudo-travelling wave”. \[We note in passing that an initial condition with tails decaying with a power $`q`$ evolves under normal dynamical scaling with $`\rho \sim (1/t)F(|x|/t)`$ and $`F(z)\sim z^{-q}`$ for $`z\gg 1`$.\]
One may now ask how the above picture is modified when one re-instates the local coupling $`D`$. We have several arguments which all agree that, so long as the initial profile has a finite transverse scale, then for late times the density will have the form (12), but with a selected value of $`\beta =1/2`$. This implies that the profile grows outwards with a transverse scale $`\xi (t)\sim t^{2/3}`$ and interfacial width $`W(t)\sim t^{1/3}`$. The simplest (albeit non-rigorous) way to see why this value of $`\beta `$ is selected is to assume that for non-zero $`D`$ the density profile has a form approximately given by (12), and to then ask how the Laplacian term in (11) can balance with the reaction terms. Since the profile is flat for $`|x|\ll \xi (t)`$ and vanishing for $`|x|\gg \xi (t)`$ it is clear that the size of the Laplacian is set by the width of the interfacial region. We therefore estimate $`\partial _x^2\rho \sim 1/W^2\xi `$. Similarly the reaction terms are of order $`\rho ^2\sim 1/\xi ^2`$. Balancing the two leads us to $`W(t)\sim \xi (t)^{1/2}`$, which on comparing with our previous result $`W(t)\sim \xi (t)^{1-\beta }`$ gives $`\beta =1/2`$. We have performed numerical integration of Eq.(11) in $`d=1`$ for a variety of initial conditions, and with zero or non-zero $`D`$. The exact results for $`D=0`$ have been numerically confirmed, however the situation for $`D>0`$ is less clear. In Fig.2 we show an example of the evolving density grown from an initial Gaussian profile, with $`D=1`$ and $`r_0=5`$. In the inset we plot the profile size and interfacial width. These quantities exhibit very strong corrections to simple power law scaling, but a closer analysis (studying the ‘running exponents’) indicates that the asymptotic power laws are approximately 0.65(1) and 0.30(1) respectively, which are close to the values obtained above.
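A bare-bones version of the $`D>0`$ integration is sketched below (explicit Euler with periodic boundaries; the values $`D=1`$ and $`r_0=5`$ follow the text, while the grid, time step, and initial profile are our own choices). It reproduces the qualitative behaviour, namely a conserved total population and a spreading, flattening profile, although extracting the asymptotic exponents requires far longer runs and careful handling of the strong corrections to scaling noted above.

```python
import numpy as np

D, r0 = 1.0, 5.0
L, dx, dt = 100.0, 0.25, 0.01            # dt < dx**2/(2*D) for stability
x = np.arange(-L / 2, L / 2, dx)
rho = np.exp(-x**2)                      # initial Gaussian profile
rho /= rho.sum() * dx
peak0 = rho.max()

for _ in range(2000):                    # integrate Eq. (11) to t = 20
    lap = (np.roll(rho, 1) - 2 * rho + np.roll(rho, -1)) / dx**2
    sigma = (rho**2).sum() * dx          # global birth-rate feedback
    rho += dt * (D * lap - r0 * rho**2 + r0 * sigma * rho)
    rho /= rho.sum() * dx                # guard against round-off drift
```

Tracking the half-height width of `rho` and the 10%–90% interfacial width as functions of $`t`$ then gives estimates of $`\xi (t)`$ and $`W(t)`$.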
In conclusion we have considered two very different physical scenarios which share a common mathematical description. These are self-localization of a directed polymer, and birth/death models with fixed population number. The physics of these processes is non-local which leads to a number of interesting results, including phase transitions from localized to non-localized states, and pseudo-travelling waves.
Finally, it is noteworthy that the fundamental directed polymer equation (2) with a general external potential $`V`$ can be recast in terms of constrained population dynamics. The case of a random potential is especially interesting, and corresponds to local, random birth/death processes, along with a global feedback to control the population. We are currently using this mapping to gain new insights into these two fascinating problems.
A fast programmable trigger for isolated cluster counting in the BELLE experiment
## 1 Introduction
Fast, complex, general-purpose trigger systems are required for modern particle physics experiments. Although custom-made CMOS gate arrays are used for extremely fast applications such as first-level triggers ($`\sim `$ 25 ns) for LHC experiments, field programmable gate arrays (FPGAs) are an attractive option for environments that require a less demanding speed ($`<`$ 100 ns) but a more flexible trigger logic implementation. The logic of FPGA-based trigger systems can be readily changed as the nature of signal and background conditions vary. Such trigger systems are flexible and can be adapted to many different applications. Commercial products that have these functionalities exist (for example, the LeCroy 2366 Universal Logic Module, LeCroy Co.) and can be used for implementing rather simple trigger logic. In the case of the calorimeter trigger for the BELLE experiment, the number of channels, data transfer rates, and the complexity of the trigger logic preclude the use of commercially available devices. We developed a 9U VME module that accommodates more than a hundred ECL signals for triggering purposes. The resulting board is a general-purpose asynchronous programmable trigger board that satisfies VME specifications.
## 2 Trigger requirements for the BELLE Experiment
The BELLE experiment at KEK in Japan is designed to exploit the physics potential of KEKB, a high luminosity, asymmetric $`e^+e^{}`$ collider operating at a center-of-mass energy (10.55 GeV) corresponding to the $`\mathrm{\Upsilon }(4S)`$ resonance. In particular, BELLE is designed to test the Kobayashi-Maskawa mechanism for CP violation in the B meson sector. The KEKB design luminosity is 1 $`\times `$ $`10^{34}`$ cm<sup>-2</sup>s<sup>-1</sup> with a bunch crossing interval of 2 ns. The BELLE detector consists of seven subsystems; a silicon vertex detector (SVD), a central drift chamber (CDC), an aerogel Cherenkov counter (ACC), an array of trigger and time of flight scintillation counters (TOF/TSC), an electro-magnetic calorimeter (ECL), $`K_L`$ and muon detectors (KLM) and extreme forward calorimeters (EFC). A 1.5 Tesla axial magnetic field is produced by a superconducting solenoid located outside of the ECL. The KLM is outside of the solenoid and provides a return yoke for the detector’s magnetic field. The BELLE trigger system requires logic with a level of sophistication that can distinguish and select desired events from a large number of background processes that may change depending on the conditions of the KEKB storage ring system. Figure 1 shows a schematic view of the BELLE trigger system. As shown in Fig. 1, the trigger information from individual detector components is formed in parallel and combined in one final stage. This scheme facilitates the formation of redundant triggers that rely on information from the calorimeter alone or from the tracking systems alone. The final event trigger time is determined by requiring a coincidence between the beam-crossing RF signal and the output of the final trigger decision logic. The timing and width of the subsystem trigger signals are adjusted so that their outputs always cover the beam-crossing at a well defined fixed delay of 2.2 $`\mu `$s from the actual event crossing.
The ECL is a highly segmented array of $`\sim `$ 9000 CsI(Tl) crystals with silicon photodiode readout installed inside the coil of the solenoid magnet. Preamplifier outputs from each crystal are added in summing modules located just outside of the BELLE detector and then split into two streams with two different shaping times (1 $`\mu `$s and 200 ns): the slower one for the total energy measurement and the faster one for the trigger. For the trigger, signals from a group of crystals are summed to form a trigger cell (TC), discriminated, digitized (as differential ECL logic signals), and fed into five Cluster Counting Modules (CCMs) that count the number of isolated clusters in the calorimeter. In total, the ECL has 512 trigger cells: 432 in the barrel region and 80 in the endcaps. The trigger latency of the CCM trigger board is $`\sim `$ 150 ns. Each module accepts 132 inputs and outputs 16 logic signals. (The actual board can accommodate a maximum of 144 inputs and provide as many as 24 output signals; for BELLE we have chosen to use 132 input and 16 output lines per board).
Given the complexity discussed above and the required flexibility, we chose to use a complex FPGA to apply the isolated clustering algorithm and a CPLD device in order to match the VME bus specifications. For the FPGA, we use an XC5215-PG299 chip that has 484 configurable logic blocks (CLBs), and for the CPLD, an XC95216-5PQ160, which provides 4,800 usable gates. Once the CPLD is loaded, it permanently holds all of the VME bus specification logic. In contrast, the trigger logic contained in the FPGA is lost during a power down, and must be reconfigured during start-up, either from an on-board PROM or from a computer (VME master module) through the VME bus. This takes a few milliseconds. In the following we describe in some detail the trigger logic design of the CCM board and how we achieve our performance requirements.
## 3 Logic Design
We used XACT<sup>TM</sup> software provided by Xilinx to design, debug and simulate our logic. The trigger processor board accepts the differential ECL logic signals from the calorimeter trigger cells. There are many possible strategies for finding and counting the number of isolated clusters (ICN) among the calorimeter trigger cells. But, since the trigger decision has to be made within a limited time period, a simple algorithm is desirable. We devised simple logic that counts only one cluster from a group of connected clusters. For the case of a number of connected clusters, we count only the upper most cluster in the right most column among them. This is demonstrated for a 3 $`\times `$ 3 trigger cell array in Fig. 2. Here, the trigger cell under the counting operation is numbered as “0”. If the cell “0” satisfies the logic diagram shown in Fig. 2, it is considered to be a single isolated cluster. We have applied this simple logic to the output of a GEANT-based full Monte Carlo simulation of various $`B`$ decay modes as well as Bhabha scattering events and compared the perfect cluster number with the cluster number returned by the above logic. The results are summarized in Table 1. In all cases, the discrepancies between the perfect cluster counting and the isolated cluster counting logic are below the 1 % level; despite its simplicity, the counting logic works exceptionally well. This simple clustering logic is applied over the 132 input signals and the number of isolated clusters is then tallied. In addition to the cluster counting logic, we also delay the 132 input and 16 output signals and register them in a set of FIFO RAMs (the pattern register) located on the board. The signals are delayed (in order for them to be correctly phased) by approximately 800 ns by means of an 8 MHz delay pulse and stored in FIFO RAMs at the trigger decision. The delay time can be easily changed by modifying the logic.
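The “perfect” cluster count used as the benchmark in Table 1 can be reproduced offline with a simple flood fill. The sketch below assumes 8-connectivity (diagonal neighbours count as connected, in keeping with the 3 $`\times `$ 3 neighbourhood of Fig. 2); it is the reference count, not the single-pass CCM logic itself.

```python
def count_clusters(grid):
    """Count 8-connected groups of fired trigger cells by flood fill."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    n = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                n += 1                       # a new, previously unvisited cluster
                stack = [(r, c)]
                while stack:                 # mark every cell of this cluster
                    i, j = stack.pop()
                    if 0 <= i < rows and 0 <= j < cols and grid[i][j] and not seen[i][j]:
                        seen[i][j] = True
                        stack.extend((i + di, j + dj)
                                     for di in (-1, 0, 1) for dj in (-1, 0, 1)
                                     if (di, dj) != (0, 0))
    return n
```

Comparing this reference count with the hardware's local rule on simulated hit maps yields discrepancy rates of the kind listed in Table 1.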
The pattern register allows continuous monitoring of the operation of the CCM module. The recorded cluster and ICN bits are read out through the VME bus independently of the ICN counting system. The FPGA counts the number of clusters asynchronously, and the simulated timing diagram in Fig. 3 indicates that the time needed for the ICN counting is 47 ns.
In order to satisfy the complete VME bus specification, a set of logical blocks (Address Decoder, Interrupter, Control Logic, Configuration Control Logic, CSR, and FIFO RAM Control) was developed and downloaded into the CPLD. The logical blocks are designed as a VME A24/D32 slave interface. Comparators are used to decode addresses being probed by the master module. Status bits are implemented to check the completion of the FPGA chip configuration and the status of the triggering process itself. Control bits are implemented to stop and start the output of the triggering signal, to enable reconfiguration of the FPGA chip via the PROM or the VME bus, and to control the FIFO RAM that serves as the pattern register. All of the functionality was tested extensively during the development phase and completely debugged before deployment in the experiment.
## 4 Hardware Implementation
The CCM module houses the main FPGA chip for the ICN counting, the CPLD chip implementing the VME bus specifications, ECL-TTL and NIM-TTL converters, the PROM holding the FPGA configuration, and the FIFO RAM pattern register. A schematic diagram and an assembled board are shown in Figs. 4 and 5, respectively. The printed circuit board is a VME 9U size four-layer board. All connectors, switches, components, and downloading circuitry are mounted on one side of the board. The logic signals to and from the FPGA are TTL CMOS, and are interfaced with the differential ECL logic signals to the rest of the trigger and data acquisition system. Standard 10124 (10125) chips with 390 $`\mathrm{\Omega }`$ pull-down resistors (56 $`\times `$ 2 $`\mathrm{\Omega }`$ termination resistors) are used to convert TTL to ECL (ECL to TTL). The input polarity is such that a positive ECL edge produces a negative TTL edge at the FPGA input. Also on board are several discrete-component NIM-TTL converters that interface with two external NIM control signals: the master trigger signal (MTG) and the external clock. Three 7202 CMOS asynchronous FIFO chips (3 $`\times `$ 1024 Bytes) provide the pattern register. The actual registration for one event includes 132 inputs, 16 outputs, 8 reserved bits, 10 memory address bits, and 2 unused bits; a total of 146 bits are registered in the three FIFO chips.
Programs for the FPGA chip can be downloaded from an on-board PROM (Master Serial Mode) or via the VME bus (Peripheral Asynchronous Mode). We use an XC17256D Serial Configuration PROM; the clustering logic is loaded into it by a PROM writer controlled by a personal computer. The VME master module chosen is the FORCE SUN5V, a 6U VME bus CPU board with a 110 MHz microSPARC-II processor running Solaris 2.5.1. Accessing the CCM from the VME master module is simply done by mapping the device (in our case, the CCM) into the memory of the master module. From there, the clustering logic can also be loaded into the FPGA chip. All of the control software was developed on this master module with the GNU gcc and g++ compilers. An object-oriented graphical user interface based on the ROOT framework was also developed. Resetting the module, downloading the logic to the FPGA from the PROM or the VME bus, and the FIFO reading are all implemented in the graphical user interface. Programs for the CPLD chip are downloaded through an on-board connector from the parallel port of a personal computer, which enables downloading of the CPLD program whenever necessary.
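The paragraph above describes accessing the CCM by memory-mapping it from the master module. A schematic illustration of that style of register access is sketched below; the device path, register offsets, and the use of Python are all invented for illustration (the actual control software was developed on the Solaris master module with gcc/g++ and ROOT).

```python
import mmap
import os
import struct

# Hypothetical register offsets, invented for illustration only.
CSR_OFFSET = 0x00    # control/status register
FIFO_OFFSET = 0x10   # pattern-register FIFO read port


def read_reg(path, offset):
    """Read one 32-bit big-endian register from a memory-mapped device.

    VME is conventionally big-endian; `path` may be a device node
    exported by a VME bridge driver, or an ordinary file when testing.
    """
    fd = os.open(path, os.O_RDWR)
    try:
        m = mmap.mmap(fd, offset + 4)
        try:
            return struct.unpack_from(">I", m, offset)[0]
        finally:
            m.close()
    finally:
        os.close(fd)


def write_reg(path, offset, value):
    """Write one 32-bit big-endian register through the mapping."""
    fd = os.open(path, os.O_RDWR)
    try:
        m = mmap.mmap(fd, offset + 4)
        try:
            struct.pack_into(">I", m, offset, value)
            m.flush()
        finally:
            m.close()
    finally:
        os.close(fd)
```

On a real system the path would be a device node exported by the VME bridge driver; for testing without hardware, the same functions work on an ordinary file.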
The base address of the board is set by an 8-pin DIP switch on the board. A hardware reset switch that resets the FPGA, the CPLD, and the FIFO RAMs is provided on the front panel. There are four LEDs, indicating power on/off, MTG in, and completion of the two FPGA configuration modes (LDC and SX1). Two fuses (250 V, 2 A) and four capacitors (100 $`\mu `$F) are placed on the $`\pm `$ 5 V lines for protection.
The trigger board has been fully tested and the results have been compared with software simulations. Test results are shown in Fig. 6, where a cluster-counting time of approximately 50 ns is found, which is in good agreement with the 47 ns time predicted by the simulation.
## 5 Performance with e<sup>+</sup>e<sup>-</sup> collisions
The BELLE detector started taking e<sup>+</sup>e<sup>-</sup> collision data, with all subsystems, the data acquisition systems, and the accompanying trigger modules operational, in early June of 1999. Six CCM modules installed in the electronics hut counted isolated clusters produced in the calorimeter by the e<sup>+</sup>e<sup>-</sup> collisions. Five CCM modules were used to count isolated clusters from the five sections of the calorimeter; the sixth module collected and summed the outputs from the other five. The flexibility inherent in the design of the board allowed the use of some of the input and output channels of the sixth module to generate triggers for Bhabha events as well as calorimeter timing signals.
In a sample of about 100K actual triggers, we found a nearly perfect correspondence between the numbers of isolated clusters provided by the trigger logic and those inferred from TDC hit patterns available at the offline analysis stage. Figure 7(a) shows the correlation between the number of isolated clusters from TDC hit patterns and the ICN from the CCM modules. As shown there, there are a few cases in which the ICN from the CCM modules is smaller than the number from the TDC hit patterns. Figure 7(b) shows the mismatch rate between the TDC-based and CCM-based cluster numbers as a function of the TDC-based cluster number. In more than 99.8 % of the cases, the two numbers are identical. We attribute the small level of inconsistency to the limitations of the cluster counting logic (see section 3) and the infrequent occurrence of timing offsets on the input signals.
## 6 Conclusions
We have developed a fast trigger processor board utilizing FPGA and CPLD technology. It accommodates 144 ECL input signals and provides 24 ECL output signals. It functions as a 9U VME module that enables the loading of revised trigger logic and the online resetting of the module. In addition, a pattern register on the board contains all of the input/output ECL signals that were used in a process. The isolated clustering logic is measured to have a time latency of 50 ns, in good agreement with the prediction of the simulation. Sufficient hardware and software flexibility has been incorporated into the module to make it well suited for dealing with a variety of experimental conditions.
We would like to thank the BELLE group for their installation and maintenance of the detector, and acknowledge support from KOSEF and the Ministry of Education (through BSRI) in Korea.

Magnetic field as a probe of gap energy scales in YBCO
## Abstract
Among high T<sub>c</sub> materials, the YBCO (YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-x</sub>) compounds are special since they have superconducting chains as well as planes. We show that a discontinuity in the density of states as a function of magnetic field may appear at a new energy scale, characteristic of the chain and distinct from that set by the $`d`$ wave gap. This is observable in experimental studies of the thermodynamical properties of these systems, such as the specific heat.
It is widely accepted that a $`d`$-wave pairing gap describes the superconductivity in high T<sub>c</sub> compounds. The YBCO compounds are special, however, because the one-dimensional chains change the Fermi surface, giving a quasi-linear branch and thereby introducing a new energy scale for the variation of the gap on the Fermi surface. We argue that this can be directly measured using a magnetic field.
Scanning tunnelling microscopy (STM) measurements of the local density of states of YBCO agree with earlier tunnelling measurements and show a very clear structure around $`20`$ meV corresponding to the full gap energy. They also indicate structure at around $`5.5`$ meV which is absent in BSCCO compounds. It is natural to assume that this is because YBCO has a different electronic structure, possessing orthorhombic chains as well as tetragonal planes. The chains introduce a new energy scale, the value of the gap function at the endpoint of the chain Fermi surface on the boundary of the Brillouin zone. Since experiments are only sensitive to the value of the gap function on the Fermi surface (and not to its global behaviour throughout the Brillouin zone) this energy scale will be apparent in experiments. Another interpretation of the additional structure is localised quasi-particle states in the vortex core. Even if present, these should have a negligible effect on the bulk density of states, a point also made in Ref. .
We model this by assigning the chains an elliptical Fermi surface but with a highly modified gap function
$$\mathrm{\Delta }_𝐤=\mathrm{\Delta }_0(\mathrm{cos}2\varphi +s).$$
(1)
$`\varphi `$ is the polar coordinate in $`𝐤`$ space and the dimensionless parameter $`s`$ (with $`|s|<1`$) indicates the effective amount of $`s`$ wave in the order parameter. We stress that (1) is not the gap function defined throughout the Brillouin zone but is rather its value evaluated on the Fermi surface. Along the chain Fermi surface the gap function will have a dependence like (1) even if it is globally pure $`d`$-wave. This is an effective theory to model the contribution of the chains to the density of states and reflects a combination of gap symmetry and Fermi surface geometry. (Independent information on the existence of a subdominant $`s`$-wave component is provided by Josephson tunnelling data.)
Standard theory for the density of states gives
$$\frac{N(\omega )}{\overline{N}}\propto \int d𝐤\mathrm{Im}\left\{\frac{|\omega |}{\omega ^2-\mathrm{\Delta }_𝐤^2-\xi _𝐤^2}\right\},$$
(2)
where $`\xi _𝐤`$ is the dispersion relation for the quasi-particle excitation energy (for the moment we take it $`\xi _k=𝐤^2/2m`$ with $`𝐤`$ the momentum and $`m`$ the effective mass) and $`\overline{N}`$ is the normal, nonsuperconducting density of states. Using the gap function (1), this integral is easily done and is plotted in Fig. 1 using $`s=0.57`$ (this choice is explained below). There is a van Hove singularity at $`\omega =(1+|s|)\mathrm{\Delta }_0`$, as we expect; however, there is another at $`(1-|s|)\mathrm{\Delta }_0`$, in qualitative agreement with the STM observations.
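Carrying out the $`\xi `$ integration in (2) at fixed angle reduces the density of states to the angular average of $`\mathrm{Re}\{\omega /\sqrt{\omega ^2-\mathrm{\Delta }(\varphi )^2}\}`$, which is easy to evaluate numerically. The sketch below (the broadening $`\eta `$ is purely a numerical regulator, not part of the model) confirms the two van Hove singularities of Fig. 1 at $`(1\pm |s|)\mathrm{\Delta }_0`$ for $`s=0.57`$.

```python
import cmath
import math


def dos_vs_omega(w, d0=1.0, s=0.57, eta=3e-3, n=40_000):
    """Angular average <Re z/sqrt(z^2 - gap(phi)^2)> with z = w + i*eta,
    for the gap d0*(cos 2phi + s) of Eq. (1).  eta is a small numerical
    broadening, n the number of midpoint integration points."""
    z = complex(w, eta)
    total = 0.0
    for i in range(n):
        phi = 2.0 * math.pi * (i + 0.5) / n
        gap = d0 * (math.cos(2.0 * phi) + s)
        total += (z / cmath.sqrt(z * z - gap * gap)).real
    return total / n
```

Scanning this function over $`\omega `$ reproduces the qualitative shape of Fig. 1: a smaller peak at $`0.43\mathrm{\Delta }_0`$, a larger one at $`1.57\mathrm{\Delta }_0`$, and an approach to the normal-state value at high energy.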
A problem with the measured $`N(\omega )`$ is that there is a large zero bias anomaly, in contradiction with theory and presumably arising from surface effects. It is therefore useful, and possibly better, to measure a bulk property. We propose the low temperature limit of the specific heat, which comes from the density of quasiparticle states at zero frequency. We focus on the magnetic field regime $`H_{c1}\ll H\ll H_{c2}`$ where the density of states is given by quasi-particle excitations in the presence of the induced vortices. For each magnetic field, there is a magnetic energy, which scales as $`\sqrt{H}`$. We expect structure at critical fields for which the magnetic energy sweeps through the singularities at $`\mathrm{\Delta }_0(1\pm s)`$ in Fig. 1. The new, lower energy scale is well within the range of available fields and is observable in specific heat experiments which probe the vortex-modified density of states.
In the presence of the vortex, the Cooper pairs have a superfluid velocity $`𝐯_𝐬`$ which depends inversely on the distance to the vortex core. The quasi-particle energies of momentum $`𝐤`$ are Doppler shifted by $`\omega \to \omega +𝐯_𝐬\cdot 𝐤`$, which can be thought of as introducing a spatially dependent shift in the chemical potential. We are interested in the zero frequency case, so we can use Eq. (2) but with $`\omega `$ replaced by $`V=𝐯_𝐬\cdot 𝐤`$. In addition to the trace integral over $`𝐤`$, we also perform a spatial average within the vortex unit cell, with coordinates $`𝐫`$:
$$N_0(H)\propto \int d𝐤\int d𝐫|V|\delta \left(\zeta _𝐤^2-V^2+\mathrm{\Delta }_𝐤^2\right).$$
(3)
To find $`𝐯_𝐬`$, we use the free energy density per unit length
$$F=\int \mathrm{d}^2r\left(B^2+\lambda _x^2(\nabla \times B)_x^2+\lambda _y^2(\nabla \times B)_y^2\right).$$
(4)
We have introduced the parameters $`\lambda _i^2=\mu _i\lambda ^2`$ with $`\lambda ^2=Mc^2/4\pi e^2n_s`$, $`\mu _i=m_i/M`$ and $`M=\sqrt{m_xm_y}`$ ($`\lambda `$ is a mean London penetration depth and $`n_s`$ the superfluid density.) Following standard free energy minimisation with a flux line source at the origin, we express the magnetic field as a modified Bessel function. In the magnetic field regime considered here, the typical spacing between vortices is much smaller than $`\lambda `$ and we use the small argument approximation:
$$B\simeq -\frac{\mathrm{\Phi }_0}{2\pi \lambda ^2}\mathrm{log}\left(\frac{1}{\lambda }\sqrt{\frac{x^2}{\mu _y}+\frac{y^2}{\mu _x}}\right),$$
(5)
pointing in the z direction, and $`\mathrm{\Phi }_0=\pi \hbar c/e`$ is the flux quantum. Applying Ampère’s law, the current is
$`j_x`$ $`\simeq `$ $`-{\displaystyle \frac{c\mathrm{\Phi }_0}{8\pi ^2\lambda ^2}}{\displaystyle \frac{1}{\frac{x^2}{\mu _y}+\frac{y^2}{\mu _x}}}{\displaystyle \frac{y}{\mu _x}}`$ (6)
$`j_y`$ $`\simeq `$ $`{\displaystyle \frac{c\mathrm{\Phi }_0}{8\pi ^2\lambda ^2}}{\displaystyle \frac{1}{\frac{x^2}{\mu _y}+\frac{y^2}{\mu _x}}}{\displaystyle \frac{x}{\mu _y}}.`$ (7)
We find $`𝐯_𝐬`$ by dividing the current by $`en_s`$.
The quasi-particle excitations are given by the dispersion relation
$$\zeta _𝐤=\frac{\hbar ^2k_x^2}{2m_x}+\frac{\hbar ^2k_y^2}{2m_y}.$$
(8)
As discussed above, we take the gap function (1) to be
$$\mathrm{\Delta }_𝐤=\mathrm{\Delta }_0f(𝐤)=\mathrm{\Delta }_0\left(\frac{k_x^2-k_y^2}{k_x^2+k_y^2}+s\right).$$
(9)
The analysis is simplified by a change of variables to
$`x^{\prime }=x/\sqrt{\mu _y}`$ $`k_x^{\prime }=\sqrt{\mu _y}k_x`$ (10)
$`y^{\prime }=y/\sqrt{\mu _x}`$ $`k_y^{\prime }=\sqrt{\mu _x}k_y.`$ (11)
In terms of these new coordinates we have
$`𝐯_𝐬`$ $`=`$ $`{\displaystyle \frac{\hbar }{2M}}{\displaystyle \frac{1}{r^{\prime }}}\widehat{\beta }^{\prime }`$ (12)
$`\zeta _𝐤`$ $`=`$ $`{\displaystyle \frac{\hbar ^2k^{\prime 2}}{2M}}`$ (13)
$`f(𝐤)`$ $`=`$ $`f(\varphi ^{\prime })={\displaystyle \frac{m_y/m_x-\mathrm{tan}^2\varphi ^{\prime }}{m_y/m_x+\mathrm{tan}^2\varphi ^{\prime }}}+s`$ (14)
where $`(r^{\prime },\beta ^{\prime })`$ are the spatial polar coordinates ($`\beta ^{\prime }`$ is the vortex winding angle) and $`(k^{\prime },\varphi ^{\prime })`$ are the momentum polar coordinates in the new coordinates. Note that $`𝐯_𝐬\cdot 𝐤=|𝐯_𝐬||𝐤|\mathrm{sin}(\beta ^{\prime }-\varphi ^{\prime })`$. Henceforth we drop the primes. Since we are integrating over $`\beta `$, we are also free to shift its origin and thereby replace $`\beta -\varphi `$ by $`\beta `$. To conform to usual notation we say that the chains run along the $`b`$ direction, which we define as parallel to the $`y`$ axis. Since the carrier mass in the $`b`$ direction is less than in the $`a`$ direction (due to the conductivity supplied by the chains) we conclude that $`m_y/m_x<1`$. In principle $`s`$ can have either sign (we ignore the time-reversal symmetry breaking possibility for which $`s`$ is complex). For purposes of exposition we will use the realistic value $`m_y/m_x=0.5`$ and take $`s=0.57`$ so as to get a second gap scale around $`5.5`$ meV as seen in experiment. The oscillatory part of the gap function in (14) has been modified by the coordinate rescaling, compared to the original $`\mathrm{\Delta }_0\mathrm{cos}2\varphi `$. Nevertheless, the amplitude has not been modified, and without the $`s`$ term both functions would vary between plus and minus $`\mathrm{\Delta }_0`$. Either form would yield an initial linear dependence on $`\sqrt{H}`$ followed by a saturation at a scale set by $`\mathrm{\Delta }_0`$. Rather, it is the $`s`$ term which changes this qualitative picture. It makes the gap function have a maximum value of $`\mathrm{\Delta }_0(1+|s|)`$ and a minimum value of $`\mathrm{\Delta }_0(1-|s|)`$. The first of these gives the saturation scale mentioned above while the second gives structure corresponding to the first peak in Fig. 1.
We begin by nondimensionalising the spatial integral in (3). In order for the average magnetic field in the sample to equal the applied field, we require the inter-vortex spacing to be $`R=\sqrt{\mathrm{\Phi }_0/\pi H}/a`$ where $`a`$ is a geometrical constant of order unity which accounts for the mismatch between the circular vortices and the hexagonal lattice they fill out. We define the magnetic energy as
$$E_H=\frac{a}{2M}v_F\sqrt{\frac{\pi H}{\mathrm{\Phi }_0}}$$
(15)
where $`v_F`$ is the Fermi velocity; the dimensionless magnetic energy is defined as $`\nu =E_H/\mathrm{\Delta }_0`$. We then express (3) as
$$\frac{N_0(H)}{\overline{N}}=\frac{1}{2\pi ^2}\int _0^{2\pi }d\varphi \int _0^{2\pi }d\beta \int _0^1d\rho \frac{\rho }{\sqrt{1-\left(\frac{\rho f(\varphi )}{\nu \mathrm{sin}\beta }\right)^2}}$$
(16)
where $`\overline{N}`$ is the normal density of states and $`\rho =r/R`$. We used the delta function to perform the $`k`$ integral and it is understood that the integration domain is limited to the range where the integrand is real.
We can do the $`\rho `$ and $`\beta `$ integrals in (16) leading to the final expression for the density of states,
$$\frac{N_0(H)}{\overline{N}}=\frac{1}{2\pi }\int d\varphi \mathrm{min}\{1,\left(\frac{\nu }{f(\varphi )}\right)^2\}.$$
(17)
The parameter $`\nu `$ is experimentally tunable while $`s`$ (which enters $`f(\varphi )`$ through (14)) is a fixed, intrinsic material parameter (although it may depend somewhat on doping levels) and represents a combination of gap and Fermi surface symmetries. For the case $`|s|<1`$ and $`\nu \ll 1`$, it is simpler to go back to (16) and expand the integrand around the gap nodes. One can perform the $`\varphi `$ integral approximately in the limit of small $`\nu `$ and the resulting $`\beta `$ and $`\rho `$ integrals are trivial so that
$$\frac{N_0(H)}{\overline{N}}\simeq \left(\frac{2}{\pi }\sum _n\frac{1}{|f_n^{\prime }|}\right)\nu .$$
(18)
The index $`n`$ refers to the nodes of the gap function and $`f_n^{}`$ is the derivative of the function $`f`$ evaluated at the nodes. This is still linear, as one has for a pure $`d`$-wave gap, although the presence of the $`s`$ term does change the positions of the gap nodes and hence affects the slope. For $`|s|>1`$, there are no nodes and the dependence is quadratic for small $`\nu `$. Since this is unphysical we do not present the explicit form.
There are four distinct behaviours of the integrand of (17), as shown in Fig. 2, and the $`\nu `$-$`s`$ plane is accordingly divided into six regions. For fixed $`s`$, we vary $`\nu `$ and thereby cross from one region to another. Associated with such crossings there is a corresponding nonanalyticity in the density of states, which is experimentally measurable. This is shown in Fig. 3 where we plot the density of states as a function of $`\nu `$ using the integral form (17). Clearly there is structure at $`\nu =0.43`$ and $`1.57`$, in accord with our general considerations. The kink at $`\nu =1.57`$ corresponds to the integrand saturating at unity, as we go from an integrand as in Fig. 2b to the type as in Fig. 2c. However, this probably lies beyond any realisable magnetic field strength. The kink at $`\nu =0.43`$, which corresponds to going from an integrand as in Fig. 2a to the type in Fig. 2b, appears by eye to be a discontinuity in slope. This is not really the case, as we now discuss.
The integrand in Fig. 2a saturates around $`\varphi =\pi /2`$ and $`\varphi =3\pi /2`$ when $`\nu `$ approaches $`1-|s|`$, and the integral does not increase as quickly after this value as it does before. We can evaluate the missing contribution, $`\delta N`$, from the regions around these local minima. We take $`\nu `$ to be just below this transition, i.e.
$$\nu =1-|s|-ϵ$$
(19)
with $`ϵ\ll 1`$. To leading order in $`ϵ`$, the contribution of this region to (17) is
$$\frac{\delta N}{\overline{N}}\simeq \frac{8}{3\pi (1-|s|)}\sqrt{\frac{m_x}{2m_y}}ϵ^{3/2}.$$
(20)
There is no such contribution if $`\nu `$ is just above $`1-|s|`$, so there is a discontinuity in the density of states. While the precise prefactor and position of the discontinuity may depend on factors which we have not included, the three-halves power is generic, depending only on the topological property of having local minima of the integrand disappear. For example, in the event that $`s<0`$, we first lose the minimum around $`\varphi =0`$ instead of $`\varphi =\pi /2`$ but otherwise the behaviour is the same and, except for a change in prefactor (typically it is smaller), we still expect a discontinuity of this power. Similarly, it is the same order of discontinuity at $`\nu =1+|s|`$ although, as stated, this is probably beyond the accessible magnetic field range.
To compare with experiment, we now determine the parameters of our model. The upper scale of $`20`$ meV equals $`(1+s)\mathrm{\Delta }_0`$ and the lower scale of 5.5 meV equals $`(1-s)\mathrm{\Delta }_0`$, from which we infer $`\mathrm{\Delta }_0\simeq 12.7\mathrm{meV}`$ and $`s\simeq 0.57`$. We also use the result, consistent with both $`\mu `$SR and specific heat measurements, that $`E_H\simeq 2.6\sqrt{H}`$ when $`H`$ is measured in Tesla and $`E_H`$ in meV.
The dimensionless parameter $`\nu `$ then equals $`0.20\sqrt{H}`$. The dependence of the specific heat on $`H`$ has been measured in , with having the cleanest samples. The data is shown in Fig. 3 in dimensionless units (taking the specific heat at low temperatures to be proportional to temperature and to the density of states.) By our previous arguments, energies of $`5.5`$ meV correspond to magnetic fields of about $`4.5T`$ or $`\nu `$ of about $`0.42`$, which is well within the experimental ranges considered. The origin of the finite value of the density of states at zero field is not well understood but is sometimes attributed to oxygen vacancies on the chain or to resonant impurity scattering . Either way, to compare with our results one should subtract off this constant. There will also be a linearly increasing component from the density of states on the planes; this should be of comparable magnitude but without the lower singularity.
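The numbers in the last two paragraphs follow from elementary arithmetic on the two gap scales, which can be checked directly (a sketch, not from the paper):

```python
upper, lower = 20.0, 5.5        # meV: (1 + s)*Delta0 and (1 - s)*Delta0
delta0 = (upper + lower) / 2    # since their sum is 2*Delta0
s = (upper - lower) / (upper + lower)
nu_per_sqrt_h = 2.6 / delta0    # nu = E_H/Delta0 with E_H = 2.6*sqrt(H) meV
h_kink = (lower / 2.6) ** 2     # field (Tesla) at which E_H reaches 5.5 meV
```

This reproduces $`\mathrm{\Delta }_0\simeq 12.7`$ meV, $`s\simeq 0.57`$, $`\nu \simeq 0.20\sqrt{H}`$, and a kink field of about 4.5 T.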
We plot this experimental data, fitting it with straight lines of different slopes at the two extremes of the data sets. There does appear to be a change in slope around $`\nu =0.5`$, in qualitative agreement with theory. While the theory does not predict a discontinuous slope, it does predict a nonanalyticity which resembles a change of slope. Since the data points are so sparse, we have not tried a detailed fitting to the predicted functional form. The magnitude of this nonanalyticity is given by the prefactor of the $`ϵ^{3/2}`$ term in (20). (We note that the aforementioned background slope from the planes is present in the experimental data but not in the theoretical curve, so the relative magnitude of the discontinuity is different.) Experiments using a much finer sampling of magnetic field values would be required to verify this prediction.
In this paper we have considered a magnetic field parallel to the $`c`$-axis. In the case where it is parallel to the $`ab`$ planes there may be additional interesting anisotropy effects. Unfortunately, the energy scales seem to be such as to put this out of the experimentally accessible magnetic field range. An interesting possibility is to consider a tilted magnetic field such that the energy scales are experimentally accessible while the in-plane component is still strong enough to yield observable anisotropy effects. A gap function of mixed symmetry would have a clear signature on this anisotropy. Another interesting extension is to consider the role of the paramagnetic response of the electrons to the magnetic field, an effect which has been completely neglected in the present work. While the energy scales seem to be such as to make this a reasonable assumption, it could well be that for the in-plane magnetic field, the paramagnetism is of comparable importance.
We thank I. Vekhter for useful discussions. Research supported in part by the Natural Sciences and Engineering Research Council (NSERC) and by the Canadian Institute for Advanced Research (CIAR).

ASCA/ROSAT Observations of PKS 2316-423: Spectral Properties of a Low Luminosity Intermediate-type BL Lac Object
## 1. Introduction
Earlier studies of BL Lac objects have shown that the systematic differences between radio and X-ray selected BL Lac objects (RBLs vs XBLs) can be just attributed to orientation differences. Moreover, BL Lac objects have been reclassified by a more accurate way “low energy” and “high energy” peaked BL Lac objects (LBLs vs HBLs) based on the peak frequency of synchrotron radiation (e.g. Giommi and Padovani 1994). In general, RBLs and XBLs tend to be LBLs and HBLs, respectively. They generally represent two distinct extremes of BL Lacs. However, recent studies from deeper and larger X-ray survey have shown that BL Lac objects tend to exhibit more homogeneous distributions of the properties (Perlman et al. 1998; Caccianiga et al. 1999; Laurent-Muehleisen et al. 1999) rather than previously disparate ones. This has resulted in important roles of intermediate BL Lac objects (IBLs) in revealing BL Lac mysteries.
In this paper, we present the X-ray spectral analysis (ROSAT and ASCA archival data) and spectral energy distribution (SED) of PKS 2316-423, aiming at showing its intermediate-BL Lac properties. It is a southern radio source at $`z=0.0549`$, and was formerly classified as a BL Lac candidate on the base of its featureless non-thermal radio/optical continuum (Crawford & Fabian 1994; Padovani & Giommi 1996). We noticed this object as it has been the brightest contaminating source to the nearby narrow-line X-ray galaxy–NGC 7582 (Xue et al. 1998) in most of its historical X-ray records.
The ROSAT(PSPC) and ASCA satellites observed this object as a serendipitous source in April 1993 and November 1994 respectively. These observations, together with the two ROSAT/HRI observations made in 1992 and 1993, could not only extend our knowledge of the source SED properties to the X-ray domain ($`\genfrac{}{}{0pt}{}{<}{}\mathrm{\hspace{0.17em}10}`$ keV), but also provide a good opportunity for X-ray spectroscopic studies in the range of 0.1–10 keV. Which turn out to be very important for the unambiguous classification of the source.
## 2. Spectral analysis
No evident variations in the source count rate were detected over both observations spanning about half a day and a little more than one day. Thus time-averaged spectra from both satellites were used for Spectral analyzing.
A simple power-law model, with photon index of $`\mathrm{\Gamma }2.0`$ and absorption column density at the Galactic value, gives acceptable fit to the ROSAT/PSPC data (Figure 1). The inferred intrinsic luminosity is $`5.7\times 10^{43}\mathrm{ergs}\mathrm{s}^1`$, which is similar to that of other non-quasar AGN. The source was observed twice, and showed consistent fluxes, with ROSAT HRI in June 1992 and May 1993 respectively. However, the brightness decreased by 30% from the later HRI observation to the PSPC observation which was taken one week apart. These factors suggest the source is variable and thus there might be non-thermal origin for the X-ray flux.
A simple power-law model fails to well describe the ASCA data, mainly due to an abnormal excess absorption above the Galactic value is required. Consider that this excess absorption might be an artifact due to a false spectral model, we next fitted the data with a broken power-law with free break energy. Thus the fit to the data is notablely improved at a $`90`$% level, yields the absorption in consistent with the Galactic value and two powerlaw components with a break-point at $`2.1`$ keV (see Table 1). The lower-energy part is flatter with a slope in good agreement with that of ROSAT spectrum; the higher-energy part is steeper with $`\mathrm{\Gamma }=2.6_{0.3}^{+0.3}`$.
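A broken power law of the type fitted here can be written as a photon-flux model that is continuous at the break energy. The following is a generic sketch; the normalization and parameter names are illustrative and not taken from the actual fit:

```python
def broken_power_law(e_kev, norm=1.0, gamma1=2.0, gamma2=2.6, e_break=2.1):
    """Photon-flux density with photon index gamma1 below the break
    and gamma2 above it, kept continuous at e_break (energies in keV)."""
    if e_kev < e_break:
        return norm * e_kev ** (-gamma1)
    # rescale the upper branch so the two pieces agree at the break
    return norm * e_break ** (gamma2 - gamma1) * e_kev ** (-gamma2)
```

The lower branch reproduces the flatter ROSAT-like slope and the upper branch the steeper ASCA slope, with the two pieces meeting at the break.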
Compared with the ROSAT/PSPC observation, the ASCA data indicate that the source brightness decreased by 33% in the 0.1–2.4 keV band over a 1.5-year interval. Meanwhile, the broad-band X-ray spectrum retained its shape in the lower-energy part and steepened in the higher-energy range, in a manner consistent with the prediction of synchrotron radiation losses.

Table 1. Spectral fitting (with 90% errors).

| Data | $`N_\mathrm{H}`$ \[$`10^{20}\mathrm{cm}^{-2}`$\] | $`\mathrm{\Gamma }_1`$ | $`\mathrm{\Gamma }_2`$ | E<sub>break</sub> (keV) | F<sup>obs</sup><sub>0.1–2 keV</sub> ($`10^{-12}`$ $`\mathrm{ergs}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$) | $`\chi _\nu ^2`$/d.o.f. |
| --- | --- | --- | --- | --- | --- | --- |
| ROSAT | $`1.4_{-0.4}^{+0.5}`$ | $`2.0_{-0.2}^{+0.2}`$ | – | – | $`2.63_{-0.18}^{+0.15}`$ | 1.1/17 |
| ASCA | $`2.2_{-2.0}^{+2.2}`$ | $`2.0_{-0.2}^{+0.4}`$ | $`2.6_{-0.3}^{+0.3}`$ | 2.1 | $`1.35_{-0.20}^{+0.33}`$ | 1.0/131 |
## 3. Spectral Energy Distribution
The composite SED (Figure 2), from both space- and ground-based observations, provides further insights into the object. It is clear that the SED from radio to X-ray is consistent with a single radiation component (synchrotron emission) and peaks at a relatively high frequency falling in the EUV/soft-X-ray band ($`\nu _p=7.3\times 10^{15}`$ Hz). The optical and ultraviolet radiation appear to be a continuation of the radio synchrotron spectrum; the X-ray data likely share a common emission origin with the lower energy parts and represent the high energy tail of the synchrotron spectrum. Other relevant slope parameters from the SED are listed in Table 2.
For comparison, we plotted in Figure 2 the EGRET sensitivity threshold as an upper limit to the GeV flux (marked by an arrow), since the source was never detected in $`\gamma `$-rays. This shows that the source is clearly dominated by the synchrotron process.
## 4. Discussions
Putting PKS 2316-423 on the $`\alpha _{ro}`$ vs $`\alpha _{ox}`$ color-color diagram, we find it lies in the intermediate range of BL Lacs. The index $`\alpha _{XOX}`$ can more precisely measure spectral changes from the optical to soft X-ray bands; the values of $`\alpha _{XOX}`$ for PKS 2316-423 depend on the assumed X-ray spectral indices, being 0.18/0.26 and -0.42/-0.34 for $`\alpha _x=1.0`$ and 1.6, respectively. These values fall in the intermediate range of the $`\alpha _{XOX}`$ distribution of recent BL Lac samples (Laurent-Muehleisen et al. 1999).
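Broad-band indices such as $`\alpha _{ro}`$ and $`\alpha _{ox}`$ are two-point spectral slopes between fixed rest-frame frequencies (commonly 5 GHz, 5500 Å, and 1 or 2 keV; the exact conventions vary between authors and are an assumption here). A minimal sketch:

```python
import math


def two_point_index(f1, nu1, f2, nu2):
    """Spectral index alpha such that f(nu) proportional to nu**(-alpha)
    passes through the two flux points (f1, nu1) and (f2, nu2)."""
    return -math.log(f2 / f1) / math.log(nu2 / nu1)
```

Given measured flux densities at the conventional frequencies, the same function yields $`\alpha _{ro}`$, $`\alpha _{ox}`$ and $`\alpha _{XOX}`$-like indices directly.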
The importance of the frequency at which the synchrotron radiation peaks is that it provides a powerful diagnostic of the physical conditions in the emitting region. Recent studies showed that among BL Lacs the synchrotron peak frequencies are inversely correlated with their luminosities (Fossati et al. 1998). We could put PKS 2316-423 on Figure 7c of Fossati et al. (1998): owing to its lowest peak luminosity, PKS 2316-423 should lie at the bottom-right end, which would imply a peak frequency of around $`10^{18}`$ Hz. However, our fit to the SED gives $`\nu _p`$ of only about $`10^{16}`$ Hz. Therefore, we suggest that PKS 2316-423 might be a low luminosity “intermediate” object between HBLs and LBLs.

Table 2. Broad-band slopes of the SED

| $`\alpha _{ro}`$ | $`\alpha _{ox}`$ | $`\alpha _x`$ (0.1-2 keV) | $`\alpha _x`$ (2-10 keV) |
| --- | --- | --- | --- |
| 0.56 | 1.18/1.26 | 1.0 | 1.6 |
In summary, the X-ray spectral and SED analyses of PKS 2316-423 point to IBL or HBL attributes with very low luminosity compared with the most recent BL Lac samples. Because of its peculiarly low luminosity, more detailed studies of PKS 2316-423 will shed light on the evolution of BL Lac objects.
## ACKNOWLEDGEMENTS
This research has made use of the NASA/IPAC Extra-galactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. S.J.X. acknowledges the financial support from Chinese Post Doctoral Program.
## REFERENCES
Caccianiga, A., Maccacaro, T., Wolter, A., et al. 1999, ApJ, 513, 51

Crawford, C.S., Fabian, A.C. 1994, MNRAS, 266, 669

Fossati, G., Maraschi, L., Celotti, A., et al. 1998, MNRAS, 299, 433

Giommi, P., Padovani, P. 1994, MNRAS, 268, L51

Laurent-Muehleisen, S.A., et al. 1999, astro-ph/9905133

Perlman, E.S., Padovani, P., Giommi, P. 1998, AJ, 115, 1253

Padovani, P., Giommi, P. 1995, ApJ, 444, 567

Padovani, P., Giommi, P. 1996, MNRAS, 279, 526

Xue, S.J., Otani, C., Mihara, T., et al. 1998, PASJ, 50, 519
# Coherent Backscattering of Light by Cold Atoms
## Abstract
Light propagating in an optically thick sample experiences multiple scattering. It is now known that interferences alter this propagation, leading to an enhanced backscattering, a manifestation of weak localization of light in such diffuse samples. This phenomenon has been extensively studied with classical scatterers. In this letter we report the first experimental evidence for coherent backscattering of light in a laser-cooled gas of Rubidium atoms.
Transport of waves in strongly scattering disordered media has received much attention during the past years when it was realized that interference can dramatically alter the normal diffusion process. In a sample of randomly distributed scatterers, the initial direction of the wave is fully randomized by scattering and a diffusion picture seems an appropriate description of propagation when the sample thickness is larger than the scattering mean free path. This model neglects all interference phenomena and predicts a transmission of the medium inversely proportional to sample thickness. This is the familiar Ohm’s law. However, interferences may have dramatic consequences such as a vanishing diffusion constant. In this situation, the medium behaves like an insulator (strong localization). Such a disorder induced transition has been reported for microwaves and for light. In fact, even far from this insulating regime, interferences already hamper the diffusion process (weak localization). This has been demonstrated in coherent backscattering (CBS) experiments. Upon coherent illumination of a static sample, a random speckle pattern is generated. This pattern is washed out by configuration averaging except in a small angular range around the backscattering direction where constructive interferences originating from reciprocal light paths enhance diffuse reflection from the sample. This effect has been observed for light in a variety of different media such as suspensions of powder samples, biological tissues or Saturn’s rings, as well as for acoustic waves. Among other interesting features such as universal conductance fluctuations or lasing in random media, CBS is a hallmark of coherent multiple scattering.
Atoms as scatterers of light offer new perspectives. The achievements of laser cooling techniques in the last decade now make it possible to manipulate and control samples of quantum scatterers. Cold atoms are unique candidates to move the field of coherent multiple scattering to a fully quantum regime (quantum internal structure, wave-particle duality, quantum statistical aspects). For instance, the coupling to vacuum fluctuations (spontaneous emission) is responsible for some unusual properties of the scattered light (elastic and inelastic spectra). Also, information encoding in atomic internal states can erase interference fringes like in some ”which-path” experiments. Furthermore, it is now possible to implement situations where the wave nature of the atomic motion is essential.
In our experiment, the scattering medium is a laser-cooled gas of Rubidium atoms which constitutes a perfect monodisperse sample of strongly resonant scatterers of light. The quality factor of the transition used in our experiment is $`Q=\nu _{at}/\mathrm{\Delta }\nu _{at}\sim 10^8`$ (D2 line at $`\lambda =c/\nu _{at}=780`$nm, intrinsic resonance width $`\mathrm{\Delta }\nu _{at}=\mathrm{\Gamma }/2\pi =6`$MHz). The scattering cross section can thus be changed by orders of magnitude by a slight detuning of the laser frequency $`\nu _L`$. This is a new situation compared to the usual coherent multiple scattering experiments, where resonant effects, if any, are washed out by the sample polydispersity. Moreover, in our sample the duration $`\tau _D`$ (delay time) of a single scattering event largely dominates over the free propagation time between two successive scattering events: for on-resonant excitation ($`\delta =\nu _L-\nu _{at}=0`$), this delay is of the order of $`\tau _D\simeq 2/\mathrm{\Gamma }=50`$ ns, corresponding to free propagation of light over $`15`$ m in vacuum. In such a situation, particular care must be taken to observe a CBS effect. Indeed, when atoms move, additional phaseshifts are involved. Configuration averaging will only preserve constructive interference between reciprocal waves if the motion-induced optical path change $`\mathrm{\Delta }x`$ does not exceed one wavelength. A rough estimate is $`\mathrm{\Delta }x=v_{rms}\tau _D<\lambda `$, a criterion which can be written in the more appealing form $`kv_{rms}<\mathrm{\Gamma }`$. Thus, for resonant excitation, the Doppler shift must be small compared to the width of the resonance. For Rubidium atoms illuminated by resonant light, one finds $`v_{rms}<4.6`$m/s corresponding to a temperature $`T\simeq 200`$mK. Much lower temperatures are easily achieved by laser cooling, thus allowing observation of interference features in multiple scattering.
However, up to now, only incoherent effects in multiple scattering, like radiation trapping, have been investigated in cold atomic vapors.
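As a quick numerical check of the criterion $`kv_{rms}<\mathrm{\Gamma }`$ stated above, the following sketch uses the Rb-85 D2-line numbers quoted in the text; the one-line temperature estimate $`T\sim mv^2/k_B`$ is our own rough assumption, good only to order of magnitude.

```python
import math

# Rb-85 D2-line numbers quoted in the text.
GAMMA = 2 * math.pi * 6e6        # natural linewidth, rad/s (Gamma/2pi = 6 MHz)
LAM = 780e-9                     # wavelength, m
K_WAVE = 2 * math.pi / LAM       # wavenumber, 1/m
M_RB85 = 85 * 1.66054e-27        # Rb-85 mass, kg
K_B = 1.380649e-23               # Boltzmann constant, J/K

# Criterion k * v_rms < Gamma  =>  v_rms < Gamma / k = (Gamma/2pi) * lambda
v_max = GAMMA / K_WAVE

# Rough temperature scale associated with that velocity (our assumption)
T_est = M_RB85 * v_max**2 / K_B

print(f"v_max = {v_max:.2f} m/s, T ~ {T_est * 1e3:.0f} mK")
```

This recovers a limit near the quoted 4.6 m/s, and a temperature scale of a couple hundred mK, consistent with the 200 mK in the text.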
We prepare our atomic sample by loading a magneto-optical trap (MOT) from a dilute vapor of Rubidium 85 atoms (magnetic field gradient $`\nabla B\simeq 7`$G/cm, loading time $`t_{load}\simeq 0.7\mathrm{sec}`$). Six independent trapping beams are obtained by splitting an initial laser beam slightly detuned to the red of the trapping transition (power per beam $`\simeq 30`$mW, FWHM diameter $`2.8`$cm, Rubidium saturation intensity $`I_{sat}\simeq 1.6`$mW/cm<sup>2</sup>, $`\delta \simeq -3\mathrm{\Gamma }`$). The repumper is obtained by two counterpropagating beams from a free-running diode laser tuned to the $`F=3\rightarrow F^{}=3`$ transition of the D2 line. Fluorescence measurements yield $`N\simeq 10^9`$ atoms, corresponding to a spatial density $`n_{at}\simeq 2\times 10^9`$ cm<sup>-3</sup> at the center of the cloud (gaussian profile, FWHM diameter $`\simeq 7`$mm). The velocity distribution of the atoms in the trap has been measured by a time-of-flight technique to be $`v_{rms}\simeq 10`$cm/s, well below the limit imposed by the above velocity criterion. To observe coherent backscattering (CBS) of light, we alternate a CBS measurement phase with a MOT preparation phase. During the CBS phase, the magnetic gradient and trapping beams of the MOT are switched off (residual power per beam $`\simeq 0.2\mu `$W). The CBS probe beam (FWHM $`\simeq 6`$mm, spectral width $`\mathrm{\Delta }\nu _L\simeq 1`$ MHz) is resonant with the closed trapping transition of the D2 line: $`F=3\rightarrow F^{}=4`$. A weak probe is used to avoid saturation effects (power $`\simeq 80\mu `$W, on-resonant saturation $`s=0.1`$). The optical thickness of the sample, measured by transmission, is $`\eta \simeq 4`$ and remains constant, within a few percent, during the whole duration of the CBS measurement phase ($`2.5`$ms). The corresponding extinction mean free path $`\ell \simeq 2`$mm is consistent with an estimate deduced from our fluorescence measurements, taking a scattering cross-section at resonance $`\sigma _{res}=3\lambda ^2/2\pi `$.
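The consistency quoted above between the measured mean free path and the fluorescence estimate can be reproduced from the numbers given in the text (a sketch of the estimate, not the authors' analysis code):

```python
import math

# Values quoted in the text.
lam = 780e-7                              # wavelength, cm
n_at = 2e9                                # atomic density, cm^-3
sigma_res = 3 * lam**2 / (2 * math.pi)    # resonant cross-section, cm^2

# Extinction mean free path ell = 1 / (n_at * sigma_res)
ell = 1.0 / (n_at * sigma_res)            # cm

print(f"ell = {ell * 10:.1f} mm")         # about 1.7 mm, close to the measured ~2 mm
```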
The CBS detection setup is shown in Fig.1. It involves a cooled CCD camera in the focal plane of a converging lens ($`f=500`$mm). A polarization sensitive detection scheme, which generally allows one to eliminate the single scattering contribution, is used for signal recording in various polarization channels. For a linear incident polarization, we record the scattered light with (linear) polarization parallel (”parallel” channel) or orthogonal (”orthogonal” channel) to the incident one. We also use a circular incident polarization by inserting a quarter-wave plate between the beam-splitter and the sample. In the ”helicity preserving” channel the detected polarization is circular with the same helicity (sign of rotation of the electric field referenced to the wave propagation direction) as the incident one: as an example, no light is detected in this channel in the case of back-reflection by a mirror. This is the channel mostly used in previous studies, because it allows one to eliminate the single scattering contribution (for dipole-type scatterers). The ”helicity non-preserving” channel is obtained for a detected circular polarization orthogonal to the previous one. Teflon or dilute milk samples were used to find the exact backward direction, to cross-check the polarization channels and to test the angular resolution of our setup. During the MOT phase (duration $`\simeq 10`$ms), the probe beam and detection scheme are switched off while the MOT is switched on again to recapture the atoms. After this phase a new atomic sample is produced. The whole sequence is repeated for a typical duration of $`1`$ min with a detected flux of typically about $`1800`$ photons/pixel/sec. A ”background” image, representing less than 10% of the full signal level (due mainly to scattering of repumper light by hot atoms in the cell), is subtracted from the ”CBS” image to suppress stray light contributions.
Fig. 2 (color image in the appendix) shows the CBS images obtained from our laser-cooled Rubidium vapor in the various polarization channels. We clearly observe enhanced backscattering in all four polarization channels, whereas for a thick teflon sample we only found pronounced cones in the polarization preserving channels. This reinforces the idea that low scattering orders are dominant in our experiment, which is not surprising considering the relatively small optical thickness of our sample. The intensity enhancement factors, defined as the ratio between the averaged intensity scattered in exactly the backward direction and the large-angle background, are 1.11, 1.06, 1.08 and 1.09 for the helicity preserving, helicity non-preserving, orthogonal and parallel channels, respectively. The detected light intensities in these channels, normalized to that of the linear parallel channel, are 0.76, 0.77, 0.54 and 1. A closer look at Fig.2d reveals that the cone exhibits a marked anisotropy in the (linear) parallel polarization channel: the cone is found to be broader in the (angular) direction parallel to the incident polarization. This effect has already been observed in classical scattering samples and is also a signature of low scattering orders.
For a more quantitative analysis of the CBS cone, we report in Fig.3 a section of image 2a (helicity non-preserving channel), taken after an angular average was performed on the data (this procedure is justified when the cone is isotropic, as in Fig.2a). The measured cone width $`\mathrm{\Delta }\theta `$ is about $`0.57`$mrad, nearly six times larger than our experimental resolution of $`0.1`$mrad. Taking into account the experimental resolution, we compared our data to a calculation (dotted line) involving only double scattering. The experimental value $`\ell \simeq 2`$mm for the mean free path was used in the calculation, leaving the enhancement factor as free parameter. Even though the assumptions underlying this theoretical model (isotropic double scattering, semi-infinite medium) are rather crude in our case, the shape of the CBS cone is nicely reproduced. We plan, in further studies, to investigate in more detail the contributions of different scattering orders by carefully analyzing the CBS cone shape.
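As an order-of-magnitude cross-check (ours) of the measured width, the diffusive scaling $`\mathrm{\Delta }\theta \sim \lambda /\ell `$ invoked later in the text, with the quoted $`\ell \simeq 2`$ mm, gives a comparable angle:

```python
lam = 780e-9   # probe wavelength, m
ell = 2e-3     # extinction mean free path quoted in the text, m

# Diffusive cone-width scaling, Delta(theta) ~ lambda / ell
dtheta = lam / ell   # rad

print(f"{dtheta * 1e3:.2f} mrad")   # ~0.39 mrad, same order as the measured 0.57 mrad
```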
One important aspect in CBS studies has always been the enhancement factor in the backscattering direction, due to the constructive interference between reciprocal paths. In the helicity preserving channel, this enhancement factor is known to be 2 for independent scattering by classical scatterers, as single scattering can be ruled out in that polarization channel. In our experiment with cold atoms, however, we measure a backscattered enhancement of 1.06, clearly less than 2! This reduction cannot be attributed to the experimental resolution, as we have measured enhancement factors on milk (using a dilution giving the same cone width as the atomic one) of 1.8. However, in our situation several processes could reduce the cone contrast. The first one is single scattering, which does not contribute to CBS. Due to the presence of several Zeeman sublevels in the ground state of Rubidium atoms, Raman processes, i.e., light scattering with a change of the atomic internal sublevel, have to be considered. In such events, the polarization of the scattered light differs from the incident polarization and single scattering is not eliminated even in the helicity preserving channel. Another consequence of the atom’s internal structure is a possible imbalance between the amplitudes of the reciprocal waves: atoms in different internal states can have different scattering cross sections (resulting from different Clebsch-Gordan coefficients). They can thus be seen as partial polarizers which can imbalance the amplitudes of the paths which interfere for CBS. Furthermore, finite-size effects should also be taken into account. Indeed, our sample does not have the standard slab geometry and the gaussian shape of the probe beam is known to reduce the enhancement factor. We are currently investigating these effects to determine their respective magnitudes for our situation. Also, some more subtle phenomena might play an additional role in the cone reduction.
For instance, with classical scatterers, the radiated and the incident light have identical frequencies (elastic scattering). This is no longer true for atoms, for which the resonant fluorescence spectrum displays inelastic structures in addition to the usual elastic component. Because of Raman scattering, even in the weak saturation limit (weak probe intensity), atoms have a non-negligible probability of undergoing inelastic scattering. The role of these rather complex spectral properties in coherent backscattering has yet to be studied both theoretically and experimentally.
In summary, we have reported the first observation of coherent backscattering of light by a sample of laser-cooled atoms. These first results indicate that in our system low scattering orders are dominant, as expected from optical thickness measurements. The exact value of the enhancement factor and the precise shape of the cone are not yet fully understood and require more experimental and theoretical investigation. Further experiments will include studies of the effect of the probe beam intensity (which determines the amount of inelastic scattering) and detuning. Detuning the laser frequency from the atomic resonance leads to an increased mean free path $`\ell =1/(n_{at}\sigma )`$. Indeed, we already observed that the measured width $`\mathrm{\Delta }\theta `$ of the coherent backscattering cone decreases when the probe frequency is detuned from resonance, as expected from the scaling $`\mathrm{\Delta }\theta \sim \lambda /\ell `$. It would be very interesting to extend these experiments to new regimes. Weak and strong localization of light in gaseous Bose-Einstein condensates and of atomic matter waves in random optical potentials certainly present a great challenge for the near future.
We would like to thank the CNRS and the PACA Region for financial support. We also thank the POAN Research Group. Finally, we would like to deeply thank D. Delande, B. van Tiggelen and D. Wiersma for many stimulating discussions.
# GLOBAL SIMULATIONS OF DIFFERENTIALLY ROTATING MAGNETIZED DISKS: FORMATION OF LOW-BETA FILAMENTS AND STRUCTURED CORONA
## 1 INTRODUCTION
Magnetic fields in differentially rotating disks play essential roles in the angular momentum transport which enables accretion, and in various activities such as X-ray flares and jet formation. Motivated by the Skylab observations of the solar corona, Galeev, Rosner, & Vaiana (1979) proposed a model of a magnetically structured corona in accretion disks consisting of X-ray emitting magnetic loops. The magnetic loops can be created by the buoyant rise of magnetic flux tubes (or flux sheets) from the interior of the accretion disk. Matsumoto et al. (1988) carried out two-dimensional magnetohydrodynamic (MHD) simulations of the Parker instability (Parker 1966) in nonuniform gravitational fields which mimic those in accretion disks. They showed that when $`\beta =P_{\mathrm{gas}}/P_{\mathrm{mag}}\sim 1`$, a plane-parallel disk deforms itself into evacuated undulating magnetic loops and dense blobs accumulated in the valleys of the magnetic field lines. The effects of rotation and shear flow, however, were not included in their simulations.
Shibata, Tajima, & Matsumoto (1990) carried out two-dimensional MHD simulations of the Parker instability including the effects of shear flow and suggested that magnetic accretion disks fall into two types: gas pressure dominated (high-$`\beta `$) solar-type disks, and magnetic pressure dominated (low-$`\beta `$) cataclysmic disks. In high-$`\beta `$ ($`\beta \gg 1`$) disks, magnetic flux escapes from the disk more efficiently as $`\beta `$ decreases.
Several authors (Hawley, Gammie & Balbus 1995; Matsumoto & Tajima 1995; Brandenburg et al. 1995; Stone et al. 1996) have reported the results of three-dimensional local MHD simulations of magnetized accretion disks by adopting a shearing-box approximation (Hawley et al. 1995). In differentially rotating disks, the magnetorotational instability (Balbus & Hawley 1991) couples with the Parker instability (Foglizzo & Tagger 1995). Since the Parker instability grows for long-wavelength perturbations along magnetic field lines, non-local effects may affect the stability and nonlinear evolution.
Results of global 3D MHD simulations including vertical gravity were reported by Matsumoto (1999) and Hawley (1999). By adopting an initially gas pressure dominated torus threaded by toroidal magnetic fields, Matsumoto (1999) showed that the magnetic energy is amplified exponentially owing to the growth of the Balbus & Hawley instability and that the system approaches a quasi-steady state with $`\beta \sim 10`$. Matsumoto (1999) assumed $`\beta =constant`$ in the torus at the initial state. When $`\beta _0\sim 1`$, the deviation from magneto-rotational equilibrium introduces large-amplitude perturbations.
In this paper, we present the results of 3D MHD simulations starting from an equilibrium MHD torus threaded by toroidal magnetic fields of initially equipartition strength ($`\beta \sim 1`$).
## 2 MODELS AND NUMERICAL METHODS
The basic equations we use are the ideal MHD equations in a cylindrical coordinate system $`(r,\varphi ,z)`$. We assume that the gas is inviscid and evolves adiabatically. Since we neglect radiative cooling, our numerical simulations postulate that the disk is advection-dominated (see Kato, Fukue, & Mineshige 1998 for a review).
The initial condition is an equilibrium model of an axisymmetric MHD torus threaded by toroidal magnetic fields (Okada et al. 1989). We assume that the torus is embedded in hot, isothermal, non-rotating, spherical coronal gas. For gravity, we use the Newtonian potential. We neglect the self-gravity of the gas. At the initial state the torus is assumed to have a constant specific angular momentum $`L`$.
We assume a polytropic equation of state $`P=K\rho ^\gamma `$, where $`K`$ is a constant and $`\gamma `$ is the specific heat ratio. Following Okada et al. (1989), we assume
$$v_A^2=\frac{B_\varphi ^2}{4\pi \rho }=H(\rho r^2)^{\gamma -1}$$
(1)
where $`v_A`$ is the Alfvén speed and $`H`$ is a constant. For normalization we take the radius $`r_0`$ where the rotation speed $`L/r_0`$ equals the Keplerian velocity $`v_{K0}=(GM/r_0)^{1/2}`$ as unit radius and set $`\rho _0=v_{K0}=1`$ at $`r=r_0`$. Using these normalizations, we can integrate the equation of motion into the potential form;
$$\mathrm{\Psi }=-\frac{1}{R}+\frac{L^2}{2r^2}+\frac{1}{\gamma -1}v_s^2+\frac{\gamma }{2(\gamma -1)}v_A^2=\mathrm{\Psi }_0=constant,$$
(2)
where $`v_s^2`$ is the square of the sound speed, $`R=(r^2+z^2)^{1/2}`$ and $`\mathrm{\Psi }_0=\mathrm{\Psi }(r_0,0)`$. By using equation (2), we obtain the density distribution.
$$\rho =\left\{\frac{\mathrm{max}[\mathrm{\Psi }_0+1/R-L^2/(2r^2),0]}{K[\gamma /(\gamma -1)][1+\beta _0^{-1}r^{2(\gamma -1)}]}\right\}^{1/(\gamma -1)}$$
(3)
where $`\beta _0\equiv 2K/H`$ is the plasma $`\beta `$ at $`(r,z)=(r_0,0)`$. The parameters describing the structure of the MHD torus are $`\gamma `$, $`\beta _0`$, $`L`$ and $`K`$. In this paper we report the results of simulations for the parameters $`\beta _0=1`$, $`\gamma =5/3`$, $`L=1`$, and $`K=0.05`$. The density of the halo at $`R=r_0`$ is taken to be $`\rho _{\mathrm{halo}}/\rho _0=10^{-3}`$. The unit field strength is $`B_0=\rho _0^{1/2}v_{K0}`$.
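Equations (1)-(3) fully specify the initial torus. As a self-consistency check, the following sketch (ours, not the authors' code) evaluates eq. (3) with the paper's parameters and recovers the normalization $`\rho =1`$ at $`(r,z)=(r_0,0)`$:

```python
import math

# Parameters from the text: beta_0 = 1, gamma = 5/3, L = 1, K = 0.05, r_0 = 1
GAMMA, L, K, BETA0 = 5.0 / 3.0, 1.0, 0.05, 1.0
H = 2.0 * K / BETA0  # from beta_0 = 2K/H

def psi0():
    # Psi evaluated at (r, z) = (1, 0), where rho = 1 by normalization
    vs2 = GAMMA * K   # sound speed squared, vs^2 = gamma*K*rho^(gamma-1) at rho = 1
    va2 = H           # Alfven speed squared from eq. (1) at rho = r = 1
    return -1.0 + L**2 / 2.0 + vs2 / (GAMMA - 1.0) + GAMMA * va2 / (2.0 * (GAMMA - 1.0))

def rho(r, z):
    # Density of the equilibrium torus, eq. (3)
    R = math.hypot(r, z)
    num = max(psi0() + 1.0 / R - L**2 / (2.0 * r**2), 0.0)
    den = K * (GAMMA / (GAMMA - 1.0)) * (1.0 + r**(2.0 * (GAMMA - 1.0)) / BETA0)
    return (num / den) ** (1.0 / (GAMMA - 1.0))

print(rho(1.0, 0.0))  # recovers the normalization rho_0 = 1 at r = r_0
```

The density falls off away from $`(r_0,0)`$ and vanishes where the bracketed term in eq. (3) becomes negative, which defines the torus surface.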
We solve the ideal MHD equations in a cylindrical coordinate system by using a modified Lax-Wendroff scheme with artificial viscosity. We simulated only the upper half space ($`z0`$) and assumed that at the equatorial plane, $`\rho `$, $`v_r`$, $`v_\varphi `$, $`B_r`$, $`B_\varphi `$, and $`P`$ are symmetric and $`v_z`$ and $`B_z`$ are antisymmetric. The outer boundaries at $`r=r_{\mathrm{max}}`$ and at $`z=z_{\mathrm{max}}`$ are free boundaries where waves can transmit. The singularity at $`R=0`$ is treated by softening the gravitational potential near the gravitating center ($`R<0.2r_0`$). The number of grid points is $`(N_r,N_\varphi ,N_z)=(200,64,240)`$. To initiate non-axisymmetric evolution, small amplitude, random perturbations are imposed at $`t=0`$ for azimuthal velocity.
## 3 NUMERICAL RESULTS
Figure 1a shows the initial condition. Color scale denotes the density distribution and red curves depict magnetic field lines. Figure 1b shows density distribution and velocity vectors at $`t=6.2t_0`$ where $`t_0`$ is the orbit time $`t_0=2\pi r_0/v_{K0}`$. As the matter which lost angular momentum accretes to the central region, the MHD torus becomes disk-like. Velocity vectors indicate that matter flows out from the disk.
After the non-axisymmetric Balbus & Hawley instability grows, the inner region of the torus becomes turbulent. The turbulent motions tangle magnetic field lines in small scale and create numerous current sheets (or current filaments). Figure 1c shows the magnetic field lines projected onto the equatorial plane and the density distribution at $`z=0`$. Note that Figure 1c shows only the inner region where $`-2.5\le x/r_0\le 2.5`$ and $`-2.5\le y/r_0\le 2.5`$. In large scales, magnetic field lines and density distribution show low azimuthal wave number spiral structure. Figure 1d shows that magnetic loop structures similar to those in the solar corona are created. The yellow surfaces show strongly magnetized regions where $`|𝑩|=0.1B_0`$ and red curves show magnetic field lines at $`t=7.5t_0`$. The magnetic loops buoyantly rise from the disk due to the Parker instability. Numerical results indicate that small loops which newly appeared above the photosphere (emerging magnetic loops) develop into expanding coronal loops. We can observe magnetic loops elongated in the azimuthal direction and loops twisted by the rotation of the disk.
Figure 2 shows the isosurfaces of plasma-$`\beta `$ at $`t=6.2t_0`$. Orange surfaces show strongly magnetized regions where $`\beta =0.1`$. Yellow and green surfaces show the region where $`\beta =1`$ and $`\beta =10`$, respectively. Inside the disk, strongly magnetized, low-$`\beta `$ filaments are created. As we shall show below, low-$`\beta `$ region occupies a small fraction of the total volume. Intermittent magnetic structures (filamentary strongly magnetized regions) similar to those in the solar photosphere develop in the disk.
Figure 3a shows the evolution of the spatially averaged magnetic quantities $`\mathrm{log}(B^2/8\pi P)`$ (dashed curve), $`\mathrm{log}B^2/(8\pi P_0)`$ (dash-dotted curve) and $`\mathrm{log}(-B_rB_\varphi /4\pi P_0)`$ (solid curve), averaged over $`0.7\le r/r_0\le 1.3`$ and $`0\le z/r_0\le 0.3`$. This figure shows that the magnetic energy in the equatorial region decreases within a few rotation periods due to the buoyant escape of magnetic flux, and that the averaged plasma $`\beta `$ oscillates quasi-periodically around $`\beta \sim 5`$. The ratio of the averaged Maxwell stress to the initial equatorial pressure, $`\alpha _{th}`$, is $`\alpha _{th}\sim 0.07`$ when $`6t_0<t<12t_0`$. Figure 3b shows the time evolution of the accretion rate
$$\dot{M}_{\mathrm{acc}}=\int _0^{2\pi }\int _0^{0.3r_0}\rho v_rr\,dz\,d\phi $$
(4)
at $`r=0.3r_0`$, and of the outflow rate
$$\dot{M}_{\mathrm{out}}=\int _0^{2\pi }\int _0^{r_{\mathrm{max}}}\rho v_zr\,dr\,d\phi $$
(5)
at $`z=3.0r_0`$. The accretion rate increases and fluctuates around a mean value. Numerical results also indicate quasi-periodic ejection of the disk material. Figure 3c shows the Poynting flux $`(𝑬\times 𝑩/4\pi )_z`$ which passes through the planes at $`z=1.0,2.0,3.0r_0`$. After a few rotation periods, magnetic flux is convected from the equatorial region to the disk surface. The volume filling factor of the region where $`\beta \le 0.3`$ is shown in Figure 3d. The strongly magnetized region where $`\beta \le 0.3`$ occupies about 8% of the total volume after a few rotation periods. Later, the filling factor decreases to 2%. After about 10 rotation periods, magnetic flux is regenerated in the disk and the filling factor shows a second peak when the averaged $`1/\beta `$ is maximum. Following this second peak, the Poynting flux at $`z=2r_0`$ increases, which suggests that magnetic flux escapes from the disk.
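For readers reproducing such diagnostics, eq. (4) can be discretized on a uniform cylindrical grid as in the following sketch. The density and velocity fields here are hypothetical placeholders (uniform slow inflow), not simulation output, and the sign convention, counting inflow as positive accretion, is our choice:

```python
import math

# Grid sizes mirror those quoted in the text: N_phi = 64, N_z = 240, z_max = 3 r0
N_PHI, N_Z = 64, 240
Z_MAX, R0 = 3.0, 1.0
DPHI = 2.0 * math.pi / N_PHI
DZ = Z_MAX / N_Z

def mdot_acc(rho, v_r, r):
    """Mass inflow rate through the cylinder at radius r, for 0 <= z <= 0.3 r0.

    rho(phi, z) and v_r(phi, z) are callables evaluated at cell centers;
    the overall minus sign makes inflow (v_r < 0) count as positive accretion.
    """
    total = 0.0
    for i in range(N_PHI):
        phi = i * DPHI
        for j in range(N_Z):
            zc = (j + 0.5) * DZ
            if zc <= 0.3 * R0:
                total += rho(phi, zc) * v_r(phi, zc) * r * DPHI * DZ
    return -total

# Placeholder fields: uniform density and a uniform inflow v_r = -0.01 v_K0
print(mdot_acc(lambda p, z: 1.0, lambda p, z: -0.01, 0.3))
```

For these uniform placeholder fields the sum reduces to the analytic value $`0.01\times 0.3\times 0.3\times 2\pi `$, a useful check before feeding in real gridded data.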
## 4 DISCUSSION
We showed by 3D global MHD simulations that when a differentially rotating torus is threaded by equipartition strength toroidal magnetic fields, magnetic loops emerging from the disk continue to rise and form coronal magnetic loops similar to those in the solar corona. Inside the disk, magnetic turbulence drives dynamo action which maintains magnetic fields and keeps the disk in a quasi-steady state with $`\beta 5`$. We successfully simulated the formation process of the solar type disk by 3D direct MHD simulations.
Numerical results indicate that in differentially rotating disks magnetic field lines globally show a spiral structure with low azimuthal wave number, but locally fluctuating components create numerous current sheets. We expect that magnetic reconnection taking place in these current sheets generates hot plasmas which emit hard X-rays. When the disk is optically thin, such reconnection events may be observed as large-amplitude sporadic X-ray time variations characteristic of the low states in black hole candidates (Kawaguchi et al. 1999).
We found that inside the torus, filamentary, locally strongly magnetized, low-$`\beta `$ regions appear. Even when $`\beta \sim 5`$ on average, low-$`\beta `$ regions where $`\beta \le 0.3`$ occupy 2-10% of the total volume. The low-$`\beta `$ filaments are embedded in high-$`\beta `$ plasma. Such an intermittent structure is common in magnetized astrophysical plasmas. Numerical results indicate that low-$`\beta `$ filaments are re-generated after they disappear.
Although we assumed a point-mass gravity suitable for accretion disks, the numerical results can qualitatively be applied to galactic gas disks. Our numerical results suggest that although $`\beta \gg 1`$ on average, low-$`\beta `$ filaments exist in galactic gas disks. Magnetic reconnection taking place in such low-$`\beta `$ regions may heat the interstellar gas and create hot, X-ray emitting plasmas. Tanuma et al. (1999) proposed a model in which magnetic reconnection in strongly magnetized ($`\sim 30\mu `$G) regions in our Galaxy creates hot plasma which emits the 7 keV component of the Galactic Ridge X-ray Emission (GRXE). The low-$`\beta `$ filaments can also confine the hot plasma and prevent it from escaping from the Galaxy.
We thank Drs. K. Shibata, T. Tajima, S. Mineshige, T. Kawaguchi and K. Makishima for discussion. Numerical computations were carried out by VPP300/16R at NAOJ. This work is supported in part by the Grant of Japan Science and Technology Corporation and the Grant-in-Aid of the Ministry of Education, Science, Sports, and Culture, Japan(07640348, 10147105). |
# What have we learned about Gamma Ray Bursts from afterglows?
## 1. Introduction
Gamma ray bursts (GRBs) were discovered in 1969 (Klebesadel, Strong and Olson 1973) by American satellites of the Vela class aimed at verifying Russian compliance with the nuclear atmospheric test ban treaty. Though the discovery was made in 1969, the paper appeared only four years later because the authors had lingering doubts about the reality of the effects they had discovered. Since then, several thousand bursts have been observed by more than a dozen different satellites, but it is remarkable that the basic burst features outlined in the abstract of the discovery paper (photons in the range $`0.2-1.5`$ MeV, durations of $`0.1-30`$ s, fluences in the range $`10^{-5}-2\times 10^{-4}`$ ergs cm<sup>-2</sup>) have remained substantially unchanged.
Current evidence (Fishman and Meegan 1995) has highlighted a wide ($`0.01-100`$ s) duration distribution, with hints of a bimodality which is claimed to correlate (at the $`2.5\sigma `$ level) with spectral properties. All bursts’ spectra observed so far are strictly non–thermal, and there has never been any confirmation by BATSE of a supposed thermal component (nor of cyclotron lines or precursors, for that matter) claimed in previous reports. A remarkable feature reported by BATSE is the bewildering diversity of light curves, ranging from impulsive ones (a spike followed by a slower decay, nicknamed FREDs for Fast Rise-Exponential Decay), to smooth ones, to long ones with amazingly sharp fluctuations, including even some with a strongly periodic appearance (two such examples are the ‘hand’ and the ‘comb’, so nicknamed from the number of high–Q, regularly repeating sharp spikes).
The most exceptional result from BATSE, though, was the sky distribution of the bursts (Fig.1). It was obvious from it that the bursts had to be extragalactic, as already discussed by theorists (Usov and Chibisov 1975, Paczyński 1986).
## 2. Afterglows
The next major step was triggered by BeppoSAX: in the summer of 1996, L. Piro and his coworkers located in archival data of the satellite the soft X–ray counterpart of a GRB (GRB 960720). They immediately conceived the idea of implementing a procedure to follow the next burst in real time, by re-orienting the whole satellite after the initial detection by the Wide Field Cameras, so that the more sensitive Narrow Field Instruments could pinpoint the burst location to within $`45`$ arcsecs, a feat never achieved in such short times, and by a single satellite. (There is an interesting lesson to be drawn from this: in case one should wonder why a soft X–ray telescope was not placed onboard Compton to track GRBs, it was because of rivalries between different NASA subsections, the $`X`$–ray and the $`\gamma `$–ray divisions.) After an initial snafu (GRB 970111), the gigantic effort paid off with the discovery of the X–ray afterglow of GRB 970228 (Costa et al., 1997), immediately followed by the discovery of its fading optical counterpart (van Paradijs et al., 1997), obtained through a search inside the WFC error box, in perfect agreement with theoretical predictions (Vietri 1997a, Mészáros and Rees 1997).
After the detection of the optical counterpart, the door was open to finding the bursts’ redshifts: Table I summarizes the status of our current knowledge (September 1999); bursts’ luminosities are for isotropic sources.
Two comments are in order. First, the bursts have prima facie a redshift distribution not unlike that of AGNs and of the Star Formation Rate (SFR). The initial hope that they might trace an even more distant and elusive Pop III, triggered by the fact that the second redshift detected was also the largest so far (GRB 971214, $`z=3.4`$), has now vanished. Second, in order to place the energy release of GRB 990123 in context, one should notice that $`4\times 10^{54}ergs`$ is the energy obtained by converting the rest–mass of two solar masses, or, alternatively, the energy emitted by the whole Universe out to $`z\simeq 1`$ within the burst duration. So, a single (perhaps double) star outshines the whole Universe.
Besides the distance and energy scales, the major impact of the discovery of afterglows has been the establishment of some key features of the fireball model (Rees and Mészáros 1992):
1. bursts are due to explosions, as evidenced by their power–law behaviour;
2. the explosions are relativistic, as proved by the disappearance of radio flares;
3. the burst emission is due to synchrotron emission, as shown by the afterglow spectrum, and its optical polarization.
I will illustrate these points in the following, but, lest we become too proud, we should also remember that the fireball model has met some failures. The original version of the model (Mészáros and Rees 1993) advocated the dissipation of the explosion energy at external shocks (i.e., those with the interstellar medium). Sari and Piran (1997), following a point originally made by Ruderman (1975), showed that these shocks smooth out millisecond–timescale variability, which can only be maintained by the internal shocks proposed by Paczyński and Xu (1994). Also, the fireball model originally ascribed even the emission from the burst proper (as opposed to the afterglow) to optically thin synchrotron processes; I will discuss in the section Embarrassments why this is exceedingly unlikely. Furthermore, even the last tenet of mid–90s common wisdom, i.e., that bursts are due to binary neutron star mergers, does not look too promising at the moment (since some bursts seem to be located inside star forming regions, which is incompatible with the long spiral–in time), though of course it is by no means ruled out yet.
### 2.1. The fireball model
Here, one may assume that an unknown agent deposits $`10^{51}-10^{54}ergs`$ inside a small volume of linear dimension $`10^6-10^7cm`$. The resulting typical energy density corresponds to a temperature of a few $`MeV`$s, so that electrons and positrons cannot be bound by any known gravitational field. In these conditions, optical depths for all known processes exceed $`10^{10}`$. The fluid expands because of its purely thermal pressure, converting internal into bulk kinetic energy. Parametrizing the baryon component mass as $`M_b\equiv E/\eta c^2`$, it can be shown that, for $`1\ll \eta \ll 3\times 10^5`$ (Mészáros, Laguna, Rees 1993), the fluid quickly achieves (the fluid Lorentz factor increases as $`\gamma \propto r`$) a coasting Lorentz factor $`\gamma \simeq \eta `$.
The requisite asymptotic Lorentz factor is dictated by observations: photons of energy up to $`ϵ_{ex}\simeq 18\,GeV`$ have been observed by EGRET from bursts (Fishman and Meegan 1995). For these photons to evade collisions with other photons, and thus electron/positron pair production, it is necessary that, in the reference frame in which a typical burst photon (with $`ϵ\simeq 1\,MeV`$) and the exceptional photon are emitted, they appear to be below the pair production threshold: thus we must have $`\sqrt{ϵ^{\prime }ϵ_{ex}^{\prime }}\lesssim 2m_ec^2`$, where primed quantities are measured in the comoving frame. Since $`ϵ^{\prime }\simeq ϵ/\gamma `$, and similarly for the other photon, we find (Baring 1993)
$$\gamma \gtrsim 300\left(\frac{ϵ}{1MeV}\frac{ϵ_{ex}}{10GeV}\right)^{1/2}.$$
(1)
From what we said above, we thus require a maximum baryon contamination, in an explosion of energy $`E`$, of $`M_b\lesssim 10^{-6}M_{\odot }(E/10^{51}erg)(300/\eta )`$.
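As an illustrative numerical check (not part of the original analysis), the pair–opacity bound of Eq. (1) and the corresponding baryon–load limit can be evaluated directly; the function names below are arbitrary, and the numbers are the fiducial values quoted in the text:

```python
import math

def gamma_min(eps_mev, eps_ex_gev):
    """Minimal bulk Lorentz factor from the pair-opacity argument, Eq. (1)."""
    return 300.0 * math.sqrt((eps_mev / 1.0) * (eps_ex_gev / 10.0))

def max_baryon_load(E_erg, eta):
    """Maximum baryon mass (in solar masses) compatible with eta = E / (M_b c^2)."""
    return 1e-6 * (E_erg / 1e51) * (300.0 / eta)

# Fiducial photons (1 MeV against 10 GeV) give gamma >~ 300; the 18 GeV
# photons actually seen by EGRET push the bound slightly above 400.
```

This makes explicit why $`\gamma \simeq 300`$ is usually quoted as a comfortable fiducial value.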
The energy release is now assumed to be in the form of an inhomogeneous wind, with later parts having Lorentz factors larger than parts emitted previously. This leads to shell collisions (the internal shock model) at radii $`r_{sh}`$ which allow a time–scale variability $`\delta t\simeq r_{sh}/2\gamma ^2c`$; for $`\delta t=1ms`$, $`r_{sh}\simeq 10^{13}cm`$, which fixes the internal shock radii. Particle acceleration at these internal shocks and the ensuing non–thermal emission are thought to produce the burst proper. At larger radii, a shock with the surrounding ISM forms, and shell deceleration begins at a radius $`r_{ag}=(3E/4\pi nm_pc^2\gamma ^2)^{1/3}\simeq 10^{17}cm`$ for the $`n=1cm^{-3}`$ particle density typical of galactic disks. It is thought that the afterglow begins when the shell starts to slow down, as this drives a marginally relativistic shock into the ejecta, thus extracting a further fraction of their bulk kinetic energy.
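The two radii just quoted follow from simple arithmetic; the sketch below is illustrative only, with the explosion energy, density and Lorentz factor as free parameters:

```python
import math

C = 3e10        # speed of light, cm/s
MP_C2 = 1.5e-3  # proton rest energy, erg (rounded)

def r_internal(gamma, dt):
    """Internal-shock radius allowed by a variability timescale dt: r = 2 gamma^2 c dt."""
    return 2.0 * gamma**2 * C * dt

def r_deceleration(E, n, gamma):
    """Deceleration radius r_ag = (3 E / (4 pi n m_p c^2 gamma^2))^(1/3), in cm."""
    return (3.0 * E / (4.0 * math.pi * n * MP_C2 * gamma**2)) ** (1.0 / 3.0)
```

With $`\gamma =300`$ and $`\delta t=1ms`$ the internal–shock radius comes out at a few $`\times 10^{12}cm`$, i.e. of order $`10^{13}cm`$ as quoted; with $`E\simeq 10^{53}-10^{54}erg`$ and $`n=1cm^{-3}`$ the deceleration radius is of order $`10^{17}cm`$.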
### 2.2. Why explosions
The success of the fireball model lies in the fact that it decouples the problem of the energy injection mechanism from the subsequent evolution, which is, furthermore, an essentially hydrodynamical problem. It can be shown, in fact (Waxman 1997), that the evolution of the external shock is adiabatic, that the shock Lorentz factor decreases as $`\gamma \propto r^{-3/2}`$ because of the inertia of the swept–up matter, and that $`r`$ is related to the observer’s time by $`t=r/\gamma ^2c`$, so that $`\gamma \propto t^{-3/8}`$ (for a radiative solution $`\gamma \propto t^{-3/7}`$, Vietri 1997b). If the afterglow emission is due to optically thin synchrotron radiation in a magnetic field in near–equipartition with the post–shock energy density, it can be shown that $`B\propto \gamma `$, that the typical synchrotron frequency at the spectral peak scales as $`\nu _m\propto \gamma B\gamma _e^2\propto \gamma ^4`$ (where $`\gamma _e\propto \gamma `$ is the lowest post–shock electron Lorentz factor), and that $`F(\nu )\propto t^{-3\beta /2}`$, where $`\beta `$ is the afterglow spectral slope. As can be seen, these expectations are based exclusively upon the hydrodynamical evolution (and the synchrotron spectrum), and are thus reasonably robust.
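The exponent bookkeeping here can be verified mechanically with exact fractions. The small sketch below assumes $`\gamma \propto r^a`$ and observer time $`t\propto r/\gamma ^2c`$, with $`a=-3/2`$ for the adiabatic case and the standard $`a=-3`$ for the radiative blast wave:

```python
from fractions import Fraction

def gamma_time_exponent(a):
    """Given gamma ∝ r^a and t ∝ r / (gamma^2 c), return b such that gamma ∝ t^b."""
    # t ∝ r * gamma^(-2) ∝ r^(1 - 2a), hence gamma ∝ t^(a / (1 - 2a)).
    return a / (1 - 2 * a)

adiabatic = gamma_time_exponent(Fraction(-3, 2))  # -3/8
radiative = gamma_time_exponent(Fraction(-3, 1))  # -3/7
```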
We thus expect power–law time decays, a characteristic of strong explosions (see the Sedov analogy!), with time– and spectral–indices closely related. This is what is observed everywhere, from the X–ray through the optical to the radio (see Piro and Fruchter, this volume), the few exceptions being discussed later on. In fact, the equality of the time–decay indices of the X–ray and optical data in afterglows of individual sources has been taken as the key element showing that emission in the different bands is due to the same source. Time indices in the X–ray are in the range $`0.7-2.2`$ (Frontera et al., in preparation).
### 2.3. Why synchrotron spectrum in the afterglow
After having established that bursts are due to explosions, we happily learn that afterglows emit through synchrotron processes. In fig. 2 (Galama et al., 1998), we show the superposition of theoretical expectations for an optically thin synchrotron spectrum (including the cooling break at $`\nu \simeq 10^{14}Hz`$) with observations for GRB 970508.
The remarkable agreement is even more exciting as we remark that observations are not truly simultaneous, but are scaled back to the same time by means of the theoretically expected laws for time–decay, thus simultaneously testing the correctness of our hydrodynamics. Another piece of evidence comes from the discovery of polarization in the optical afterglow of GRB 990510 (Fig. 3, Covino et al., 1999, Wijers et al., 1999).
This polarization may appear small ($`\simeq 2\%`$), but it is surely not due to Galactic effects: stars in the same field show a comparable degree of polarization, but along an axis differing by about $`50^{\circ }`$. Also, polarization intrinsic to the host galaxy is unlikely, because of a very stringent upper limit on the reddening due to this galaxy (Covino et al., 1999). The only remaining question mark is emission from an anisotropic source, but this would require a disk of $`\simeq 10^{18}cm`$ to survive the intense $`\gamma `$–ray (and X–ray, and UV) flash: though not excluded, it does not look likely.
### 2.4. Why relativistic expansion
Radio observations of GRB 970508 (Frail et al., 1997), the first burst for which a radio afterglow was detected, showed puzzling fluctuations by about a factor of $`2`$ in the flux, over a time–scale of days, which disappeared about $`30`$ days after the burst (Fig. 4). This extreme and unique behaviour was explained by Goodman (1997), who showed that it is due to interference between rays travelling along different paths through the ISM, randomly deflected by the spatially varying refractive index of the turbulent medium. The wonderful upshot of this otherwise marginal phenomenon is that these effects cease whenever the source expands beyond a radius
$$R=10^{17}cm\frac{\nu _{10}^{6/5}}{d_{sc,kpc}h_{75}}\left(\frac{SM}{10^{-2.5}m^{-20/3}kpc}\right)^{-3/5},$$
(2)
where $`\nu _{10}`$ is the radio observing frequency in units of $`10^{10}Hz`$, $`d_{sc,kpc}`$ is the distance of the ISM from us (assumed to be a uniform scattering screen), and $`SM`$ is the Galactic scattering measure, scaled to a typical Galactic value.
The existence of interference effects is made more convincing by the amplitude of the average increase (a factor of 2, as observed), and by the correct prediction of the time–interval between different peaks and of the decorrelation bandwidth. Since the flares disappear after about $`30`$ days, the average apparent speed of the radio source is $`R/30\,\mathrm{days}\simeq 3\times 10^{10}cm\,s^{-1}`$. So we see directly that GRB 970508 expanded at an average speed $`\simeq c`$ over a whole month, giving us direct observational proof that the source is highly relativistic. This proof is completely equivalent to superluminal motions in blazars, and is the strongest evidence in favor of the fireball model.
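The speed estimate is elementary but worth making explicit; a quick check, with the illustrative numbers quoted in the text:

```python
R_QUENCH = 1e17          # source size at which scintillation is quenched, cm (Eq. 2)
T_FLARES = 30 * 86400.0  # flares persist for about 30 days, in seconds

v_mean = R_QUENCH / T_FLARES  # mean apparent expansion speed, cm/s
# v_mean comes out at ~4e10 cm/s, i.e. of the order of the speed of light.
```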
### 2.5. GRB 970508: our best case
The afterglow of GRB 970508 is our best case so far: it is in fact a burst for which not only do we know the redshift, but also a radio source that has been monitored for more than $`400`$ days after the explosion (Frail, Waxman and Kulkarni 2000). Through these observations we can see the transition to a sub–relativistic regime at $`t\simeq 100`$ days, measure the total energetics of the following Sedov phase (unencumbered by relativistic effects!), $`E_{New}=5\times 10^{50}ergs`$, determine two elusive parameters, $`ϵ_{eq}=0.5`$ and $`ϵ_B=0.5`$ (the efficiencies with which energy is transferred to post–shock electrons by protons, and with which an equipartition field is built up), and the density of the surrounding medium, $`n\simeq 0.4cm^{-3}`$. All of these values look reasonable (perhaps $`ϵ_{eq}`$ and $`ϵ_B`$ exceed our expectations by a factor of 10, a fact that could be remedied by introducing a slight density gradient which would keep the shock more efficient), so that our confidence in the external–shock–in–the–ISM model is boosted.
Another precious consequence of these late–time observations is that they yield information on beaming and energetics. In fact, GRB 970508 appeared to have a kinetic energy of $`E_{rel}=5\times 10^{51}erg`$ when in the relativistic phase, a measurement which can be reconciled with $`E_{New}`$ (remember that the expansion is adiabatic, so that we must have $`E_{New}=E_{rel}`$!) only if the unknown beaming solid angle, assumed $`=4\pi `$ in deriving $`E_{rel}`$, is smaller than $`4\pi `$ by the factor $`E_{New}/E_{rel}`$; we thus have the only measurement so far, $`\delta \mathrm{\Omega }/4\pi =0.1`$. This already rules out all classes of models requiring implausibly large amounts of beaming, $`10^{-8}`$ or even beyond. Hopefully, more such measurements will come in the future, since this observationally demanding method is subject to many fewer uncertainties than the competing method of trying to locate breaks in the time–decay of afterglows. Also, the radiative efficiency of the burst can be estimated: correcting the observed burst energy release $`E_{GRB}=2\times 10^{51}erg`$ for the same beaming factor, the radiative efficiency is $`E_{GRB}\delta \mathrm{\Omega }/4\pi /(E_{New}+E_{GRB}\delta \mathrm{\Omega }/4\pi )\simeq 0.3`$, again a unique determination. Notice however that this figure is subject to a systematic uncertainty: we do not know whether the beaming fraction is the same for the burst proper and for the afterglow.
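The beaming and efficiency estimates amount to the following arithmetic. This is a sketch: the efficiency is taken here as beamed radiated energy over beamed radiated plus calorimetric kinetic energy, which is one plausible reading of the estimate quoted in the text:

```python
E_REL = 5e51  # isotropic-equivalent kinetic energy, relativistic phase (erg)
E_NEW = 5e50  # calorimetric kinetic energy from the Sedov phase (erg)
E_GRB = 2e51  # isotropic-equivalent radiated gamma-ray energy (erg)

beaming = E_NEW / E_REL               # delta Omega / 4 pi, ~0.1
radiated = E_GRB * beaming            # beaming-corrected radiated energy
efficiency = radiated / (radiated + E_NEW)  # ~0.3
```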
## 3. Embarrassments
Something is rotten in the fireball kingdom as well, namely, departures from pure power–law behaviours, and the spectra of the bursts proper.
### 3.1. Unpowerlawness
Departures from power–laws are expected when one considers the extremely idealized character of the solutions discussed so far: perfect spherical symmetry, uniform surrounding medium, smooth wind from the explosion, $`ϵ_{eq}`$ and $`ϵ_B`$ constant in space and time. The tricky point here is to disentangle these distinct factors. In GRB 970508 and GRB 970828 (Piro et al., 1999, Yoshida et al., 1999) a major departure was observed in the X–ray emission, within a couple of days of the burst; these constitute the largest violations observed so far, in terms of number of photons. It is remarkable that spectral variations were simultaneously observed, and that both bursts showed traces (at the $`2.7\sigma `$ significance level) of an iron emission line. The similarity of the bursts’ behaviour argues in favor of the reality of these spectral features, which have been interpreted as thermal emission from a surrounding stellar–size leftover, pre–expelled by the burst’s progenitor (Lazzati et al., 1999, Vietri et al., 1999). Clearly, these departures hold major pieces of information on the bursts’ surroundings, and the nature of bursts’ progenitors.
It has been argued (Rhoads 1997) that, whenever the afterglow shell decelerates below $`\gamma \simeq 1/\theta `$, where $`\theta `$ is the beam semi–opening angle, the emission should decrease because of the lack of emitting surface, compared to an isotropic source. But, in view of the existence of clear environmental effects (GRB 970508 and GRB 970828), it appears premature to put much stock in the interpretation of time–power–law breaks as due to beaming effects. And equally, it appears to this reviewer that the same comment applies to the interpretation of a resurgence of flux as due to the appearance of a SN remnant behind the shell. The major uncertainty here is the non–uniqueness of the interpretation: Waxman and Draine (2000) have shown that effects due to dust can mimic the same phenomenon.
### 3.2. Bursts’ spectra
A clear prediction of optically thin synchrotron emission is that the low–photon–energy spectra should scale like $`dN_\nu /d\nu \propto \nu ^\alpha `$, with $`\alpha =-3/2`$, since the emission is in the fast cooling regime. Within thin synchrotron, there is no way to obtain $`\alpha >-3/2`$. This early–recognized requirement (Katz 1994) is so inescapable that it has been dubbed the ‘line of death’. Observations are notoriously discordant with this prediction. Preece et al. (1999) have shown that, for more than 1000 bursts, $`\alpha `$ is distributed like a bell between $`-2`$ and $`0`$, with mean $`\overline{\alpha }\simeq -1`$. The tail of this distribution also contains a few tens of objects with $`\alpha \simeq +1`$. An example of these can be found in Frontera et al., 1999 (GRB 970111), which is instructive since BeppoSAX has better coverage of the critical, low–photon–energy region. In particular, BATSE seems to lose sensitivity below $`30keV`$, but this is still not enough to explain away the discrepancy with the theory. Also, Preece et al., 1999, showed that the time–integrated spectral energy distribution has a peak at a photon energy $`ϵ_{pk}\simeq 200keV`$, and that $`ϵ_{pk}`$ has a very small variance from burst to burst. Again, this does not seem to depend upon BATSE’s lack of sensitivity above $`700keV`$, and again this has no explanation within the classic fireball model.
Any theorist who has worked on blazars will say that the root of the disagreement is the neglect of Inverse Compton processes, but the trick here is not to identify the culprit, on which everyone agrees, but to devise a fireball model that smoothly incorporates it. One should remember that the details of the fireball evolution are generic, i.e., they do not depend upon any detailed property of the source, so that things like the radius at which the fireball becomes optically thin (to pairs or baryonic electrons), the radius at which acceleration ends, the equipartition magnetic field, and so on, are all reliably and inescapably fixed by the outflow’s global properties. A step toward the solution has been made by Ghisellini and Celotti (1999) who remarked that at least some bursts have compactness parameters $`l=10(L/10^{53}erg\,s^{-1})(300/\gamma )^5\gg 1`$. Under these conditions, a pair plasma will form, nearly thermalized at $`kT\lesssim m_ec^2`$, and with Thomson optical depth $`\tau _T\simeq 10`$. The modifications which this plasma will bring to the burst’s spectrum are currently unknown, but it may be remarked that this configuration will be optically thick to both high–energy synchrotron photons due to non–thermal electrons accelerated at the internal shocks, and to low–energy cyclotron photons emitted by the thermal plasma, but it will be optically thin in the intermediate region reached by cyclotron photons upscattered via IC processes off non–thermal electrons. A model along this line (i.e., upscattering of cyclotron photons by highly relativistic electrons) is in preparation (Vietri 2000a), but it remains to be seen whether it (like any other model, of course) can simultaneously explain the spectral shape and the narrow range of the spectral distribution peak energy $`ϵ_{pk}`$.
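For concreteness, the compactness parameter quoted by Ghisellini and Celotti can be evaluated as follows (an illustrative sketch; note the very steep dependence on the bulk Lorentz factor):

```python
def compactness(L, gamma):
    """Compactness parameter l = 10 (L / 1e53 erg/s) (300 / gamma)^5."""
    return 10.0 * (L / 1e53) * (300.0 / gamma) ** 5

# A luminosity of 1e53 erg/s with gamma = 300 gives l = 10, i.e. a
# pair-dominated regime; doubling gamma suppresses l by a factor of 32.
```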
## 4. On the central engine
As remarked several times already, the fireball evolution is independent of the nature of the source. The only existing constraint is the maximum amount of baryon contamination, which is
$$M_b=\frac{E}{\eta c^2}=10^{-6}M_{\odot }\frac{E}{10^{51}erg}\frac{300}{\gamma }.$$
(3)
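A quick numerical check of Eq. (3), with rounded constants (illustrative only):

```python
C = 3e10       # speed of light, cm/s
M_SUN = 2e33   # solar mass, g

def baryon_mass(E, eta):
    """M_b = E / (eta c^2) from Eq. (3), returned in solar masses."""
    return E / (eta * C**2) / M_SUN

# For E = 1e51 erg and eta = 300, M_b ~ 2e-6 solar masses,
# i.e. of the order of the 1e-6 M_sun normalization in Eq. (3).
```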
This is a remarkably small value: since the inferred luminosities exceed the Eddington luminosity by 13 orders of magnitude, they clearly have all it takes to disrupt a whole star, no matter how compact. Yet, the energy deposition must somehow occur outside the main mass, lest the explosion be slowed down to less relativistic, or even possibly Newtonian, speeds. In order to satisfy this constraint, it has emerged that the most favorable configuration has a stellar–mass black hole ($`M_{BH}\simeq 3-10M_{\odot }`$) surrounded by a thick torus of matter ($`M_t\simeq 0.01-1M_{\odot }`$, with $`\rho \simeq 10^{10}g\,cm^{-3}`$). The presence of a black hole is not required by observations in any way: models involving neutron stars are still admissible, the advantage of having a black hole being only the deeper potential well: one gets more energy out per unit accreted mass. The configuration thus envisaged has a cone surrounding the symmetry axis devoid of baryons, since all models leading to this configuration have large amounts of specific angular momentum, and thus baryons close to the rotation axis either are not there, or have accreted onto the black hole due to their lack of centrifugal support.
### 4.1. Energy release mechanism
There are two major mechanisms for energy release discussed in the literature, the first one to be proposed (Berezinsky and Prilutskii 1986) being the reaction $`\nu +\overline{\nu }\rightarrow e^{-}+e^{+}`$. Neutrinos have non–negligible mean free paths in the tori envisaged here, so that this annihilation reaction will take place not inside the tori themselves, where the neutrinos are preferentially generated because densities are highest, but in a larger volume surrounding the source. This is both a blessing and a curse: by occupying a larger volume, the probability that each neutrino finds an antineutrino with which to annihilate decreases, but the energy is then released in baryon–cleaner environments. The problem, though complex, is eminently suitable for numerical simulations, which show (Janka et al., 1999, and references therein) that about $`10^{50}ergs`$ can be released this way above the poles of the black hole, where less than $`10^{-5}M_{\odot }`$ of baryons are found.
Highly energetic bursts cannot be reproduced by this mechanism, due to its low efficiency: the second mechanism proposed involves the conversion of Poynting flux into a magnetized wind. The basic physical mechanisms are well–known (Usov 1992), since they have been studied in the context of pulsar emission: electrons are accelerated by a motional electric field $`\stackrel{}{E}=\stackrel{}{v}\times \stackrel{}{B}/c`$ due to the rotation of a sufficiently strong magnetic dipole, attached either to the black hole or to the torus. Photons are then produced by synchrotron or curvature radiation, and photon/photon collisions produce pairs, to close the circle and allow looping. In order to carry away $`10^{51}erg\,s^{-1}`$, a magnetic field of $`10^{15}G`$ is required. This is not excessive, since it is about three orders of magnitude below equipartition with the torus matter, and because such fields already exist in nature, see SGR 1806-20 and SGR 1900+14: the key point is to understand whether some kind of dynamo effect can lead to these high values within the short allotted time.
Depending upon whether the open magnetic field lines extending to infinity are connected to the black hole or to the torus, the source of the energy of the outflow will be the rotational energy of the black hole (the so–called Blandford–Znajek effect) or of the torus. The first case is traditionally discussed in the context of AGNs (Rees, Blandford, Begelman and Phinney 1984), but it is hotly disputed whether the energy outflow may actually be dominated by the black hole rather than by the disk (Ghosh and Abramowicz, 1997, Livio, Ogilvie and Pringle 1998). On the other hand, the torus looks ideal as the source of a dynamo: its large shear rate, the presence of the Balbus–Hawley instability to convert poloidal into toroidal flux, and the possible presence of the anti–floating mechanism inhibiting ballooning of the magnetic field (Kluzniak and Ruderman 1998), all seem to favor the existence of a fast dynamo. It should also be remarked that the configuration of the magnetic field in this problem is known: in fact, the configuration discussed in Thorne et al., 1986 for black holes only uses the assumptions of steady–state and axial symmetry, and is thus immediately extended to magnetic fields anchored to the torus. What is really required here is a first order study, of the sort published by Tout and Pringle (1992) on angular momentum removal from young, pre–main–sequence stars via magnetic stresses, and on the associated $`\alpha `$–$`\omega `$ dynamo. Until such studies are made, it will be premature to claim that neutrino annihilations are responsible for the powering of GRBs.
### 4.2. Progenitors
There is no lack of proposed progenitors, but I will discuss only binary neutron star mergers (Narayan, Paczyński and Piran 1992), collapsars (Woosley 1993, Paczyński 1998) and SupraNovae (Vietri and Stella 1998, 1999).
Clearly, NS/NS mergers are the best model on paper: they involve objects which have already been detected, orbital decay induced by gravitational wave emission is shown by observations to work as per the theory, and numerical simulations by Janka’s group show that a neutrino–powered outflow in baryon–poor matter can be initiated. The major theoretical uncertainties here concern the bursts’ durations and energetics: all numerical models produce short bursts ($`\simeq 0.1s`$) with modest energetics, $`E<10^{51}erg`$. This is a direct consequence of the mechanism for powering the burst: large, super–Eddington luminosities are carried away by neutrinos, leading to a large mass influx, but only a small fraction, $`1-3\%`$, can be harnessed for the production of the burst. Furthermore, we cannot invoke large beaming factors in this case: the outflow is only marginally collimated, in agreement with expectations that an accretion disk with a ratio of outer to inner radii $`R_{out}/R_{in}`$ of a few (as is the case here) can only produce a beam semi–opening angle $`\simeq R_{in}/R_{out}`$. So, perhaps, this model may account for the short bursts, but it should be remembered that nothing of what was discussed above pertains to this subclass: BeppoSAX (and thus all BeppoSAX–triggered observations) can only detect long bursts.
On the other hand, future space missions, whether or not able to locate short bursts, can provide a decisive test of this model, provided they can follow with sufficient sensitivity a given burst for several hours. This model, in fact, is the only one proposed so far according to which some explosions should take place outside galaxies: according to Bloom, Sigurdsson and Pols (1999), about $`50\%`$ of all bursts will be located more than $`8kpc`$ from a galaxy, and $`15\%`$ in the IGM. This characteristic is testable without recourse to optical observations. In fact, the afterglow begins with a delay (as seen by an outside observer) of $`t_d=(r_{ag}-r_{sh})/\gamma ^2c\simeq r_{ag}/\gamma ^2c`$, which varies greatly depending upon the environment in which the burst takes place:
$$t_d=\{\begin{array}{cc}15s\hfill & \text{ISM, n = }1cm^{-3}\hfill \\ 5min\hfill & \text{galactic halo, n = }10^{-4}cm^{-3}\hfill \\ 4h\hfill & \text{IGM, n = }10^{-8}cm^{-3}\hfill \end{array}$$
(4)
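Since $`r_{ag}\propto n^{-1/3}`$ at fixed energy and Lorentz factor, the entries of Eq. (4) simply rescale the ISM value; an illustrative sketch, taking 15 s at $`n=1cm^{-3}`$ as the reference:

```python
def afterglow_delay(n, t_ism=15.0, n_ism=1.0):
    """Afterglow onset delay t_d ∝ r_ag / (gamma^2 c) ∝ n^(-1/3), in seconds."""
    return t_ism * (n_ism / n) ** (1.0 / 3.0)

# n = 1e-4 cm^-3 (halo) gives ~5 minutes; n = 1e-8 cm^-3 (IGM) gives
# ~2 hours with these fiducials, of the order of the quoted 4 h
# (which presumably assumes slightly different parameters).
```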
A silence of recognizable duration is thus expected between the burst proper and the beginning of the power–law–like afterglow (Vietri 2000b).
Collapsars are currently in great vogue as a possible source of GRBs: the large amount of energy available as the core of a very massive star collapses directly to a black hole is in fact very attractive, even though (again!) the limited efficiency of the reaction $`\nu +\overline{\nu }\rightarrow e^{-}+e^{+}`$ makes most of this energy unavailable. Here too there is some evidence that these objects must exist (Paczyński 1998), and numerical simulations, again showing energy preferentially deposited along the hole’s rotation axis, are also available (MacFadyen and Woosley 1999). Here, however, what is truly puzzling is how the outflow can pierce the star’s outer layers without loading itself with baryons: we should remember that at most $`10^{-6}M_{\odot }`$ can be added to $`10^{51}erg`$: more baryons imply a proportionately slower outflow. The argument is that the dynamical timescale of the outer layers of a massive star is of the order of a few hours, so that, even if the core collapses and pressure support is removed, nothing will happen during the energy release phase: the outflow must pierce its way through. Two processes seem especially dangerous: the Rayleigh–Taylor instability of the fluid heated up by neutrino annihilations as it is weighed upon by the colder, denser outer layers, and the Kelvin–Helmholtz instability after the hot fluid has pierced the outer layers and is passing through the hole. It is well–known that the non–linear development of these instabilities leads to mass entrainment, and that the time–scale for their development is very fast. Furthermore, the baryon–free outflow may be ‘poisoned’ by baryons to a deadly extent even if numerical simulations, with their finite resolution, were to detect nothing of the kind.
The third class of models, SupraNovae, concerns supramassive neutron stars which are stabilized against self–gravity by fast rotation, to such an extent that they cannot be spun down to $`\omega =0`$: they implode to a black hole first. As the star’s residual magnetic dipole sheds angular momentum, this is exactly the fate to be expected for the whole star, except for a small equatorial belt, whose later accretion will power the burst. It is easy to show that this implosion must take place in a very baryon–clean environment. The major uncertainties here concern the channels of formation and the existence of this equatorial belt. Two channels of formation have been proposed: direct collapse to a supramassive configuration (Vietri and Stella 1998) and slow mass accretion in a low–mass X–ray binary (Vietri and Stella 1999). Both are possible, though neither is yet supported by observations. The existence of the left–over belt has recently been questioned by Shibata, Baumgarte and Shapiro (1999), who however simulated the collapse of neutron stars with intermediate equations of state, which are entirely (or very nearly) contained inside the marginally stable orbit even before collapse: clearly, these must be swallowed whole by the resulting black hole. Soft equations of state are free of this objection, and are thus much more likely to leave behind an equatorial belt; soft EoSs are in any case favored, since the neutron stars must survive the $`r`$–mode instability (Weber 1999). So one might say that the existence of these stars hinges on one uncertainty only, the EoS of nuclear matter. Besides the baryon–clean environment, SupraNovae have another advantage over rival models: only the lowest density regions would be left behind, precisely those with the smallest neutrino losses.
The powering of the burst can thus occur through accretion caused by removal of angular momentum by magnetic stresses, without the parallel, unproductive, neutrino generation.
## 5. Conclusions
It is difficult to end on an upbeat note: we cannot expect in the near future a rate of progress similar to the one we witnessed in the past three years. In particular, it may be expected that the next flurry of excitement will come with the beginning of the SWIFT mission, which promises to collect relevant data (redshifts, galaxy types, location within or without galaxies, absorption or emission features in the optical and in the X–ray) for a few hundred bursts. These data will nail down the major characteristics of the environment (at large) in which bursts take place, and we may be able to rule out a few models. On the other hand, the energy release process, shrouded as it is in optical depths $`>10^{10}`$, will remain mysterious, our only hope in this direction being gravitational waves.
Judging by the analogy with radio pulsars, this will correspond to the flattening of the learning curve. Aside from this, we may hope to locate the equivalent of the binary radio pulsar, but, unlike Joe Taylor, we shall have to be awfully quick in grabbing it.
## ACKNOWLEDGEMENTS
Thanks are due to Gabriele Ghisellini, who wisely steered me away from synchrotron emission, and toward the true light of Inverse Compton.
## REFERENCES
Baring, M., 1993, Ap.J., 418, 391.
Berezinsky, V.S.,, Prilutskii, O.F., 1986, Astr.Ap., 175, 309.
Bloom, J.S., Sigurdsson, S., Pols, O.R., 1999, MNRAS, 305, 763.
Costa, E., et al., 1997, Nature, 387, 783.
Covino, S., et al., 1999, Astr.Ap., 348, L1.
Fishman, G.J., Meegan, C.A., 1995, ARAA, 33, 415.
Frail, D., et al., 1997, Nature, 389, 261.
Frail, D., Waxman, E., Kulkarni, S., 2000, Ap.J., submitted, astro–ph 9910319.
Frontera, F., et al., 1999, Ap.J.S., in press, astro–ph 9911228.
Galama, T., et al., 1998, Ap.J.L., 500, L97.
Ghisellini, G., Celotti, A., 1999, Ap.J.L., 511, L93.
Ghosh, P., Abramowicz, M.A., 1997, MNRAS, 292, 887.
Goodman, J.J., 1997, New Astr., 2, 449.
Janka, T., Eberl, T., Ruffert, M., Fryer, C.L., 1999, Ap.J.L., 527, L39.
Katz, J., 1994, Ap.J., 422, 248.
Klebesadel, R.W., Strong, I.B., Olson, R.A., 1973, Ap.J.L., 182, L85.
Kluzniak, W., Ruderman, M., 1998, Ap.J.L., 505, L113.
Livio, M., Ogilvie, G.I., Pringle, J.E., 1999, Ap.J., 512, 100.
Lazzati, D., Campana, S., Ghisellini, G., MNRAS, 1999, 304, L31.
MacFadyen, A., Woosley, S.E., 1999, Ap.J., 524, 262.
Mészáros, P. Laguna, P., Rees, M.J., 1993, Ap.J., 415, 181.
Mészáros, P., Rees, M.J., 1993, Ap.J., 405, 278.
Mészáros, P., Rees, M.J., 1997, Ap.J., 476, 232.
Narayan, R., Paczynski, B., Piran, T., 1992, Ap.J.L., 395, L83.
Paczyński, B., 1986, Ap.J.L., 308, L43.
Paczyński, B., 1998, Ap.J.L., 494, L45.
Paczyński, B., Xu, G., 1994, Ap.J., 427, 708.
Piro, L., et al., 1999, Ap.J.L., 514, L73.
Preece, R.D., et al., 1999, Ap.J.S., in press, astro–ph. 9908119.
Rees, M.J., Begelman, M.C., Blandford, R.D., Phinney E.S., 1984, Nature, 295, 17.
Rees, M.J., Mèszàros, P., 1992, MNRAS, 258, P41.
Rhoads, J., 1997, Ap.J.L., 487, L1.
Ruderman, R., 1975, Ann. NY. Acad. Sci., 262, 164.
Sari, R., Piran, T., 1997, Ap.J., 485, 270.
Shibata, M., Baumgarte, T.W., Shapiro, S.L., 1999, Phys.Rev.D, submitted, astro–ph. 9911308.
Thorne, K.S., Price, R.H., MacDonald, D.A., 1986, Black holes: the membrane paradigm, Yale:New Haven, Yale Univ. Press.
Tout, C.A., Pringle, J.E., 1992, MNRAS, 256, 269.
Usov, V.V., 1992, Nature, 357, 472.
Usov, V.V., Chibisov, G.V. Soviet Astr., 19, 115.
van Paradijs, J., et al., 1997, Nature, 386, 686.
Vietri, M., 1997a, Ap.J.L., 478, L9.
Vietri, M., 1997b, Ap.J.L., 488, L105.
Vietri, M., 2000a, in preparation.
Vietri, M., 2000b, Ap.J.L., submitted.
Vietri, M., Perola, G.C., Piro, L., Stella, L., 1999, MNRAS, 308, P29.
Vietri, M., Stella, L., 1998, Ap.J.L., 507, L45.
Vietri, M., Stella, L., 1999, Ap.J.L., 527, L43.
Waxman, E., 1997, Ap.J.L., 489, L33.
Waxman, E., Draine, B.T., 2000, Ap.J., in press, astro–ph. 9909020.
Waxman, E., Frail, D., Kulkarni, D., 1998, Ap.J., 497, 288.
Weber, F., 1999, in Pulsars as astrophysical laboratories for nuclear and particle physics, Bristol, U.K.; Institute of Physics.
Wijers, R.A.M.J., et al., 1999, Ap.J.L., 523, L33.
Woosley, S., 1993, Ap.J., 405, 273.
Yoshida, A., et al., 1999, Astr.Ap.S., 138, 433. |
## 1 Introduction
XEUS, the X-ray Evolving Universe Spectroscopy mission, is a potential follow-on mission to XMM and is being studied as part of the Horizon 2000+ program within the context of the International Space Station (ISS) utilization. The XEUS mission aims to place a long lived X-ray observatory in space with a sensitivity comparable to the next generation of ground and space based observatories such as ALMA and NGST (Fig. 1). By making full use of the facilities available at the ISS and by ensuring in the design a significant growth and evolution potential, the overall mission lifetime of XEUS could be $`>`$25 years.
The key characteristic of XEUS is the large aperture X-ray mirror. This will capitalize on the successful XMM mirror technology and the industrial foundations which have already been laid in Europe for this program. The XEUS mirror aperture of 10 m diameter will be divided into annuli, with each annulus sub-divided into sectors. The basic mirror unit therefore consists of a set of heavily stacked thin mirror plates. This unit is known as a “mirror petal” and is a complete, free standing, calibrated part of the overall XEUS optics with a spatial resolution of 2–5<sup>′′</sup> HEW and a broad energy range of 0.05–30 keV. Each mirror petal will be individually alignable in orbit. Narrow and Wide field imagers will provide FOVs of 1<sup>′</sup> and 5<sup>′</sup>, and energy resolutions of 1–2 eV and 50 eV at 1 keV.
## 2 Mission Profile
XEUS will consist of separate detector (DSC) and mirror spacecraft (MSC) separated by 50 m and aligned by active control. The large aperture mirror cannot be deployed in a single launch. Instead, the “zero growth” XEUS (MSC1+DSC1) will be launched directly into a Fellow Traveler Orbit (FTO) to the ISS using an Ariane V or similar. The FTO is a low Earth orbit with an altitude of $`\sim `$600 km and an inclination similar to the ISS. The mated pair will then decouple and DSC1 will take up station 50 m from the MSC1 and after check-out the zero growth astrophysics observation program will commence with an aperture of 6 m<sup>2</sup> at 1 keV.
After 4–5 years of observations, the XEUS spacecraft will re-mate and maneuver to the vicinity of the ISS. At the ISS the MSC1 will separate from DSC1 and then dock with the ISS. The DSC1, with its usefulness at an end, will undergo a controlled de-orbit. At the ISS the mirror area is expanded to 30 m<sup>2</sup> at 1 keV (see Fig. 2) and MSC1 becomes MSC2. The extra mirror petals will have already been transported to the ISS using the STS or the European Automated Transfer Vehicle (ATV). Once the mirror growth and checkout is complete, MSC2 will leave the ISS and mate with the recently launched DSC2. Using the DSC2 propulsion system the pair will return to FTO and the fully grown XEUS will start its observing program.
## 3 Science Goals
XEUS will study the evolution of the hot baryons in the Universe and in particular:
* Detect massive black holes in the earliest AGN and estimate their mass, spin and $`z`$ through studies of relativistically broadened Fe-K lines and variability.
* Study the formation of the first gravitationally bound, dark matter dominated, systems, i.e. small groups of galaxies, and trace their evolution into today’s massive clusters.
* Study the evolution of metal synthesis down to the present epoch, using in particular, observations of the hot intra-cluster gas.
* Characterize the mass, temperature, and density of the intergalactic medium, much of which may be in hot filamentary structures, using absorption line spectroscopy. High $`z`$ luminous quasars and X-ray afterglows of gamma-ray bursts can be used as background sources.
### 3.1 Spectroscopy of Massive Black holes
Currently, X-ray astronomy can only detect AGN to a $`z`$ of $`\sim `$5. XEUS will be able to undertake detailed X-ray spectroscopy of much more distant AGN. Fig. 3 illustrates the results of a series of simulations of a “typical” AGN with a 2–10 keV rest-frame luminosity of $`10^{44}`$ erg s<sup>-1</sup> at different red-shifts. An exposure time of 10<sup>6</sup> s was assumed for the fully grown XEUS. Values for H<sub>0</sub> and q<sub>0</sub> of 50 km s<sup>-1</sup> Mpc<sup>-1</sup> and 0.5, together with an underlying $`\mathrm{E}^{-2.0}`$ spectrum with a Galactic $`\mathrm{N}_\mathrm{H}`$ of $`10^{21}`$ atom cm<sup>-2</sup> and a local (red-shifted) $`\mathrm{N}_\mathrm{H}`$ of $`5\times 10^{21}`$ atom cm<sup>-2</sup>, were assumed. A “double-horned” relativistically distorted and Doppler broadened Fe line at 6.4 keV with a rest-frame equivalent width of 350 eV was simulated. The other line parameters were taken to be as for MCG-6-30-15. Fig. 3 shows the residuals when the source is red-shifted to $`z`$ = 3, 5, 7, and 10, demonstrating that such a line can be clearly detected and its properties measured even at $`z`$ = 10.
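Since a cosmologically redshifted line simply scales as $`E_{\mathrm{obs}}=E_{\mathrm{rest}}/(1+z)`$, the observed position of the 6.4 keV Fe-K line at the simulated redshifts can be sketched in a few lines of Python (the helper name below is ours, not part of the XEUS study):

```python
# Redshift scaling E_obs = E_rest / (1 + z); helper name is illustrative.
def observed_fe_k_energy(z, e_rest=6.4):
    """Observed energy (keV) of a rest-frame 6.4 keV Fe-K line at redshift z."""
    return e_rest / (1.0 + z)

for z in (3, 5, 7, 10):
    print(f"z = {z:2d}: Fe-K line observed at {observed_fe_k_energy(z):.2f} keV")
```

Even at $`z`$ = 10 the line sits near 0.58 keV, comfortably inside the 0.05–30 keV XEUS band quoted above.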
### 3.2 Spectroscopy of Distant Galaxy Groups
To illustrate the potential of XEUS to study the formation of large scale structure Fig. 4 shows a simulation of a distant ($`z`$ = 2) galaxy group. In standard cosmological models these groups are the first emerging massive objects, with masses of $`\sim 10^{13}`$ M<sub>☉</sub>. The epoch of their first formation depends critically on the adopted cosmology, and is likely to be $`z\sim 2`$–5. Therefore the study of groups will provide a deep probe of the early Universe. These systems and their dark matter haloes are the smallest units by which to study the hot thermal intergalactic gas trapped in deep gravitational wells. Emission lines of O, Fe, Mg, and Si are clearly evident. The temperature can be determined to better than $`\pm `$3% and the Fe and O abundances to better than 10% and 20%, respectively.
### 3.3 Resonant Absorption Line Studies
XEUS will be the first X-ray observatory capable of detecting resonance absorption lines for a wide range of objects. This results from the unique combination of large effective area and high spectral and spatial resolutions. The use of resonance absorption lines can be applied to several problems, as it is in optical/UV astronomy. Resonance absorption lines are generally detectable at much lower column densities than absorption edges (which do not require high resolution spectroscopy), and therefore can trace gas which is too tenuous to be seen by other means.
Intervening hot/warm gas clouds along the line of sight towards distant background sources will produce resonance absorption lines. The main issues that can be addressed with these studies include:

* the use of absorber number counts and their redshift dependence to test models of large-scale structure formation;
* the determination of the temperature distribution of baryons in the Universe;
* the determination of metallicities of the absorbers, and in particular the \[O/Fe\] ratio, to infer the relative rates of type I and II Supernovae;
* the determination of the redshift evolution of parameters such as number counts, gas kTs, and metallicities;
* and, when the emitting gas is also seen in absorption, the use of both emission and absorption to infer distances, and therefore measure key cosmological parameters.
### 3.4 Studying Dust Enshrouded AGN and Starburst Galaxies
In order to test the sensitivity of XEUS to discriminate between AGN and starburst emission, spectra of a composite starburst galaxy plus a heavily absorbed AGN have been simulated. The starburst emission was parameterized by a thermal gas at kT = 3 keV with 0.3 solar metallicity. Above a few keV the absorbed AGN is expected to show up, with a strong (EW = 1 keV) Fe-K line due to transmission through the $`\mathrm{N}_\mathrm{H}=10^{24}`$ cm<sup>-2</sup> absorbing material. Such a model is similar to that of the nearby galaxies NGC 6240, NGC 4945, and Mkn 3. Fig. 6 demonstrates that XEUS1 will allow a detailed study of such sources around $`z=1`$, but that XEUS2 is required to perform spectroscopy at $`z\gg 1`$. Such X-ray spectra are the only way to obtain direct proof of the existence of dust-enshrouded AGNs at high redshift. If detected, they would allow the starburst versus AGN contributions to be directly disentangled. If undetected, they would give strong limits on the AGN contribution. This would have important consequences for the star formation and ionization histories of the Universe.
### 3.5 Stellar Spectroscopy
The large effective area of the XEUS configuration provides unique opportunities for stellar X-ray astronomy. The high sensitivity means that solar-like X-ray emission can be detected out to distances of a few kpc. As a consequence large samples of truly solar-like stars become amenable for study. For example, at the distance of M 67, an old open cluster with an age similar to that of the Sun, a limiting X-ray luminosity of 10<sup>26</sup> erg s<sup>-1</sup> can be reached, implying that solar minimum X-ray emission levels can be detected. This is particularly relevant for a study of activity cycles in other solar-like stars, since in the Sun the solar cycle is most easily detectable in the X-ray domain. In addition, the XEUS sensitivity is so large that in nearby open clusters such as Hyades and Pleiades virtually all cluster stars will be detectable as X-ray sources, and the X-ray brightest cool stars can even be detected in nearby galaxies such as the LMC and M 31.
X-ray images of the Sun have revealed that hot plasma trapped in closed magnetic loops provides almost all of the solar X-ray emission. While such X-ray emission is usually “quiet”, sometimes restructuring of such magnetic loops gives rise to intense outbursts of radiation in the form of flares. On other stars, flares much more intense than those on the Sun are observed. Time resolved high resolution spectroscopy is required to understand and analyze the physics of such giant stellar flares. The potential of XEUS to perform such studies is illustrated by simulations of the nearby RS CVn system AR Lac (G2 IV + K0 IV). The 100 s simulations shown in Fig. 7 show parts of a rich line-dominated spectrum. The large area of XEUS means that a sufficient number of counts is obtained so that the temperature, density, chemical abundance and velocity distribution of the emitting plasma can be measured on very short timescales. This will allow the study of the evolution of these basic physical parameters during typical stellar flares with an accuracy only previously achievable with solar flares.
Acknowledgements. We thank the XEUS Steering Committee (M. Turner, J. Bleeker, G. Hasinger, H. Inoue, G. Palumbo, T. Peacock and J. Trümper) and the ESA ISS and XMM project teams for their support.
# Re-entrant spin susceptibility of a superconducting grain
## Abstract
We study the spin susceptibility $`\chi `$ of a small, isolated superconducting grain. Due to the interplay between parity effects and pairing correlations, the dependence of $`\chi `$ on temperature $`T`$ is qualitatively different from the standard BCS result valid in the bulk limit. If the number of electrons on the grain is odd, $`\chi `$ shows a re-entrant behavior as a function of temperature. This behavior persists even in the case of ultrasmall grains where the mean level spacing is much larger than the BCS gap. If the number of electrons is even, $`\chi (T)`$ is exponentially small at low temperatures.
By now it is well-known that the properties of an isolated, mesoscopic superconducting grain are quite different from those of a bulk sample. First of all, since such a grain carries a fixed number, $`N`$, of electrons, its behavior depends strongly on whether $`N`$ is even or odd. Second, fluctuation effects become important as the size of the grain decreases. The interplay between parity and fluctuation effects crucially depends on the ratio $`\delta /\mathrm{\Delta }_0`$ of two characteristic energies: the mean level spacing $`\delta `$ and the bulk superconducting gap $`\mathrm{\Delta }_0`$. As long as the grain is not too small, $`\delta \ll \mathrm{\Delta }_0`$, the fluctuation region $`\mathrm{\Delta }T`$ around the critical temperature $`T_c`$ is narrow, $`\mathrm{\Delta }T/T_c\sim \sqrt{\delta /\mathrm{\Delta }_0}\ll 1`$, and the mean field description of superconductivity is appropriate. Parity effects appear at temperatures lower than a crossover temperature $`T_{\mathrm{eff}}\simeq \mathrm{\Delta }_0/\mathrm{ln}\sqrt{8\pi \mathrm{\Delta }_0^2/\delta ^2}`$ which, in the experiments, is typically of the order of 10–30% of $`T_c`$. The dependence of $`T_{\mathrm{eff}}`$ on $`\mathrm{\Delta }_0`$ signals that the even-odd asymmetry is a collective effect due to pairing correlations. As the size of the grain is decreased, fluctuations start to smear the superconducting transition. The finite level spacing suppresses the BCS gap in a parity-dependent way. When $`\delta `$ becomes of the order of $`\mathrm{\Delta }_0`$, $`\mathrm{\Delta }T\sim T_c`$ and the BCS description of superconductivity breaks down even at zero temperature. The regime $`\delta \gg \mathrm{\Delta }_0`$ is dominated by strong pairing fluctuations.
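The crossover scale just quoted can be made concrete with a small numerical sketch (the helper below is ours; it assumes the BCS relation $`T_c\simeq \mathrm{\Delta }_0/1.76`$ and a few illustrative values of $`\mathrm{\Delta }_0/\delta `$):

```python
import math

# T_eff ~ Delta0 / ln sqrt(8 pi Delta0^2 / delta^2), expressed as a
# fraction of T_c ~ Delta0 / 1.76 (illustrative sketch, not from the paper).
def t_eff_over_tc(delta0_over_delta):
    log_term = math.log(math.sqrt(8.0 * math.pi * delta0_over_delta**2))
    return 1.76 / log_term

for ratio in (1e2, 1e3, 1e4):  # assumed illustrative Delta0/delta ratios
    print(f"Delta0/delta = {ratio:.0e}: T_eff/T_c = {t_eff_over_tc(ratio):.2f}")
```

Because the ratio enters only logarithmically, $`T_{\mathrm{eff}}/T_c`$ stays in the 10–30% range over a wide span of grain sizes.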
The present-day interest in ultrasmall superconducting grains was triggered by the experiments of Ralph, Black and Tinkham, who were able to contact a single, nanometer-sized Al grain with current and voltage probes. They obtained tunneling spectra that revealed the presence of a parity-dependent spectroscopic gap, larger than the average level spacing, which could be driven to zero by an applied magnetic field. In this Letter we propose to measure the temperature dependence of a thermodynamic quantity – the spin susceptibility $`\chi `$ – as a means to detect both parity effects and pairing correlations. As we will show below, pairing correlations give rise to a specific temperature dependence of thermodynamic quantities. This enables a more quantitative investigation of fluctuation effects.
Spin paramagnetism of small particles has been considered in the past, and very recently parity effects in the susceptibility were measured for an ensemble of small, normal metallic grains. Spin susceptibility is very sensitive to BCS pairing as well. Yosida showed that, due to the opening of the superconducting gap, $`\chi `$ vanishes at zero temperature. We will show that the combined effect of parity and pairing introduces qualitatively new features in the temperature dependence of $`\chi `$. Most interestingly, these effects might be observed even in the regime $`\delta \gg \mathrm{\Delta }_0`$. The results of our work are summarized in Figs. 1–3, where we plot $`\chi `$ as a function of temperature $`T`$ for odd and even parity. In particular we want to emphasize that the odd susceptibility shows a re-entrant behavior as a function of $`T`$ for any value of the ratio $`\delta /\mathrm{\Delta }_0`$. This re-entrance is absent in normal metallic grains; it is a genuine feature of the interplay between pairing correlations and parity effects.
The BCS pairing Hamiltonian for a small grain can be written as
$$\mathcal{H}=\sum _{n,\sigma =\pm }(ϵ_n-\sigma \mu _BH)c_{n,\sigma }^{\dagger }c_{n,\sigma }-\lambda \delta \sum _{m,n}B_m^{\dagger }B_n,$$
(1)
where $`B_m^{\dagger }=c_{m,+}^{\dagger }c_{m,-}^{\dagger }`$. The indices $`m`$, $`n`$ label the single particle energy levels with energy $`ϵ_m`$ and annihilation operator $`c_{m,\sigma }`$. The quantum number $`\sigma =\pm `$ labels time-reversed equally spaced states (with an average spacing $`\delta \sim 1/\nu _0`$, where $`\nu _0`$ is the density of states at the Fermi energy). The external magnetic field $`H`$ couples to the electrons via the Zeeman term; $`\mu _B`$ is the Bohr magneton. We put the $`g`$-factor equal to two, ignoring any spin-orbit effects (see Ref. ). At the low magnetic fields of interest here, we can neglect the orbital contribution to the magnetic energy, as it is smaller than the Zeeman energy by a factor $`(k_Fr)(Hr^2/\mathrm{\Phi }_0)`$ ($`r`$ is the size of the grain and $`\mathrm{\Phi }_0`$ the flux quantum). Finally, $`\lambda `$ is the dimensionless BCS coupling constant. Since the Hamiltonian contains only pairing terms, an electron in a singly occupied level cannot interact with the other electrons.
The spin susceptibility of a grain with an even (e) or an odd (o) number $`N`$ of electrons is defined as
$$\chi _{e/o}(T)=-\frac{\partial ^2\mathcal{F}_{e/o}(T,H)}{\partial H^2}\bigg|_{H=0},$$
(2)
where $`\mathcal{F}_{e/o}=-T\mathrm{ln}Z_{e/o}`$ is the free energy of the grain and the partition function $`Z(T,N)`$ should be evaluated in the canonical ensemble. We will perform the calculation with the help of a parity projection technique and by means of exact canonical methods based on Richardson’s solution. The grand partition function reads
$$Z_{e/o}(T,\mu )=\frac{1}{2}\sum _{N=0}^{\mathrm{\infty }}e^{\mu N/T}\left[1\pm e^{i\pi N}\right]Z(T,N)\equiv \frac{1}{2}\left(Z_+\pm Z_{-}\right).$$
(3)
The partition function $`Z_+`$ is the usual grand partition function at temperature $`T`$ and chemical potential $`\mu _+=\mu `$. The grand partition function $`Z_{-}`$ describes an auxiliary ensemble at temperature $`T`$ and chemical potential $`\mu _{}=\mu +i\pi T`$; it is a formal tool, necessary to include parity effects. The chemical potential $`\mu `$ will be placed between the topmost occupied level and the lowest unoccupied level in the even case, while it will be at the singly occupied level in the odd case. Since we are interested in the evaluation of fluctuation effects, it is convenient to express the grand partition functions $`Z_\pm `$ using the path integral formulation of superconductivity,
$$Z_\pm =Z_\pm ^0\,\frac{\int 𝒟^2\mathrm{\Delta }\,\mathrm{exp}\left\{\int _0^\beta d\tau \left[\mathrm{Tr}\mathrm{ln}(1-\widehat{G}_\pm ^0\widehat{\mathrm{\Delta }})-\frac{|\mathrm{\Delta }|^2}{\lambda \delta }\right]\right\}}{\int 𝒟^2\mathrm{\Delta }\,\mathrm{exp}\left\{-\int _0^\beta d\tau \,\frac{|\mathrm{\Delta }|^2}{\lambda \delta }\right\}}.$$
(4)
Here, $`\beta =1/T`$ and $`Z_\pm ^0`$ is the partition function for non-interacting electrons. The matrix Green function $`\widehat{G}_\pm ^0`$ is given by $`\widehat{G}_\pm ^{(0)}(ϵ_n,\omega _\nu )=\left[(i\omega _\nu +\mu _BH)\sigma ^{(0)}-(ϵ_n-\mu _\pm )\sigma ^{(z)}\right]^{-1},`$ where $`\omega _\nu `$ is a fermionic Matsubara frequency, $`\sigma ^{(i)}`$ ($`i=x,y,z`$) are the Pauli matrices, and $`\sigma ^{(0)}`$ is the identity. Finally, the matrix $`\widehat{\mathrm{\Delta }}`$ is given by $`\widehat{\mathrm{\Delta }}=(\mathrm{\Delta }/2)(\sigma ^x+i\sigma ^y)+\mathrm{h}.\mathrm{c}.`$ A direct calculation of the partition function (4) is impossible in general. Below, we first discuss two limiting cases which are tractable analytically: $`\delta /\mathrm{\Delta }_0\ll 1`$ and $`\delta /\mathrm{\Delta }_0\gg 1`$. Then we present the complete temperature dependence of the spin susceptibility, evaluating Eq. (4) numerically for arbitrary values of $`\delta /\mathrm{\Delta }_0`$, with the help of the static path approximation.
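Before turning to these limits, the parity projection of Eq. (3) can be checked by brute force for non-interacting electrons on a few levels; the toy construction below is ours and is not part of the paper’s calculation:

```python
import cmath
import itertools
import math

# Toy check of Eq. (3): for free electrons on a few doubly degenerate
# levels, the grand sums Z_+ (chemical potential mu) and Z_- (chemical
# potential mu + i*pi*T) combine into the even-N and odd-N canonical
# sums via (Z_+ +/- Z_-)/2.  All parameter values are illustrative.
T, mu = 0.7, 0.15
levels = [-1.0, 0.0, 1.0]                      # single-particle energies
states = [(eps, s) for eps in levels for s in (+1, -1)]

def grand_z(mu_c):
    """Grand partition function as a product over single-particle states."""
    z = 1.0 + 0.0j
    for eps, _s in states:
        z *= 1.0 + cmath.exp(-(eps - mu_c) / T)
    return z

z_plus = grand_z(mu)
z_minus = grand_z(mu + 1j * math.pi * T)       # auxiliary parity ensemble

# Brute-force sum over all 2^6 occupation patterns, split by parity of N.
z_even = z_odd = 0.0
for occ in itertools.product((0, 1), repeat=len(states)):
    n_tot = sum(occ)
    energy = sum(o * eps for o, (eps, _s) in zip(occ, states))
    weight = math.exp(-(energy - mu * n_tot) / T)
    if n_tot % 2 == 0:
        z_even += weight
    else:
        z_odd += weight

print(z_even, ((z_plus + z_minus) / 2).real)
print(z_odd, ((z_plus - z_minus) / 2).real)
```

The factor $`e^{i\pi N}=(-1)^N`$ makes each single-particle factor of $`Z_{-}`$ equal to $`1-e^{-(ϵ-\mu )/T}`$, so $`Z_{-}`$ comes out real and the two projections agree with the direct canonical sums.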
Large grains ($`\mathrm{\Delta }_0\gg \delta `$). In this limit it is sufficient to evaluate the partition function in a saddle point approximation, since fluctuations will not contribute significantly. As a result we find
$$\chi _{e/o}=\frac{\mu _B^2}{2T}\sum _n\frac{Z_+\mathrm{cosh}^{-2}\frac{E_{+,n}}{2T}\mp Z_{-}\mathrm{sinh}^{-2}\frac{E_{-,n}}{2T}}{Z_+\pm Z_{-}},$$
(5)
where $`E_{\pm ,n}=\sqrt{ϵ_n^2+\mathrm{\Delta }_\pm ^2}`$. The saddle point values of $`\mathrm{\Delta }_\pm `$ are found from the equations
$$\frac{1}{\lambda }=\sum _{n,\sigma }\frac{\delta }{4E_{\pm ,n}}\text{th}^{\pm 1}\left(\frac{E_{\pm ,n}-\sigma \mu _BH}{2T}\right).$$
(6)
The partition functions for the two ensembles are
$`Z_\pm =\mathrm{exp}\left\{\sum _{n,\sigma }\left[\mathrm{ln}2\left\{\begin{array}{c}\text{ch}\\ \text{sh}\end{array}\right\}\frac{E_{\pm ,n}^\sigma }{2T}-\frac{\xi _n}{2T}\right]-\frac{\mathrm{\Delta }_\pm ^2}{\lambda \delta T}\right\},`$ (9)
where the upper (lower) entry refers to $`Z_+`$ ($`Z_{-}`$), $`E_{\pm ,n}^\sigma =E_{\pm ,n}-\sigma \mu _BH`$ and $`\xi _n=ϵ_n-\mu `$.
At low temperatures $`T\ll \mathrm{\Delta }_0`$, the ratio $`Z_{-}/Z_+`$ can be calculated easily; one finds $`Z_{-}/Z_+\simeq 1-\sqrt{8\pi T\mathrm{\Delta }_0/\delta ^2}\mathrm{exp}(-\beta \mathrm{\Delta }_0)`$. Parity effects are important if this ratio is close to unity, i.e. at temperatures $`T<T_{\mathrm{eff}}`$. At temperatures $`T_{\mathrm{eff}}\ll T\ll \mathrm{\Delta }_0`$, parity effects can be ignored and the spin susceptibility is found to decrease exponentially, as in the BCS case,
$$\chi _{e/o}\simeq \frac{2\mu _B^2}{\delta }\sqrt{\frac{2\pi \mathrm{\Delta }_0}{T}}e^{-\beta \mathrm{\Delta }_0}.$$
(10)
For $`T\ll T_{\mathrm{eff}}`$, Eq. (5) can be approximated as
$$\chi _e\simeq \frac{8\pi \mu _B^2\mathrm{\Delta }_0}{\delta ^2}e^{-2\beta \mathrm{\Delta }_0};\qquad \chi _o\simeq \frac{\mu _B^2}{T}.$$
We see that $`\chi _e`$ remains exponentially small, like in the BCS case (10), but with an exponent $`2\beta \mathrm{\Delta }_0`$ rather than $`\beta \mathrm{\Delta }_0`$. This reflects the fact that excitations are actually created in pairs. In odd grains, the unpaired spin gives rise to an extra paramagnetic (Curie-like) contribution to the spin susceptibility. As a result $`\chi _o`$ will show a re-entrant effect at low temperatures (see Fig. 1). Although the re-entrant behavior is essentially a single electron effect, we stress that it can be detected experimentally using granular systems with many well-separated grains (to avoid collective effects due to tunneling). Such systems contain even grains as well, but their susceptibility is exponentially small at the temperatures of interest; thus their contribution to the response of the system will be negligible.
Ultrasmall grains ($`\mathrm{\Delta }_0\ll \delta `$). A reduction of the grain size leads to a suppression of the gap $`\mathrm{\Delta }`$. For ultrasmall grains with $`\mathrm{\Delta }_0\ll \delta `$, the mean field approximation gives $`\mathrm{\Delta }=0`$: the grain behaves as a normal metal. The non-interacting, parity-dependent spin susceptibility can be found from Eq. (5); see the topmost curves in Figs. 2 and 3. Note in particular the monotonic dependence of $`\chi _o`$ on $`T`$. The temperature scale at which parity effects appear is set by the average level spacing. If $`T\gg \delta `$, parity effects are exponentially small and $`\chi _{e/o}(T)\simeq \chi _P[1-(2T/\delta )\mathrm{exp}(-\pi ^2T/\delta )]`$. In the opposite limit $`T\ll \delta `$, $`\chi _e`$ is exponentially small, $`\chi _e(T)\simeq (8\mu _B^2/T)e^{-\beta \delta }`$, as we need to excite an electron out of the topmost, doubly occupied single particle level to magnetize the grain. For an odd grain, $`\chi _o(T)\simeq \mu _B^2/T`$ at $`T\ll \delta `$: the topmost level is occupied by a single electron that gives a Curie-like contribution.
The saddle point approach entirely ignores the fact that the fluctuation region $`\mathrm{\Delta }T`$ around $`T_c`$ grows as the size of the grains is reduced. Due to the presence of fluctuations, the behavior of small grains will differ in a distinct way from that of normal metallic grains. In the limit $`T\gg \delta `$, both fluctuation and parity effects are small; it therefore suffices to consider the fluctuation correction $`\delta \chi _{\mathrm{fluc}}`$ to $`\chi _P`$, evaluating $`Z_+`$, Eq. (4), in Gaussian approximation. As a result, we find $`\delta \chi _{\mathrm{fluc}}/\chi _P\simeq -\delta /[2T\mathrm{ln}(T/\mathrm{\Delta }_0)]`$; hence $`\chi _{e/o}(T)\simeq \chi _P[1-(2T/\delta )\mathrm{exp}(-\pi ^2T/\delta )-\delta /[2T\mathrm{ln}(T/\mathrm{\Delta }_0)]]`$. Superconducting correlations suppress the susceptibility; due to its algebraic dependence on $`T`$ this suppression is stronger than the parity correction at temperatures $`T\gtrsim \delta \mathrm{ln}\mathrm{ln}(\delta /\mathrm{\Delta }_0)`$. In the opposite limit, $`T\ll \delta `$, fluctuations are strong and the Gaussian approximation fails. However, the susceptibility can still be obtained analytically by considering a few levels close to the Fermi energy with a renormalized pairing interaction $`\stackrel{~}{\lambda }=1/\mathrm{ln}(\delta /\mathrm{\Delta }_0)`$. Consider first a grain with an even number of electrons. It costs an energy $`\delta +\delta /\mathrm{ln}(\delta /\mathrm{\Delta }_0)`$ to excite an electron from the topmost, doubly occupied level to the lowest unoccupied level. Correspondingly, the leading temperature dependence of the spin susceptibility is
$$\chi _e(T)\simeq 8\frac{\mu _B^2}{T}e^{-\beta \delta (1+\mathrm{ln}^{-1}(\delta /\mathrm{\Delta }_0))}+𝒪(e^{-2\beta \delta }).$$
(11)
The even susceptibility is exponentially small, like in the case of a normal metallic grain, but with an exponent $`\beta \delta (1+\stackrel{~}{\lambda })`$, rather than $`\beta \delta `$. Similarly, we find the spin susceptibility for a grain with an odd number of electrons
$$\chi _o(T)\simeq \frac{\mu _B^2}{T}[1+8e^{-\beta \delta (2+\mathrm{ln}^{-1}(\delta /\mathrm{\Delta }_0))}].$$
(12)
The paramagnetic contribution from the single spin dominates at all temperatures below $`\delta `$. Compared to the case of a normal metallic grain, the odd susceptibility is non-monotonic: upon lowering the temperature, $`\chi _o`$ first decreases due to superconducting fluctuations; at temperatures $`T\lesssim \delta `$ a re-entrant behavior sets in which persists down to the lowest temperatures.
Re-entrant susceptibility. The various limiting cases discussed so far provide evidence for the appearance of an anomaly in the spin susceptibility $`\chi _o`$. For large grains ($`\mathrm{\Delta }\gg \delta `$) the mean field approximation, Eq. (5), leads to the re-entrant behavior shown in Fig. 1. We will show that this is a unique signature of pairing correlations which is present even in ultrasmall grains. To this end we study the complete temperature dependence of $`\chi _{e/o}`$ for arbitrary values of the ratio $`\delta /\mathrm{\Delta }_0`$.
The physics of the re-entrant susceptibility can be grasped by evaluating Eq. (4) in the static path approximation. This amounts to retaining only the static fluctuations (beyond the Gaussian approximation) in the path integral. In Figs. 2 and 3 we show the results of this calculation for the odd and the even cases, respectively. The re-entrance in the odd case is visible even in systems with a ratio $`\delta /\mathrm{\Delta }_0\simeq 50`$ (!) and provides the signature of the existence of pairing correlations in the ultrasmall regime. Frequency dependent fluctuations will further enhance the size of the re-entrant effect. The results are plotted for a system of $`N=200`$ electrons at half filling, and the BCS coupling is chosen to fix the ratio $`\delta /\mathrm{\Delta }`$ ($`\lambda \simeq 0.1`$–0.2). The merit of the static path approximation, combined with the analytic analysis in the limiting cases, is that it allows one to obtain a coherent quantitative physical picture over the whole temperature range.
As a final check of our results we computed $`\chi _o`$ using the exact solution of Ref. . The result is presented in Fig. 2 (dashed lines). As expected, the re-entrant effect is slightly larger ($`\sim 15`$%). In order to obtain this result we considered all the different states with excitation energy up to a cutoff $`\mathrm{\Lambda }\simeq 40\delta `$ for a system with $`N\simeq 100`$ electrons. In the inset we show the scaling analysis for different $`N`$ and different energy cutoffs. This analysis becomes more and more difficult upon increasing temperature because of the exponential increase in the number of excited states needed.
In this Letter we proposed the study of the spin susceptibility of a metallic grain as a very sensitive probe to detect superconducting correlations. For grains in the nanometer size regime the odd spin susceptibility is a unique signature of pairing. In grains with dimensions of the order of a few nanometers, such as those studied in Refs. , the re-entrance should be of the order of 10–20% of the Pauli value and it could be measured using the technique used in Ref. .
Acknowledgments. We thank I. Aleiner, I. Beloborodov, A. Larkin, and B. Mühlschlegel for useful discussions. We acknowledge the financial support of the European Community (Contract FMRX-CT-97-0143), SFB237 of the Deutsche Forschungsgemeinschaft and INFM-PRA-QTMD.
# Pulsar Death at an Advanced Age
## 1. Introduction
Radio emission from Rotation Powered Pulsars (RPPs) probably has its origin in the relativistic outflow of electron-positron pairs along the polar magnetic field lines of a dipole magnetic field frozen into the rotating neutron star (e.g., Arons 1992, Meszaros 1992). The evidence for dipole magnetic fields comes primarily from the electromagnetic theory of RPP spindown energy losses, which occur at the rate $`\dot{E}_R=k\mu ^2\mathrm{\Omega }_{*}^4/c^3=-I\mathrm{\Omega }_{*}\dot{\mathrm{\Omega }}_{*}`$ (Dyson 1971, Arons 1979, 1992). Here $`\mu `$ is the magnetic moment, $`\mathrm{\Omega }_{*}=2\pi /P`$, $`P`$ is the rotation period, and $`k`$ is a function of any other parameters of significance, with magnitude on the order of unity. In the vacuum theory (Deutsch 1955), $`k=(2/3)\mathrm{sin}^2i`$, with $`i`$ the angle between the magnetic moment and the angular velocity. Theoretical work on the torques due to conduction currents, stemming back to Goldreich and Julian (1969), coupled to the approximate independence of spindown torques from observationally estimated values of $`i`$ (Lyne and Manchester 1988), suggests that in reality $`k`$ does not substantially depend on $`i`$. In the subsequent discussion, I assume $`k=4/9`$, the average of the vacuum value over the sphere. Application of this EM energy loss rate to the observations of RPPs’ periods and period derivatives yields $`\mu \sim 10^{30}`$ cgs for “normal” RPPs, and $`\mu \sim 10^{27}`$ cgs for millisecond RPPs.
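Inverting the spindown-rate formula for $`\mu `$ reproduces the magnitudes just quoted; the sketch below is ours (cgs units, with illustrative rather than measured values of $`P`$ and $`\dot{P}`$):

```python
import math

# From k mu^2 Omega^4 / c^3 = 4 pi^2 I Pdot / P^3 it follows that
# mu = sqrt(I c^3 P Pdot / (4 pi^2 k)).  I = 1e45 g cm^2 and k = 4/9
# follow the text; the P, Pdot values below are assumed examples.
def magnetic_moment(P, Pdot, I=1e45, k=4.0 / 9.0):
    c = 3e10                                   # speed of light [cm/s]
    return math.sqrt(I * c**3 * P * Pdot / (4.0 * math.pi**2 * k))

mu_normal = magnetic_moment(1.0, 1e-15)        # a "normal" pulsar
mu_msp = magnetic_moment(1.6e-3, 1e-19)        # a millisecond pulsar
print(f"mu(normal) ~ {mu_normal:.1e} cgs, mu(msp) ~ {mu_msp:.1e} cgs")
```

A second-long pulsar with $`\dot{P}\sim 10^{15}`$ s/s indeed comes out near $`10^{30}`$ cgs, and a millisecond pulsar two to three orders of magnitude lower.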
The electromagnetic torque interpretation of pulsar spindown constrains only the exterior dipole moment of the magnetic field. However, Rankin (1990) and Kramer et al. (1998) have presented strong evidence in favor of a low altitude ($`r\sim R_{*}`$) dipole geometry for the site of the core component of pulsar radio emission. Arons (1993) gave evidence that spun up millisecond pulsars must have substantially dipolar fields at low altitude.
If electron-positron pair creation above the polar caps is important for radio emission, all observed pulsars must lie in the region of $`P\dot{P}`$ space where polar cap acceleration has sufficient vigor to lead to copious pair production. Yet, to date, all internally consistent theories of polar cap pair creation have required hypothesizing a large scale (e.g., quadrupole) component of the magnetic field with strength comparable to that of the dipole (Ruderman and Sutherland 1975, Arons and Scharlemann 1979, Barnard and Arons 1982, Gurevich and Istomin 1985). Such strong magnetic anomalies contradict the evidence in favor of an apparently dipolar low altitude geometry; the alteration of the magnetic geometry also ruins the internal consistency of many models’ electrodynamics.
Here I describe a low altitude polar cap acceleration theory which successfully associates pulsar “death” with the cessation of pair creation in an offset dipole low altitude magnetic field. The basic acceleration physics is that of a space charge limited relativistic particle beam accelerated along the field lines by the starvation electric field, as in the Arons and Scharlemann theory, but with the additional effect of inertial frame dragging, first pointed out by Muslimov and Tsygan (1990, 1992) and by Beskin (1990). If the dipole’s center is offset from the stellar center, the magnetic field at one pole becomes substantially stronger than it would be if the same magnetic dipole were star centered. The location of an individual pulsar’s pair creation death depends on the magnitude of the offset, thus yielding a “death valley” (Chen and Ruderman 1993) for the whole pulsar population. Finally, if thermal photon emission at temperature $`T\sim 10^5`$ K continues to great age ($`t>10^8`$ years), as is expected in neutron star models with late time heating due to friction between the crust and core, the theory with dipole offsets predicts and accounts for pulsars with very long periods and great age.
## 2. Polar Acceleration
Study of polar cap relativistic particle acceleration in the 1970’s led to the conclusion that acceleration of a space charge limited particle beam from the stellar surface with energy/particle high enough to emit magnetically convertible curvature gamma rays occurs because of curvature of the magnetic field (Scharlemann et al. 1978, Arons and Scharlemann 1979). In a curved $`B`$ field, matching of the beam charge density to the Goldreich-Julian density occurs only at the surface. Along field lines which curve toward the rotation axis (“favorably curved” field lines), the beam fails to short out the vacuum above the surface. Therefore, particles accelerate along $`B`$ through a potential drop $`\mathrm{\Delta }\mathrm{\Phi }_{\parallel }=\mathrm{\Delta }\mathrm{\Phi }_{SAF}\approx \mathrm{\Phi }_{\mathrm{pole}}(R_*/\rho _B)\approx 10^{-2}P^{-1/2}\mathrm{\Phi }_{\mathrm{pole}}.`$ The numerical value assumes field lines have dipolar radius of curvature $`\rho _B\sim \sqrt{R_*c/\mathrm{\Omega }_*}`$. Here $`\mathrm{\Phi }_{\mathrm{pole}}=\mathrm{\Omega }_*^2\mu /c^2=1.09\times 10^{13}(I_{45}/k)^{1/2}(\dot{P}_{15}/P^3)^{1/2}`$ Volts, with $`\dot{P}_{15}=\dot{P}/10^{-15}\mathrm{s}/\mathrm{s}`$ and $`I_{45}=I/10^{45}`$ g-cm<sup>2</sup>. Particles drop through the potential $`\mathrm{\Delta }\mathrm{\Phi }_{SAF}`$ over a length $`L_{\parallel }\sim R_*`$.
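A short numerical sketch of the two voltages just defined (added here for illustration; the Crab-like parameter values are hypothetical round numbers, and the prefactors are taken directly from the formulas above):

```python
import math

# Polar cap voltage: Phi_pole = 1.09e13 * (I45/k)^(1/2) * (Pdot15/P^3)^(1/2) Volts,
# and the Arons-Scharlemann drop: Delta_Phi_SAF ~ 1e-2 * P^(-1/2) * Phi_pole.
def phi_pole(P, Pdot15, I45=1.0, k=4.0/9.0):
    """Polar cap voltage in Volts; P in seconds, Pdot15 = Pdot/1e-15."""
    return 1.09e13 * math.sqrt(I45 / k) * math.sqrt(Pdot15 / P**3)

def delta_phi_saf(P, Pdot15):
    """Favorably-curved-field-line potential drop, in Volts."""
    return 1e-2 * P**-0.5 * phi_pole(P, Pdot15)

# Crab-like parameters: P = 0.033 s, Pdot15 = 420.
print(phi_pole(0.033, 420.0))       # several times 1e16 V
print(delta_phi_saf(0.033, 420.0))  # a few times 1e15 V
```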
Curvature gamma rays have typical energy $`\epsilon _c\sim (\hbar c/\rho _B)(e\mathrm{\Delta }\mathrm{\Phi }_{\parallel }/mc^2)^3\propto \mathrm{\Phi }_{\mathrm{pole}}^3/\rho _B^4`$, while the optical depth for pair creation, due to one photon conversion of gamma rays emitted by electrons (or positrons), can be shown to be $`\tau =\mathrm{\Lambda }\mathrm{exp}[-a(mc^2/\epsilon _c)(B_q/B_*)(\rho _B/R_*)]`$ (Arons and Scharlemann 1979, Luo 1996, Bjornsson 1996), where $`a`$ is a pure number (typically $`\sim 30`$) and $`\mathrm{\Lambda }`$ is a combination of the basic parameters which is quite large ($`\mathrm{ln}\mathrm{\Lambda }\sim 20`$). A reasonable theoretical definition of the death line is $`\tau =1`$. Using $`B_*=2(\mathrm{\Phi }_{\mathrm{pole}}/R_*)(c/\mathrm{\Omega }_*R_*)^2`$, $`\mathrm{\Delta }\mathrm{\Phi }_{SAF}`$ and setting $`\tau `$ equal to unity yields the death line, expressed as $`\mathrm{\Phi }_{\mathrm{death}}(P)`$ such that stars with $`\mathrm{\Phi }_{\mathrm{pole}}<\mathrm{\Phi }_{\mathrm{death}}`$ do not make pairs. This death line appears as the dashed line in Figure 1. This figure shows clearly that the large dynamic range in $`\mathrm{\Phi }_{pole},P`$ space made available by the cataloging of millisecond pulsars falsifies this theory, even if one invokes non-dipolar radii of curvature to move the position of the death line vertically in the diagram - the scaling with period flatly disagrees with the shape of the boundary of pulsar radio emission in the $`\mathrm{\Phi }_{pole},P`$ diagram. These results imply either that something else governs the low altitude acceleration which leads to pair creation, or that pair creation is not important to radio emission.
Muslimov and Tsygan (1990, 1992) uncovered a previously overlooked effect on the acceleration of the non-neutral beam from the stellar surface. Stellar rotation drags the inertial frame into rotation, at the angular velocity $`\omega _{LT}=(2GI/R_*^3c^2)\mathrm{\Omega }_*(R_*/r)^3`$. Therefore, the electric field required to bring a charged particle into corotation is $`𝐄_{co}=-(1/c)[(𝛀_*-\omega _{LT})\times 𝐫]\times 𝐁`$; the magnetic field rotates with respect to local inertial space, not inertial space at infinity. The charge density required to support this local corotation electric field is $`\eta _R=-[(𝛀_*-\omega _{LT})\cdot 𝐁]/2\pi c=-[𝛀_*\cdot 𝐁/2\pi c][1-\kappa _g(R_*/r)^3],`$ where $`\kappa _g=2GI/R_*^3c^2=0.17(I_{45}/R_{10}^3)`$. Relativistic space charge limited flow from the surface has a beam charge density $`\eta _b=-(𝛀_*\cdot 𝐁_*/2\pi c)(1-\kappa _g)(B/B_*)`$. Above the surface, this charge density is too small to short out $`E_{\parallel }`$ on all polar field lines, not just the favorably curved part of a polar flux tube, thus providing a theoretical basis for polar cap acceleration models to be in accord with the observed rough symmetry of radio emission with respect to the magnetic axis (Lyne and Manchester 1988). One can graphically describe this general relativistic origin of electrical starvation as the consequence of the field lines rotating faster with respect to inertial space as the radius increases, at the angular speed $`\mathrm{\Omega }_*-\omega _{LT}(r)=\mathrm{\Omega }_*[1-\kappa _g(R_*/r)^3]`$. The constraint of relativistic flow along $`B`$ allows the beam to provide only a charge density sufficient to support corotation at the angular speed $`\mathrm{\Omega }_*(1-\kappa _g)`$.
The difference not surprisingly leads to an accelerating potential drop $`\mathrm{\Delta }\mathrm{\Phi }_{\parallel }\approx \kappa _g\mathrm{\Phi }_{pole}[1-(R_*/r)^3].`$ For normal ($`P\sim 1`$ sec) pulsars with dipole fields, the effect of dragging of inertial frames on the beam’s acceleration yields curvature gamma ray energies 1000 times greater than occur in the Arons and Scharlemann pair creation theory; for MSPs, the theories yield comparable results, although of course the symmetry of the beam with respect to the magnetic axis differs.
## 3. Death Lines and Death Valley
When curvature emission is the only source of gamma rays, the death line for a star centered dipole is shown in Figure 1. Dragging of inertial frames clearly improves the agreement between the boundary of pair activity in the $`\mathrm{\Phi }`$–$`P`$ diagram and the region where pulsars occur, but the discrepancy is still too large - something else is missing. If the field geometry must be locally dipolar at low altitude, then the only ingredients still not included are 1) offset of the dipole from the stellar center and 2) additional gamma ray emission and absorption processes. The simplest dipole offset has the magnetic field of a point dipole, with the center of the dipole displaced from the stellar center by an offset vector parallel to $`\mu `$. This has the effect of increasing the magnetic field at one pole to strength $`B_*=2\mu /(R_*-\delta )^3`$, with a resulting drastic increase in the gamma ray opacity, while leaving the accelerating potential unaltered. The results with curvature radiation as the only gamma ray emission process show that dipole offsets do allow such a curvature radiation theory of pulsar death to survive the challenge of modern observations, although at the price of displacements of the dipole center from the stellar center comparable to moving the dipole’s center to the base of the crust. Such a model still does not account for PSR J2144-3933 (Young et al. 1999), however.
Curvature emission is not the only means of converting beam energy to gamma rays. ROSAT observations have revealed the long sought thermal X-rays from neutron star surfaces (Becker and Trümper 1997). Resonant Compton scattering creates magnetically convertible gamma rays at a spatial rate $`(dN_\gamma /ds)_{rC}\propto T/\mathrm{\Gamma }^2`$ (e.g., Luo 1996), where $`T`$ is the temperature of the cooling neutron star (polar cap heating is unimportant in death valley) and $`\mathrm{\Gamma }=e\mathrm{\Phi }/m_\pm c^2`$ is the Lorentz factor of an electron or positron in the beam. Compton scattering thus can become a significant source of gamma rays in stars with small accelerating potentials and low overall voltage.
This expectation is correct, if internal heating (e.g., Umeda et al. 1993) keeps the surface temperature above $`10^5`$ K at spindown ages in excess of $`10^{7.5}`$ years. In this case, resonant Compton scattering of thermal photons by a polar electron beam does extend death valley to include all the observed pulsars, with offsets required in the lowest voltage RPPs of order 60% to 70%, as is shown in the right panel of Figure 1. Note that this theory predicted (Arons 1998) the existence of RPPs with large periods ($`P>5`$ seconds) and unusually low voltage $`\mathrm{\Phi }_{pole}<10^{12.5}`$ Volts \[great age, $`t=170(10^{12}\mathrm{V}/\mathrm{\Phi }_{pole})^2(10\mathrm{s}/P)^2`$ Myr\].
Indeed, recent observations (Young et al. 1999) have found a $`P=8.51`$ second pulsar, PSR J2144-3933, which falls below the death line for a star centered dipole (even including Compton scattering), but lies comfortably within death valley, when both Compton scattering and dipole offsets are included.
## 4. Conclusion
I have shown that polar pair creation based on acceleration of a steadily flowing, space charge limited non-neutral beam in a locally dipolar magnetic geometry at low altitude is consistent with pulsar radio emission throughout the $`P\dot{P}`$ diagram, provided 1) the effect of dragging of inertial frames is included in estimates of the starvation electric field; 2) the dipole center is strongly offset from the stellar center in older stars, perhaps as much as $`0.7`$–$`0.8R_*`$; and 3) inverse Compton emission of thermal photons from a neutron star cooling slower than exponentially at ages in excess of $`10^6`$ years plays an important role in the emission of magnetically convertible gamma rays. Stars not deep in death valley have copious pair outflow on all polar field lines. Such outflow shorts out the “outer gaps” proposed as the dynamical basis for gamma ray emission in the outer magnetosphere (Cheng, Ho and Ruderman 1986, Romani 1996). Either polar cap gamma ray emission (e.g. Zhang and Harding 2000) or possible outer magnetosphere emission from a dissipative return current layer (Arons 1981) remains a candidate for the gamma ray emission observed in RPPs.
The development of new diagnostics of the low altitude magnetic field, gamma ray observations sensitive to low altitude emission, and optical and UV observations of thermal emission from old, nearby RPPs will eventually provide tests of these ideas.
## 5. Acknowledgments
My research on pulsars is supported in part by NSF grant AST 9528271 and NASA grant NAG 5-3073, and in part by the generosity of California’s taxpayers.
## References
Arons, J. 1979, Space Sci. Rev., 24, 437
. 1981, in Proc. IAU Symp. No. 95 ‘Pulsars’, W. Sieber and R. Wielebinski, eds. (Dordrecht: Reidel), 69
. 1992, Proc. IAU Colloq. No. 128, ‘The Magnetospheric Structure and Emission Mechanisms of Radio Pulsars’, T.H. Hankins, J.M. Rankin and J. A. Gil, eds. (Zielona Gora: Pedagogical University Press), 56
. 1993, Ap.J., 408, 160
. 1998, in ‘Neutron Stars and Pulsars: Thirty Years After the Discovery’, N. Shibazaki et al., eds (Tokyo: Universal Academy Press), 339 (astro-ph/9802198)
Arons, J., and Scharlemann, E.T. 1979, Ap.J., 231, 854
Beskin, V.S. 1990, Pis’ma Ast. Zh., 16, 665 (Sov. Ast. - Letters, 16)
Barnard, J.J., and Arons, J. 1982, Ap.J., 254, 713
Becker, W., and Trümper, J. 1997, A&A, 326, 682
Bjornsson, C.-I. 1996, Ap.J., 471, 321
Chen, K., and Ruderman, M.A. 1993, Ap.J., 402, 264
Cheng, K.S., Ho., C., and Ruderman, M. 1986, Ap.J., 300, 500
Deutsch, A.J. 1955, Ann. Ap., 18, 1
Dyson, F.J. 1971, “Neutron Stars and Pulsars: Fermi Lectures 1970” (Rome: Academia Nazionale dei Lincei), 25-26
Goldreich, P. and Julian, W. 1969, Ap.J., 157, 869
Gurevich, A.V., and Istomin, Ya.N. 1985, Zh.Eksp.Teor.Fiz., 89, 3 (Soviet Physics - JETP, 62, 1)
Kramer, M., et al. 1998, Ap.J., 501, 270
Luo, Q. 1996, Ap.J., 468, 338
Lyne, A.G., and Manchester, R.N. 1988, MNRAS, 234, 477
Meszaros, P. 1992, “High Energy Radiation from Magnetized Neutron Stars” (Chicago: University of Chicago Press)
Muslimov, A., and Tsygan, A.I. 1990, Ast. Zh., 67, 263 (Soviet Ast., 34, 133)
. 1992, MNRAS, 255, 61
Rankin, J.M. 1990, Ap.J., 352, 247
Romani, R. 1996, Ap.J., 470, 469
Ruderman, M.A., and Sutherland, P.G. 1975, Ap.J., 196, 51
Scharlemann, E.T., Arons, J., and Fawley, W.M. 1978, Ap.J., 222, 297
Umeda, H., et al. 1993, Ap.J., 408, 186
Young, M. et al. 1999, Nature, 400, 848
Zhang, B., and Harding, A.K. 2000, Ap.J., in press (astro-ph/9911028) |
## Introduction
The intricate structure of the finite-dimensional representations of quantum affine algebras has been extensively studied from different points of view, see, e.g., \[CP1, CP2, CP3, CP4, GV, V, KS, AK, FR2\]. While a lot of progress has been made, many basic questions remained unanswered. In order to tackle those questions, E. Frenkel and N. Reshetikhin introduced in \[FR2\] a theory of $`q`$–characters for these representations. One of the motivations was the theory of deformed $`𝒲`$–algebras developed in \[FR1\]: the representation ring of a quantum affine algebra should be viewed as a deformed $`𝒲`$–algebra, while the $`q`$–character homomorphism should be viewed as its free field realization. The study of $`q`$–characters in \[FR2\] was based on two main conjectures. One of the goals of the present paper is to prove these conjectures and to derive some of their corollaries.
Let us describe our results in more detail. Let $`𝔤`$ be a simple Lie algebra, $`\widehat{𝔤}`$ be the corresponding non-twisted affine Kac-Moody algebra, and $`U_q\widehat{𝔤}`$ be its quantized universal enveloping algebra (quantum affine algebra for short). Denote by $`I`$ the set of vertices of the Dynkin diagram of $`𝔤`$. Let $`\mathrm{Rep}U_q\widehat{𝔤}`$ be the Grothendieck ring of $`U_q\widehat{𝔤}`$. The $`q`$–character homomorphism is an injective homomorphism $`\chi _q`$ from $`\mathrm{Rep}U_q\widehat{𝔤}`$ to the ring of Laurent polynomials in infinitely many variables $`𝒴=\mathbb{Z}[Y_{i,a}^{\pm 1}]_{i\in I;a\in \mathbb{C}^\times }`$. This homomorphism should be viewed as a $`q`$–analogue of the ordinary character homomorphism.
Indeed, let $`G`$ be the connected simply-connected algebraic group corresponding to $`𝔤`$, and let $`T`$ be its maximal torus. We have a homomorphism $`\chi :\mathrm{Rep}G\to \mathrm{Fun}T`$ (where $`\mathrm{Fun}T`$ stands for the ring of regular functions on $`T`$), defined by the formula $`(\chi (V))(t)=\mathrm{Tr}_Vt`$, for all $`t\in T`$. Upon the identification of $`\mathrm{Rep}G`$ with $`\mathrm{Rep}U_q𝔤`$ and of $`\mathrm{Fun}T`$ with $`\mathbb{Z}[y_i^{\pm 1}]_{i\in I}`$, where $`y_i`$ is the function on $`T`$ corresponding to the fundamental weight $`\omega _i`$, we obtain a homomorphism $`\chi :\mathrm{Rep}U_q𝔤\to \mathbb{Z}[y_i^{\pm 1}]_{i\in I}`$. One of the properties of $`\chi _q`$ is that if we replace each $`Y_{i,a}^{\pm 1}`$ by $`y_i^{\pm 1}`$ in $`\chi _q(V)`$, where $`V`$ is a $`U_q\widehat{𝔤}`$–module, then we obtain $`\chi (V|_{U_q𝔤})`$.
The two conjectures from \[FR2\] that we prove in this paper may be viewed as $`q`$–analogues of the well-known properties of the ordinary characters. The first of them, Theorem 4.1, is the analogue of the statement that the character of any irreducible $`U_q𝔤`$–module $`W`$ equals the sum of terms which correspond to the weights of the form $`\lambda -\sum _{i\in I}n_i\alpha _i,n_i\in \mathbb{Z}_+`$, where $`\lambda =\sum _{i\in I}l_i\omega _i,l_i\in \mathbb{Z}_+`$, is the highest weight of $`V`$, and $`\alpha _i,i\in I`$, are the simple roots. In other words, we have: $`\chi (W)=m_+(1+\sum _pM_p)`$, where $`m_+=\prod _{i\in I}y_i^{l_i}`$, and each $`M_p`$ is a product of factors $`a_j^{-1},j\in I`$, corresponding to the negative simple roots. Theorem 4.1 says that for any irreducible $`U_q\widehat{𝔤}`$–module $`V`$, $`\chi _q(V)=m_+(1+\sum _pM_p)`$, where $`m_+`$ is a monomial in $`Y_{i,a},i\in I,a\in \mathbb{C}^\times `$, with positive powers only (the highest weight monomial), and each $`M_p`$ is a product of factors $`A_{j,c}^{-1},j\in I,c\in \mathbb{C}^\times `$, which are the $`q`$–analogues of the negative simple roots of $`𝔤`$.
The second statement, Theorem 5.1, gives an explicit description of the image of the $`q`$–character homomorphism $`\chi _q`$. This is a generalization of the well-known fact that the image of the ordinary character homomorphism $`\chi `$ is equal to the subring of invariants of $`\mathbb{Z}[y_i^{\pm 1}]_{i\in I}`$ under the action of the Weyl group $`W`$ of $`𝔤`$.
Recall that the Weyl group is generated by the simple reflections $`s_i,i\in I`$. The subring of invariants of $`s_i`$ in $`\mathbb{Z}[y_i^{\pm 1}]_{i\in I}`$ is equal to
$$K_i=\mathbb{Z}[y_j^{\pm 1}]_{j\ne i}\otimes \mathbb{Z}[y_i+y_ia_i^{-1}],$$
and hence we obtain a ring isomorphism $`\mathrm{Rep}U_q𝔤\simeq {\displaystyle \underset{i\in I}{\bigcap }}K_i`$.
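As an elementary illustration of this invariance (a worked example added here, for $`𝔤=sl_2`$, not spelled out in the original text), the character of the $`(n+1)`$–dimensional irreducible module is $`y^n+y^{n-2}+\mathrm{\cdots }+y^{-n}`$, manifestly invariant under the simple reflection $`s:y\mapsto y^{-1}`$; each successive monomial is obtained from the previous one by multiplying by $`a^{-1}=y^{-2}`$:

```python
# For g = sl2, represent a character as a dict {power of y: coefficient}.
def sl2_character(n):
    """Character of the (n+1)-dimensional irreducible sl2-module."""
    return {n - 2 * k: 1 for k in range(n + 1)}

def reflect(chi):
    """Apply the simple reflection s: y -> y^(-1), i.e. negate exponents."""
    return {-p: c for p, c in chi.items()}

chi3 = sl2_character(3)            # y^3 + y + y^-1 + y^-3
assert reflect(chi3) == chi3       # s-invariance of the character
# Monomials below the highest come from repeated multiplication by a^(-1) = y^(-2):
assert set(chi3) == {3 - 2 * k for k in range(4)}
```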
In Theorem 5.1 (see also Corollary 5.7) we establish a $`q`$–analogue of this isomorphism. Instead of the simple reflections we have the screening operators $`S_i,i\in I`$, introduced in \[FR2\]. We show that $`\mathrm{Im}\chi _q`$ equals $`{\displaystyle \underset{i\in I}{\bigcap }}\mathrm{Ker}S_i`$. Moreover, $`\mathrm{Ker}S_i`$ is equal to
$$𝒦_i=\mathbb{Z}[Y_{j,a}^{\pm 1}]_{j\ne i;a\in \mathbb{C}^\times }\otimes \mathbb{Z}[Y_{i,b}+Y_{i,b}A_{i,bq_i}^{-1}]_{b\in \mathbb{C}^\times }.$$
Thus, we obtain a ring isomorphism $`\mathrm{Rep}U_q\widehat{𝔤}\simeq {\displaystyle \underset{i\in I}{\bigcap }}𝒦_i`$.
These results allow us to construct in a purely combinatorial way the $`q`$–characters of the fundamental representations of $`U_q\widehat{𝔤}`$, see Section 5.5.
We derive several corollaries of these results. Here is one of them (see Theorem 6.7 and Proposition 6.15). For each fundamental weight $`\omega _i`$, there exists a family of $`U_q\widehat{𝔤}`$–modules, $`V_{\omega _i}(a),a\in \mathbb{C}^\times `$ (see Section 1.3 for the precise definition). These are irreducible finite-dimensional representations of $`U_q\widehat{𝔤}`$, which have highest weight $`\omega _i`$ if restricted to $`U_q𝔤`$. They are called the fundamental representations of $`U_q\widehat{𝔤}`$ (of level $`0`$). According to a theorem of Chari-Pressley \[CP1, CP3\] (see Corollary 1.4 below), any irreducible representation of $`U_q\widehat{𝔤}`$ can be realized as a subquotient of a tensor product of the fundamental representations. The following theorem, which was conjectured, e.g., in \[AK\], describes under what conditions such a tensor product is reducible.
Denote by $`h^{\vee }`$ the dual Coxeter number of $`𝔤`$, and by $`r^{\vee }`$ the maximal number of edges connecting two vertices of the Dynkin diagram of $`𝔤`$. For the definition of the normalized $`R`$–matrix, see Section 2.3.
Theorem. Let $`\{V_k\}_{k=1,\dots ,n}`$, where $`V_k=V_{\omega _{s(k)}}(a_k)`$, be a set of fundamental representations of $`U_q\widehat{𝔤}`$.
The tensor product $`V_1\otimes \mathrm{\cdots }\otimes V_n`$ is reducible if and only if for some $`i,j\in \{1,\dots ,n\}`$, $`i\ne j`$, the normalized $`R`$–matrix $`\overline{R}_{V_i,V_j}(z)`$ has a pole at $`z=a_j/a_i`$.
In that case $`a_j/a_i`$ is necessarily equal to $`q^k`$, where $`k`$ is an integer, such that $`2\le |k|\le r^{\vee }h^{\vee }`$.
The paper is organized as follows. In Section 1 we recall the main definitions and results on quantum affine algebras and their finite-dimensional representations. In Section 2 we give the definition of the $`q`$–character homomorphism and list some of its properties. In Section 3 we develop our main technical tool: the restriction homomorphisms $`\tau _J`$. Sections 4 and 5 contain the proofs of Conjectures 1 and 2 from \[FR2\], respectively. In Section 6 we use these results to describe the structure of the $`q`$–characters of the fundamental representations and to prove the above Theorem.
The results of this paper can be generalized to the case of the twisted quantum affine algebras.
In the course of writing this paper we were informed by H. Nakajima that he obtained an independent proof of Conjecture 1 from \[FR2\] in the $`ADE`$ case using a geometric approach.
Acknowledgments. We thank N. Reshetikhin for useful discussions. The research of both authors was supported by a grant from the Packard Foundation.
## 1. Preliminaries on finite-dimensional representations of $`U_q\widehat{𝔤}`$
### 1.1. Root data
Let $`𝔤`$ be a simple Lie algebra of rank $`\ell `$. Let $`h^{\vee }`$ be the dual Coxeter number of $`𝔤`$. Let $`\langle \cdot ,\cdot \rangle `$ be the invariant inner product on $`𝔤`$, normalized as in \[K\], so that the square of the length of the maximal root equals $`2`$ with respect to the induced inner product on the dual space to the Cartan subalgebra $`𝔥`$ of $`𝔤`$ (also denoted by $`\langle \cdot ,\cdot \rangle `$). Denote by $`I`$ the set $`\{1,\dots ,\ell \}`$. Let $`\{\alpha _i\}_{i\in I}`$ and $`\{\omega _i\}_{i\in I}`$ be the sets of simple roots and of fundamental weights of $`𝔤`$, respectively. We have:
$$\langle \alpha _i,\omega _j\rangle =\frac{\langle \alpha _i,\alpha _i\rangle }{2}\delta _{ij}.$$
Let $`r^{\vee }`$ be the maximal number of edges connecting two vertices of the Dynkin diagram of $`𝔤`$. Thus, $`r^{\vee }=1`$ for simply-laced $`𝔤`$, $`r^{\vee }=2`$ for $`B_{\ell },C_{\ell },F_4`$, and $`r^{\vee }=3`$ for $`G_2`$.
In this paper we will use the rescaled inner product
$$(\cdot ,\cdot )=r^{\vee }\langle \cdot ,\cdot \rangle ,$$
on $`𝔥^*`$. Set
$$D=\mathrm{diag}(r_1,\dots ,r_{\ell }),$$
where
(1.1)
$$r_i=\frac{(\alpha _i,\alpha _i)}{2}=r^{\vee }\frac{\langle \alpha _i,\alpha _i\rangle }{2}.$$
The $`r_i`$’s are relatively prime integers. For simply-laced $`𝔤`$, all $`r_i`$’s are equal to $`1`$ and $`D`$ is the identity matrix.
Now let $`C=(C_{ij})_{1\le i,j\le \ell }`$ be the Cartan matrix of $`𝔤`$,
$$C_{ij}=\frac{2(\alpha _i,\alpha _j)}{(\alpha _i,\alpha _i)}.$$
Let $`B=(B_{ij})_{1\le i,j\le \ell }`$ be the symmetric matrix
$$B=DC,$$
i.e., $`B_{ij}=(\alpha _i,\alpha _j)=r^{\vee }\langle \alpha _i,\alpha _j\rangle .`$
Let $`q\in \mathbb{C}^\times `$ be such that $`|q|<1`$. Set $`q_i=q^{r_i}`$, and
$$[n]_q=\frac{q^n-q^{-n}}{q-q^{-1}}.$$
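A numerical sanity check of this definition (a sketch added here, not part of the original text; the value of $`q`$ is arbitrary): the quantum integer $`[n]_q`$ equals the symmetric Laurent polynomial $`q^{n-1}+q^{n-3}+\mathrm{\cdots }+q^{-(n-1)}`$, and tends to $`n`$ as $`q\to 1`$.

```python
# Quantum integer [n]_q = (q^n - q^(-n)) / (q - q^(-1)).
def qint(n, q):
    return (q**n - q**(-n)) / (q - q**(-1))

q = 0.37  # any q with 0 < |q| < 1
for n in range(1, 8):
    # Compare against the expansion q^(n-1) + q^(n-3) + ... + q^(-(n-1)).
    expected = sum(q**(n - 1 - 2 * k) for k in range(n))
    assert abs(qint(n, q) - expected) < 1e-9
# Classical limit: [n]_q -> n as q -> 1 (approached numerically).
assert abs(qint(5, 1.0 + 1e-8) - 5.0) < 1e-5
```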
Following \[FR1, FR2\], define the $`\ell \times \ell `$ matrices $`B(q),C(q),D(q)`$ by the formulas
$`B_{ij}(q)`$ $`=[B_{ij}]_q,`$
$`C_{ij}(q)`$ $`=(q_i+q_i^{-1})\delta _{ij}+(1-\delta _{ij})[C_{ij}]_q,`$
$`D_{ij}(q)`$ $`=[D_{ij}]_q=\delta _{ij}[r_i]_q.`$
We have:
$$B(q)=D(q)C(q).$$
Let $`\stackrel{~}{C}(q)`$ be the inverse of the Cartan matrix $`C(q)`$, $`C(q)\stackrel{~}{C}(q)=\mathrm{Id}`$. We will need the following property of matrix $`\stackrel{~}{C}(q)`$.
###### Lemma 1.1.
All coefficients of the matrix $`\stackrel{~}{C}(q)`$ can be written in the form
(1.2) $`\stackrel{~}{C}_{ij}(q)={\displaystyle \frac{\stackrel{~}{C}_{ij}^{\prime }(q)}{d(q)}},i,j\in I,`$
where $`\stackrel{~}{C}_{ij}^{\prime }(q)`$, $`d(q)`$ are Laurent polynomials in $`q`$ with non-negative integral coefficients, symmetric with respect to the substitution $`q\mapsto q^{-1}`$. Moreover,
$$\mathrm{deg}\stackrel{~}{C}_{ij}^{\prime }(q)<\mathrm{deg}d(q),i,j\in I.$$
###### Proof.
We write here the minimal choice of $`d(q)`$, which we use in Section 3.2:
$`A_{\ell }:d(q)`$ $`=`$ $`q^{\ell }+q^{\ell -2}+\mathrm{\cdots }+q^{-\ell },`$
$`B_{\ell }:d(q)`$ $`=`$ $`q^{2\ell -1}+q^{2\ell -3}+\mathrm{\cdots }+q^{-2\ell +1},`$
$`C_{\ell }:d(q)`$ $`=`$ $`q^{\ell +1}+q^{-\ell -1},`$
$`D_{\ell }:d(q)`$ $`=`$ $`(q+q^{-1})(q^{\ell -1}+q^{-\ell +1}),`$
$`E_6:d(q)`$ $`=`$ $`(q^2+1+q^{-2})(q^6+q^{-6}),`$
$`E_7:d(q)`$ $`=`$ $`(q+q^{-1})(q^9+q^{-9}),`$
$`E_8:d(q)`$ $`=`$ $`(q+q^{-1})(q^{15}+q^{-15}),`$
$`F_4:d(q)`$ $`=`$ $`q^9+q^{-9},`$
$`G_2:d(q)`$ $`=`$ $`q^6+q^{-6}.`$
For Lie algebras of classical series, the statement of the lemma with the above $`d(q)`$ follows from the explicit formulas for the entries $`\stackrel{~}{C}_{ij}(q)`$ of the matrix $`\stackrel{~}{C}(q)`$ given in Appendix C of \[FR1\]. For exceptional types, the lemma follows from a case by case inspection of the matrix $`\stackrel{~}{C}(q)`$. ∎
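A small numerical check of the lemma (added here as a sketch, not part of the original text; it assumes the minimal $`d(q)`$ listed above): for $`𝔤=A_2`$ the matrix $`C(q)`$ has diagonal entries $`q+q^{-1}`$ (all $`r_i=1`$) and off-diagonal entries $`[C_{ij}]_q=[-1]_q=-1`$, so its determinant should equal $`d(q)=q^2+1+q^{-2}`$, and the entries of $`\stackrel{~}{C}(q)`$ are Laurent polynomials divided by $`d(q)`$.

```python
# Verify det C(q) = d(q) = q^2 + 1 + q^(-2) for A_2, at a test value of q.
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

q = 0.613  # arbitrary, 0 < |q| < 1
Cq = [[q + 1/q, -1.0], [-1.0, q + 1/q]]
d = q**2 + 1 + q**-2
assert abs(det2(Cq) - d) < 1e-9

# The inverse matrix (adjugate over d) then satisfies C~(q) C(q) = Id:
Ct = [[Cq[1][1] / d, -Cq[0][1] / d], [-Cq[1][0] / d, Cq[0][0] / d]]
assert abs(Ct[0][0] * Cq[0][0] + Ct[0][1] * Cq[1][0] - 1.0) < 1e-9
```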
### 1.2. Quantum affine algebras
The quantum affine algebra $`U_q\widehat{𝔤}`$ in the Drinfeld-Jimbo realization \[Dr1, J\] is an associative algebra over $`\mathbb{C}`$ with generators $`x_i^\pm `$, $`k_i^{\pm 1}`$ ($`i=0,\dots ,\ell `$), and relations:
$`k_ik_i^{-1}=k_i^{-1}k_i`$ $`=1,k_ik_j=k_jk_i,`$
$`k_ix_j^\pm k_i^{-1}`$ $`=q^{\pm B_{ij}}x_j^\pm ,`$
$`[x_i^+,x_j^{-}]`$ $`=\delta _{ij}{\displaystyle \frac{k_i-k_i^{-1}}{q_i-q_i^{-1}}},`$
$`{\displaystyle \underset{r=0}{\overset{1-C_{ij}}{\sum }}}(-1)^r\left[\begin{array}{c}1-C_{ij}\\ r\end{array}\right]_{q_i}(x_i^\pm )^rx_j^\pm `$ $`(x_i^\pm )^{1-C_{ij}-r}=0,i\ne j.`$
Here $`(C_{ij})_{0\le i,j\le \ell }`$ denotes the Cartan matrix of $`\widehat{𝔤}`$.
The algebra $`U_q\widehat{𝔤}`$ has a structure of a Hopf algebra with the comultiplication $`\mathrm{\Delta }`$ and the antipode $`S`$ given on the generators by the formulas:
$`\mathrm{\Delta }(k_i)`$ $`=`$ $`k_i\otimes k_i,`$
$`\mathrm{\Delta }(x_i^+)`$ $`=`$ $`x_i^+\otimes 1+k_i\otimes x_i^+,`$
$`\mathrm{\Delta }(x_i^{-})`$ $`=`$ $`x_i^{-}\otimes k_i^{-1}+1\otimes x_i^{-},`$
$$S(x_i^+)=-k_i^{-1}x_i^+,S(x_i^{-})=-x_i^{-}k_i,S(k_i^{\pm 1})=k_i^{\mp 1}.$$
We define a $`\mathbb{Z}`$-gradation on $`U_q\widehat{𝔤}`$ by setting: $`\mathrm{deg}x_0^\pm =\pm 1,\mathrm{deg}x_i^\pm =\mathrm{deg}k_i=0,i\in I=\{1,\dots ,\ell \}`$.
Denote the subalgebra of $`U_q\widehat{𝔤}`$ generated by $`k_i^{\pm 1},x_i^+`$ (resp., $`k_i^{\pm 1},x_i^{-}`$), $`i=0,\dots ,\ell `$, by $`U_q𝔟_+`$ (resp., $`U_q𝔟_{-}`$).
The algebra $`U_q𝔤`$ is defined as the subalgebra of $`U_q\widehat{𝔤}`$ with generators $`x_i^\pm `$, $`k_i^{\pm 1}`$, where $`i\in I`$.
We will use Drinfeld’s “new” realization of $`U_q\widehat{𝔤}`$, see \[Dr2\], described by the following theorem.
###### Theorem 1.2 (\[Dr2, KT, LSS, B\]).
The algebra $`U_q\widehat{𝔤}`$ has another realization as the algebra with generators $`x_{i,n}^\pm `$ ($`i\in I`$, $`n\in \mathbb{Z}`$), $`k_i^{\pm 1}`$ ($`i\in I`$), $`h_{i,n}`$ ($`i\in I`$, $`n\in \mathbb{Z}\backslash 0`$) and central elements $`c^{\pm 1/2}`$, with the following relations:
$`k_ik_j=k_jk_i,`$ $`k_ih_{j,n}=h_{j,n}k_i,`$
$`k_ix_{j,n}^\pm k_i^{-1}`$ $`=q^{\pm B_{ij}}x_{j,n}^\pm ,`$
$`[h_{i,n},x_{j,m}^\pm ]`$ $`=\pm {\displaystyle \frac{1}{n}}[nB_{ij}]_qc^{\mp |n|/2}x_{j,n+m}^\pm ,`$
$`x_{i,n+1}^\pm x_{j,m}^\pm -q^{\pm B_{ij}}x_{j,m}^\pm x_{i,n+1}^\pm `$ $`=q^{\pm B_{ij}}x_{i,n}^\pm x_{j,m+1}^\pm -x_{j,m+1}^\pm x_{i,n}^\pm ,`$
$`[h_{i,n},h_{j,m}]`$ $`=\delta _{n,-m}{\displaystyle \frac{1}{n}}[nB_{ij}]_q{\displaystyle \frac{c^n-c^{-n}}{q-q^{-1}}},`$
$`[x_{i,n}^+,x_{j,m}^{-}]=\delta _{ij}`$ $`{\displaystyle \frac{c^{(n-m)/2}\varphi _{i,n+m}^+-c^{-(n-m)/2}\varphi _{i,n+m}^{-}}{q_i-q_i^{-1}}},`$
$`{\displaystyle \underset{\pi \in \mathrm{\Sigma }_s}{\sum }}{\displaystyle \underset{k=0}{\overset{s}{\sum }}}(-1)^k\left[\begin{array}{c}s\\ k\end{array}\right]_{q_i}x_{i,n_{\pi (1)}}^\pm \mathrm{\cdots }x_{i,n_{\pi (k)}}^\pm `$ $`x_{j,m}^\pm x_{i,n_{\pi (k+1)}}^\pm \mathrm{\cdots }x_{i,n_{\pi (s)}}^\pm =0,s=1-C_{ij},`$
for all sequences of integers $`n_1,\dots ,n_s`$, and $`i\ne j`$, where $`\mathrm{\Sigma }_s`$ is the symmetric group on $`s`$ letters, and $`\varphi _{i,n}^\pm `$’s are determined by the formula
(1.3)
$$\mathrm{\Phi }_i^\pm (u):=\underset{n=0}{\overset{\mathrm{\infty }}{\sum }}\varphi _{i,\pm n}^\pm u^{\pm n}=k_i^{\pm 1}\mathrm{exp}\left(\pm (q-q^{-1})\underset{m=1}{\overset{\mathrm{\infty }}{\sum }}h_{i,\pm m}u^{\pm m}\right).$$
For any $`a\in \mathbb{C}^\times `$, there is a Hopf algebra automorphism $`\tau _a`$ of $`U_q\widehat{𝔤}`$ defined on the generators by the following formulas:
(1.4) $`\tau _a(x_{i,n}^\pm )=a^nx_{i,n}^\pm ,\tau _a(\varphi _{i,n}^\pm )=a^n\varphi _{i,n}^\pm ,`$
$$\tau _a(c^{1/2})=c^{1/2},\tau _a(k_i)=k_i,$$
for all $`i\in I,n\in \mathbb{Z}`$. Given a $`U_q\widehat{𝔤}`$–module $`V`$ and $`a\in \mathbb{C}^\times `$, we denote by $`V(a)`$ the pull-back of $`V`$ under $`\tau _a`$.
Define new variables $`\stackrel{~}{k}_i^{\pm 1},i\in I`$, such that
(1.5)
$$k_j=\underset{i\in I}{\prod }\stackrel{~}{k}_i^{C_{ij}},\stackrel{~}{k}_i\stackrel{~}{k}_j=\stackrel{~}{k}_j\stackrel{~}{k}_i.$$
Thus, while $`k_i`$ corresponds to the simple root $`\alpha _i`$, $`\stackrel{~}{k}_i`$ corresponds to the fundamental weight $`\omega _i`$. We extend the algebra $`U_q\widehat{𝔤}`$ by replacing the generators $`k_i^{\pm 1},i\in I`$ with $`\stackrel{~}{k}_i^{\pm 1},i\in I`$. From now on $`U_q\widehat{𝔤}`$ will stand for the extended algebra.
Let $`q^{2\rho }=\stackrel{~}{k}_1^2\mathrm{\cdots }\stackrel{~}{k}_{\ell }^2`$. The square of the antipode acts as follows (see \[Dr3\]):
(1.6) $`S^2(x)=\tau _{q^{2r^{\vee }h^{\vee }}}(q^{2\rho }xq^{-2\rho }),x\in U_q\widehat{𝔤}.`$
Let $`w_0`$ be the longest element of the Weyl group of $`𝔤`$. Let $`i\mapsto \overline{i}`$ be the bijection $`I\to I`$, such that $`w_0(\alpha _i)=-\alpha _{\overline{i}}`$. Define the algebra automorphism $`w_0:U_q\widehat{𝔤}\to U_q\widehat{𝔤}`$ by
(1.7) $`w_0(\stackrel{~}{k}_i)=\stackrel{~}{k}_{\overline{i}},w_0(h_{i,n})=h_{\overline{i},n},w_0(x_{i,n}^\pm )=x_{\overline{i},n}^\pm .`$
We have: $`w_0^2=\mathrm{Id}`$. Actually, $`w_0`$ is a Hopf algebra automorphism, but we will not use this fact.
### 1.3. Finite-dimensional representations of $`U_q\widehat{𝔤}`$
In this section we recall some of the results of Chari and Pressley \[CP1, CP2, CP3, CP4\] on the structure of finite-dimensional representations of $`U_q\widehat{𝔤}`$.
Let $`P`$ be the weight lattice of $`𝔤`$. It is equipped with the standard partial order: the weight $`\lambda `$ is higher than the weight $`\mu `$ if $`\lambda -\mu `$ can be written as a combination of the simple roots with positive integral coefficients.
A vector $`w`$ in a $`U_q𝔤`$–module $`W`$ is called a vector of weight $`\lambda \in P`$, if
(1.8)
$$k_iw=q^{(\lambda ,\alpha _i)}w,i\in I.$$
A representation $`W`$ of $`U_q𝔤`$ is said to be of type 1 if it is the direct sum of its weight spaces $`W=\oplus _{\lambda \in P}W_\lambda `$, where $`W_\lambda =\{w\in W|k_iw=q^{(\lambda ,\alpha _i)}w\}`$. If $`W_\lambda \ne 0`$, then $`\lambda `$ is called a weight of $`W`$.
A representation $`V`$ of $`U_q\widehat{𝔤}`$ is called of type 1 if $`c^{1/2}`$ acts as the identity on $`V`$, and if $`V`$ is of type 1 as a representation of $`U_q𝔤`$. According to \[CP1\], every finite-dimensional irreducible representation of $`U_q\widehat{𝔤}`$ can be obtained from a type 1 representation by twisting with an automorphism of $`U_q\widehat{𝔤}`$. Because of that, we will only consider type 1 representations in this paper.
A vector $`v\in V`$ is called a highest weight vector if
(1.9)
$$x_{i,n}^+v=0,\varphi _{i,n}^\pm v=\psi _{i,n}^\pm v,c^{1/2}v=v,i\in I,n\in \mathbb{Z},$$
for some complex numbers $`\psi _{i,n}^\pm `$. A type 1 representation $`V`$ is a highest weight representation if $`V=U_q\widehat{𝔤}v`$, for some highest weight vector $`v`$. In that case the set of generating functions
$$\mathrm{\Psi }_i^\pm (u)=\underset{n=0}{\overset{\mathrm{\infty }}{\sum }}\psi _{i,\pm n}^\pm u^{\pm n},i\in I,$$
is called the highest weight of $`V`$.
Warning. The above notions of highest weight vector and highest weight representation are different from standard. Sometimes they are called pseudo-highest weight vector and pseudo-highest weight representation.
Let $`𝒫`$ be the set of all $`I`$–tuples $`(P_i)_{i\in I}`$ of polynomials $`P_i\in \mathbb{C}[u]`$, with constant term 1.
###### Theorem 1.3 (\[CP1, CP3\]).
(1) Every finite-dimensional irreducible representation of $`U_q\widehat{𝔤}`$ of type 1 is a highest weight representation.
(2) Let $`V`$ be a finite-dimensional irreducible representation of $`U_q\widehat{𝔤}`$ of type 1 and highest weight $`(\mathrm{\Psi }_i^\pm (u))_{i\in I}`$. Then, there exists $`𝐏=(P_i)_{i\in I}\in 𝒫`$ such that
(1.10)
$$\mathrm{\Psi }_i^\pm (u)=q_i^{\mathrm{deg}(P_i)}\frac{P_i(uq_i^{-1})}{P_i(uq_i)},$$
as an element of $`\mathbb{C}[[u^{\pm 1}]]`$.
Assigning to $`V`$ the $`I`$–tuple $`𝐏\in 𝒫`$ defines a bijection between $`𝒫`$ and the set of isomorphism classes of finite-dimensional irreducible representations of $`U_q\widehat{𝔤}`$ of type 1. The irreducible representation associated to $`𝐏`$ will be denoted by $`V(𝐏)`$.
(3) The highest weight of $`V(𝐏)`$ considered as a $`U_q𝔤`$–module is $`\lambda =_{iI}\mathrm{deg}P_i\omega _i`$, the lowest weight of $`V(𝐏)`$ is $`\overline{\lambda }=_{iI}\mathrm{deg}P_i\omega _{\overline{i}}`$, and each of them has multiplicity $`1`$.
(4) If $`𝐏=(P_i)_{iI}𝒫`$, $`a^\times `$, and if $`\tau _a^{}(V(𝐏))`$ denotes the pull-back of $`V(𝐏)`$ by the automorphism $`\tau _a`$, we have $`\tau _a^{}(V(𝐏))V(𝐏^a)`$ as representations of $`U_q\widehat{𝔤}`$, where $`𝐏^a=(P_i^a)_{iI}`$ and $`P_i^a(u)=P_i(ua)`$.
(5) For $`𝐏`$, $`𝐐𝒫`$ denote by $`𝐏𝐐𝒫`$ the $`I`$–tuple $`(P_iQ_i)_{iI}`$. Then $`V(𝐏𝐐)`$ is isomorphic to a quotient of the subrepresentation of $`V(𝐏)V(𝐐)`$ generated by the tensor product of the highest weight vectors.
An analogous classification result for Yangians has been obtained earlier by Drinfeld \[Dr2\]. Because of that, the polynomials $`P_i(u)`$ are called Drinfeld polynomials.
Note that in our notation the polynomials $`P_i(u)`$ correspond to the polynomials $`P_i(uq_i^1)`$ in the notation of \[CP1, CP3\].
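As a quick numerical sanity check of formula (1.10), the highest weight series attached to a Drinfeld polynomial can be expanded as a power series. The following sketch (the helper names are ours, not from \[CP1, CP3\]) specializes $`q`$ and the roots $`a_k`$ of $`P(u)`$ to rational numbers and expands $`q^{\mathrm{deg}P}P(uq^1)/P(uq)`$ with exact arithmetic.

```python
from fractions import Fraction

def polymul(p, r):
    """Multiply two polynomials given as coefficient lists [c0, c1, ...]."""
    out = [Fraction(0)] * (len(p) + len(r) - 1)
    for i, pi in enumerate(p):
        for j, rj in enumerate(r):
            out[i + j] += pi * rj
    return out

def series_ratio(num, den, order):
    """Expand num(u)/den(u) as a power series in u up to the given order."""
    rem = list(num) + [Fraction(0)] * (order + 1 - len(num))
    coeffs = []
    for n in range(order + 1):
        c = rem[n] / den[0]
        coeffs.append(c)
        for k, d in enumerate(den):
            if n + k <= order:
                rem[n + k] -= c * d
    return coeffs

def hw_series(roots, q, order):
    """Coefficients of the highest weight series of formula (1.10) for
    P(u) = prod_k (1 - u a_k), i.e. of q^{deg P} * P(u/q) / P(u q)."""
    num, den = [Fraction(1)], [Fraction(1)]
    for a in roots:
        num = polymul(num, [Fraction(1), -a / q])  # factor of P(u q^{-1})
        den = polymul(den, [Fraction(1), -a * q])  # factor of P(u q)
    return [q ** len(roots) * c for c in series_ratio(num, den, order)]
```

For $`P(u)=13u`$ and $`q=2`$ this produces the series $`2+9u+54u^2+\mathrm{}`$; in particular the constant term is $`q^{\mathrm{deg}P}`$, in agreement with the form of the highest weight in Theorem 1.3(2).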
For each $`iI`$ and $`a^\times `$, define the irreducible representation $`V_{\omega _i}(a)`$ as $`V(𝐏_a^{(i)})`$, where $`𝐏_a^{(i)}`$ is the $`I`$–tuple of polynomials, such that $`P_i(u)=1ua`$ and $`P_j(u)=1,ji`$. We call $`V_{\omega _i}(a)`$ the $`i`$th fundamental representation of $`U_q\widehat{𝔤}`$. Note that in general $`V_{\omega _i}(a)`$ is reducible as a $`U_q𝔤`$–module.
Theorem 1.3 implies the following
###### Corollary 1.4 (\[CP3\]).
Any irreducible finite-dimensional representation $`V`$ of $`U_q\widehat{𝔤}`$ occurs as a quotient of the submodule of the tensor product $`V_{\omega _{i_1}}(a_1)\mathrm{}V_{\omega _{i_n}}(a_n)`$, generated by the tensor product of the highest weight vectors. The parameters $`(\omega _{i_k},a_k)`$, $`k=1,\mathrm{},n`$, are uniquely determined by $`V`$ up to permutation.
## 2. Definition and first properties of $`q`$–characters
### 2.1. Definition of $`q`$–characters
Let us recall the definition of the $`q`$–characters of finite-dimensional representations of $`U_q\widehat{𝔤}`$ from \[FR2\].
The completed tensor product $`U_q\widehat{𝔤}\widehat{}U_q\widehat{𝔤}`$ contains a special element $``$ called the universal $`R`$–matrix (at level $`0`$). It actually lies in $`U_q𝔟_+\widehat{}U_q𝔟_{}`$ and satisfies the following identities:
$`\mathrm{\Delta }^{}(x)`$ $`=\mathrm{\Delta }(x)^1,xU_q\widehat{𝔤},`$
$`(\mathrm{\Delta }\mathrm{id})`$ $`=^{13}^{23},(\mathrm{id}\mathrm{\Delta })=^{13}^{12}.`$
For more details, see \[Dr3, EFK\].
Now let $`(V,\pi _V)`$ be a finite-dimensional representation of $`U_q\widehat{𝔤}`$. Define the transfer-matrix corresponding to $`V`$ by
(2.1)
$$t_V=t_V(z)=\mathrm{Tr}_V(\pi _{V(z)}\mathrm{id})().$$
Thus we obtain a map $`\nu _q:\mathrm{Rep}U_q\widehat{𝔤}U_q𝔟_{}[[z]]`$, sending $`V`$ to $`t_V(z)`$.
###### Remark 2.1.
Note that in \[FR2\] there was an extra factor $`q^{2\rho }`$ in formula (2.1). This factor is inessential for the purposes of this paper, and therefore can be dropped.∎
Denote by $`U_q\stackrel{~}{𝔤}`$ the subalgebra of $`U_q\widehat{𝔤}`$ generated by $`x_{i,n}^\pm ,\stackrel{~}{k}_i,h_{i,r},n0,r<0,iI`$. It follows from the proof of Theorem 1.2 that $`U_q𝔟_{}U_q\stackrel{~}{𝔤}`$. As a vector space, $`U_q\stackrel{~}{𝔤}`$ can be decomposed as follows: $`U_q\stackrel{~}{𝔤}=U_q\stackrel{~}{𝔫}_{}U_q\stackrel{~}{𝔥}U_q\stackrel{~}{𝔫}_+`$, where $`U_q\stackrel{~}{𝔫}_\pm `$ (resp., $`U_q\stackrel{~}{𝔥}`$) is generated by $`x_{i,n}^\pm ,iI,n0`$ (resp., $`\stackrel{~}{k}_i,h_{i,n},iI,n<0`$). Hence
$$U_q\stackrel{~}{𝔤}=U_q\stackrel{~}{𝔥}\left(U_q\stackrel{~}{𝔤}(U_q\stackrel{~}{𝔫}_+)_0+(U_q\stackrel{~}{𝔫}_{})_0U_q\stackrel{~}{𝔤}\right),$$
where $`(U_q\stackrel{~}{𝔫}_\pm )_0`$ stands for the augmentation ideal of $`U_q\stackrel{~}{𝔫}_\pm `$. Denote by $`𝐡_q`$ the projection $`U_q\stackrel{~}{𝔤}U_q\stackrel{~}{𝔥}`$ along the last two summands (this is an analogue of the Harish-Chandra homomorphism). We denote by the same letter its restriction to $`U_q𝔟_{}`$.
Now we define the map $`\chi _q:\mathrm{Rep}U_q\widehat{𝔤}U_q\stackrel{~}{𝔥}[[z]]`$ as the composition of $`\nu _q:\mathrm{Rep}U_q\widehat{𝔤}U_q𝔟_{}[[z]]`$ and $`𝐡_q[[z]]:U_q𝔟_{}[[z]]U_q\stackrel{~}{𝔥}[[z]]`$.
To describe the image of $`\chi _q`$ we need to introduce some more notation.
Let
(2.2)
$$\stackrel{~}{h}_{i,m}=\underset{jI}{}\stackrel{~}{C}_{ji}(q^m)h_{j,m},$$
where $`\stackrel{~}{C}(q)`$ is the inverse matrix to $`C(q)`$ defined in Section 1.1. Set
(2.3)
$$Y_{i,a}=\stackrel{~}{k}_i^1\mathrm{exp}\left((qq^1)\underset{n>0}{}\stackrel{~}{h}_{i,n}z^na^n\right),a^\times .$$
We assign to $`Y_{i,a}^{\pm 1}`$ the weight $`\pm \omega _i`$.
We have the ordinary character homomorphism $`\chi :\mathrm{Rep}U_q𝔤[y_i^{\pm 1}]_{iI}`$: if $`V=_\mu V_\mu `$ is the weight decomposition of $`V`$, then $`\chi (V)=_\mu dimV_\mu y^\mu `$, where for $`\mu =_{iI}m_i\omega _i`$ we set $`y^\mu =_{iI}y_i^{m_i}`$. Define the homomorphism
$$\beta :[Y_{i,a}^{\pm 1}]_{iI;a^\times }[y_i^{\pm 1}]_{iI}$$
sending $`Y_{i,a}^{\pm 1}`$ to $`y_i^{\pm 1}`$, and denote by
$$\mathrm{res}:\mathrm{Rep}U_q\widehat{𝔤}\mathrm{Rep}U_q𝔤$$
the restriction homomorphism.
Given a polynomial ring $`[x_\alpha ^{\pm 1}]_{\alpha A}`$, we denote by $`_+[x_\alpha ^{\pm 1}]_{\alpha A}`$ its subset consisting of all linear combinations of monomials in $`x_\alpha ^{\pm 1}`$ with positive integral coefficients.
###### Theorem 2.2 (\[FR2\]).
(1) $`\chi _q`$ is an injective homomorphism from $`\mathrm{Rep}U_q\widehat{𝔤}`$ to $`[Y_{i,a}^{\pm 1}]_{iI;a^\times }U_q\stackrel{~}{𝔥}[[z]]`$.
(2) For any finite-dimensional representation $`V`$ of $`U_q\widehat{𝔤}`$, $`\chi _q(V)_+[Y_{i,a}^{\pm 1}]_{iI;a^\times }`$.
(3) The diagram
$$\begin{array}{ccc}\mathrm{Rep}U_q\widehat{𝔤}& \stackrel{\chi _q}{}& [Y_{i,a}^{\pm 1}]_{iI;a^\times }\\ \mathrm{res}& & \beta & & \\ \mathrm{Rep}U_q𝔤& \stackrel{\chi }{}& [y_i^{\pm 1}]_{iI}\end{array}$$
is commutative.
(4) $`\mathrm{Rep}U_q\widehat{𝔤}`$ is a commutative ring that is isomorphic to $`[t_{i,a}]_{iI;a^\times }`$, where $`t_{i,a}`$ is the class of $`V_{\omega _i}(a)`$.
The homomorphism
$$\chi _q:\mathrm{Rep}U_q\widehat{𝔤}[Y_{i,a}^{\pm 1}]_{iI;a^\times }$$
is called the $`q`$character homomorphism. For a finite-dimensional representation $`V`$ of $`U_q\widehat{𝔤}`$, $`\chi _q(V)`$ is called the $`q`$character of $`V`$.
### 2.2. Spectra of $`\mathrm{\Phi }^\pm (u)`$
According to Theorem 2.2(1), the $`q`$–character of any finite-dimensional representation $`V`$ of $`U_q\widehat{𝔤}`$ is a linear combination of monomials in $`Y_{i,a}^{\pm 1}`$ with positive integral coefficients. The proof of Theorem 2.2 from \[FR2\] allows us to relate the monomials appearing in $`\chi _q(V)`$ to the spectra of the operators $`\mathrm{\Phi }_i^\pm (u)`$ on $`V`$ as follows.
It follows from the defining relations that the operators $`\varphi _{i,n}^\pm `$ commute with each other. Hence we can decompose any representation $`V`$ of $`U_q\widehat{𝔤}`$ into a direct sum $`V=V_{(\gamma _{i,n}^\pm )}`$ of generalized eigenspaces
$$V_{(\gamma _{i,n}^\pm )}=\{xV|\mathrm{there}\mathrm{exists}p,\mathrm{such}\mathrm{that}(\varphi _{i,n}^\pm \gamma _{i,n}^\pm )^px=0,iI,n\}.$$
Since $`\varphi _{i,0}^\pm =k_i^{\pm 1}`$, all vectors in $`V_{(\gamma _{i,n}^\pm )}`$ have the same weight (see formula (1.8) for the definition of weight). Therefore the decomposition of $`V`$ into a direct sum of subspaces $`V_{(\gamma _{i,n}^\pm )}`$ is a refinement of its weight decomposition.
Given a collection $`(\gamma _{i,n}^\pm )`$ of generalized eigenvalues, we form the generating functions
$$\mathrm{\Gamma }_i^\pm (u)=\underset{n0}{}\gamma _{i,\pm n}^\pm u^{\pm n}.$$
We will refer to each collection $`\{\mathrm{\Gamma }_i^\pm (u)\}_{iI}`$ occurring on a given representation $`V`$ as the common (generalized) eigenvalues of $`\mathrm{\Phi }_i^\pm (u),iI`$, on $`V`$, and to $`dimV_{(\gamma _{i,n}^\pm )}`$ as the multiplicity of this eigenvalue.
Let $`𝔅_V`$ be a Jordan basis of $`\varphi _{i,n}^\pm ,iI,n`$. Consider the module $`V(z)=\tau _z^{}(V)`$, see formula $`(`$1.4$`)`$. Then $`V(z)=V`$ as a vector space. Moreover, the decomposition into the direct sum of generalized eigenspaces of the operators $`\varphi _{i,n}^\pm `$ does not depend on $`z`$, because the action of $`\varphi _{i,n}^\pm `$ on $`V`$ and on $`V(z)`$ differs only by scalar factors $`z^n`$. In particular, $`𝔅_V`$ is also a Jordan basis for $`\varphi _{i,n}^\pm `$ acting on $`V(z)`$ for all $`z^\times `$. If $`v𝔅_V`$ is a generalized eigenvector with common eigenvalues $`\{\mathrm{\Gamma }_i^\pm (u)\}_{iI}`$, then the corresponding common eigenvalues on $`v`$ in $`V(z)`$ are $`\{\mathrm{\Gamma }_i^\pm (zu)\}_{iI}`$.
The following result is a generalization of Theorem 1.3.
###### Proposition 2.3 (\[FR2\]).
The eigenvalues $`\mathrm{\Gamma }_i^\pm (u)`$ of $`\mathrm{\Phi }_i^\pm (u)`$ on any finite-dimensional representation of $`U_q\widehat{𝔤}`$ have the form:
(2.4)
$$\mathrm{\Gamma }_i^\pm (u)=q_i^{\mathrm{deg}Q_i\mathrm{deg}R_i}\frac{Q_i(uq_i^1)R_i(uq_i)}{Q_i(uq_i)R_i(uq_i^1)},$$
as elements of $`[[u^{\pm 1}]]`$, where $`Q_i(u),R_i(u)`$ are polynomials in $`u`$ with constant term $`1`$.
Now we can relate the monomials appearing in $`\chi _q(V)`$ to the common eigenvalues of $`\mathrm{\Phi }_i^\pm (u)`$ on $`V`$.
###### Proposition 2.4.
Let $`V`$ be a finite-dimensional $`U_q\widehat{𝔤}`$–module. There is a one-to-one correspondence between the monomials occurring in $`\chi _q(V)`$ and the common eigenvalues of $`\mathrm{\Phi }_i^\pm (u),iI`$, on $`V`$. Namely, the monomial
(2.5)
$$\underset{iI}{}\left(\underset{r=1}{\overset{k_i}{}}Y_{i,a_{ir}}\underset{s=1}{\overset{l_i}{}}Y_{i,b_{is}}^1\right)$$
corresponds to the common eigenvalues (2.4), where
(2.6)
$$Q_i(z)=\underset{r=1}{\overset{k_i}{}}(1za_{ir}),R_i(z)=\underset{s=1}{\overset{l_i}{}}(1zb_{is}),iI.$$
The weight of each monomial equals the weight of the corresponding generalized eigenspace. Moreover, the coefficient of each monomial in $`\chi _q(V)`$ equals the multiplicity of the corresponding common eigenvalue.
###### Proof.
Denote by $`U_q\widehat{𝔫}_\pm `$ the subalgebra of $`U_q\widehat{𝔤}`$ generated by $`x_{i,n}^\pm ,iI,n`$. Let $`\stackrel{~}{B}(q)`$ be the inverse matrix to $`B(q)`$ from Section 1.1. The following formula for the universal $`R`$–matrix has been proved in \[KT, LSS, Da\]:
(2.7)
$$=^+^0^{}T,$$
where
(2.8)
$$^0=\mathrm{exp}\left(\underset{n>0}{}\underset{iI}{}\frac{n(qq^1)^2}{q_i^nq_i^n}h_{i,n}\stackrel{~}{h}_{i,n}z^n\right)$$
(here we use the notation (2.2)), $`^\pm U_q\widehat{𝔫}_\pm U_q\stackrel{~}{𝔫}_{}`$, and $`T`$ acts as follows: if $`x,y`$ satisfy $`k_ix=q^{(\lambda ,\alpha _i)}x,k_iy=q^{(\mu ,\alpha _i)}y`$, then
(2.9)
$$Txy=q^{(\lambda ,\mu )}xy.$$
By definition, $`\chi _q(V)`$ is obtained by taking the trace of $`(\pi _{V(z)}\mathrm{id})()`$ over $`V`$ and then projecting it on $`U_q\stackrel{~}{𝔥}[[z]]`$ using the projection operator $`𝐡_q`$. This projection eliminates the factor $`^{}`$, and then taking the trace eliminates $`^+`$ (recall that $`U_q\stackrel{~}{𝔫}_+`$ acts nilpotently on $`V`$). Hence we obtain:
(2.10)
$$\chi _q(V)=\mathrm{Tr}_V\left[\mathrm{exp}\left(\underset{n>0}{}\underset{iI}{}\frac{n(qq^1)^2}{q_i^nq_i^n}\pi _V(h_{i,n})\stackrel{~}{h}_{i,n}z^n\right)(\pi _V1)T\right].$$
The trace can be written as the sum of terms $`m_v`$ corresponding to the (generalized) eigenvalues of $`h_{i,n}`$ on the vectors $`v`$ of the Jordan basis $`𝔅_V`$ of $`V`$ for the operators $`\varphi _{i,n}^\pm `$ (and hence for $`h_{i,n}`$).
The eigenvalues of $`\mathrm{\Phi }_i^\pm (u)`$ on each vector $`v𝔅_V`$ are given by formula (2.4). Suppose that $`Q_i(u)`$ and $`R_i(u)`$ are given by formula (2.6). Then the eigenvalue of $`h_{i,n}`$ on $`v`$ equals
(2.11)
$$\frac{q_i^nq_i^n}{n(qq^1)}\left(\underset{r=1}{\overset{k_i}{}}(a_{ir})^n\underset{s=1}{\overset{l_i}{}}(b_{is})^n\right),n>0.$$
Substituting into formula (2.10) and recalling the definition (2.3) of $`Y_{i,a}`$ we obtain that the corresponding term $`m_v`$ in $`\chi _q(V)`$ is the monomial (2.5). ∎
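The dictionary of Proposition 2.4 can be checked mechanically: from a monomial one reads off the roots of $`Q_i`$ and $`R_i`$ as in (2.6), and then evaluates the eigenvalue (2.4) at a numeric point. The following sketch (helper names are ours) does this with exact rational arithmetic.

```python
from fractions import Fraction

def monomial_to_QR(mono):
    """From a monomial  prod_r Y_{i,a_ir} * prod_s Y_{i,b_is}^{-1}  read off
    the roots of Q_i and R_i as in formula (2.6).
    mono is a list of triples (i, a, exponent) with exponent = +1 or -1."""
    Q, R = {}, {}
    for i, a, exp in mono:
        (Q if exp > 0 else R).setdefault(i, []).append(a)
    return Q, R

def eigenvalue(Q_roots, R_roots, qi, u):
    """Evaluate the common eigenvalue (2.4) at a numeric point u:
    qi^(deg Q - deg R) * Q(u/qi) R(u*qi) / (Q(u*qi) R(u/qi))."""
    val = qi ** (len(Q_roots) - len(R_roots))
    for a in Q_roots:
        val *= (1 - u * a / qi) / (1 - u * a * qi)
    for b in R_roots:
        val *= (1 - u * b * qi) / (1 - u * b / qi)
    return val
```

For instance, the single factor $`Y_{i,a}`$ with $`a=3`$, $`q_i=2`$ gives $`Q_i(u)=13u`$, $`R_i(u)=1`$, and the eigenvalue can then be evaluated at any point where the denominators do not vanish.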
Let $`V=V(𝐏)`$, where
(2.12)
$$P_i(u)=\underset{k=1}{\overset{n_i}{}}(1ua_k^{(i)}),iI.$$
Then by Theorem 1.3(3), the module $`V`$ has highest weight $`\lambda =_{iI}\mathrm{deg}P_i\omega _i`$, which has multiplicity $`1`$. Proposition 2.4 implies that $`\chi _q(V)`$ contains a unique monomial of weight $`\lambda `$. This monomial equals
(2.13)
$$\underset{iI}{}\underset{k=1}{\overset{n_i}{}}Y_{i,a_k^{(i)}}.$$
We call it the highest weight monomial of $`V`$. All other monomials in $`\chi _q(V)`$ have lower weight than $`\lambda `$.
A monomial in $`[Y_{i,a}^{\pm 1}]_{iI,a^\times }`$ is called dominant if it does not contain factors $`Y_{i,a}^1`$ (i.e., if it is a product of $`Y_{i,a}`$’s in positive powers only). The highest weight monomial is dominant, but in general the highest weight monomial is not the only dominant monomial occurring in $`\chi _q(V)`$. Nevertheless, we prove below in Corollary 4.5 that the only dominant monomial contained in the $`q`$–character of a fundamental representation $`V_{\omega _i}(a)`$ is its highest weight monomial $`Y_{i,a}`$.
Note that a dominant monomial has dominant weight but not all monomials of dominant weight are dominant.
Similarly, a monomial in $`[Y_{i,a}^{\pm 1}]_{iI,a^\times }`$ is called antidominant if it does not contain factors $`Y_{i,a}`$ (i.e., if it is a product of $`Y_{i,a}^1`$’s in negative powers only). The roles of dominant and antidominant monomials are similar, see, e.g., Remark 6.19. By Corollary 6.9, the lowest weight monomial is antidominant.
###### Remark 2.5.
The statement analogous to Proposition 2.3 in the case of Yangians was proved by Knight \[Kn\]. Using this statement, he introduced the notion of the character of a representation of the Yangian.∎
### 2.3. Connection with the entries of the $`R`$–matrix
We have already described the $`q`$–character of a $`U_q\widehat{𝔤}`$–module $`V`$ in terms of the universal $`R`$-matrix and in terms of the generalized eigenvalues of the operators $`\varphi _{i,n}^\pm `$. This allows us to describe the $`q`$–character of $`V`$ in terms of the diagonal entries of the $`R`$-matrices acting on the tensor products $`VV_{\omega _i}(a)`$ with the fundamental representations. We will use this description in Section 6.
Define
(2.14)
$$A_{i,a}=k_i^1\mathrm{exp}\left((qq^1)\underset{n>0}{}h_{i,n}z^na^n\right),a^\times .$$
Using formula (2.2), we can express $`A_{i,a}`$ in terms of $`Y_{j,b}`$’s:
(2.15)
$$A_{i,a}=Y_{i,aq_i}Y_{i,aq_i^1}\underset{C_{ji}=1}{}Y_{j,a}^1\underset{C_{ji}=2}{}Y_{j,aq}^1Y_{j,aq^1}^1\underset{C_{ji}=3}{}Y_{j,aq^2}^1Y_{j,a}^1Y_{j,aq^2}^1.$$
Thus, $`A_{i,a}[Y_{j,b}^{\pm 1}]_{jI;b^\times }`$, and the weight of $`A_{i,a}`$ equals $`\alpha _i`$.
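Formula (2.15) is purely combinatorial and can be tabulated directly from the Cartan matrix. The sketch below (our own helper, not from \[FR2\]) tracks spectral parameters additively as powers of $`q`$ and returns the factors of $`A_{i,a}`$ with $`a=q^k`$.

```python
def A_monomial(C, r, i, k):
    """Factors of A_{i,a} with a = q^k, following formula (2.15).
    C is the Cartan matrix, r[i] the symmetrizers (q_i = q^{r_i}).
    Returns a dict (j, power of q) -> exponent."""
    out = {}
    def add(j, p, e):
        out[(j, p)] = out.get((j, p), 0) + e
    add(i, k + r[i], 1)              # Y_{i, a q_i}
    add(i, k - r[i], 1)              # Y_{i, a q_i^{-1}}
    for j in range(len(C)):
        if j == i:
            continue
        if C[j][i] == -1:
            add(j, k, -1)            # Y_{j, a}^{-1}
        elif C[j][i] == -2:
            add(j, k + 1, -1)        # Y_{j, aq}^{-1}
            add(j, k - 1, -1)        # Y_{j, aq^{-1}}^{-1}
        elif C[j][i] == -3:
            add(j, k + 2, -1)
            add(j, k, -1)
            add(j, k - 2, -1)
    return out

# Type A2 (sl3): A_{1,a} = Y_{1,aq} Y_{1,aq^{-1}} Y_{2,a}^{-1}
cartan_A2 = [[2, -1], [-1, 2]]
```

The returned exponents confirm, e.g., that for type $`A_2`$ the factor $`A_{1,a}`$ has weight $`\alpha _1`$, since $`\alpha _1=2\omega _1\omega _2`$.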
Let $`V`$ and $`W`$ be irreducible finite-dimensional representations of $`U_q\widehat{𝔤}`$ with highest weight vectors $`v`$ and $`w`$. Let $`\overline{R}_{VW}(z)\mathrm{End}(VW)`$ be the normalized $`R`$-matrix,
$$\overline{R}_{VW}(z)=f_{VW}^1(z)(\pi _{V(z)}\pi _W)(),$$
where $`f_{VW}(z)`$ is the scalar function, such that
(2.16)
$$\overline{R}_{VW}(z)(vw)=wv.$$
In what follows we always consider the normalized $`R`$-matrix $`\overline{R}_{VW}(z)`$ written in the basis $`𝔅_V𝔅_W`$.
Recall the definition of the fundamental representation $`V_{\omega _i}(a)`$ from Section 1.3. Denote its highest weight vector by $`v_{\omega _i}`$.
###### Lemma 2.6.
Let $`v𝔅_V`$ and suppose that the corresponding monomial $`m_v`$ in $`\chi _q(V)`$ is given by
(2.17) $`m_v=m_+M{\displaystyle \underset{k}{}}A_{i,a_k}^1,`$
where $`M`$ is a product of factors $`A_{j,b}^1`$, $`b^\times `$, $`jI`$, $`ji`$. Then the diagonal entry of the normalized $`R`$-matrix $`\overline{R}_{V,V_{\omega _i}(b)}(z)`$ corresponding to the vector $`vv_{\omega _i}`$ is
(2.18) $`\left(\overline{R}_{V,V_{\omega _i}(b)}(z)\right)_{vv_{\omega _i},vv_{\omega _i}}={\displaystyle \underset{k}{}}q_i{\displaystyle \frac{1a_kzb^1q_i^1}{1a_kzb^1q_i}}.`$
###### Proof.
Recall formula (2.7) for $``$. We have: $`^{}(vv_{\omega _i})=0`$; $`vv_{\omega _i}`$ is a generalized eigenvector of $`^0`$; and $`^+(vv_{\omega _i})`$ is a linear combination of tensor products $`xy𝔅_V𝔅_{V_{\omega _i}(b)}`$, where $`y`$ has a lower weight than $`v_{\omega _i}`$. Therefore the diagonal matrix element of $``$ on $`vv_{\omega _i}V(z)V_{\omega _i}(b)`$ equals the generalized eigenvalue of $`(\pi _{V(z)}\pi _{V_{\omega _i}(b)})(^0)`$ on $`vv_{\omega _i}`$.
On the other hand, as explained in the proof of Proposition 2.4, the monomial $`m_v`$ is equal to the diagonal matrix element of $`(\pi _{V(z)}1)(^0)`$ corresponding to $`v`$. Therefore the diagonal matrix element of $``$ corresponding to $`vv_{\omega _i}`$ equals the eigenvalue of $`m_v`$ (considered as an element of $`U_q\stackrel{~}{𝔥}[[z]]`$) on $`v_{\omega _i}`$.
In particular, if $`v`$ is the highest weight vector, then the corresponding monomial $`m_v`$ is the highest weight monomial $`m_+`$, so the diagonal matrix element of the non-normalized $`R`$–matrix corresponding to $`vv_{\omega _i}`$ equals the eigenvalue of $`m_+`$ on $`v_{\omega _i}`$. By formula (2.16) the corresponding diagonal matrix element of the normalized $`R`$–matrix equals $`1`$, and hence the eigenvalue of $`m_+`$ on $`v_{\omega _i}`$ equals the scalar function $`f_{V,V_{\omega _i}(b)}(z)`$. It follows that the diagonal matrix element of the normalized $`R`$-matrix $`\overline{R}_{V,V_{\omega _i}(b)}(z)`$ corresponding to the vector $`vv_{\omega _i}`$ equals the eigenvalue of $`m_vm_+^1`$ on $`v_{\omega _i}`$. According to formula (2.14), $`A_{i,a}=\mathrm{\Phi }_i^{}(za)`$. Therefore, if $`m_v`$ is given by formula (2.17), formula (1.10) implies that this matrix element is given by formula (2.18). ∎
Note that by Theorem 4.1 below every monomial occurring in the $`q`$–character of an irreducible representation $`V`$ can be written in the form (2.17).
## 3. The homomorphisms $`\tau _J`$ and restrictions
### 3.1. Restriction to $`U_q\widehat{𝔤}_J`$
Given a subset $`J`$ of $`I`$, we denote by $`U_q\widehat{𝔤}_J`$ the subalgebra of $`U_q\widehat{𝔤}`$ generated by $`x_{i,n}^\pm ,\stackrel{~}{k}_i^{\pm 1},h_{i,r},iJ,n,r\backslash 0`$. Let
$$\mathrm{res}_J:\mathrm{Rep}U_q\widehat{𝔤}\mathrm{Rep}U_q\widehat{𝔤}_J$$
be the restriction map and $`\beta _J`$ be the homomorphism $`[Y_{i,a}^{\pm 1}]_{iI;a^\times }[Y_{i,a}^{\pm 1}]_{iJ;a^\times }`$, sending $`Y_{i,a}^{\pm 1}`$ to itself for $`iJ`$ and to $`1`$ for $`iJ`$.
According to Theorem 3(3) of \[FR2\], the diagram
$$\begin{array}{ccc}\mathrm{Rep}U_q\widehat{𝔤}& \stackrel{\chi _q}{}& [Y_{i,a}^{\pm 1}]_{iI;a^\times }\\ \mathrm{res}_J& & \beta _J& & \\ \mathrm{Rep}U_q\widehat{𝔤}_J& \stackrel{\chi _{q,J}}{}& [Y_{i,a}^{\pm 1}]_{iJ;a^\times }\end{array}$$
is commutative.
We will now refine the homomorphisms $`\beta _J`$ and $`\mathrm{res}_J`$.
### 3.2. The homomorphism $`\tau _J`$
Consider the elements $`\stackrel{~}{h}_{i,n}`$ defined by formula (2.2) and $`\stackrel{~}{k}_i^{\pm 1}`$ defined by formula (1.5).
###### Lemma 3.1.
$`\stackrel{~}{k}_ix_{j,n}^\pm \stackrel{~}{k}_i^1`$ $`=q^{\pm r_i\delta _{ij}}x_{j,n}^\pm ,`$
$`[\stackrel{~}{h}_{i,n},x_{j,m}^\pm ]`$ $`=\pm \delta _{ij}{\displaystyle \frac{[nr_i]_q}{n}}c^{|n|/2}x_{j,n+m}^\pm ,`$
$`[\stackrel{~}{h}_{i,n},h_{j,m}]`$ $`=\delta _{i,j}\delta _{n,m}{\displaystyle \frac{[nr_i]_q}{n}}{\displaystyle \frac{c^nc^n}{qq^1}}.`$
In particular, $`\stackrel{~}{k}_i^{\pm 1},\stackrel{~}{h}_{i,n},i\overline{J},n\backslash 0`$, where $`\overline{J}=IJ`$, commute with the subalgebra $`U_q\widehat{𝔤}_J`$ of $`U_q\widehat{𝔤}`$.
###### Proof.
These formulas follow from the relations given in Theorem 1.2 and the formula $`B(q)\stackrel{~}{C}(q)=D(q)`$. ∎
Denote by $`U_q\widehat{𝔥}_J^{}`$ the subalgebra of $`U_q\widehat{𝔤}`$ generated by $`\stackrel{~}{k}_i^{\pm 1},\stackrel{~}{h}_{i,n},i\overline{J},n0`$. Then $`U_q\widehat{𝔤}_JU_q\widehat{𝔥}_J^{}`$ is naturally a subalgebra of $`U_q\widehat{𝔤}`$. We can therefore refine the restriction from $`U_q\widehat{𝔤}`$–modules to $`U_q\widehat{𝔤}_J`$–modules by considering the restriction from $`U_q\widehat{𝔤}`$–modules to $`U_q\widehat{𝔤}_JU_q\widehat{𝔥}_J^{}`$–modules.
Thus, we look at the common (generalized) eigenvalues of the operators $`k_i^{\pm 1},h_{i,n},iJ`$, and $`\stackrel{~}{k}_i^{\pm 1},\stackrel{~}{h}_{i,n},i\overline{J}`$. We know that the eigenvalues of $`h_{i,n}`$ have the form (2.11). The corresponding eigenvalue of $`\stackrel{~}{h}_{i,n}`$ equals
(3.1)
$$\frac{[n]_q}{n}\underset{jI}{}\stackrel{~}{C}_{ji}(q^n)[r_j]_{q^n}\left(\underset{r=1}{\overset{k_j}{}}(a_{jr})^n\underset{s=1}{\overset{l_j}{}}(b_{js})^n\right),n>0.$$
According to Lemma 1.1, $`\stackrel{~}{C}_{ji}(x)=\stackrel{~}{C}_{ji}^{}(x)/d(x)`$, where $`\stackrel{~}{C}_{ji}^{}(x)`$ and $`d(x)`$ are certain polynomials with positive integral coefficients (we fix a choice of such $`d(x)`$ once and for all). Therefore formula (3.1) can be rewritten as
(3.2)
$$\frac{[n]_q}{nd(q^n)}\left(\underset{m=1}{\overset{u_i}{}}(c_{im})^n\underset{p=1}{\overset{t_i}{}}(d_{ip})^n\right),$$
where $`c_{im}`$ and $`d_{ip}`$ are certain complex numbers (they are obtained by multiplying $`a_{jr}`$ and $`b_{js}`$ with all monomials appearing in $`\stackrel{~}{C}_{ji}^{}(q)[r_j]_q`$).
According to Proposition 2.4, to each monomial (2.5) in $`\chi _q(V)`$ corresponds a generalized eigenspace of $`h_{i,n},iI,n0`$, with the common eigenvalues given by formula (2.11) (note that the eigenvalues of $`k_i,iI`$, can be read off from the weight of the monomial). Using formula (3.1) we find the corresponding eigenvalues of $`\stackrel{~}{h}_{i,n},i\overline{J}`$ in the form (3.2). Now we attach to these common eigenvalues the following monomial in the letters $`Y_{i,a}^{\pm 1},iJ`$, and $`Z_{j,c}^{\pm 1},j\overline{J}`$:
$$\left(\underset{iJ}{}\underset{r=1}{\overset{k_i}{}}Y_{i,a_{ir}}\underset{s=1}{\overset{l_i}{}}Y_{i,b_{is}}^1\right)\left(\underset{k\overline{J}}{}\underset{m=1}{\overset{u_k}{}}Z_{k,c_{km}}\underset{p=1}{\overset{t_k}{}}Z_{k,d_{kp}}^1\right).$$
The above procedure can be interpreted as follows. Introduce the notation
(3.3) $`𝒴=[Y_{i,a}^{\pm 1}]_{iI,a^\times },`$
(3.4) $`𝒴^{(J)}=[Y_{i,a}^{\pm 1}]_{iJ,a^\times }[Z_{k,c}^{\pm 1}]_{k\overline{J},c^\times }`$
Write
$$(D(q)\stackrel{~}{C}^{}(q))_{ij}=\underset{k}{}p_{ij}(k)q^k.$$
###### Definition 3.2.
The homomorphism $`\tau _J:𝒴𝒴^{(J)}`$ is defined by the formulas
(3.5) $`\tau _J(Y_{i,a})`$ $`=Y_{i,a}{\displaystyle \underset{j\overline{J}}{}}{\displaystyle \underset{k}{}}Z_{j,aq^k}^{p_{ij}(k)},iJ,`$
(3.6) $`\tau _J(Y_{i,a})`$ $`={\displaystyle \underset{j\overline{J}}{}}{\displaystyle \underset{k}{}}Z_{j,aq^k}^{p_{ij}(k)},i\overline{J}.`$
Observe that the homomorphism $`\beta _J`$ can be represented as the composition of $`\tau _J`$ and the homomorphism $`𝒴^{(J)}[Y_{i,a}^{\pm 1}]_{iJ,a^\times }`$ sending all $`Z_{k,c},k\overline{J},`$ to $`1`$. Therefore $`\tau _J`$ is indeed a refinement of $`\beta _J`$, and so the restriction of $`\tau _J`$ to the image of $`\mathrm{Rep}U_q\widehat{𝔤}`$ in $`𝒴`$ is a refinement of the restriction homomorphism $`\mathrm{res}_J`$.
### 3.3. Properties of $`\tau _J`$
The main advantage of $`\tau _J`$ over $`\beta _J`$ is the following.
###### Lemma 3.3.
The homomorphism $`\tau _J`$ is injective.
###### Proof.
The statement of the lemma follows from the fact that the matrix $`\stackrel{~}{C}^{}(q)`$ is non-degenerate. ∎
###### Lemma 3.4.
Let us write $`\tau _J(\chi _q(V))`$ as the sum $`_kP_kQ_k`$, where $`P_k[Y_{i,a}^{\pm 1}]_{iJ,a^\times }`$, $`Q_k`$ is a monomial in $`[Z_{j,c}^{\pm 1}]_{j\overline{J},c^\times }`$, and all monomials $`Q_k`$ are distinct. Then the restriction of $`V`$ to $`U_q\widehat{𝔤}_J`$ is isomorphic to $`_kV_k`$, where the $`V_k`$’s are $`U_q\widehat{𝔤}_J`$–modules with $`\chi _q^J(V_k)=P_k`$. In particular, there are no extensions between different $`V_k`$’s in $`V`$.
###### Proof.
The monomials in $`\chi _q(V)𝒴`$ encode the common eigenvalues of $`h_{i,n},iI`$ on $`V`$. It follows from Section 3.2 that the monomials in $`\tau _J(\chi _q(V))`$ encode the common eigenvalues of $`h_{i,n},iJ`$, and $`\stackrel{~}{h}_{j,n},j\overline{J}`$, on $`V`$.
Therefore we obtain that the restriction of $`V`$ to $`U_q\widehat{𝔤}_JU_q\widehat{𝔥}_J^{}`$ has a filtration with the associated graded factors $`V_kW_k`$, where $`V_k`$ is a $`U_q\widehat{𝔤}_J`$–module with $`\chi _q^J(V_k)=P_k`$, and $`W_k`$ is a one–dimensional $`U_q\widehat{𝔥}_J^{}`$–module, which corresponds to $`Q_k`$. By our assumption, the modules $`W_k`$ over $`U_q\widehat{𝔥}_J^{}`$ are pairwise distinct. Because $`U_q\widehat{𝔥}_J^{}`$ commutes with $`U_q\widehat{𝔤}_J`$, there are no extensions between $`V_kW_k`$ and $`V_lW_l`$ for $`kl`$, as $`U_q\widehat{𝔤}_JU_q\widehat{𝔥}_J^{}`$–modules. Hence the restriction of $`V`$ to $`U_q\widehat{𝔤}_J`$ is isomorphic to $`_kV_k`$. ∎
Write
$$d(q)[r_i]_q=\underset{k}{}s_i(k)q^k.$$
Set
$$B_{i,a}=\underset{k}{}Z_{i,aq^k}^{s_i(k)}.$$
###### Lemma 3.5.
We have:
(3.7) $`\tau _J(A_{i,a})`$ $`=\beta _J(A_{i,a}),iJ,`$
(3.8) $`\tau _J(A_{i,a})`$ $`=\beta _J(A_{i,a})B_{i,a},i\overline{J}`$.
###### Proof.
This follows from the formula $`D(q)\stackrel{~}{C}^{}(q)C(q)=D(q)d(q)`$. ∎
In the case when $`J`$ consists of a single element $`jI`$, we will write $`𝒴^{(J)},\tau _J`$ and $`\beta _J`$ simply as $`𝒴^{(j)},\tau _j`$ and $`\beta _j`$. Consider the diagram (we use the notation (3.3), (3.4)):
(3.12) $`\begin{array}{ccc}𝒴& \stackrel{\tau _j}{}& 𝒴^{(j)}\\ & & \overline{A}_{j,x}^1\\ 𝒴& \stackrel{\tau _j}{}& 𝒴^{(j)}\end{array}`$
where the map corresponding to the right vertical arrow is the multiplication by $`\beta _j(A_{j,x})^11`$.
The following result will allow us to reduce various statements to the case of $`U_q\widehat{𝔰𝔩}_2`$.
###### Lemma 3.6.
There exists a unique map $`𝒴𝒴`$, which makes the diagram $`(`$3.12$`)`$ commutative. This map is the multiplication by $`A_{j,x}^1`$.
###### Proof.
The fact that multiplication by $`A_{j,x}^1`$ makes the diagram commutative follows from formula (3.7). The uniqueness follows from the fact that $`\tau _j`$ and the multiplication by $`\beta _j(A_{j,x})^11`$ are injective maps. ∎
## 4. The structure of $`q`$–characters
In this section we prove Conjecture 1 from \[FR2\].
Let $`V`$ be an irreducible finite-dimensional $`U_q\widehat{𝔤}`$–module generated by a highest weight vector $`v`$. Then by Proposition 3 in \[FR2\],
(4.1) $`\chi _q(V)=m_+(1+{\displaystyle \underset{p}{}}M_p),`$
where each $`M_p`$ is a monomial in $`A_{i,c}^{\pm 1}`$, $`iI`$, $`c^\times `$, and $`m_+`$ is the highest weight monomial.
In what follows, by a monomial in $`[x_\alpha ^{\pm 1}]_{\alpha A}`$ we will always understand a monomial in reduced form, i.e., one that does not contain factors of the form $`x_\alpha x_\alpha ^1`$. Thus, in particular, if we say that a monomial $`M`$ contains $`x_\alpha `$, it means that there is a factor $`x_\alpha `$ in $`M`$ which cannot be cancelled.
###### Theorem 4.1.
The $`q`$–character of an irreducible finite-dimensional $`U_q\widehat{𝔤}`$ module $`V`$ has the form $`(`$4.1$`)`$ where each $`M_p`$ is a monomial in $`A_{i,c}^1`$, $`iI`$, $`c^\times `$ (i.e., it does not contain any factors $`A_{i,c}`$).
###### Proof.
The proof follows from a combination of Lemmas 3.3, 3.6 and 1.1.
First, we observe that it suffices to prove the statement of Theorem 4.1 for fundamental representations $`V_{\omega _i}(a)`$. Indeed, then Theorem 4.1 will be true for any tensor product of the fundamental representations. By Corollary 1.4, any irreducible representation $`V`$ can be represented as a quotient of a submodule of a tensor product $`W`$ of fundamental representations, which is generated by the highest weight vector. Therefore each monomial in a $`q`$–character of $`V`$ is also a monomial in the $`q`$–character of $`W`$. In addition, the highest weight monomials of the $`q`$–characters of $`V`$ and $`W`$ coincide. This implies that Theorem 4.1 holds for $`V`$.
Second, Theorem 4.1 is true for $`U_q\widehat{𝔰𝔩}_2`$. Indeed, by the argument above, it suffices to check the statement for the fundamental representation $`V_1(a)`$. But its $`q`$–character is known explicitly (see \[FR2\], formula (4.3)):
(4.2) $`\chi _q(V_1(a))=Y_a+Y_{aq^2}^1=Y_a(1+A_{aq}^1),`$
and it satisfies the required property.
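In the monomial algebra this computation is one line. A small sketch (our own encoding, not from \[FR2\]): a monomial in the variables $`Y_{aq^k}^{\pm 1}`$ is stored as a map from the power $`k`$ of $`q`$ to the exponent.

```python
from collections import Counter

def mul(m1, m2):
    """Multiply two monomials, cancelling factors with exponent 0."""
    out = Counter(m1)
    out.update(m2)
    return Counter({k: v for k, v in out.items() if v != 0})

def A_inv(k):
    # In the sl2 case, formula (2.15) gives A_{aq^k} = Y_{aq^{k+1}} Y_{aq^{k-1}},
    # so its inverse contributes exponent -1 at both k+1 and k-1.
    return Counter({k + 1: -1, k - 1: -1})

m_plus = Counter({0: 1})        # highest weight monomial Y_a
lower = mul(m_plus, A_inv(1))   # Y_a * A_{aq}^{-1}
```

Here `lower` comes out as $`Y_{aq^2}^1`$, reproducing the two terms of (4.2).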
For general quantum affine algebra $`U_q\widehat{𝔤}`$, we will prove Theorem 4.1 (for the case of the fundamental representations) by contradiction.
Suppose that the theorem fails for some fundamental representation $`V_{\omega _{i_0}}(a)=V`$ and denote by $`\chi `$ its $`q`$–character $`\chi _q(V)`$. Denote by $`m_+`$ the highest weight monomial $`Y_{i_0,a}`$ of $`\chi `$.
Recall from Section 1.3 that we have a partial order on the weight lattice. It induces a partial order on the monomials occurring in $`\chi `$. Let $`m`$ be a monomial of maximal weight in $`\chi `$ among those that cannot be written as a product of $`m_+`$ with a monomial in $`A_{i,c}^1`$, $`iI`$, $`c^\times `$. This means that
(4.3) any monomial $`m^{}`$ in $`\chi `$, such that $`m^{}>m`$, is a product of $`m_+`$ and $`A_{i,c}^1`$’s.
In Lemmas 4.2 and 4.3 we will establish certain properties of $`m`$ and in Lemma 4.4 we will prove that these properties can not be satisfied simultaneously.
Recall that a monomial in $`[Y_{i,a}^{\pm 1}]_{iI,a^\times }`$ is called dominant if it does not contain factors $`Y_{i,a}^1`$ (i.e., if it is a product of $`Y_{i,a}`$’s in positive powers only).
###### Lemma 4.2.
The monomial $`m`$ is dominant.
###### Proof.
Suppose $`m`$ is not dominant. Then it contains a factor of the form $`Y_{i,a}^1`$, for some $`iI`$. Consider $`\tau _i(\chi )`$. By Lemma 3.4, we have
$$\tau _i(\chi )=\underset{p}{}\chi _{q_i}(V_p)N_p,$$
where $`V_p`$’s are representation of $`U_{q_i}\widehat{𝔰𝔩}_2=U_q\widehat{𝔤}_{\{i\}}`$ and $`N_p`$’s are monomials in $`Z_{j,a}^{\pm 1},ji`$. We have already shown that Theorem 4.1 holds for $`U_{q_i}\widehat{𝔰𝔩}_2`$, so
(4.4) $`\tau _i(\chi )={\displaystyle \underset{p}{}}\left(m_p(1+{\displaystyle \underset{r}{}}\overline{M}_{r,p})\right)N_p,`$
where each $`m_p`$ is a product of $`Y_{i,b}`$’s (in positive powers only), and each $`\overline{M}_{r,p}`$ is a product of several factors $`\overline{A}_{i,c}^1=Y_{i,cq^1}^1Y_{i,cq}^1`$ (note that $`\overline{M}_{r,p}=\tau _i(M_{r,p})`$).
Since $`m`$ contains $`Y_{i,a}^1`$ by our assumption, the monomial $`\tau _i(m)`$ is not among the monomials $`\{m_pN_p\}`$. Hence
$$\tau _i(m)=m_{p_0}\overline{M}_{r_0,p_0}N_{p_0},$$
for some $`p_0,r_0`$ and $`\overline{M}_{r_0,p_0}1`$. There exists a monomial $`m^{}`$ in $`\chi `$, such that $`\tau _i(m^{})=m_{p_0}N_{p_0}`$. Therefore using Lemma 3.6 we obtain that
$$m=m^{}M_{r_0,p_0},$$
where $`M_{r_0,p_0}`$ is obtained from $`\overline{M}_{r_0,p_0}`$ by replacing all $`\overline{A}_{i,c}^1`$ by $`A_{i,c}^1`$. In particular, $`m^{}>m`$ and by our assumption (4.3) it can be written as $`m^{}=m_+M^{}`$, where $`M^{}`$ is a product of $`A_{k,c}^1`$. But then $`m=m^{}M_{r_0,p_0}=m_+M^{}M_{r_0,p_0}`$, and so $`m`$ can be written as a product of $`m_+`$ and a product of factors $`A_{k,c}^1`$. This is a contradiction. Therefore $`m`$ has to be dominant. ∎
###### Lemma 4.3.
The monomial $`m`$ can be written in the form
(4.5) $`m=m_+M{\displaystyle \underset{p}{}}A_{j_0,a_p},`$
where $`M`$ is a product of factors $`A_{i,c}^1`$, $`iI`$, $`c^\times `$. In other words, if $`m`$ contains factors $`A_{j,a}`$, then all such $`A_{j,a}`$ have the same index $`j=j_0`$.
###### Proof.
Suppose that $`m=m_+M`$, where $`M`$ contains a factor $`A_{i,c}`$. Let $`V_m`$ be the generalized eigenspace of the operators $`k_j^{\pm 1},h_{j,n},jI`$, corresponding to the monomial $`m`$. We claim that for all $`vV_m`$ we have:
(4.6) $`x_{j,n}^+v=0,jI,ji,n.`$
Indeed, let $`\tau _j(m)=\beta _j(m)N`$ (recall that $`\beta _j(m)`$ is obtained from $`m`$ by erasing all $`Y_{s,c}`$ with $`sj`$ and $`N`$ is a monomial in $`Z_{s,c}^{\pm 1}`$, $`sI`$, $`sj`$). By Lemma 3.4, $`x_{j,n}^+v`$ belongs to the direct sum of the generalized eigenspaces $`V_{m_p^{}}`$, corresponding to the monomials $`m_p^{}`$ in $`\chi `$ such that $`\tau _j(m_p^{})=\beta _j(m_p^{})N`$ (with the same $`N`$ as in $`\tau _j(m)=\beta _j(m)N`$). By formula (3.8),
$$\tau _j\left(m_+A_{i_k,c_k}^{\pm 1}\right)=\tau _j(m_+)\beta _j(A_{i_k,c_k})^{\pm 1}\underset{i_kj}{}B_{i_k,c_k}^{\pm 1}.$$
In particular, $`N`$ contains a factor $`B_{i,c}`$, and therefore all monomials $`m_p^{}`$ with the above property must contain a factor $`A_{i,c}`$. By our assumption (4.3), the weight of each $`m_p^{}`$ cannot be higher than the weight of $`m`$. But the weight of $`x_{j,n}^+v`$ should be greater than the weight of $`m`$. Therefore we obtain formula $`(`$4.6$`)`$.
Now, if $`M`$ contained factors $`A_{i,c}`$ and $`A_{j,d}`$ with $`ij`$, then any non-zero eigenvector (not generalized) in the generalized eigenspace $`V_m`$ corresponding to $`m`$ would be a highest weight vector (see formula (1.9)). Such vectors do not exist in $`V`$, because $`V`$ is irreducible. The statement of the lemma now follows. ∎
###### Lemma 4.4.
Let $`m`$ be any monomial in the $`q`$–character of a fundamental representation that can be written in the form $`(`$4.5$`)`$. Then $`m`$ is not dominant.
###### Proof.
We say that a monomial $`M\in 𝒴`$ (see (3.3)) has lattice support with base $`a_0\in \mathbb{C}^\times `$ if $`M\in \mathbb{Z}[Y_{i,a_0q^k}^{\pm 1}]_{i\in I,k\in \mathbb{Z}}`$.
Any monomial $`m\in 𝒴`$ can be uniquely written as a product $`m=m^{(1)}\mathrm{}m^{(s)}`$, where each monomial $`m^{(i)}`$ has lattice support with a base $`a_i`$, and $`a_i/a_j\notin q^{\mathbb{Z}}`$ for $`i\ne j`$. Note that a non-constant monomial in $`A_{i,bq^k}^{\pm 1}`$, $`i\in I`$, $`k\in \mathbb{Z}`$, can not be equal to a monomial in $`A_{i,cq^k}^{\pm 1}`$, $`i\in I`$, $`k\in \mathbb{Z}`$, if $`b/c\notin q^{\mathbb{Z}}`$. Therefore if $`m`$ can be written in the form $`(`$4.5$`)`$, then each $`m^{(i)}`$ can be written in the form $`(`$4.5$`)`$, where $`m_+=Y_{i_0,a}`$ if $`a_i=a`$, and $`m_+=1`$ if $`a/a_i\notin q^{\mathbb{Z}}`$ (note that the product over $`p`$ in (4.5) may be empty for some $`m^{(i)}`$). We will prove that none of the $`m^{(i)}`$’s is dominant unless $`m^{(i)}=m_+`$ or $`m^{(i)}=1`$.
Consider first the case of $`m^{(1)}`$, which has lattice support with base $`a`$. Then
$$m^{(1)}=\prod _{i\in I}\prod _{n\in \mathbb{Z}}Y_{i,aq^n}^{p_i(n)}.$$
Define Laurent polynomials $`P_i(x)`$, $`i\in I`$, by
$$P_i(x)=\sum _np_i(n)x^n.$$
If $`m^{(1)}`$ can be written in the form $`(`$4.5$`)`$, then
(4.7) $`P_i(x)=-{\displaystyle \sum _{j\in I}}C_{ij}(x)R_j(x)+\delta _{i,i_0},\qquad i\in I,`$
where the $`R_j(x)`$’s are some polynomials with integral coefficients. All of these coefficients are non-negative if $`j\ne j_0`$. Now suppose that $`m^{(1)}`$ is a dominant monomial. Then each $`P_i(x)`$ is a polynomial with non-negative coefficients. We claim that this is possible only if all $`R_i(x)=0`$.
Indeed, according to Lemma 1.1, the coefficients of the inverse matrix to $`C(x)`$, $`\stackrel{~}{C}(x)`$, can be written in the form $`(`$1.2$`)`$, where $`\stackrel{~}{C}_{jk}^{}(x)`$, $`d(x)`$ are polynomials with non-negative coefficients. Multiplying $`(`$4.7$`)`$ by $`\stackrel{~}{C}^{}(x)`$, we obtain
(4.8) $`{\displaystyle \sum _{j\in I}}P_j(x)\stackrel{~}{C}_{jk}^{\prime }(x)+d(x)R_k(x)=\stackrel{~}{C}_{i_0,k}^{\prime }(x),\qquad k\in I.`$
Given a Laurent polynomial
$$p(x)=\sum _{-r\le i\le s}p_ix^i,\qquad p_{-r}\ne 0,p_s\ne 0,$$
we will say that the length of $`p(x)`$ equals $`r+s`$. Clearly, the length of the sum and of the product of two polynomials with non-negative coefficients is greater than or equal to the length of each of them. Therefore if $`k\ne j_0`$ and $`R_k(x)\ne 0`$, then the length of the LHS is greater than or equal to the length of $`d(x)`$, which is greater than the length of $`\stackrel{~}{C}_{i_0,k}^{\prime }`$ by Lemma 1.1. This implies that $`R_k(x)=0`$ for $`k\ne j_0`$.
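To illustrate the notion of length used in this argument (an example we add for concreteness): for
$$p(x)=2x^{-1}+3+x^2,$$
we have $`r=1`$, $`s=2`$, so the length of $`p(x)`$ equals $`3`$; and, e.g., the product $`(1+x)(x^{-1}+1)=x^{-1}+2+x`$ of two polynomials of length $`1`$ has length $`2`$, in agreement with the inequality above.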
Hence $`m^{(1)}`$ can be written in the form
$$m^{(1)}=Y_{i,a}\prod _nA_{j_0,aq^n}^{c_n}.$$
But such a monomial can not be dominant, because its weight is $`\omega _i-n\alpha _{j_0}`$, where $`n>0`$, and such a weight is not dominant. This proves the required statement for the factor $`m^{(1)}`$ of $`m`$ (which has lattice support with base $`a`$).
Now consider a factor $`m^{(i)}`$ with lattice support with base $`b`$, such that $`b/a\notin q^{\mathbb{Z}}`$. In this case we obtain the same equation as (4.8), but with the right hand side replaced by $`0`$. The previous discussion immediately implies that this equation has no solutions with non-zero polynomials $`R_k(x)`$ satisfying the above conditions. This completes the proof of the lemma. ∎
Theorem 4.1 now follows from Lemmas 4.2, 4.3 and 4.4. ∎
###### Corollary 4.5.
The only dominant monomial in $`\chi _q(V_{\omega _i}(a))`$ is the highest weight monomial $`Y_{i,a}`$.
###### Proof.
This follows from the proof of Lemma 4.4. ∎
## 5. A characterization of $`q`$–characters in terms of the screening operators
In this section we prove Conjecture 2 from \[FR2\].
### 5.1. Definition of the screening operators
First we recall the definition of the screening operators on $`𝒴=[Y_{i,a}^{\pm 1}]_{iI;a^\times }`$ from \[FR2\] and state the main result.
Consider the free $`𝒴`$–module with generators $`S_{i,x}`$, $`x\in \mathbb{C}^\times `$,
$$\stackrel{~}{𝒴}_i=\underset{x\in \mathbb{C}^\times }{\bigoplus }𝒴S_{i,x}.$$
Let $`𝒴_i`$ be the quotient of $`\stackrel{~}{𝒴}_i`$ by the relations
(5.1)
$$S_{i,xq_i^2}=A_{i,xq_i}S_{i,x}.$$
Clearly,
$$𝒴_i\simeq \underset{x\in (\mathbb{C}^\times /q_i^{2\mathbb{Z}})}{\bigoplus }𝒴S_{i,x},$$
and so $`𝒴_i`$ is also a free $`𝒴`$–module.
Define a linear operator $`\stackrel{~}{S}_i:𝒴\stackrel{~}{𝒴}_i`$ by the formula
$$\stackrel{~}{S}_i(Y_{j,a})=\delta _{ij}Y_{i,a}S_{i,a}$$
and the Leibniz rule: $`\stackrel{~}{S}_i(ab)=b\stackrel{~}{S}_i(a)+a\stackrel{~}{S}_i(b)`$. In particular,
$$\stackrel{~}{S}_i(Y_{j,a}^{-1})=-\delta _{ij}Y_{i,a}^{-1}S_{i,a}.$$
Finally, let
$$S_i:𝒴𝒴_i$$
be the composition of $`\stackrel{~}{S}_i`$ and the projection $`\stackrel{~}{𝒴}_i𝒴_i`$. We call $`S_i`$ the $`i`$th screening operator.
The following statement was conjectured in \[FR2\] (Conjecture 2).
###### Theorem 5.1.
The image of the homomorphism $`\chi _q`$ equals the intersection of the kernels of the operators $`S_i,iI`$.
In \[FR2\] this theorem was proved in the case of $`U_q\widehat{𝔰𝔩}_2`$. In the rest of this section we prove it for an arbitrary $`U_q\widehat{𝔤}`$.
### 5.2. Description of $`\mathrm{Ker}S_i`$
First, we describe the kernel of $`S_i`$ on $`𝒴`$. The following result was announced in \[FR2\], Proposition 6.
###### Proposition 5.2.
The kernel of $`S_i:𝒴𝒴_i`$ equals
(5.2) $`𝒦_i=\mathbb{Z}[Y_{j,a}^{\pm 1}]_{j\ne i;a\in \mathbb{C}^\times }\otimes \mathbb{Z}[Y_{i,b}+Y_{i,b}A_{i,bq_i}^{-1}]_{b\in \mathbb{C}^\times }.`$
###### Proof.
A simple computation shows that $`𝒦_i\subset \mathrm{Ker}_𝒴S_i`$. Let us show that $`\mathrm{Ker}_𝒴S_i\subset 𝒦_i`$.
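For the reader’s convenience, we spell out the computation behind the first inclusion (a direct application of the Leibniz rule and relation (5.1)). Write $`Y_{i,b}A_{i,bq_i}^{-1}=NY_{i,bq_i^2}^{-1}`$, where $`N`$ is a monomial in the $`Y_{j,c}`$, $`j\ne i`$, which is annihilated by $`S_i`$. Then
$$S_i(Y_{i,b}+Y_{i,b}A_{i,bq_i}^{-1})=Y_{i,b}S_{i,b}-NY_{i,bq_i^2}^{-1}S_{i,bq_i^2}=Y_{i,b}S_{i,b}-Y_{i,b}A_{i,bq_i}^{-1}A_{i,bq_i}S_{i,b}=0,$$
since $`S_{i,bq_i^2}=A_{i,bq_i}S_{i,b}`$ in $`𝒴_i`$.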
For $`x\in \mathbb{C}^\times `$, denote by $`𝒴(x)`$ the subring $`\mathbb{Z}[Y_{j,xq^n}^{\pm 1}]_{j\in I,n\in \mathbb{Z}}`$ of $`𝒴`$. We have:
$$𝒴\simeq \underset{x\in (\mathbb{C}^\times /q^{\mathbb{Z}})}{\bigotimes }𝒴(x).$$
###### Lemma 5.3.
$$\mathrm{Ker}_𝒴S_i=\underset{x(^\times /q^{})}{}\mathrm{Ker}_{𝒴(x)}S_i.$$
###### Proof.
Let $`P\in 𝒴`$, and suppose it contains $`Y_{j,a}^{\pm 1}`$ for some $`a\in \mathbb{C}^\times `$ and $`j\in I`$. Then we can write $`P`$ as the sum $`\sum _kR_kQ_k`$, where the $`Q_k`$’s are distinct monomials, which are products of the factors $`Y_{s,aq^n}^{\pm 1}`$, $`s\in I`$, $`n\in \mathbb{Z}`$ (in particular, one of the $`Q_k`$’s could be equal to $`1`$), and the $`R_k`$’s are polynomials which do not contain $`Y_{s,aq^n}^{\pm 1}`$, $`s\in I`$, $`n\in \mathbb{Z}`$. Then
$$S_i(P)=\sum _k(Q_kS_i(R_k)+R_kS_i(Q_k)).$$
By the definition of $`S_i`$, $`S_i(Q_k)`$ belongs to $`𝒴S_{i,a}`$, while $`S_i(R_k)`$ belongs to the direct sum of the $`𝒴S_{i,b}`$, where $`b\notin aq^{\mathbb{Z}}`$.
Therefore if $`P\in \mathrm{Ker}_𝒴S_i`$, then $`\sum _kQ_kS_i(R_k)=0`$. Since the $`Q_k`$’s are distinct, we obtain that $`R_k\in \mathrm{Ker}_𝒴S_i`$. But then $`S_i(P)=\sum _kR_kS_i(Q_k)`$. Therefore $`P`$ can be written as $`\sum _lR_l\stackrel{~}{Q}_l`$, where each $`\stackrel{~}{Q}_l`$ is a linear combination of the $`Q_k`$’s, such that $`\stackrel{~}{Q}_l\in \mathrm{Ker}_𝒴S_i`$. This proves that
$$P\in \mathrm{Ker}_{𝒴(a)}S_i\otimes \mathrm{Ker}_{𝒴^{\prime }(a)}S_i,$$
where $`𝒴^{\prime }(a)=\mathbb{Z}[Y_{j,b}^{\pm 1}]_{j\in I,b\notin aq^{\mathbb{Z}}}`$. By repeating this procedure we obtain the lemma (because each polynomial contains a finite number of variables $`Y_{j,a}^{\pm 1}`$, we need to apply this procedure only finitely many times). ∎
According to Lemma 5.3, it suffices to show that $`\mathrm{Ker}_{𝒴(x)}S_i\subset 𝒦_i(x)`$, where
$$𝒦_i(x)=\mathbb{Z}[Y_{j,xq^n}^{\pm 1}]_{j\ne i;n\in \mathbb{Z}}\otimes \mathbb{Z}[Y_{i,xq^n}+Y_{i,xq^n}A_{i,xq^nq_i}^{-1}]_{n\in \mathbb{Z}}.$$
Denote $`Y_{j,xq^n}`$ by $`y_{j,n}`$, $`A_{j,xq^n}`$ by $`a_{j,n}`$, and $`A_{j,xq^n}Y_{j,xq^nq_j}^{-1}Y_{j,xq^nq_j^{-1}}^{-1}`$ by $`\overline{a}_{j,n}`$. Note that $`\overline{a}_{j,n}`$ does not contain factors $`y_{j,m}^{\pm 1}`$, $`m\in \mathbb{Z}`$.
Let $`T`$ be the shift operator on $`𝒴(x)`$ sending $`y_{j,n}`$ to $`y_{j,n+1}`$ for all $`j\in I`$. It follows from the definition of $`S_i`$ that $`P\in \mathrm{Ker}_{𝒴(x)}S_i`$ if and only if $`T(P)\in \mathrm{Ker}_{𝒴(x)}S_i`$. Therefore (applying $`T^m`$ with large enough $`m`$ to $`P`$) we can assume without loss of generality that $`P\in \mathbb{Z}[y_{i,n},y_{i,n+2r_i}^{-1}]_{n\ge 0}\otimes \mathbb{Z}[y_{j,n}^{\pm 1}]_{j\ne i,n\ge 0}`$.
We find from the definition of $`S_i`$:
$`S_i(y_{j,n})`$ $`=0,\qquad j\ne i,`$
(5.3) $`S_i(y_{i,2r_in+ϵ})`$ $`=y_{i,ϵ}{\displaystyle \prod _{k=1}^{n}}y_{i,2r_ik+ϵ}^2\overline{a}_{i,r_i(2k-1)+ϵ}S_{i,xq^ϵ},`$
where $`ϵ\in \{0,1,\mathrm{},2r_i-1\}`$. Therefore each $`P\in \mathrm{Ker}_{𝒴(x)}S_i`$ can be written as a sum $`P=\sum _ϵP_ϵ`$, where each $`P_ϵ\in \mathrm{Ker}_{𝒴(x)}S_i`$ and
$$P_ϵ\in \mathbb{Z}[y_{i,2r_in+ϵ},y_{i,2r_i(n+1)+ϵ}^{-1}]_{n\ge 0}\otimes \mathbb{Z}[y_{j,n}^{\pm 1}]_{j\ne i,n\ge 0}.$$
It suffices to consider the case $`ϵ=0`$. Thus, we show that if
$$P\in 𝒴_i^0(x)=\mathbb{Z}[y_{i,2r_in},y_{i,2r_i(n+1)}^{-1}]_{n\ge 0}\otimes \mathbb{Z}[y_{j,n}^{\pm 1}]_{j\ne i,n\ge 0},$$
then
$$P\in 𝒦_i^0(x)=\mathbb{Z}[t_n]_{n\ge 0}\otimes \mathbb{Z}[y_{j,n}^{\pm 1}]_{j\ne i,n\ge 0},$$
where
$$t_n=y_{i,2r_in}+y_{i,2r_in}a_{i,r_i(2n+1)}^{-1}=y_{i,2r_in}+y_{i,2r_i(n+1)}^{-1}\overline{a}_{i,r_i(2n+1)}^{-1}.$$
Consider the homomorphism $`𝒦_i^0(x)[y_{i,2r_in}]_{n\ge 0}\to 𝒴_i^0(x)`$ sending $`y_{j,n}^{\pm 1}`$, $`j\ne i`$, to $`y_{j,n}^{\pm 1}`$, $`y_{i,2r_in}`$ to $`y_{i,2r_in}`$, and $`t_n`$ to $`y_{i,2r_in}+y_{i,2r_i(n+1)}^{-1}\overline{a}_{i,r_i(2n+1)}^{-1}`$. This homomorphism is surjective, and its kernel is generated by the elements
(5.4)
$$(t_n-y_{i,2r_in})\overline{a}_{i,r_i(2n+1)}y_{i,2r_i(n+1)}-1.$$
Therefore we identify $`𝒴_i^0(x)`$ with the quotient of $`𝒦_i^0(x)[y_{i,2r_in}]_{n0}`$ by the ideal generated by elements of the form (5.4).
Consider the set of monomials
$$t_{n_1}\mathrm{}t_{n_k}y_{i,2r_im_1}\mathrm{}y_{i,2r_im_l}\prod _{j\ne i,p_j\ge 0}y_{j,p_j}^{\pm 1},$$
where $`n_1\ge n_2\ge \mathrm{}\ge n_k\ge 0`$, $`m_1\ge m_2\ge \mathrm{}\ge m_l\ge 0`$, and also $`m_j\ne n_i+1`$ for all $`i`$ and $`j`$. We call these monomials reduced. It is easy to see that the set of reduced monomials is a basis of $`𝒴_i^0(x)`$.
Now let $`P`$ be an element of the kernel of $`S_i`$ on $`𝒴_i^0(x)`$. Let us write it as a linear combination of the reduced monomials. We represent $`P`$ as $`y_{i,2r_iN}^aQ+R`$. Here $`N`$ is the largest integer, such that $`y_{i,2r_iN}`$ is present in at least one of the basis monomials appearing in its decomposition; $`a>0`$ is the largest power of $`y_{i,2r_iN}`$ in $`P`$; $`Q0`$ does not contain $`y_{i,2r_iN}`$, and $`R`$ is not divisible by $`y_{i,2r_iN}^a`$. Recall that here both $`y_{i,2r_iN}^aQ`$ and $`R`$ are linear combinations of reduced monomials.
Recall that $`S_i(t_n)=0`$, $`S_i(y_{j,n}^{\pm 1})=0,ji`$, and $`S_i(y_{i,2r_in})`$ is given by formula (5.3).
Suppose that $`N>0`$. According to formula (5.3),
(5.5)
$$S_i(P)=ay_{i,2r_iN}^{a+1}\prod _{k=1}^{N-1}y_{i,2r_ik}\prod _{l=1}^{N}\overline{a}_{i,r_i(2l-1)}y_{i,0}QS_{i,x}+\mathrm{}$$
where the dots represent the sum of terms that are not divisible by $`y_{i,2r_iN}^{a+1}`$. Note that the first term in (5.5) is non-zero because the ring $`𝒴_i^0(x)`$ has no divisors of zero.
The monomials appearing in (5.5) are not necessarily reduced. However, by construction, $`Q`$ does not contain $`t_{N1}`$, for otherwise $`y_{i,2r_iN}^aQ`$ would not be a linear combination of reduced monomials. Therefore when we rewrite (5.5) as a linear combination of reduced monomials, each reduced monomial occurring in this linear combination is still divisible by $`y_{i,2r_iN}^{a+1}`$. On the other hand, no reduced monomials occurring in the other terms of $`S_i(P)`$ (represented by dots) are divisible by $`y_{i,2r_iN}^{a+1}`$. Hence for $`P`$ to be in the kernel, the first term of (5.5) has to vanish, which is impossible. Therefore $`P`$ does not contain $`y_{i,2r_im}`$’s with $`m>0`$.
But then $`P=\sum _ky_{i,0}^{p_k}R_k`$, where $`R_k\in 𝒦_i^0(x)`$, and $`S_i(P)=\sum _kp_ky_{i,0}^{p_k-1}R_kS_{i,x}`$. Such a $`P`$ is in the kernel of $`S_i`$ if and only if all $`p_k=0`$, and so $`P\in 𝒦_i^0(x)`$. This completes the proof of Proposition 5.2. ∎
Set
(5.6) $`𝒦={\displaystyle \bigcap _{i\in I}}𝒦_i={\displaystyle \bigcap _{i\in I}}\left(\mathbb{Z}[Y_{j,a}^{\pm 1}]_{j\ne i;a\in \mathbb{C}^\times }\otimes \mathbb{Z}[Y_{i,b}+Y_{i,b}A_{i,bq_i}^{-1}]_{b\in \mathbb{C}^\times }\right).`$
Now we will prove that the image of the $`q`$–character homomorphism $`\chi _q`$ equals $`𝒦`$.
### 5.3. The image of $`\chi _q`$ is a subspace of $`𝒦`$
First we show that the image of $`\mathrm{Rep}U_q\widehat{𝔤}`$ in $`𝒴`$ under the $`q`$–character homomorphism belongs to the kernel of $`S_i`$.
Recall the ring $`𝒴^{(i)}=\mathbb{Z}[Y_{i,a}^{\pm 1}]_{a\in \mathbb{C}^\times }\otimes \mathbb{Z}[Z_{j,c}^{\pm 1}]_{j\ne i,c\in \mathbb{C}^\times }`$ and the homomorphism $`\tau _i:𝒴\to 𝒴^{(i)}`$ from Section 3.2.
Let $`\overline{𝒴}_i`$ be the quotient of $`\underset{x\in \mathbb{C}^\times }{\bigoplus }\mathbb{Z}[Y_{i,a}^{\pm 1}]_{a\in \mathbb{C}^\times }S_{i,x}`$ by the submodule generated by the elements of the form $`S_{i,xq_i^2}-\overline{A}_{i,xq_i}S_{i,x}`$, where $`\overline{A}_{i,xq_i}=Y_{i,x}Y_{i,xq_i^2}`$. Define a derivation $`\overline{S}_i:\mathbb{Z}[Y_{i,a}^{\pm 1}]_{a\in \mathbb{C}^\times }\to \overline{𝒴}_i`$ by the formula $`\overline{S}_i(Y_{i,a})=Y_{i,a}S_{i,a}`$. Thus, $`\overline{𝒴}_i`$ coincides with the module $`𝒴_i`$ in the case of $`U_{q_i}\widehat{𝔰𝔩}_2`$ and $`\overline{S}_i`$ is the corresponding screening operator.
Set
$$𝒴_i^{(i)}=\mathbb{Z}[Z_{j,c}^{\pm 1}]_{j\ne i,c\in \mathbb{C}^\times }\otimes \overline{𝒴}_i.$$
The map $`\overline{S}_i`$ can be extended uniquely to a map $`𝒴^{(i)}\to 𝒴_i^{(i)}`$ by $`\overline{S}_i(Z_{j,c})=0`$ for all $`j\ne i`$, $`c\in \mathbb{C}^\times `$, and the Leibniz rule. We will also denote it by $`\overline{S}_i`$. The embedding $`\tau _i`$ gives rise to an embedding $`𝒴_i\to 𝒴_i^{(i)}`$, which we also denote by $`\tau _i`$.
###### Lemma 5.4.
The following diagram is commutative
$$\begin{array}{ccc}𝒴& \stackrel{\tau _i}{}& 𝒴^{(i)}\\ S_i& & \overline{S}_i& & \\ 𝒴_i& \stackrel{\tau _i}{}& 𝒴_i^{(i)}\end{array}$$
###### Proof.
Since $`\tau _i`$ is a ring homomorphism and both $`S_i`$, $`\overline{S}_i`$ are derivations, it suffices to check commutativity on the generators. Let us choose a representative $`x`$ in each $`q_i^2`$–coset of $`^\times `$. Then we can write:
$$𝒴_i=\underset{x^\times /q_i^2}{}𝒴S_{i,x},𝒴_i^{(i)}=\underset{x^\times /q_i^2}{}𝒴^{(i)}S_{i,x}.$$
By definition,
$`S_i(Y_{j,xq_i^{2n}})`$ $`=\delta _{ij}Y_{i,x}{\displaystyle \underset{m}{}}A_{i,xq_i^{2m+1}}^{\pm 1}S_{i,x},`$
$`\overline{S}_i(Y_{i,xq_i^{2n}})`$ $`=Y_{i,x}{\displaystyle \underset{m}{}}\overline{A}_{i,xq_i^{2m+1}}^{\pm 1}S_{i,x},`$
$`\overline{S}_i(Z_{j,c})`$ $`=0,ji.`$
Recall from formula (3.5) that $`\tau _i(Y_{i,x})`$ equals $`Y_{i,x}`$ times a monomial in $`Z_{j,c}^{\pm 1},ji`$, and from formula (3.8) that $`\tau _i(A_{i,b}^{\pm 1})=\overline{A}_{i,b}^{\pm 1}`$. Using these formulas we obtain:
$$(\tau _iS_i)(Y_{i,xq_i^{2n}})=(\overline{S}_i\tau _i)(Y_{i,xq^{2n}})=\tau _i(Y_{i,x})\overline{A}_{i,xq_i^{2m+1}}^{\pm 1}S_{i,x}.$$
On the other hand, when $`ji`$, $`\tau _i(Y_{j,x})`$ is a monomial in $`Z_{k,c}^{\pm 1},ki`$, according to formula (3.6). Therefore
$$(\tau _iS_i)(Y_{j,x})=(\overline{S}_i\tau _i)(Y_{j,x})=0,ji.$$
This proves the lemma. ∎
###### Corollary 5.5.
The image of the $`q`$–character homomorphism $`\chi _q:\mathrm{Rep}U_q\widehat{𝔤}𝒴`$ is contained in the kernel of $`S_i`$ on $`𝒴`$.
###### Proof.
Let $`V`$ be a finite-dimensional representation of $`U_q\widehat{𝔤}`$. We need to show that $`S_i(\chi _q(V))=0`$. By Lemma 3.4, we can write $`\chi _q(V)`$ as the sum $`_kP_kQ_k`$, where each $`P_k[Y_{i,a}^{\pm 1}]_{a^\times }`$ is in the image of the homomorphism $`\chi _q^{(i)}:\mathrm{Rep}U_{q_i}\widehat{𝔰𝔩}_2[Y_{i,a}^{\pm 1}]_{a^\times }`$, and $`Q_k`$ is a monomial in $`Z_{j,c}^{\pm 1},ji`$.
The image of $`\chi _q^{(i)}`$ lies in the kernel of the operator $`\overline{S}_i`$ (in fact, they are equal, but we will not use this now). This immediately follows from the fact that $`\mathrm{Rep}U_q\widehat{𝔰𝔩}_2[\chi _q(V_1(a))]`$ and $`\overline{S}_i(\chi _q(V_1(a)))=0`$, which is obtained by a straightforward calculation. We also have: $`\overline{S}_i(Z_{j,c})=0,ji`$. Therefore $`(\overline{S}_i\tau _i)(\chi _q(V))=0`$. By Lemma 5.4, $`(\tau _iS_i)(\chi _q(V))=0`$. Since $`\tau _i`$ is injective by Lemma 3.3, we obtain: $`S_i(\chi _q(V))=0`$. ∎
### 5.4. $`𝒦`$ is a subspace of the image of $`\chi _q`$
Let $`P𝒦`$. We want to show that $`P\mathrm{Im}\chi _q`$.
A monomial $`m`$ contained in $`P\in 𝒴`$ is called a highest monomial (resp., a lowest monomial) if its weight is not lower (resp., not higher) than the weight of any other monomial contained in $`P`$.
###### Lemma 5.6.
Let $`P\in 𝒦`$. Then any highest monomial in $`P`$ is dominant and any lowest monomial in $`P`$ is antidominant.
###### Proof.
First we prove that the highest monomials are dominant.
By Proposition 5.2,
$$P𝒦_i=[Y_{j,a}^{\pm 1}]_{ji;a^\times }[Y_{i,b}+Y_{i,b}A_{i,bq_i}^1]_{b^\times }.$$
The statement of the lemma will follow if we show that a highest monomial contained in any element of $`𝒦_i`$ does not contain factors $`Y_{i,a}^{-1}`$.
Indeed, the weight of $`Y_{i,a}`$ is $`\omega _i`$, and the weight of $`Y_{i,b}A_{i,bq_i}^{-1}`$ is $`\omega _i-\alpha _i`$. Denote $`t_b=Y_{i,b}+Y_{i,b}A_{i,bq_i}^{-1}`$, $`b\in \mathbb{C}^\times `$. Given a polynomial $`Q\in \mathbb{Z}[t_b]_{b\in \mathbb{C}^\times }`$, let $`m_1,\mathrm{},m_k`$ be its monomials (in the $`t_b`$) of highest degree. Clearly, the monomials of highest weight in $`Q`$ (considered as a polynomial in the $`Y_{j,a}^{\pm 1}`$) are $`m_1,\mathrm{},m_k`$, in which we substitute each $`t_b`$ by $`Y_{i,b}`$. These monomials do not contain factors $`Y_{i,a}^{-1}`$.
The statement about the lowest weight monomials is proved similarly, once we observe that
$$𝒦_i=[Y_{j,a}^{\pm 1}]_{ji;a^\times }[Y_{i,b}^1+Y_{i,bq_i^2}A_{i,bq_i^1}]_{b^\times }.$$
Let $`m`$ be a highest monomial in $`P`$, and suppose that it enters $`P`$ with the coefficient $`\nu _m\ne 0`$. Then $`m`$ is dominant by Lemma 5.6. According to Theorem 1.3(2) and formula (2.13), there exists an irreducible representation $`V_1`$ of $`U_q\widehat{𝔤}`$, such that $`m`$ is the highest weight monomial in $`\chi _q(V_1)`$. Since $`\chi _q(V_1)\in 𝒦`$ by Corollary 5.5, we obtain that $`P_1=P-\nu _m\chi _q(V_1)\in 𝒦`$.
For $`P\in 𝒴`$, denote by $`\mathrm{\Lambda }(P)`$ the (finite) set of dominant weights $`\lambda `$ such that $`P`$ contains a monomial of weight greater than or equal to $`\lambda `$. By Lemma 5.6, if $`P\in 𝒦`$ and $`\mathrm{\Lambda }(P)`$ is empty, then $`P`$ is necessarily equal to $`0`$.
Note that for any irreducible representation $`V`$ of $`U_q\widehat{𝔤}`$ of highest weight $`\mu `$, $`\mathrm{\Lambda }(\chi _q(V))`$ is the set of all dominant weights which are less than or equal to $`\mu `$. Therefore $`\mathrm{\Lambda }(P_1)`$ is properly contained in $`\mathrm{\Lambda }(P)`$. By applying the above subtraction procedure finitely many times, we obtain an element $`P_k=P-{\displaystyle \sum _{i=1}^{k}}\nu _i\chi _q(V_i)`$, for which $`\mathrm{\Lambda }(P_k)`$ is empty. But then $`P_k=0`$.
This shows that $`𝒦\subset \mathrm{Im}\chi _q`$. Together with Corollary 5.5, this gives us Theorem 5.1 and the following corollary.
###### Corollary 5.7.
The $`q`$–character homomorphism,
$$\chi _q:\mathrm{Rep}U_q\widehat{𝔤}𝒦,$$
where $`𝒦`$ is given by $`(`$5.6$`)`$, is a ring isomorphism.
### 5.5. Application: Algorithm for constructing $`q`$-characters
Consider the following problem: Give an algorithm which for any dominant monomial $`m_+`$ constructs the $`q`$–character of the irreducible $`U_q\widehat{𝔤}`$–module whose highest weight monomial is $`m_+`$. In this section we propose such an algorithm. We prove that our algorithm produces the $`q`$–characters of the fundamental representations (in this case $`m_+=Y_{i,a}`$). We conjecture that the algorithm works for any irreducible module.
Roughly speaking, in our algorithm we start from $`m_+`$ and gradually expand it in all possible $`U_{q_i}\widehat{𝔰𝔩}_2`$ directions. (Here we use the explicit formulas for $`q`$–characters of $`U_q\widehat{𝔰𝔩}_2`$ and Lemma 3.6.) In the process of expansion some monomials may come from different directions. We identify them in the maximal possible way.
First we introduce some terminology.
Let $`\chi \in \mathbb{Z}_{\ge 0}[Y_{i,a}^{\pm 1}]_{i\in I,a\in \mathbb{C}^\times }`$ be a polynomial and $`m`$ a monomial in $`\chi `$ occurring with coefficient $`s\in \mathbb{Z}_{>0}`$. By definition, a coloring of $`m`$ is a set $`\{s_i\}_{i\in I}`$ of non-negative integers such that $`s_i\le s`$. A polynomial $`\chi `$ in which all monomials are colored is called a colored polynomial.
We think of $`s_i`$ as the number of monomials of type $`m`$ which have come from direction $`i`$ (or by expanding with respect to the $`i`$-th subalgebra $`U_{q_i}\widehat{𝔰𝔩}_2`$).
A monomial $`m`$ is called $`i`$–dominant if it does not contain variables $`Y_{i,a}^{-1}`$, $`a\in \mathbb{C}^\times `$.
A monomial $`m`$ occurring in a colored polynomial $`\chi `$ with coefficient $`s`$ is called admissible if $`m`$ is $`j`$–dominant for all $`j`$ such that $`s_j<s`$. A colored polynomial is called admissible if all of its monomials are admissible.
Given an admissible monomial $`m`$ occurring with coefficient $`s`$ in a colored polynomial $`\chi `$, we define a new colored polynomial $`i_m(\chi )`$, called the $`i`$–expansion of $`\chi `$ with respect to $`m`$, as follows.
If $`s_i=s`$, then $`i_m(\chi )=\chi `$. Suppose that $`s_i<s`$ and let $`\overline{m}`$ be obtained from $`m`$ by setting $`Y_{j,a}^{\pm 1}=1`$ for all $`j\ne i`$. Since $`m`$ is admissible, $`\overline{m}`$ is a dominant monomial. Therefore there exists an irreducible $`U_{q_i}\widehat{𝔰𝔩}_2`$ module $`V`$, such that the highest weight monomial of $`V`$ is $`\overline{m}`$. We have explicit formulas for the $`q`$-characters of all irreducible $`U_q\widehat{𝔰𝔩}_2`$–modules (see, e.g., \[FR2\], Section 4.1). We write $`\chi _{q_i}(V)=\overline{m}(1+\sum _p\overline{M}_p)`$, where each $`\overline{M}_p`$ is a product of factors $`\overline{A}_{i,a}^{-1}`$. Let
(5.7) $`\mu =m(1+{\displaystyle \sum _p}M_p),`$
where $`M_p`$ is obtained from $`\overline{M}_p`$ by replacing all $`\overline{A}_{i,a}^{-1}`$ by $`A_{i,a}^{-1}`$.
The colored polynomial $`i_m(\chi )`$ is obtained from $`\chi `$ by adding the monomials occurring in $`\mu `$ according to the following rule. Let a monomial $`n`$ occur in $`\mu `$ with coefficient $`t\in \mathbb{Z}_{>0}`$. If $`n`$ does not occur in $`\chi `$, then it is added with the coefficient $`t(s-s_i)`$, and we set the $`i`$-th coloring of $`n`$ to be $`t(s-s_i)`$ and the other colorings to be $`0`$. If $`n`$ occurs in $`\chi `$ with coefficient $`r`$ and coloring $`\{r_i\}_{i\in I}`$, then the new coefficient of $`n`$ in $`i_m(\chi )`$ is $`\mathrm{max}\{r,r_i+t(s-s_i)\}`$. In this case the $`i`$-th coloring is changed to $`r_i+t(s-s_i)`$ and the other colorings are not changed.
Obviously, the $`i`$–expansions of $`\chi `$ with respect to $`m`$ commute for different $`i`$. To expand a monomial $`m`$ in all directions means to compute $`\ell _m(\mathrm{}2_m(1_m(\chi ))\mathrm{})`$, where $`\ell =\mathrm{rk}(𝔤)`$.
Now we describe the algorithm. We start with the colored polynomial $`m_+`$ with all colorings set equal to zero. Let the $`U_q𝔤`$–weight of $`m_+`$ be $`\lambda `$. The set of weights of the form $`\lambda -\sum _ia_i\alpha _i`$, $`a_i\in \mathbb{Z}_{\ge 0}`$, has a natural partial order. Choose any total order compatible with this partial order, so that we have $`\lambda =\lambda _1>\lambda _2>\lambda _3>\mathrm{}`$
At the first step we expand $`m_+`$ in all directions. Then we expand in all directions all monomials of weight $`\lambda _2`$ obtained at the first step. Then we expand in all directions all monomials of weight $`\lambda _3`$ obtained at the previous steps, and so on. Since the monomials obtained in the expansion of a monomial of $`U_q𝔤`$–weight $`\mu `$ have weights less than $`\mu `$, the result does not depend on the choice of the total order.
Note that for any monomial $`m`$ except for $`m_+`$ occurring with coefficient $`s`$ at any step, we have $`\mathrm{max}_i\{s_i\}=s`$. This property means that we identify the monomials coming from different directions in the maximal possible way.
The algorithm stops if all monomials have been expanded. We say that the algorithm fails at a monomial $`m`$ if $`m`$ is the first non-admissible monomial to be expanded.
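The expansion procedure above is easy to prototype. In the multiplicity-free (“thin”) situation, where every monomial occurring in the $`q`$–character contains at most one factor $`Y_{i,b}^{\pm 1}`$ for each $`i`$, the bookkeeping with colorings disappears and the algorithm reduces to a closure computation. The following Python sketch is our illustration only: it encodes a monomial as a dictionary $`\{(i,n):\text{exponent}\}`$ standing for a product of $`Y_{i,aq^n}^{\pm 1}`$, assumes the simply-laced formula $`A_{i,aq^n}=Y_{i,aq^{n-1}}Y_{i,aq^{n+1}}\prod _{j\sim i}Y_{j,aq^n}^{-1}`$, and uses the fact that for $`U_{q_i}\widehat{𝔰𝔩}_2`$ the $`q`$–character of a fundamental representation is $`Y_b(1+A_{bq}^{-1})`$.

```python
from collections import defaultdict

def multiply(m, factor):
    """Multiply two monomials given as dicts {(i, n): exponent}."""
    out = defaultdict(int, m)
    for key, e in factor.items():
        out[key] += e
        if out[key] == 0:
            del out[key]          # drop cancelled variables
    return dict(out)

def a_inverse(i, n, adj):
    """A_{i,aq^n}^{-1} in the simply-laced case (an assumption of this sketch)."""
    factor = {(i, n - 1): -1, (i, n + 1): -1}
    for j in adj[i]:              # nodes adjacent to i in the Dynkin diagram
        factor[(j, n)] = factor.get((j, n), 0) + 1
    return factor

def expand(m_plus, adj):
    """Expand m_plus in all sl2-directions, assuming every monomial is thin:
    whenever a monomial is i-dominant, its i-part is a single Y_{i,aq^n},
    so its U_{q_i}(sl2-hat) character is Y(1 + A^{-1})."""
    seen = {frozenset(m_plus.items())}
    queue = [m_plus]
    while queue:
        m = queue.pop()
        for i in adj:
            i_part = [(n, e) for (j, n), e in m.items() if j == i]
            if len(i_part) == 1 and i_part[0][1] == 1:   # a single Y_{i,aq^n}
                n = i_part[0][0]
                new = multiply(m, a_inverse(i, n + 1, adj))
                key = frozenset(new.items())
                if key not in seen:
                    seen.add(key)
                    queue.append(new)
    return seen
```

For the $`A_2`$ diagram `{1: [2], 2: [1]}` and $`m_+=Y_{1,a}`$ (encoded as `{(1, 0): 1}`), the closure consists of the three monomials $`Y_{1,a}`$, $`Y_{1,aq^2}^{-1}Y_{2,aq}`$, $`Y_{2,aq^3}^{-1}`$, matching $`\chi _q(V_{\omega _1}(a))`$ for $`U_q\widehat{𝔰𝔩}_3`$.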
Let $`m_+`$ be a dominant monomial and $`V`$ the corresponding irreducible module.
###### Conjecture 5.8.
The algorithm never fails and stops after finitely many steps. Moreover, the final result of the algorithm is the $`q`$–character of $`V`$.
###### Theorem 5.9.
Suppose that $`\chi _q(V)`$ does not contain dominant monomials other than $`m_+`$. Then Conjecture 5.8 is true. In particular, Conjecture 5.8 is true in the case of fundamental representations.
###### Proof.
For $`i\in I`$, let $`D_i`$ be a decomposition of the set of monomials in $`\chi _q(V)`$ (with multiplicities) into a disjoint union of subsets such that each subset forms the $`q`$–character of an irreducible $`U_{q_i}\widehat{𝔰𝔩}_2`$ module. We refer to this decomposition $`D_i`$ as the $`i`$-th decomposition of $`\chi _q(V)`$. Denote by $`D`$ the collection of the $`D_i`$, $`i\in I`$.
Consider the following colored oriented graph $`\mathrm{\Omega }_V(D)`$. The vertices are monomials in $`\chi _q(V)`$ with multiplicities. We draw an arrow of color $`i`$ from a monomial $`m_1`$ to a monomial $`m_2`$ if and only if $`m_1`$ and $`m_2`$ are in the same subset of the $`i`$-th decomposition and $`m_2=A_{i,a}^1m_1`$ for some $`a^\times `$.
We call an oriented graph a tree (with one root) if there exists a vertex $`v`$ (called the root) such that there is an oriented path from $`v`$ to any other vertex. The graph $`\mathrm{\Omega }_W(D)`$, where $`W`$ is an irreducible $`U_q\widehat{𝔰𝔩}_2`$–module, is always a tree, and its root corresponds to the highest weight monomial.
Consider the full subgraph of $`\mathrm{\Omega }_V(D)`$ whose vertices correspond to monomials from a given subset of the $`i`$-th decomposition of $`\chi _q(V)`$. All arrows of this subgraph are of color $`i`$. By Lemma 3.6, this subgraph is a tree isomorphic to the graph of the corresponding irreducible $`U_{q_i}\widehat{𝔰𝔩}_2`$–module. Moreover, its root corresponds to an $`i`$–dominant monomial. Therefore if a vertex of $`\mathrm{\Omega }_V(D)`$ has no incoming arrows of color $`i`$, then it corresponds to an $`i`$–dominant monomial. In particular, if $`m`$ has no incoming arrows in $`\mathrm{\Omega }_V(D)`$, then $`m`$ is dominant. Since by our assumption $`\chi _q(V)`$ does not contain any dominant monomials except for $`m_+`$, the graph $`\mathrm{\Omega }_V(D)`$ is a tree with root $`m_+`$.
Choose a sequence of weights $`\lambda _1>\lambda _2>\mathrm{}`$ as above. We prove by induction on $`r`$ the following statement $`S_r`$:
The algorithm does not fail during the first $`r`$ steps. Let $`\chi _r`$ be the resulting polynomial after these steps. Then the coefficient of each monomial $`m`$ in $`\chi _r`$ is not greater than that in $`\chi _q(V)`$ and the coefficients of monomials of weights $`\lambda _1,\mathrm{},\lambda _r`$ in $`\chi _r`$ and $`\chi _q(V)`$ are equal. Furthermore, there exists a decomposition $`D`$ of $`\chi _q(V)`$, such that monomials in $`\chi _r`$ can be identified with vertices in $`\mathrm{\Omega }_V(D)`$ in such a way that all outgoing arrows from vertices with $`U_q𝔤`$–weights $`\lambda _1,\mathrm{},\lambda _r`$ go to vertices of $`\chi _r`$. Finally, the $`j`$-th coloring of a monomial $`m`$ in $`\chi _r`$ is just the number of vertices of type $`m`$ in $`\chi _r`$ which have incoming arrows of color $`j`$ in $`\mathrm{\Omega }_V(D)`$.
The statement $`S_0`$ is obviously true. Assume that the statement $`S_r`$ is true for some $`r0`$. Recall that at the $`(r+1)`$st step we expand all monomials of $`\chi _r`$ of weight $`\lambda _{r+1}`$.
Let $`m`$ be a monomial of weight $`\lambda _{r+1}`$ in $`\chi _r`$, which enters with coefficient $`s`$ and coloring $`\{s_i\}_{iI}`$.
Then the monomial $`m`$ enters $`\chi _q(V)`$ with coefficient $`s`$ as well. Indeed, $`\mathrm{\Omega }_V(D)`$ is a tree, so all vertices $`m`$ have incoming arrows from vertices of larger weight. By the statement $`S_r`$, these arrows go to vertices corresponding to monomials in $`\chi _r`$.
Suppose that $`s_j<s`$ for some $`jI`$. Then $`m`$ is $`j`$–dominant. Indeed, otherwise each vertex of type $`m`$ in $`\mathrm{\Omega }_V(D)`$ has an incoming arrow of color $`j`$ coming from a vertex of higher weight. Then by the last part of the statement $`S_r`$, $`s_j=s`$.
Therefore the monomial $`m`$ is admissible, and the algorithm does not fail at $`m`$.
Consider the expansion $`j_m(\chi _r)`$. Let $`\mu `$ be as in $`(`$5.7$`)`$. In the $`j`$-th decomposition of $`\chi _q(V)`$, $`m`$ corresponds to a root of a tree whose vertices can be identified with monomials in $`\mu `$. We fix such an identification. Then monomials in $`\mu `$ get identified with vertices in $`\mathrm{\Omega }_V(D)`$.
Let $`v`$ be the vertex in $`\mathrm{\Omega }_V(D)`$, corresponding to a monomial $`n`$ in $`\mu `$. Denote the coefficient of $`n`$ in $`\chi _r`$ by $`p`$ and the coloring by $`\{p_i\}_{iI}`$. We have two cases:
a) $`p_j=p`$. Then the last part of the statement $`S_r`$ implies that the vertex $`v`$ does not belong to $`\chi _r`$. We add the monomial $`n`$ to $`\chi _r`$ and increase $`p_j`$ by one (we have already identified it with $`v`$).
b) $`p_j<p`$. Then by $`S_r`$ there exists a vertex $`w`$ in $`\chi _r`$ of type $`n`$ with no incoming arrows of color $`j`$. We change the decomposition $`D_j`$ by switching the vertices $`v`$ and $`w`$ and identify $`n`$ with the new $`v`$. We also increase $`p_j`$ by one. (Thus, in this case we do not add $`n`$ to $`\chi _r`$.)
In both cases, the statement $`S_{r+1}`$ follows.
Since the set of weights of monomials occurring in $`\chi _q(V)`$ is contained in a finite set $`\lambda _1,\lambda _2,\mathrm{},\lambda _N`$, the statement $`S_N`$ proves the first part of the theorem.
Corollary 4.5 then implies the second part of the theorem. ∎
We plan to use the above algorithm to compute explicitly the $`q`$–characters of the fundamental representations of $`U_q\widehat{𝔤}`$ and to obtain their decompositions under $`U_q𝔤`$.
###### Remark 5.10.
There is a similar algorithm for computing the ordinary characters of finite-dimensional $`𝔤`$–modules (equivalently, $`U_q𝔤`$–modules). That algorithm works for those representations (called minuscule) whose characters do not contain dominant weights other than the highest weight (for other representations the algorithm does not work). However, there are very few minuscule representations for a general simple Lie algebra $`𝔤`$. In contrast, in the case of quantum affine algebras there are many representations whose $`q`$–characters do not contain any dominant monomials except for the highest weight monomial (for example, all fundamental representations), and our algorithm may be applied to them.∎
## 6. The fundamental representations
In this section we prove several theorems about the irreducibility of tensor products of fundamental representations.
### 6.1. Reducible tensor products of fundamental representations and poles of $`R`$-matrices
In this section we prove that the reducibility of a tensor product of the fundamental representations is always caused by a pole in the $`R`$-matrix.
We say that a monomial $`m`$ has positive lattice support with base $`a`$ if $`m`$ is a product of factors $`Y_{i,aq^n}^{\pm 1}`$ with $`n\ge 0`$.
###### Lemma 6.1.
All monomials in $`\chi _q(V_{\omega _i}(a))`$ have positive lattice support with base $`a`$.
###### Proof.
For $`U_q\widehat{𝔰𝔩}_2`$, the statement follows from the explicit formula $`(`$4.2$`)`$ for $`\chi _q(V_1(a))`$. The $`q`$–character of any irreducible representation $`V`$ of $`U_q\widehat{𝔰𝔩}_2`$ is a subsum of a product of the $`q`$–characters of $`V_1(b)`$’s. Moreover, this subsum includes the highest monomial. Hence if the highest weight monomial of $`\chi _q(V)`$ has positive lattice support with base $`a`$, then so do all monomials in $`\chi _q(V)`$.
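For the reader’s convenience, we recall the $`U_q\widehat{𝔰𝔩}_2`$ formula referred to here (the standard expression for the $`q`$–character of the two-dimensional fundamental representation):
$$\chi _q(V_1(a))=Y_a+Y_{aq^2}^{-1}=Y_a(1+A_{aq}^{-1}),$$
and both monomials indeed have positive lattice support with base $`a`$.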
Now consider the case of general $`U_q\widehat{𝔤}`$. Suppose there exists a monomial in $`\chi =\chi _q(V_{\omega _i}(a))`$ which does not have positive lattice support with base $`a`$. Let $`m`$ be a highest such monomial (with respect to the partial ordering by weights).
By Corollary 4.5, the monomial $`m`$ is not dominant. In other words, if we rewrite $`m`$ as a product of $`Y_{i,b}^{\pm 1}`$, we will have at least one generator in negative power, say $`Y_{i_0,b_0}^{-1}`$.
Write $`\tau _{i_0}(\chi )`$ in the form $`(`$4.4$`)`$. The monomial $`\tau _{i_0}(m)`$ cannot be among the monomials $`\{m_pN_p\}`$, since $`m`$ contains $`Y_{i_0,b_0}^{-1}`$. Therefore $`\tau _{i_0}(m)=m_{p_0}N_{p_0}\overline{M}_{r_0,p_0}`$ for some $`\overline{M}_{r_0,p_0}\ne 1`$, which is a product of factors $`\overline{A}_{i,c}^{-1}`$. Let $`m_1`$ be a monomial in $`\chi `$, such that $`\tau _{i_0}(m_1)=m_{p_0}N_{p_0}`$. Then by Lemma 3.6, $`m=m_1M_{r_0,p_0}`$, where $`M_{r_0,p_0}`$ is obtained from $`\overline{M}_{r_0,p_0}`$ by replacing all $`\overline{A}_{i,c}^{-1}`$ with $`A_{i,c}^{-1}`$.
By construction, the weight of $`m_1`$ is higher than the weight of $`m`$, so by our assumption, $`m_1`$ has positive lattice support with base $`a`$. But then $`m_{p_0}`$ also has positive lattice support with base $`a`$. Therefore all monomials in $`m_{p_0}(1+\sum _r\overline{M}_{r,p_0})`$ have positive lattice support with base $`a`$. This implies that $`M_{r_0,p_0}`$, and hence $`m=m_1M_{r_0,p_0}`$, has positive lattice support with base $`a`$. This is a contradiction, so the lemma is proved. ∎
###### Remark 6.2.
From the proof of Lemma 6.1 it is clear that the only monomial in $`\chi _q(V_{\omega _i}(a))`$ which contains $`Y_{j,aq^n}^{\pm 1}`$ with $`n=0`$ is the highest weight monomial $`Y_{i,a}`$.∎
Let $`V`$ be a $`U_q\widehat{𝔤}`$–module with the $`q`$–character $`\chi _q(V)`$. Define the oriented graph $`\mathrm{\Gamma }_V`$ as follows. The vertices of $`\mathrm{\Gamma }_V`$ are monomials in $`\chi _q(V)`$ with multiplicities. Thus, there are $`\mathrm{dim}V`$ vertices. We denote the monomial corresponding to a vertex $`\alpha `$ by $`m_\alpha `$. We draw an arrow from the vertex $`\alpha `$ to the vertex $`\beta `$ if and only if $`m_\beta =m_\alpha A_{i,x}^{-1}`$ for some $`i\in I`$, $`x\in \mathbb{C}^\times `$.
If $`V`$ is an irreducible $`U_q\widehat{𝔰𝔩}_2`$–module, then the graph $`\mathrm{\Gamma }_V`$ is connected. Indeed, every irreducible $`U_q\widehat{𝔰𝔩}_2`$–module is isomorphic to a tensor product of evaluation modules. The graph associated to each evaluation module is connected according to the explicit formulas for the corresponding $`q`$–characters (see formula (4.3) in \[FR2\]). Clearly, a tensor product of two modules with connected graphs also has a connected graph.
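To make the definition concrete, the graph $`\mathrm{\Gamma }_V`$ can be generated mechanically from a list of monomials. The sketch below is our own illustration (not code from the paper): a monomial is stored as a dictionary mapping a pair $`(i,n)`$ — standing for the variable $`Y_{i,aq^n}`$ — to its integer power, and we use the standard $`U_q\widehat{𝔰𝔩}_2`$ expansion $`A_{1,aq^n}=Y_{1,aq^{n-1}}Y_{1,aq^{n+1}}`$.

```python
def mul(m1, m2):
    # multiply two Laurent monomials stored as {(i, n): power}
    out = dict(m1)
    for key, power in m2.items():
        out[key] = out.get(key, 0) + power
        if out[key] == 0:
            del out[key]
    return out

def a_inv(n):
    # A_{1, a q^n}^{-1} = (Y_{1, a q^{n-1}} Y_{1, a q^{n+1}})^{-1} for sl2-hat
    return {(1, n - 1): -1, (1, n + 1): -1}

def edges(monomials, n_window=range(-4, 8)):
    # arrow alpha -> beta iff m_beta = m_alpha * A_{1, a q^n}^{-1} for some n
    found = []
    for a, ma in enumerate(monomials):
        for b, mb in enumerate(monomials):
            if any(mul(ma, a_inv(n)) == mb for n in n_window):
                found.append((a, b))
    return found

# chi_q(V_1(a)) = Y_{1,a} + Y_{1,aq^2}^{-1}: one arrow, so the graph is connected
chi = [{(1, 0): 1}, {(1, 2): -1}]
print(edges(chi))  # → [(0, 1)]
```

For a tensor product one would multiply the monomial lists pairwise; here we only illustrate a single fundamental representation.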
###### Lemma 6.3.
Let $`\alpha \in \mathrm{\Gamma }_V`$ be a vertex with no incoming arrows. Then $`m_\alpha `$ is a dominant monomial.
###### Proof.
Suppose that the monomial $`m=m_\alpha `$ contains $`Y_{i,b}^{-1}`$ for some $`i\in I`$, $`b\in \mathbb{C}^\times `$. We write the restricted $`q`$–character $`\tau _i(\chi _q(V))`$ in the form $`(`$4.4$`)`$, where each $`m_p(1+\sum _r\overline{M}_{r,p})`$ is a $`q`$–character of an irreducible $`U_{q_i}\widehat{𝔰𝔩}_2`$ module.
The monomial $`\tau _i(m)`$ contains $`Y_{i,b}^{-1}`$ and therefore cannot be among the monomials $`\{m_pN_p\}`$. But the graphs of irreducible $`U_q\widehat{𝔰𝔩}_2`$–modules are connected. So we obtain that $`\tau _i(m)=\tau _i(A_{i,c}^{-1})\tau _i(m^{\prime })`$ for some monomial $`m^{\prime }`$ in $`\chi _q(V)`$, and some $`c\in \mathbb{C}^\times `$. By Lemma 3.6, we have $`m=A_{i,c}^{-1}m^{\prime }`$, which is a contradiction. ∎
Now Corollary 4.5 implies:
###### Corollary 6.4.
The graphs of all fundamental representations are connected.
Let a monomial $`m`$ have lattice support with base $`a`$. We call $`m`$ right negative if the factors $`Y_{i,aq^k}`$ appearing in $`m`$, for which $`k`$ is maximal, have negative powers.
###### Lemma 6.5.
All monomials in the $`q`$–character of the fundamental representation $`V_{\omega _i}(a)`$, except for the highest weight monomial, are right negative.
###### Proof.
Let us show first that from the highest weight monomial $`m_+`$ there is only one outgoing arrow, to the monomial $`m_1=m_+A_{i,aq_i}^{-1}`$. Indeed, the weight of a monomial that is connected to $`m_+`$ by an arrow has to be equal to $`\omega _i-\alpha _j`$ for some $`j\in I`$. The restriction of $`V_{\omega _i}(a)`$ to $`U_q𝔤`$ is isomorphic to the direct sum of the $`i`$th fundamental representation $`V_{\omega _i}`$ and possibly some other irreducible representations with dominant weights less than $`\omega _i`$. However, the weight $`\omega _i-\alpha _j`$ is not dominant for any $`i`$ and $`j`$. Therefore this weight has to belong to the set of weights of $`V_{\omega _i}`$, and the multiplicity of this weight in $`V_{\omega _i}(a)`$ has to be the same as that in $`V_{\omega _i}`$. It is clear that the only weight of the form $`\omega _i-\alpha _j`$ that occurs in $`V_{\omega _i}`$ is $`\omega _i-\alpha _i`$, and it has multiplicity one. By Theorem 4.1, this monomial must have the form $`m_1=m_+A_{i,aq_i}^{-1}`$.
Now, the graph $`\mathrm{\Gamma }_{V_{\omega _i}(a)}`$ is connected. Therefore each monomial $`m`$ in $`\chi _q(V_{\omega _i}(a))`$ is a product of $`m_1`$ and factors $`A_{j,b}^{-1}`$. Note that $`m_1`$ is right negative and all $`A_{j,b}^{-1}`$ are right negative (this follows from the explicit formula (2.15)). The product of two right negative monomials is right negative. This implies the lemma. ∎
###### Remark 6.6.
It follows from the proof of the lemma that the rightmost factor of each non-highest weight monomial occurring in $`\chi _q(V_{\omega _i}(a))`$ equals $`Y_{j,aq^n}^{-1}`$, where $`n\ge 2r_i`$.∎
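The notion of right negativity is easy to test mechanically. The sketch below is our own illustration (not from the paper): a monomial with lattice support (base $`a`$) is encoded as a dictionary $`\{(i,n):\mathrm{power}\}`$ for the variables $`Y_{i,aq^n}`$, and we check on an example the closure under products used in the proof above.

```python
def is_right_negative(m):
    # the factors Y_{i, a q^n} with n maximal must all occur with negative power
    n_max = max(n for (_, n) in m)
    return all(power < 0 for (i, n), power in m.items() if n == n_max)

def mul(m1, m2):
    # multiply Laurent monomials, dropping cancelled factors
    out = dict(m1)
    for key, power in m2.items():
        out[key] = out.get(key, 0) + power
        if out[key] == 0:
            del out[key]
    return out

m1 = {(1, 2): -1}             # Y_{1,aq^2}^{-1}
m2 = {(2, 1): 1, (1, 3): -1}  # Y_{2,aq} Y_{1,aq^3}^{-1}
print(is_right_negative(m1), is_right_negative(m2), is_right_negative(mul(m1, m2)))
# → True True True
```

The closure holds in general because the rightmost factors of both monomials are negative, so their powers can only add up to a negative power.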
Recall the definition of the normalized $`R`$–matrix $`\overline{R}_{V,W}(z)`$ from Section 2.3. The following theorem was conjectured, e.g., in \[AK\].
###### Theorem 6.7.
Let $`\{V_k\}_{k=1,\mathrm{},n}`$, where $`V_k=V_{\omega _{s(k)}}(a_k)`$, be a set of fundamental representations of $`U_q\widehat{𝔤}`$. The tensor product $`V_1\otimes \cdots \otimes V_n`$ is reducible if and only if for some $`i,j\in \{1,\mathrm{},n\}`$, $`i\ne j`$, the normalized $`R`$–matrix $`\overline{R}_{V_i,V_j}(z)`$ has a pole at $`z=a_j/a_i`$.
###### Proof.
The “if” part of the Theorem is obvious. Let us explain the case when $`n=2`$. Let $`\sigma :V_1\otimes V_2\to V_2\otimes V_1`$ be the transposition. By definition of $`\overline{R}_{V_1,V_2}(z)`$, the linear map $`\sigma \overline{R}_{V_1,V_2}(z)`$ is a homomorphism of $`U_q\widehat{𝔤}`$–modules $`V_1\otimes V_2\to V_2\otimes V_1`$. Therefore if $`\overline{R}_{V_1,V_2}(z)`$ has a pole at $`z=a_2/a_1`$, then $`V_1\otimes V_2`$ is reducible. It is easy to generalize this argument to general $`n`$.
Now we prove the “only if” part.
If the product $`V_1\otimes \cdots \otimes V_n`$ is reducible, then the product of the $`q`$–characters $`\prod _{i=1}^n\chi _q(V_i)`$ contains a dominant monomial $`m`$ that is different from the product of the highest weight monomials. Therefore $`m`$ is not right negative, and $`m`$ is a product of some monomials $`m_i^{\prime }`$ from $`\chi _q(V_i)`$. Hence at least one of the factors, $`m_i^{\prime }=m_i`$, must be the highest weight monomial, and it has to cancel with the rightmost factor $`Y_{i,b}^{-1}`$ appearing in, say, $`m_j^{\prime }`$.
According to Lemma 6.1, $`m_j^{\prime }=m_jM`$, where $`M`$ is a product of factors $`A_{s,a_jq^n}^{-1}`$. By our assumption, the maximal $`n_0`$ occurring among the $`n`$ is such that $`a_jq^{n_0}=a_iq_i^{-1}`$. Using Lemma 2.6 we obtain that one of the diagonal entries of $`\overline{R}_{V_i,V_j}`$ has a factor $`1/(1-a_ia_j^{-1}z)`$, which cannot be cancelled. Therefore $`\overline{R}_{V_i,V_j}`$ has a pole at $`z=a_j/a_i`$. This proves the “only if” part. Moreover, we see that the pole necessarily occurs in a diagonal entry. ∎
### 6.2. The lowest weight monomial
Our next goal is to describe (see Proposition 6.15 below) the possible values of the spectral parameters of the fundamental representations for which the tensor product is reducible.
First we develop an analogue of the formalism of Section 4 from the point of view of the lowest weight monomials. Recall the involution $`I\to I`$, $`i\mapsto \overline{i}`$ from Section 1.2. According to Theorem 1.3(3), there is a unique lowest weight monomial $`m_{-}`$ in $`\chi _q(V_{\omega _i}(a))`$, and its weight is $`-\omega _{\overline{i}}`$.
###### Lemma 6.8.
The lowest weight monomial of $`\chi _q(V_{\omega _i}(a))`$ equals $`Y_{\overline{i},aq^{r^{\vee }h^{\vee }}}^{-1}`$.
###### Proof.
By Lemma 5.6, $`m_{-}`$ must be antidominant. Thus, by Lemma 6.1, $`m_{-}=Y_{\overline{i},aq^{n_i}}^{-1}`$ for some $`n_i>0`$.
Recall the automorphism $`w_0`$ defined in $`(`$1.7$`)`$. The module $`V_{\omega _{\overline{i}}}(a)`$ is obtained from $`V_{\omega _i}(a)`$ by pull-back with respect to $`w_0`$. From the interpretation of the $`q`$–character in terms of the eigenvalues of $`\mathrm{\Phi }_i^\pm (u)`$, it is clear that the $`q`$–character of $`V_{\omega _{\overline{i}}}(a)`$ is obtained from the $`q`$–character of $`V_{\omega _i}(a)`$ by replacing each $`Y_{j,b}^{\pm 1}`$ by $`Y_{\overline{j},b}^{\pm 1}`$. Therefore we obtain: $`n_i=n_{\overline{i}}`$.
Consider the dual module $`V_{\omega _i}(a)^{*}`$. By Theorem 1.3(3), its highest weight equals $`\omega _{\overline{i}}`$. Hence $`V_{\omega _i}(a)^{*}`$ is isomorphic to $`V_{\omega _{\overline{i}}}(b)`$ for some $`b\in \mathbb{C}^\times `$. Since $`U_q\widehat{𝔤}`$ is a Hopf algebra, the module $`V_{\omega _i}(a)\otimes V_{\omega _i}(a)^{*}`$ contains a one–dimensional trivial submodule. Therefore the product of the corresponding $`q`$–characters contains the monomial $`m=1`$. According to Lemma 6.5, it can be obtained only as a product of the highest weight monomial in one $`q`$–character and the lowest monomial in the other. Therefore, $`b=aq^{\pm n_i}`$.
In the same way we obtain that $`V_{\omega _{\overline{i}}}(a)^{*}`$ is isomorphic to $`V_{\omega _i}(aq^{\pm n_i})`$.
From formula $`(`$1.6$`)`$ for the square of the antipode, we obtain that the double dual, $`V_{\omega _i}(a)^{**}`$, is isomorphic to $`V_{\omega _i}(aq^{2r^{\vee }h^{\vee }})`$. Since $`n_i>0`$, we obtain that $`n_i=r^{\vee }h^{\vee }`$. ∎
Having found the lowest weight monomial in the $`q`$–characters of the fundamental representations, we obtain using Theorem 1.3 the lowest weight monomial in the $`q`$–character of any irreducible module.
###### Corollary 6.9.
Let $`V`$ be an irreducible $`U_q\widehat{𝔤}`$–module. Let the highest weight monomial in $`\chi _q(V)`$ be
$$m_+=\underset{i\in I}{\prod }\underset{k=1}{\overset{s_i}{\prod }}Y_{i,a_k^{(i)}}.$$
Then the lowest weight monomial in $`\chi _q(V)`$ is given by
$$m_{-}=\underset{i\in I}{\prod }\underset{k=1}{\overset{s_i}{\prod }}Y_{\overline{i},a_k^{(i)}q^{r^{\vee }h^{\vee }}}^{-1}.$$
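As a quick illustration (our own specialization, not part of the original text): for $`U_q\widehat{𝔰𝔩}_2`$ one has $`\overline{1}=1`$ and $`r^{\vee }h^{\vee }=2`$, so the corollary applied to $`m_+=Y_{1,a}`$ gives

```latex
m_+ = Y_{1,a} \quad\Longrightarrow\quad m_- = Y_{1,aq^2}^{-1},
```

in agreement with the well-known $`q`$–character $`\chi _q(V_1(a))=Y_{1,a}+Y_{1,aq^2}^{-1}`$.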
We also obtain a new proof of the following corollary, which has been previously proved in \[CP1\], Proposition 5.1(b):
###### Corollary 6.10.
$$V_{\omega _i}(a)^{*}\simeq V_{\omega _{\overline{i}}}(aq^{r^{\vee }h^{\vee }}).$$
Now we are in a position to develop the theory of $`q`$–characters based on the lowest weight and antidominant monomials, as opposed to the highest weight and dominant ones.
###### Proposition 6.11.
The $`q`$–character of an irreducible finite-dimensional $`U_q\widehat{𝔤}`$–module $`V`$ has the form
$$\chi _q(V)=m_{-}(1+\sum _pN_p),$$
where $`m_{-}`$ is the lowest weight monomial and each $`N_p`$ is a monomial in $`A_{i,c}`$, $`i\in I`$, $`c\in \mathbb{C}^\times `$ (i.e., it does not contain any factors $`A_{i,c}^{-1}`$).
###### Proof.
First we prove the following analogue of formula (4.1):
$$\chi _q(V)=m_{-}(1+\sum _pN_p),$$
where each $`N_p`$ is a monomial in $`A_{i,c}^{\pm 1}`$, $`c\in \mathbb{C}^\times `$. The proof of this formula is exactly the same as the proof of Proposition 3 in \[FR2\]. The rest of the proof is completely parallel to the proof of Theorem 4.1. ∎
###### Lemma 6.12.
The only antidominant monomial in the $`q`$–character of a fundamental representation is the lowest weight monomial.
###### Proof.
The proof is completely parallel to the proof of Lemma 4.5. ∎
###### Lemma 6.13.
All monomials in the $`q`$–character of a fundamental representation are products of $`Y_{i,aq^n}^{\pm 1}`$ with $`n\le r^{\vee }h^{\vee }`$.
###### Proof.
The proof is completely parallel to the proof of Lemma 6.1. ∎
The combination of Lemmas 6.1 and 6.13 yields the following result.
###### Corollary 6.14.
Let the highest weight monomial $`m_+`$ of the $`q`$–character of an irreducible $`U_q\widehat{𝔤}`$–module $`V`$ be a product of monomials $`m_+^{(i)}`$ which have positive lattice support with bases $`a_i`$. Let $`s_i`$ be the maximal integer $`s`$, such that $`Y_{k,a_iq^s}`$ is present in $`m_+^{(i)}`$ for some $`k\in I`$. Then any monomial $`m`$ in $`\chi _q(V)`$ can be written as a product of monomials $`m^{(i)}`$, where each $`m^{(i)}`$ is a product of $`Y_{j,a_iq^n}`$ with $`n\in \mathbb{Z}`$, $`0\le n\le s_i+r^{\vee }h^{\vee }`$.
### 6.3. Restrictions on the values of spectral parameters of reducible tensor products of fundamental representations
It was proved in \[KS\] that $`V_{\omega _i}(a)V_{\omega _j}(b)`$ is irreducible if $`a/b`$ does not belong to a countable set. As M. Kashiwara explained to us, one can show that this set is then necessarily finite. The following proposition, which was conjectured, e.g., in \[AK\], gives a more precise description of this set.
###### Proposition 6.15.
Let $`a_i\in \mathbb{C}^\times `$, $`i=1,\mathrm{},n`$, and suppose that the tensor product of fundamental representations $`V_{\omega _{i_1}}(a_1)\otimes \cdots \otimes V_{\omega _{i_n}}(a_n)`$ is reducible. Then there exist $`m\ne j`$ such that $`a_m/a_j=q^k`$, where $`k\in \mathbb{Z}`$ and $`2\le k\le r^{\vee }h^{\vee }`$.
###### Proof.
If $`V_{\omega _{i_1}}(a_1)\otimes \cdots \otimes V_{\omega _{i_n}}(a_n)`$ is reducible, then $`\chi _q(V_{\omega _{i_1}}(a_1))\cdots \chi _q(V_{\omega _{i_n}}(a_n))`$ should contain a dominant term other than the product of the highest weight terms. But for that to happen, for some $`m`$ and $`j`$, there have to be cancellations between some $`Y_{p,a_mq^n}^{-1}`$ appearing in $`\chi _q(V_{\omega _{i_m}}(a_m))`$ and some $`Y_{p,a_jq^l}`$ appearing in $`\chi _q(V_{\omega _{i_j}}(a_j))`$. These cancellations may only occur if $`a_m/a_j=q^{\pm k}`$, $`k\in \mathbb{Z}`$, and $`0\le k\le r^{\vee }h^{\vee }`$, by Lemmas 6.1 and 6.13. Moreover, $`k\ge 2`$ according to Remark 6.6. ∎
Note that combining Theorem 6.7 and Proposition 6.15 we obtain
###### Corollary 6.16.
The set of poles of the normalized $`R`$–matrix $`\overline{R}_{V_{\omega _i}(a),V_{\omega _j}(a)}(z)`$ is a subset of the set $`\{q^k\,|\,k\in \mathbb{Z},2\le |k|\le r^{\vee }h^{\vee }\}`$.
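As a sanity check (ours, not in the original text): for $`U_q\widehat{𝔰𝔩}_2`$, where $`r^{\vee }h^{\vee }=2`$, the corollary gives

```latex
\{\, q^k \;\big|\; k\in\mathbb{Z},\ 2\le |k|\le 2 \,\} \;=\; \{\, q^{\pm 2} \,\},
```

consistent with the classical fact that $`V_1(a)\otimes V_1(b)`$ is reducible exactly when $`b/a=q^{\pm 2}`$.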
### 6.4. The $`q`$–characters of the dual representations
In this subsection we show a simple way to obtain the $`q`$–character of the dual representation.
Recall that $`𝒦`$ is given by $`(`$5.6$`)`$.
###### Lemma 6.17.
Let $`\chi _1,\chi _2\in 𝒦`$. Assume that all dominant monomials in $`\chi _1`$ are the same as in $`\chi _2`$ (counted with multiplicities). Then $`\chi _1=\chi _2`$.
###### Proof.
Consider $`\chi =\chi _1-\chi _2`$. We have $`\chi \in 𝒦`$ and $`\chi `$ has no dominant monomials. Then $`\chi =0`$ by Lemma 5.6. ∎
Note that a similar statement is true for antidominant monomials.
###### Proposition 6.18.
Let $`V_{\omega _i}(a)`$ be a fundamental representation. Then the $`q`$–character of the dual representation $`V_{\omega _i}(a)^{*}\simeq V_{\omega _{\overline{i}}}(aq^{r^{\vee }h^{\vee }})`$ is obtained from the $`q`$–character of $`V_{\omega _i}(a)`$ by replacing each $`Y_{i,aq^n}^{\pm 1}`$ by $`Y_{i,aq^n}^{\mp 1}`$.
###### Proof.
Let $`\chi _1=\chi _q(V_{\omega _{\overline{i}}}(aq^{r^{\vee }h^{\vee }}))`$ and let $`\chi _2`$ be obtained from $`\chi _q(V_{\omega _i}(a))`$ by replacing each $`Y_{i,aq^n}^{\pm 1}`$ by $`Y_{i,aq^n}^{\mp 1}`$. Then $`\chi _1`$ and $`\chi _2`$ are elements of $`𝒦`$ whose only dominant monomial is $`Y_{\overline{i},aq^{r^{\vee }h^{\vee }}}`$, by Corollary 4.5 and Lemma 6.12. Therefore $`\chi _1=\chi _2`$ by Lemma 6.17. ∎
###### Remark 6.19.
One can define a similar procedure for obtaining the $`q`$–character of the dual to any irreducible $`U_q\widehat{𝔤}`$–module $`V`$. Namely, by Theorem 1.3, $`\chi _q(V)`$ is a subsum in the product of $`q`$–characters of fundamental representations. In particular, any monomial $`m`$ in $`\chi _q(V)`$ is a product of monomials $`m^{(i)}`$ from the $`q`$–characters of these fundamental representations, and Proposition 6.18 tells us what to do with each $`m^{(i)}`$. This procedure is consistent because $`\chi _q((V\otimes W)^{*})=\chi _q(V^{*})\chi _q(W^{*})`$.
Note that under this procedure the dominant monomials go to the antidominant monomials and vice versa.
Anisotropic dielectric constant of the parent antiferromagnet Bi<sub>2</sub>Sr$`{}_{2}{}^{}M`$Cu<sub>2</sub>O<sub>8</sub> ($`M`$=Dy, Y and Er) single crystals
T. Takayanagi, T. Kitajima, T. Takemura and I. Terasaki
Department of Applied Physics, Waseda University, Tokyo 169-8555, JAPAN
Abstract: The anisotropic dielectric constants of the parent antiferromagnet Bi<sub>2</sub>Sr$`{}_{2}{}^{}M`$Cu<sub>2</sub>O<sub>8</sub> ($`M`$=Dy, Y and Er) single crystals were measured from 80 to 300 K. The in-plane dielectric constant is found to be very large (10<sup>4</sup>-10<sup>5</sup>). This suggests a remnant of the Fermi surface in the parent antiferromagnet. The out-of-plane dielectric constant is 50-200, which is three orders of magnitude smaller than the in-plane one. A significant anomaly is that a similar out-of-plane dielectric constant is observed in superconducting samples.
Keywords: parent antiferromagnet, dielectric constant, insulator-metal transition
INTRODUCTION
At low hole density, the CuO<sub>2</sub> plane shows high resistivity and antiferromagnetic (AF) order at low temperature; this state is called the parent AF insulator. With doping, an insulator-metal transition (IMT) arises, and the system changes from the AF insulator to a superconductor. For the IMT, the dielectric constant $`\epsilon `$ is of great importance in the sense that it provides a measure of the localization length in the insulator. Chen et al. were the first to point out the importance of $`\epsilon `$ in studies of high-$`T_c`$ cuprates (HTSC). However, they studied $`\epsilon `$ only for La<sub>2</sub>CuO<sub>4+δ</sub>, which has various structural phase transitions that might seriously affect $`\epsilon `$. Another problem is that they studied $`\epsilon `$ only near 4.2 K, although the resistivity anisotropy is strongly temperature dependent. Thus $`\epsilon `$ should be further examined for other HTSC over a wider temperature range.
We have been studying the charge transport of the parent insulator Bi<sub>2</sub>Sr$`{}_{2}{}^{}M`$Cu<sub>2</sub>O<sub>8</sub> ($`M`$=Y and rare-earth) . In these proceedings we report measurements and analyses of the anisotropic dielectric constants from 80 to 300 K.
EXPERIMENTAL
Single crystals of Bi<sub>2</sub>Sr$`{}_{2}{}^{}M`$Cu<sub>2</sub>O<sub>8</sub> ($`M`$=Dy,Y and Er) were grown by a self-flux method. The growth conditions and the sample characterization were described in Ref . The resistivity was measured using a four-probe technique, and a ring configuration was used for the out-of-plane direction. The dielectric constants were measured with a two-probe technique using a lock-in amplifier (Stanford Research SR630 and SR844). A typical contact resistance was 50-100 $`\mathrm{\Omega }`$ for the in-plane direction, and 1-10 $`\mathrm{\Omega }`$ for the out-of-plane direction. Thus the measurement along the in-plane direction is less accurate near room temperature, where the contact resistance becomes comparable with the sample resistance. Detailed information on the measurements will be written elsewhere.
All the samples of Bi<sub>2</sub>Sr$`{}_{2}{}^{}M`$Cu<sub>2</sub>O<sub>8</sub> were insulating, and the doping levels of the as-grown crystals were slightly different for different $`M`$. We do not yet understand this $`M`$ dependence, but the melting points and/or the liquidus lines may depend on $`M`$ to give a slight variation in composition. Thus crystals with different $`M`$’s act as a set of parent insulators with slightly different doping levels. We estimated the hole concentration per Cu ($`p`$) by measuring the room-temperature thermopower . With good reproducibility, $`M`$=Dy was nearly undoped ($`p`$=0-0.02), and $`M`$=Er and Y were slightly doped ($`p`$=0.02-0.04). Since the doping levels (and the measurement results) were nearly the same for $`M`$=Er and Y, we discuss only the data for $`M`$=Dy and Er below.
RESULTS AND DISCUSSION
Figure 1 shows the in-plane resistivity ($`\rho _{ab}`$) and the out-of-plane resistivity ($`\rho _c`$) for $`M`$=Er and Dy. Reflecting the different doping levels, both $`\rho _{ab}`$ and $`\rho _c`$ are larger for $`M`$=Dy than for $`M`$=Er. The temperature dependence is also different between $`M`$=Dy and Er. In particular, $`\rho _{ab}`$ for $`M`$=Er is nearly independent of temperature at 300 K, which indicates that the in-plane conduction is nearly metallic. It should be noted here that $`\rho _c/\rho _{ab}`$ is strongly dependent on temperature and the doping levels, which suggests the confinement behavior in the AF insulator .
Figure 2 shows the in-plane dielectric constant ($`\epsilon _{ab}`$) and the out-of-plane dielectric constant ($`\epsilon _c`$) for $`M`$=Er and Dy at 1 MHz. Both $`\epsilon _{ab}`$ and $`\epsilon _c`$ are larger for $`M`$=Er than for $`M`$=Dy, which indicates that the sample for $`M`$=Er is closer to the IMT boundary. It should be emphasized that $`\epsilon _{ab}`$ is as large as 10<sup>4</sup>-10<sup>5</sup>. We think that the huge $`\epsilon _{ab}`$ is of electronic origin, because (1) $`\epsilon _{ab}`$ is very sensitive to the doping levels and (2) the dielectric loss Im $`\epsilon _{ab}\propto 1/\rho _{ab}`$ is large compared with conventional ferroelectric materials. The charge order or the variable range hopping may be an origin of the huge $`\epsilon _{ab}`$. Thus we may say that the huge $`\epsilon _{ab}`$ is a remnant of the Fermi surface calculated by band theories.
An important feature is that $`\epsilon _c`$ remains positive and finite in the superconducting samples. Kitano et al. found that $`\epsilon _c`$ of Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8</sub> near $`T_c`$ was 40-50 at 10 GHz, whereas Terasaki and Tajima measured it to be 120 at 100 MHz. These values are of the same order as $`\epsilon _c`$ for the parent insulators, and we may say that the out-of-plane conductance of HTSC is a “remnant” of the parent insulator. Another feature is that the temperature dependence of $`\epsilon _c`$ differs between $`M`$=Er and Dy. Recently we have found that $`\epsilon _c`$ for all the samples, including superconducting ones, can be understood with the Debye description of dielectric relaxation , which has been used for the analyses of the dielectric response of the charge density wave .
The reciprocal of the dielectric constant ($`1/\epsilon _{ab}`$ and $`1/\epsilon _c`$) at 80 K is plotted as a function of hole concentration per Cu in Fig. 3. For comparison, the data for the superconducting samples are also plotted. $`\epsilon _c`$ is taken from Refs. , and $`\epsilon _{ab}`$ is estimated from the Drude model as $`\epsilon _{ab}(\omega \to 0)=(\omega _p/\gamma )^2`$, where $`\omega _p`$ and $`\gamma `$ are the plasma frequency and the damping factor, respectively. By putting $`\hbar \omega _p`$=1.1 eV and $`\hbar \gamma `$=$`k_BT`$, we get $`\epsilon _{ab}=2.5\times 10^4`$, which is of the same order as $`\epsilon _{ab}`$ for $`M`$=Dy and Er. As shown in Fig. 3(a), $`1/\epsilon _{ab}`$ crosses zero near $`p`$=0.05, and goes negative on the metallic side. This is exactly what is seen at the IMT in doped Si. On the other hand, although $`1/\epsilon _c`$ is smaller for $`M`$=Er than for $`M`$=Dy, $`1/\epsilon _c`$ for the superconducting samples is positive and stays of the same order. Thus $`\epsilon _c`$ is unlikely to diverge at the IMT, just as Chen et al. previously found that $`\epsilon _c`$ for La<sub>2</sub>CuO<sub>4+δ</sub> does not diverge at the IMT . We should note that the gross feature of Fig. 3 is not largely dependent on frequency and temperature, although the data for 1 MHz at 80 K were rather arbitrarily selected. More detailed analysis is in progress.
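The Drude estimate above is easy to reproduce; the following sketch (ours, not from the paper) evaluates $`\epsilon _{ab}=(\omega _p/\gamma )^2`$ with $`\hbar \omega _p`$=1.1 eV and $`\hbar \gamma =k_BT`$ at $`T`$=80 K, the temperature of Fig. 3, which indeed gives about $`2.5\times 10^4`$.

```python
K_B = 8.617333e-5  # Boltzmann constant in eV/K

def drude_eps(hbar_omega_p_ev, temperature_k):
    # static limit of the Drude dielectric constant: eps(omega -> 0) = (omega_p/gamma)^2
    hbar_gamma_ev = K_B * temperature_k  # damping set by hbar*gamma = k_B*T
    return (hbar_omega_p_ev / hbar_gamma_ev) ** 2

print(f"{drude_eps(1.1, 80.0):.2g}")  # → 2.5e+04
```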
Chen et al. pointed out two possibilities for the non-divergent $`\epsilon _c`$. One is that the IMT in HTSC occurs only along the in-plane direction, and the other is that the heavy effective mass along the out-of-plane direction makes the effective Bohr radius of a hole shorter than the $`c`$-axis length. Our data favor the former scenario. According to the latter scenario, $`\epsilon _{ab}/\epsilon _c`$ would be equal to the effective-mass ratio, which disagrees with our observation that $`\epsilon _c`$ for Bi<sub>2</sub>Sr$`{}_{2}{}^{}M`$Cu<sub>2</sub>O<sub>8</sub> is larger than $`\epsilon _c`$ for La<sub>2</sub>CuO<sub>4+δ</sub>. Thus the non-divergent $`\epsilon _c`$ does not come solely from the anisotropic effective mass, but from an anomalous conduction mechanism such as “confinement”.
SUMMARY
In summary, we prepared single crystals of Bi<sub>2</sub>Sr$`{}_{2}{}^{}M`$Cu<sub>2</sub>O<sub>8</sub> ($`M`$=Y and rare-earth) and measured the anisotropic dielectric constants $`\epsilon _{ab}`$ and $`\epsilon _c`$ from 80 to 300 K. The present study has revealed that $`\epsilon _{ab}`$ (10<sup>4</sup>-10<sup>5</sup>) is about three orders of magnitude larger than $`\epsilon _c`$ (10<sup>2</sup>). The huge $`\epsilon _{ab}`$ is a remnant of the Fermi surface, where the dc conductivity is suppressed by the strong correlation or localization. We have found that $`\epsilon _c`$ remains near $`10^2`$ across the insulator-metal transition, which means that the transition occurs only along the in-plane direction. This may be a piece of evidence for the confinement behavior of the parent insulators.
Acknowledgments. This work was partially supported by The Kawakami Memorial Foundation, and by Waseda University Grant for Special Research Projects (99A-556). The authors would like to thank S. Tajima for the rf-conductivity measurements in the Superconductivity Research Laboratory, International Superconductivity Technology Center. They also thank T. Itoh, T. Kawata, K. Takahata, Y. Iguchi and T. Sugaya for collaboration.
## 1 Introduction
The $`O(N)`$ spin models (or, more precisely, the $`O(N)`$-invariant nonlinear $`\sigma `$-models) are defined by
$$\beta \mathcal{H}=-J\underset{<i,j>}{\sum }𝐒_i𝐒_j-𝐇\underset{i}{\sum }𝐒_i,$$
(1)
where $`i`$ and $`j`$ are nearest-neighbour sites on a $`d`$-dimensional hypercubic lattice, and $`𝐒_i`$ is an $`N`$-component unit vector at site $`i`$. The case $`N=1`$ corresponds to the Ising model. We will consider here only $`N>1`$. It is convenient to decompose the spin vector $`𝐒_i`$ into longitudinal (parallel to the magnetic field $`𝐇`$) and transverse components
$$𝐒_i=S_i^{\parallel }\widehat{𝐇}+𝐒_i^{\perp }.$$
(2)
The order parameter of the system, the magnetization $`M`$, is then the expectation value of the lattice average $`S^{}`$ of the longitudinal spin components
$$M=<\frac{1}{V}\sum _iS_i^{\parallel }>=<S^{\parallel }>.$$
(3)
Two types of susceptibilities are defined. The longitudinal susceptibility is the usual derivative of the magnetization, whereas the transverse susceptibility corresponds to the fluctuation per component of the lattice average $`𝐒^{}`$ of the transverse spin components
$`\chi _L`$ $`=`$ $`{\displaystyle \frac{\partial M}{\partial H}}=V(<S^{\parallel 2}>-M^2),`$ (4)
$`\chi _T`$ $`=`$ $`V{\displaystyle \frac{1}{N-1}}<𝐒^{\perp 2}>={\displaystyle \frac{M}{H}}.`$ (5)
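In a simulation, Eqs. (3)-(5) translate directly into estimators over the measured configurations. The sketch below is our own illustration (not the authors’ code): component 0 of each spin is taken parallel to $`𝐇`$, and on a fully ordered toy ensemble the fluctuation parts vanish.

```python
def observables(configs, n_comp=4):
    # configs: list of measurements; each is a list of N-component unit spins
    vol = len(configs[0])
    s_par = s_par_sq = s_perp_sq = 0.0
    for spins in configs:
        par = sum(s[0] for s in spins) / vol                       # lattice average S^par
        perp = [sum(s[k] for s in spins) / vol for k in range(1, n_comp)]
        s_par += par
        s_par_sq += par * par
        s_perp_sq += sum(p * p for p in perp)
    n = len(configs)
    mag = s_par / n                                                # M = <S^par>, Eq. (3)
    chi_l = vol * (s_par_sq / n - mag * mag)                       # V(<S^par 2> - M^2), Eq. (4)
    chi_t = vol * (s_perp_sq / n) / (n_comp - 1)                   # V<S_perp^2>/(N-1), Eq. (5)
    return mag, chi_l, chi_t

ordered = [[[1.0, 0.0, 0.0, 0.0]] * 8] * 4   # 4 measurements of 8 aligned O(4) spins
print(observables(ordered))                  # → (1.0, 0.0, 0.0)
```

In a real simulation the configurations would come from a Monte Carlo update; the fully ordered ensemble here only checks the bookkeeping.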
These models are of general interest in condensed matter physics, but have applications also in quantum field theory. In particular, the three-dimensional $`O(4)`$ model is of importance for quantum chromodynamics (QCD) with two degenerate light-quark flavors at finite temperature. If QCD undergoes a second-order chiral transition in the continuum limit, it is believed to belong to the same universality class as the $`3d`$ $`O(4)`$ model . QCD lattice data have therefore been compared to the $`O(4)`$ scaling function, determined numerically in . For staggered fermions this comparison is at present not conclusive , but results for Wilson fermions seem to agree quite well with the predictions.
$`O(N)`$ models in dimension $`2<d\le 4`$ are predicted to display singularities on the coexistence line $`T<T_c,H=0`$ due to the presence of massless Goldstone modes . In fact, both susceptibilities are predicted to diverge in this region. The magnetic equation of state is nevertheless divergence-free, and compatible with these singularities. The equation of state was calculated up to order $`ϵ^2`$ in the $`ϵ`$-expansion by Brézin et al. . On the basis of this expansion it has been argued that these singularities may be easily observable, since the perturbative coefficient associated with the diverging term is quite large.
A further consequence of the Goldstone singularities is the appearance of strong finite-size effects at all $`T<T_c`$ for $`H\to 0`$. These effects have been studied using chiral perturbation theory in . Direct numerical evidence of the Goldstone singularities is however lacking, apart from early simulations of the three-dimensional $`O(3)`$ model on small lattices , where indications of the predicted behaviour were found.
The aim of this paper is to verify explicitly the Goldstone singularities, and to investigate their interplay with the critical behaviour and the effect they have on the scaling function. We do this by simulating the three-dimensional $`O(4)`$ model in the presence of an external magnetic field in the low-temperature phase and close to the critical temperature $`T_c`$. For determining the scaling function, we have also simulated at some high-temperature values. First results of our work have been presented at Lattice’99 . The plan of the paper is as follows. In the next section we review the perturbative predictions for the magnetization and the susceptibilities at low temperatures, as well as the analytic results for the magnetic equation of state, which is equivalent to the magnetization’s scaling function. Our numerical results are discussed in Section 3. The fits and parametrization for the scaling function are given in Section 4. A summary and our conclusions are presented in Section 5.
## 2 Perturbative Predictions and Critical Behaviour
The continuous symmetry present in the $`O(N)`$ spin models gives rise to the so-called spin waves: slowly varying (long-wavelength) spin configurations, whose energies may be arbitrarily close to the ground-state energy. In two dimensions these modes are responsible for the absence of spontaneous magnetization, whereas in $`d>2`$ they are the massless Goldstone modes associated with the spontaneous breaking of the rotational symmetry for temperatures below the critical value $`T_c`$ . For $`T<T_c`$ the system is in a broken phase, i.e. the magnetization $`M(T,H)`$ attains a finite value $`M(T,0)`$ at $`H=0`$. To be definite we assume here $`H>0`$. As a consequence the transverse susceptibility, which is directly related to the fluctuation of the Goldstone modes, diverges as $`H^1`$ when $`H0`$ for all $`T<T_c`$. This can be seen immediately from the identity
$$\chi _T=\frac{M(T,H)}{H}.$$
(6)
This expression is a direct consequence of the $`O(N)`$ invariance of the zero-field free energy, and can be derived as a Ward identity . It is valid for all values of $`T`$ and $`H`$.
A less trivial result is that also the longitudinal susceptibility diverges on the coexistence curve for $`2<d\le 4`$. The leading term in the perturbative expansion for $`2<d<4`$ is $`H^{d/2-2}`$. The predicted divergence in $`d=3`$ is thus
$$\chi _L(T<T_c,H)\sim H^{-1/2}.$$
(7)
This is equivalent to an $`H^{1/2}`$-behaviour of the magnetization near the coexistence curve
$$M(T<T_c,H)=M(T,0)+cH^{1/2}.$$
(8)
An interesting question is whether the above expressions still describe the behaviour close to the critical region $`T\lesssim T_c`$. We recall that the critical behaviour is determined by the singular part of the free energy. Its scaling form in the thermodynamic limit is
$$f_s(t,h)=b^{-d}f_s(b^{y_t}t,b^{y_h}h).$$
(9)
Here we have neglected possible dependencies on irrelevant scaling fields and exponents. The variables $`t`$ and $`h`$ are the conveniently normalized reduced temperature $`t=(TT_c)/T_0`$ and magnetic field $`h=H/H_0`$, and $`b`$ is a free length rescaling factor. The relevant exponents $`y_{t,h}`$ specify all the other critical exponents
$$y_t=1/\nu ,y_h=1/\nu _c;$$
(10)
$$\nu _c=\nu /\beta \delta ,d\nu =\beta (1+\delta ),\gamma =\beta (\delta -1).$$
(11)
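These relations can be cross-checked by simple arithmetic. The sketch below inserts the values $`\beta =0.38`$ and $`\delta =4.86`$ quoted in Section 3 (the choice of inputs is for illustration only):

```python
# Exponents from the relations (10)-(11) in d = 3, using the values
# beta = 0.38 and delta = 4.86 quoted in Section 3.
d = 3
beta, delta = 0.38, 4.86

nu = beta * (1 + delta) / d        # from d*nu = beta*(1 + delta)
gamma = beta * (delta - 1)         # gamma = beta*(delta - 1)
nu_c = nu / (beta * delta)
y_t, y_h = 1.0 / nu, 1.0 / nu_c    # relevant RG eigenvalues

print(nu, gamma, nu_c, y_t, y_h)
```

The resulting $`\gamma \approx 1.467`$ is consistent with the value $`1.45(1)`$ obtained from the large-$`x`$ fit in Section 4.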
Choosing the scale factor $`b`$ such that $`b^{y_h}h=1`$ and using $`M=\partial f_s/\partial H`$ one finds the equation
$$M=h^{1/\delta }f_G(t/h^{1/\beta \delta }),$$
(12)
where $`f_G`$ is a scaling function. It becomes universal after fixing the normalization constants $`H_0`$ and $`T_0`$. This scaling function was calculated numerically for the $`3d`$ $`O(4)`$ model by Toussaint and is used in comparison to QCD lattice data .
Alternatively, one may choose $`b^{y_t}|t|=1`$. This leads to the Widom-Griffiths form of the equation of state
$$y=f(x),$$
(13)
where
$$y\equiv h/M^\delta ,x\equiv t/M^{1/\beta }.$$
(14)
It is usual to normalize $`t`$ and $`h`$ such that
$$f(0)=1,f(-1)=0.$$
(15)
The scaling forms in Eqs. (12) and (13) are clearly equivalent. In the following we will work with form (13) and obtain (12) from it parametrically in Section 4.
The equation of state (13) has been derived by Brézin et al. to order $`ϵ^2`$ in the $`ϵ`$-expansion, where $`ϵ=4-d`$. Although diverging terms in $`\chi _T`$ appear at intermediate steps of the derivation, they are canceled by diverging $`\chi _L`$ terms, and the resulting expression is divergence-free. This expression has been considered by Wallace and Zia in the limit $`x\to -1`$, i.e. at $`T<T_c`$ and close to the coexistence curve. In this limit the function is inverted to give $`x+1`$ as a double expansion in powers of $`y`$ and $`y^{d/2-1}`$
$$x+1=\stackrel{~}{c}_1y+\stackrel{~}{c}_2y^{d/2-1}+\stackrel{~}{d}_1y^2+\stackrel{~}{d}_2y^{d/2}+\stackrel{~}{d}_3y^{d-2}+\dots .$$
(16)
The coefficients $`\stackrel{~}{c}_1`$, $`\stackrel{~}{c}_2`$ and $`\stackrel{~}{d}_3`$ are then obtained from the general expression of . The above form is motivated by the $`H`$-dependence in the $`ϵ`$-expansion of $`\chi _L`$ at low temperatures .
In Fig. 1 we show the function $`f(x)`$ from , its low-temperature ($`x\to -1`$) limit and $`f(x)`$ from , which is obtained from the inverted form of the low-temperature expression, Eq. (16). We see that the low-temperature curve remains close to the general one for a significant portion of the phase diagram, including the critical point at $`x=0`$, $`y=1`$ and moving into the high-temperature phase, the region to the right of the critical point. The “inverted” curve and the low-temperature curve that generated it agree quite well. (We remark that the process of inverting produces coefficients that are determined only to order $`ϵ`$, while the original low-temperature expression was known to order $`ϵ^2`$.) As mentioned above, the form (16) is equivalent to the Goldstone-singularity form for $`\chi _L`$ at low temperatures if one identifies the variable $`y`$ with the field $`H`$. Nevertheless, the fact that this form may describe the behaviour also at temperatures close to $`T_c`$ and higher is not contradictory, since the variable $`y`$ can only be identified with $`H`$ if $`M(T,H=0)\ne 0`$, which happens only at low temperatures. The form (16) has explicitly nonnegative values of $`y`$ near $`x=-1`$, while the original perturbative expression produces an unphysical negative $`y`$ in a very small neighborhood of this point . Possible problems with the form (16) are pointed out in \[15, Section 5\].
As for the large-$`x`$ limit (corresponding to $`T>T_c`$ and small $`H`$), the expected behaviour is given by Griffiths’s analyticity condition
$$f(x)=\sum _{n=1}^{\infty }a_nx^{\gamma -2(n-1)\beta }.$$
(17)
None of the curves in Fig. 1 approaches this limit, since the low-temperature curves are not valid at large $`x`$, and the general curve in its original form is known to have problems in this limit .
The perturbative equation of state has been used in to produce pictures of the expected behaviour of the $`3d`$ $`O(4)`$ model for a large range of temperatures and magnetic fields. The authors have employed an interpolation of the function from with the inverted form from at low temperatures and the Griffiths condition at high temperatures. When compared to Monte Carlo data for the same model in , the perturbative scaling function shows qualitative agreement. In Section 4 we propose a fit of our Monte Carlo data to the perturbative form of the equation of state, using (16).
## 3 Numerical Results
Our simulations are done on lattices with linear extensions $`L=24`$, 32, 48, 64, 72, 96 and 120 using the cluster algorithm of Ref. . We compute the magnetization $`M`$ and the susceptibilities $`\chi _{L,T}`$ at fixed $`J=1/T`$ (i.e. at fixed $`T`$) and varying $`H`$. We note that, due to the presence of a nonzero field, the magnetization is nonzero on finite lattices, contrary to what happens for simulations at $`H=0`$, where one is led to consider the approximate form $`\langle \frac{1}{V}|\sum _i𝐒_i|\rangle `$.
We use the value $`J_c=0.93590`$, obtained in simulations of the zero-field model . In Fig. 2 we show our data for the magnetization for low temperatures up to $`T_c`$ plotted versus $`H^{1/2}`$. We have simulated at increasingly large values of $`L`$ at fixed values of $`J`$ and $`H`$ in order to eliminate finite-size effects. The finite-size effects for small $`H`$ do not disappear as one moves away from $`T_c`$, but rather increase.
In Fig. 3 we plot only the results from our largest lattices. The solid lines are fits to the form (8), and the filled squares at $`H>0`$ denote the last points included in our fits. It is evident that the predicted behaviour (linear in $`H^{1/2}`$) holds close to $`H=0`$ for all temperatures $`T<T_c`$ considered. The Goldstone-mode effects are therefore observable also rather close to $`T_c`$. The straight-line fits coincide with the measured points in a wide range of $`H`$ for low $`T`$ (for $`J=1.2`$ and 1.1 up to $`H=0.32`$).
With increasing $`T`$ the coincidence region gets smaller and vanishes at $`T_c`$. In Table 1 we have listed the fit parameters. The value $`M(T,0)`$ obtained from the fits is the infinite-volume value of the magnetization on the coexistence line. In the neighbourhood of $`T_c`$ it should show the usual critical behaviour
$$M(T\lesssim T_c,H=0)=B(T_c-T)^\beta =B(1/J_c-1/J)^\beta .$$
(18)
Using this simple form, without including any next-to-leading terms, we are able to fit all points of Table 1 with $`B=0.9670(5)`$ and the exponent $`\beta =0.3785(6)`$, in agreement with the high-precision zero-field determination in .
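The extrapolation step itself is a straight-line fit in $`\sqrt{H}`$. A minimal sketch of this step, on synthetic data invented for illustration (not our Monte Carlo data):

```python
import numpy as np

# Straight-line fit of M against sqrt(H), as in Eq. (8), to extract
# M(T,0) by extrapolation to H = 0.  The data below are synthetic.
H = np.array([0.0025, 0.01, 0.04, 0.09, 0.16])
M0_true, c_true = 0.90, 0.45
M = M0_true + c_true * np.sqrt(H)              # ideal Goldstone behaviour

c_fit, M0_fit = np.polyfit(np.sqrt(H), M, 1)   # slope, intercept
print(M0_fit, c_fit)                           # recovers 0.90 and 0.45
```

Repeating such fits at several values of $`J`$ and then fitting the intercepts to Eq. (18) is what yields $`B`$ and $`\beta `$.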
As the critical point is reached the $`H`$-dependence of the magnetization should change to satisfy critical scaling. We thus fit the data from the largest lattice sizes at $`T_c`$ to the form
$$M(T_c,H)=d_cH^{1/\delta }.$$
(19)
As can be seen in Fig. 4a a very good straight-line fit to the largest-$`L`$ results is possible. The smaller lattices show however definite finite-size effects. We find the exponent $`\delta =4.86(1)`$, in agreement with , and in addition we obtain the critical amplitude $`d_c=0.715(1)`$. In Fig. 4b the magnetization at $`T_c`$ is compared with the finite-size-scaling prediction
$$M(T_c,H;L)=L^{-\beta /\nu }Q_M(HL^{\beta \delta /\nu }),$$
(20)
using the critical exponents of Ref. . We observe no corrections to scaling, even at higher $`H`$-values. The scaling function $`Q_M`$ is universal. In order to be consistent with Eq. (19) for large $`z\equiv HL^{\beta \delta /\nu }`$, i.e. for fixed small $`H`$ and large $`L`$, it must behave as
$$Q_M(z)=d_cz^{1/\delta }.$$
(21)
This offers a second way to determine the critical amplitude $`d_c`$, this time exploiting also the data of the smaller lattices. From a fit in the $`z`$-range 20–1000 we find the value $`d_c=0.713(1)`$, which agrees with our first determination.
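The consistency of Eqs. (19)-(21) is simply the statement that the $`L`$-dependence cancels. A short numerical check (exponents as above, $`d_c=0.713`$):

```python
# Inserting Q_M(z) = d_c z^{1/delta} into the finite-size-scaling form
# M = L^{-beta/nu} Q_M(H L^{beta*delta/nu}) must give back the
# L-independent critical behaviour M = d_c H^{1/delta} of Eq. (19).
beta, delta = 0.38, 4.86
nu = beta * (1 + delta) / 3.0
d_c, H = 0.713, 0.004

def M_fss(L):
    z = H * L**(beta * delta / nu)
    return L**(-beta / nu) * d_c * z**(1.0 / delta)

print(M_fss(48), M_fss(120), d_c * H**(1.0 / delta))  # identical values
```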
In Fig. 5 we show an example (at $`J=0.98`$) of the different behaviours of $`\chi _T`$ and $`\chi _L`$ at low temperatures. As a test we compare the result for $`\partial M/\partial H`$ (line) from the $`M`$-fits in Table 1 to the $`\chi _L`$-data. Though there are large finite-size effects for small $`L`$, the results for the highest $`L`$-values agree nicely with the expected behaviour. A similar test can be done for $`\chi _T`$ by showing also the result for $`M/H`$ (line), as obtained from the measurements of the magnetization in Fig. 3. Here as well we find agreement for large $`L`$.
The $`H`$-dependencies of the susceptibilities at the critical point are the same. Their amplitudes differ however by a factor $`\delta `$. Here and in the following we use the values $`\delta =4.86`$ and $`\beta =0.38`$ . From Eqs. (4) and (6) and from the magnetization at the critical point, Eq. (19), we derive
$$\chi _L=(d_c/\delta )H^{1/\delta -1}\quad \mathrm{and}\quad \chi _T=d_cH^{1/\delta -1}.$$
(22)
In the left part of Fig. 6 we compare $`\chi _L`$ and $`\chi _T`$ at $`T_c`$. The lines in the figure are calculated from Eq. (22) and the fit results of Eq. (19). We see again consistency with the highest-$`L`$ data. The right part of Fig. 6 shows two examples of $`\chi _L`$ and $`\chi _T`$ for high temperatures. Both susceptibilities converge to one value $`\chi (T)`$ for $`H\to 0`$, since no spontaneous symmetry breaking occurs for $`T>T_c`$. At the higher temperature corresponding to $`J=0.80`$ the two susceptibilities are essentially constant (i.e. the magnetization is linear in $`H`$) and equal for a large range in $`H`$. However, at $`J=0.90`$ (which is closer to $`T_c`$) and finite $`H`$-values $`\chi _T`$ is larger than $`\chi _L`$.
## 4 The Scaling Function
The scaling function for the three-dimensional $`O(4)`$ model was determined from a fit of Monte Carlo data in . Our goal here is to describe our data using the perturbative form of the equation of state as discussed in Section 2, but with nonperturbative coefficients, determined from a fit of the data. The original form of the function $`y=f(x)`$ from is not suitable for such a fit. We thus consider Eq. (16), which is written as a simple series expansion in $`y`$. We do not expect this form to describe the data for all $`x`$ and $`y`$, yet looking at Fig. 1, we hope to cover a significant portion of the phase diagram for small $`x`$. The idea is to interpolate this result with a fit to the large-$`x`$ form (17).
Our fits are shown, together with our data, in Figs. 7 and 8. We have considered data from our largest lattices, for inverse temperatures $`0.9\le J\le 1.0`$ and magnetic fields $`H\le 0.01`$. The normalization constants $`H_0`$ and $`T_0`$, obtained from Eq. (15) and our fits in Section 3, are given by
$$H_0=5.08(3),T_0=1.093(2).$$
(23)
We have performed a fit using the three leading terms in (16) for small $`y`$
$$x_1(y)+1=(\stackrel{~}{c}_1+\stackrel{~}{d}_3)y+\stackrel{~}{c}_2y^{1/2}+\stackrel{~}{d}_2y^{3/2}.$$
(24)
This form was fitted in the interval $`-1<x\lesssim 1.5`$, giving
$$\stackrel{~}{c}_1+\stackrel{~}{d}_3=0.345(12),\stackrel{~}{c}_2=0.6744(73),\stackrel{~}{d}_2=-0.0232(49).$$
(25)
The fit describes all the data at $`T<T_c`$ and also higher, up to $`x\approx 5`$. This confirms that the expression (16) is valid also away from $`x=-1`$, as observed in Section 2. We note the small value of $`\stackrel{~}{d}_2`$. An attempt to include the next power of $`y`$ leads to a coefficient that is zero within errors. We also see that our data are not sensitive to possible logarithmic corrections to Eq. (16) as proposed in . Our coefficients can be compared to those calculated perturbatively for $`N=4`$ in Ref.
$$\stackrel{~}{c}_1+\stackrel{~}{d}_3=0.528,\stackrel{~}{c}_2=0.530.$$
(26)
For large $`x`$ we have done a 2-parameter fit of the behaviour (17), in the corresponding form for $`x`$ in terms of $`y`$
$$x_2(y)=ay^{1/\gamma }+by^{(1-2\beta )/\gamma }.$$
(27)
Considering data points with $`y>50`$ (corresponding to $`x\gtrsim 15`$) we obtain
$$a=1.084(6),b=-0.874(25).$$
(28)
Expression (27) is seen to describe the data for $`x\gtrsim 2`$. We mention that a fit of our data to the leading term in Griffiths’s condition, using $`x\geq 50`$, yields $`\gamma =1.45(1)`$, which is in agreement with .
The small- and large-$`x`$ curves cover the whole range of values of $`x`$ remarkably well. In fact, the two curves are approximately superimposed in the interval $`2\lesssim x\lesssim 8`$. We can therefore interpolate smoothly, for example by taking
$$x(y)=x_1(y)\frac{y_0^3}{y_0^3+y^3}+x_2(y)\frac{y^3}{y_0^3+y^3}$$
(29)
at $`y_0=10`$, which corresponds to $`x\approx 4`$. Expression (29) is equivalent to the equation of state (13) and to the scaling function $`f_G`$ in (12). In Fig. 8 we show a plot of $`f_G`$ obtained parametrically from $`x(y)`$ in (29). The two variables in the plot are simply related to $`x`$ and $`y`$ by
$$f_G=M/h^{1/\delta }=y^{-1/\delta },t/h^{1/\beta \delta }=xy^{-1/\beta \delta }.$$
(30)
We see a remarkable agreement of our data points with the form suggested by perturbation theory. With respect to the scaling function in , our function is slightly higher for large negative $`t/h^{1/\beta \delta }`$, due to our more complete elimination of finite-size effects.
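The parametric construction described above is easy to reproduce. The sketch below uses the coefficients of Eqs. (25) and (28) (with $`\stackrel{~}{d}_2`$ and $`b`$ taken negative) and the exponents $`\beta =0.38`$, $`\delta =4.86`$:

```python
import numpy as np

# Parametric construction of the scaling function f_G from the fitted
# equation of state: small-x branch (24), large-x branch (27), smooth
# interpolation (29), and the map (30) to the variables of Eq. (12).
beta, delta = 0.38, 4.86
gamma = beta * (delta - 1)

def x1(y):                       # low-temperature branch, Eq. (24)
    return -1 + 0.345 * y + 0.6744 * np.sqrt(y) - 0.0232 * y**1.5

def x2(y):                       # Griffiths branch, Eq. (27)
    return 1.084 * y**(1 / gamma) - 0.874 * y**((1 - 2 * beta) / gamma)

def x(y, y0=10.0):               # interpolation, Eq. (29)
    w = y**3 / (y0**3 + y**3)
    return x1(y) * (1 - w) + x2(y) * w

y = np.logspace(-3, 3, 601)
f_G = y**(-1 / delta)                     # f_G = M/h^{1/delta}
arg = x(y) * y**(-1 / (beta * delta))     # t/h^{1/(beta*delta)}

# consistency checks: x is close to 0 at y = 1 (normalization f(0) = 1)
# and the two branches nearly coincide at the matching point y0 = 10
print(x1(1.0), x1(10.0), x2(10.0))
```

Note that $`x_1(1)\approx 0`$ within the fit errors, as required by the normalization $`f(0)=1`$, and that the two branches differ by less than 0.1 at the matching point.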
## 5 Summary and Conclusions
We have shown that the Goldstone singularities are clearly observable at low temperatures, and also close to $`T_c`$. In fact, we are able to use the observed Goldstone-effect behaviour to extrapolate our data to $`H\to 0`$ and obtain the zero-field critical exponent $`\beta `$ in good agreement with . We remark that the same does not happen at high temperatures: we are not able to get the exponent $`\gamma `$ from extrapolations using the constant behaviour of the longitudinal susceptibility (or the linear behaviour of the magnetization), since this behaviour is masked close to $`T_c`$ for the fields $`H`$ we have taken into account. At the same $`H`$’s the $`H^{1/2}`$ behaviour is clearly present for all the $`T<T_c`$ we consider, showing that the Goldstone effect is dominating the critical one, except at $`T_c`$.
A strong manifestation of the Goldstone behaviour had been conjectured perturbatively , based on the size of the coefficient $`\stackrel{~}{c}_2`$ in the $`ϵ`$-expansion of the equation of state. We have fitted the perturbative form in Section 4, finding a coefficient that is even larger than the perturbative one.
The resulting curve for the equation of state describes all the data beautifully, and can be plotted parametrically for the scaling function.
As a by-product of our work we have determined the critical exponent $`\delta =4.86(1)`$ by a fit of the magnetization at $`T_c`$ to the critical scaling behaviour as a function of $`H`$. In addition we checked the finite-size-scaling prediction for $`M`$. It is remarkable that in both cases we observed no corrections to scaling.
A similar investigation for the $`O(2)`$ model is currently being done .
Acknowledgements
We thank Attilio Cucchieri and Frithjof Karsch for helpful suggestions and comments. Our computer code for the cluster algorithm was based on a zero-field program by Manfred Oevers. This work was supported by the Deutsche Forschungsgemeinschaft under Grant No. Ka 1198/4-1. |
# 1 Calibrations and supersymmetry
Minimal surfaces have long been an active area of mathematics. With the advent of the ‘brane revolution’ they have become important to theoretical particle physics via M-theory. An important development on the mathematical side was the introduction in 1982 of the concept of a calibration . In the simplest cases this deals with $`p`$-dimensional submanifolds of $`\text{𝔼}^n`$. A p-form $`\varphi `$ on $`\text{𝔼}^n`$ is a calibration if, for all tangent p-planes $`\xi `$,
(i) $`\varphi _\xi \le vol_\xi `$
(ii) $`d\varphi =0`$
The contact set of the calibration is the set of all tangent p-planes for which the calibration inequality (i) is saturated, and a calibrated p-surface is one for which all tangent p-planes belong to the contact set of a calibration<sup>2</sup><sup>2</sup>2Each p-plane tangent to a surface in $`\text{𝔼}^n`$ corresponds to a point on the Grassmannian of p-planes through the origin of $`\text{𝔼}^n`$, so the contact set of a calibration is a subset of this Grassmannian. This relies on the fact that $`\text{𝔼}^n`$ has trivial holonomy, but a modified theory exists for spaces of reduced holonomy.. A theorem of Harvey and Lawson states that a calibrated p-surface has minimum p-volume among all p-surfaces with the same homology . That is, it is a minimal p-surface.
The proof of this theorem is elementary. Consider an open set $`U`$ of a surface satisfying the premise. Since the calibration inequality (i) is saturated, by hypothesis, we have
$$\mathrm{vol}(U)=\int _U\varphi $$
(1)
Now consider any other p-surface that coincides with the original one on the boundary $`\partial U`$ of $`U`$, and let $`U^{}`$ be an open set of this new surface with $`\partial U^{}=\partial U`$. Since we only consider surfaces within the same homology class there exists a $`(p+1)`$-surface $`D`$ such that $`\partial D=U-U^{}`$. It follows that
$$\mathrm{vol}(U)=\int _{U^{}}\varphi +\int _Dd\varphi .$$
(2)
By property (i) the first term on the right hand side cannot exceed $`\mathrm{vol}(U^{})`$, while the second term vanishes by property (ii). We therefore deduce the inequality
$$\mathrm{vol}(U)\le \mathrm{vol}(U^{}).$$
(3)
The theorem then follows from the fact that this is true for any choice of $`U`$.
Applications of this theorem to branes, pioneered<sup>3</sup><sup>3</sup>3No attempt will be made here to survey the many subsequent applications, except those of direct relevance to D=4 N=1 domain walls and their intersections to be discussed below. in , rely on the fact that, in many cases of interest, the energy of a static p-brane is proportional to the p-volume of its worldspace $`w`$, the constant of proportionality being the p-volume tension $`T`$. There are, however, many cases in which the energy is not simply the p-volume. Obvious examples are D-branes and the M5-brane which have gauge-fields on their worldvolume. The theory of calibrations is still applicable in these cases if (when consistency allows it) one restricts attention to configurations for which worldvolume gauge fields vanish. Discounting worldvolume gauge fields, the bosonic p-brane action takes the universal form
$$S=-T\int _W\left[vol(g)+A\right]$$
(4)
where $`vol(g)`$ is the (p+1)-volume form in the metric $`g`$ induced on the worldvolume $`W`$ swept out by the worldspace $`w`$ in the course of its time evolution, and $`A`$ is a (p+1)-form induced on $`W`$ from a background (p+1)-form. For stationary backgrounds one has a timelike Killing vector field $`k`$ for which $`\mathcal{L}_kF`$ vanishes, where $`F=dA`$ is the (p+2)-form field strength for $`A`$. This Killing vector field generates a symmetry of the p-brane action for which there is a corresponding Noether energy. For static branes<sup>4</sup><sup>4</sup>4See for a discussion of calibrations in relation to non-static branes. in static backgrounds the energy density is
$$\mathcal{E}=T\left[\sqrt{-k^2}\sqrt{\det m}+\mathrm{\Phi }\right]$$
(5)
where $`m`$ is the induced worldspace metric, and $`\mathrm{\Phi }`$ is the worldspace dual of $`i_kA`$ (in a gauge for which $`\mathcal{L}_kA=0`$). In other words, $`\mathrm{\Phi }`$ is an ‘electrostatic’ energy density associated with the background gauge potential $`A`$. If $`\mathrm{\Phi }`$ vanishes then a minimal energy p-brane is a minimal p-surface (in an appropriately rescaled metric unless $`k^2=-1`$). If $`\mathrm{\Phi }`$ does not vanish then a brane of minimal energy is not a minimal surface. However, in this case one can invoke a theory of ‘generalized calibrations’ (not to be confused with generalizations involving non-vanishing worldvolume gauge fields ).
A ‘generalized calibration’ is a p-form satisfying, for all tangent p-planes $`\xi `$,
(i)’ $`\varphi _\xi \le \sqrt{-k^2}vol_\xi `$
(ii)’ $`d\varphi =i_kF`$
For simplicity we shall assume here that $`k^2=-1`$, and refer to for the general case. In this case, property (i)’ reduces to property (i) and the same arguments as before lead to
$$\mathrm{vol}(U)\le \mathrm{vol}(U^{})+\int _Dd\varphi .$$
(6)
In the gauge for which $`\mathcal{L}_kA=0`$ we have $`i_kF=-d(i_kA)`$ and hence, from property (ii)’, $`d\varphi =-d(i_kA)`$. Thus,
$$\int _Dd\varphi =-\int _Ui_kA+\int _{U^{}}i_kA,$$
(7)
and we deduce the generalized bound
$$\mathrm{vol}(U)+\int _Ui_kA\le \mathrm{vol}(U^{})+\int _{U^{}}i_kA.$$
(8)
This is equivalent to
$$E(U)\le E(U^{})$$
(9)
where $`E`$ is the integral of the energy density $`\mathcal{E}`$ of (5), with $`k^2=-1`$. This illustrates the general result of that the contact set of a generalized calibration is a minimal energy p-surface.
Generalized calibrations are needed to study supersymmetric branes in those supergravity backgrounds for which the ‘electrostatic’ energy density $`\mathrm{\Phi }`$ is non-vanishing. In this contribution I will limit myself to supergravity backgrounds for which $`\mathrm{\Phi }`$ vanishes, and for which the background metric is flat, so that $`k^2=-1`$. In such cases only the standard calibrations are needed. I will begin by explaining how, in these simple circumstances, the calibration bound (i) follows from the p-brane supersymmetry algebra; I refer the reader to for the general case. A super p-brane in a vacuum background is invariant under supertranslations of superspace. This invariance implies the existence of spinorial Noether charges $`Q`$, in addition to the energy and momentum. As this symmetry is a rigid one, the charges in any region $`U`$ of the brane are well-defined. When account is taken of the p-form central charge in the spacetime supertranslation algebra , one finds that
$$\{Q,Q\}=\int _U\left[vol\pm \mathrm{\Gamma }_0\mathrm{\Gamma }_{I_1\cdots I_p}dX^{I_1}\cdots dX^{I_p}\right]$$
(10)
where $`vol`$ is the volume p-form in the induced worldspace metric $`m`$, and $`X^I`$ are the n-space coordinates. The sign depends on the orientation of the p-brane in $`\text{𝔼}^n`$. The values of $`p`$ and $`n`$ are restricted by supersymmetry, but these restrictions are those required anyway for applications to M-theory. The values of $`p`$ and $`n`$ are further restricted if we assume that the supercharges $`Q`$ are real. This assumption is not essential but will be made here to simplify the presentation. The matrices $`(\mathrm{\Gamma }_0,\mathrm{\Gamma }_I)`$ are the Dirac matrices of (1+n)-dimensional Minkowski spacetime.
We now introduce a real, commuting, covariantly constant spinor $`ϵ`$, to be called a ‘Killing spinor’, normalized so that
$$ϵ^Tϵ=1.$$
(11)
The number of such spinors will always equal the number of supersymmetry charges $`Q`$. Given such a spinor, (10) implies that
$$\left(Qϵ\right)^2=\int _U\left[vol\pm \varphi \right],$$
(12)
where
$$\varphi =\frac{1}{p!}\left(ϵ^T\mathrm{\Gamma }_0\mathrm{\Gamma }_{I_1\cdots I_p}ϵ\right)dX^{I_1}\cdots dX^{I_p}.$$
(13)
The left hand side of (12) is manifestly positive, and since this is so for any region $`U`$ we must have
$$\varphi _\xi \le vol_\xi $$
(14)
for all $`\xi `$. It is also obvious, since $`ϵ`$ is covariantly constant, that $`d\varphi =0`$. We conclude that $`\varphi `$ is a p-form calibration. As we shall now see, the calibration inequality (14) is saturated by configurations that preserve some fraction of the spacetime supersymmetry.
The calibration just found from considerations of supersymmetry is such that
$$\varphi _\xi =vol_\xi (ϵ^T\mathrm{\Gamma }_\xi ϵ)$$
(15)
where $`\mathrm{\Gamma }_\xi `$ is the matrix
$$\mathrm{\Gamma }=\frac{1}{p!\sqrt{\det m}}\epsilon ^{i_1\cdots i_p}\left(\partial _{i_1}X^{I_1}\cdots \partial _{i_p}X^{I_p}\right)\mathrm{\Gamma }_0\mathrm{\Gamma }_{I_1\cdots I_p},$$
(16)
evaluated at the point to which the p-plane $`\xi `$ is tangent. Given the restrictions on $`p`$ and $`n`$ mentioned previously, it can be shown that
$$\mathrm{\Gamma }^2=1.$$
(17)
The eigenvalues of $`\mathrm{\Gamma }`$ are therefore $`\pm 1`$, and the bound (14) is an immediate consequence of this. As we have already derived this bound, the more relevant point here is that the condition for its saturation is
$$\mathrm{\Gamma }ϵ=ϵ.$$
(18)
This is the key equation for what follows<sup>5</sup><sup>5</sup>5The Lagrangian version of this equation was originally derived in from considerations of ‘kappa-symmetry’. The derivation here follows .. For many simple applications we may choose coordinates for which the Killing spinor $`ϵ`$ is constant. For simplicity, let us suppose that such coordinates have been chosen and that $`ϵ`$ is constant. For a given tangent p-plane $`\xi `$, equation (18) becomes $`\mathrm{\Gamma }_\xi ϵ=ϵ`$, which states that $`ϵ`$ must belong to the $`+1`$ eigenspace of $`\mathrm{\Gamma }_\xi `$; let us call this $`S_\xi ^+`$. This space has a dimension equal to half the number of supersymmetry charges so, locally, the brane preserves 1/2 supersymmetry. This will also be true globally if the brane geometry is planar, but in general $`\xi `$ will depend on position on the brane and hence $`S_\xi ^+`$ will vary with position. In this case the space of solutions of (18) is the intersection of the spaces $`S_\xi ^+`$ for all $`\xi `$. For a generic p-brane this space is the empty set but for special cases there will be a non-empty intersection. Non-planar branes (or intersections of planar branes) for which (18) has at least one solution are calibrated by $`\varphi `$, but will generally preserve less than 1/2 supersymmetry. As we go from one point on the p-surface to another we go from one p-plane $`\xi `$ to another p-plane $`\xi ^{}`$. Correspondingly,
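For $`p=2`$ and $`n=4`$ the content of Eqs. (16)-(18) can be made completely explicit with $`4\times 4`$ matrices. The following sketch (a construction chosen for illustration; a complex representation is used for convenience, whereas the discussion above assumes a real one) builds $`\mathrm{\Gamma }_\xi `$ for a 2-plane and verifies $`\mathrm{\Gamma }^2=1`$ and $`\mathrm{dim}S_\xi ^+=2`$:

```python
import numpy as np

# Explicit check of Gamma^2 = 1 (Eq. (17)) and dim S_xi^+ for p = 2,
# n = 4.  Gamma_0^2 = -1 and Gamma_i^2 = +1 (i = 1..4), realized with
# Pauli-matrix Kronecker products; a complex representation is used
# purely for convenience.
s0 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

g = [np.kron(s1, s1), np.kron(s1, s2), np.kron(s1, s3), np.kron(s2, s0)]
G0 = 1j * np.kron(s3, s0)                    # Gamma_0, squares to -1

def Gamma_plane(u, v):
    """Gamma_xi of Eq. (16) for the plane spanned by orthonormal u, v."""
    Gu = sum(ui * gi for ui, gi in zip(u, g))
    Gv = sum(vi * gi for vi, gi in zip(v, g))
    return G0 @ Gu @ Gv

X = Gamma_plane([1, 0, 0, 0], [0, 1, 0, 0])  # the 1-2 plane: Gamma_012
assert np.allclose(X @ X, np.eye(4))         # Gamma^2 = 1
P = (np.eye(4) + X) / 2                      # projector onto S_xi^+
print(int(round(np.trace(P).real)))          # dim S_xi^+ = 2 (1/2 SUSY)
```

Since $`\mathrm{tr}\mathrm{\Gamma }_\xi =0`$, the $`+1`$ eigenspace is always half-dimensional, in accordance with the local preservation of 1/2 supersymmetry.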
$$\mathrm{\Gamma }_\xi \mathrm{\Gamma }_\xi ^{}=R^1\mathrm{\Gamma }_\xi R,$$
(19)
where $`R`$ is some $`SO(n)`$ rotation matrix in the spinor representation. The p-surface will be a calibrated one only if there exist non-zero solutions $`ϵ`$ to $`Rϵ=ϵ`$, because only in this case will $`S_\xi ^+`$ and $`S_\xi ^{}^+`$ have a non-empty intersection.
In the context of M-theory, $`ϵ`$ is a 32 component Majorana spinor of $`SO(1,10)`$, which is real in a real representation of the Dirac matrices. For static solutions we may consider $`ϵ`$ to be a spinor of $`SO(10)`$, so $`n\le 10`$. Preservation of some non-zero fraction $`\nu `$ of supersymmetry by a static M-brane configuration requires $`R`$ to take values in a subgroup $`G`$ of $`SO(n)\subseteq SO(10)`$ such that the decomposition of the spinor representations of $`SO(n)`$ contained in the spinor of $`SO(10)`$ includes at least one singlet. If the total number of singlets is $`32\nu `$ then the configuration will preserve the fraction $`\nu `$ of supersymmetry; it is ‘$`\nu `$-supersymmetric’. The subgroups of $`SO(10)`$ with the required property are
$`SU(5)\subset SO(10)`$
$`Spin(7)\subset SO(8)`$
$`SU(4)\subset SO(8)`$
$`G_2\subset SO(7)`$
$`SU(3)\subset SO(6)`$
$`SU(2)\subset SO(4)`$ (20)
For each supersymmetric, and hence calibrated, p-surface the tangent planes must parameterize a coset space $`G/H`$, where $`G`$ is the rotation group discussed above and $`H`$ is some stability subgroup. The groups $`G`$ and $`H`$, together with the dimension $`p`$ of the calibrated surface, provide a classification of the calibrations relevant to supersymmetric configurations of M5-branes<sup>6</sup><sup>6</sup>6There are additional cases when one includes M2-branes . See for a comprehensive review.. These are shown in Table 1.
The simplest realization of these possibilities is via orthogonally intersecting M5-branes. Here I will follow the approach of . Consider, for example, two M5-branes intersecting according to the array
$$\begin{array}{cccccccccccc}M5:\hfill & 1& 2& 3& |& 4& 5& & & & & \\ M5:\hfill & 1& 2& 3& |& & & 6& 7& & & \end{array}$$
Each M5-brane determines a tangent p-plane $`\xi `$, and hence an associated matrix $`\mathrm{\Gamma }_\xi `$ with $`+1`$ eigenspace $`S_\xi ^+`$. The intersection of these spaces can be shown, by standard methods, to be 8-dimensional, so the configuration preserves 1/4 supersymmetry and must be a calibrated configuration. The 1-2-3 directions, separated from the others in the above array by the vertical line, are common to both M5-brane worldvolumes. They play an inessential role and may be ignored, as may the transverse 8-9-$`\mathrm{\natural }`$ directions (following , we use ‘$`\mathrm{\natural }`$’ as a convenient single character for ‘ten’), so we effectively have two intersecting 2-branes in $`\text{𝔼}^4`$. The 2-brane in the 6-7 transverse directions may be considered as a ‘solitonic’ deformation of a test brane in the 4-5 directions. From this perspective, the orthogonal intersection of the two 2-branes is a singular limit of a configuration of a single 2-brane, which can be considered as an elliptic curve in $`\text{ℂ}^2`$, asymptotic to the two orthogonal 2-planes in the 4-5 and 6-7 directions. Either plane can be obtained from the other by a discrete $`SU(2)`$ rotation in $`\text{ℂ}^2`$. Such rotations preserve 1/4 supersymmetry . We can desingularize the intersection, while maintaining 1/4 supersymmetry, by allowing a continuous $`SU(2)`$ rotation from one asymptotic 2-plane to another. We then have a non-singular elliptic curve which, because it preserves 1/4 supersymmetry, must be a calibrated 2-surface. It must be calibrated by an $`SU(2)`$ Kähler calibration as this is the only case with the required properties, the Kähler and Special Lagrangian calibrations being equivalent for $`SU(2)`$.
This conclusion can be confirmed directly from the constraints imposed on Killing spinors by two orthogonally intersecting M5-branes . Let us consider here the simpler case of two orthogonally intersecting M2-branes because the three common directions of the two M5-branes are irrelevant to the final result. We take the M2-branes to intersect according to the array
$$\begin{array}{ccccccccccc}M2:\hfill & 1& 2& & & & & & & & \\ M2:\hfill & & & 3& 4& & & & & & \end{array}$$
The calibration form (13) in this case is the 2-form
$$\varphi =\frac{1}{2}\sum _{i,j=1}^4\left(ϵ^T\mathrm{\Gamma }_{0ij}ϵ\right)dx^idx^j.$$
(21)
The constraints imposed on the Killing spinors by the two M2-branes are
$$\mathrm{\Gamma }_{012}ϵ=ϵ,\mathrm{\Gamma }_{034}ϵ=ϵ.$$
(22)
These imply
$$\mathrm{\Gamma }_{12}ϵ=\mathrm{\Gamma }_{34}ϵ,\mathrm{\Gamma }_{13}ϵ=-\mathrm{\Gamma }_{24}ϵ,\mathrm{\Gamma }_{14}ϵ=\mathrm{\Gamma }_{23}ϵ,$$
(23)
and also
$$ϵ^T\mathrm{\Gamma }_{013}ϵ=0,ϵ^T\mathrm{\Gamma }_{014}ϵ=0.$$
(24)
Given the normalization (11) of $`ϵ`$, we then find that
$$\varphi =dX^1dX^2+dX^3dX^4.$$
(25)
Introducing the complex coordinates $`z=X^1+iX^2`$ and $`w=X^3+iX^4`$ we see that $`\varphi `$ is a closed hermitian 2-form on $`\text{𝔼}^4`$; that is, a Kähler 2-form. It is also $`SU(2)`$-invariant with $`(dz,dw)`$ transforming as a complex doublet. The 2-form $`\varphi `$ is therefore an $`SU(2)`$ Kähler calibration.
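The calibration property of this 2-form can also be checked numerically: $`\varphi (u,v)\le 1`$ for every orthonormal pair $`(u,v)`$, with equality precisely on complex lines $`v=Ju`$, where $`J`$ is the complex structure. A small sketch (the sampling scheme is illustrative):

```python
import numpy as np

# Check that phi = dX^1 dX^2 + dX^3 dX^4 (Eq. (25)) is a calibration:
# phi(u, v) <= 1 = vol for all orthonormal pairs (u, v), with equality
# on complex lines v = J u (J = multiplication by i on z, w).
rng = np.random.default_rng(0)

def phi(u, v):
    return u[0]*v[1] - u[1]*v[0] + u[2]*v[3] - u[3]*v[2]

vals = []
for _ in range(2000):
    u = rng.normal(size=4); u /= np.linalg.norm(u)
    v = rng.normal(size=4); v -= (v @ u) * u; v /= np.linalg.norm(v)
    vals.append(phi(u, v))
assert max(vals) <= 1 + 1e-12                 # calibration inequality

J = np.array([[0., -1, 0, 0], [1, 0, 0, 0],   # complex structure:
              [0, 0, 0, -1], [0, 0, 1, 0]])   # J e1 = e2, J e3 = e4
u = rng.normal(size=4); u /= np.linalg.norm(u)
print(phi(u, J @ u))                          # saturates the bound
```

This is just Wirtinger’s inequality for the Kähler form, specialized to 2-planes in $`\text{𝔼}^4`$.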
The same conclusion can be reached for M5-branes via a similar analysis of the constraints imposed on Killing spinors (after factoring out from the 5-form of (13) the volume 3-form of the three common directions). But these constraints apply to a much wider class of configurations than orthogonally intersecting M5-branes. In particular, they apply to any 1/4 supersymmetric configuration of a single M5-brane that is asymptotic to the configuration of orthogonally intersecting M5-branes. The asymptotic planes may also be rotated relative to each other (see for a discussion of calibrations in this context). One may consider the array as a shorthand for the entire collection of such configurations. Pushing this interpretation to the extreme, one may regard the array as nothing more than a tabular representation of the constraints (22), so that it may be considered to represent any configuration yielding these constraints regardless of the asymptotic behaviour. If an interpretation this liberal is adopted then the geometry of the array may suggest results that are spurious in particular contexts, but one may still hope to capture general features common to all cases. We shall return to this point below when discussing the associative and Cayley calibrations, but we first need to consider a simpler case with 1/8 supersymmetry.
Consider three M5-branes intersecting according to the array
$$\begin{array}{cccccccccccc}M5:\hfill & 1& 2& 3& |& 4& 5& & & & & \\ M5:\hfill & 1& 2& 3& |& & & 6& 7& & & \\ M5:\hfill & 1& 2& 3& |& & & & & 8& 9& \end{array}$$
Each M5-brane determines a tangent p-plane $`\xi `$, and hence an associated matrix $`\mathrm{\Gamma }_\xi `$ with $`+1`$ eigenspace $`S_\xi ^+`$. The intersection of these spaces can be shown, by standard methods, to be 4-dimensional, so the configuration preserves 1/8 supersymmetry and must be a calibrated one. The 1-2-3 directions, separated from the others in the above array by the vertical line, are common to all M5-brane worldvolumes. They play an inessential role and may be ignored, as may the transverse tenth direction, so we effectively have three intersecting 2-branes in $`\text{𝔼}^6`$. The 2-branes in the 6-7 and 8-9 transverse directions may be considered, as before, as ‘solitonic’ deformations of a test brane in the 4-5 directions. In this case the orthogonal intersection of the three 2-branes can be considered a singular limit of a configuration of a single 2-brane ‘wrapped’ on an elliptic curve in $`\text{ℂ}^3`$, and asymptotic to the three orthogonal 2-planes in the 4-5, 6-7, and 8-9 directions. Each of these planes can be obtained from any of the other two by a discrete $`SU(3)`$ rotation in $`\text{ℂ}^3`$. Such rotations preserve 1/8 supersymmetry . We can desingularize the intersection, while maintaining 1/8 supersymmetry, by allowing a continuous $`SU(3)`$ rotation from one asymptotic 2-plane to another. We then have a non-singular elliptic curve which, because it preserves 1/8 supersymmetry, must be a calibrated 2-surface. It must be calibrated by an $`SU(3)`$ Kähler calibration as this is the only candidate with the required properties. This can be verified as before from the constraints imposed on Killing spinors by the intersecting M5-brane configuration.
In any configuration of intersecting branes there will generally be zero modes trapped on the intersection and these will govern its low energy dynamics. In the above example the intersection is 3-dimensional so the zero modes trapped on the intersection yield a (3+1)-dimensional quantum field theory. Since the configuration preserves 1/8 of the supersymmetry of the M-theory vacuum, the intersection field theory has a total of four supersymmetries, transforming as a real spinor of $`SO(1,3)`$. In other words, the low energy intersection dynamics is governed by a D=4 N=1 SQFT. If the intersection is desingularized so as to describe a single non-singular M5-brane then this SQFT will be determined by the M5-brane’s effective action. The SQFT obtained in this way from three orthogonally-intersecting M5-branes is not of particular interest in itself, but any minimal energy M5-brane wrapping a Riemann surface in a flat M-theory vacuum is also calibrated by an $`SU(3)`$ Kähler calibration, and various SQFTs can be thus obtained. A particularly interesting example in which an M5-brane wraps a Riemann surface in the $`S^1`$-compactified M-theory vacuum was identified in as a theory, now called ‘MQCD’, with properties similar to that of SQCD.
An example of an associative calibration is provided by four M5-branes intersecting according to the array
$$\begin{array}{cccccccccccc}M5:\hfill & 1& 2& |& 3& 4& 5& & & & & \\ M5:\hfill & 1& 2& |& 3& & & 6& 7& & & \\ M5:\hfill & 1& 2& |& 3& & & & & 8& 9& \\ M5:\hfill & 1& 2& |& & 4& & 6& & 8& & \end{array}$$
Ignoring the common directions, and the transverse tenth direction, and interpreting the last three rows as ‘solitonic’ deformations of a test 3-brane in the 3-4-5 directions, we conclude that we have a calibrated 3-surface in $`\text{𝔼}^7`$. Since this configuration preserves 1/16 supersymmetry, it must be calibrated by a 3-form, and the associative 3-form calibration is the only candidate. We might wish to relate this to a D=3 SQFT, but there is an alternative application. Noting that this array can be obtained from the previous one by the addition of a fourth M5-brane, we retabulate it as
$$\begin{array}{cccccccccccc}M5:\hfill & 1& 2& 3& |& 4& 5& & & & & \\ M5:\hfill & 1& 2& 3& |& & & 6& 7& & & \\ M5:\hfill & 1& 2& 3& |& & & & & 8& 9& \\ & & & & |& & & & & & & \\ M5:\hfill & 1& 2& & |& 4& & 6& & 8& & \end{array}$$
If the first three M5-branes are interpreted as supplying a D=4 N=1 SQFT with coordinates $`(x^0,x^1,x^2,x^3)`$ then the array suggests that we interpret the fourth M5-brane as a domain wall in this theory. The 1/16 supersymmetry of associative calibrations of an M5-brane translates to 1/2 supersymmetry of the D=4 N=1 Minkowski vacuum; in other words, the domain wall is a 1/2 supersymmetric ‘BPS’ wall. This is a simple analogue of Witten’s identification of associative calibrations of the M5-brane of MQCD as 1/2 supersymmetric domain walls (see also ). It has been known for some time that the WZ model admits 1/2 supersymmetric domain walls for an appropriate superpotential (the solutions having been studied originally as solitons of the dimensionally-reduced N=2 D=2 SQFT ). They also appear in $`SU(n)`$ SQCD with $`n\ge 2`$ because the low energy effective action includes a WZ action with a superpotential admitting $`n`$ isolated critical points and hence $`n`$ degenerate vacua . Minimal energy configurations that interpolate from one vacuum to another are 1/2 supersymmetric domain walls. It should therefore be no surprise that MQCD admits similar domain walls, although their geometrical nature (and the fact that they are D-branes ) is remarkable.
It has recently been appreciated that the domain walls of WZ models, and hence of SQCD, may intersect at junctions, the configuration as a whole preserving 1/4 supersymmetry . The junction of two domain walls is 1-dimensional. Let $`z`$ be a complex coordinate for the plane orthogonal to this direction. The 1/4 supersymmetric domain wall junctions of the WZ model are configurations $`\psi (z)`$ of a complex scalar field satisfying
$$\frac{d\psi }{dz}=\overline{W^{\prime }(\psi )},$$
(26)
where the ‘superpotential’ $`W`$ is a holomorphic function of $`\psi `$. The 1/2 supersymmetric domain walls themselves are special solutions of this equation with translational symmetry along one direction in the $`z`$-plane, but the generic solution is 1/4 supersymmetric. In the case that
$$W^{\prime }=1-\psi ^3,$$
(27)
corresponding to a quartic superpotential, there are three vacua with $`\psi =(1,\omega ,\omega ^2)`$ where $`\omega `$ is a cube-root of unity. In this case we expect a $`\text{ℤ}_3`$-symmetric junction, as shown in Fig. 1.
Although no exact solution to (26) of this type is known, numerical studies suggest that one exists and the existence of a $`\text{ℤ}_3`$-invariant minimum energy domain wall junction solution to the second order WZ equations with superpotential (27) has been proved . In addition, an exact solution representing a domain-wall junction has recently been found in a more complicated, but related, model . It seems clear from these results that 1/4 supersymmetric domain wall junctions are generic to D=4 N=1 SQFTs. In particular they are expected in $`SU(n)`$ SQFT for $`n\ge 3`$ .
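The first-order structure of (26) is easy to explore numerically. Along a one-dimensional reduction $`\psi =\psi (x)`$ with $`d\psi /dx=\overline{W^{\prime }(\psi )}`$, one has $`dW/dx=|W^{\prime }|^2`$, so Im W is conserved and Re W increases monotonically along the flow. The sketch below (our own forward-Euler integration, with an arbitrary starting point and illustrative step size; it is not the junction solution itself) checks these properties for the superpotential (27), whose vacua are the cube roots of unity.

```python
import numpy as np

def Wp(psi):                 # W'(psi) = 1 - psi**3, Eq. (27)
    return 1.0 - psi**3

def W(psi):                  # a corresponding quartic superpotential
    return psi - psi**4 / 4.0

# the three supersymmetric vacua, W'(psi) = 0, are the cube roots of unity
vacua = np.roots([1.0, 0.0, 0.0, -1.0])

def flow(psi, dx=1e-4, steps=4000):
    """Forward-Euler integration of d(psi)/dx = conj(W'(psi))."""
    path = [psi]
    for _ in range(steps):
        psi = psi + dx * np.conj(Wp(psi))
        if abs(psi) > 2.0:   # flow lines that miss a vacuum run off to infinity
            break
        path.append(psi)
    return np.array(path)

path = flow(0.2 + 0.1j)      # an arbitrary starting point
Wvals = W(path)
print("largest |W'| at the vacua:", np.abs(Wp(vacua)).max())
print("spread of Im W along the flow:", np.ptp(Wvals.imag))
print("Re W monotone increasing:", bool(np.all(np.diff(Wvals.real) >= 0.0)))
```

The 1/2 supersymmetric walls are the flow lines that interpolate between two vacua along a fixed contour of Im W; generic flow lines, as here, merely illustrate the conservation law.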
This raises an obvious question. Does MQCD have 1/4 supersymmetric domain wall junctions and, if so, what is their geometrical realization? A suggestion of Gauntlett, which will be explored in a forthcoming article , is that domain wall junctions of MQCD are Cayley calibrations of the MQCD M5-brane. This can be motivated by considering the realization of Cayley calibrations as five M5-branes intersecting orthogonally according to the array
$$\begin{array}{cccccccccccc}M5:\hfill & 1& 2& 3& |& 4& 5& & & & & \\ M5:\hfill & 1& 2& 3& |& & & 6& 7& & & \\ M5:\hfill & 1& 2& 3& |& & & & & 8& 9& \\ & & & & |& & & & & & & \\ M5:\hfill & 1& 2& & |& 4& & 6& & 8& & \\ M5:\hfill & 1& & 3& |& 4& & 6& & & 9& \end{array}$$
This configuration preserves 1/32 supersymmetry, corresponding to 1/4 supersymmetry of the D=4 N=1 vacuum defined by the first three M5-branes. If the last two M5-branes are viewed as excitations about this vacuum then it is apparent from the array that they can be interpreted as intersecting domain walls.
Acknowledgements: I thank Jerome Gauntlett, Gary Gibbons, Jan Gutowski and George Papadopoulos for their collaboration on work reported here, and for helpful discussions. |
# Role of f electrons in rare-earth and uranium intermetallics - an alternative look at heavy-fermion phenomena
The origin of the large specific heat and the non-magnetic state observed at low temperatures in some f intermetallic compounds with uranium, called heavy-fermion (h-f) compounds, is discussed. Different existing theoretical models are briefly overviewed, but it will be proposed to discuss the h-f compounds in terms of physical concepts worked out for rare-earth intermetallics . To recall, the magnetic and electronic properties of rare-earth intermetallics are understood by considering a few, two in the simplest but quite adequate approach, electronic subsystems, i.e. the f electronic subsystem and the conduction-electron subsystem (the individualized-electron model). These two subsystems are described by essentially different theoretical approaches referring to localized and band magnetism.
In the discussion of whether the non-magnetic state observed in h-f compounds refers to the local (single-ion) scale or to a collective many-body state, some arguments will be given for the on-site effect. Namely, it can be rigorously proven that charge interactions via the Stark effect can produce the non-magnetic state of the localized $`f^n`$ electronic subsystem also in the case of a Kramers system (n is an odd number) . The full suppression of the local moment is attained by a highly anisotropic charge distribution in the vicinity of the f-shell electrons. This highly anisotropic charge distribution is visualized by CEF parameters with significant values for the higher-order terms. The charge mechanism for the formation of the non-magnetic state of the f magnetic ion is compared with other known mechanisms like the Kondo-compensation mechanism (the spin type) and the mechanism of hybridization of f and conduction electrons.
In view of the individualized-electron model, the large specific heat originates from low-energy excitations between doublet levels of the Kramers state of the $`f^n`$ electronic subsystem that are slightly split due to exchange interactions. These excitations are many-electron excitations, in contrast to single-electron excitations in the conduction-electron subsystem. It will be shown that magnetic and electronic properties of intermetallic systems with the f-electronic subsystem in a quasi-nonmagnetic Kramers state exhibit the properties observed in h-f compounds. One can say that the h-f compounds are compounds with Kramers f ions that have difficulties, due to the exotic ground state and the weakness of exchange interactions, in forming a well-established magnetic order. However, the system has to release the Kramers entropy before reaching zero temperature, as is experimentally observed by the entropy of R ln2.
These phenomena will be discussed for some uranium h-f compounds with hexagonal symmetry. For instance, the temperature dependence of the specific heat of UPd<sub>2</sub>Al<sub>3</sub>, with a $`\lambda `$-type peak at T<sub>N</sub> of 14 K and a Schottky-type peak above T<sub>N</sub>, has been very well reproduced by the U<sup>3+</sup> (5$`f`$ <sup>3</sup>) configuration .
Presented at the European Conference Physics of Magnetism 93, 21-24 June 1993, Poznań, Poland (Committee: J. Morkowski, R. Micnas, S. Krompiewski)
# Chaos in FRW cosmology with gently sloping scalar field potentials
## 1 Introduction
In the last few years the chaotic regime in the dynamics of a closed FRW universe filled with a scalar field has become a subject of investigation. Initially, the model with a massive scalar field (with the scalar field potential $`V(\phi )=(m^2\phi ^2)/2`$, where $`m`$ is the mass of the scalar field) was studied . Before summarizing the main results obtained, we present the equations of motion (for later use we will not specify the potential $`V(\phi )`$). The system has two dynamical variables - the scale factor $`a`$ and the scalar field $`\phi `$:
$$\frac{m_P^2}{16\pi }\left(\ddot{a}+\frac{\dot{a}^2}{2a}+\frac{1}{2a}\right)+\frac{a\dot{\phi }^2}{8}-\frac{aV(\phi )}{4}=0,$$
(1)
$$\ddot{\phi }+\frac{3\dot{\phi }\dot{a}}{a}+V^{}(\phi )=0.$$
(2)
with the first integral
$$-\frac{3}{8\pi }m_P^2(\dot{a}^2+1)+\frac{a^2}{2}\left(\dot{\phi }^2+2V(\phi )\right)=0.$$
(3)
Here $`m_P`$ is the Planck mass.
The points of maximal expansion and those of minimal contraction, i.e. the points where $`\dot{a}=0`$, can exist only in the region where
$$a^2\le \frac{3}{8\pi }\frac{m_P^2}{V(\phi )},$$
(4)
Sometimes, the region defined by inequalities (4) is called the Euclidean one. One can easily see also that the possible points of maximal expansion (where $`\dot{a}=0`$, $`\ddot{a}<0`$) are localized inside the region
$$a^2\le \frac{1}{4\pi }\frac{m_P^2}{V(\phi )}$$
(5)
while the possible points of minimal contraction (where $`\dot{a}=0`$, $`\ddot{a}>0`$) lie outside this region (5) being at the same time inside the Euclidean region (4).
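These inclusions follow from Eq. (1) by eliminating $`\dot{\phi }^2`$ at a turning point using the constraint. A small spot-check (our own sketch, written in the units $`m_P/\sqrt{16\pi }=1`$ adopted later in the paper, so that $`m_P^2/4\pi =4`$, $`3m_P^2/8\pi =6`$, and the constraint gives $`\dot{\phi }^2=12/a^2-2V`$ at a turning point):

```python
import numpy as np

rng = np.random.default_rng(1)

def addot_at_turning(a, V):
    # Eq. (1) at a turning point (adot = 0), with phidot^2 eliminated via the
    # constraint, phidot^2 = 12/a**2 - 2*V, in units with m_P**2 = 16*pi
    phidot2 = 12.0 / a**2 - 2.0 * V
    return -1.0 / (2.0 * a) - a * phidot2 / 8.0 + a * V / 4.0

for _ in range(5):
    V = rng.uniform(0.1, 2.0)
    a = rng.uniform(0.1, 1.0) * np.sqrt(6.0 / V)   # inside the Euclidean region (4)
    inside_5 = a**2 < 4.0 / V                       # region (5): m_P**2/(4*pi*V) = 4/V
    print(inside_5, addot_at_turning(a, V) < 0.0)   # maximal expansion iff inside (5)
```

The two printed booleans agree for every sample: turning points inside region (5) have $`\ddot{a}<0`$ (maximal expansion), those outside have $`\ddot{a}>0`$ (minimal contraction).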
The main idea of the further analysis consists in the fact that in the closed isotropic model with a minimally coupled scalar field satisfying the energy dominance condition all the trajectories have a point of maximal expansion. Then the trajectories can be classified according to the localization of their points of maximal expansion. The area of such points is specified by (5). A numerical investigation shows that this area has a quasi-periodical structure, wide zones corresponding to falling to the singularity being intermingled with narrow ones in which the points of maximal expansion of trajectories having a so-called “bounce”, or point of minimal contraction, are placed. Then, studying the substructure of these zones from the point of view of the possibility to have two bounces, one can see that this substructure reproduces on the qualitative level the structure of the whole region of possible points of maximal expansion. Continuing this procedure ad infinitum yields the fractal set of infinitely bouncing trajectories.
It should be noticed that even the 1-st order bounce intervals (containing maximum expansion points for trajectories having at least one bounce) are very narrow. An analytical approximation for large initial $`a`$ indicates that the width of the intervals is roughly inversely proportional to $`a`$ . The opposite case of small initial $`a`$ was investigated numerically, and the ratio of the first such interval width to the distance between intervals appears to be of the order of $`10^{-2}`$, if we do not take into account zigzag-like trajectories. So, the chaotic regime, though being interesting from the mathematical point of view, may be treated as not important enough.
For steeper potentials the chaos is even less significant. The chaotic behavior may disappear completely for exponentially steep potentials .
The goal of the present paper is to describe the opposite case - potentials which are less steep than the quadratic one. We will see that in this case a transition to a qualitatively stronger chaos may occur.
The structure of the paper is as follows. In Sec. 2 we consider an asymptotically flat potential and explain the new features of the chaos which arise in this case. In Sec. 3 a wider class of potentials less steep than quadratic is studied. In Sec. 4 we discuss the transition to regular dynamics in the presence of ordinary matter in addition to the scalar field for the potentials under consideration.
## 2 Asymptotically flat potentials and the merging of bounce intervals
We will use the units in which $`m_P/\sqrt{16\pi }=1`$ for presenting our numerical results, because in these units most of the interesting events occur for the range of parameters of the order of unity.
We start with the potential
$$V(\phi )=M_0^4\left(1-\mathrm{exp}\left(-\frac{\phi ^2}{\phi _0^2}\right)\right),$$
(6)
where $`M_0`$ and $`\phi _0`$ are parameters. $`M_0`$ determines the asymptotic value of the potential for $`\phi \to \pm \mathrm{\infty }`$.
It can easily be checked from the equations of motion that multiplying the potential by a constant (i.e. changing $`M_0`$) leads only to a rescaling of $`a`$. So, this procedure does not change the chaotic properties of our dynamical system. On the contrary, the system appears to be very sensitive to the value of $`\phi _0`$. We plotted in Fig. 1 the $`\phi =0`$ cross-section of the bounce intervals depending on $`\phi _0`$. This plot represents a situation qualitatively different from those studied previously for potentials like $`V\propto \phi ^2`$ and steeper. Namely, the bounce intervals can merge.
Let us see more precisely what this means. For $`\phi _0>0.82`$ the picture is qualitatively the same as for a massive scalar field - trajectories from the 1-st interval have a bounce with no $`\phi `$-turns before it, trajectories which have their initial point of maximal expansion between the 1-st and 2-nd intervals fall into a singularity after one $`\phi `$-turn, those from the 2-nd interval have a bounce after 1 $`\phi `$-turn, and so on. For $`\phi _0`$ a bit smaller than the first merging value the 2-nd interval contains trajectories with 2 $`\phi `$-turns before the bounce, and the space between the 1-st interval (which is now the product of two merged intervals) and the 2-nd one contains trajectories falling into a singularity after two $`\phi `$-turns. There are no trajectories going to a singularity with exactly one $`\phi `$-turn. Trajectories from the 1-st interval can now experience a complicated chaotic behavior which cannot be described in a way similar to the above.
With $`\phi _0`$ decreasing further, the process of interval merging continues, leading to a growing chaotization of trajectories. When $`n`$ intervals have merged together, only trajectories with at least $`n`$ oscillations of the scalar field before falling into a singularity are possible. Those having exactly $`n`$ $`\phi `$-turns have their initial point of maximal expansion between the 1-st bounce interval and the 2-nd one (which now contains trajectories having a bounce after $`n`$ $`\phi `$-turns). For initial values of the scale factor larger than those from the 2-nd interval, the regular quasiperiodic structure described above is restored.
Numerical analysis also shows that the fraction of very chaotic trajectories as a function of $`\phi _0`$ grows rapidly with $`\phi _0`$ decreasing below the first merging value. To illustrate this point we plotted in Fig. 2 the number of trajectories which do not fall into a singularity during the first $`50`$ oscillations of the scalar field $`\phi `$. We do not include trajectories with the next point of maximal expansion located outside the 2-nd (or the 1-st, if merging occurred) interval, so all counted trajectories avoid a singularity during this sufficiently long time interval due to their extreme chaoticity, and not due to reaching the slow-roll regime. The initial value of $`a`$ varies over the range of the first two intervals before and after merging with the step $`0.002`$. Before merging, the measure of such chaotic trajectories is extremely low and they are indistinguishable on our grid. When $`\phi _0`$ becomes slightly lower than the value of the first merging, this number begins to grow rather rapidly, and for $`\phi _0`$ near $`0.6`$ about $`10\%`$ of the trajectories from the 1-st interval experience at least 50 oscillations before falling into a singularity.
We recall that for a simple massive scalar field potential only a fraction of about $`10^{-2}`$ of the trajectories in the same range of initial scale factors have at least one bounce. The fraction of trajectories not falling into a singularity after only one bounce is about one hundred times smaller, and so on. The common numerical calculation accuracy is insufficient for distinguishing even a sole trajectory with 50 oscillations and $`a`$ in the range of the first two intervals.
In contrast to this, the chaos for the potential (6) is really significant. Details of the interval merging, including the description outside the $`\phi =0`$ cross-section, require further analysis.
For large initial $`a`$ the configuration of bounce intervals for the potential (6) looks like the configuration for a massive scalar field potential with the effective mass easily derived from (6): $`m_{eff}=(\sqrt{2}M_0^2)/\phi _0`$. The periods of the corresponding structures coincide with good accuracy, though the widths of the intervals for the potential (6) are bigger than for $`V=(m_{eff}^2\phi ^2)/2`$.
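As a rough illustration of how such trajectories can be generated, the following sketch (our own; initial data and parameters are illustrative) integrates Eqs. (1)-(2) for the potential (6) in the units $`m_P/\sqrt{16\pi }=1`$. The start is a point of maximal expansion, with $`\dot{\phi }`$ fixed by the constraint (3), and the reduced acceleration equation $`\ddot{a}=a(V-\dot{\phi }^2)/6`$ follows from combining (1) and (3) in these units.

```python
import numpy as np

M0, phi0 = 1.0, 1.0        # illustrative parameters; units with m_P/sqrt(16*pi) = 1

def V(phi):                # the asymptotically flat potential (6)
    return M0**4 * (1.0 - np.exp(-(phi / phi0) ** 2))

def dV(phi):
    return M0**4 * (2.0 * phi / phi0**2) * np.exp(-(phi / phi0) ** 2)

def rhs(y):
    a, adot, phi, phidot = y
    addot = a * (V(phi) - phidot**2) / 6.0            # from combining (1) and (3)
    phiddot = -3.0 * phidot * adot / a - dV(phi)      # Eq. (2)
    return np.array([adot, addot, phidot, phiddot])

def constraint(y):         # first integral (3); vanishes along exact solutions
    a, adot, phi, phidot = y
    return -6.0 * (adot**2 + 1.0) + 0.5 * a**2 * (phidot**2 + 2.0 * V(phi))

def max_expansion_state(a0, phi_init):
    # a turning point adot = 0 with phidot fixed by the constraint;
    # this requires a0 to lie inside the Euclidean region (4)
    phidot2 = 12.0 / a0**2 - 2.0 * V(phi_init)
    if phidot2 < 0.0:
        raise ValueError("initial point outside the Euclidean region")
    return np.array([a0, 0.0, phi_init, np.sqrt(phidot2)])

def evolve(y, dt=1e-4, tmax=1.0, a_min=0.5):
    """Fixed-step RK4; stops near the singularity (a < a_min)."""
    drift, t = 0.0, 0.0
    while t < tmax and y[0] > a_min:
        k1 = rhs(y)
        k2 = rhs(y + 0.5 * dt * k1)
        k3 = rhs(y + 0.5 * dt * k2)
        k4 = rhs(y + dt * k3)
        y = y + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        drift = max(drift, abs(constraint(y)))
        t += dt
    return y, drift

y0 = max_expansion_state(a0=2.0, phi_init=0.0)
yf, drift = evolve(y0)
print("constraint at start:", constraint(y0))
print("a: %.3f -> %.3f, constraint drift: %.2e" % (y0[0], yf[0], drift))
```

Classifying initial points by the number of scalar-field oscillations before collapse, as in the discussion above, amounts to scanning $`a_0`$ and $`\phi `$ over the region (5) with this kind of integrator and monitoring sign changes of $`\dot{a}`$.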
## 3 Damour-Mukhanov potentials
The very chaotic regime described above is possible also for potentials which are not asymptotically flat, if the potential growth is slow enough. We will illustrate this point by describing a particular (but rather wide) family of potentials having power-law behavior - the Damour-Mukhanov potentials . They were originally introduced to show the possibility of inflationary behaviour without a slow-roll regime. Later, various issues concerning inflationary dynamics and the growth of perturbations for this kind of scalar field potential were studied.
The explicit form of the Damour-Mukhanov potential is
$$V(\phi )=\frac{M_0^4}{q}\left[\left(1+\frac{\phi ^2}{\phi _0^2}\right)^{q/2}-1\right],$$
(7)
with three parameters: $`M_0`$, $`q`$ and $`\phi _0`$.
For $`\phi \ll \phi _0`$ the potential looks like the massive one with the effective mass $`m_{eff}=M_0^2/\phi _0`$. In the opposite case of large $`\phi `$ it grows like $`\phi ^q`$.
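The small-$`\phi `$ limit can be checked with a quick finite difference of (7) at $`\phi =0`$ (a sketch with illustrative parameter values; the names below are our own):

```python
import numpy as np

def V_dm(phi, M0=1.0, q=1.0, phi0=1.0):
    # Damour-Mukhanov potential, Eq. (7)
    return (M0**4 / q) * ((1.0 + (phi / phi0) ** 2) ** (q / 2.0) - 1.0)

def m_eff(M0=1.0, phi0=1.0):
    return M0**2 / phi0            # effective mass quoted in the text

h = 1e-3                            # central second difference of V at phi = 0
for q in (0.5, 1.0, 1.24, 2.0):
    Vpp = (V_dm(h, q=q) - 2.0 * V_dm(0.0, q=q) + V_dm(-h, q=q)) / h**2
    print(q, Vpp, m_eff() ** 2)
```

For every $`q`$ the numerical $`V^{\prime \prime }(0)`$ agrees with $`m_{eff}^2`$, and for $`q=2`$ the potential is exactly the massive one.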
As in the previous section, the chaotic behavior does not depend on $`M_0`$. So, we have a two-parameter family of potentials with different chaotic properties. Numerical studies of the possibility of bounce interval merging show the following picture (see Fig. 3): for a rather wide range of $`q`$ there exists a corresponding critical value of $`\phi _0`$ such that for $`\phi _0`$ less than the critical one, the very chaotic regime exists. Increasing $`q`$ corresponds to decreasing the critical $`\phi _0`$.
Surely, since this regime is absent for quadratic and steeper potentials, $`q`$ must at least be less than $`2`$. We can see the very chaotic regime clearly for $`q<1.24`$. The case $`q=1.24`$ leads to strong chaos for $`\phi _0<1.4\times 10^{-5}`$, and the critical $`\phi _0`$ decreases with increasing $`q`$ very sharply at this point. We did not investigate these extremely small values of $`\phi _0`$ further, because the physical meaning of such a potential is very doubtful.
## 4 The influence of hydrodynamical matter
In this section we add a perfect fluid with the equation of state $`P=\gamma ϵ`$. The equations of motion are now
$$\frac{m_P^2}{16\pi }\left(\ddot{a}+\frac{\dot{a}^2}{2a}+\frac{1}{2a}\right)+\frac{a\dot{\phi }^2}{8}-\frac{aV(\phi )}{4}-\frac{Q}{12a^{p+1}}(1-p)=0$$
(8)
$$\ddot{\phi }+\frac{3\dot{\phi }\dot{a}}{a}+V^{}(\phi )=0.$$
(9)
with the constraint
$$-\frac{3}{8\pi }m_P^2(\dot{a}^2+1)+\frac{a^2}{2}\left(\dot{\phi }^2+2V(\phi )\right)+\frac{Q}{a^p}=0.$$
(10)
Here $`p=1+3\gamma `$, $`Q`$ is a constant from the equation of motion for matter which can be integrated in the form
$$Ea^{p+2}=Q=const.$$
(11)
In Ref. it was shown that the addition of hydrodynamical matter to the scalar field with the potential $`V(\phi )=m^2\phi ^2/2`$ can kill the chaos. Here we extend this analysis to less steep potentials. Some of our results are illustrated in Fig. 4. It is interesting that increasing $`Q`$ acts like increasing $`\phi _0`$. In Fig. 4 three intervals are merged at $`Q=0`$. When $`Q`$ increases, the 3-d and 2-nd intervals consecutively separate and we return to the chaos typical for $`V(\phi )=m^2\phi ^2/2`$. With a further increase of $`Q`$ the chaos disappears in the way discussed in .
The value of $`Q`$ corresponding to the disappearance of the chaos is in general bigger for less steep potentials with the same effective mass . In Fig. 5(a) these values are plotted for Damour-Mukhanov potentials and the perfect fluid with $`\gamma =0`$ ($`p=1`$). This particular $`\gamma `$ is chosen for a mathematical reason. Namely, it can be seen from (8)-(10) that in this case only the constraint equation is changed in comparison with the initial system (1)-(3). In other words, the dynamical equations describing the FRW universe with a scalar field and dust matter are formally equivalent to those for the scalar field only, but with a nonzero value of the conserved energy. So, our figure describes not only the physical system under consideration, but also general mathematical properties of (1)-(2).
We recall that for the case $`V(\phi )=m^2\phi ^2/2`$ (which is equivalent to the Damour-Mukhanov potential with $`q=2`$; the corresponding mass is equal to the effective one, $`m_{eff}=M_0^2/\phi _0`$) the chaos disappears for $`Qm>0.023m_P^2`$ . To compare with this value, we plotted in Fig. 5(a) the values of $`Qm_{eff}`$ leading to the cessation of the chaos with respect to $`\phi _0`$ for several $`q`$. In the units we use, the case $`q=2`$ corresponds to the horizontal line $`Qm_{eff}=1.15`$. All other curves have this value as an asymptote for large $`\phi _0`$. With decreasing $`\phi _0`$ the value $`Qm_{eff}`$ increases, at a rate which is bigger for less steep potentials.
In Fig. 5(b) the analogous curve is plotted for the asymptotically flat potential (6). The value of $`Qm_{eff}`$ for large $`\phi _0`$ is the same. For small $`\phi _0`$ we can estimate the $`Q`$ killing the chaos as the value corresponding to the disappearance of the Euclidean region for a flat potential $`V(\phi )=M_0^4`$. It can easily be obtained from (4) that in this case the Euclidean region disappears at
$$Q=\frac{1}{8\sqrt{2\pi ^3}}\frac{m_P^3}{M_0^2}$$
and for bigger $`Q`$ any bounce becomes impossible. As the potential (6) differs significantly from the flat one only for $`\phi `$ less than $`\phi _0`$, this approximation appears to be good enough for small $`\phi _0`$. In our units it corresponds to the curve $`Qm_{eff}=8/\phi _0`$. Intermediate values of $`\phi _0`$ represent a smooth transition between these two asymptotic behaviors.
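These two statements can be cross-checked directly. For the dust case $`\gamma =0`$, the Euclidean region for the flat potential exists while $`a^2M_0^4+Q/a`$ can stay below $`3m_P^2/8\pi `$; minimizing over $`a`$ and equating the minimum to that bound reproduces the critical $`Q`$ above, and multiplying by $`m_{eff}=\sqrt{2}M_0^2/\phi _0`$ gives $`8/\phi _0`$ in the units $`m_P/\sqrt{16\pi }=1`$. A sketch (function names are our own):

```python
import math

m_P = math.sqrt(16.0 * math.pi)            # units with m_P / sqrt(16*pi) = 1

def Q_crit(M0):                             # Q at which the Euclidean region vanishes
    return m_P**3 / (8.0 * math.sqrt(2.0 * math.pi**3) * M0**2)

def m_eff(M0, phi0):                        # effective mass of the potential (6)
    return math.sqrt(2.0) * M0**2 / phi0

for M0, phi0 in [(0.5, 0.5), (1.0, 1.0), (2.0, 0.3)]:
    Q = Q_crit(M0)
    a_star = (Q / (2.0 * M0**4)) ** (1.0 / 3.0)   # minimizes a**2*M0**4 + Q/a
    f_min = a_star**2 * M0**4 + Q / a_star
    print(f_min, 3.0 * m_P**2 / (8.0 * math.pi),   # both equal 6 in these units
          Q * m_eff(M0, phi0), 8.0 / phi0)
```

The minimum of $`a^2M_0^4+Q/a`$ at the critical $`Q`$ equals $`3m_P^2/8\pi =6`$ independently of $`M_0`$, and $`Q_{crit}m_{eff}=8/\phi _0`$ for any $`M_0`$, as claimed.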
## Acknowledgments
This work was supported by Russian Basic Research Foundation via grant No 99-02-16224. |
# Two-pearl Strings: Feynman's Oscillators
## I Introduction
Physicists are fond of building strings. In classical mechanics, we start with a discrete set of particles joined together with a finite distance between two neighboring particles, like a pearl necklace. We then take the limit of zero distance and infinite number of particles, resulting in a continuous string. This is how we construct classical field theory and then extend it to quantum field theory in the Lagrangian formalism. In this paper, we consider the opposite limit by dropping all the particles except two.
In order to gain an insight into what we intend to do in this report, let us recall an example from history. Debye’s treatment of specific heat is a classic example. Einstein’s oscillator model of specific heat is a simplified case of the Debye model in the sense that it consists only of two pearls. The Einstein model does not give an accurate description of the specific heat in the zero-temperature limit, but it is accurate enough everywhere else to be covered in textbooks. The basic strength of the oscillator model is its mathematical simplicity. It produces numbers and curves which can be checked experimentally, without requiring too much mathematical labor.
While one of the main purposes of the string models is to study the internal space-time symmetries of relativistic particles, we can achieve this purpose by studying two-pearl strings, which should share the same symmetry property as all other string models. In practice, the two-pearl string model consists of two constituents joined together by a spring force. The only problem is to construct an oscillator model which can be Lorentz-transformed. The problem is then reduced to constructing a covariant harmonic oscillator formalism. This subject has a long history .
In Ref. , Feynman et al. attempted to construct a covariant model for hadrons consisting of quarks joined together by an oscillator force. They indeed formulated a Lorentz-invariant oscillator equation. They also worked out the degeneracies of the oscillator states which are consistent with the observed mesonic and baryonic mass spectra. However, their wave functions are not normalizable in the space-time coordinate system. The authors of that paper never considered the question of covariance.
What is the relevant question on covariance within the framework of the oscillator formalism? In 1969 , Feynman proposed his parton model for hadrons moving with almost the speed of light. Feynman observed that the hadron consists of a collection of an infinite number of partons which behave like free particles. The partons appear to have properties which are quite different from those of the quarks. If the wave functions are to be covariant, they should be able to translate the quark model for slow hadrons into the parton model of fast hadrons. This is precisely the question we would like to address in the present report.
We achieve this purpose by transforming the oscillator model of Feynman et al. into a representation of the Poincaré group which governs the space-time symmetries of relativistic particles . In this formalism, the internal space-time symmetries are dictated by the little groups. The little group is the maximal subgroup of the Lorentz group whose transformations leave the four-momentum of a given particle invariant. The little groups for massive and massless particles are known to be isomorphic to $`O(3)`$ or the three-dimensional rotation group and $`E(2)`$ or the two-dimensional Euclidean group . In this paper, we can rewrite the wave functions of Feynman et al. as a representation of the $`O(3)`$-like little group for a massive particle.
Let us go back to physics. When Einstein formulated $`E=mc^2`$ in 1905, he was talking about point particles. These days, particles have their own internal space-time structures. In the case of hadrons, the particle has a space-time extension like the hydrogen atom. In spite of these complications, we do not question the validity of the energy-momentum relation given by $`E=\sqrt{m^2+p^2}`$ for all relativistic particles. The problem is that each particle has its own internal space-time variables. In addition to the energy and momentum, the massive particle has a package of variables including mass, spin, and quark degrees of freedom. The massless particle has its helicity, gauge degrees of freedom, and parton degrees of freedom.
The question is whether the two different packages of new variables for massive and massless particles can be combined into a single covariant package as Einstein’s $`E=mc^2`$ does for the energy-momentum relations for massive and massless particles. We shall divide this question into two parts. First, we deal with the question of spin, helicity, and gauge degrees of freedom. We can deal with this question without worrying about the space-time extension of the particle. Second, we face the problem of space-time extensions using hadrons which are bound states of quarks obeying the laws of quantum mechanics. In order to answer this question, we first have to construct a quantum mechanics of bound states which can be Lorentz-boosted.
In Sec. II, the above-mentioned problems are spelled out in detail. In Sec. III, we present a brief history of applications of the little groups of the Poincaré to internal space-time symmetries of relativistic particles. In Sec. IV, we construct representations of the little group using harmonic oscillator wave functions. In Sec. V, it is shown that the Lorentz-boosted oscillator wave functions exhibit the peculiarities Feynman’s parton model in the infinite-momentum limit.
Much of the concept of the Lorentz-squeezed wave function is derived from elliptic deformations of a sphere, which result in a mathematical technique called group contraction . In Appendix A, we discuss the contraction of the three-dimensional rotation group to the two-dimensional Euclidean group. In Appendix B, we discuss the little group for a massless particle as the infinite-momentum/zero-mass limit of the little group for a massive particle.
## II Statement of the Problem
The Lorentz-invariant differential equation of Feynman, Kislinger, and Ravndal is a linear partial differential equation . It can therefore generate many different sets of solutions depending on boundary conditions. In their paper, Feynman et al. choose Lorentz-invariant solutions. But their solutions are not normalizable and cannot therefore be interpreted within the framework of the existing rules of quantum mechanics. In this report, we point out that there are other sets of solutions. We choose here normalizable wave functions. They are not Lorentz-invariant, but they are Lorentz-covariant. These covariant solutions form a representation of the Poincaré group .
The Lorentz-invariant wave function takes the same form in every Lorentz frame, but the covariant wave function takes different forms. However, in the covariant formulation, the wave function in one frame can be transformed to the wave function in a different frame by a Lorentz transformation. In particular, the wave function in the infinite-momentum frame is quite different from the wave function in the rest frame. Thus, it may be possible to obtain Feynman’s parton picture by Lorentz-boosting the quark wave function constructed in the rest frame.
In spite of the mathematical difficulties, the original paper of Feynman et al. contains the following radical departures from the conventional viewpoint.
* For relativistic bound states, we should use harmonic oscillators instead of Feynman diagrams.
* We should use harmonic oscillators instead of Regge trajectories to study degeneracies in the hadronic spectra.
These views sound radical, but they are quite consistent with the existing forms of quantum mechanics and quantum field theory. In quantum field theory, Feynman diagrams are only for scattering states, where the external lines correspond to free particles in asymptotic states. The oscillator eigenvalues are proportional to the highest values of the angular momentum. This is often known as the linear Regge trajectory. Between the Regge trajectory and the three-dimensional oscillator, which one is closer to the fundamental laws of quantum mechanics? Therefore, the above-mentioned radical departures mean that we are coming back to common sense in physics.
On the other hand, there is one important point Feynman et al. failed to see in their oscillator paper . Two years before the publication of this oscillator paper, Feynman proposed his parton model . However, in their oscillator paper, they do not mention the possibility of obtaining the parton picture from the quantum mechanics of bound-state quarks in a hadron in its rest frame. It is probably because their wave functions are Lorentz-invariant but not covariant.
However, the covariant formalism forces us to raise this question. This is precisely the purpose of the present report.
## III Poincaré Symmetry of Relativistic Particles
The Poincaré group is the group of inhomogeneous Lorentz transformations, namely Lorentz transformations preceded or followed by space-time translations. In order to study this group, we have to understand first the group of Lorentz transformations, the group of translations, and how these two groups are combined to form the Poincaré group. The Poincaré group is a semi-direct product of the Lorentz and translation groups. The two Casimir operators of this group correspond to the (mass)<sup>2</sup> and (spin)<sup>2</sup> of a given particle. Indeed, the particle mass and its spin magnitude are Lorentz-invariant quantities.
The question then is how to construct the representations of the Lorentz group which are relevant to physics. For this purpose, Wigner in 1939 studied the subgroups of the Lorentz group whose transformations leave the four-momentum of a given free particle invariant . The maximal subgroup of the Lorentz group which leaves the four-momentum invariant is called the little group. Since the little group leaves the four-momentum invariant, it governs the internal space-time symmetries of relativistic particles. Wigner shows in his paper that the internal space-time symmetries of massive and massless particles are dictated by the $`O(3)`$-like and $`E(2)`$-like little groups respectively.
The $`O(3)`$-like little group is locally isomorphic to the three-dimensional rotation group, which is very familiar to us. For instance, the group $`SU(2)`$ for the electron spin is an $`O(3)`$-like little group. The group $`E(2)`$ is the Euclidean group in a two-dimensional space, consisting of translations and rotations on a flat surface. We perform these transformations on ourselves every day when we move from home to school. The mathematics of these Euclidean transformations is also simple. However, the group of these transformations is not well known to us. In Appendix A, we give a matrix representation of the $`E(2)`$ group.
The group of Lorentz transformations consists of three boosts and three rotations. The rotations therefore constitute a subgroup of the Lorentz group. If a massive particle is at rest, its four-momentum is invariant under rotations. Thus the little group for a massive particle at rest is the three-dimensional rotation group. Then what is affected by the rotation? The answer to this question is very simple. The particle in general has its spin. The spin orientation is going to be affected by the rotation!
If the particle at rest is boosted along the $`z`$ direction, it will pick up a non-zero momentum component. The generators of the $`O(3)`$ group will then be boosted. The boost will take the form of conjugation by the boost operator. This boost will not change the Lie algebra of the rotation group, and the boosted little group will still leave the boosted four-momentum invariant. We call this the $`O(3)`$-like little group. If we use the four-vector coordinate $`(x,y,z,t)`$, the four-momentum vector for the particle at rest is $`(0,0,0,m)`$, and the three-dimensional rotation group leaves this four-momentum invariant. This little group is generated by
$$J_1=\left(\begin{array}{cccc}0& 0& 0& 0\\ 0& 0& -i& 0\\ 0& i& 0& 0\\ 0& 0& 0& 0\end{array}\right),J_2=\left(\begin{array}{cccc}0& 0& i& 0\\ 0& 0& 0& 0\\ -i& 0& 0& 0\\ 0& 0& 0& 0\end{array}\right),$$
(1)
and
$$J_3=\left(\begin{array}{cccc}0& -i& 0& 0\\ i& 0& 0& 0\\ 0& 0& 0& 0\\ 0& 0& 0& 0\end{array}\right),$$
(2)
which satisfy the commutation relations:
$$[J_i,J_j]=iϵ_{ijk}J_k.$$
(3)
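These generator matrices and their algebra can be checked numerically. The following NumPy sketch (the variable names are ours) encodes the matrices of Eqs. (1) and (2), verifies the commutation relations of Eq. (3), and confirms that each generator annihilates the rest-frame four-momentum $`(0,0,0,m)`$:

```python
import numpy as np

# Rotation generators of Eqs. (1) and (2) in the (x, y, z, t) basis
J1 = np.zeros((4, 4), dtype=complex)
J1[1, 2], J1[2, 1] = -1j, 1j          # mixes y and z
J2 = np.zeros((4, 4), dtype=complex)
J2[0, 2], J2[2, 0] = 1j, -1j          # mixes z and x
J3 = np.zeros((4, 4), dtype=complex)
J3[0, 1], J3[1, 0] = -1j, 1j          # mixes x and y

def comm(a, b):
    return a @ b - b @ a

# [J_i, J_j] = i eps_ijk J_k, Eq. (3)
assert np.allclose(comm(J1, J2), 1j * J3)
assert np.allclose(comm(J2, J3), 1j * J1)
assert np.allclose(comm(J3, J1), 1j * J2)

# Each generator annihilates p = (0, 0, 0, m) (here m = 1),
# so every rotation exp(i theta J_i) leaves the rest four-momentum invariant.
p_rest = np.array([0, 0, 0, 1.0], dtype=complex)
for J in (J1, J2, J3):
    assert np.allclose(J @ p_rest, 0)
```

Since every $`J_i`$ annihilates the rest four-momentum, the finite rotations they generate leave it invariant, as the text states.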
It is not possible to bring a massless particle to its rest frame. In his 1939 paper , Wigner observed that the little group for a massless particle moving along the $`z`$ axis is generated by the rotation generator around the $`z`$ axis, namely $`J_3`$ of Eq.(2), and two other generators which take the form
$$N_1=\left(\begin{array}{cccc}0& 0& -i& i\\ 0& 0& 0& 0\\ i& 0& 0& 0\\ i& 0& 0& 0\end{array}\right),N_2=\left(\begin{array}{cccc}0& 0& 0& 0\\ 0& 0& -i& i\\ 0& i& 0& 0\\ 0& i& 0& 0\end{array}\right).$$
(4)
If we use $`K_i`$ for the boost generator along the i-th axis, these matrices can be written as
$$N_1=K_1-J_2,N_2=K_2+J_1,$$
(5)
with
$$K_1=\left(\begin{array}{cccc}0& 0& 0& i\\ 0& 0& 0& 0\\ 0& 0& 0& 0\\ i& 0& 0& 0\end{array}\right),K_2=\left(\begin{array}{cccc}0& 0& 0& 0\\ 0& 0& 0& i\\ 0& 0& 0& 0\\ 0& i& 0& 0\end{array}\right).$$
(6)
The generators $`J_3,N_1`$ and $`N_2`$ satisfy the following set of commutation relations.
$$[N_1,N_2]=0,[J_3,N_1]=iN_2,[J_3,N_2]=-iN_1.$$
(7)
In Appendix A, we discuss the generators of the $`E(2)`$ group. They are $`J_3`$, which generates rotations around the $`z`$ axis, and $`P_1`$ and $`P_2`$, which generate translations along the $`x`$ and $`y`$ directions respectively. If we replace $`N_1`$ and $`N_2`$ by $`P_1`$ and $`P_2`$, the above set of commutation relations becomes the set for the $`E(2)`$ group given in Eq.(A7). This is the reason why we say the little group for massless particles is $`E(2)`$-like. Very clearly, the matrices $`N_1`$ and $`N_2`$ generate Lorentz transformations.
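The structure of Eqs. (4)-(7) can be verified the same way. The sketch below (our notation) builds $`N_1`$ and $`N_2`$ from the boost and rotation generators as in Eq. (5), and checks both the $`E(2)`$-like algebra and the invariance of a light-like four-momentum:

```python
import numpy as np

# Boost generators K1, K2 of Eq. (6), rotation generators of Eqs. (1)-(2)
K1 = np.zeros((4, 4), dtype=complex); K1[0, 3] = K1[3, 0] = 1j
K2 = np.zeros((4, 4), dtype=complex); K2[1, 3] = K2[3, 1] = 1j
J1 = np.zeros((4, 4), dtype=complex); J1[1, 2], J1[2, 1] = -1j, 1j
J2 = np.zeros((4, 4), dtype=complex); J2[0, 2], J2[2, 0] = 1j, -1j
J3 = np.zeros((4, 4), dtype=complex); J3[0, 1], J3[1, 0] = -1j, 1j

N1 = K1 - J2          # Eq. (5)
N2 = K2 + J1

def comm(a, b):
    return a @ b - b @ a

# E(2)-like commutation relations, Eq. (7)
assert np.allclose(comm(N1, N2), 0)
assert np.allclose(comm(J3, N1), 1j * N2)
assert np.allclose(comm(J3, N2), -1j * N1)

# J3, N1, N2 all annihilate the light-like four-momentum (0, 0, w, w),
# so the little group they generate leaves it invariant (here w = 1).
p_light = np.array([0, 0, 1.0, 1.0], dtype=complex)
for G in (J3, N1, N2):
    assert np.allclose(G @ p_light, 0)
```

The translation-like generators $`N_1`$ and $`N_2`$ commute with each other, exactly as the translations of the two-dimensional Euclidean group do.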
It is not difficult to associate the rotation generator $`J_3`$ with the helicity degree of freedom of the massless particle. Then what physical variable is associated with the $`N_1`$ and $`N_2`$ generators? Indeed, Wigner was the one who discovered the existence of these generators, but he did not give any physical interpretation to these translation-like generators. For this reason, for many years, only those representations with zero eigenvalues of the $`N`$ operators were thought to be physically meaningful representations . It was not until 1971 that Janner and Janssen reported that the transformations generated by these operators are gauge transformations . The role of this translation-like transformation has also been studied for spin-1/2 particles, and it was concluded that the polarization of neutrinos is due to gauge invariance .
Another important development along this line of research is the application of group contractions to the unification of the two different little groups for massive and massless particles. We always associate the three-dimensional rotation group with a spherical surface. Let us consider a circular area of radius 1 kilometer centered on the north pole of the earth. Since the radius of the earth is more than 6,450 times longer, the circular region appears flat. Thus, within this region, we use the $`E(2)`$ symmetry group. The validity of this approximation depends on the ratio of the two radii.
In 1953, Inonu and Wigner formulated this problem as the contraction of $`O(3)`$ to $`E(2)`$ . How about then the little groups which are isomorphic to $`O(3)`$ and $`E(2)`$? It is reasonable to expect that the $`E(2)`$-like little group for massless particles be obtained as a limiting case of the $`O(3)`$-like little group for massive particles. In 1981, it was observed by Ferrara and Savoy that this limiting process is the Lorentz boost . In 1983, using the same limiting process as that of Ferrara and Savoy, Han et al. showed that transverse rotation generators become the generators of gauge transformations in the limit of infinite momentum and/or zero mass . In 1987, Kim and Wigner showed that the little group for massless particles is the cylindrical group which is isomorphic to the $`E(2)`$ group . This completes the second row in Table I, where Wigner’s little group unifies the internal space-time symmetries of massive and massless particles.
We are now interested in constructing the third row in Table I. As we promised in Sec. I, we will be dealing with hadrons which are bound states of quarks with space-time extensions. For this purpose, we need a set of covariant wave functions consistent with the existing laws of quantum mechanics, including of course the uncertainty principle and probability interpretation.
With these wave functions, we propose to solve the following problem in high-energy physics. The quark model works well when hadrons are at rest or move slowly. However, when they move with speed close to that of light, they appear as a collection of an infinite number of partons . As we stated above, we need a set of wave functions which can be Lorentz-boosted. How can we then construct such a set? In constructing wave functions for any purpose in quantum mechanics, the standard procedure is to try first harmonic oscillator wave functions. In studying the Lorentz boost, the standard language is the Lorentz group. Thus the first step in constructing covariant wave functions is to work out representations of the Lorentz group using harmonic oscillators .
## IV Covariant Harmonic Oscillators
If we construct a representation of the Lorentz group using normalizable harmonic oscillator wave functions, the result is the covariant harmonic oscillator formalism . The formalism constitutes a representation of Wigner’s $`O(3)`$-like little group for a massive particle with internal space-time structure. This oscillator formalism has been shown to be effective in explaining the basic phenomenological features of relativistic extended hadrons observed in high-energy laboratories. In particular, the formalism shows that the quark model and Feynman’s parton picture are two different manifestations of one covariant entity . The essential feature of the covariant harmonic oscillator formalism is that Lorentz boosts are squeeze transformations . In the light-cone coordinate system, the boost transformation expands one coordinate while contracting the other so that the product of these two coordinates remains constant. We shall show that the parton picture emerges from this squeeze effect.
Let us consider a bound state of two particles. For convenience, we shall call the bound state the hadron, and call its constituents quarks. Then there is a Bohr-like radius measuring the space-like separation between the quarks. There is also a time-like separation between the quarks, and this variable becomes mixed with the longitudinal spatial separation as the hadron moves with a relativistic speed. There are no quantum excitations along the time-like direction. On the other hand, there is the time-energy uncertainty relation which allows quantum transitions. It is possible to accommodate these aspects within the framework of the present form of quantum mechanics. The uncertainty relation between the time and energy variables is a c-number relation , which does not allow excitations along the time-like coordinate. We shall see that the covariant harmonic oscillator formalism accommodates this narrow window in the present form of quantum mechanics.
For a hadron consisting of two quarks, we can consider their space-time positions $`x_a`$ and $`x_b`$, and use the variables
$$X=(x_a+x_b)/2,x=(x_a-x_b)/2\sqrt{2}.$$
(8)
The four-vector $`X`$ specifies where the hadron is located in space and time, while the variable $`x`$ measures the space-time separation between the quarks. In the convention of Feynman et al. , the internal motion of the quarks bound by a harmonic oscillator potential of unit strength can be described by the Lorentz-invariant equation
$$\frac{1}{2}\left\{x_\mu ^2-\frac{\partial ^2}{\partial x_\mu ^2}\right\}\psi (x)=\lambda \psi (x).$$
(9)
It is now possible to construct a representation of the Poincaré group from the solutions of the above differential equation .
The coordinate $`X`$ is associated with the overall hadronic four-momentum, and the space-time separation variable $`x`$ dictates the internal space-time symmetry or the $`O(3)`$-like little group. Thus, we should construct the representation of the little group from the solutions of the differential equation in Eq.(9). If the hadron is at rest, we can separate the $`t`$ variable from the equation. For this variable we can assign the ground-state wave function to accommodate the c-number time-energy uncertainty relation . For the three space-like variables, we can solve the oscillator equation in the spherical coordinate system with usual orbital and radial excitations. This will indeed constitute a representation of the $`O(3)`$-like little group for each value of the mass. The solution should take the form
$$\psi (x,y,z,t)=\psi (x,y,z)\left(\frac{1}{\pi }\right)^{1/4}\mathrm{exp}\left(-t^2/2\right),$$
(10)
where $`\psi (x,y,z)`$ is the wave function for the three-dimensional oscillator with appropriate angular momentum quantum numbers. Indeed, the above wave function constitutes a representation of Wigner’s $`O(3)`$-like little group for a massive particle .
Since the three-dimensional oscillator differential equation is separable in both spherical and Cartesian coordinate systems, $`\psi (x,y,z)`$ consists of Hermite polynomials of $`x,y`$, and $`z`$. If the Lorentz boost is made along the $`z`$ direction, the $`x`$ and $`y`$ coordinates are not affected, and can be temporarily dropped from the wave function. The wave function of interest can be written as
$$\psi ^n(z,t)=\left(\frac{1}{\pi }\right)^{1/4}\mathrm{exp}\left(-t^2/2\right)\psi ^n(z),$$
(11)
with
$$\psi ^n(z)=\left(\frac{1}{\sqrt{\pi }n!2^n}\right)^{1/2}H_n(z)\mathrm{exp}(-z^2/2),$$
(12)
where $`\psi ^n(z)`$ is for the $`n`$-th excited oscillator state. The full wave function $`\psi ^n(z,t)`$ is
$$\psi _0^n(z,t)=\left(\frac{1}{\pi n!2^n}\right)^{1/2}H_n(z)\mathrm{exp}\left\{-\frac{1}{2}\left(z^2+t^2\right)\right\}.$$
(13)
The subscript $`0`$ means that the wave function is for the hadron at rest. The above expression is not Lorentz-invariant, and its localization undergoes a Lorentz squeeze as the hadron moves along the $`z`$ direction . The above form of the wave function is illustrated in Fig.1.
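As a quick numerical check of Eq. (13), the sketch below evaluates the wave function on a grid and confirms that it is normalized for several excitations. It assumes that NumPy's physicists' Hermite polynomials coincide with the $`H_n(z)`$ used here; the grid parameters are arbitrary choices of ours:

```python
import numpy as np
from math import factorial, pi

def psi0_n(n, z, t):
    # Eq. (13): rest-frame wave function with the n-th excitation along z
    coef = np.zeros(n + 1)
    coef[n] = 1.0
    Hn = np.polynomial.hermite.hermval(z, coef)   # physicists' H_n(z)
    return (1.0 / (pi * factorial(n) * 2**n)) ** 0.5 * Hn * np.exp(-(z**2 + t**2) / 2)

z = np.linspace(-10, 10, 801)
Z, T = np.meshgrid(z, z, indexing="ij")
dzdt = (z[1] - z[0]) ** 2
for n in (0, 1, 3):
    norm = (np.abs(psi0_n(n, Z, T)) ** 2).sum() * dzdt
    assert np.isclose(norm, 1.0, atol=1e-6)
```

The normalization holds because $`\int H_n^2(z)e^{-z^2}dz=\sqrt{\pi }2^nn!`$ and the $`t`$ integral contributes another factor of $`\sqrt{\pi }`$.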
It is convenient to use the light-cone variables to describe Lorentz boosts. The light-cone coordinate variables are
$$u=(z+t)/\sqrt{2},v=(z-t)/\sqrt{2}.$$
(14)
In terms of these variables, the Lorentz boost along the $`z`$ direction,
$$\left(\begin{array}{c}z^{\prime }\\ t^{\prime }\end{array}\right)=\left(\begin{array}{cc}\mathrm{cosh}\eta & \mathrm{sinh}\eta \\ \mathrm{sinh}\eta & \mathrm{cosh}\eta \end{array}\right)\left(\begin{array}{c}z\\ t\end{array}\right),$$
(15)
takes the simple form
$$u^{\prime }=e^\eta u,v^{\prime }=e^{-\eta }v,$$
(16)
where $`\eta `$ is the boost parameter, given by $`\mathrm{tanh}^{-1}(v/c)`$. Indeed, the $`u`$ variable becomes expanded while the $`v`$ variable becomes contracted. This is the squeeze mechanism discussed extensively in the literature . This squeeze transformation is illustrated in Fig. 2.
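A short numerical check (with an arbitrary boost parameter and space-time point of our choosing) confirms that the boost matrix of Eq. (15) acts on the light-cone variables of Eq. (14) exactly as in Eq. (16):

```python
import numpy as np

eta = 0.7                       # arbitrary boost parameter
boost = np.array([[np.cosh(eta), np.sinh(eta)],   # Eq. (15), (z, t) ordering
                  [np.sinh(eta), np.cosh(eta)]])

z, t = 1.3, -0.4                # an arbitrary space-time point
zp, tp = boost @ np.array([z, t])

u, v = (z + t) / np.sqrt(2), (z - t) / np.sqrt(2)        # Eq. (14)
up, vp = (zp + tp) / np.sqrt(2), (zp - tp) / np.sqrt(2)

# Eq. (16): u expands by e^eta while v contracts by e^(-eta)
assert np.isclose(up, np.exp(eta) * u)
assert np.isclose(vp, np.exp(-eta) * v)

# The product uv is therefore invariant under the boost
assert np.isclose(up * vp, u * v)
```

The invariance of the product $`uv`$ is the squeeze property: the boost stretches one light-cone axis and compresses the other by the same factor.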
The wave function of Eq.(13) can be written as
$$\psi _0^n(z,t)=\left(\frac{1}{\pi n!2^n}\right)^{1/2}H_n\left((u+v)/\sqrt{2}\right)\mathrm{exp}\left\{-\frac{1}{2}(u^2+v^2)\right\}.$$
(17)
If the system is boosted, the wave function becomes
$$\psi _\eta ^n(z,t)=\left(\frac{1}{\pi n!2^n}\right)^{1/2}H_n\left((e^{-\eta }u+e^\eta v)/\sqrt{2}\right)\times \mathrm{exp}\left\{-\frac{1}{2}\left(e^{-2\eta }u^2+e^{2\eta }v^2\right)\right\}.$$
(18)
In both Eqs. (17) and (18), the localization property of the wave function in the $`uv`$ plane is determined by the Gaussian factor, and it is sufficient to study the ground state only for the essential feature of the boundary condition. The wave functions in Eq.(17) and Eq.(18) then respectively become
$$\psi _0(z,t)=\left(\frac{1}{\pi }\right)^{1/2}\mathrm{exp}\left\{-\frac{1}{2}(u^2+v^2)\right\}.$$
(19)
If the system is boosted, the wave function becomes
$$\psi _\eta (z,t)=\left(\frac{1}{\pi }\right)^{1/2}\mathrm{exp}\left\{-\frac{1}{2}\left(e^{-2\eta }u^2+e^{2\eta }v^2\right)\right\}.$$
(20)
We note here that the transition from Eq.(19) to Eq.(20) is a squeeze transformation. The wave function of Eq.(19) is distributed within a circular region in the $`uv`$ plane, and thus in the $`zt`$ plane. On the other hand, the wave function of Eq.(20) is distributed in an elliptic region. This ellipse is a “squeezed” circle with the same area as the circle, as is illustrated in Fig. 2.
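The area-preserving nature of the squeeze also shows up in the normalization: integrating the density of Eq. (20) over the $`uv`$ plane gives 1 for every boost parameter. A minimal numeric sketch (the grid size and $`\eta `$ values are arbitrary choices of ours):

```python
import numpy as np

# Normalization of the squeezed ground state of Eq. (20): the Lorentz squeeze
# deforms the circle into an ellipse of equal area, so the total probability
# remains 1 for every value of the boost parameter eta.
u = np.linspace(-12.0, 12.0, 1201)
U, V = np.meshgrid(u, u, indexing="ij")
w = (u[1] - u[0]) ** 2

norms = []
for eta in (0.0, 0.5, 1.2):
    rho = np.exp(-(np.exp(-2 * eta) * U**2 + np.exp(2 * eta) * V**2)) / np.pi
    norms.append(rho.sum() * w)

assert np.allclose(norms, 1.0, atol=1e-4)
```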
For many years, we have been interested in combining quantum mechanics with special relativity. One way to achieve this goal is to combine the quantum mechanics of Fig. 1 and the relativity of Fig. 2 to produce a covariant picture of Fig. 3. We are now ready to exploit the physical consequences of the Lorentz-squeezed quantum mechanics of Fig. 3.
## V Feynman’s Parton Picture
It is safe to believe that hadrons are quantum bound states of quarks having localized probability distribution. As in all bound-state cases, this localization condition is responsible for the existence of discrete mass spectra. The most convincing evidence for this bound-state picture is the hadronic mass spectra which are observed in high-energy laboratories . However, this picture of bound states is applicable only to observers in the Lorentz frame in which the hadron is at rest. How would the hadrons appear to observers in other Lorentz frames? More specifically, can we use the picture of Lorentz-squeezed hadrons discussed in Sec. IV?
The proton’s radius is about $`10^{-5}`$ times that of the hydrogen atom. Therefore, it is not unnatural to assume that the proton is a point charge in atomic physics. However, while carrying out experiments on electron scattering from proton targets, Hofstadter in 1955 observed that the proton charge is spread out . In this experiment, an electron emits a virtual photon, which then interacts with the proton. If the proton consists of quarks distributed within a finite space-time region, the virtual photon will interact with quarks which carry fractional charges. The scattering amplitude will depend on the way in which quarks are distributed within the proton. The portion of the scattering amplitude which describes the interaction between the virtual photon and the proton is called the form factor.
Although there have been many attempts to explain this phenomenon within the framework of quantum field theory, it is quite natural to expect that the wave function in the quark model will describe the charge distribution. In high-energy experiments, we are dealing with the situation in which the momentum transfer in the scattering process is large. Indeed, the Lorentz-squeezed wave functions lead to the correct behavior of the hadronic form factor for large values of the momentum transfer .
While the form factor is the quantity which can be extracted from the elastic scattering, it is important to realize that in high-energy processes, many particles are produced in the final state. They are called inelastic processes. While the elastic process is described by the total energy and momentum transfer in the center-of-mass coordinate system, there is, in addition, the energy transfer in inelastic scattering. Therefore, we would expect that the scattering cross section would depend on the energy, momentum transfer, and energy transfer. However, one prominent feature in inelastic scattering is that the cross section remains nearly constant for a fixed value of the momentum-transfer/energy-transfer ratio. This phenomenon is called “scaling” .
In order to explain the scaling behavior in inelastic scattering, Feynman in 1969 observed that a fast-moving hadron can be regarded as a collection of many “partons” whose properties do not appear to be identical to those of quarks . For example, the number of quarks inside a static proton is three, while the number of partons in a rapidly moving proton appears to be infinite. The question then is how the proton, which looks like a bound state of quarks to one observer, can appear different to an observer in a different Lorentz frame. Feynman made the following systematic observations.
* The picture is valid only for hadrons moving with velocity close to that of light.
* The interaction time between the quarks becomes dilated, and partons behave as free independent particles.
* The momentum distribution of partons becomes widespread as the hadron moves very fast.
* The number of partons seems to be infinite or much larger than that of quarks.
Because the hadron is believed to be a bound state of two or three quarks, each of the above phenomena appears as a paradox, particularly b) and c) together. We would like to resolve this paradox using the covariant harmonic oscillator formalism.
For this purpose, we need a momentum-energy wave function. If the quarks have the four-momenta $`p_a`$ and $`p_b`$, we can construct two independent four-momentum variables
$$P=p_a+p_b,q=\sqrt{2}(p_a-p_b).$$
(21)
The four-momentum $`P`$ is the total four-momentum and is thus the hadronic four-momentum. $`q`$ measures the four-momentum separation between the quarks.
We expect to get the momentum-energy wave function by taking the Fourier transformation of Eq.(20):
$$\varphi _\eta (q_z,q_0)=\left(\frac{1}{2\pi }\right)\int \psi _\eta (z,t)\mathrm{exp}\left\{-i(q_zz-q_0t)\right\}dzdt.$$
(22)
Let us now define the momentum-energy variables in the light-cone coordinate system as
$$q_u=-(q_0-q_z)/\sqrt{2},q_v=(q_0+q_z)/\sqrt{2}.$$
(23)
In terms of these variables, the Fourier transformation of Eq.(22) can be written as
$$\varphi _\eta (q_z,q_0)=\left(\frac{1}{2\pi }\right)\int \psi _\eta (z,t)\mathrm{exp}\left\{-i(q_uu+q_vv)\right\}dudv.$$
(24)
The resulting momentum-energy wave function is
$$\varphi _\eta (q_z,q_0)=\left(\frac{1}{\pi }\right)^{1/2}\mathrm{exp}\left\{-\frac{1}{2}\left(e^{2\eta }q_u^2+e^{-2\eta }q_v^2\right)\right\}.$$
(25)
Since we are using here the harmonic oscillator, the mathematical form of the above momentum-energy wave function is identical to that of the space-time wave function. The Lorentz squeeze properties of these wave functions are also the same, as are indicated in Fig. 4.
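This can be checked directly: a brute-force numerical evaluation of the Fourier integral of Eq. (22) on a grid reproduces the closed form of Eq. (25). The sketch below assumes the ground state and the light-cone momentum variables of Eq. (23); the sample points, grid parameters, and boost parameter are arbitrary choices of ours:

```python
import numpy as np

eta, span, n = 0.6, 10.0, 601
z = np.linspace(-span, span, n)
Z, T = np.meshgrid(z, z, indexing="ij")
U, V = (Z + T) / np.sqrt(2), (Z - T) / np.sqrt(2)
# ground-state squeezed wave function, Eq. (20)
psi = (1 / np.pi) ** 0.5 * np.exp(-0.5 * (np.exp(-2 * eta) * U**2 + np.exp(2 * eta) * V**2))
dzdt = (z[1] - z[0]) ** 2

def phi_numeric(qz, q0):
    # direct numerical Fourier transform of Eq. (22)
    return (1 / (2 * np.pi)) * (psi * np.exp(-1j * (qz * Z - q0 * T))).sum() * dzdt

def phi_closed(qz, q0):
    # closed form of Eq. (25), with q_u and q_v defined as in Eq. (23)
    qu, qv = -(q0 - qz) / np.sqrt(2), (q0 + qz) / np.sqrt(2)
    return (1 / np.pi) ** 0.5 * np.exp(-0.5 * (np.exp(2 * eta) * qu**2 + np.exp(-2 * eta) * qv**2))

for qz, q0 in [(0.3, 0.1), (-0.5, 0.4), (1.0, 1.0)]:
    assert np.isclose(phi_numeric(qz, q0), phi_closed(qz, q0), atol=1e-3)
```

The Gaussian factorizes in the light-cone variables, so each one-dimensional Fourier integral simply inverts the squeeze factor of its coordinate.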
When the hadron is at rest with $`\eta =0`$, both wave functions behave like those for the static bound state of quarks. As $`\eta `$ increases, the wave functions become continuously squeezed until they become concentrated along their respective positive light-cone axes. Let us look at the z-axis projection of the space-time wave function. Indeed, the width of the quark distribution increases as the hadronic speed approaches the speed of light. The position of each quark appears widespread to the observer in the laboratory frame, and the quarks appear like free particles.
Furthermore, the interaction time of the quarks among themselves becomes dilated. Because the wave function becomes widespread, the distance between one end of the harmonic oscillator well and the other end increases, as is indicated in Fig. 4. This effect, first noted by Feynman , is universally observed in high-energy hadronic experiments. The period of oscillation increases like $`e^\eta `$. On the other hand, the external signal, since it moves in the direction opposite to that of the hadron, travels along the negative light-cone axis. Since the hadron contracts along the negative light-cone axis, the interaction time decreases like $`e^{-\eta }`$. The ratio of the interaction time to the oscillator period therefore becomes $`e^{-2\eta }`$. The energy of each proton coming out of the Fermilab accelerator is $`900`$ GeV. This leads to a ratio of $`10^{-6}`$, which is indeed a small number. The external signal is not able to sense the interaction of the quarks among themselves inside the hadron.
The momentum-energy wave function is just like the space-time wave function. The longitudinal momentum distribution becomes widespread as the hadronic speed approaches the velocity of light. This is in contradiction with our expectation from nonrelativistic quantum mechanics that the width of the momentum distribution is inversely proportional to that of the position wave function. Our expectation is that if the quarks are free, they must have sharply defined momenta, not a widespread distribution. This apparent contradiction presents to us the following two fundamental questions:
* If both the spatial and momentum distributions become widespread as the hadron moves, and if we insist on Heisenberg’s uncertainty relation, is Planck’s constant dependent on the hadronic velocity?
* Is this apparent contradiction related to another apparent contradiction that the number of partons is infinite while there are only two or three quarks inside the hadron?
The answer to the first question is “No”, and the answer to the second question is “Yes”. Let us answer the first question, which is related to the Lorentz invariance of Planck’s constant. If we take the product of the width of the longitudinal momentum distribution and that of the spatial distribution, we end up with the relation
$$<z^2><q_z^2>=(1/4)[\mathrm{cosh}(2\eta )]^2.$$
(26)
The right-hand side increases as the velocity parameter increases. This could lead us to an erroneous conclusion that Planck’s constant becomes dependent on velocity. This is not correct, because the longitudinal momentum variable $`q_z`$ is no longer conjugate to the longitudinal position variable when the hadron moves.
In order to maintain the Lorentz-invariance of the uncertainty product, we have to work with a conjugate pair of variables whose product does not depend on the velocity parameter. Let us go back to Eq.(23) and Eq.(24). It is quite clear that the light-cone variable $`u`$ and $`v`$ are conjugate to $`q_u`$ and $`q_v`$ respectively. It is also clear that the distribution along the $`q_u`$ axis shrinks as the $`u`$-axis distribution expands. The exact calculation leads to
$$<u^2><q_u^2>=1/4,<v^2><q_v^2>=1/4.$$
(27)
Planck’s constant is indeed Lorentz-invariant.
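Both Eq. (26) and Eq. (27) can be verified by numerically integrating the second moments of the squeezed Gaussian densities. Since the momentum-energy density of Eq. (25) has the same Gaussian form as Eq. (20) with $`\eta \mathrm{}\eta `$ replaced by its negative, one routine serves both; the grid parameters below are arbitrary choices of ours:

```python
import numpy as np

def moments(eta, span=10.0, n=601):
    """Second moments of the normalized density exp{-(e^{-2 eta} u^2 + e^{2 eta} v^2)}/pi."""
    u = np.linspace(-span, span, n)
    U, V = np.meshgrid(u, u, indexing="ij")
    rho = np.exp(-(np.exp(-2 * eta) * U**2 + np.exp(2 * eta) * V**2)) / np.pi
    w = (u[1] - u[0]) ** 2
    norm = rho.sum() * w
    avg = lambda f: (f * rho).sum() * w / norm
    # z = (u + v)/sqrt(2); calling with -eta gives the momentum-energy density,
    # with the results read as <q_z^2>, <q_u^2>, <q_v^2>
    return avg(((U + V) / np.sqrt(2)) ** 2), avg(U**2), avg(V**2)

eta = 0.8
z2, u2, v2 = moments(eta)        # space-time density of Eq. (20)
qz2, qu2, qv2 = moments(-eta)    # momentum-energy density of Eq. (25)

assert np.isclose(z2 * qz2, 0.25 * np.cosh(2 * eta) ** 2, rtol=1e-2)   # Eq. (26)
assert np.isclose(u2 * qu2, 0.25, rtol=1e-2)                            # Eq. (27)
assert np.isclose(v2 * qv2, 0.25, rtol=1e-2)
```

The light-cone products stay at $`1/4`$ for any $`\eta `$, while the product of Eq. (26) grows like $`\mathrm{cosh}^2(2\eta )`$, exactly as stated in the text.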
Let us next resolve the puzzle of why the number of partons appears to be infinite while there are only a finite number of quarks inside the hadron. As the hadronic speed approaches the speed of light, both the x and q distributions become concentrated along the positive light-cone axis. This means that the quarks also move with velocity very close to that of light. Quarks in this case behave like massless particles.
We then know from statistical mechanics that the number of massless particles is not a conserved quantity. For instance, in black-body radiation, free light-like particles have a widespread momentum distribution. However, this does not contradict the known principles of quantum mechanics, because the massless photons can be divided into infinitely many massless particles with a continuous momentum distribution.
Likewise, in the parton picture, massless free quarks have a widespread momentum distribution. They can appear as a distribution of an infinite number of free particles. These free massless particles are the partons. It is possible to measure this distribution in high-energy laboratories, and it is also possible to calculate it using the covariant harmonic oscillator formalism. We are thus forced to compare these two results. Indeed, according to Hussar’s calculation , the Lorentz-boosted oscillator wave function produces a reasonably accurate parton distribution, as indicated in Fig. 5.
## VI Concluding Remarks
In this report, we have considered a string consisting of only two particles bound together by an oscillator potential. The essence of the problem was to construct a quantum mechanics of harmonic oscillators which can be Lorentz-transformed. We achieved this purpose by remodeling the oscillator formalism of Feynman, Kislinger and Ravndal. Their Lorentz-invariant equation has a covariant set of solutions which is consistent with the existing principles of quantum mechanics and special relativity.
From these wave functions, it is possible to construct a representation of Wigner’s $`O(3)`$-like little group governing the internal space-time symmetries of relativistic particles with non-zero mass. In order to illustrate the difference between the little group for massive particles and that for massless particles, we have given a comprehensive review of the little groups for massive and massless particles. We have also discussed the contraction procedure in which the $`E(2)`$-like little group for massless particles is obtained from the $`O(3)`$-like little group for massive particles. We have thus reviewed the contents of Table I.
Let us go back to the issue of strings. As we noted earlier in this paper, the string is a limiting case of discrete sets of mass points. We can consider two limiting cases, namely the continuous string and two-particle string. There also is a possibility of strings of discrete sets of particles, or “polymers of point-like constituents”. These different strings might take different mathematical forms, but they should all share the space-time symmetry. Thus, the quickest way to study this symmetry is to use the simplest mathematical technique which the two-pearl string provides.
## Acknowledgments
The author would like to thank Academician A. A. Logunov and the members of the organizing committee for inviting him to visit the Institute of High Energy Physics at Protvino and participate in the 22nd International Workshop on the Fundamental Problems of High Energy Physics. The original title of this paper was “Two-bead Strings,” but Professor V. A. Petrov changed it to “Two-pearl Strings” when he was introducing the author and the title to the audience. He was the chairman of the session in which the author presented this paper.
## A Contraction of O(3) to E(2)
In this Appendix, we explain what the $`E(2)`$ group is. We then explain how we can obtain this group from the three-dimensional rotation group by making a flat-surface or cylindrical approximation. This contraction procedure will give a clue to obtaining the $`E(2)`$-like symmetry for massless particles from the $`O(3)`$-like symmetry for massive particles by making the infinite-momentum limit.
The $`E(2)`$ transformations consist of a rotation and two translations on a flat plane. Let us start with the rotation matrix applicable to the column vector $`(x,y,1)`$:
$$R(\theta )=\left(\begin{array}{ccc}\mathrm{cos}\theta & -\mathrm{sin}\theta & 0\\ \mathrm{sin}\theta & \mathrm{cos}\theta & 0\\ 0& 0& 1\end{array}\right).$$
(A1)
Let us then consider the translation matrix:
$$T(a,b)=\left(\begin{array}{ccc}1& 0& a\\ 0& 1& b\\ 0& 0& 1\end{array}\right).$$
(A2)
If we take the product $`T(a,b)R(\theta )`$,
$$E(a,b,\theta )=T(a,b)R(\theta )=\left(\begin{array}{ccc}\mathrm{cos}\theta & -\mathrm{sin}\theta & a\\ \mathrm{sin}\theta & \mathrm{cos}\theta & b\\ 0& 0& 1\end{array}\right).$$
(A3)
This is the Euclidean transformation matrix applicable to the two-dimensional $`xy`$ plane. The matrices $`R(\theta )`$ and $`T(a,b)`$ represent the rotation and translation subgroups respectively. The above expression is not a direct product because $`R(\theta )`$ does not commute with $`T(a,b)`$. The translations constitute an Abelian invariant subgroup because two different $`T`$ matrices commute with each other, and because
$$R(\theta )T(a,b)R^{-1}(\theta )=T(a^{\prime },b^{\prime }).$$
(A4)
The rotation subgroup is not invariant because the conjugation
$$T(a,b)R(\theta )T^{-1}(a,b)$$
does not lead to another rotation.
We can write the above transformation matrix in terms of generators. The rotation is generated by
$$J_3=\left(\begin{array}{ccc}0& -i& 0\\ i& 0& 0\\ 0& 0& 0\end{array}\right).$$
(A5)
The translations are generated by
$$P_1=\left(\begin{array}{ccc}0& 0& i\\ 0& 0& 0\\ 0& 0& 0\end{array}\right),P_2=\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& i\\ 0& 0& 0\end{array}\right).$$
(A6)
These generators satisfy the commutation relations:
$$[P_1,P_2]=0,\qquad [J_3,P_1]=iP_2,\qquad [J_3,P_2]=-iP_1.$$
(A7)
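As a quick numerical sanity check, the algebra (A5)-(A7) can be verified directly. This is a sketch assuming the conventional sign placement (with $`-i`$ in the upper-right entry of $`J_3`$ and $`[J_3,P_2]=-iP_1`$):

```python
import numpy as np

# E(2) generators in the 3x3 representation of (A5)-(A6),
# with the conventional minus signs written out explicitly
J3 = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])
P1 = np.array([[0, 0, 1j], [0, 0, 0], [0, 0, 0]])
P2 = np.array([[0, 0, 0], [0, 0, 1j], [0, 0, 0]])

def comm(A, B):
    return A @ B - B @ A

assert np.allclose(comm(P1, P2), 0)          # the two translations commute
assert np.allclose(comm(J3, P1), 1j * P2)
assert np.allclose(comm(J3, P2), -1j * P1)   # Eq. (A7)
```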
This $`E(2)`$ group is not only convenient for illustrating the groups containing an Abelian invariant subgroup, but also occupies an important place in constructing representations for the little group for massless particles, since the little group for massless particles is locally isomorphic to the above $`E(2)`$ group.
The contraction of $`O(3)`$ to $`E(2)`$ is well known and is often called the Inonu-Wigner contraction . The question is whether the $`E(2)`$-like little group can be obtained from the $`O(3)`$-like little group. In order to answer this question, let us closely look at the original form of the Inonu-Wigner contraction. We start with the generators of $`O(3)`$. The $`J_3`$ matrix is given in Eq.(2), and
$$J_1=\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& -i\\ 0& i& 0\end{array}\right),\qquad J_2=\left(\begin{array}{ccc}0& 0& i\\ 0& 0& 0\\ -i& 0& 0\end{array}\right).$$
(A8)
The Euclidean group $`E(2)`$ is generated by $`J_3,P_1`$ and $`P_2`$, and their Lie algebra has been discussed in Sec. I.
Let us transpose the Lie algebra of the $`E(2)`$ group. Then $`P_1`$ and $`P_2`$ become $`Q_1`$ and $`Q_2`$ respectively, where
$$Q_1=\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& 0\\ i& 0& 0\end{array}\right),Q_2=\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& 0\\ 0& i& 0\end{array}\right).$$
(A9)
Together with $`J_3`$, these generators satisfy the same set of commutation relations as that for $`J_3,P_1`$, and $`P_2`$ given in Eq.(A7):
$$[Q_1,Q_2]=0,\qquad [J_3,Q_1]=iQ_2,\qquad [J_3,Q_2]=-iQ_1.$$
(A10)
These matrices generate transformations of a point on a circular cylinder. Rotations around the cylindrical axis are generated by $`J_3`$. The matrices $`Q_1`$ and $`Q_2`$ generate translations along the direction of the $`z`$ axis. The group generated by these three matrices is called the cylindrical group .
We can achieve the contractions to the Euclidean and cylindrical groups by taking the large-radius limits of
$$P_1=\frac{1}{R}B^{-1}J_2B,\qquad P_2=-\frac{1}{R}B^{-1}J_1B,$$
(A11)
and
$$Q_1=-\frac{1}{R}BJ_2B^{-1},\qquad Q_2=\frac{1}{R}BJ_1B^{-1},$$
(A12)
where
$$B(R)=\left(\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& R\end{array}\right).$$
(A13)
The vector spaces to which the above generators are applicable are $`(x,y,z/R)`$ and $`(x,y,Rz)`$ for the Euclidean and cylindrical groups respectively. They can be regarded as the north-pole and equatorial-belt approximations of the spherical surface respectively .
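The contraction limits can also be checked numerically. The following sketch (with the sign conventions assumed above for $`J_1`$, $`J_2`$ and the $`P_i`$, $`Q_i`$) confirms that the conjugated and rescaled rotation generators approach the translation-like generators as $`R`$ grows:

```python
import numpy as np

J1 = np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])
J2 = np.array([[0, 0, 1j], [0, 0, 0], [-1j, 0, 0]])
P1 = np.array([[0, 0, 1j], [0, 0, 0], [0, 0, 0]])
P2 = np.array([[0, 0, 0], [0, 0, 1j], [0, 0, 0]])
Q1 = np.array([[0, 0, 0], [0, 0, 0], [1j, 0, 0]])
Q2 = np.array([[0, 0, 0], [0, 0, 0], [0, 1j, 0]])

for R in (1e3, 1e6):
    B = np.diag([1.0, 1.0, R])
    Binv = np.diag([1.0, 1.0, 1.0 / R])
    # large-radius limits of (A11)-(A12); the residual terms vanish as 1/R^2
    assert np.allclose(Binv @ J2 @ B / R, P1, atol=2 / R)
    assert np.allclose(-Binv @ J1 @ B / R, P2, atol=2 / R)
    assert np.allclose(-B @ J2 @ Binv / R, Q1, atol=2 / R)
    assert np.allclose(B @ J1 @ Binv / R, Q2, atol=2 / R)
```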
## B Contraction of O(3)-like to E(2)-like Little Groups
Since $`P_1(P_2)`$ commutes with $`Q_2(Q_1)`$, we can consider the following combination of generators.
$$F_1=P_1+Q_1,F_2=P_2+Q_2.$$
(B1)
Then these operators also satisfy the commutation relations:
$$[F_1,F_2]=0,\qquad [J_3,F_1]=iF_2,\qquad [J_3,F_2]=-iF_1.$$
(B2)
However, we cannot make this addition using the three-by-three matrices for $`P_i`$ and $`Q_i`$ to construct three-by-three matrices for $`F_1`$ and $`F_2`$, because the vector spaces are different for the $`P_i`$ and $`Q_i`$ representations. We can accommodate this difference by creating two different $`z`$ coordinates, one with a contracted $`z`$ and the other with an expanded $`z`$, namely $`(x,y,Rz,z/R)`$. Then the generators become
$$P_1=\left(\begin{array}{cccc}0& 0& 0& i\\ 0& 0& 0& 0\\ 0& 0& 0& 0\\ 0& 0& 0& 0\end{array}\right),P_2=\left(\begin{array}{cccc}0& 0& 0& 0\\ 0& 0& 0& i\\ 0& 0& 0& 0\\ 0& 0& 0& 0\end{array}\right).$$
(B3)
$$Q_1=\left(\begin{array}{cccc}0& 0& 0& 0\\ 0& 0& 0& 0\\ i& 0& 0& 0\\ 0& 0& 0& 0\end{array}\right),Q_2=\left(\begin{array}{cccc}0& 0& 0& 0\\ 0& 0& 0& 0\\ 0& i& 0& 0\\ 0& 0& 0& 0\end{array}\right).$$
(B4)
Then $`F_1`$ and $`F_2`$ will take the form
$$F_1=\left(\begin{array}{cccc}0& 0& 0& i\\ 0& 0& 0& 0\\ i& 0& 0& 0\\ 0& 0& 0& 0\end{array}\right),F_2=\left(\begin{array}{cccc}0& 0& 0& 0\\ 0& 0& 0& i\\ 0& i& 0& 0\\ 0& 0& 0& 0\end{array}\right).$$
(B5)
The rotation generator $`J_3`$ takes the form of Eq.(2). These four-by-four matrices satisfy the E(2)-like commutation relations of Eq.(B2).
Now the $`B`$ matrix of Eq.(A13) can be expanded to
$$B(R)=\left(\begin{array}{cccc}1& 0& 0& 0\\ 0& 1& 0& 0\\ 0& 0& R& 0\\ 0& 0& 0& 1/R\end{array}\right).$$
(B6)
If we make a similarity transformation on the above form using the matrix
$$\left(\begin{array}{cccc}1& 0& 0& 0\\ 0& 1& 0& 0\\ 0& 0& 1/\sqrt{2}& -1/\sqrt{2}\\ 0& 0& 1/\sqrt{2}& 1/\sqrt{2}\end{array}\right),$$
(B7)
which performs a 45-degree rotation of the third and fourth coordinates, then this matrix becomes
$$\left(\begin{array}{cccc}1& 0& 0& 0\\ 0& 1& 0& 0\\ 0& 0& \mathrm{cosh}\eta & \mathrm{sinh}\eta \\ 0& 0& \mathrm{sinh}\eta & \mathrm{cosh}\eta \end{array}\right),$$
(B8)
with $`R=e^\eta `$. This form is the Lorentz boost matrix along the $`z`$ direction. If we start with the set of expanded rotation generators $`J_3`$ of Eq.(2), and perform the same operation as the original Inonu-Wigner contraction given in Eq.(A11), the result is
$$N_1=\frac{1}{R}B^{-1}J_2B,\qquad N_2=-\frac{1}{R}B^{-1}J_1B,$$
(B9)
where $`N_1`$ and $`N_2`$ are given in Eq.(4). The generators $`N_1`$ and $`N_2`$ are the contracted $`J_2`$ and $`J_1`$ respectively in the infinite-momentum/zero-mass limit. |
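The statements of this Appendix lend themselves to a direct numerical check. In the sketch below the four-by-four form of $`J_3`$ is an assumption (the matrix of Eq.(2) embedded into the expanded space); it verifies the $`E(2)`$-like algebra of the $`F_i`$ and the fact that the 45-degree rotation of Eq.(B7) turns $`B(R)`$ into the boost matrix of Eq.(B8):

```python
import numpy as np

c = 1 / np.sqrt(2)
F1 = np.array([[0, 0, 0, 1j], [0, 0, 0, 0], [1j, 0, 0, 0], [0, 0, 0, 0]])
F2 = np.array([[0, 0, 0, 0], [0, 0, 0, 1j], [0, 1j, 0, 0], [0, 0, 0, 0]])
J3 = np.array([[0, -1j, 0, 0], [1j, 0, 0, 0],
               [0, 0, 0, 0], [0, 0, 0, 0]])        # expanded J3 (assumed form)
S = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
              [0, 0, c, -c], [0, 0, c, c]])        # 45-degree rotation, Eq. (B7)

def comm(A, B):
    return A @ B - B @ A

assert np.allclose(comm(F1, F2), 0)                # Eq. (B2)
assert np.allclose(comm(J3, F1), 1j * F2)
assert np.allclose(comm(J3, F2), -1j * F1)

eta = 0.7
B = np.diag([1.0, 1.0, np.exp(eta), np.exp(-eta)])  # Eq. (B6) with R = e^eta
boost = np.identity(4)
boost[2:, 2:] = [[np.cosh(eta), np.sinh(eta)], [np.sinh(eta), np.cosh(eta)]]
assert np.allclose(S @ B @ S.T, boost)              # Eq. (B8)
```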
Optical Bell Measurement by Fock Filtering
## 1 Introduction
Entanglement and entangled states are fundamental concepts of the new field of quantum information . They can be exploited for example in super-dense coding or teleportation , both of which have been demonstrated experimentally. The most important example of entangled states is perhaps given by the four polarization-entangled Bell states involving two photons
$`|\mathrm{\Psi }_\pm \rangle `$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{2}}}\left[|1001\rangle \pm |0110\rangle \right]={\displaystyle \frac{1}{\sqrt{2}}}\left[a_{\parallel }b_{\perp }\pm a_{\perp }b_{\parallel }\right]^{\dagger }|\mathrm{𝟎}\rangle `$ (1)
$`|\mathrm{\Phi }_\pm \rangle `$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{2}}}\left[|1010\rangle \pm |0101\rangle \right]={\displaystyle \frac{1}{\sqrt{2}}}\left[a_{\parallel }b_{\parallel }\pm a_{\perp }b_{\perp }\right]^{\dagger }|\mathrm{𝟎}\rangle ,`$ (2)
where $`a_{\parallel }`$ ($`b_{\parallel }`$) denotes horizontally polarized and $`a_{\perp }`$ ($`b_{\perp }`$) denotes vertically polarized photons in the two possible directions of propagation denoted by $`a`$ and $`b`$ (see also Fig. 1). Indeed, these states have been produced experimentally via the nonlinear process of spontaneous down-conversion . In order to actually exploit entanglement in the manipulation of quantum information, one has to be able to distinguish the different maximally entangled states given above. This ability is crucial for the unambiguous experimental realization, for example, of quantum teleportation , where an unknown quantum state can be exchanged between two parties as long as they share an entangled state and perform the appropriate measurement to discriminate it among the others.
A simple interferometric setup cannot be of help for the discrimination among the four EPR states, as it has been shown recently that a complete Bell measurement cannot be performed using only linear elements. A method to overcome this conclusion has been suggested in Ref. , which, however, requires to embed the state of interest in a larger Hilbert space, and therefore can be applied only in presence of multiple entanglement (entanglement in more than two degrees of freedom).
## 2 Bell discrimination
In this paper we consider the nonlinear interferometric setup depicted in Fig. 1. Our starting point is a reliable source of optical EPR-Bell states, i. e., polarization entangled photon pairs. This is usually a birefringent crystal where type-II parametric down-conversion transforms an incoming pump photon into a pair of correlated ordinary and extraordinary photons. We assume that each pulse (the signal) is prepared in one of the four EPR-Bell states, and we want to unambiguously infer from measurements which one was actually impinging onto the apparatus.
The signal first enters a polarizing beam splitter, which transmits photons with a given polarization (say vertical) and reflects photons with the orthogonal one (say horizontal). The mode transformation of this element is given by (the notation for the field modes refers to Fig. 1 hereafter)
$`(c_{\parallel },c_{\perp },d_{\parallel },d_{\perp })=\widehat{U}_{\mathrm{PBS}}(a_{\parallel },a_{\perp },b_{\parallel },b_{\perp })\widehat{U}_{\mathrm{PBS}}^{\dagger }=(b_{\parallel },a_{\perp },a_{\parallel },b_{\perp }),`$ (3)
and the corresponding Schrödinger evolution of the Bell states is
$`\widehat{U}_{\mathrm{PBS}}(\mathrm{\Phi }_+,\mathrm{\Phi }_{-},\mathrm{\Psi }_+,\mathrm{\Psi }_{-})=(\mathrm{\Phi }_+,\mathrm{\Phi }_{-},\chi _+,\chi _{-}).`$ (4)
In Eq. (4) $`\chi _\pm `$ are superpositions of states with both photons in the same path (arm)
$`|\chi _\pm \rangle `$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{2}}}\left[|1100\rangle \pm |0011\rangle \right]={\displaystyle \frac{1}{\sqrt{2}}}\left[a_{\parallel }a_{\perp }\pm b_{\parallel }b_{\perp }\right]^{\dagger }|\mathrm{𝟎}\rangle .`$ (5)
The two sets of states, $`\chi _\pm `$ and $`\mathrm{\Phi }_\pm `$, can now be discriminated by the number of photons travelling in one arm of the interferometer, which is either zero or two for $`\chi _\pm `$ and (certainly only) one for $`\mathrm{\Phi }_\pm `$. Such a discrimination can be performed by means of a Fock Filter (FF), which is a novel kind of all-optical nonlinear switch . Let us postpone the detailed description of the FF to section 3. For the moment we assume that it switches on when a single photon (of any polarization) is present, and does not switch for zero or more than one photon. As we will see, the FF performs a kind of non-demolition measurement of the photon number, such that coherence is preserved and the state after the measurement is still available for further manipulations. Indeed, the remaining part of the device should be able to distinguish phases, namely to discriminate between $`\chi _+`$ and $`\chi _{-}`$, or between $`\mathrm{\Phi }_+`$ and $`\mathrm{\Phi }_{-}`$. For this purpose, first the polarization of photons in the second arm is rotated by $`\pi /2`$ using a half-wave plate, thus turning $`\mathrm{\Phi }_\pm `$ into $`\mathrm{\Psi }_\pm `$ respectively, while leaving $`\chi _\pm `$ untouched. In fact, the transformation induced by the polarization rotator reads $`\widehat{U}_{\mathrm{PR}}=\widehat{\mathrm{I}}\otimes \widehat{V}_{\mathrm{PR}}`$, where $`\widehat{V}_{\mathrm{PR}}`$ acts only on two modes
$`(d_{\parallel }^{\prime },d_{\perp }^{\prime })=\widehat{V}_{\mathrm{PR}}(d_{\parallel },d_{\perp })\widehat{V}_{\mathrm{PR}}^{\dagger }=(d_{\perp },d_{\parallel }),`$ (6)
and thus
$`\widehat{U}_{\mathrm{PR}}(\chi _+,\chi _{-},\mathrm{\Phi }_+,\mathrm{\Phi }_{-})=(\chi _+,\chi _{-},\mathrm{\Psi }_+,\mathrm{\Psi }_{-}).`$ (7)
The two paths are then recombined at a balanced (non-polarizing) beam splitter, whose action on generic field modes $`x`$ and $`y`$ is described by
$`\widehat{U}_{\mathrm{BS}}(x_{\parallel },x_{\perp },y_{\parallel },y_{\perp })\widehat{U}_{\mathrm{BS}}^{\dagger }={\displaystyle \frac{1}{\sqrt{2}}}(x_{\parallel }+y_{\parallel },x_{\perp }+y_{\perp },x_{\parallel }-y_{\parallel },x_{\perp }-y_{\perp }).`$ (8)
If the transformation (8) is applied to the field modes $`c`$ and $`d^{\prime }`$ we have, using Eqs. (6) and (3),
$`(e_{\parallel },e_{\perp },f_{\parallel },f_{\perp })`$ $`=`$ $`\widehat{U}_{\mathrm{BS}}(c_{\parallel },c_{\perp },d_{\parallel }^{\prime },d_{\perp }^{\prime })\widehat{U}_{\mathrm{BS}}^{\dagger }`$ (9)
$`=`$ $`{\displaystyle \frac{1}{\sqrt{2}}}(c_{\parallel }+d_{\parallel }^{\prime },c_{\perp }+d_{\perp }^{\prime },c_{\parallel }-d_{\parallel }^{\prime },c_{\perp }-d_{\perp }^{\prime })`$
$`=`$ $`{\displaystyle \frac{1}{\sqrt{2}}}(b_{\parallel }+b_{\perp },a_{\perp }+a_{\parallel },b_{\parallel }-b_{\perp },a_{\perp }-a_{\parallel }).`$
In terms of the state just before the BS this corresponds to the following Schrödinger evolution
$`\widehat{U}_{\mathrm{BS}}(\chi _+,\chi _{-},\mathrm{\Psi }_+,\mathrm{\Psi }_{-})=(\chi _+,\mathrm{\Psi }_+,\chi _{-},\mathrm{\Psi }_{-}).`$ (10)
Finally, the photons are measured by single-photon avalanche photo-detectors, where the last stage of the setup is a coincidence circuit (CC). In fact, $`\mathrm{\Psi }_\pm `$ correspond to the presence of photons in both channels, whereas $`\chi _\pm `$ are superpositions with both photons in the same path. In terms of the states before the BS, this means that superpositions with the minus sign ($`\chi _{-}`$ and $`\mathrm{\Psi }_{-}`$) lead to coincident clicks at the photo-detectors (CC ON), whereas superpositions with the plus sign ($`\chi _+`$ and $`\mathrm{\Psi }_+`$) do not switch on the coincidence circuit.
The whole chain of transformations of our setup can be summarized by the following diagram
| $`\mathrm{\Psi }_+`$ | $`\stackrel{\mathrm{PBS}}{\to }`$ | $`\chi _+`$ | \[FF OFF\] | $`\stackrel{PR}{\to }`$ | $`\chi _+`$ | $`\stackrel{BS}{\to }`$ | $`\chi _+`$ | \[CC OFF\] |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $`\mathrm{\Psi }_{-}`$ | $`\stackrel{\mathrm{PBS}}{\to }`$ | $`\chi _{-}`$ | \[FF OFF\] | $`\stackrel{PR}{\to }`$ | $`\chi _{-}`$ | $`\stackrel{BS}{\to }`$ | $`\mathrm{\Psi }_+`$ | \[CC ON\] |
| $`\mathrm{\Phi }_+`$ | $`\stackrel{\mathrm{PBS}}{\to }`$ | $`\mathrm{\Phi }_+`$ | \[FF ON\] | $`\stackrel{PR}{\to }`$ | $`\mathrm{\Psi }_+`$ | $`\stackrel{BS}{\to }`$ | $`\chi _{-}`$ | \[CC OFF\] |
| $`\mathrm{\Phi }_{-}`$ | $`\stackrel{\mathrm{PBS}}{\to }`$ | $`\mathrm{\Phi }_{-}`$ | \[FF ON\] | $`\stackrel{PR}{\to }`$ | $`\mathrm{\Psi }_{-}`$ | $`\stackrel{BS}{\to }`$ | $`\mathrm{\Psi }_{-}`$ | \[CC ON\] |
which illustrates the unambiguous discrimination of the four Bell states.
Our scheme is minimal, as it involves only two measurements, and thus only four possible outcomes, which equals the number of states to be discriminated.
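The chain of transformations above can be traced through explicitly. The following sketch propagates the four Bell states through the PBS, the half-wave plate and the beam splitter, and reads off the Fock-filter and coincidence signatures of the diagram; the mode labels `h`/`v` and the specific arm/sign conventions are illustrative assumptions consistent with Eqs. (3)-(10):

```python
from itertools import product

def apply_map(state, m):
    """Apply a linear map of creation operators, m: mode -> [(mode', coeff)],
    to a two-photon state stored as {(mode1, mode2): amplitude}."""
    out = {}
    for (m1, m2), amp in state.items():
        for (n1, c1), (n2, c2) in product(m.get(m1, [(m1, 1)]), m.get(m2, [(m2, 1)])):
            key = tuple(sorted((n1, n2)))
            out[key] = out.get(key, 0) + amp * c1 * c2
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

s = 2 ** -0.5
# the four Bell states of Eqs. (1)-(2); 'h'/'v' label the two polarizations
bell = {
    "Psi+": {("ah", "bv"): s, ("av", "bh"): s},
    "Psi-": {("ah", "bv"): s, ("av", "bh"): -s},
    "Phi+": {("ah", "bh"): s, ("av", "bv"): s},
    "Phi-": {("ah", "bh"): s, ("av", "bv"): -s},
}
pbs = {"av": [("cv", 1)], "bv": [("dv", 1)],     # transmit vertical,
       "ah": [("dh", 1)], "bh": [("ch", 1)]}     # reflect horizontal
hwp = {"dh": [("dv", 1)], "dv": [("dh", 1)]}     # polarization rotation on arm d
bs = {"ch": [("eh", s), ("fh", s)], "cv": [("ev", s), ("fv", s)],
      "dh": [("eh", s), ("fh", -s)], "dv": [("ev", s), ("fv", -s)]}

signature = {}
for name, st in bell.items():
    mid = apply_map(st, pbs)
    ff = all(sum(m.startswith("c") for m in k) == 1 for k in mid)   # FF: one photon in arm c
    fin = apply_map(apply_map(mid, hwp), bs)
    cc = all(sum(m.startswith("e") for m in k) == 1 for k in fin)   # one photon in each output arm
    signature[name] = ("FF ON" if ff else "FF OFF", "CC ON" if cc else "CC OFF")
```

The four signatures come out pairwise distinct, reproducing the discrimination table.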
## 3 Fock Filtering
The Fock Filter is schematically depicted in Fig. 2. The signal under examination is coupled to a high-Q ring cavity by a nonlinear crystal with relevant third-order susceptibility $`\chi \propto \chi ^{(3)}`$, which imposes cross-Kerr phase modulation. We assume that the coupling is independent of polarization, such that the evolution operator of the Kerr medium is given by
$`U_K=\mathrm{exp}\left\{ig(a_{\parallel }^{\dagger }a_{\parallel }+a_{\perp }^{\dagger }a_{\perp })c^{\dagger }c\right\},`$ (11)
where $`g=\chi t`$ is the coupling constant, the $`a`$’s are the two polarization modes of the signal, and $`c`$ describes the cavity mode. Eq. (11) states that, as a result of the Kerr interaction, the cavity mode is subjected to a phase-shift proportional to the number of photons passing through the arm of the interferometer. The FF is complemented by a further, tunable, phase-shift $`\psi `$, and operates with the ring cavity fed by a strong coherent state $`|z\rangle `$, i.e. a weak laser beam provided by a stable source (the second port of the cavity is left unexcited). The input state of the whole device can be written as $`|\phi _{\mathrm{in}}\rangle =|z\rangle |0\rangle |\nu \rangle `$, where $`|\nu \rangle =\sum _{n_{\parallel }n_{\perp }}\nu _{n_{\parallel }n_{\perp }}|n_{\parallel }\rangle |n_{\perp }\rangle `$ denotes a generic preparation of the signal mode. The output state is given by
$`|\phi _{\mathrm{out}}\rangle =\underset{n_{\parallel }n_{\perp }}{\sum }\nu _{n_{\parallel }n_{\perp }}|\sigma _{n_{\parallel }+n_{\perp }}z\rangle |\kappa _{n_{\parallel }+n_{\perp }}z\rangle |n_{\parallel }\rangle |n_{\perp }\rangle ,`$ (12)
where
$`\sigma _n={\displaystyle \frac{\tau }{1-[1-\tau ]e^{i\varphi _n}}}\qquad \kappa _n={\displaystyle \frac{\sqrt{1-\tau }(e^{i\varphi _n}-1)}{1-[1-\tau ]e^{i\varphi _n}}},`$ (13)
are the overall photon-number-dependent transmitivity and reflectivity of the cavity. In Eq. (13) $`\varphi _n=\psi -gn`$, whereas $`\tau `$ denotes the transmitivity of the cavity beam splitters $`BS_1`$ and $`BS_2`$. For a good cavity (i.e. a cavity with a large quality factor) $`\tau `$ should be quite small; usual values achievable in quantum optical labs are about $`\tau \simeq 10^{-4}`$–$`10^{-6}`$ (losses, due to absorption processes, are about $`10^{-7}`$). At the cavity output one mode is ignored, whereas the other one is monitored by an avalanche single-photon photo-detector, which checks whether or not photons are present. This kind of ON/OFF measurement is described by a two-valued POM
$`\widehat{\mathrm{\Pi }}_{\mathrm{𝖮𝖥𝖥}}=\underset{k}{\sum }(1-\eta )^k|k\rangle \langle k|\qquad \widehat{\mathrm{\Pi }}_{\mathrm{𝖮𝖭}}=\widehat{1}-\widehat{\mathrm{\Pi }}_{\mathrm{𝖮𝖥𝖥}},`$ (14)
$`\eta `$ being the quantum efficiency of the photo-detector. If the state travelling through the interferometer is either $`\mathrm{\Phi }_+`$ or $`\mathrm{\Phi }_{-}`$ we have $`|\phi _{\mathrm{in}}\rangle =|z\rangle |0\rangle |\mathrm{\Phi }_\pm \rangle `$ and thus
$`|\phi _{\mathrm{out}}\rangle ={\displaystyle \frac{1}{\sqrt{2}}}\left[|\sigma _1z\rangle |\kappa _1z\rangle |1010\rangle \pm |\sigma _1z\rangle |\kappa _1z\rangle |0101\rangle \right]=|\sigma _1z\rangle |\kappa _1z\rangle |\mathrm{\Phi }_\pm \rangle .`$ (15)
The probability of having a click is given by
$`P(\mathrm{𝖮𝖭}|\mathrm{\Phi }_\pm )=\text{Tr}\left\{|\phi _{\mathrm{out}}\rangle \langle \phi _{\mathrm{out}}|\widehat{\mathrm{\Pi }}_{\mathrm{𝖮𝖭}}\right\}=1-\mathrm{exp}\left(-\eta |\sigma _1|^2|z|^2\right),`$ (16)
whereas the conditional output signal state, after a click has been actually registered, turns out to be
$`\widehat{\nu }_{\mathrm{out}}(\mathrm{𝖮𝖭}|\mathrm{\Phi }_\pm )={\displaystyle \frac{1}{P(\mathrm{𝖮𝖭}|\mathrm{\Phi }_\pm )}}\text{Tr}_{cavity}\left\{|\phi _{\mathrm{out}}\rangle \langle \phi _{\mathrm{out}}|\widehat{\mathrm{\Pi }}_{\mathrm{𝖮𝖭}}\right\}=|\mathrm{\Phi }_\pm \rangle \langle \mathrm{\Phi }_\pm |.`$ (17)
By setting $`\psi =g`$ and for $`\eta |z|^2\gg 1`$ we have $`P(\mathrm{𝖮𝖭}|\mathrm{\Phi }_\pm )\simeq 1`$. On the other hand, if the signal is either $`\chi _+`$ or $`\chi _{-}`$ we have $`|\phi _{\mathrm{in}}\rangle =|z\rangle |0\rangle |\chi _\pm \rangle `$ and
$`|\phi _{\mathrm{out}}\rangle `$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{2}}}\left[|\sigma _2z\rangle |\kappa _2z\rangle |1100\rangle \pm |\sigma _0z\rangle |\kappa _0z\rangle |0011\rangle \right]`$ (18)
$`=`$ $`{\displaystyle \frac{1}{\sqrt{2}}}\left[|\sigma z\rangle |\kappa z\rangle |1100\rangle \pm |\sigma ^{*}z\rangle |\kappa ^{*}z\rangle |0011\rangle \right],`$
where the second equality comes from the fact that by setting $`\psi =g`$ we have $`\sigma _0^{*}=\sigma _2\equiv \sigma `$. Finally, from Eqs. (14) and (18) we have
$`P(\mathrm{𝖮𝖥𝖥}|\chi _\pm )`$ $`=`$ $`\mathrm{exp}\left(-\eta |\sigma |^2|z|^2\right)`$ (19)
$`\widehat{\nu }_{\mathrm{out}}(\mathrm{𝖮𝖥𝖥}|\chi _\pm )`$ $`=`$ $`|\chi _\pm \rangle \langle \chi _\pm |.`$ (20)
For small $`g`$ and $`\tau `$ we have $`|\sigma |^2\simeq (\tau /g)^2`$. Therefore, for $`\tau \ll g`$ we have $`P(\mathrm{𝖮𝖥𝖥}|\chi _\pm )\simeq 1`$. This means that a click at the Fock Filter unambiguously implies that either $`\mathrm{\Phi }_+`$ or $`\mathrm{\Phi }_{-}`$ was travelling through the interferometer, whereas having no click indicates the passage of either $`\chi _+`$ or $`\chi _{-}`$. Remarkably, the measurement is quite robust against detector inefficiency (as only the product $`\eta |z|^2`$ is relevant) and does not destroy coherence. For both of the possible outcomes (either ON or OFF) the state after the measurement remains unaffected, and is still available for further manipulations.
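The orders of magnitude involved are easy to reproduce. The sketch below evaluates the cavity response of Eq. (13), with $`\varphi _n=\psi -gn`$, for illustrative (assumed) parameter values, confirming that the cavity transmits fully for one photon and almost not at all for zero or two:

```python
import cmath
import math

def sigma(n, g, psi, tau):
    """Cavity transmitivity of Eq. (13), with phi_n = psi - g*n."""
    phi = psi - g * n
    return tau / (1 - (1 - tau) * cmath.exp(1j * phi))

g, tau = 1e-2, 1e-4   # illustrative: Kerr phase per photon and mirror transmitivity, tau << g
psi = g               # tune the auxiliary phase so the cavity is resonant only for n = 1
eta_z2 = 20.0         # the product eta*|z|^2 of the probe beam

p_click_one = 1 - math.exp(-eta_z2 * abs(sigma(1, g, psi, tau)) ** 2)   # Eq. (16), close to 1
p_silent_chi = math.exp(-eta_z2 * abs(sigma(0, g, psi, tau)) ** 2)      # Eq. (19), close to 1
```

With these numbers $`|\sigma _1|=1`$ while $`|\sigma _0|^2\simeq (\tau /g)^2=10^{-4}`$, so both conditional probabilities are essentially unity.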
## 4 Conclusions
In conclusion, we have described an interferometric setup to perform a complete optical Bell measurement. It consists of a Mach-Zehnder interferometer whose first beam splitter is a polarizing one and whose second is an ordinary one, and in which a non-demolition photon-number measurement is performed inside the interferometer by the Fock-filtering technique. The resulting scheme is robust against detector inefficiency, and provides a reliable method to unambiguously discriminate among the four polarization-entangled EPR-Bell photon pairs.
We would like to thank D. Bouwmeester for useful comments. This work was supported by CNR and NATO through the Advanced Fellowship program 1998, the EPSRC, The Leverhulme Trust, the INFM, the Brazilian agency Conselho Nacional de Desenvolvimento Cientifico e Tecnologico (CNPQ), the ORS Awards Scheme, the Inlaks Foundation and the European Union. |
DESY 99-170
Least-squares optimized polynomials for fermion simulations
Talk given at the workshop on Numerical Challenges in Lattice QCD, August 1999, Wuppertal University; to appear in the proceedings.
## 1 Introduction
In popular Monte Carlo simulation algorithms for QCD and other similar quantum field theories the main difficulty is the evaluation of the determinant of the fermion action matrix. This can be achieved by stochastic procedures with the help of auxiliary bosonic “pseudofermion” fields.
In the two-step multi-bosonic (TSMB) algorithm an approximation of the fermion determinant is achieved by the pseudofermion fields corresponding to a polynomial approximation of some negative power $`x^{-\alpha }`$ of the fermion matrix . The auxiliary bosonic fields are updated according to the multi-bosonic action . The error of the polynomial approximation is corrected in a global accept-reject decision by using better polynomial approximations. Sometimes a reweighting of gauge configurations in the evaluation of expectation values is also necessary. This can also be performed by high order polynomials.
The polynomials used in the TSMB algorithm have to approximate the function $`x^{-\alpha }/\overline{P}(x)`$ in some non-negative interval $`x\in [ϵ,\lambda ],\mathrm{\hspace{0.33em}}0\le ϵ<\lambda `$. Here $`\overline{P}(x)`$ is a known polynomial, typically another cruder approximation of $`x^{-\alpha }`$. The approximation scheme and optimization procedure can be chosen differently. The least-squares optimization is an efficient and flexible possibility. (For other approximation schemes see .)
In this review the basic relations for least-squares optimized polynomials are presented as introduced in . Particular attention is paid to a recurrence scheme which can be applied for determining the necessary high order polynomials and for evaluating them numerically. The details of the TSMB algorithm will not be considered. For a comprehensive summary and references see . For experience on the application of TSMB in a recent large scale numerical simulation see .
## 2 Basic relations
Least-squares optimization provides a general and flexible framework for obtaining the necessary optimized polynomials in multi-bosonic fermion algorithms. Here we introduce the basic formulae in the way it has been done in .
We want to approximate the real function $`f(x)`$ in the interval $`x\in [ϵ,\lambda ]`$ by a polynomial $`P_n(x)`$ of degree $`n`$. The aim is to minimize the deviation norm
$$\delta _n\equiv \left\{N_{ϵ,\lambda }^{-1}\int _ϵ^\lambda 𝑑x\,w(x)^2\left[f(x)-P_n(x)\right]^2\right\}^{\frac{1}{2}}.$$
(1)
Here $`w(x)`$ is an arbitrary real weight function and the overall normalization factor $`N_{ϵ,\lambda }`$ can be chosen by convenience, for instance, as
$$N_{ϵ,\lambda }\equiv \int _ϵ^\lambda 𝑑x\,w(x)^2f(x)^2.$$
(2)
A typical example of functions to be approximated is $`f(x)=x^{-\alpha }/\overline{P}(x)`$ with $`\alpha >0`$ and some polynomial $`\overline{P}(x)`$. The interval is usually such that $`0\le ϵ<\lambda `$. For optimizing the relative deviation one takes a weight function $`w(x)=f(x)^{-1}`$.
$`\delta _n^2`$ is a quadratic form in the coefficients of the polynomial which can be straightforwardly minimized. Let us now consider, for simplicity, only the relative deviation from the simple function $`f(x)=x^{-\alpha }=w(x)^{-1}`$. Let us denote the polynomial corresponding to the minimum of $`\delta _n`$ by
$$P_n(\alpha ;ϵ,\lambda ;x)\equiv \underset{\nu =0}{\overset{n}{\sum }}c_{n\nu }(\alpha ;ϵ,\lambda )x^{n-\nu }.$$
(3)
Performing the integral in $`\delta _n^2`$ term by term we obtain
$$\delta _n^2=1-2\underset{\nu =0}{\overset{n}{\sum }}c_\nu V_\nu ^{(\alpha )}+\underset{\nu _1,\nu _2=0}{\overset{n}{\sum }}c_{\nu _1}M_{\nu _1,\nu _2}^{(\alpha )}c_{\nu _2},$$
(4)
where
$$V_\nu ^{(\alpha )}=\frac{\lambda ^{1+\alpha +n-\nu }-ϵ^{1+\alpha +n-\nu }}{(\lambda -ϵ)(1+\alpha +n-\nu )},$$
$$M_{\nu _1,\nu _2}^{(\alpha )}=\frac{\lambda ^{1+2\alpha +2n-\nu _1-\nu _2}-ϵ^{1+2\alpha +2n-\nu _1-\nu _2}}{(\lambda -ϵ)(1+2\alpha +2n-\nu _1-\nu _2)}.$$
(5)
The coefficients of the polynomial corresponding to the minimum of $`\delta _n^2`$, or of $`\delta _n`$, are
$$c_\nu \equiv c_{n\nu }(\alpha ;ϵ,\lambda )=\underset{\nu _1=0}{\overset{n}{\sum }}M_{\nu \nu _1}^{(\alpha )-1}V_{\nu _1}^{(\alpha )}.$$
(6)
The value at the minimum is
$$\delta _n^2\equiv \delta _n^2(\alpha ;ϵ,\lambda )=1-\underset{\nu _1,\nu _2=0}{\overset{n}{\sum }}V_{\nu _1}^{(\alpha )}M_{\nu _1,\nu _2}^{(\alpha )-1}V_{\nu _2}^{(\alpha )}.$$
(7)
The solution of the quadratic optimization in (6)-(7) gives in principle a simple way to find the required least-squares optimized polynomials. The practical problem is, however, that the matrix $`M^{(\alpha )}`$ is not well conditioned because it has eigenvalues with very different magnitudes. In order to illustrate this let us consider the special case $`(\alpha =1,\lambda =1,ϵ=0)`$ with $`n=10`$. In this case the eigenvalues are:
$`0.4435021205e{-}14,\ 0.1045947635e{-}11,\ 0.1143819915e{-}9,`$
$`0.7698917100e{-}8,\ 0.3571195735e{-}6,\ 0.1211873623e{-}4,`$
$`0.3120413130e{-}3,\ 0.6249495675e{-}2,\ 0.9849331094e{-}1,`$
$`1.075807246.`$ (8)
A numerical investigation shows that, in general, the ratio of maximal to minimal eigenvalues is of the order of $`𝒪(10^{1.5n})`$. It is obvious from the structure of $`M^{(\alpha )}`$ in (5) that a rescaling of the interval $`[ϵ,\lambda ]`$ does not help. The large differences in magnitude of the eigenvalues implies through (6) large differences of magnitude in the coefficients $`c_{n\nu }`$ and therefore the numerical evaluation of the optimal polynomial $`P_n(x)`$ for large $`n`$ is non-trivial.
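Despite the ill-conditioning, the quadratic optimization (5)-(6) can be solved directly in double precision for moderate degrees. A minimal sketch (the parameter values are illustrative):

```python
import numpy as np

def lsq_coeffs(alpha, eps, lam, n):
    """Coefficients c_{n,nu} of Eq. (6) for the relative deviation from x^(-alpha),
    found by directly solving the linear system with M and V of Eq. (5)."""
    nu = np.arange(n + 1)
    ev = 1 + alpha + n - nu
    V = (lam ** ev - eps ** ev) / ((lam - eps) * ev)
    em = 1 + 2 * alpha + 2 * n - nu[:, None] - nu[None, :]
    M = (lam ** em - eps ** em) / ((lam - eps) * em)
    return np.linalg.solve(M, V), np.linalg.cond(M)

c, cond = lsq_coeffs(1.0, 0.05, 1.0, 8)    # alpha=1 on [0.05, 1], degree 8
x = np.linspace(0.05, 1.0, 25)
rel = np.abs(x * np.polyval(c, x) - 1)     # relative deviation R(x) of Eq. (29)
```

Here `np.polyval` matches the ordering of Eq. (3), with `c[0]` multiplying $`x^n`$; the condition number of $`M`$ comes out very large even at this modest degree, which is why the recurrence scheme below is preferable for high orders.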
Let us now return to the general case with arbitrary function $`f(x)`$ and weight $`w(x)`$. It is very useful to introduce orthogonal polynomials $`\mathrm{\Phi }_\mu (x)(\mu =0,1,2,\mathrm{})`$ satisfying
$$\int _ϵ^\lambda 𝑑x\,w(x)^2\mathrm{\Phi }_\mu (x)\mathrm{\Phi }_\nu (x)=\delta _{\mu \nu }q_\nu .$$
(9)
and expand the polynomial $`P_n(x)`$ in terms of them:
$$P_n(x)=\underset{\nu =0}{\overset{n}{\sum }}d_{n\nu }\mathrm{\Phi }_\nu (x).$$
(10)
Besides the normalization factor $`q_\nu `$ let us also introduce, for later purposes, the integrals $`p_\nu `$ and $`s_\nu `$ by
$`q_\nu `$ $`\equiv `$ $`{\displaystyle \int _ϵ^\lambda }𝑑x\,w(x)^2\mathrm{\Phi }_\nu (x)^2,`$
$`p_\nu `$ $`\equiv `$ $`{\displaystyle \int _ϵ^\lambda }𝑑x\,w(x)^2\mathrm{\Phi }_\nu (x)^2x,`$
$`s_\nu `$ $`\equiv `$ $`{\displaystyle \int _ϵ^\lambda }𝑑x\,w(x)^2x^\nu .`$ (11)
It can be easily shown that the expansion coefficients $`d_{n\nu }`$ minimizing $`\delta _n`$ are independent of $`n`$ and are given by
$$d_{n\nu }\equiv d_\nu =\frac{b_\nu }{q_\nu },$$
(12)
where
$$b_\nu \equiv \int _ϵ^\lambda 𝑑x\,w(x)^2f(x)\mathrm{\Phi }_\nu (x).$$
(13)
The minimal value of $`\delta _n^2`$ is
$$\delta _n^2=1-N_{ϵ,\lambda }^{-1}\underset{\nu =0}{\overset{n}{\sum }}d_\nu b_\nu .$$
(14)
Rescaling the variable $`x`$ by $`x^{\prime }=\rho x`$ allows for considering only standard intervals, say $`[ϵ/\lambda ,1]`$. The scaling properties of the optimized polynomials can be easily obtained from the definitions. Let us now again consider the simple function $`f(x)=x^{-\alpha }`$ and the relative deviation with $`w(x)=x^{\alpha }`$, for which the rescaling relations are:
$`\delta _n^2(\alpha ;ϵ\rho ,\lambda \rho )`$ $`=`$ $`\delta _n^2(\alpha ;ϵ,\lambda ),`$
$`P_n(\alpha ;ϵ\rho ,\lambda \rho ;x)`$ $`=`$ $`\rho ^{-\alpha }P_n(\alpha ;ϵ,\lambda ;x/\rho ),`$
$`c_{n\nu }(\alpha ;ϵ\rho ,\lambda \rho )`$ $`=`$ $`\rho ^{\nu -n-\alpha }c_{n\nu }(\alpha ;ϵ,\lambda ).`$ (15)
In applications to multi-bosonic algorithms for fermions the decomposition of the optimized polynomials as a product of root-factors is needed. This can be written as
$$P_n(\alpha ;ϵ,\lambda ;x)=c_{n0}(\alpha ;ϵ,\lambda )\underset{j=1}{\overset{n}{\prod }}[x-r_{nj}(\alpha ;ϵ,\lambda )].$$
(16)
The rescaling properties here are:
$`c_{n0}(\alpha ;ϵ\rho ,\lambda \rho )`$ $`=`$ $`\rho ^{-n-\alpha }c_{n0}(\alpha ;ϵ,\lambda ),`$
$`r_{nj}(\alpha ;ϵ\rho ,\lambda \rho )`$ $`=`$ $`\rho r_{nj}(\alpha ;ϵ,\lambda ).`$ (17)
The root-factorized form (16) can also be used for the numerical evaluation of the polynomials with matrix arguments if a suitable optimization of the ordering of roots is performed .
The above orthogonal polynomials satisfy three-term recurrence relations which are very useful for numerical evaluation. In fact, at large $`n`$ the recursive evaluation of the polynomials is numerically more stable than the evaluation with root factors. For general $`f(x)`$ and $`w(x)`$, the first two orthogonal polynomials with $`\mu =0,1`$ are given by
$$\mathrm{\Phi }_0(x)=1,\qquad \mathrm{\Phi }_1(x)=x-\frac{s_1}{s_0}.$$
(18)
The higher order polynomials $`\mathrm{\Phi }_\mu (x)`$ for $`\mu =2,3,\mathrm{}`$ can be obtained from the recurrence relation
$$\mathrm{\Phi }_{\mu +1}(x)=(x+\beta _\mu )\mathrm{\Phi }_\mu (x)+\gamma _{\mu -1}\mathrm{\Phi }_{\mu -1}(x),\qquad (\mu =1,2,\dots ),$$
(19)
where the recurrence coefficients are given by
$$\beta _\mu =-\frac{p_\mu }{q_\mu },\qquad \gamma _{\mu -1}=-\frac{q_\mu }{q_{\mu -1}}.$$
(20)
Defining the polynomial coefficients $`f_{\mu \nu }\ (0\le \nu \le \mu )`$ by
$$\mathrm{\Phi }_\mu (x)=\underset{\nu =0}{\overset{\mu }{\sum }}f_{\mu \nu }x^{\mu -\nu }$$
(21)
the above recurrence relations imply the normalization convention
$$f_{\mu 0}=1,(\mu =0,1,2,\mathrm{}).$$
(22)
The rescaling relations for the orthogonal polynomials easily follow from the definitions. For the simple function $`f(x)=x^{-\alpha }`$ and the relative deviation with $`w(x)=x^{\alpha }`$ we have
$$\mathrm{\Phi }_\mu (\alpha ;\rho ϵ,\rho \lambda ;x)=\rho ^\mu \mathrm{\Phi }_\mu (\alpha ;ϵ,\lambda ;x/\rho ).$$
(23)
For the quantities introduced in (11) this implies
$`q_\nu (\alpha ;\rho ϵ,\rho \lambda )`$ $`=`$ $`\rho ^{2\alpha +1+2\nu }q_\nu (\alpha ;ϵ,\lambda ),`$
$`p_\nu (\alpha ;\rho ϵ,\rho \lambda )`$ $`=`$ $`\rho ^{2\alpha +2+2\nu }p_\nu (\alpha ;ϵ,\lambda ),`$
$`s_\nu (\alpha ;\rho ϵ,\rho \lambda )`$ $`=`$ $`\rho ^{2\alpha +1+\nu }s_\nu (\alpha ;ϵ,\lambda ).`$ (24)
For the expansion coefficients defined in (12)-(13) one obtains
$`b_\nu (\alpha ;\rho ϵ,\rho \lambda )`$ $`=`$ $`\rho ^{\alpha +1+\nu }b_\nu (\alpha ;ϵ,\lambda ),`$
$`d_\nu (\alpha ;\rho ϵ,\rho \lambda )`$ $`=`$ $`\rho ^{-\alpha -\nu }d_\nu (\alpha ;ϵ,\lambda ),`$ (25)
and the recurrence coefficients in (19)-(20) satisfy
$`\beta _\mu (\alpha ;\rho ϵ,\rho \lambda )`$ $`=`$ $`\rho \beta _\mu (\alpha ;ϵ,\lambda ),`$
$`\gamma _{\mu -1}(\alpha ;\rho ϵ,\rho \lambda )`$ $`=`$ $`\rho ^2\gamma _{\mu -1}(\alpha ;ϵ,\lambda ).`$ (26)
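The rescaling relations (26) can be checked exactly with rational arithmetic, at least for the lowest coefficients. The sketch below is an illustration of mine (the helper names and the choice $`\alpha =1`$, i.e. relative-deviation approximation of $`x^1`$, are assumptions chosen for checkability): the moments are then $`s_\nu =(\lambda ^k-ϵ^k)/k`$ with $`k=2\alpha +1+\nu `$, and one verifies $`\beta _0\rho \beta _0`$ and $`\gamma _0\rho ^2\gamma _0`$ under $`[ϵ,\lambda ][\rho ϵ,\rho \lambda ]`$.

```python
from fractions import Fraction as F

def s(nu, eps, lam, alpha=1):
    # moment of w(x)^2 = x^{2 alpha}: s_nu = (lam^k - eps^k)/k, k = 2*alpha+1+nu
    k = 2 * alpha + 1 + nu
    return (lam**k - eps**k) / F(k)

def beta0(eps, lam):
    # beta_0 = -p_0/q_0 = -s_1/s_0, cf. eqs. (18) and (20)
    return -s(1, eps, lam) / s(0, eps, lam)

def gamma0(eps, lam):
    # gamma_0 = -q_1/q_0 with q_1 = s_2 - s_1^2/s_0 (from Phi_1 = x - s_1/s_0)
    q0 = s(0, eps, lam)
    q1 = s(2, eps, lam) - s(1, eps, lam) ** 2 / q0
    return -q1 / q0

eps, lam, rho = F(1, 100), F(1), F(3)
lhs_beta, rhs_beta = beta0(rho * eps, rho * lam), rho * beta0(eps, lam)
lhs_gamma, rhs_gamma = gamma0(rho * eps, rho * lam), rho**2 * gamma0(eps, lam)
```

Because everything is a ratio of exact rationals, the equalities hold identically, not merely to rounding accuracy.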
For general intervals $`[ϵ,\lambda ]`$ and/or functions $`f(x)=x^{-\alpha }/\overline{P}(x)`$ the orthogonal polynomials and expansion coefficients have to be determined numerically. In some special cases, however, the polynomials can be related to some well-known ones. An example is the weight factor
$$w^{(\rho ,\sigma )}(x)^2=(x-ϵ)^\rho (\lambda -x)^\sigma .$$
(27)
Taking, for instance, $`\rho =2\alpha ,\sigma =0`$ this weight is similar to the one for relative deviation from the function $`f(x)=x^{-\alpha }`$, which would be just $`x^{2\alpha }`$. In fact, for $`ϵ=0`$ these are exactly the same and for small $`ϵ`$ the difference is negligible. The corresponding orthogonal polynomials are simply related to the Jacobi polynomials , namely
$$\mathrm{\Phi }_\nu ^{(\rho ,\sigma )}(x)=(\lambda -ϵ)^\nu \nu !\frac{\mathrm{\Gamma }(\rho +\sigma +\nu +1)}{\mathrm{\Gamma }(\rho +\sigma +2\nu +1)}P_\nu ^{(\sigma ,\rho )}\left(\frac{2x-\lambda -ϵ}{\lambda -ϵ}\right).$$
(28)
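A quick consistency check of (28) is possible in the simplest case $`\rho =\sigma =0`$ (flat weight on $`[ϵ,\lambda ]=[1,1]`$): the prefactor reduces to $`2^\nu (\nu !)^2/(2\nu )!`$, $`P_\nu ^{(0,0)}`$ is the ordinary Legendre polynomial, and the right-hand side must therefore be the monic Legendre polynomial, in line with the normalization (22). The sketch below is my own illustration, hard-coding the explicitly known $`P_3`$:

```python
from math import factorial

def legendre_P3(u):
    # ordinary Legendre polynomial P_3(u) = (5 u^3 - 3 u)/2
    return (5 * u**3 - 3 * u) / 2.0

nu, eps, lam = 3, -1.0, 1.0
# prefactor of eq. (28) with rho = sigma = 0:
# (lam - eps)^nu * nu! * Gamma(nu+1) / Gamma(2 nu + 1)
pref = (lam - eps) ** nu * factorial(nu) * factorial(nu) / factorial(2 * nu)
x = 0.7
u = (2 * x - lam - eps) / (lam - eps)   # argument of the Jacobi polynomial
rhs = pref * legendre_P3(u)
monic = x**3 - 0.6 * x                  # monic Legendre polynomial at x
```

With these inputs `rhs` and `monic` agree to machine precision, as the normalization $`f_{\mu 0}=1`$ requires.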
Comparing different approximations with different $`(\rho ,\sigma )`$ the best choice is usually $`\rho =2\alpha ,\sigma =0`$ which corresponds to optimizing the relative deviation (see the appendix of ).
For large condition numbers $`\lambda /ϵ`$ least-squares optimization is much better than the Chebyshev approximation used for the approximation of $`x^{-1}`$ in . The Chebyshev approximation minimizes the maximum of the relative deviation
$$R(x)\equiv xP(x)-1.$$
(29)
For the deviation norm
$$\delta _{max}\equiv \underset{x\in [ϵ,\lambda ]}{\mathrm{max}}|R(x)|$$
(30)
the least-squares approximation is slightly worse than the Chebyshev approximation. An example is shown by fig. 1. In the left lower corner the Chebyshev approximation has $`Rc(0.0002)=0.968`$ compared to $`Ro(0.0002)=0.991`$ for the least-squares optimization. For smaller condition numbers the Chebyshev approximation is not as bad as shown by fig. 1. Nevertheless, in QCD simulations in sufficiently large volumes the condition number is of the order of the inverse light quark mass squared in lattice units, which can be of order $`10^6`$–$`10^7`$.
Figure 1 also shows that the least-squares optimization is quite good in the minimax norm in (30), too. It can be proven that
$$|Ro(ϵ)|=\underset{x\in [ϵ,\lambda ]}{\mathrm{max}}|Ro(x)|$$
(31)
hence the minimax norm can also be easily obtained from
$$\delta _{max}^{(o)}=\underset{x\in [ϵ,\lambda ]}{\mathrm{max}}|Ro(x)|=|Ro(ϵ)|.$$
(32)
Therefore the least-squares optimization is also well suited for controlling the minimax norm, if for some reason it is required.
In QCD simulations the inverse power to be approximated ($`\alpha `$) is related to the number of Dirac fermion flavours: $`\alpha =N_f/2`$. If only $`u`$\- and $`d`$-quarks are considered we have $`N_f=2`$ and the function to be approximated is $`x^{-1}`$. The dependence of the (squared) least-squares norm in (1) on the polynomial order $`n`$ is shown by fig. 2 for different values of the condition number $`\lambda /ϵ`$. The dependence on $`\alpha =N_f/2`$ is illustrated by fig. 3.
Another possible application of least-squares optimized polynomials is the numerical evaluation of the zero mass lattice action proposed by Neuberger . If one takes, for instance, the weight factor in (27) corresponding to the relative deviation, then the function $`x^{-1/2}`$ has to be expanded in the Jacobi polynomials $`P^{(1,0)}`$.
## 3 Recurrence scheme
The expansion in orthogonal polynomials is very useful because it allows for a numerically stable evaluation of the least-squares optimized polynomials by the recurrence relation (19). The orthogonal polynomials themselves can also be determined recursively.
A recurrence scheme for obtaining the recurrence coefficients $`\beta _\mu ,\gamma _{\mu -1}`$ and expansion coefficients $`d_\nu =b_\nu /q_\nu `$ has been given in . In order to obtain $`q_\nu ,p_\nu `$ contained in (20) one can use the relations
$$q_\mu =\sum _{\nu =0}^{\mu }f_{\mu \nu }s_{2\mu -\nu },\qquad p_\mu =\sum _{\nu =0}^{\mu }f_{\mu \nu }\left(s_{2\mu +1-\nu }+f_{\mu 1}s_{2\mu -\nu }\right).$$
(33)
The coefficients themselves can be calculated from $`f_{11}=-s_1/s_0`$ and (19) which gives
$`f_{\mu +1,1}`$ $`=`$ $`f_{\mu ,1}+\beta _\mu ,`$
$`f_{\mu +1,2}`$ $`=`$ $`f_{\mu ,2}+\beta _\mu f_{\mu ,1}+\gamma _{\mu -1},`$
$`f_{\mu +1,3}`$ $`=`$ $`f_{\mu ,3}+\beta _\mu f_{\mu ,2}+\gamma _{\mu -1}f_{\mu -1,1},`$
$`\dots `$
$`f_{\mu +1,\mu }`$ $`=`$ $`f_{\mu ,\mu }+\beta _\mu f_{\mu ,\mu -1}+\gamma _{\mu -1}f_{\mu -1,\mu -2},`$
$`f_{\mu +1,\mu +1}`$ $`=`$ $`\beta _\mu f_{\mu ,\mu }+\gamma _{\mu -1}f_{\mu -1,\mu -1}.`$ (34)
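The coefficient recurrence (34) translates directly into code. The following sketch is a helper of my own (with $`f_{\mu \nu }=0`$ understood for $`\nu >\mu `$); as an assumed test case, not from the text, it is fed the monic Legendre recurrence coefficients ($`\beta _\mu =0`$, $`\gamma _0=1/3`$, $`\gamma _1=4/15`$) and reproduces $`x^3(3/5)x`$:

```python
from fractions import Fraction as F

def next_coeffs(f_mu, f_mum1, b, g):
    """One step of eq. (34): the coefficient list [f_{mu+1,0},...,f_{mu+1,mu+1}]
    of Phi_{mu+1} (highest power first, f_{mu+1,0} = 1) from the coefficient
    lists of Phi_mu and Phi_{mu-1}; out-of-range entries count as zero."""
    mu = len(f_mu) - 1
    out = [F(1)]
    for k in range(1, mu + 2):
        v = (f_mu[k] if k <= mu else F(0)) + b * f_mu[k - 1]
        if k >= 2 and k - 2 < len(f_mum1):
            v += g * f_mum1[k - 2]
        out.append(v)
    return out

# assumed monic Legendre test case: all beta_mu = 0
phi0 = [F(1)]
phi1 = next_coeffs(phi0, [], F(0), F(0))         # -> [1, 0], i.e. x
phi2 = next_coeffs(phi1, phi0, F(0), F(-1, 3))   # -> x^2 - 1/3
phi3 = next_coeffs(phi2, phi1, F(0), F(-4, 15))  # -> x^3 - (3/5) x
```

Exact rationals are used so that the comparison with the known coefficients is an identity rather than a floating-point check.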
The orthogonal polynomial and recurrence coefficients are recursively determined by (20) and (33)-(34). The expansion coefficients for the optimized polynomial $`P_n(x)`$ can be obtained from
$$b_\mu =\sum _{\nu =0}^{\mu }f_{\mu \nu }\int _ϵ^\lambda 𝑑xw(x)^2f(x)x^{\mu -\nu }.$$
(35)
The ingredients needed for this recursion are the basic integrals $`s_\nu `$ defined in (2) and
$$t_\nu \equiv \int _ϵ^\lambda 𝑑xw(x)^2f(x)x^\nu .$$
(36)
The recurrence scheme based on the coefficients of the orthogonal polynomials $`f_{\mu \nu }`$ in (34) is not optimal for large orders $`n`$, either in arithmetic cost or in storage requirements. A better scheme can be built up on the basis of the integrals
$`r_{\mu \nu }\equiv {\displaystyle \int _ϵ^\lambda }𝑑xw(x)^2\mathrm{\Phi }_\mu (x)x^\nu ,`$
$`\mu =0,1,\dots ,n;\nu =\mu ,\mu +1,\dots ,2n-\mu .`$ (37)
The recurrence coefficients $`\beta _\mu ,\gamma _{\mu 1}`$ can be expressed from
$$q_\mu =r_{\mu \mu },p_\mu =r_{\mu ,\mu +1}+f_{\mu 1}r_{\mu \mu }$$
(38)
and eq. (20) as
$$\beta _\mu =-f_{\mu 1}-\frac{r_{\mu ,\mu +1}}{r_{\mu \mu }},\qquad \gamma _{\mu -1}=-\frac{r_{\mu \mu }}{r_{\mu -1,\mu -1}}.$$
(39)
It follows from the definition that
$`r_{0\nu }`$ $`=`$ $`{\displaystyle \int _ϵ^\lambda }𝑑xw(x)^2x^\nu =s_\nu ,`$
$`r_{1\nu }`$ $`=`$ $`{\displaystyle \int _ϵ^\lambda }𝑑xw(x)^2(x^{\nu +1}+f_{11}x^\nu )=s_{\nu +1}+f_{11}s_\nu .`$ (40)
The recurrence relation (19) for the orthogonal polynomials implies
$$r_{\mu +1,\nu }=r_{\mu ,\nu +1}+\beta _\mu r_{\mu \nu }+\gamma _{\mu -1}r_{\mu -1,\nu }.$$
(41)
This has to be supplemented by
$$f_{11}=-\frac{s_1}{s_0}$$
(42)
and by the first equation from (34):
$$f_{\mu +1,1}=f_{\mu ,1}+\beta _\mu .$$
(43)
Eqs. (39)-(43) define a complete recurrence scheme for determining the orthogonal polynomials $`\mathrm{\Phi }_\mu (x)`$. The moments $`s_\nu `$ of the integration measure defined in (2) serve as the basic input in this scheme.
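To make the scheme concrete, here is a minimal Python sketch of mine (the function name is an assumption, not from the text) that turns the moments $`s_\nu `$ into the recurrence coefficients via eqs. (39)-(43), using exact rational arithmetic. The test case is likewise assumed, chosen because the answer is known in closed form: $`w(x)^2=1`$ on $`[0,1]`$, i.e. $`s_\nu =1/(\nu +1)`$, whose monic orthogonal polynomials are the shifted Legendre polynomials with $`\beta _\mu =1/2`$ and $`\gamma _{\mu -1}=\mu ^2/(4(4\mu ^21))`$.

```python
from fractions import Fraction

def recurrence_from_moments(s):
    """beta_mu and gamma_{mu-1} of eq. (19) from the moments s_nu of eq. (2),
    via the r-table recursion, eqs. (39)-(43).  Input s_0..s_{2n} yields
    beta[0..n-1] and gamma[0..n-2].  As a by-product, r_{mu,nu} = 0 for
    nu < mu (orthogonality), which exact arithmetic reproduces identically."""
    n = (len(s) - 1) // 2
    beta, gamma = [], []
    r_prev, r_cur = None, list(s)   # row mu = 0: r_{0,nu} = s_nu, eq. (40)
    f_mu1 = 0                        # f_{mu,1}; f_{0,1} = 0 by convention
    for mu in range(n):
        b = -f_mu1 - r_cur[mu + 1] / r_cur[mu]        # eq. (39)
        beta.append(b)
        g = None
        if mu > 0:
            g = -r_cur[mu] / r_prev[mu - 1]           # eq. (39)
            gamma.append(g)
        f_mu1 = f_mu1 + b                              # eq. (43)
        nxt = []                                       # row mu+1, eq. (41)
        for nu in range(len(r_cur) - 1):
            v = r_cur[nu + 1] + b * r_cur[nu]
            if g is not None:
                v = v + g * r_prev[nu]
            nxt.append(v)
        r_prev, r_cur = r_cur, nxt
    return beta, gamma

# assumed check: w(x)^2 = 1 on [0,1], so s_nu = 1/(nu+1) (Hilbert moments);
# expected: beta_mu = -1/2, gamma_{mu-1} = -mu^2/(4 (4 mu^2 - 1))
s = [Fraction(1, nu + 1) for nu in range(7)]
beta, gamma = recurrence_from_moments(s)
```

Using `Fraction` anticipates the precision growth discussed at the end of this section; replacing it by floats gives the same code but with the linearly growing digit requirement.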
The integrals $`b_\nu `$ in (13), which are necessary for the expansion coefficients $`d_\nu `$ in (12), can also be calculated in a similar scheme built up on the integrals
$`b_{\mu \nu }\equiv {\displaystyle \int _ϵ^\lambda }𝑑xw(x)^2f(x)\mathrm{\Phi }_\mu (x)x^\nu ,`$
$`\mu =0,1,\dots ,n;\nu =0,1,\dots ,n-\mu .`$ (44)
The relations corresponding to (40)-(41) are now
$`b_{0\nu }={\displaystyle \int _ϵ^\lambda }𝑑xw(x)^2f(x)x^\nu =t_\nu ,`$ (45)
$`b_{1\nu }={\displaystyle \int _ϵ^\lambda }𝑑xw(x)^2f(x)(x^{\nu +1}+f_{11}x^\nu )=t_{\nu +1}+f_{11}t_\nu ,`$
$`b_{\mu +1,\nu }=b_{\mu ,\nu +1}+\beta _\mu b_{\mu \nu }+\gamma _{\mu -1}b_{\mu -1,\nu }.`$
The only difference compared to (40)-(41) is that the moments of $`w(x)^2`$ are now replaced by the ones of $`w(x)^2f(x)`$.
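Putting the r- and b-tables together gives the complete pipeline from moments to the optimized polynomial. The sketch below is my own end-to-end illustration (function names and the test setup are assumptions, not from the text): it runs the two tables in parallel, forms $`d_\nu =b_\nu /q_\nu `$, and assembles $`P_n(x)`$ via the recurrence (19). The assumed test approximates $`f(x)=x^1`$ on $`[0.1,1]`$ with relative-deviation weight $`w(x)^2=x^2`$, for which the moments are simple rationals, and checks that $`xP_n(x)`$ stays close to one.

```python
from fractions import Fraction as F

def lsq_poly(s, t, n):
    """Recurrence coefficients beta, gamma (eq. 19) and expansion
    coefficients d_0..d_n (eq. 12) from the moments s_0..s_{2n} (eq. 2)
    and t_0..t_n (eq. 36), running the r- and b-tables in parallel
    (eqs. 39-43 and 45)."""
    beta, gamma, d = [], [], []
    r_prev, r_cur = None, list(s)
    b_prev, b_cur = None, list(t)
    f1 = 0
    d.append(b_cur[0] / r_cur[0])          # d_0 = b_0/q_0
    for mu in range(n):
        bt = -f1 - r_cur[mu + 1] / r_cur[mu]
        beta.append(bt)
        g = None
        if mu > 0:
            g = -r_cur[mu] / r_prev[mu - 1]
            gamma.append(g)
        f1 = f1 + bt
        nr, nb = [], []
        for nu in range(len(r_cur) - 1):
            v = r_cur[nu + 1] + bt * r_cur[nu]
            if g is not None:
                v = v + g * r_prev[nu]
            nr.append(v)
        for nu in range(len(b_cur) - 1):
            v = b_cur[nu + 1] + bt * b_cur[nu]
            if g is not None:
                v = v + g * b_prev[nu]
            nb.append(v)
        r_prev, r_cur = r_cur, nr
        b_prev, b_cur = b_cur, nb
        d.append(b_cur[0] / r_cur[mu + 1])  # d_{mu+1} = b_{mu+1,0}/q_{mu+1}
    return beta, gamma, d

def eval_poly(x, beta, gamma, d):
    """P_n(x) = sum_nu d_nu Phi_nu(x), with Phi_nu from recurrence (19)."""
    phi_prev, phi, tot = 0, 1, d[0]
    for mu in range(len(beta)):
        g = gamma[mu - 1] if mu > 0 else 0
        phi_prev, phi = phi, (x + beta[mu]) * phi + g * phi_prev
        tot = tot + d[mu + 1] * phi
    return tot

# assumed test: f(x) = 1/x on [1/10, 1], relative deviation => w(x)^2 = x^2,
# so s_nu = (1 - eps^(3+nu))/(3+nu) and t_nu = (1 - eps^(2+nu))/(2+nu)
n, eps = 10, F(1, 10)
s = [(1 - eps ** (3 + nu)) / F(3 + nu) for nu in range(2 * n + 1)]
t = [(1 - eps ** (2 + nu)) / F(2 + nu) for nu in range(n + 1)]
beta, gamma, d = lsq_poly(s, t, n)
grid = [F(k, 100) for k in range(10, 101, 3)]
dev = max(abs(float(x * eval_poly(x, beta, gamma, d) - 1)) for x in grid)
```

With exact rationals the ill-conditioning of the moment problem is irrelevant; for large $`n`$ this is exactly the arbitrary-precision strategy discussed below.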
It is interesting to collect the quantities which have to be stored in order that the recurrence can be resumed. This is useful if after stopping the iterations, for some reason, the recurrence will be restarted. Let us assume that the quantities $`q_\nu (\nu =0,\dots ,n)`$, $`b_\nu (\nu =0,\dots ,n)`$, $`\beta _\nu (\nu =1,\dots ,n-1)`$ and $`\gamma _\nu (\nu =0,\dots ,n-2)`$ are already known and one wants to resume the recurrence in order to calculate these quantities for higher indices. For this it is enough to know the values of
$`f_{n-1,1},r_{n-1,n-1},`$
$`R_{0\dots n}^{(0)}\equiv (r_{0,2n+1},r_{1,2n},\dots ,r_{n,n+1}),`$
$`R_{0\dots n}^{(1)}\equiv (r_{0,2n},r_{1,2n-1},\dots ,r_{n,n}),`$
$`B_{0\dots n}^{(1)}\equiv (b_{0,n},b_{1,n-1},\dots ,b_{n,0}),`$
$`B_{0\dots n-1}^{(2)}\equiv (b_{0,n-1},b_{1,n-2},\dots ,b_{n-1,0}).`$ (46)
This shows that for maintaining a resumable recurrence it is enough to store a set of quantities linearly increasing in $`n`$.
An interesting question is the increase of computational load as a function of the highest required order $`n`$. At first sight this seems to go just like $`n^2`$, which is surprising because, as eq. (6) shows, finding the minimum requires the inversion of an $`n\times n`$ matrix. However, numerical experience shows that the number of digits required for obtaining a precise result also increases linearly with $`n`$. This is due to the linearly increasing logarithmic range of eigenvalues, as illustrated by (2). Using, for instance, Maple V for the arbitrary precision arithmetic, the computation slows down by another factor going roughly as (but somewhat slower than) $`n^2`$. Therefore, the total slowing down in $`n`$ is proportional to $`n^4`$. For the same reason the storage requirements increase as $`n^2`$.
## 4 A convenient choice for TSMB
In the TSMB algorithm for Monte Carlo simulations of fermionic theories, besides the simple function $`x^{-\alpha }`$, also the function $`x^{-\alpha }/\overline{P}(x)`$ has to be approximated. Here $`\overline{P}(x)`$ is typically a lower order approximation to $`x^{-\alpha }`$. In this case, if one chooses to optimize the relative deviation, the basic integrals defined in (2) and (36) are, respectively,
$`s_\nu `$ $`=`$ $`{\displaystyle \int _ϵ^\lambda }𝑑x\overline{P}(x)^2x^{2\alpha +\nu },`$
$`t_\nu `$ $`=`$ $`{\displaystyle \int _ϵ^\lambda }𝑑x\overline{P}(x)x^{\alpha +\nu }.`$ (47)
It is obvious that, if the recurrence coefficients for the expansion of the polynomial $`\overline{P}(x)`$ in orthogonal polynomials are known, the recursion scheme can also be used for the evaluation of $`s_\nu `$ and $`t_\nu `$.
Another observation is that the integrals in (47) can be simplified if, instead of choosing the weight factor $`w(x)^2=\overline{P}(x)^2x^{2\alpha }`$, one takes
$$w(x)^2=\overline{P}(x)x^\alpha ,$$
(48)
which leads to
$`s_\nu `$ $`=`$ $`{\displaystyle \int _ϵ^\lambda }𝑑x\overline{P}(x)x^{\alpha +\nu },`$
$`t_\nu `$ $`=`$ $`{\displaystyle \int _ϵ^\lambda }𝑑xx^\nu .`$ (49)
Since $`\overline{P}(x)`$ is an approximation to $`x^{-\alpha }`$, the function $`f(x)\equiv x^{-\alpha }/\overline{P}(x)`$ is close to one and the difference between $`f(x)^{-2}`$ and $`f(x)^{-1}`$ is small. Therefore the least-squares optimized approximations with the weights $`w(x)^2=f(x)^{-2}`$ and $`w(x)^2=f(x)^{-1}`$ are also similar. It turns out that the second choice is, in fact, a little bit better because the largest deviation from $`x^{-\alpha }`$ typically occurs at the lower end of the interval $`x=ϵ`$ where $`\overline{P}(x)`$ is smaller than $`x^{-\alpha }`$. As a consequence, $`\overline{P}(x)x^\alpha <1`$ and $`\overline{P}(x)x^\alpha >\overline{P}(x)^2x^{2\alpha }`$. This means that the weight factor in (48) puts more emphasis on the lower end of the interval, where $`\overline{P}(x)`$ as an approximation of $`x^{-\alpha }`$ is worst.
In summary: least-squares optimization is a flexible and powerful tool which can serve as a basis for applying the two-step multi-bosonic algorithm for Monte Carlo simulations of QCD and other similar theories. With the help of the recurrence scheme described in the previous section one can determine the necessary polynomial approximations to high enough orders. |
# ep Physics with Heavy Flavours (hep-ph/9911495)
## 1 Introduction
Heavy Flavour physics at HERA focuses on aspects related to production dynamics rather than weak decays and mixing angles. Heavy quarks – like jets – reflect the properties of hard sub-processes in $`ep`$ interactions. Their mass provides the natural scale which allows the application of perturbative methods in QCD and also testing the theory in regions where no other hard scale (like high transverse energy) is present. Furthermore, when probing the structure of matter, heavy quarks single out the gluonic content.
For this talk topics from open charm and beauty production have been selected. There was no time to cover Onium production (for recent reviews, see ). There is no tau physics included, either, although $`\tau `$ candidates have been seen at HERA, and interesting limits on lepton-flavour violating lepto-quarks have been obtained .
At HERA, 820 GeV protons (defining the “forward” direction) collide head-on with 27 GeV electrons, yielding a centre-of-mass system (CMS) energy of $`\sqrt{s}=`$ 300 GeV. The standard textbook notation will be used to describe the kinematics of deep inelastic scattering (DIS): $`Q^2`$ denotes the four-momentum transfer squared, $`x`$ and $`y`$ are scaling variables related to the momentum fraction of the struck parton and to the inelasticity of the collision, respectively. $`W`$ denotes the photon-proton CMS energy. The kinematic regime where the exchanged photon becomes quasi-real ($`Q^20`$) is called photo-production. Also in this region $`W`$ can be large and attains values an order of magnitude higher than in fixed-target experiments.
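As a quick numerical cross-check of the quoted kinematics (a sketch of mine, neglecting particle masses in the collider energy and using the standard relation $`W^2=m_p^2+Q^2(1-x)/x`$):

```python
from math import sqrt

E_p, E_e = 820.0, 27.0                  # HERA beam energies in GeV
sqrt_s = sqrt(4.0 * E_e * E_p)           # massless approximation: s = 4 E_e E_p

def W_gamma_p(Q2, x, m_p=0.938):
    """Photon-proton CMS energy from W^2 = m_p^2 + Q^2 (1 - x)/x (GeV units)."""
    return sqrt(m_p**2 + Q2 * (1.0 - x) / x)

W = W_gamma_p(Q2=10.0, x=1e-3)           # illustrative low-x DIS point
```

This reproduces $`\sqrt{s}300`$ GeV and shows how $`W`$ grows towards small $`x`$, approaching $`\sqrt{s}`$ in the limit.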
## 2 Charm Production
In most of the studies presented here, open charm is detected in the “golden” decay channel $`D^{\ast +}\to D^0\pi ^+`$ followed by $`D^0\to K^{-}\pi ^+`$.<sup>1</sup><sup>1</sup>1The charge conjugate is always implicitly included. ZEUS also uses the channel $`D^{\ast +}\to (D^0\to K^{-}\pi ^{+}\pi ^{+}\pi ^{-})\pi ^{+}`$, and this year for the first time they have shown results using $`D_s^+\to (\varphi \to K^+K^{-})\pi ^+`$ .
### 2.1 Deep inelastic scattering
In QCD, $`c`$ and $`b`$ production in $`ep`$ collisions proceed mainly via the boson gluon fusion diagram shown in figure 1.
Therefore, charm production has already in the past offered a way to determine the gluon density in the proton . The process has been calculated in Next-to-Leading order (NLO) QCD in the so-called Three Flavour $`\overline{MS}`$ scheme for photo-production and DIS . In the HVQDIS program, the charm hadronization into $`D^{\ast }`$ mesons is modeled using a Peterson fragmentation function with the parameter $`ϵ_c=0.035`$ as determined in $`e^+e^{-}`$ annihilation .
Differential $`D^{\ast }`$ cross sections in the experimentally accessible DIS range have been measured by H1 and ZEUS . The ZEUS data are shown in figure 2 and compared to the HVQDIS calculation. (The open points in (d) and (e) are results from the $`K4\pi `$ channel.) A Leading-Order Monte Carlo fragmentation approach based on the JETSET program also has been used by ZEUS. As can be seen from the figure, the pseudo-rapidity and the $`x_D`$ distribution<sup>2</sup><sup>2</sup>2$`x_D\equiv 2p^{\ast }(D^{\ast })/W`$ is the fractional momentum of the meson in the $`\gamma ^{\ast }p`$ rest frame and strongly correlated with $`\eta `$. are quite sensitive to details of the fragmentation modeling, whereas the distributions of other kinematic variables, like $`x`$ and $`Q^2`$, are barely affected. The migration of the mesons towards positive pseudo-rapidities in the Monte Carlo, due to color interactions between the charm quark and the proton remnant, has been called the “beam drag” effect .
In general, there is good agreement with the NLO calculation. This gives the justification to the extrapolation to the full phase space, which is needed to extract $`F_2^c`$, the charm contribution to the proton structure function.
In the Quark-Parton Model, the structure function $`F_2`$ is given by the charge-weighted sum of the (anti-)quark densities, $`F_2=x\sum _ie_i^2(q_i(x)+\overline{q}_i(x))`$, and depends only on $`x`$. The presence of gluons introduces “scaling violations”, i.e. a dependence of the structure function on $`Q^2`$. The gluons are also responsible for the observed steep rise of $`F_2`$ towards low values of $`x`$, where the quark sea is being probed.
The relative charm contribution to inclusive DIS, quoted as $`F_2^c/F_2`$, is not constant, but rises with $`Q^2`$ and towards low $`x`$, as can be seen in figure 3. Since the process under study is gluon-induced, $`F_2^c`$ reflects the gluon content of the proton even more pronouncedly than $`F_2`$. The most salient feature is that in the HERA regime the charm contribution is very large, 20 - 30%, making the theoretical description of charm production an essential ingredient to the understanding of proton structure.
H1 has performed a direct measurement of the gluon density in the proton from charm production in DIS and in photo-production. The kinematics of the reconstructed $`D^{\ast }`$ meson and the scattered electron have been used to infer the incoming gluon momentum. The NLO programs have been used in order to correct for higher order processes and fragmentation by means of an unfolding procedure. The results, shown in figure 4, agree well with each other and with the gluon distribution determined indirectly through a QCD analysis of the scaling violations of inclusive structure function data. This demonstrates that the gluon density is universal, and indicates the success of the NLO QCD description.
### 2.2 Photo-production
In the case of photo-production of open charm, additional, so-called “resolved” photon contributions have to be taken into account. The photon may fluctuate into a hadronic state, and a parton from this state interacts with a parton from the proton, e.g. charm may be formed via gluon-gluon fusion. Since only a fraction of the photon’s energy is available in the hard sub-process, in comparison with “direct” photon interactions, the outgoing charm is found at lower transverse momenta and at more forward rapidities. Resolved photo-production is therefore suppressed by experimental cuts, and in fact extra care was taken in the above-mentioned gluon analysis to minimize uncertainties related to such a priori unknown contributions.
There are two approaches in NLO QCD to describe charm photo-production. In the “massive” scheme , charm is produced only dynamically in the final state (like in the DIS case above). In the “massless” scheme , charm also plays an active role in the initial state as a parton of the proton and of the photon. While the first approach should be adequate near threshold where the transverse momenta are comparable to the charm quark mass, the second has been developed for the region of higher $`p_{\perp }`$.
$`D^{\ast }`$ photoproduction cross sections as recently measured by ZEUS in the region 80 $`<W<`$ 120 GeV are compared to NLO QCD calculations in the massive scheme in figure 5. Fair agreement can be seen in the low $`p_{\perp }`$ range, with however somewhat “stretched” parameter values. At higher $`p_{\perp }`$ the data tend to be above the prediction, in particular in the forward region. As in the DIS case, predictions in this region are seen to be sensitive to details of how the fragmentation process is being modeled. The same conclusions can be drawn from the ZEUS data obtained with $`D^{\ast }`$ and $`D_s`$ mesons at higher $`W`$ . H1, albeit with somewhat larger measurement errors, do not observe a discrepancy with respect to the “massive” calculations .
In the “massless” calculation, there is a large “resolved” contribution which is mostly due to diagrams where a charm quark from the photon interacts with a parton from the proton; an example is shown in figure 6. (Note that the assignment of “direct” and “resolved” is not unambiguous in NLO.) In this picture the cross sections are thus sensitive to the charm density in the photon, as indicated in figure 6, where different photon PDFs have been used in the calculation. H1 and ZEUS data (shown here) have been compared with NLO QCD in the “massless” scheme. As in the “massive” case, the agreement is not good everywhere.
### 2.3 Diffractive production
Pursuing these concepts further, charm production may also shed light on the dynamics of diffractive scattering. A fraction of about 10 % of the DIS events have a “rapidity gap”: in contrast to the general case the region surrounding the outgoing proton beam is void of any particles. This distinctively “diffractive” topology leads to an interpretation of the events in terms of the exchange of a color-less object, which may be identified with the Pomeron ($`IP`$) in the framework of hadron-hadron interaction phenomenology. HERA offers the possibility to study these phenomena in a perturbative regime and to investigate the partonic structure of the exchange, where charm production again singles out the gluonic component.
Two different model predictions are considered here (figure 7). In the “resolved Pomeron” model , the structure of the exchanged object is described in terms of quark and gluon densities which have been obtained from a fit to inclusive diffractive DIS data . In the “two gluon” model , the color-less exchange is realized in terms of two hard gluons, and cross sections are sensitive to the gluon density in the proton. The two approaches lead to remarkably different kinematic distributions. Both H1 and ZEUS have presented first measurements of differential cross sections this year.
$`z_{IP}`$ is an observable correlated with the fraction of momentum of the exchanged object which is carried by the parton interacting with the $`c\overline{c}`$ pair. In the “$`2g`$” model, $`z_{IP}=1`$ would hold if the partons could be directly observed. The data in figure 8 reveal that a sizeable fraction of charm is produced at low $`z_{_{IP}}^{obs}`$ (“resolved Pomeron interaction”). The “resolved $`IP`$” approach however fails in reproducing the normalization of the H1 data. In the “two gluon” model, on the other hand, a lack of higher order, low $`z_{IP}`$ contributions is apparent. More general, models, where the hadronic system $`X`$ predominantly consists of the $`c\overline{c}`$ system alone, are disfavored. The data still have large, mostly statistical, uncertainties, but they already provide valuable and rather distinctive indications for further refinement of the theoretical description.
## 3 Beauty Production
Finding beauty at HERA is not straightforward. Theory predicts total cross section ratios of about $`\sigma _{\mathrm{uds}}:\sigma _{\mathrm{charm}}:\sigma _{\mathrm{beauty}}\approx 2000:200:1`$ . The measurements to date rely on the well established signature of semileptonic decays of $`b`$ hadrons in jets. Due to the higher $`b`$ mass, the leptons in $`b`$ events have a higher transverse momentum $`p_T^{rel}`$ with respect to the jet direction which approximates the flight direction of the decaying hadron (figure 9). The $`b`$ cross section can thus be extracted by means of a fit to the $`p_T^{rel}`$ distribution.
The preliminary H1 result released in 1998 has meanwhile been published . The visible cross section, quoted for the range $`Q^2<1\mathrm{GeV}^2`$, $`0.1<y<0.8`$, $`p_t^\mu >2\mathrm{GeV}`$, $`35^{\circ }<\theta ^\mu <130^{\circ }`$ of $`\sigma _{b\overline{b}}^{vis}=0.93\pm 0.08_{-0.07}^{+0.17}\mathrm{nb}`$ is found to be surprisingly high: the Monte Carlo prediction based on Leading Order QCD calculation is 4 to 5 times lower ($`\sigma _{b\overline{b}}^{vis}=0.191\mathrm{nb}`$).
In figure 10 the result, extrapolated to the full phase space by means of a NLO program incorporating a Peterson type model of fragmentation, is confronted with the NLO prediction. Theory at NLO still undershoots the experimental result by about a factor of 2. This is in marked contrast to the expectation that predictions for beauty should be more reliable than for charm, due to the higher scale set by the $`b`$ mass. On the other hand, the NLO approximation also has difficulties reproducing the normalization of open $`b`$ cross section measurements at the Tevatron .
ZEUS has determined the beauty cross section in three independent analyses, using electrons, muons in the central region and muons in the forward detector region . The $`p_T^{rel}`$ distribution, obtained in the $`e`$ channel, is displayed in figure 9. Overlayed is a Monte Carlo prediction obtained with the HERWIG generator, which already describes the shape of the data reasonably well. It should be noted that in HERWIG about half of the $`c`$ and $`b`$ photo-production is due to resolved processes with a heavy quark in the initial state, mostly as an active parton in the photon. A fit of the different components yields a $`b`$ fraction of 20%, which translates into a visible cross section of $`\sigma _{vis}(ep\to \text{2 jets}+e^{\pm }+X)=39\pm 11_{-16}^{+23}`$ pb for events with 2 jets ($`E_T^{1,2}>7,6`$ GeV, $`|\eta |<2.4`$) in the kinematic range $`Q^2<1`$GeV<sup>2</sup>, 0.2$`<y<`$0.8, $`e`$: $`P_T>1.6`$GeV, $`|\eta |<1.6`$.
Consistent results are obtained in the muon channels, with the same jet requirements and for the same kinematic range. For central muons ($`-1.75<\eta <`$1.3, $`p>`$3 GeV) $`\sigma _{vis}(ep\to \text{2 jets}+\mu +X)=36.4\pm 5.2_{-9.1}^{+10.4}`$ pb is quoted, and for forward muons (1.4$`<\eta <`$2.4, $`p_{\perp }>`$1.5 GeV) $`\sigma _{vis}(ep\to \text{2 jets}+\mu +X)=20.5\pm 6.5_{-5.7}^{+6.3}`$ pb.
Using the Monte Carlo to convert the measured hadronic cross section to a partonic cross section allows a comparison with NLO QCD calculations at parton level. This is done for the ZEUS result in the $`e`$ channel in figure 10. Here, the result is found to be even a factor of 4 above theory, with somewhat larger errors.
The long lifetimes of $`c`$ and $`b`$ hadrons which can be measured with microvertex detectors provide an independent signature. With its Central Silicon Tracker (CST), H1 has for the first time at HERA attempted to exploit these features. An event with two jets and a penetrating track identified as a muon is shown on the left-hand side of figure 11. The muon has a relatively high transverse momentum of 3 GeV$`/c`$ with respect to the nearest jet, which is unlikely for muons from charm decays or mis-identified hadrons but is expected for decays of $`b`$ flavoured hadrons. In the magnified view perpendicular to the beam (on the right-hand side) the tracks measured in the CST are represented as bands with widths corresponding to their $`\pm 1\sigma `$ measurement precision. The resolution provided by the CST reveals that the muon track originates from a well separated secondary vertex. This is now being integrated into ongoing physics analyses.
## 4 Summary and Outlook
Heavy quarks have become an increasingly interesting part of the physics field opened up by HERA. With the statistical power now reached in the data, charm quarks are a direct probe of the gluonic component of hadronic structure. In the case of the proton, the fact that about one quarter of the $`ep`$ interactions in the HERA regime result in final states with charm makes it evident that understanding the charm contribution to the structure function $`F_2`$ is a conditio sine qua non. The agreement of the gluon density directly determined from charm with the result from the scaling violation analysis lends strong support to the underlying QCD picture.
New light has also been shed on the structure of the photon, where a charm content in the “resolved” photon may provide a viable route to understanding the HERA data. Finally, probing the structure of diffractive exchange with charm, differential cross section data have started to discriminate between different theoretical concepts.
Beauty production at HERA has now been measured by both H1 and ZEUS, and found to occur at a rate that is presenting NLO QCD with a challenge. Although experimental errors are still considerable, a re-evaluation of theoretical uncertainties might be appropriate.
Top pair production is kinematically excluded at HERA energies. It has however been speculated recently that the events with isolated leptons and large missing $`p_{\perp }`$, of which the H1 collaboration observes more than expected (and some with atypical kinematics), might be due to single top production mediated by a new, flavour-changing effective interaction. The top quark would decay via $`t\to Wb`$, $`W\to \mu \nu `$, giving rise to the observed signature. In fact, some of the observed outstanding events are kinematically not inconsistent with such a hypothesis. Lifetime-based tagging techniques will in the future be used to investigate such exciting possibilities further. One more reason why H1 and ZEUS physicists look forward to the high luminosity running at the upgraded HERA machine in 2001.
###### Acknowledgments.
I like to thank my colleagues in H1 and ZEUS for providing me with the results of their work for this presentation. And it is a pleasure to thank Paul Dauncey, Jonathan Flynn, and their crew for organizing a top meeting at a beautiful place in such a charming way. |
# Measuring Ω_b from the Helium Lyman-α Forest (astro-ph/9911394)
## 1. Introduction
In the last few years it has become possible to observe details of absorption by singly ionized helium. The observations combine new information about the history of quasars, intergalactic gas, and structure formation.
Early observations of the HeII Ly$`\alpha `$ absorption spectral region included the quasars Q0302-003 (z=3.285, Jakobsen et al. 1994), HS 1700+64 (z=2.72, Davidsen et al. 1996) and PKS 1935-692 (z=3.18, Tytler & Jakobsen 1996). Higher resolution (GHRS) observations of Q0302-003 (Hogan, Anderson & Rugers 1997) and HE 2347-4342 (z=2.885, Reimers et al. 1997) revealed structure in the absorption which could be reliably correlated with HI absorption. Heap et al. have followed up with STIS on Q0302-003 (1999a) and HE 2347-4342 (1999b). The second observation should be particularly illuminating since HE 2347-4342 is relatively bright, allowing a high resolution grating to be used. The Anderson et al. (1998) observations of PKS 1935-692 with STIS yield good zero level estimates important for estimating the optical depth $`\tau `$. Preliminary reductions of longer STIS integrations of PKS 1935-692 with the 0.1 Å slit by Anderson et al. (1999) confirm the 1998 results. Taken together, these data now appear to be showing the cosmic ionization of helium by quasars around redshift 3, although it is possible that the medium is already ionized to HeIII at a lower level by other sources (Miralda-Escudé et al. 1999).
All of the objects show absorption with mean $`\tau \gtrsim 1`$ at redshifts lower than the quasar. For the higher redshift QSO’s Q0302-003 and PKS 1935-692 (shown in Fig. 1) there is a clear shelf of $`\tau \gtrsim 1.3`$ in a wavelength region of order 20 Å in observed wavelength blueward of the quasar emission line redshift, dropping to a level consistent with zero flux or $`\tau \gtrsim 3`$ beyond that. The sharp edge led Anderson et al. to conclude that gas initially containing helium as mostly HeII was being double-ionized in a region around the quasars. The lack of a strong emission line for HeII Ly$`\alpha `$ suggests that ionizing flux is escaping, so that the 228 Å flux may be similar to a simple power-law extension of the observed 304 Å rest frame flux. Hogan et al. (1997) used the same reasoning to estimate the time required for quasars to create the double ionized helium region to be 20 Myr for a 20 Å shelf (dependent on the Hubble parameter, spectral hardness, cosmology, baryon density and the shelf size).
The features present in HeII Ly$`\alpha `$ spectra are reflected in the HI Ly$`\alpha `$ forest for these quasars. Attempts to model the HeII absorption with line systems detected in HI suggest that very low column HI absorbers, difficult to differentiate from noise in HI spectra, provide a substantial contribution to the HeII absorption. Typically, in the shelf region, the ratio of HeII to HI ions is of order 20 or more, rising to at least 100 farther away (The cross-section for HeII Ly$`\alpha `$ absorption is 1/4 that of HI). Both PKS 1935-692 and HE 2347-4342 display conspicuous voids in the HeII absorption near the apparent edge of the HeIII bubble with corresponding voids in the HI spectra.
Reionization and the origin of the HeII Ly$`\alpha `$ forest can addressed with detailed theoretical treatments (eg. Zheng & Davidsen 1995, Zhang et al. 1998, Fardal et al. 1998, Abel & Haehnelt 1999, Gnedin 1999). Wadsley, Hogan & Anderson (1999) presented numerical models of the onset of full helium reionization around a single quasar. The models integrated one-dimensional radiative transfer along lines of sight taken from cosmological hydrodynamical simulations and used flux levels comparable to PKS 1935-692. The results reinforced the basic intepretation of PKS 1935-692 by Hogan et al. (1997) as the growth of an He III bubble over time in a medium that was mostly HeII. In addition void recoveries in the large ionized bubbles similar to that observed for PKS 1935-692 occurred often among a random set of simulated lines of sight. The gas temperature in such bubbles is strongly affected by the ionization of He II to He III, particularly because the first photons to reach much of the gas will be quite hard since the softer photons are absorbed close to the quasar until the gas is optically thin. In particular the underdense medium reaches temperatures of order 15000K. We make a simple analytical argument for this result in the next section.
Given our fairly good guess as to the physical conditions near PKS 1935-692 we are in position to attempt to convert the observed optical depths into a density in baryons. Comparing optical depths measured for the small sample of voids in the shelf region of PKS 1935-692 to distributions of void widths and densities from simulations we can build an estimate of $`\mathrm{\Omega }_b`$, the total cosmic density in baryons. The are still uncontrollable systematic uncertainties in this estimate but these differ from other techniques and can be addressed with better simulations and a larger sample of quasars. The main focus of this paper is the technique.
## 2. Estimating $`\mathrm{\Omega }_b`$ Using Helium Quasar Observations
In He II Ly$`\alpha `$ absorption spectra only the voids are sufficiently low density to allow measurements of the optical depth. The highly ionized void gas is optically thin to the ionizing radiation and cool enough to ignore collisional effects, resulting in a fairly simple relation between optical depth and gas density,
$`\tau _{\mathrm{HeII}}=1.0\times 10^{10}n_{\mathrm{HeII}}/({\displaystyle \frac{dv}{dl}}/100\mathrm{km}\mathrm{s}^1\mathrm{Mpc}^1)`$
$`=106{\displaystyle \frac{\alpha (T)}{\mathrm{\Gamma }}}(\rho /\overline{\rho })^2\mathrm{\Omega }_bh^2\left({\displaystyle \frac{1+z}{4}}\right)^6\left({\displaystyle \frac{dv}{dl}}/100\mathrm{km}\mathrm{s}^1\mathrm{Mpc}^1\right)^1,`$ (1)
where $`\tau _{\mathrm{HeII}}`$ is the optical depth for absorption of He II Ly$`\alpha `$, $`n_{\mathrm{HeII}}`$ is the number density of He II ions, $`\frac{dv}{dl}`$ converts from real space to wavelength (velocity) space and $`\rho `$ is the gas density. The recombination rate $`\alpha (T)`$ contains the only temperature dependence. The photo-ionizations per second $`\mathrm{\Gamma }`$ is determined from the rest frame HeII ionizing flux, $`F_{228\AA }`$ can be extrapolated from the rest frame flux at $`304\AA `$ which is estimated (assuming a cosmological model) from the observed continuum flux at $`304\AA (1+z_q)`$, where $`z_q`$ is the redshift of the quasar.
The fraction of HeII depends on temperature roughly as $`T^{1/2}`$ through the recombination coefficient. The radiative transfer models of Wadsley et al. (1999) gave temperatures around 15000K for the underdense gas near quasars. The optical depth to a He II ionizing photon travelling $`\delta z=0.1`$ (The size of the PKS 1935-692 shelf region) at redshift $`z=3`$ (SCDM, $`\mathrm{\Omega }_b=0.05`$, $`H_0=50\mathrm{k}\mathrm{m}\mathrm{s}^1\mathrm{Mpc}^1`$) is approximately $`\tau =62(E_\gamma /54.4\mathrm{eV})^3`$. The optical depth falls to 1 for photons with energies, $`E_\gamma 215\mathrm{eV}`$. Thus the HeII in the outer half of the shelf region will be ionized preferentially by photons in the energy range 100-200 eV. Injecting 100-54.4 eV per helium atom is equivalent to a temperature increase of 13000 K when the energy is distributed among all the particles. The cooling time for the underdense gas is very long, dominated by adiabatic expansion. Gas elsewhere will gain $``$ 4000 K due to HeIII reionization.
As voids evolve they approach an attractor solution resembling empty universes and their relative growth slows when they reach around $`0.1`$ times the mean density, making that value fairly representative. We used $`N`$-body simulations of 3 cosmologies (Standard CDM, Open CDM and $`\mathrm{\Lambda }`$CDM) to get probability distributions for the quantity we label normalized opacity $`O=(\rho /\overline{\rho })^2/(\frac{dv}{dl}/\frac{dv}{dl}_{HUBBLE})`$ as a function of void width. $`O`$ may be directly related to the void optical depths via (1). The set of voids allows a statistical maximum likelihood comparison of the simulated distribution of $`O`$ to the distribution of void widths and optical depths observed. Figure 1 shows the probability distribution of $`O`$ in voids (equivalent to void optical depth) and void widths for the $`\mathrm{\Lambda }`$CDM cosmology. The optical depth from the simulation was normalized so that mean density gas expanding with the Hubble flow gives an optical depth of 1.0.
The effect of gas pressure on the dynamics of gas in the voids should be negligible and was ignored in our simulations. Thus the gas follows the dark matter exactly. The simulations were performed using Hydra (Couchman 1991) with 64 Mpc periodic simulation volumes ($`6000`$ $`\mathrm{km}\mathrm{s}^1`$ at $`z=3`$) containing $`128^3`$ particles. A preliminary convergence study indicated that the statistics of $`O`$ are relatively insensitive to resolution however our resolution is still lower than that of the best current Lyman-$`\alpha `$ forest studies which also include gas. We were constrained by the need to include larger scales so that the large void portion of our sample was reasonable. Higher resolution gives slightly larger estimates for $`\mathrm{\Omega }_b`$.
The ratio of $`\tau _{\mathrm{HeII}}`$ to $`O`$ that has the maximum likelihood gives an estimate for the total baryon density as follows,
$`\mathrm{\Omega }_bh^2=0.0178\left({\displaystyle \frac{\tau _{\mathrm{HeII}}}{O}}/3.5\right)^{\frac{1}{2}}\left({\displaystyle \frac{F_{\nu }^{}{}_{\mathrm{\hspace{0.17em}228}\AA ,\mathrm{REST}}{}^{}}{6\times 10^{23}ergs^1Hz^1cm^2}}\right)^{\frac{1}{2}}`$
$`\left({\displaystyle \frac{H(z)}{400\mathrm{k}\mathrm{m}\mathrm{s}^1\mathrm{Mpc}^1}}\right)^{\frac{1}{2}}\left({\displaystyle \frac{3+\alpha }{4.8}}\right)^{\frac{1}{2}}\left({\displaystyle \frac{Y}{0.25}}\right)^{\frac{1}{2}}\left({\displaystyle \frac{1+z}{4.0}}\right)^3\left({\displaystyle \frac{T}{15000K}}\right)^{\frac{1}{4}}.`$ (2)
In this equation all the quantities are given as ratios with typical values estimated from PKS 1935-692, with $`\rho /\overline{\rho }`$ being the local density divided by the cosmological mean density, $`\frac{dv}{dl}/\frac{dv}{dl}_{\mathrm{HUBBLE}}`$ the local expansion rate along the line-of-sight compared to the mean, $`H(z)=H_0(\mathrm{\Omega }_M(1+z)^3+\mathrm{\Omega }_{\mathrm{CURV}}(1+z)^2+\mathrm{\Omega }_\mathrm{\Lambda })^{1/2}`$ local Hubble parameter value, $`\alpha `$ the spectral slope of the quasar ($`F_\nu \nu ^\alpha `$) and $`Y`$ the helium mass fraction. The value of $`H(z)`$ is $`400\mathrm{km}\mathrm{s}^1\mathrm{Mpc}^1`$ at $`z=3`$ for an $`H_0=50\mathrm{km}\mathrm{s}^1\mathrm{Mpc}^1`$ SCDM cosmology.
## 3. Results for PKS 1935-692
The exact systemic redshift of PKS 1935-692 is uncertain due to an absence of narrow IR emission lines. We use $`z=3.19`$.
We used the 5 large voids observed over the range $`z=3.093.15`$ for PKS 1935-692. This range should be far enough from the quasar to avoid “associated” absorbers caught in the quasar outflow. The fits are shown in figure 2. He II and H I absorption are both plotted with the models for each overplotted as dashed lines. The He II spectrum was modeled with the known HI Ly$`\alpha `$ absorption lines and 5 parameter values for the optical depth in each void. The ratio of the HI to HeII optical depths was a sixth free parameter. The models were convolved with the same spectral point spread function as the observations and include same level of photon shot noise. The best model was determined using maximum likelihood and the fitting errors determined with Monte Carlo realizations. The emptiest void has a reasonable probability according to the simulations, however since it has been suggested that it could be due to a local ionizing source we calculated the effect of removing it, which was a $`40\%`$ increase in the estimate for $`\mathrm{\Omega }_b`$.
The ionizing flux was inferred from the observed PKS 1935-692 continuum level at 304 $`\AA `$ rest frame and extrapolated to $`z>3`$ and 228 $`\AA `$. Assuming the quasar redshift is $`z=3.19`$ this gives a flux of $`6\times 10^{23}ergs^1Hz^1cm^2`$ (for standard CDM) at $`z=3.125`$ which is in the middle of the voids. The sample of helium quasars is quite small and though PKS 1935-692 does not have any known variability all quasars are thought to be intrinsically variable due to the nature of the fueling and emission processes. Selection effects favour the idea that PKS 1935-692 is currently brighter than its average value over the last few thousand years (the ionization response timescale). If PKS 1935-692 has recently brightened then ( 2) implies that our $`\mathrm{\Omega }_b`$ estimate is biased upward. If there is additional foreground continuum absorption then our estimates are biased low.
The results are tabulated below with 1-$`\sigma `$ fitting errors indicated (Monte Carlo). Aside from the uncertainties mentioned above, there is still uncertainty in the temperature through the $`T^{1/4}`$ dependence. If the gas was pre-ionized to HeIII the underdense gas could be colder by a factor of order 2 which would lower the $`\mathrm{\Omega }_bh^2`$ estimates by 20 %. A larger sample of quasars would make the greatest improvement in the robustness of this measurement.
## References
Abel, T. & Haehnelt, M. 1999, ApJ, 520, L13
Anderson, S.F., Hogan, C.J., Williams, B.F. & Carswell, R.F. 1998, accepted AJ
Couchman, H.M.P. 1991, ApJ, 368, 23
Croft, R. A. C., Weinberg, D. H., Katz, N. & Hernquist, L. 1997, ApJ 488, 532
Davidsen, A.F., Kriss, G.A. & Zheng, W. 1996, Nature, 380, 47
Fardal, M. A., Giroux, M. L., & Shull, J. M. 1998, AJ, 115, 2206
Gnedin, N. 1999, Cosmological Reionization by Stellar Sources, astro-ph/9909383
Heap, S.R, Williger, G.M., Smette, A., Hubeny, I., Sahu M., Jenkins, E.B., Tripp, T.M., Winkler, J.N. 1999a, ApJ, submitted, astro-ph/9812429
Heap et al. 1999b, in preparation
Hogan, C.J., Anderson, S.F. & Rugers, M.H. 1997, AJ, 113, 87
Jackobsen, P. et al. 1994, Nature, 370, 35
Miralda-Escudé, Haehnelt, M. & Rees, M.J. 1999, ApJ accepted, astro-ph/9812306
Reimers, D., Köhler, S., Wisotzki, L., Groote, D., Rodriguez-Pascual, P. Wamsteker, W. 1997, A&A, 327, 890
Tytler, D. & Jackobsen, P. 1996, (unpublished)
Wadsley, J.W, Hogan, C.J. & Anderson, S.F. 1999, 9th Annual October Astrophysics Conference (1998). College Park, Maryland. Eds S. Holt & E. Smith. Am. Inst. Phys. Press, 1999, 273, astro-ph/9812239
Zhang, Y., Meiksin, A., Anninos, P., & Norman, M. 1998, ApJ 495, 63
Zheng, W. & Davidsen, A. 1995, ApJ, 440, L53 |
no-problem/9911/hep-ph9911204.html | ar5iv | text | # Mesons in non-local chiral quark modelsResearch supported by the Polish State Committee for Scientific Research, grant 2P03B-080-12, and by DSF and BMBF.
## I Why non-local regulators?
First, let me recall the reasons why we wish to consider chiral quark models with non-local regulators (see also the contribution by Bojan Golli and Georges Ripka to these proceedings):
1. Non-locality arises naturally in several approaches to low-energy quark dynamics, such as the instanton-liquid model Diakonov86 ; instant1:rev ; instant2:rev or Schwinger-Dyson resummations Roberts92 . For the discussions of various “derivations” and applications of non-local quark models see, e.g.,Ripka97 ; Cahill87 ; Holdom89 ; Ball90 ; Krewald92 ; Ripka93 ; BowlerB ; PlantB ; mitia:rev ; Bijnens . Hence, we should cope with non-local regulators from the outset.
2. Non-local interactions regularize the theory in such a way that the anomalies are preserved Ripka93 ; Arrio and charges are properly quantized. With other methods, such as the proper-time regularization or the quark-loop momentum cut-off mitia:rev ; Bijnens ; Goeke96 ; Reinhardt96 the preservation of the anomalies can only be achieved if the (finite) anomalous part of the action is left unregularized, and only the non-anomalous (infinite) part is regularized. If both parts are regularized, anomalies are violated badly krs ; bbs . We consider such a division artificial and find it quite appealing that with non-local regulators both parts of the action can be treated on equal footing.
3. With non-local interactions the effective action is finite to all orders in the loop expansion ($`1/N_c`$ expansion). In particular, meson loops are finite and there is no more need to introduce extra cut-offs, as was necessary in the case of local models Ripka96b ; Tegen95 ; Temp . As the result, non-local models have more predictive power.
4. As Bojan Golli, Georges Ripka and WB have shown nls , stable solitons exist in a chiral quark model with non-local interactions without the extra constraint that forces the $`\sigma `$ and $`\pi `$ fields to lie on the chiral circle. Such a constraint is external to the known derivations of effective chiral quark models.
5. The empirical values of the low-energy constants $`g_8`$ and $`g_{27}`$ of the effective weak chiral Lagrangian are better reproduced within the non-local model Franz compared to the conventional NJL model.
In view of these improvements it becomes important to look at all other applications of effective quark models and compare the predictions of non-local and local versions.
## II Dispersion-relation sum rules for meson correlators
In this talk we will present results for the vector-meson correlators.
The basic object of our study is the meson correlation function, defined as
$$\mathrm{\Pi }^{AB}(q^2)=0|id^4xe^{iqx}\mathrm{T}\{j^A(x),j^B(0)\}|0,$$
(1)
where $`\mathrm{T}`$ denotes the time-ordered product, and $`j^A(x)`$ describes a color-singlet quark bilinear with appropriate Lorentz and flavor matrix, e.g. in the $`\rho `$-meson channel we have $`j_\mu ^a(x)=\frac{1}{2}\overline{\psi }(x)\gamma _\mu \tau ^a\psi (x)`$. The tensor structure can be taken away, with $`\mathrm{\Pi }^{AB}(q^2)=t^{AB}\mathrm{\Pi }(q^2)`$, where $`t^{AB}`$ is a tensor in the Lorentz and isospin indices, and $`\mathrm{\Pi }(q^2)`$ is a scalar function. The powerful feature of $`\mathrm{\Pi }(q^2)`$ is its analyticity in the $`q^2`$ variable, which will be used shortly. In Figure 1 we display various regions in the complex $`q^2`$ plane: the positive real axis is the physical region, with poles and cuts corresponding to physical states in the particular channel. In that physical region we have (for certain channels) direct experimental information. Far to the left, at the end of the negative real axis, is the deep-Euclidean region, where perturbative QCD and its operator product expansion can be applied. There is yet another region, close to $`0`$ on the negative real axis: the shallow-Euclidean region. This is the playground of the effective chiral models. Indeed, the most successful attempt to derive such a model from QCD, namely, the instanton-liquid model, has a natural limitation to that region of momenta instant1:rev ; instant2:rev . More to the point, we believe that all effective chiral quark models should be used in and only in the shallow Euclidean domain. There have been many attempts, however, to apply such models directly to the physical region. In our opinion these are doomed to fail because of the lack of confinement, which is a key player in the physical region. With unconfined quarks, the unphysical $`q\overline{q}`$ continuum obstructs any model calculation for $`q^2>(2M)^2`$, where $`M300400\mathrm{M}\mathrm{e}\mathrm{V}`$ is the “constituent” quark mass. 
For these reasons of principles we always remain with the model at low negative $`q^2`$ Boch ; tensor , and compare to the data via the dispersion relation which holds for the physical correlator. In other words, we bring the physical data to the shallow Euclidean region via the dispersion relation. This is similar in spirit to the QCD sum rule approach, where one compares the physical spectrum to the deep Euclidean region via (Borelized) dispersion relation (see Figure 1).
The correlators considered here satisfy the twice-subtracted dispersion relation
$$\mathrm{\Pi }(Q^2)=c_0+c_1Q^2+\frac{Q^4}{\pi }_0^{\mathrm{}}𝑑s\frac{\mathrm{Im}\mathrm{\Pi }(s)}{s^2(s+Q^2)},$$
(2)
where $`Q^2=q^2`$, and $`c_i`$ are subtraction constants. Relation (2) holds for the physical correlators, and does not in general hold for the correlators evaluated in models analyt . For some channels (vector channels) $`\mathrm{Im}\mathrm{\Pi }(s)`$ is obtained directly from experiment, in other channels we have indirect information only, e.g. from QCD sum rules. Let us take the $`\rho `$-meson channel, where $`j_a^\mu (x)=\frac{1}{2}\overline{\psi }(x)\gamma ^\mu \tau _a\psi (x)`$ and $`\mathrm{\Pi }_{ab}^{\mu \nu }(Q^2)=\delta _{ab}(Q^\mu Q^\nu /Q^2g^{\mu \nu })\mathrm{\Pi }_\rho (Q^2)`$. The spectral strength in related to the ratio
$$\mathrm{Im}\mathrm{\Pi }_\rho ^{\mathrm{phen}}(s)=\frac{s}{6\pi }\frac{\sigma (e^+e^{}n\pi )}{\sigma (e^+e^{}\mu ^+\mu ^{})},n=2,4,6,\mathrm{}$$
(3)
known very accurately from experiment. The spectral strength peaks at the position of the $`\rho `$-meson pole, and at large $`s`$ assumes the perturbative-QCD value. For our task it is more convenient to use the simple pole+continuum parameterization of $`\mathrm{Im}\mathrm{\Pi }_\rho (s)`$, such as used e.g. in QCD sum rules. In a given channel the fit has the form
$$\mathrm{Im}\mathrm{\Pi }^{\mathrm{phen}}(s)=\frac{\pi s^2}{g^2}\delta \left(sm^2\right)+as\theta (ss_0),$$
(4)
where $`a`$ is known from perturbative QCD, and $`g`$, $`m`$ and $`s_0`$ are chosen such that the experimental data are reproduced. In the $`\rho `$-channel we have $`a=\frac{1}{8\pi }(1+\alpha _s/\pi +\mathrm{})`$, $`m=0.77\mathrm{GeV}`$, $`g^2/(4\pi )=2.36`$, and $`s_0=1.5\mathrm{GeV}^2`$ QCD:sum:1 . Since all our calculations will be done in the leading-$`N_c`$ level, we drop the $`\alpha _s`$ correction in $`a`$.
With the parametrization (4) we readily obtain from (2)
$$\mathrm{\Pi }^{\mathrm{phen}}(Q^2)=c_0+c_1Q^2+\frac{Q^4}{g^2(m^2+Q^2)}+\frac{a}{\pi }Q^2\mathrm{log}\left(1+\frac{Q^2}{s_0}\right).$$
(5)
On the other hand, $`\mathrm{\Pi }(Q^2)`$ can be calculated directly in chiral quark models in the shallow Euclidean space. We denote this model correlation $`\mathrm{\Pi }^{\mathrm{mod}}(Q^2),`$ as want to compare it somehow to $`\mathrm{\Pi }^{\mathrm{phen}}(Q^2)`$. One possibility is to Fourier-transform to coordinate space Shuryak93 ; Shuryak93b ; Arriola95b ; JamArr . Here we apply a simpler method, which relies on just Taylor-expanding $`\mathrm{\Pi }^{\mathrm{phen}}(Q^2)`$ and $`\mathrm{\Pi }^{\mathrm{mod}}(Q^2)`$ in the $`Q^2`$ variable. For the phenomenological correlator we get from Eq. (5)
$$\mathrm{\Pi }^{\mathrm{phen}}(Q^2)=\underset{k=1}{\overset{\mathrm{}}{}}()^{k+1}b_k^{\mathrm{phen}}=c_0+c_1Q^2+Q^2\underset{k=1}{\overset{\mathrm{}}{}}()^{k+1}\left[\left(\frac{Q^2}{m^2}\right)^k+\frac{a}{k\pi }\left(\frac{Q^2}{s_0}\right)^k\right],$$
(6)
whereas for the model correlator we can write
$$\mathrm{\Pi }^{\mathrm{mod}}(Q^2)=\underset{k=1}{\overset{\mathrm{}}{}}()^{k+1}b_k^{\mathrm{mod}},$$
(7)
with the expansion coefficients $`b_k^{\mathrm{mod}}`$ yet to be determined. We can now compare the coefficients $`b_k^{\mathrm{phen}}`$ and $`b_k^{\mathrm{mod}}`$, and form a set of “sum rules”. With two subtractions in (2) we can start at $`k=2`$: $`b_k^{\mathrm{phen}}=b_k^{\mathrm{mod}}`$, $`k2`$. With the explicit form (6) this gives
$$b_2^{\mathrm{mod}}=\frac{1}{g^2m^2}+\frac{a}{\pi s_0},b_3^{\mathrm{mod}}=\frac{1}{g^2m^4}+\frac{a}{2\pi s_0^2},\mathrm{}$$
(8)
Sum rules for higher values of $`k`$ are sensitive to the details of the phenomenological spectrum, hence are not going to be of much help.
In some channels the coupling constant $`g`$ is not well known. We can then eliminate it from Eqs. (8) to obtain
$$m^2=\frac{b_2^{\mathrm{mod}}\frac{a}{\pi s_0}}{b_3^{\mathrm{mod}}\frac{a}{2\pi s_0^2}},m^2=\frac{b_3^{\mathrm{mod}}\frac{a}{2\pi s_0^2}}{b_4^{\mathrm{mod}}\frac{a}{3\pi s_0^3}},\mathrm{}$$
(9)
Sum rules (8) or (9) can be used to verify model predictions for meson correlators.
## III Results for the local model
The model evaluation of meson correlators is well known. At the leading-$`N_c`$ level one has
The diagrams with the dashed line occur only if the coupling constant $`G`$ is non-zero in a given channel. This is the case of the of the $`\sigma `$ and $`\pi `$ channels. In the vector channels they may or may not be present, depending on whether we allow for explicit vector interactions between the quarks.
We first consider the $`\rho `$-channel in the local NJL model with the proper-time regularization mitia:rev ; Bijnens ; Goeke96 ; Reinhardt96 . After some simple algebra we get
$$b_2^{\mathrm{mod}}=\frac{1}{8\pi ^2}e^{M^2/\mathrm{\Lambda }^2}\frac{1}{5M^2},b_3^{\mathrm{mod}}=\frac{1}{8\pi ^2}e^{M^2/\mathrm{\Lambda }^2}\frac{3(M^2+\mathrm{\Lambda }^2)}{140M^4\mathrm{\Lambda }^2},\mathrm{}$$
(10)
where $`M`$ is the constituent quark mass generated by the spontaneous breaking of the chiral symmetry, and $`\mathrm{\Lambda }`$ is the proper-time cut-off, adjusted such that the pion decay constant has its experimental value, $`F_\pi =93\mathrm{M}\mathrm{e}\mathrm{V}`$. Here we work in the strict chiral limit, with the current quark mass set to zero. The results for sum rules (9) are shown in Table 1. We can see that the ratios of phenomenological to model coefficients $`b_2`$ and $`b_3`$ are larger than $`1`$, and increase rather rapidly with increasing $`M`$. Thus the sum rules (8) favor lower values of $`M`$. However, even for such low values as $`M=250\mathrm{M}\mathrm{e}\mathrm{V}`$ the ratios $`b_2^{\mathrm{phen}}/b_2^{\mathrm{mod}}`$ and $`b_3^{\mathrm{phen}}/b_3^{\mathrm{mod}}`$ are still significantly above $`1`$. We conclude that the model is far from satisfying the sum rules (8).
Next, we repeat our calculation for the variant of the model where vector interactions are included Bijnens ; Klimt ; Vogl in the Lagrangian: $`\frac{1}{2}G_\rho \left((\overline{\psi }\gamma _\mu \tau ^a\psi )^2+(\overline{\psi }\gamma _\mu \gamma _5\tau ^a\psi )^2\right)`$. In that model the formulas for the axial coupling constant of the quark, $`g_A^Q`$, and for $`F_\pi `$ read
$$g_A^Q=\left(1+G_\rho \frac{N_cM^2}{\pi ^2}\mathrm{\Gamma }(0,M^2/\mathrm{\Lambda }^2)\right)^1,F_\pi ^2=g_A^Q\frac{N_cM^2}{4\pi ^2}\mathrm{\Gamma }(0,M^2/\mathrm{\Lambda }^2),$$
(11)
with $`\mathrm{\Gamma }(0,x)=_x^{\mathrm{}}𝑑te^t/t`$. The results are shown in Table 2. We can see that the ratio $`b_2^{\mathrm{phen}}/b_2^{\mathrm{mod}}`$ decreases as $`G_\rho `$ increases. However, uncomfortably large values of $`G_\rho `$ are needed in order to satisfy the sum rule, i.e. to make $`b_2^{\mathrm{phen}}/b_2^{\mathrm{mod}}1`$. The conclusion is that at moderate values of $`M`$ the model needs very large values of $`G_\rho `$ to describe properly the vector channel.
## IV Results for the non-local model
The non-local chiral quark model differs from the local versions in the fact that the interaction vertex carries momentum-dependent factors $`r(p_i)`$:
For some more details see the contribution of Bojan Golli and Georges Ripka. We use here the following form of the regulator, $`r(p^2)=1/(1+p^2/\mathrm{\Lambda }^2)^2`$, which is simpler that the instanton-model expression but is known to reproduce well its basic predictions. We use the notation $`M_k=Mr(k^2)^2`$, $`D_k=k^2+`$ $`M_k^2`$. In the vacuum sector one finds that
$`{\displaystyle \frac{1}{G}}`$ $`=`$ $`4N_cN_f{\displaystyle \frac{d_4k}{(2\pi )^4}\frac{r(k^2)^4}{D_k}},\overline{u}u=\overline{d}d=4N_c{\displaystyle \frac{d_4k}{(2\pi )^4}\frac{M_k}{D_k}},`$ (12)
$`{\displaystyle \frac{\alpha _s}{8\pi }}G_{\mu \nu }^aG_a^{\mu \nu }`$ $`=`$ $`4N_c{\displaystyle \frac{d_4k}{(2\pi )^4}\frac{M_k^2}{D_k}},F_\pi ^2=4N_c{\displaystyle \frac{d_4k}{(2\pi )^4}\frac{M_k^2k^2M_kM_k^{}+k^4M_k^2}{D_k^2}},`$
where the first equation is the stationary point condition, expressing the quark coupling constant via the parameter $`M`$, the second equation is the quark condensate in the chiral limit, the third is the gluon condensate in the chiral limit, Weiss , and the last one gives the pion decay constant BowlerB ; PlantB in the chiral limit. We use the notation $`M_k^{}=dM_k/dk^2`$.
There is a complication associated with non-local interactions, namely the Noether currents pick up extra contributions. Furthermore, the transverse parts of these currents are not uniquely determined and their choice constitutes a part of the model. Here we use the so-called straight-line $`P`$-exponent prescription for gauging the model. For a discussion of this and related issues see Refs. BowlerB ; PlantB ; Bled99 . We then find that
where
$$\mathrm{\Gamma }^\mu =\gamma ^\mu _0^1𝑑\alpha \frac{dM(k+\alpha q)}{dk_\mu },S^{\mu \nu }=_0^1𝑑\alpha _0^1𝑑\beta \frac{d^2M(k+\alpha q\beta q)}{dk_\mu dk_\nu }.$$
(13)
It is simple to check that the current is conserved, $`q_\mu \mathrm{\Pi }^{\mu \nu }=0`$.
Coming back to the sum rules (8), we note by comparing Tables 1 and 3 that the results are a bit better in the non-local model. Especially at larger values of $`M`$, around 350MeV, we gain about a factor of 2. The discrepancy leaves room for such effects as the vector-channel interactions and $`1/N_c`$ corrections, which should be the object of a further study.
The effects of non-localities in currents, as depicted in the figure for $`\mathrm{\Pi }^{\mu \nu }`$, bring about 15 % to the results. More precisely, the calculation with $`\gamma ^\mu `$ instead of $`\mathrm{\Gamma }^\mu `$ in the vertices (and without the sea-gull term $`S^{\mu \nu }`$) yields $`b_2^{\mathrm{mod}}`$ and $`b_3^{\mathrm{mod}}`$ roughly 15% lower.
## V Weinberg sum rules
Now we turn to a more formal aspect of our study. An appealing feature of non-local regulators is that now both Weinberg sum rules hold. The famous sum rules (I) and (II) are:
$`{\displaystyle \frac{1}{\pi }}{\displaystyle _0^{\mathrm{}}}{\displaystyle \frac{ds}{s}}\left[\mathrm{Im}\mathrm{\Pi }_\rho (s)\mathrm{Im}\mathrm{\Pi }_{A_1}(s)\right]`$ $`=`$ $`F_\pi ^2,`$ ()
$`{\displaystyle \frac{1}{\pi }}{\displaystyle _0^{\mathrm{}}}𝑑s\left[\mathrm{Im}\mathrm{\Pi }_\rho (s)\mathrm{Im}\mathrm{\Pi }_{A_1}(s)\right]`$ $`=`$ $`m\overline{u}u+\overline{d}d.`$ ()
Whereas (I) holds in all variants of the NJL model, in local models (II) picks up $`M`$ instead of $`m`$ on the right-hand side, thus is violated badly. To prove the sum rules in the non-local model we consider the dispersion relation
$$\mathrm{\Pi }_\rho (Q^2)\mathrm{\Pi }_{A_1}(Q^2)=\frac{1}{\pi }_0^{\mathrm{}}\frac{ds}{s+Q^2}\left[\mathrm{Im}\mathrm{\Pi }_\rho (s)\mathrm{Im}\mathrm{\Pi }_{A_1}(s)\right].$$
(14)
No subtractions are necessary. We set $`Q^2=0`$. An explicit evaluation gives
$$\mathrm{\Pi }_\rho (0)\mathrm{\Pi }_{A_1}(0)=4N_c\frac{d_4k}{(2\pi )^4}\frac{M_k^2k^2M_kM_k^{}+k^4M_k^2}{D_k^2},$$
(15)
in which we recognize our formula for $`F_\pi ^2`$ (12), thus verifying (I). To prove WSR II we multiply both sides of (14) by $`Q^2`$ and take the limit of $`Q^2\mathrm{}`$. We find, to the first order in the current quark mass $`m`$,
$`\underset{Q^2\mathrm{}}{lim}Q^2\left(\mathrm{\Pi }_\rho (Q^2)\mathrm{\Pi }_{A_1}(Q^2)\right)=`$
$`\underset{Q^2\mathrm{}}{lim}Q^2{\displaystyle \frac{d_4k}{(2\pi )^4}\frac{4N_c[M_kM_{k+Q}+m(M_k+M_{k+Q})]}{D_kD_{k+Q}}}=`$
$`4N_cm\underset{Q^2\mathrm{}}{lim}Q^2{\displaystyle \frac{d_4k}{(2\pi )^4}\left(\frac{M_k}{D_kD_{k+Q}}+\frac{M_k}{D_kD_{kQ}}\right)}=`$
$`8mN_c{\displaystyle \frac{d_4k}{(2\pi )^4}\frac{M_k}{D_k}}=2m\overline{u}u+\overline{d}d.`$ (16)
In passing from the second to the third line in the above derivation we have used the fact that $`M_p`$ is strongly concentrated around $`p=0`$, thus we could drop the term with $`M_kM_{k+Q}`$ at $`Q^2\mathrm{\Lambda }^2`$. By a similar argument in the third line we have replaced $`Q^2/\left((k\pm Q)^2+M_{k+Q}^2\right)`$ by $`1`$. Finally, in the last line we have recognized our expression for the quark condensate (12). We note that non-local contributions to the Noether currents are suppressed and do not contribute to (16). Similarly, rescattering diagrams as displayed in the equation for $`\mathrm{\Pi }^{AB}`$ can be dropped. This is because the vertex $`\mathrm{\Gamma }`$ contains the regulator, hence the diagram is strongly suppressed at large $`Q^2`$.
Clearly, the reason for the compliance with the second Weinberg sum rule is the fact that the momentum-dependent mass of the quark becomes asymptotically, in the deep-Euclidean region, just the current quark mass. This is not the case of local models Bijnens ; Dmi , where the mass is constant, and this is why these models violate Eq. (()V).
## VI Final remarks
To end this talk we describe shortly the results for other channels. In the $`\omega `$-meson channel the model results are, at the leading-$`N_c`$ level, exactly the same as for the $`\rho `$-channel. In the pseudoscalar and scalar channels we do not know the the corresponding parameter $`g`$ from experiment, hence we consider sum rules (9). In the pion channel the pion pole entirely dominates the sum rules, i.e. the continuum contribution is negligeable and we get $`m_\pi ^2b_2^{\mathrm{mod}}/b_3^{\mathrm{mod}}b_3^{\mathrm{mod}}/b_4^{\mathrm{mod}}\mathrm{}`$ in both the local and non-local variants of the model. In the $`\sigma `$-meson channel things are more interesting. Whereas in the local model the sum rules (9) simply give the pole at twice the quark mass, $`m_\sigma =2M`$, in the non-local model the predicted value of $`m_\sigma `$ ranges from 400MeV at $`M=300\mathrm{M}\mathrm{e}\mathrm{V}`$ to 470MeV at $`M=450\mathrm{M}\mathrm{e}\mathrm{V}`$ and is insensitive to the value of the threshold parameter $`s_0`$.
One of us (WB) is grateful to Bojan Golli, Georges Ripka, Enrique Ruiz Arriola and Mike Birse for many useful conversations. |
no-problem/9911/astro-ph9911308.html | ar5iv | text | # Stability and collapse of rapidly rotating, supramassive neutron stars: 3D simulations in general relativity
## I Introduction
Neutron stars found in nature are rotating. Rotation can support stars with higher mass than the maximum static limit, producing “supramassive” stars, as defined and calculated by Cook, Shapiro and Teukolsky. Such supramassive stars can be created when neutron stars accrete gas from a normal binary companion; this scenario can also lead to “recycled” pulsars, for which model calculations in general relativity exist. Alternatively, supramassive stars can be produced in the merger of binary neutron stars.
Pulsars are believed to be uniformly rotating. Eventually, viscosity will drive any equilibrium star to uniform rotation. Uniformly rotating configurations with sufficient angular momentum will be driven to the mass-shedding limit (at which the star’s equator rotates with the Kepler frequency, so that any further spin-up would disrupt the star). Supramassive neutron stars at the mass-shedding limit are the subject of this paper.
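As a rough quantitative anchor for the mass-shedding limit, the Kepler frequency can be estimated in the Newtonian limit as Omega_K = sqrt(G M / R^3). The sketch below is not part of the paper's calculation: it assumes fiducial mass and radius values and ignores the relativistic corrections that a full rotating-star model would include.

```python
import math

G = 6.674e-8       # Newton's constant [cgs]
M_SUN = 1.989e33   # solar mass [g]

def kepler_limit(mass_msun, radius_km):
    """Newtonian mass-shedding estimate: the angular frequency at which
    the equator orbits with the Kepler frequency, Omega_K = sqrt(G M / R^3),
    and the corresponding limiting spin period."""
    m = mass_msun * M_SUN
    r = radius_km * 1.0e5                    # km -> cm
    omega_k = math.sqrt(G * m / r**3)        # rad/s
    period_ms = 2.0 * math.pi / omega_k * 1.0e3
    return omega_k, period_ms

# Canonical 1.4 M_sun, 10 km star: a sub-millisecond limiting period.
omega, p_ms = kepler_limit(1.4, 10.0)
```

For these fiducial numbers the Newtonian limiting period comes out at roughly half a millisecond; fully relativistic sequences of rotating models shift this estimate by tens of percent.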
The dynamical stability of rotating neutron stars, including supramassive configurations, against radial perturbations, as well as the final fate of unstable stars undergoing collapse, has not been established definitively.
Along a sequence of nonrotating, spherical stars, parameterized by central density, the maximum mass configuration defines a critical density above which the stars are unstable to radial oscillations: stars on the high density, unstable branch collapse to black holes on dynamical timescales .
Establishing the onset of instability for rotating stars is more complicated. Chandrasekhar and Friedman and Schutz developed a formalism to identify points of dynamical instability to axisymmetric perturbations along sequences of rotating stars. In their formalism, however, a complicated functional for a set of trial functions has to be evaluated. Probably because of the complexity of their methods, explicit calculations have never been performed. Friedman, Ipser and Sorkin showed that for uniformly rotating stars the onset of secular instability can be located quite easily by applying turning-point methods along sequences of constant angular momentum. This method has been applied to find points of secular instability in numerical models of uniformly rotating neutron stars .
Turning point methods along such sequences can only identify the point of secular, and not dynamical instability, since one is comparing neighboring, uniformly rotating configurations with the same angular momentum. Maintaining uniform rotation during perturbations tacitly assumes high viscosity. In reality, the star will preserve circulation as well as angular momentum in a dynamical perturbation, and not uniform rotation. It is thus possible that a secularly unstable star may be dynamically stable: for sufficiently small viscosity, the star may change to a differentially rotating, stable configuration instead of collapsing to a black hole. Ultimately, the presence of viscosity will bring the star back into rigid rotation, driving the star to an unstable state. A secular instability evolves on a dissipative, viscous timescale, while a dynamical instability evolves on a collapse (free-fall) timescale. Friedman, Ipser and Sorkin showed that along a sequence of uniformly rotating stars, a secular instability always occurs before a dynamical instability (implying that all secularly stable stars are also dynamically stable).
For spherical stars, the onset of secular and dynamical instability coincides (since for a nonrotating star a radial perturbation conserves both circulation and uniform rotation). This suggests that for uniformly rotating stars for which the rotational kinetic energy $`T`$ is typically a small fraction of the gravitational energy $`W`$, the onset of dynamical instability is close to the onset of secular instability. One goal of this paper is to test this hypothesis and to identify the onset of dynamical instability in rotating stars.
We also follow the nonlinear growth of the radial instability and determine the final fate of unstable configurations. Nonrotating neutron stars collapse to black holes, but rotating stars could also form black holes surrounded by massive disks. Also, if $`J/M_g^2`$ exceeds the Kerr limit of unity (where $`J`$ is the angular momentum and $`M_g`$ the total mass-energy or gravitational mass of the progenitor star), not all of the matter can collapse directly to a black hole. As pointed out recently, such a system could be the central source of $`\gamma `$-ray bursts .
Numerical hydrodynamic simulations in full general relativity (GR) provide the best approach to understanding the collapse of rotating neutron stars. Two groups included rotation in axisymmetric, relativistic hydrodynamic codes to study the collapse of rotating massive stars to black holes. The collapse and fate of unstable rotating neutron stars, however, have never been simulated before. This is probably because numerical methods for constructing initial data describing rapidly rotating neutron stars, as well as numerical tools, techniques and sufficient computational resources, have only become available recently. Over the last few years, robust numerical techniques for constructing equilibrium models of rotating neutron stars in full GR have been established . More recently, methods for the numerical evolution of 3D gravitational fields have been developed (see, e.g., ). In a previous paper , Shibata presented a wide variety of numerical results of test problems for his 3D hydrodynamic GR code and demonstrated that simulations for many interesting problems are now feasible.
In this paper, we perform simulations in full GR for rapidly rotating neutron stars. This study is a by-product of our long-term effort to build robust, fully relativistic, hydrodynamic evolution codes in 3D. We adopt rigidly rotating supramassive neutron stars at mass-shedding as initial data. By exploring rotating stars at mass-shedding, we can clarify the effect of rotation most efficiently. Such stars are also the plausible outcome of pulsar recycling and binary coalescence. Following Ref. , we prepare equilibrium states for such stars using an approximation in which the spatial metric is assumed to be conformally flat. We then perform numerical simulations to investigate the dynamical stability of the rapidly rotating neutron stars against collapse and to determine the final fate of the unstable neutron stars. We believe that this is the first 3D simulation of the dynamical collapse of a rotating neutron star in full GR.
In Newtonian physics, stars with sufficient rotation ($`T/|W|\gtrsim 0.27`$) are dynamically unstable to bar formation . Since $`T/|W|`$ increases approximately with $`R^{-1}`$ as a star collapses, radial collapse may drive the dynamical growth of nonaxisymmetric bars. To allow for this possibility, a numerical simulation must be performed in 3D, not in axisymmetry.
In Sec. II, we briefly describe our formulation, initial data, and spatial gauge conditions. In Sec. III, we present numerical results. First, we study the dynamical stability of supramassive rotating neutron stars at the mass-shedding limit. We then study the final products of the unstable neutron stars adopting three kinds of initial conditions: In the first case, we choose a marginally stable neutron star and slightly reduce the pressure for destabilization as the initial condition. In the second case, we prepare a stable star, and then reduce a large fraction of the pressure suddenly. While we are primarily interested in computational consequences, this scenario may provide a model for sudden phase transitions inside neutron stars (see, e.g., and references therein). In the third case, we prepare a stable star and add more mass near the surface to induce collapse. In all the cases, we find that the final products are black holes without surrounding massive disks, which we can readily explain. In Sec. IV we provide a summary. Throughout this paper, we adopt the units $`G=c=M_{\odot }=1`$ where $`G`$, $`c`$ and $`M_{\odot }`$ denote the gravitational constant, speed of light and solar mass, respectively. We use Cartesian coordinates $`x^k=(x,y,z)`$ as the spatial coordinate with $`r=\sqrt{x^2+y^2+z^2}`$; $`t`$ denotes coordinate time.
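As an aside, the geometrized units adopted above can be converted back to physical units in a few lines. The sketch below is illustrative and not part of the paper; the SI values of $`G`$, $`c`$ and the solar mass are standard reference figures. It reproduces the length and time units, 1.477 km and 4.927 $`\mu `$sec, quoted later in the caption of Table I.

```python
# Illustrative conversion of the geometrized units G = c = M_sun = 1
# back to SI.  The unit of length is G*M_sun/c^2 and the unit of time
# is G*M_sun/c^3.  The constants below are standard reference values
# (an assumption of this sketch, not taken from the paper).
G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m / s
M_sun = 1.989e30    # kg

unit_length_km = G * M_sun / c**2 / 1e3   # ~1.477 km
unit_time_us = G * M_sun / c**3 * 1e6     # ~4.927 microseconds

print(f"length unit = {unit_length_km:.3f} km, time unit = {unit_time_us:.3f} us")
```

These are the conversion factors used implicitly whenever masses, radii and periods are quoted in the tables below.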
## II Methods
### A Summary of formulation
We perform numerical simulations using the same formulation as in , to which the reader may refer for details and basic equations. The fundamental variables used in this paper are:
$`\rho `$ $`:\mathrm{rest}\mathrm{mass}\mathrm{density},`$ (1)
$`\epsilon `$ $`:\mathrm{specific}\mathrm{internal}\mathrm{energy},`$ (2)
$`P`$ $`:\mathrm{pressure},`$ (3)
$`u^\mu `$ $`:\mathrm{four}\mathrm{velocity},`$ (4)
$`\alpha `$ $`:\mathrm{lapse}\mathrm{function},`$ (5)
$`\beta ^k`$ $`:\mathrm{shift}\mathrm{vector},`$ (6)
$`\gamma _{ij}`$ $`:\mathrm{metric}\mathrm{in}3\mathrm{D}\mathrm{spatial}\mathrm{hypersurface},`$ (7)
$`\gamma `$ $`=e^{12\varphi }=\mathrm{det}(\gamma _{ij}),`$ (8)
$`\stackrel{~}{\gamma }_{ij}`$ $`=e^{-4\varphi }\gamma _{ij},`$ (9)
$`K_{ij}`$ $`:\mathrm{extrinsic}\mathrm{curvature}.`$ (10)
General relativistic hydrodynamic equations are solved using the van Leer scheme for the advection terms . Geometric variables (together with three auxiliary functions $`F_i`$ and the trace of the extrinsic curvature) are evolved with a free evolution code. The boundary conditions for geometric variables are the same as those adopted in . Violations of the Hamiltonian constraint and conservation of mass and angular momentum are monitored as code checks. Several test calculations, including spherical collapse of dust, stability of spherical neutron stars, and the evolution of rotating neutron stars as well as corotating binary systems have been described in . Black holes that form during the late phase of the collapse are located with an apparent horizon finder as described in .
We also define a density $`\rho _*(=\rho \alpha u^0e^{6\varphi })`$ from which the total rest mass of the system can be integrated as
$$M_*=\int d^3x\,\rho _*.$$
(11)
We perform the simulations assuming $`\pi `$-rotation symmetry around the $`z`$-axis as well as a reflection symmetry with respect to the $`z=0`$ plane and using a fixed uniform grid with the typical size $`153\times 77\times 77`$ in $`xyz`$. We have also performed test simulations with different grid resolutions to check that the results do not change significantly.
The slicing and spatial gauge conditions we use in this paper are basically the same as those adopted in ; i.e., we impose an “approximate” maximal slice condition ($`K_k^k\simeq 0`$) and an “approximate” minimum distortion gauge condition ($`\stackrel{~}{D}_i(\partial _t\stackrel{~}{\gamma }^{ij})\simeq 0`$ where $`\stackrel{~}{D}_i`$ is the covariant derivative with respect to $`\stackrel{~}{\gamma }_{ij}`$). However, for the case when a rotating star collapses to a black hole, we slightly modify the spatial gauge condition in order to improve the spatial resolution around the black hole. The method of the modification is described in Sec. II.C.
### B Initial conditions for rotating neutron stars
As initial conditions, we adopt rapidly and rigidly rotating supramassive neutron stars in (approximately) equilibrium states. The approximate equilibrium states are obtained by choosing a conformally flat spatial metric, i.e., assuming $`\gamma _{ij}=e^{4\varphi }\delta _{ij}`$. This approach is computationally convenient and, as illustrated in , provides an excellent approximation to exact axisymmetric equilibrium configurations.
Throughout this paper, we assume a $`\mathrm{\Gamma }`$-law equation of state in the form
$$P=(\mathrm{\Gamma }1)\rho \epsilon ,$$
(12)
where $`\mathrm{\Gamma }`$ is the adiabatic constant. For hydrostatic problems, the equation of state can be rewritten in the polytropic form
$$P=K\rho ^\mathrm{\Gamma },\text{ }\mathrm{\Gamma }=1+\frac{1}{n},$$
(13)
where $`K`$ is the polytropic constant and $`n`$ the polytropic index. We adopt $`\mathrm{\Gamma }=2`$ ($`n=1`$) as a reasonable qualitative approximation to realistic (moderately stiff) cold, nuclear equations of state.
Physical units enter the problem only through the polytropic constant $`K`$, which can be chosen arbitrarily or else completely scaled out of the problem. We often quote values for $`K=200/\pi `$, for which in our units ($`G=c=M_{\odot }=1`$) the radius is $`R=(\pi K/2)^{1/2}=10`$ in the Newtonian limit, corresponding to $`R\simeq 15`$ km. Since $`K^{n/2}`$ has units of length, dimensionless variables can be constructed as
$`\overline{M}_*=M_*K^{-n/2},`$ $`\overline{M}_g=M_gK^{-n/2},`$ $`\overline{R}=RK^{-n/2},`$ (14)
$`\overline{J}=JK^{-n},`$ $`\overline{\mathrm{P}}=\mathrm{P}K^{-n/2},`$ $`\overline{\rho }=\rho K^n,`$ (15)
where $`\mathrm{P}`$ denotes rotational period. All results can be scaled for arbitrary $`K`$ using Eqs. (14).
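For concreteness, the $`K`$-scaling can be illustrated with a short script. This is a sketch of the arithmetic, not code from the paper: it checks the quoted Newtonian radius for $`K=200/\pi `$ and shows how a physical radius rescales with $`K`$ while the barred (dimensionless) variable stays fixed.

```python
import math

# Illustrative check of the K-scaling for a Gamma = 2 (n = 1) polytrope,
# in units G = c = M_sun = 1.
n = 1.0
K = 200.0 / math.pi

# Newtonian n = 1 polytrope radius quoted in the text: R = (pi K / 2)^(1/2).
R = math.sqrt(math.pi * K / 2.0)      # = 10 in these units (~15 km)

# Dimensionless variable of Eq. (14): Rbar = R K^(-n/2) is K-independent.
Rbar = R * K**(-n / 2.0)

# Rescaling to another polytropic constant K2: at fixed Rbar the physical
# radius scales as K^(n/2).
K2 = 2.0 * K
R2 = Rbar * K2**(n / 2.0)             # = sqrt(2) * R
```

The same rescaling applies to the masses, period and angular momentum via the exponents in Eqs. (14)–(15).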
For the construction of the (approximate) equilibrium states as initial data, we adopt a grid in which the semi-major axes of the stars, along the $`x`$ and $`y`$-axes, are covered with 40 grid points. For rotating stars at mass-shedding near the maximum mass, the semi-minor (rotation) axis along the $`z`$-axis is covered with $`23`$ or 24 grid points.
In Fig. 1, we show the relation between the gravitational mass $`M_g`$ and the central density $`\rho _c`$ for the neutron stars. The solid and dotted lines denote the relations for spherical neutron stars and stars rotating at the mass-shedding limit, constructed from the exact stationary matter and field equations. The open circles denote the approximate equilibrium states at the mass-shedding limit obtained using the conformal flatness approximation. We find that at $`\rho _c=\rho _{\mathrm{max}}`$, where $`0.0040\lesssim \rho _{\mathrm{max}}\lesssim 0.0045`$, $`M_g`$ takes its maximum value. For stars with stiff equations of state the numerical results in Ref. show that the central density at the onset of secular instability (which hereafter we refer to as $`\rho _{\mathrm{crit}}`$) is very close to $`\rho _{\mathrm{max}}`$ (see, e.g., Fig. 4 of ). We therefore consider stars with $`\rho _c\gtrsim \rho _{\mathrm{max}}(\simeq \rho _{\mathrm{crit}})`$ as candidates for dynamically unstable stars.
### C Spatial gauge condition
When no black hole is formed, we adopt the approximate minimum distortion gauge condition as our spatial gauge condition (henceforth referred to as the AMD gauge condition). However, as pointed out in previous papers , during black hole formation (i.e., for an infalling radial velocity field), the expansion of the shift vector $`\partial _i\beta ^i`$ and the time derivative of $`\varphi `$ become positive in this gauge condition together with maximal slicing. Accordingly, the coordinates diverge outward and the resolution around the black hole forming region becomes worse and worse during the collapse. This undesirable property motivates us to modify the AMD gauge condition when we treat black hole formation. Specifically, we modify the AMD shift vector according to
$$\beta ^i=\beta _{\mathrm{AMD}}^i-f(t,r)\frac{x^i}{r+ϵ}\beta _{\mathrm{AMD}}^{r^{\prime }}.$$
(16)
Here $`\beta _{\mathrm{AMD}}^i`$ denotes the shift vector in the AMD gauge condition, $`\beta _{\mathrm{AMD}}^{r^{\prime }}\equiv x^k\beta _{\mathrm{AMD}}^k/(r+ϵ)`$, $`ϵ`$ is a small constant much less than the grid spacing, and $`f(t,r)`$ is a function chosen as
$$f(t,r)=f_0(t)\frac{1}{1+(r/M_{g,0})^4}.$$
(17)
where $`M_{g,0}`$ denotes the gravitational mass of a system at $`t=0`$. We determine $`f_0(t)`$ from $`\varphi _0=\varphi (r=0)`$. Taking into account the fact that the resolution around $`r=0`$ deteriorates when $`\varphi _0`$ becomes large, we choose $`f_0`$ according to
$$f_0(t)=\{\begin{array}{cc}1\hfill & \mathrm{for}\varphi _0\ge 0.8,\hfill \\ 2.5\varphi _0-1\hfill & \mathrm{for}0.4\le \varphi _0\le 0.8,\text{ Type I}\hfill \\ 0\hfill & \mathrm{for}\varphi _0<0.4,\hfill \end{array}$$
(18)
or
$$f_0(t)=\{\begin{array}{cc}1\hfill & \mathrm{for}\varphi _0\ge 0.6,\hfill \\ 5\varphi _0-2\hfill & \mathrm{for}0.4\le \varphi _0\le 0.6,\text{ Type II}\hfill \\ 0\hfill & \mathrm{for}\varphi _0<0.4.\hfill \end{array}$$
(19)
Note that for spherical collapse with $`f_0=1`$, $`\partial _i\beta ^i=0`$ at $`r=0`$ in both cases. In general, we find numerically that $`\partial _i\beta ^i`$ is small near the origin, where the collapse proceeds nearly spherically. In the following, we refer to the modified gauge conditions of $`f_0`$ defined by Eqs. (18) and (19) as type I and II, respectively. We employ them whenever a rotating neutron star collapses to a black hole.
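The piecewise ramps in Eqs. (18)–(19) are simple enough to transcribe directly. The sketch below is our own illustration of this control logic, not the authors' code; it interpolates $`f_0`$ linearly between the stated thresholds of $`\varphi _0`$ and applies the radial falloff of Eq. (17).

```python
def f0_type_I(phi0):
    # Eq. (18): f0 ramps linearly from 0 to 1 as phi0 goes from 0.4 to 0.8.
    if phi0 >= 0.8:
        return 1.0
    if phi0 >= 0.4:
        return 2.5 * phi0 - 1.0
    return 0.0

def f0_type_II(phi0):
    # Eq. (19): a steeper ramp, reaching 1 already at phi0 = 0.6.
    if phi0 >= 0.6:
        return 1.0
    if phi0 >= 0.4:
        return 5.0 * phi0 - 2.0
    return 0.0

def f_radial(r, f0, M_g0):
    # Eq. (17): radial falloff of the gauge modification, scale set by
    # the initial gravitational mass M_g0.
    return f0 / (1.0 + (r / M_g0)**4)
```

Both ramps are continuous at the matching points ($`f_0=0`$ at $`\varphi _0=0.4`$ and $`f_0=1`$ at the upper threshold), so the gauge modification switches on smoothly as the collapse deepens.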
## III Numerical results
### A Dynamical stability
We investigate the stability of supramassive rotating neutron stars at the mass-shedding limit against gravitational collapse. We adopt the stars marked with (A), (B), (C), (D), and (E) in Fig. 1 as initial conditions for our numerical experiments. The physical properties of these stars are summarized in Table I. In this numerical experiment, we adopt two initial conditions for each model. In one case, we use the (approximate) equilibrium states of rotating neutron stars without any perturbation, and in the other case, we uniformly reduce the pressure by $`1\%`$ (by decreasing $`K`$; i.e., $`\mathrm{\Delta }K/K=1\%`$, where $`\mathrm{\Delta }K`$ denotes the depletion of $`K`$).
In Fig. 2, we show $`\rho `$ and $`\alpha `$ at $`r=0`$ as a function of $`t/\mathrm{P}`$ where $`\mathrm{P}`$ is the rotation period of each rotating star. We find that when $`\rho _c<\rho _{\mathrm{crit}}`$ (i.e., stars (A), (B) and (C)), the rotating stars oscillate independent of the initial perturbations. Hence, these stars are stable against gravitational collapse. We note that we find small amplitude oscillations even when we do not reduce the pressure initially. This is caused by small deviations of the initial data from true equilibrium states, both because of the conformal flatness approximation and because of numerical truncation error.
We expect the oscillation frequencies in Fig. 2 to be those of the fundamental quasi-radial oscillation mode of these rotating stars. The oscillation periods increase with the central density. At the marginally stable point ($`\rho _c=\rho _{\mathrm{crit}}`$), the period becomes infinite.
Star (D) does not collapse either, and instead oscillates for $`\mathrm{\Delta }K=0`$. However, it collapses for $`\mathrm{\Delta }K/K=1\%`$. This indicates that star (D) is located near the onset point of dynamical instability. It is found that the oscillation amplitude for the case $`\mathrm{\Delta }K=0`$ is very large compared with those for (A)–(C). This could be caused by two effects: (i) star (D) is near the onset of dynamical instability and hence a small deviation from true equilibrium can induce large perturbations; (ii) the conformal flatness approximation results in larger deviations from true equilibrium for more relativistic configurations, which causes a larger initial perturbation. Apparently, the deviation caused by the numerical truncation error and/or the conformal flatness approximation stabilizes the configuration, and for $`\mathrm{\Delta }K=0`$ the star oscillates with an average value of the central density ($`\rho (r=0)\simeq 0.004`$) slightly smaller than the initial value $`\rho _c\simeq 0.0047`$. This suggests that star (D) with $`\mathrm{\Delta }K=0`$ is a perturbed state of a true equilibrium star of $`\rho _c\simeq 0.004<\rho _{\mathrm{crit}}`$. The results for star (E) are similar to, but more pronounced than those for star (D), suggesting that the initial configuration (E) may also be a perturbation of a stable star with $`\rho _c\simeq 0.004<\rho _{\mathrm{crit}}`$.
To determine the onset of dynamical instability more sharply, we perform further simulations adopting $`\mathrm{\Delta }K/K=0.7\%`$, 0.8%, and $`0.9\%`$ for star (D). In Fig. 3, we show $`\rho `$ and $`\alpha `$ at $`r=0`$ as a function of $`t/\mathrm{P}`$ for star (D) for the various initial depletion factors. We find that for $`\mathrm{\Delta }K/K\le 0.8\%`$, the stars behave similarly to the case $`\mathrm{\Delta }K=0`$; i.e., the stars simply oscillate with an average density $`\lesssim \rho _{\mathrm{crit}}`$. For $`\mathrm{\Delta }K/K\ge 0.9\%`$, however, the stars quickly collapse to a black hole. We do not find any examples in which the stars oscillate with average densities larger than $`0.0045\simeq \rho _{\mathrm{crit}}`$. This indicates that equilibrium stars with $`\rho _c\gtrsim \rho _{\mathrm{crit}}`$ are dynamically unstable. Although we cannot specify the onset of dynamical instability with arbitrary precision, our present results indicate that it nearly coincides with the onset of secular instability.
### B Final outcome of unstable collapse
To study the final outcome of the gravitational collapse of rapidly rotating neutron stars, we perform a number of numerical experiments for several different initial conditions.
First, we evolve star (D) with $`\mathrm{\Delta }K/K=1\%`$ for different spatial gauge conditions. In Fig. 4, we show $`\varphi `$ and $`\alpha `$ at $`r=0`$ as a function of time for the AMD gauge (solid line), the modified gauge of type I (dotted line) and type II (dashed line). As stated in Sec. II.C, $`\varphi (r=0)`$ increases quickly during the gravitational collapse for the AMD gauge. In this case, $`\alpha (r=0)`$ stops decreasing in the late phase of the collapse where $`\varphi (r=0)\gtrsim 1`$, which is a numerical artifact. This is probably caused by insufficient resolution around the black hole forming region. For the modified gauge conditions, $`\alpha (r=0)`$ smoothly approaches zero. We note that $`\alpha (r=0)`$ ought to be independent of the spatial gauge condition, so that the deviation of the AMD results from the modified gauge condition results is a numerical artifact. This shows that the results for $`t/\mathrm{P}\gtrsim 1.4`$ computed in the AMD gauge condition are unreliable, and indicates that the modification of the AMD gauge condition is an appropriate strategy to overcome the deterioration of the resolution in the late phase of the collapse.
In Fig. 5, we show the time variation of the total angular momentum of the system. Since the evolving system is nearly axisymmetric, the angular momentum should be nearly conserved. In all three cases, however, the angular momentum slowly decreases in the early phase, which is caused by numerical dissipation at the stellar surface. As the collapsing star approaches a black hole, the angular momentum changes quickly because the resolution becomes increasingly worse. In the AMD gauge case, the error amounts to $`5\%`$, while in the modified gauge cases, it is $`1.5\%`$ at the time when the apparent horizon is found at $`t\simeq 1.4\mathrm{P}`$ (see Fig. 6). This is further evidence that the modified gauge conditions are better suited for simulations of black hole formation.
It should be noted that even with the modified gauge conditions, the resolution becomes too poor to perform accurate simulations for times exceeding $`t/\mathrm{P}\simeq 1.5`$. This is because the metric $`\stackrel{~}{\gamma }_{ij}`$ becomes very spiky around the apparent horizon (i.e., because of horizon throat stretching). To perform simulations for times much later than horizon formation, special computational tools are necessary, probably including apparent horizon boundary conditions .
In Fig. 6, we show snapshots of density contour lines for the density $`\rho _*`$ and the velocity field $`v^i(=u^i/u^0)`$ in the equatorial and $`y=0`$ planes. The results are obtained in the modified gauge condition of type I. It is found that after about 1.4 orbital periods almost all the matter has collapsed to a black hole. In Fig. 7, we show the fraction of the rest mass inside a coordinate radius $`r`$, defined as
$$\frac{M_*(r)}{M_*}=\frac{1}{M_*}\int _{|x^i|<r}d^3x\,\rho _*.$$
(20)
$`R_e`$ denotes the coordinate axial length in the equatorial plane at $`t=0`$ (see Table I). Note that at $`t\simeq 1.4\mathrm{P}`$, the apparent horizon is located at $`r\simeq 0.2R_e`$. Thus, almost all the matter (more than $`99\%`$) has been absorbed by the black hole by that time. Although Fig. 6 shows that a small fraction of the matter has not yet been swallowed by the black hole, the matter which stays inside $`r\lesssim R_e\simeq 5M_g`$ will ultimately have to fall in. This is because the radius of the innermost stable circular orbit (ISCO) is $`R_{\mathrm{ISCO}}^{\mathrm{SS}}\simeq 5M_g`$ for a (nonrotating) Schwarzschild black hole in our gauge. The collapse of rotating neutron stars with $`J/M_g^2\simeq 0.6<1`$ leads to moderately rotating Kerr black holes, for which $`R_{\mathrm{ISCO}}^{\mathrm{SS}}`$ is an adequate approximation to the ISCO. This fact already suggests that no disk will form around the black hole.
The same reason also suggests why no massive disk forms during the collapse: the equatorial radius $`R_e`$ is initially less than $`5M_g`$, and hence inside the radius which will become the ISCO of the final black hole.
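The quoted value $`R_{\mathrm{ISCO}}^{\mathrm{SS}}\simeq 5M_g`$ can be checked by transforming the Schwarzschild ISCO (areal radius $`6M`$) to isotropic coordinates, which is close to the spatial gauge used here. This back-of-the-envelope check is our own sketch, not taken from the paper.

```python
import math

# Transform the Schwarzschild ISCO (areal radius 6M) to isotropic
# coordinates.  The relation r_areal = r_iso (1 + M/(2 r_iso))^2 is
# inverted in closed form below.
M = 1.0
r_areal = 6.0 * M

r_iso = 0.5 * (r_areal - M + math.sqrt(r_areal * (r_areal - 2.0 * M)))

# Round-trip check of the coordinate transformation.
r_back = r_iso * (1.0 + M / (2.0 * r_iso))**2
```

This gives $`r_{\mathrm{iso}}\simeq 4.95M`$, consistent with the $`R_{\mathrm{ISCO}}^{\mathrm{SS}}\simeq 5M_g`$ used in the argument above.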
Next, we evolve the initial configuration (A), depleting the pressure by various amounts, which may provide a model for sudden phase transitions inside neutron stars . In Fig. 8, we show $`\rho `$ and $`\alpha `$ at $`r=0`$ as a function of time for $`\mathrm{\Delta }K/K=0`$, $`1\%`$, $`5\%`$, and $`10\%`$. When the depletion factor does not exceed $`5\%`$, the star simply oscillates, but for $`\mathrm{\Delta }K/K=10\%`$ the star collapses dynamically. Note that depleting the pressure by $`10\%`$ is approximately equivalent to increasing the gravitational mass by $`5\%`$ according to the scaling relation for polytropic stars with $`\mathrm{\Gamma }=2`$ (see Eq. (14)). Since the gravitational mass of star (A) is about $`3\%`$ less than the maximum allowed mass, it is quite reasonable that this star collapses. In the following two simulations, we focus on evolutions of star (A) with $`\mathrm{\Delta }K/K=10\%`$.
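The equivalence between a $`10\%`$ pressure depletion and a roughly $`5\%`$ mass increase follows from the scaling in Eq. (14): for $`n=1`$, masses scale as $`K^{1/2}`$. A minimal sketch of this arithmetic (ours, not the paper's):

```python
# For Gamma = 2 (n = 1), the dimensionless mass Mbar = M K^(-1/2) is
# fixed along the sequence (Eq. 14), so physical masses scale as K^(1/2).
n = 1.0
depletion = 0.10                                 # 10% reduction of K

mass_scale = (1.0 - depletion)**(n / 2.0)        # ~0.949
relative_mass_increase = 1.0 / mass_scale - 1.0  # ~5.4%, i.e. roughly 5%
```

Reducing $`K`$ by $`10\%`$ therefore lowers the maximum allowed mass by about $`5\%`$, which destabilizes star (A), whose mass is only about $`3\%`$ below the maximum.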
In order to test if nonaxisymmetric (bar-mode) perturbations have enough time to grow appreciably during the gravitational collapse, we excite such a perturbation by modifying the initial density profile $`\rho _{}`$ according to
$$\rho _*=(\rho _*)_0\left(1+0.3\frac{x^2-y^2}{R_e^2}\right),$$
(21)
where $`(\rho _*)_0`$ denotes the density profile of star (A) in the unperturbed state.
In Figs. 9 and 10, we show snapshots of density contour lines for $`\rho _*`$ and the velocity field $`v^i`$ in the equatorial and $`y=0`$ planes for the above axisymmetric and nonaxisymmetric initial conditions. For these simulations we adopted the modified gauge condition of type II. In Fig. 11, we also show $`M_*(r)/M_*`$ as a function of time for these cases. We again find that irrespective of the initial perturbation, almost all the matter collapses into the black hole without any massive disk or ejecta around the black hole. Again, this is a consequence of the stars being sufficiently compact that almost all the matter ends up inside the ISCO of the final black hole. Note that the star with the nonaxisymmetric perturbation evolves very similarly to the unperturbed, axisymmetric star, showing that the dynamical collapse does not leave the nonaxisymmetric perturbation enough time to grow appreciably. This can be understood quite easily from the following heuristic (and Newtonian) argument. Star (A) has an initial equatorial radius of $`R_e\simeq 5.5M_g`$, and can therefore shrink by less than a factor of 3 before a black hole forms. Its initial value of $`T/|W|`$ is about 0.09 (see Table I). Since $`T/|W|`$ scales approximately with $`R^{-1}`$, it can just barely reach the critical value $`(T/|W|)_{\mathrm{dyn}}\simeq 0.27`$ for dynamical instability before a black hole forms. It is therefore not surprising that we do not find dynamically growing nonaxisymmetric perturbations. Note that the star does reach the critical value for secular instability to bar formation (which may be as small as $`(T/|W|)_{\mathrm{sec}}\simeq 0.1`$ for very compact configurations, see ), so that viscosity or emission of gravitational waves could drive the star unstable. However, this mode would grow on the corresponding dissipative timescale, which is much longer than the dynamical timescale of the collapse.
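The heuristic estimate above can be spelled out numerically. In the sketch below the final radius of order $`2M_g`$ (the horizon scale) is our own assumption for illustration; the other numbers are taken from the text and Table I.

```python
# Newtonian estimate: T/|W| scales roughly as 1/R during the collapse.
t_over_w_init = 0.09   # initial value for star (A), from Table I
R_init = 5.5           # initial equatorial radius in units of M_g
R_final = 2.0          # rough horizon scale, an assumption of this sketch

t_over_w_final = t_over_w_init * R_init / R_final
# ~0.25: just short of the dynamical bar-mode threshold ~0.27.
```

The star thus only grazes the dynamical bar-mode threshold before the horizon forms, consistent with the absence of bar growth in the simulations.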
In order to make these statements about nonaxisymmetric growth more quantitative, we compare the quantities
$$2\frac{x_{\mathrm{rms}}-y_{\mathrm{rms}}}{x_{\mathrm{rms}}+y_{\mathrm{rms}}}$$
(22)
for the perturbed and unperturbed evolutions in Fig. 12. Here, $`x_{\mathrm{rms}}^i`$ denotes the mean square axial length defined as
$$x_{\mathrm{rms}}^i=\left[\frac{1}{M_*}\int d^3x\,\rho _*(x^i)^2\right]^{1/2}.$$
(23)
The figure shows very clearly that the axial ratio oscillates for the perturbed evolution, but does not grow on the dynamical timescale of the collapse.
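Equations (22)–(23) are easy to reproduce on gridded data. The sketch below (hypothetical point data, not the paper's diagnostic code) computes the density-weighted rms axial lengths and the distortion measure; an axisymmetric distribution returns zero, while an $`m=2`$ elongation returns a finite value.

```python
import math

def rms_axes(points, rho):
    """Density-weighted rms lengths along x and y (Eq. 23 on point data)."""
    m = sum(rho)
    x_rms = math.sqrt(sum(r * x * x for (x, y), r in zip(points, rho)) / m)
    y_rms = math.sqrt(sum(r * y * y for (x, y), r in zip(points, rho)) / m)
    return x_rms, y_rms

def bar_distortion(points, rho):
    """The diagnostic of Eq. (22): 2 (x_rms - y_rms) / (x_rms + y_rms)."""
    x_rms, y_rms = rms_axes(points, rho)
    return 2.0 * (x_rms - y_rms) / (x_rms + y_rms)

# A symmetric ring gives zero; stretching it along x gives a finite value.
ring = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)]
bar = [(2.0, 0.0), (-2.0, 0.0), (0.0, 1.0), (0.0, -1.0)]
```

In the simulations this quantity is evaluated from the full 3D density $`\rho _*`$ on the grid; the point data above merely illustrate the definition.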
Finally, we model a scenario in which a small amount of matter accretes onto a stable star resulting in destabilization of the star. As the stable star, we again adopt configuration (A) and to model the matter accretion we modify the initial density distribution according to
$$\rho _*=(\rho _*)_0\left(1+0.5\frac{r^2}{R_e^2}\right),$$
(24)
with all the matter moving with the same initial angular velocity. Most of the enhancement is in the outer region, which mimics the effect of accretion. In this case, the total rest mass is about $`9.5\%`$ larger than that of star (A), so that the mass is larger than the maximum allowed mass along the sequence of rotating neutron stars. The value of $`J/M_g^2`$ is nearly unchanged. Note that we do not reduce the pressure for these simulations. We again evolve the star using the modified gauge condition of type II.
The star again evolves very similarly to those in the previous two cases. As an example, we show in Fig. 13 $`M_*(r)/M_*`$ as a function of time, which is similar to the results in Fig. 11. The apparent horizon forms at $`t\simeq 0.89\mathrm{P}`$ and $`r\simeq 0.2R_e`$. We again find that almost all the matter collapses into the black hole without forming a massive disk around the black hole.
## IV Summary and conclusion
We perform fully relativistic, 3D hydrodynamic simulations of supramassive neutron stars rigidly rotating at the mass-shedding limit. We study the dynamical stability of such stars close to the onset of secular instability and follow the collapse to rotating black holes.
Our results suggest that the onset of dynamical, radial instability is indeed close to the onset of secular instability, as expected from the coincidence of the secular and dynamical instability in nonrotating, spherical stars.
In all our simulations, nearly all the matter is consumed by the nascent black hole by the time the calculation stops, and we do not find any evidence for the formation of a massive disk or any ejecta. Since we are considering maximally rotating neutron stars at the mass-shedding limit, and since the formation of a disk is even less likely for more slowly rotating stars, we conclude that such disks quite generally do not form during the collapse of unstable, uniformly rotating neutron stars. This also includes stars which are destabilized by pressure depletion (as, for example, by a nuclear phase transition) or by mass accretion.
We also find that during the collapse to a black hole, nonaxisymmetric perturbations do not have enough time to grow appreciably.
Both these findings can be understood quite easily from heuristic arguments. The initial equilibrium configurations are sufficiently compact, typically $`R_e\simeq 6M_g`$, so that most of the matter already starts out inside the radius which will become the ISCO of the final black hole. Therefore it is very unlikely that a stable, massive disk would form. Also, the star can only contract by about a factor of three before a black hole forms. Hence $`T/|W|`$, which approximately scales with $`R^{-1}`$, can only increase by about a factor of three over its initial value of $`(T/|W|)_{\mathrm{init}}\simeq 0.09`$, and only barely reaches the critical value of dynamical instability for bar formation $`(T/|W|)_{\mathrm{dyn}}\simeq 0.27`$. It is therefore not surprising that we do not see a dynamical growth of nonaxisymmetric perturbations. We expect that these results hold for any moderately stiff equation of state, for which the corresponding critical configurations are similarly compact.
The study reported here focuses on uniformly rotating neutron stars, for which we adopt a moderately stiff equation of state and consider a configuration which is moderately compact initially ($`R/M_g6`$). We speculate that for two alternative scenarios the results may be quite different, even qualitatively, both as far as the formation of a disk and the growth of nonaxisymmetric perturbations are concerned.
For differentially rotating neutron stars, which are the likely outcome of the merger of binary neutron stars , $`T/|W|`$ may take larger values than for rigidly rotating neutron stars. It is therefore possible that such stars might develop dynamical bar mode instabilities.
Rotating supermassive stars (with masses $`M\gtrsim 10^5M_{\odot }`$) or massive stars on the verge of supernova collapse are subject to the same dynamical instabilities, but are characterized by very soft equations of state ($`\mathrm{\Gamma }\simeq 4/3`$) and initial configurations which are nearly Newtonian (see ). Such stars therefore reach the critical value $`(T/|W|)_{\mathrm{dyn}}`$ for bar mode formation far outside the horizon radius. Moreover, $`R/M`$ is very large initially, so that a disk may easily form (compare the discussion in ).
We will treat the collapse of both differentially rotating neutron stars and supermassive stars in future papers.
###### Acknowledgements.
Numerical computations were performed on the FACOM VPP 300R and VX/4R machines in the data processing center of the National Astronomical Observatory of Japan. This work was supported by NSF Grants AST 96-18524 and PHY 99-02833 and NASA Grant NAG5-7152 at the University of Illinois. M.S. gratefully acknowledges support by Grant-in-Aid (Nos. 08NP0801 and 09740336) of the Japanese Ministry of Education, Science, Sports and Culture, and JSPS (Fellowships for Research Abroad).
Table I. The list of the central density $`\rho _c`$, total rest mass $`M_{*}`$, gravitational mass $`M_g`$, angular momentum in units of $`M_g^2`$ ($`J/M_g^2`$), $`T/W`$, rotation period $`\mathrm{P}`$, and coordinate length of the semi-major axis $`R_e`$ for rotating neutron stars at mass-shedding limits in the conformal flat approximation. The gauge invariant definition of $`T/W`$ is the same as that in Ref. . The units of mass, length and time are $`M_{\odot }`$, $`1.477`$ km, and $`4.927\mu `$sec, respectively.
| $`\rho _c(10^{-3})`$ | $`M_{*}`$ | $`M_g`$ | $`J/M_g^2`$ | $`T/W`$ | $`\mathrm{P}`$ | $`R_e`$ | Model |
| --- | --- | --- | --- | --- | --- | --- | --- |
| $`2.77`$ | 1.580 | 1.452 | 0.598 | 0.087 | 163 | 8.064 | (A) |
| $`3.38`$ | 1.628 | 1.484 | 0.586 | 0.085 | 148 | 7.820 | (B) |
| $`3.98`$ | 1.646 | 1.496 | 0.574 | 0.083 | 137 | 7.365 | (C) |
| $`4.68`$ | 1.645 | 1.494 | 0.563 | 0.080 | 127 | 6.934 | (D) |
| $`5.43`$ | 1.630 | 1.481 | 0.553 | 0.078 | 120 | 6.566 | (E) |
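For orientation, the geometrized table entries can be converted back to physical units with the unit factors quoted in the caption (a quick sketch added here, not part of the original paper; the sample values are those of the model (A) row):

```python
# Unit factors from the table caption: mass in M_sun, length unit 1.477 km,
# time unit 4.927 microseconds.
LENGTH_KM = 1.477   # km per geometrized length unit
TIME_US = 4.927     # microseconds per geometrized time unit

def to_physical(period_code, radius_code):
    """Return (rotation period in ms, semi-major axis in km) for one table row."""
    return period_code * TIME_US * 1e-3, radius_code * LENGTH_KM

# Model (A): P = 163 and R_e = 8.064 in code units
period_ms, radius_km = to_physical(163.0, 8.064)
print(period_ms, radius_km)   # roughly 0.80 ms and 11.9 km
```

So model (A) corresponds to a rotation period of about 0.80 ms and an equatorial coordinate radius of about 11.9 km, consistent with a neutron star near its mass-shedding limit.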
no-problem/9911/gr-qc9911027.html | ar5iv | text | # Development of a Double Pendulum for Gravitational Wave Detectors
## Abstract
Seismic noise will be the dominant source of noise at low frequencies for ground based gravitational wave detectors, such as LIGO now under construction. Future interferometers installed at LIGO plan to use at least a double pendulum suspension for the test masses to help filter the seismic noise. We are constructing an apparatus to use as a test bed for double pendulum design. Some of the tests we plan to conduct include: dynamic ranges of actuators, and how to split control between the intermediate mass and lower test mass; measurements of seismic transfer functions; measurements of actuator and mechanical cross couplings; and measurements of the noise from sensors and actuators. All these properties will be studied as a function of mechanical design of the double pendulum.
CGPG-99/11-5
gr-qc/9911027
The next upgrade to be installed at LIGO plans to use at least a double pendulum suspension for the test masses to help filter seismic noise. We are constructing a facility to be used as a test-bed for the mechanical and local control design of test mass suspension systems for interferometric gravitational wave detectors. The basic design that we are following is based on the GEO gravitational wave detector suspension design, which uses multiple pendulums . Our prototype is shown schematically in Figure 1.
In addition to the double pendulum, which includes the test mass, a second identical double pendulum hangs just behind the test mass pendulum. This second pendulum acts as a reaction pendulum, which holds the sensors and actuators used to control the position of the test mass. Putting the sensors and actuators on an identical second pendulum allows a smaller relative motion between the actuators and the test mass. Besides the extra seismic filtering it provides, the double pendulum also allows the possibility of controlling the test mass from the intermediate mass rather than from the test mass itself. Therefore, the magnets used for position control, which unfortunately ruin the high mechanical Q of the test mass, can be moved to the intermediate mass. The disadvantage of a double pendulum is that it is more complex to model and hence to control, so that testing of actual configurations is a critical activity. The double pendulum is hung from cantilever springs, so that there is also vertical seismic isolation.

One feature of our test bed facility is that the design is flexible, so that a variety of test mass configurations can be tested, such as changing the positions of the sensors and actuators. Another feature is the use of a high precision three-axis vibration shaker for diagnostics.

The construction and assembly of the mechanical pieces of the initial single intermediate pendulum supported by cantilever springs is near completion. The first goal is to control this single pendulum in all six degrees of freedom. Testing and calibration of the position controller, i.e. the sensors (LED shadow detectors) and actuators (magnets and coils), are now in progress, as are testing and characterization of the three-axis vibration shaker. The next phase will be to build a double pendulum with a dummy test mass of the size of the test masses currently being installed in LIGO.
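The seismic-filtering benefit of cascading two pendulums can be illustrated with a minimal frequency-domain model (a sketch under simplifying assumptions — point masses, massless wires, effective horizontal stiffness k = mg/l per stage, no damping; the parameter values are illustrative and are not the actual GEO/LIGO suspension design):

```python
import math

def transmissibility(f, m1=10.0, m2=10.0, l1=0.25, l2=0.25, g=9.81):
    """|x2/x0|: horizontal motion of the lower mass per unit support motion.

    Cascaded-pendulum model: k1 = (m1+m2)*g/l1 couples the support to the
    intermediate mass, k2 = m2*g/l2 couples the two suspended masses.
    """
    k1 = (m1 + m2) * g / l1
    k2 = m2 * g / l2
    w2 = (2.0 * math.pi * f) ** 2
    # Frequency-domain equations of motion (undamped):
    #   (k1 + k2 - m1*w^2) X1 - k2 X2 = k1 X0
    #   -k2 X1 + (k2 - m2*w^2) X2 = 0
    det = (k1 + k2 - m1 * w2) * (k2 - m2 * w2) - k2 ** 2
    return abs(k1 * k2 / det)   # Cramer's rule for X2 with X0 = 1

# Above both pendulum resonances the isolation improves as f^-4:
print(transmissibility(40.0) / transmissibility(80.0))   # close to 2**4 = 16
```

Well above the resonances (here near 1 Hz) the transmissibility falls as $`1/f^4`$, i.e. doubling the frequency buys another factor of 16 — the key advantage over the $`1/f^2`$ roll-off of a single pendulum.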
Finally, a double pendulum, with its corresponding double reaction mass pendulum will be built and tested. Some of the tests we plan to conduct include: dynamic ranges of actuators, used to control the position of the double pendulum masses and how to split control between the intermediate mass and lower test mass; measurements of seismic transfer functions of the double pendulum; measurements of actuator and mechanical cross couplings; and measurements of the noise from sensors and actuators. All these properties will be studied as a function of mechanical design of the double pendulum, such as two versus four wire suspension, wire attachment points (which determine the resonant frequencies of the pendulum), actuator and sensor placement, intermediate mass shape and size, cantilever spring design and number, and modal damping versus point to point damping.
###### Acknowledgements.
We would like to thank Dr. Mike Plissi of the GEO Project for his interest in this work. This research is supported by The Pennsylvania State University and NSF Grant No. PHY-9870032.
no-problem/9911/hep-ph9911355.html | ar5iv | text | # A model independent analysis of 𝐁→𝑋_𝑠ℓ⁺ℓ⁻ decays in supersymmetry
## 1 Introduction
One of the features of a general low energy supersymmetric (SUSY) extension of the Standard Model (SM) is the presence of a huge number of new parameters. FCNC and CP violating phenomena strongly constrain a large part of the new parameter space. However, there is still room for significant departures from the SM expectations in this interesting class of physical processes.
Recently we have investigated the relevance of new physics effects in the semileptonic inclusive decay $`B\to X_s\ell ^+\ell ^-`$ . This decay is quite suppressed in the Standard Model; however, new $`B`$-factories should reach the precision required by the SM prediction, and an estimate of all possible new contributions to this process is compelling.
Semileptonic charmless $`B`$ decays have been studied in depth. The dominant perturbative SM contribution has been evaluated in ref. and two-loop QCD corrections have been provided later . The contributions due to $`c\overline{c}`$ resonances to these results are included in the papers listed in ref. . Long distance corrections can have a different origin according to the value of the dilepton invariant mass one considers. $`O(1/m_b^2)`$ corrections have been first calculated in ref. and recently corrected in ref. . Near the peaks, non-perturbative contributions generated by $`c\overline{c}`$ resonances by means of resonance-exchange models have been provided in ref. . Far from the resonance region, instead, ref. (see also ref. ) estimates $`c\overline{c}`$ long-distance effects using a heavy quark expansion in inverse powers of the charm-quark mass ($`O(1/m_c^2)`$ corrections).
An analysis of the SUSY contributions has been presented in refs. , where the authors estimate the contribution of the Minimal Supersymmetric Standard Model (MSSM). They consider first a universal soft supersymmetry breaking sector at the Grand Unification scale (Constrained MSSM) and then partly relax this universality condition. In the latter case they find that there can be a substantial difference between the SM and the SUSY results in the Branching Ratios and in the forward–backward asymmetries. One of the reasons for this enhancement is that the Wilson coefficient $`C_7(M_W)`$ can change sign with respect to the SM in some region of the parameter space while respecting the constraints coming from $`b\to s\gamma `$. The recent measurements of $`b\to s\gamma `$ have narrowed the window of possible values of $`C_7(M_W)`$, and in particular a sign change of this coefficient is no longer allowed in the Constrained MSSM framework. Hence, on one hand it is worthwhile to consider $`B\to X_s\ell ^+\ell ^-`$ in a more general SUSY framework than just the Constrained MSSM, and, on the other hand, the above mentioned new results prompt us to reconsider the process. In reference the possibility of new-physics effects coming from gluino-mediated FCNC is studied.
We consider all possible contributions to charmless semileptonic $`B`$ decays coming from chargino-quark-squark and gluino-quark-squark interactions and we analyze both Z-boson and photon mediated decays. Contributions coming from penguin and box diagrams are taken into account; moreover, corrections to the MIA results due to a light $`\stackrel{~}{t}_R`$ are considered. A direct comparison between the SUSY and the SM contributions to the Wilson coefficients is performed. Once the constraints on mass insertions are established, we find that in generic SUSY models there is still enough room to see large deviations from the SM expectations for branching ratios and asymmetries. For the final computation of physical observables we consider NLO QCD evolution of the coefficients and non-perturbative corrections ($`O(1/m_b^2)`$, $`O(1/m_c^2)`$, …), each in its proper range of the dilepton invariant mass.
Because of the presence of so many unknown parameters (in particular in the scalar mass matrices), which enter in a quite complicated way in the determination of the mass eigenstates and of the various mixing matrices, it is very useful to adopt the so-called “Mass Insertion Approximation” (MIA) . In this framework one chooses a basis for fermion and sfermion states in which all the couplings of these particles to neutral gauginos are flavor diagonal. Flavor changes in the squark sector are provided by the non-diagonality of the sfermion propagators. The pattern of flavor change is then given by the ratios
$$(\delta _{ij}^f)_{AB}=\frac{(m_{ij}^{\stackrel{~}{f}})_{AB}^2}{M_{sq}^2},$$
(1)
where $`(m_{ij}^{\stackrel{~}{f}})_{AB}^2`$ are the off-diagonal elements of the $`\stackrel{~}{f}=\stackrel{~}{u},\stackrel{~}{d}`$ mass squared matrix that mixes flavor $`i`$, $`j`$ for both left- and right-handed scalars ($`A,B=`$Left, Right) and $`M_{sq}`$ is the average squark mass (see e.g. ). The sfermion propagators are expanded in terms of the $`\delta `$s and the contributions of the first two terms of this expansion are considered. We show that the graphs with a double MI can be safely neglected in this process. The genuine SUSY contributions to the Wilson coefficients will be simply proportional to the various $`\delta `$s and a careful analysis of the different Feynman diagrams involved will allow us to isolate the few insertions really relevant for a given process. In this way we see that only a small number of the new parameters is involved and a general SUSY analysis is made possible. The hypothesis regarding the smallness of the $`\delta `$s and so the reliability of the approximation can then be checked a posteriori.
Many of these $`\delta `$s are strongly constrained by FCNC effects or by vacuum stability arguments . Nevertheless it may happen that such limits are not strong enough to prevent large contributions to some rare processes.
## 2 Operator basis and general framework
The effective Hamiltonian for the decay $`B\to X_s\ell ^+\ell ^-`$ in general low-energy SUSY models is the same as in the SM and in the MSSM ; it is known at next-to-leading order and we refer to the cited articles for its expression. We find that SUSY can also modify (with respect to the SM) the matching coefficients of the operators
$`Q_7^{}`$ $`=`$ $`{\displaystyle \frac{e}{8\pi ^2}}m_b\overline{s}_R\sigma ^{\mu \nu }b_LF_{\mu \nu },`$
$`Q_9^{}`$ $`=`$ $`(\overline{s}_R\gamma _\mu b_R)\overline{l}\gamma ^\mu l,`$
$`Q_{10}^{}`$ $`=`$ $`(\overline{s}_R\gamma _\mu b_R)\overline{l}\gamma ^\mu \gamma _5l.`$ (2)
However we have checked that the contribution of these operators is negligible and so they are not considered in the final discussion of physical quantities. SUSY contributions to other operators are of higher perturbative order and can be neglected.
The observables we have in mind are the differential branching ratio and the forward-backward asymmetry,
$`R(s)`$ $`\equiv `$ $`{\displaystyle \frac{\mathrm{d}\mathrm{\Gamma }(B\to X_s\ell ^+\ell ^-)/\mathrm{d}s}{\mathrm{\Gamma }(B\to X_ce\nu )}}`$ (3)
$`A_{FB}(s)`$ $`\equiv `$
$$\frac{{\displaystyle \int _{-1}^1}\mathrm{d}\mathrm{cos}\theta {\displaystyle \frac{\mathrm{d}^2\mathrm{\Gamma }(B\to X_s\ell ^+\ell ^-)}{\mathrm{d}\mathrm{cos}\theta \mathrm{d}s}}\mathrm{Sgn}(\mathrm{cos}\theta )}{{\displaystyle \int _{-1}^1}\mathrm{d}\mathrm{cos}\theta {\displaystyle \frac{\mathrm{d}^2\mathrm{\Gamma }(B\to X_s\ell ^+\ell ^-)}{\mathrm{d}\mathrm{cos}\theta \mathrm{d}s}}}$$
(4)
where $`s=(p_{\ell ^+}+p_{\ell ^-})^2/m_b^2`$ and $`\theta `$ is the angle between the positively charged lepton and the B flight direction in the rest frame of the dilepton system.
It is worth underlining that by integrating the differential asymmetry given in eq. (4) we do not obtain the global Forward–Backward asymmetry, which is by definition:
$`{\displaystyle \frac{N(\ell _{\rightarrow }^+)-N(\ell _{\leftarrow }^+)}{N(\ell _{\rightarrow }^+)+N(\ell _{\leftarrow }^+)}}`$ $`\equiv `$
$$\frac{{\displaystyle \int _{-1}^1}\mathrm{d}\mathrm{cos}\theta {\displaystyle \int ds\frac{\mathrm{d}^2\mathrm{\Gamma }(B\to X_s\ell ^+\ell ^-)}{\mathrm{d}\mathrm{cos}\theta \mathrm{d}s}\mathrm{Sgn}(\mathrm{cos}\theta )}}{{\displaystyle \int _{-1}^1}\mathrm{d}\mathrm{cos}\theta {\displaystyle \int ds\frac{\mathrm{d}^2\mathrm{\Gamma }(B\to X_s\ell ^+\ell ^-)}{\mathrm{d}\mathrm{cos}\theta \mathrm{d}s}}}$$
(5)
where $`\ell _{\rightarrow }^+`$ and $`\ell _{\leftarrow }^+`$ stand respectively for leptons scattered in the forward and backward directions.
To this end it is useful to introduce the following quantity
$`\overline{A}_{FB}(s)`$ $`\equiv `$ (6)
$$\frac{{\displaystyle \int _{-1}^1}\mathrm{d}\mathrm{cos}\theta {\displaystyle \frac{\mathrm{d}^2\mathrm{\Gamma }(B\to X_s\ell ^+\ell ^-)}{\mathrm{d}\mathrm{cos}\theta \mathrm{d}s}}\mathrm{Sgn}(\mathrm{cos}\theta )}{{\displaystyle \int _{-1}^1}\mathrm{d}\mathrm{cos}\theta {\displaystyle \int ds\frac{\mathrm{d}^2\mathrm{\Gamma }(B\to X_s\ell ^+\ell ^-)}{\mathrm{d}\mathrm{cos}\theta \mathrm{d}s}}}$$
(7)
whose integrated value is given by eq. (5).
Eqns. (3) and (4) have been corrected in order to include several non-perturbative effects. We refer to and to references therein for all the definitions concerning this issue.
## 3 Light $`\stackrel{~}{t}_R`$ effects
In the Mass Insertion Approximation framework we assume that all the diagonal entries of the scalar mass matrices are degenerate and that the off-diagonal ones are sufficiently small. In this context we expect all the squark masses to lie in a small region around an average mass, which we have chosen not smaller than 250 GeV. Actually, the $`\stackrel{~}{t}_R`$ can be much lighter; in fact, the lower bound on its mass is about 70 GeV. For this reason it is natural to wonder how good the MIA is when a $`\stackrel{~}{t}_R`$ explicitly runs in a loop.
The diagrams, among those we have computed, affected by this are the chargino penguins and boxes with the $`(\delta _{23}^u)_{LR}`$ insertion. To compute the light–$`\stackrel{~}{t}_R`$ contribution we adopt the approach presented in ref. . There the authors consider an expansion valid for unequal diagonal entries which reduces exactly to the MIA in the limit of complete degeneracy.
## 4 Constraints on mass insertions
In order to establish how large the SUSY contribution to $`B\to X_s\ell ^+\ell ^-`$ can be, one can compare, coefficient by coefficient, the MIA results with the SM ones, taking into account possible constraints on the $`\delta `$s coming from other processes, in particular from $`b\to s\gamma `$. A discussion of this issue can be found in ref. . The most relevant $`\delta `$s entering the determination of the Wilson coefficients $`C_7`$, $`C_9`$ and $`C_{10}`$ are $`(\delta _{23}^u)_{LL}`$, $`(\delta _{23}^u)_{LR}`$, $`(\delta _{33}^u)_{RL}`$, $`(\delta _{23}^d)_{LL}`$ and $`(\delta _{23}^d)_{LR}`$.
## 5 Results
While the gluino sector of the theory is essentially determined by the knowledge of the gluino mass $`M_{gl}`$, the chargino sector needs more parameters ($`M_2`$, $`\mu `$ and $`\mathrm{tan}\beta `$). Moreover, it is a general feature of the models we are studying that the SUSY contributions decouple in the limit of high sparticle masses: we expect the biggest SUSY contributions to appear for masses chosen at the lower bound of the experimentally allowed region. These considerations suggest constraining the parameters of the chargino sector by the requirement that the lighter eigenstate not have a mass below the experimental bound of about 70 GeV . The remaining two-dimensional parameter space is otherwise unconstrained. For these reasons we scan the chargino parameter space by means of scatter plots .
Thus, with $`\mu \simeq 160`$, $`M_{gl}\simeq M_{sq}\simeq 250`$ GeV, $`M_{\stackrel{~}{\nu }}\simeq 50`$ GeV, $`\mathrm{tan}\beta \simeq 2`$ one gets
$`C_9^{MI}(M_B)`$ $`=`$ $`1.2(\delta _{23}^u)_{LL}+0.69(\delta _{23}^u)_{LR}`$
$`0.51(\delta _{23}^d)_{LL}`$
$`C_{10}^{MI}(M_B)`$ $`=`$ $`1.75(\delta _{23}^u)_{LL}8.25(\delta _{23}^u)_{LR}.`$
In order to numerically compare these values with the respective SM results we note that the minimum value of $`\left(C_9^{\mathrm{eff}}(s)\right)^{SM}(M_B)`$ is about 4 while $`C_{10}^{SM}=-4.6`$. Thus one deduces that SM expectations for the observables are enhanced when $`C_9^{MI}(M_B)`$ is positive. Moreover the big value of $`C_{10}^{MI}(M_B)`$ implies that the final total coefficient $`C_{10}(M_B)`$ can have a different sign with respect to the SM estimate. As a consequence of this, the sign of the asymmetries can be the opposite of the one calculated in the SM.
The sign and the value of the coefficient $`C_7`$ have a great importance. In fact the integral of the BR (see eq. (3)) is dominated by the $`|C_7|^2/s`$ and $`C_7C_9`$ terms for low values of $`s`$. In the SM the interference between $`O_7`$ and $`O_9`$ is destructive, and this behavior can easily be modified in the general class of models we are dealing with. It is worthwhile to note that the Constrained MSSM cannot drive a change in the sign of $`C_7`$, while this can be realized in this kind of model.
The integrated BRs and asymmetries for the decays $`B\to X_se^+e^-`$ and $`B\to X_s\mu ^+\mu ^-`$ in the SM case and in the SUSY one (with the above choices of the parameters) are summarized in Tab. 1. There we computed the total perturbative contributions neglecting the resonances.
The results of Tab. 1 must be compared with the best experimental limit, which reads
$`BR_{exp}`$ $`<`$ $`5.8\times 10^{-5}.`$ (8)
Looking at Table 1 we see that the differences between the SM and SUSY predictions can be remarkable. Moreover, a sufficiently precise measurement of the BRs, $`A_{FB}`$s and $`\overline{A}_{FB}`$s can either discriminate between the CMSSM and more general SUSY models or give new constraints on mass insertions. Both kinds of information can be very useful for model building.
## 6 Conclusions
In this paper a discussion of SUSY contributions to the semileptonic decays $`B\to X_se^+e^-`$ and $`B\to X_s\mu ^+\mu ^-`$ has been provided.
Given the constraints coming from the recent measurement of $`b\to s\gamma `$ and estimating all possible SUSY effects in the MIA framework, we see that SUSY has a chance to strongly enhance or depress semileptonic charmless B-decays. The expected direct measurement will give very interesting information about the SM and its possible extensions.
no-problem/9911/math-ph9911012.html | ar5iv | text | # 1 Introduction.
## 1 Introduction.
The (normalized) Rogers dilogarithm is a transcendental function defined for $`x\in [0,1]`$ as follows
$$L(x)=\frac{6}{\pi ^2}\left(\sum _{n=1}^{\mathrm{\infty }}\frac{x^n}{n^2}+\frac{1}{2}\mathrm{ln}x\mathrm{ln}(1-x)\right).$$
(1.1)
It is a strictly increasing continuous function satisfying the following functional equations:
$`L(x)+L(1-x)=1,`$ (1.2)
$`L(x)+L(y)=L(xy)+L\left({\displaystyle \frac{x(1-y)}{1-xy}}\right)+L\left({\displaystyle \frac{y(1-x)}{1-xy}}\right).`$ (1.3)
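As a quick numerical illustration (a sketch added here, not part of the original text), the series in (1.1) converges fast enough on the open interval that both functional equations can be verified directly:

```python
import math

def rogers_L(x, terms=4000):
    """Normalized Rogers dilogarithm (1.1); Li2 is summed as a truncated series."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    li2 = sum(x ** n / n ** 2 for n in range(1, terms))
    return (6.0 / math.pi ** 2) * (li2 + 0.5 * math.log(x) * math.log(1.0 - x))

x, y = 0.3, 0.5
# Reflection identity (1.2):
print(rogers_L(x) + rogers_L(1.0 - x))        # close to 1.0
# Five-term identity (1.3):
lhs = rogers_L(x) + rogers_L(y)
rhs = (rogers_L(x * y)
       + rogers_L(x * (1 - y) / (1 - x * y))
       + rogers_L(y * (1 - x) / (1 - x * y)))
print(abs(lhs - rhs))                         # close to 0
```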
Dilogarithm identities of the form
$$\sum _{k=1}^{r}L(x_k)=c,$$
(1.4)
where $`c\geq 0`$ is a rational number, and $`x_k\in [0,1]`$ are algebraic numbers (i.e. they are real roots of polynomial equations with integer coefficients) arise in different contexts in mathematics and theoretical physics (see e.g., and references therein). In particular, they appear in the description of the asymptotic behaviour of infinite series $`\chi (q)`$ of the form
$$\chi (q)=q^{\mathrm{const}}\sum _{\vec{m}=\vec{0}}^{\mathrm{\infty }}\frac{q^{\vec{m}^tA\vec{m}+\vec{m}\vec{B}}}{(q)_{m_1}\mathrm{\cdots }(q)_{m_r}},$$
(1.5)
where $`(q)_n=\prod _{k=1}^n(1-q^k)`$ and $`(q)_0=1`$. Suppose that $`A`$ and $`\vec{B}`$ are such that the sum in (1.5) involves only non-negative powers of $`q`$ (hence $`\chi (q)`$ is convergent for $`0<|q|<1`$). Let $`q=e^{2\pi i\tau }`$, $`\mathrm{Im}(\tau )>0`$ and $`\widehat{q}=e^{-2\pi i/\tau }`$. The saddle point analysis (see e.g., ) shows that the asymptotics of $`\chi (q)`$ in the $`\tau \to 0`$ limit is $`\chi (q)\sim \widehat{q}^{-\frac{c}{24}}`$ with $`c`$ given by (1.4) and the numbers $`0\leq x_i\leq 1`$ satisfying the following equations
$$x_i=\prod _{j=1}^{r}(1-x_j)^{(A_{ij}+A_{ji})},i=1,\mathrm{\dots },r.$$
(1.6)
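This asymptotic statement can be checked numerically for the simplest case r = 1, A = 1, B = 1 (the Rogers–Ramanujan-type sum that reappears below as the first character in (1.10)), where c = 2/5. The sketch below — an illustration, not part of the original analysis — extracts c from the growth of ln χ between two small values of Im τ, assuming double-precision floats suffice:

```python
import math

def log_chi(eps, mmax=200):
    """ln of S(q) = sum_m q^(m^2+m)/(q)_m at q = exp(-2*pi*eps) (r=1, A=1, B=1)."""
    q = math.exp(-2.0 * math.pi * eps)
    total, poch = 0.0, 1.0
    for m in range(mmax):
        if m > 0:
            poch *= 1.0 - q ** m          # builds (q)_m = prod_k (1 - q^k)
        total += q ** (m * m + m) / poch  # large-m terms underflow harmlessly to 0
    return math.log(total)

# ln S grows like (c/24)*(2*pi/eps) as eps -> 0; the slope in 1/eps isolates c:
e1, e2 = 0.02, 0.04
c_est = 24.0 * (log_chi(e1) - log_chi(e2)) / (2.0 * math.pi * (1.0 / e1 - 1.0 / e2))
print(c_est)   # close to the (2,5)-model value c = 2/5
```

Taking the slope between two values of 1/eps cancels the constant subleading term, so the estimate lands within a couple of percent of 2/5.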
Let $`A`$ be an $`r\times r`$ matrix with rational entries such that all $`x_i`$ in (1.6) belong to the interval $`[0,1]`$. Introduce $`c[A]=\sum _{i=1}^rL(x_i)`$. We will call the matrix $`A`$ admissible if $`c[A]`$ is rational. As seen from (1.6), it is sufficient to consider only symmetric $`A`$.
The principal aim of this work is to search for admissible $`2\times 2`$ matrices $`A`$ such that $`c[A]`$ has the form of the effective central charge $`c_{st}`$ of a minimal Virasoro model $`(s,t)`$, i.e.
$$c_{st}=1-\frac{6}{st},$$
(1.7)
where $`s`$ and $`t`$ are co-prime numbers.
The physical motivation for the formulated mathematical task is twofold. First, equations (1.4) and (1.6) arise in the context of the thermodynamic Bethe ansatz (TBA) approach to the ultra-violet limit of certain (1+1)-dimensional integrable systems . In this case the matrix $`A`$ is related to the corresponding S-matrix, $`S(\theta )`$, and $`c`$ gives the value of the effective central charge of the ultra-violet limit of the model in question. Below we will refer to a system of equations of the type (1.6) as the TBA equations.
Second, equations (1.4) and (1.6) appear in the conformal field theory. Namely, the series (1.5) can be identified for certain $`A`$ (upon choosing specific $`\stackrel{}{B}`$ and possibly imposing some restriction on the summation over $`\stackrel{}{m}`$) as characters (or linear combinations of characters) of irreducible representations of the Virasoro algebra (see for characters of the minimal models). In this case $`c`$ is the value of the effective central charge of the conformal model to which the character $`\chi (q)`$ belongs.
In addition, the search for admissible matrices $`A`$ has a purely mathematical outcome. It allows us to find many dilogarithm identities and to take a step towards a classification of the identities (1.4) for $`r=2`$ (the complete classification is an open problem that appears to be quite involved).
In the $`r=1`$ case there are only five algebraic numbers on the interval $`[0,1]`$ such that $`c`$ in (1.4) is rational,
$$L(0)=0,L(1-\rho )=\frac{2}{5},L(\frac{1}{2})=\frac{1}{2},L(\rho )=\frac{3}{5},L(1)=1.$$
(1.8)
Here $`\rho =\frac{1}{2}(\sqrt{5}-1)`$ is the positive root of the equation $`x^2+x=1`$. Notice that all the values of $`c=L(x)`$ listed in (1.8) have the form (1.7) (with $`(s,t)=(2,3)`$, $`(2,5)`$, $`(3,4)`$, $`(3,5)`$, and $`st=\mathrm{\infty }`$ for $`c=1`$). They correspond, respectively, to
$$A=\mathrm{\infty },1,\frac{1}{2},\frac{1}{4},0.$$
(1.9)
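These five cases can be reproduced numerically (an illustrative sketch, with the trivial A = ∞ entry omitted): solve x = (1−x)^(2A) by bisection and evaluate L via the truncated series (1.1):

```python
import math

def rogers_L(x, terms=4000):
    """Truncated-series evaluation of the normalized Rogers dilogarithm (1.1)."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    li2 = sum(x ** n / n ** 2 for n in range(1, terms))
    return (6.0 / math.pi ** 2) * (li2 + 0.5 * math.log(x) * math.log(1.0 - x))

def fixed_point(a, iters=200):
    """Root of x = (1-x)^(2a) on [0,1], by bisection (the map is monotone)."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mid - (1.0 - mid) ** (2.0 * a) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for a, c in [(1.0, 2 / 5), (0.5, 1 / 2), (0.25, 3 / 5), (0.0, 1.0)]:
    print(a, rogers_L(fixed_point(a)), c)   # computed L(x) should match c
```

For A = 1 the root is 1 − ρ ≈ 0.382 and for A = 1/4 it is ρ ≈ 0.618, matching (1.8).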
These $`A`$ allow us to construct Virasoro characters of the form (1.5). In particular, $`A=\mathrm{\infty }`$ gives $`\chi (q)=1`$, which is the only character of the trivial $`(2,3)`$ model, and $`A=0`$ gives (for $`B=0`$) the eta-function $`\eta (q)`$. For the other $`A`$ we have, for instance, (see and references therein)
$$\chi _{1,1}^{2,5}=q^{\frac{11}{60}}\sum _{m=0}^{\mathrm{\infty }}\frac{q^{m^2+m}}{(q)_m},\chi _{1,2}^{3,4}=q^{\frac{1}{16}}\sum _{m=0}^{\mathrm{\infty }}\frac{q^{\frac{1}{2}m^2+\frac{1}{2}m}}{(q)_m},\chi _{1,2}^{3,5}+\chi _{1,3}^{3,5}=q^{\frac{1}{40}}\sum _{m=0}^{\mathrm{\infty }}\frac{q^{\frac{1}{4}m^2}}{(q)_m}.$$
(1.10)
The observation that all values of $`c`$ obtained from the $`r=1`$ TBA equations are of the form (1.7) motivates our choice of $`c`$ for the $`r=2`$ case. Notice however that in the latter case $`0\leq c[A]\leq 2`$. Therefore, we allow $`st`$ in (1.7) to acquire negative values (which makes sense in the light of Proposition 2 below), keeping the requirement that $`|s|`$ and $`|t|`$ are co-prime. It should be remarked here that another natural candidate for $`c[A]\leq 2`$ is the central charge of the $`Z_n`$-parafermionic model ,
$$c_n=\frac{2(n-1)}{n+2},n=2,3,4,\mathrm{\dots }$$
(1.11)
As we will see below, this form of $`c`$ also appears quite often in connection with the $`r=2`$ TBA.
The paper is organized as follows. In section 2 certain properties of the solution to the $`r=2`$ TBA equations are described (e.g., we find what classes of $`A`$ correspond to $`c=1`$, $`c<1`$ and $`c>1`$), and some continuous families of admissible matrices $`A`$ are found. In section 3 various admissible matrices $`A`$ (not belonging to continuous families) with $`c[A]`$ of the form (1.7) are presented. The corresponding dilogarithm identities are obtained and in most cases proven or shown to be equivalent to previously known identities. In section 4 we briefly discuss possible applications and remaining questions.
## 2 Properties of $`r=2`$ TBA equations.
Our aim is to search for admissible matrices $`A=\left(\begin{array}{cc}a& b\\ b& d\end{array}\right)`$ such that the value of $`c[A]=L(x)+L(y)`$ has the form (1.7) ($`|s|`$ and $`|t|`$ are co-prime numbers and $`st`$ may be negative). Recall that $`0\leq x,y\leq 1`$ satisfy the equations
$$\begin{array}{c}x=(1-x)^{2a}(1-y)^{2b}\hfill \\ y=(1-x)^{2b}(1-y)^{2d}.\hfill \end{array}$$
(2.1)
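The system (2.1) is easy to solve numerically with a damped fixed-point iteration (a sketch; the damping is a numerical device for convergence and is not part of the paper). As a check, the member a = d = 2/3, b = 1/3 of the a + b = 1 family discussed later in this section should give c[A] = L(x) + L(y) = 4/5:

```python
import math

def rogers_L(x, terms=4000):
    """Normalized Rogers dilogarithm (1.1), with a truncated series for Li2."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    li2 = sum(x ** n / n ** 2 for n in range(1, terms))
    return (6.0 / math.pi ** 2) * (li2 + 0.5 * math.log(x) * math.log(1.0 - x))

def solve_tba2(a, b, d, iters=2000):
    """Damped fixed-point iteration for the system (2.1); returns (x, y)."""
    x = y = 0.5
    for _ in range(iters):
        xn = (1.0 - x) ** (2 * a) * (1.0 - y) ** (2 * b)
        yn = (1.0 - x) ** (2 * b) * (1.0 - y) ** (2 * d)
        x, y = 0.5 * (x + xn), 0.5 * (y + yn)   # damping stabilizes the map
    return x, y

x, y = solve_tba2(2.0 / 3.0, 1.0 / 3.0, 2.0 / 3.0)
print(x, y, rogers_L(x) + rogers_L(y))   # x = y here, and c[A] is close to 4/5
```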
Let us denote $`D:=ad-b^2=\mathrm{det}A`$ and introduce the functions $`\kappa (t)`$ and $`\delta (t)`$ defined for $`t\geq 0`$ as follows:
$$\kappa (t)=\xi ,\delta (t)=L(\xi ),\mathrm{where}\xi =(1-\xi )^{2t},0\leq \xi \leq 1.$$
(2.2)
Since the summation in (1.5) is taken over non-negative numbers, it is too restrictive to require $`A`$ to be positive definite. Instead, we impose weaker conditions ensuring that the sum in (1.5) involves only non-negative powers of $`q`$:
$$a,d\geq 0,b>-\mathrm{min}(a,d).$$
(2.3)
Notice that these are sufficient conditions for (2.1) to have a solution on the interval $`[0,1]`$.
For $`b=0`$ equations (2.1) decouple and $`c[A]=\delta (a)+\delta (d)`$. Then, taking the (finite) values of $`a`$ and $`d`$ from the list (1.9), we obtain
$$c=\frac{4}{5},\frac{9}{10},1,\frac{11}{10},\frac{6}{5},\frac{7}{5},\frac{3}{2},\frac{8}{5},2.$$
(2.4)
The first two values are the effective central charges of the $`(5,6)`$ and $`(5,12)`$ minimal models, whereas the last four values correspond to the $`Z_8`$, $`Z_{10}`$, $`Z_{13}`$ and $`Z_{\mathrm{}}`$ parafermionic models.
Another possibility for the $`b=0`$ case is to take $`a`$ to be any positive (rational) number and put $`d=(4a)^{-1}`$. As seen from (2.1), this leads to $`y=1-x`$, and hence $`c[A]=1`$ due to (1.2). In fact, it appears that the set (2.4) exhausts the possible rational values of $`c[A]`$ for $`b=0`$ (a rigorous proof of this statement would be desirable). Thus, the $`b=0`$ case does not lead to non-trivial $`r=2`$ dilogarithm identities. For the rest of the paper we will assume that $`b\ne 0`$.
Notice that the system (2.1) may in general have several solutions on the interval $`[0,1]`$. For example, if $`a>0`$, $`\frac{1}{2}>b>0`$, $`d=0`$ (notice that $`\kappa (0)=1`$), the system (2.1) possesses the extra solution $`x=0`$, $`y=1`$. Such a situation is undesirable from the physical point of view ($`x_i`$ in the TBA equations (1.6) are physical entities which should be defined uniquely). Therefore, in the present paper we will deal mainly with such matrices $`A`$ that the solution of (2.1) is unique.
Proposition 1. Suppose that $`A`$ satisfies (2.3) and
$$D\geq -\frac{1}{2}\mathrm{max}\{d\left(\frac{1}{\kappa (a)}-1\right),a\left(\frac{1}{\kappa (d)}-1\right)\}.$$
(2.5)
Then the system (2.1) possesses a unique solution on the interval $`[0,1]`$.
The proof of this and of the other propositions in this section is given in the Appendix. Equation (2.5) involves the function $`\kappa (t)`$, which cannot be expressed in terms of elementary functions. It can be reduced to more explicit (although weaker) estimates. For instance, employing the Bernoulli and a Jensen-type inequalities to estimate $`\kappa (t)`$, we derive that (2.5) holds if $`D\geq -ad`$ for $`d\leq \frac{1}{2}`$, $`b>0`$, and if $`D\geq -2ad/(2d+1)`$ for $`d>\frac{1}{2}`$, $`b>0`$.
Proposition 2. Suppose that $`A`$ is a symmetric invertible $`r\times r`$ matrix such that the corresponding solution of (1.6) on the interval $`[0,1]`$ is unique. Then
$$c[A]+c[\frac{1}{4}A^{-1}]=r.$$
(2.6)
This proposition explains why it makes sense to allow $`st`$ in (1.7) to be negative. If $`c[A]=1-\frac{6}{st}>1`$, then $`c[\frac{1}{4}A^{-1}]=1+\frac{6}{st}<1`$. Furthermore, Proposition 2 shows also that it is sufficient to consider only such $`A`$ that $`b>0`$. Indeed, if $`b<0`$, then (2.3) implies that $`D>0`$. Therefore, the off-diagonal entries of the ‘dual’ matrix $`\frac{1}{4}A^{-1}`$ are positive.
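Proposition 2 can be verified numerically for a sample matrix (a self-contained sketch; the matrix a = d = 1, b = 1/4 is an arbitrary choice satisfying (2.3)):

```python
import math

def rogers_L(x, terms=4000):
    """Normalized Rogers dilogarithm (1.1), truncated series for Li2."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    li2 = sum(x ** n / n ** 2 for n in range(1, terms))
    return (6.0 / math.pi ** 2) * (li2 + 0.5 * math.log(x) * math.log(1.0 - x))

def solve_tba2(a, b, d, iters=4000):
    """Damped fixed-point iteration for the system (2.1)."""
    x = y = 0.5
    for _ in range(iters):
        xn = (1.0 - x) ** (2 * a) * (1.0 - y) ** (2 * b)
        yn = (1.0 - x) ** (2 * b) * (1.0 - y) ** (2 * d)
        x, y = 0.5 * (x + xn), 0.5 * (y + yn)
    return x, y

def c_of(a, b, d):
    x, y = solve_tba2(a, b, d)
    return rogers_L(x) + rogers_L(y)

a, b, d = 1.0, 0.25, 1.0
det = a * d - b * b
dual = (d / (4 * det), -b / (4 * det), a / (4 * det))   # entries of (1/4) A^{-1}
total = c_of(a, b, d) + c_of(*dual)
print(total)   # close to r = 2, as in (2.6)
```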
Proposition 3. Suppose that $`A`$ satisfies (2.3). Then
$`c[A]>1`$ if and only if $`b<\frac{1}{2}\mathrm{and}ad<(\frac{1}{2}-b)^2;`$ (2.7)
$`c[A]=1`$ if and only if $`b\leq \frac{1}{2}\mathrm{and}ad=(\frac{1}{2}-b)^2;`$ (2.8)
$`c[A]<1`$ $`\mathrm{otherwise}.`$ (2.9)
Equation (2.8) implies that the solution of (2.1) satisfies the relation $`x+y=1`$ if and only if the matrix $`A`$ has the form
$$A=\left(\begin{array}{cc}a& \frac{1}{2}-\sqrt{ad}\\ \frac{1}{2}-\sqrt{ad}& d\end{array}\right),a,d\geq 0.$$
(2.10)
Notice that here $`D=\sqrt{ad}-\frac{1}{4}`$ and Proposition 1 cannot guarantee uniqueness of the solution of (2.1) for sufficiently small values of $`ad`$. However, as seen from the proof, even if (2.1) has several solutions they all satisfy the relation $`x+y=1`$.
Proposition 4. Suppose that $`A`$ is such that the corresponding solution of (2.1) on the interval $`[0,1]`$ is unique. Then this solution satisfies the relation $`x=y`$ if and only if $`a=d`$.
This proposition implies that the value of $`c[A]`$ for a matrix of the form
$$A=\left(\begin{array}{cc}a& b\\ b& a\end{array}\right)$$
(2.11)
depends only on $`(a+b)`$. Indeed, for $`x=y`$ and $`a=d`$ the system (2.1) turns into the pair of coinciding equations for one variable. Therefore, $`x=y=\kappa (a+b)`$ and $`c[A]=2\delta (a+b)`$.
Thus, the $`r=2`$ dilogarithm identity for a matrix $`A`$ of the form (2.11) reduces to an $`r=1`$ identity. Therefore, the only values of $`(a+b)`$ in (2.11) that correspond to a rational value of $`c[A]`$ are given by the set (1.9). Namely, for $`(a+b)=1,\frac{1}{2},\frac{1}{4},0`$ we obtain, respectively,
$$c=\frac{4}{5},1,\frac{6}{5},2.$$
(2.12)
The value $`c=1`$ here corresponds to a particular case ($`d=a`$, $`b=\frac{1}{2}-a`$) of the family (2.10). The value $`c=\frac{4}{5}`$ is the effective central charge of the $`(5,6)`$, $`(3,10)`$ and $`(2,15)`$ minimal models. The existence of the family of matrices (2.11) yielding this value of $`c[A]`$ was observed in . The following realizations of (1.5) (with certain restriction on the summation) as Virasoro characters are known for this family: $`a=\frac{2}{3}`$, $`b=\frac{1}{3}`$ gives $`\chi _{1,3}^{5,6}`$ and $`\chi _{1,1}^{5,6}+\chi _{1,5}^{5,6}`$ ; $`a=b=\frac{1}{2}`$ gives $`\chi _{1,2}^{5,6}`$, $`\chi _{1,4}^{5,6}`$, $`\chi _{2,2}^{5,6}`$ and $`\chi _{2,4}^{5,6}`$ ; $`a=1`$, $`b=0`$ gives $`\chi _{1,5}^{3,10}`$ . Let us remark that, according to Proposition 1, the solution of (2.1) for the $`a+b=1`$ case of (2.11) is unique at least for $`a>0.25`$. Numerical computations show that it becomes non-unique for $`a<a_0\approx 0.1`$.
To complete the general discussion of the properties of solutions to the system (2.1) let us find some estimates for $`c[A]`$.
Proposition 5. Suppose that $`A`$ satisfies (2.3) and $`ad>0`$. Then the following lower and upper bounds on $`c[A]`$ hold:
$`\delta (b+d)+L\left(\left(\kappa (d)\right)^{\frac{a+b}{d}}\right)\le c[A]\le \delta (a+b)+\delta (d),\qquad \mathrm{for}\ d\le b;`$ (2.13)
$`\delta (b+d)+L\left(\left(\kappa (\frac{D}{a-b})\right)^{\frac{a^2-b^2}{D}}\right)\le c[A]\le \delta (a+b)+\delta (\frac{D}{a-b}),\qquad \mathrm{for}\ d\ge b>0;`$ (2.14)
$`\delta (a+b)+\delta (\frac{D}{a-b})\le c[A]\le 2\delta (b+d),\qquad \mathrm{for}\ b<0.`$ (2.15)
As an application of this proposition, we notice that if $`A`$ is such that $`a,b,d>\xi _0\approx 3.75`$, then $`c[A]`$ cannot be the effective central charge of a minimal model. Indeed, the smallest non-zero value of $`c_{st}`$ is $`\frac{2}{5}`$ (recall that $`s`$ and $`t`$ in (1.7) are co-prime), whereas $`c[A]\le \delta (2\xi _0)+\delta (\xi _0)<\frac{2}{5}`$.
## 3 Solutions of $`r=2`$ TBA equations and corresponding dilogarithm identities.
Eqs. (2.11) (for $`a+b=0,\frac{1}{4},\frac{1}{2},1`$) and (2.10) are examples of continuous families of admissible matrices $`A`$. Now we will present several other admissible matrices $`A`$ having $`c[A]`$ in the form (1.7). For completeness, the previously known examples are also listed. Let us recall that, according to Proposition 2, the list of the matrices $`A`$ below can be doubled by including their duals $`\frac{1}{4}A^{-1}`$, but this does not lead to new dilogarithm identities.
There exists the well-known representation of the type (1.5) for the characters of the $`(2,2k+1)`$ model with $`\mathrm{rank}A=k-1`$ (it provides the sum side of the Andrews-Gordon identities ). In the $`k=3`$ case the corresponding matrix $`A`$ is
$$A=\left(\begin{array}{cc}2& 1\\ 1& 1\end{array}\right),c[A]=4/7.$$
(3.1)
The corresponding dilogarithm identity is ($`\lambda =2\mathrm{cos}\frac{\pi }{7}`$)
$$L(\frac{1}{\lambda ^2})+L(\frac{1}{(\lambda ^2-1)^2})=\frac{4}{7}.$$
(3.2)
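As a quick numerical cross-check (added here for illustration; it is not part of the original derivation), the identity (3.2) can be tested directly. The sketch below builds the Rogers dilogarithm, normalized so that $`L(1)=1`$ as in the text, from the series for $`\mathrm{Li}_2`$:

```python
import math

def rogers_L(x, terms=300):
    """Normalized Rogers dilogarithm L(x) for 0 < x < 1, with L(1) = 1.
    Built from the series Li2(x) = sum x^n / n^2 plus the standard
    log-product term, divided by pi^2/6."""
    li2 = sum(x ** n / n ** 2 for n in range(1, terms + 1))
    return (li2 + 0.5 * math.log(x) * math.log(1.0 - x)) / (math.pi ** 2 / 6.0)

lam = 2.0 * math.cos(math.pi / 7.0)
lhs = rogers_L(1.0 / lam ** 2) + rogers_L(1.0 / (lam ** 2 - 1.0) ** 2)
print(lhs)  # agrees with 4/7 = 0.571428... to high accuracy
```

The two arguments $`1/\lambda ^2`$ and $`1/(\lambda ^2-1)^2`$ lie well inside $`(0,1)`$, so the plain series converges quickly.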
The other known example is the following matrix that allows us to construct all characters of the $`(3,7)`$ model (see , the case of $`\chi _{1,2}^{3,7}`$ was found earlier in )
$$A=\frac{1}{4}\left(\begin{array}{cc}4& 2\\ 2& 3\end{array}\right),c[A]=5/7.$$
(3.3)
For instance,
$$\chi _{1,3+Q}^{3,7}=q^{\frac{1}{168}}\underset{\genfrac{}{}{0pt}{}{\stackrel{\to }{m}=\stackrel{\to }{0}}{m_2=Q\mathrm{mod}\mathrm{\hspace{0.17em}2}}}{\overset{\mathrm{\infty }}{\sum }}\frac{q^{m_1^2+\frac{3}{4}m_2^2+m_1m_2-\frac{1}{2}m_2}}{(q)_{m_1}(q)_{m_2}},\qquad Q=0,1.$$
(3.4)
The corresponding dilogarithm identity is ($`\lambda =2\mathrm{cos}\frac{\pi }{7}`$)
$$L(\frac{1}{\lambda ^2})+L(\frac{1}{1+\lambda })=\frac{5}{7}.$$
(3.5)
Let us mention that both (3.2) and (3.5) can be derived from the Watson identities
$$L(\alpha )-L(\alpha ^2)=\frac{1}{7},\qquad L(\beta )+\frac{1}{2}L(\beta ^2)=\frac{5}{7},\qquad L(\gamma )+\frac{1}{2}L(\gamma ^2)=\frac{4}{7},$$
(3.6)
where $`\alpha `$, $`-\beta `$ and $`-\gamma ^{-1}`$ are roots of the cubic
$$t^3+2t^2-t-1=0$$
(3.7)
such that $`\lambda =1+\alpha =\beta ^{-1}=(1-\gamma )^{-1}`$. The equivalence of (3.2) to the third equation in (3.6) was shown in . Exploiting Abel’s duplication formula (which follows from (1.3))
$$\frac{1}{2}L(x^2)=L(x)-L(\frac{x}{1+x}),$$
(3.8)
we establish the equivalence of (3.5) to the second equation in (3.6):
$`L(\frac{1}{\lambda ^2})+L(\frac{1}{1+\lambda })=L(\beta ^2)+L(\frac{\beta }{1+\beta })=L(\beta ^2)+L(\beta )-\frac{1}{2}L(\beta ^2)=L(\beta )+\frac{1}{2}L(\beta ^2).`$
Next we describe admissible matrices $`A`$ obeying a specific pattern. Let us mention that the $`a=1`$ case was found in and the $`a=\frac{1}{2}`$, $`a=2`$ cases in .
Proposition 6. Among the matrices of the form
$$A=\frac{1}{2}\left(\begin{array}{cc}2a& 1\\ 1& 1\end{array}\right),\qquad a\ge 0$$
(3.9)
only those with $`a=0,\frac{1}{2},1,2,\mathrm{\infty }`$ have a rational value of $`c[A]`$. These values are, respectively, $`c=1,\frac{4}{5},\frac{3}{4},\frac{7}{10},\frac{1}{2}`$.
Proof. Denote $`u=1-x`$, $`v=1-y`$. In these variables equations (2.1) corresponding to (3.9) look as follows:
$$v=1-uv,\qquad 1-u^2=(u^2)^a.$$
(3.10)
Using the first of these relations and employing the formulae (1.2)-(1.3), we obtain
$`L(x)+L(y)=2-L(u)-L(v)=2-L(1-v)-L(u^2)-L(1-u)=2-L(x)-L(y)-L(u^2),`$
and hence
$$c[A]=L(x)+L(y)=1-\frac{1}{2}L(u^2).$$
(3.11)
Thus, $`c[A]`$ is rational only if $`L(u^2)`$ belongs to the list (1.8), i.e. $`u^2=0,1-\rho ,\frac{1}{2},\rho ,1`$. Noticing that for $`w=u^2`$ the second equation in (3.10) takes the form $`w=(1-w)^{1/a}`$, we obtain the possible values of $`2a`$ as inverse to those in (1.9) (cf. Proposition 2).
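The proof above reduces everything to the scalar equation $`w=(1-w)^{1/a}`$. A small numerical sketch (an illustration added here, not from the original) solves this equation by bisection and evaluates $`c=1-\frac{1}{2}L(w)`$ for $`a=\frac{1}{2},1,2`$, reproducing $`\frac{4}{5},\frac{3}{4},\frac{7}{10}`$:

```python
import math

def rogers_L(x, terms=300):
    # Normalized Rogers dilogarithm, L(1) = 1 (Li2 series plus log term).
    li2 = sum(x ** n / n ** 2 for n in range(1, terms + 1))
    return (li2 + 0.5 * math.log(x) * math.log(1.0 - x)) / (math.pi ** 2 / 6.0)

def solve_w(a):
    # Bisection for w = (1 - w)^(1/a) on (0, 1); the residual is increasing in w.
    lo, hi = 1e-12, 1.0 - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid - (1.0 - mid) ** (1.0 / a) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

central_charges = {a: 1.0 - 0.5 * rogers_L(solve_w(a)) for a in (0.5, 1.0, 2.0)}
print(central_charges)  # close to {0.5: 0.8, 1.0: 0.75, 2.0: 0.7}
```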
For $`a=0`$ the matrix (3.9) is a particular case of (2.10). For $`a=\mathrm{\infty }`$ the corresponding series (1.5) contains no summation over the first variable and thus reduces to the $`r=1`$ case giving characters of the $`(3,4)`$ minimal model (for instance, the second character in (1.10)). For $`a=\frac{1}{2}`$ the matrix (3.9) is a particular case of (2.11). It allows us to construct several characters of the $`(5,6)`$ minimal model . For instance,
$$\chi _{2,2+2Q}^{5,6}=q^{-\frac{1}{120}}\underset{\genfrac{}{}{0pt}{}{\stackrel{\to }{m}=\stackrel{\to }{0}}{m_2=Q\mathrm{mod}\mathrm{\hspace{0.17em}2}}}{\overset{\mathrm{\infty }}{\sum }}\frac{q^{\frac{1}{2}(m_1^2+m_2^2)+m_1m_2+\frac{1}{2}m_1}}{(q)_{m_1}(q)_{m_2}},\qquad Q=0,1.$$
(3.12)
The corresponding dilogarithm identity is $`2L(1-\rho )=\frac{4}{5}`$.
For $`a=1`$ the matrix (3.9) allows us to construct all characters of the $`(3,8)`$ model (see , the case of $`\chi _{1,2}^{3,8}`$ was found earlier in ). For instance,
$$\chi _{1,4}^{3,8}=q^{\frac{1}{8}}\underset{\stackrel{\to }{m}=\stackrel{\to }{0}}{\overset{\mathrm{\infty }}{\sum }}\frac{q^{m_1^2+\frac{1}{2}m_2^2+m_1m_2+m_1+\frac{1}{2}m_2}}{(q)_{m_1}(q)_{m_2}}.$$
(3.13)
The corresponding dilogarithm identity is
$$L(1-\frac{1}{\sqrt{2}})+L(\sqrt{2}-1)=\frac{3}{4},$$
(3.14)
or, equivalently, $`L(\frac{1}{\sqrt{2}})-L(\sqrt{2}-1)=\frac{1}{4}`$. The latter relation is just a particular case, $`x=\frac{1}{\sqrt{2}}`$, of Abel’s duplication formula (3.8). Let us remark that the dual matrix gives $`c[\frac{1}{4}A^{-1}]=\frac{5}{4}`$ which is the central charge of the $`Z_6`$ parafermionic model.
For $`a=2`$ the matrix (3.9) allows us to construct some characters of the $`(4,5)`$ model . For instance,
$$\chi _{2,2}^{4,5}=q^{\frac{1}{120}}\underset{\stackrel{\to }{m}=\stackrel{\to }{0}}{\overset{\mathrm{\infty }}{\sum }}\frac{q^{2m_1^2+\frac{1}{2}m_2^2+m_1m_2+\frac{1}{2}m_2}}{(q)_{m_1}(q)_{m_2}}.$$
(3.15)
The corresponding dilogarithm identity is $`L(1-\sqrt{\rho })+L(1-\frac{1}{1+\sqrt{\rho }})=\frac{7}{10}`$, or, equivalently,
$$L(\sqrt{\rho })+L(\frac{1}{1+\sqrt{\rho }})=\frac{13}{10}.$$
(3.16)
This identity was found in as a consequence of the formula (3.15). The proof of Proposition 6 provides an algebraic derivation for (3.16) based on the functional relation (1.3).
Now we present a list of admissible matrices $`A`$ with $`c[A]`$ in the form (1.7) that have not appeared in the literature before. These are results of a computer-based search performed bearing in mind the general properties of the $`r=2`$ TBA equations discussed in the previous section. For some of the corresponding dilogarithm identities we give an explicit algebraic proof or show that they are equivalent to certain known identities. The cases where such a proof is lacking were checked numerically (with a precision of order $`10^{-15}`$).
The effective central charge of the $`(3,5)`$ model is produced by
$$A=\frac{1}{4}\left(\begin{array}{cc}5& 4\\ 4& 4\end{array}\right),c[A]=3/5.$$
(3.17)
Notice that $`c[\frac{1}{4}A^{-1}]=\frac{7}{5}`$ is the central charge of the $`Z_8`$ parafermionic model. Solving (2.1) for (3.17), we find that $`x=1-\delta ^2`$ and $`y=(1+\delta )^{-2}`$ where $`\delta `$ is the positive root of the quartic
$$\delta ^4+2\delta ^3-\delta -1=0.$$
(3.18)
Applying Ferrari’s method, we reduce this equation to
$$\delta ^2+\delta =\rho +1.$$
(3.19)
The solution is $`\delta =\frac{1}{2}(\sqrt{3+2\sqrt{5}}-1)=\frac{1}{2}(\sqrt{4\rho +5}-1)`$. The corresponding dilogarithm identity reads
$$L(1-\delta ^2)+L\left(\frac{1}{(1+\delta )^2}\right)=L\left(\frac{1}{2}\sqrt{4\rho +5}-\frac{1}{2}-\rho \right)+L\left(\frac{1}{2}+\frac{1}{2}\rho -\frac{1}{2}\sqrt{5\rho -2}\right)=\frac{3}{5}.$$
(3.20)
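A numerical sanity check of (3.20) (a sketch added here for illustration, assuming $`\rho =\frac{1}{2}(\sqrt{5}-1)`$, the value for which $`L(\rho )=\frac{3}{5}`$):

```python
import math

def rogers_L(x, terms=300):
    # Normalized Rogers dilogarithm with L(1) = 1.
    li2 = sum(x ** n / n ** 2 for n in range(1, terms + 1))
    return (li2 + 0.5 * math.log(x) * math.log(1.0 - x)) / (math.pi ** 2 / 6.0)

rho = (math.sqrt(5.0) - 1.0) / 2.0
delta = 0.5 * (math.sqrt(4.0 * rho + 5.0) - 1.0)

# delta should be a root of the quartic delta^4 + 2 delta^3 - delta - 1 = 0
quartic = delta ** 4 + 2.0 * delta ** 3 - delta - 1.0
lhs = rogers_L(1.0 - delta ** 2) + rogers_L(1.0 / (1.0 + delta) ** 2)
print(quartic, lhs)  # quartic residual ~ 0, lhs agrees with 3/5
```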
Gordon and McIntosh proved in for the same $`\delta `$ the following identity
$$L(\delta )-L(\delta ^3)=\frac{1}{5}.$$
(3.21)
Let us show that (3.20) and (3.21) are equivalent. Using (1.2) and (3.8) several times, we find
$`L(1-\delta ^2)+L(\frac{1}{(1+\delta )^2})=1-L(\delta ^2)+L(\frac{1}{(1+\delta )^2})=1-2L(\delta )+2L(\frac{\delta }{1+\delta })+2L(\frac{1}{1+\delta })-2L(\frac{1}{2+\delta })`$
$`=1-2L(\delta )+2-2L(\frac{1}{2+\delta })=3-2L(\delta )-2L(1-\delta ^3)=1-2\left(L(\delta )-L(\delta ^3)\right)=\frac{3}{5}.`$
In the last line we used that $`(2+\delta )^{-1}=1-\delta ^3`$ holds due to (3.19).
The central charge of the $`(3,4)`$ model is produced by the following matrices
$`A=\frac{1}{2}\left(\begin{array}{cc}4& 3\\ 3& 3\end{array}\right),c[A]=1/2,`$ (3.24)
$`A=\frac{1}{2}\left(\begin{array}{cc}8& 3\\ 3& 2\end{array}\right),c[A]=1/2.`$ (3.27)
Notice that $`c[\frac{1}{4}A^{-1}]=\frac{3}{2}`$ is the central charge of the $`Z_{10}`$ parafermionic model. Solving (2.1) for (3.24), we find: $`x=\frac{1}{4}(3-\sqrt{5})=\frac{1}{2}(1-\rho )`$, $`y=\sqrt{5}-2=2\rho -1`$ and the corresponding dilogarithm identity reads
$$L(\frac{1}{2}-\frac{1}{2}\rho )+L(2\rho -1)=\frac{1}{2}.$$
(3.28)
To prove it we introduce $`u=1-x`$, $`v=1-y`$ and notice that $`u=\frac{1}{2}(1+\rho )=1/(2\rho )`$ and $`v=2-u^{-1}=2(1-\rho )`$. Employing (1.2) and (1.3), we obtain:
$`L(u)+L(v)=L(2u-1)+L(\frac{1}{2})+L(\frac{v}{2})=L(\rho )+\frac{1}{2}+L(1-\rho )=\frac{3}{2},`$
which is equivalent to (3.28) due to (1.2).
Equations (2.1) for (3.27) can be transformed to the form:
$$x^4-6x^3+13x^2-10x+1=0,\qquad y^4+6y^3-11y^2+6y-1=0$$
(3.29)
and $`y(3-2x)=(1-x)`$. Applying Ferrari’s method, we reduce these equations to
$$x^2+(\sqrt{2}-3)x=2\sqrt{2}-3,\qquad y^2+3(\sqrt{2}+1)y=\sqrt{2}+1.$$
(3.30)
The solution is $`x=\frac{1}{2}(3-\sqrt{2})-\frac{1}{2}\sqrt{2\sqrt{2}-1}`$, which leads to the following dilogarithm identity:
$$L\left(\frac{3}{2}-\frac{1}{2}\sqrt{2}-\frac{1}{2}\sqrt{2\sqrt{2}-1}\right)+L\left((\frac{3}{2}+\sqrt{2})\sqrt{2\sqrt{2}-1}-\frac{3}{2}-\frac{3}{2}\sqrt{2}\right)=\frac{1}{2}.$$
(3.31)
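Since (3.31) is one of the two identities for which no algebraic proof is given below, a direct numerical check is appropriate (an illustrative sketch added here; the closed forms for $`x`$ and $`y`$ are those quoted above):

```python
import math

def rogers_L(x, terms=400):
    # Normalized Rogers dilogarithm with L(1) = 1.
    li2 = sum(x ** n / n ** 2 for n in range(1, terms + 1))
    return (li2 + 0.5 * math.log(x) * math.log(1.0 - x)) / (math.pi ** 2 / 6.0)

r = math.sqrt(2.0 * math.sqrt(2.0) - 1.0)
x = 1.5 - 0.5 * math.sqrt(2.0) - 0.5 * r
y = (1.5 + math.sqrt(2.0)) * r - 1.5 - 1.5 * math.sqrt(2.0)

# x and y should be roots of the two quartics in (3.29)
res_x = x ** 4 - 6.0 * x ** 3 + 13.0 * x ** 2 - 10.0 * x + 1.0
res_y = y ** 4 + 6.0 * y ** 3 - 11.0 * y ** 2 + 6.0 * y - 1.0
total = rogers_L(x) + rogers_L(y)
print(res_x, res_y, total)  # residuals ~ 0, total agrees with 1/2
```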
The effective central charge of the $`(2,5)`$ model is produced by
$$A=\frac{1}{2}\left(\begin{array}{cc}8& 5\\ 5& 4\end{array}\right),c[A]=2/5.$$
(3.32)
Notice that $`c[\frac{1}{4}A^{-1}]=\frac{8}{5}`$ is the central charge of the $`Z_{13}`$ parafermionic model. Solving (2.1) for (3.32), we find that $`x=1-u_+`$ and $`y=u_{-}(u_{-}-1)^{-1}`$, where $`u_+>0`$ and $`u_{-}<0`$ are the real roots of the quartic
$$u^4+u^3+3u^2-3u-1=0.$$
(3.33)
Applying Ferrari’s method, we reduce this equation to
$$u^2-\rho u=2\rho -1.$$
(3.34)
The solution is $`u_\pm =\frac{1}{2}\rho \pm \frac{1}{2}\sqrt{7\rho -3}`$, which leads to the following dilogarithm identity:
$$L\left(1-\frac{1}{2}\rho -\frac{1}{2}\sqrt{7\rho -3}\right)+L\left(\frac{1}{2}\sqrt{28\rho +45}-2\rho -\frac{5}{2}\right)=\frac{2}{5}.$$
(3.35)
To prove it we employ (1.2) and (1.3):
$`L(x)+L(y)=L(1-u_+)+L(1-\frac{1}{1-u_{-}})=2-L(u_+)-L(\frac{1}{1-u_{-}})`$
$`=2-L(\frac{u_+}{1-u_{-}})-L(\rho )-L(\frac{1-u_+}{1-\rho })=\frac{7}{5}-L(\frac{u_+}{1-u_{-}})-L(\frac{1-\rho +u_{-}}{1-\rho })=\frac{2}{5}-L(\frac{u_+}{1-u_{-}})+L(\frac{-u_{-}}{1-\rho })=\frac{2}{5}.`$
In the last line we used that the relations $`u_++u_{-}=\rho `$, $`u_+u_{-}=-\rho ^3`$ and $`(1-\rho )u_+=-(1-u_{-})u_{-}`$ hold due to (3.34).
The central charge of the $`(6,7)`$ minimal model is produced by (this was noticed earlier by M. Terhoeven (unpublished))
$$A=\frac{1}{6}\left(\begin{array}{cc}8& 1\\ 1& 2\end{array}\right),c[A]=6/7.$$
(3.36)
Notice that $`c[\frac{1}{4}A^{-1}]=\frac{8}{7}`$ is the central charge of the $`Z_5`$ parafermionic model. Solving (2.1) for (3.36), we derive that $`x=\mu ^{-1}`$ and $`y=1-\nu `$, where $`0<\nu <1`$ and $`\mu >1`$ are the real roots of the following equation
$$t^6-7t^5+19t^4-28t^3+20t^2-7t+1=0.$$
(3.37)
The corresponding dilogarithm identity reads $`L(\mu ^{-1})+L(1-\nu )=\frac{6}{7}`$, or equivalently
$$L(\nu )-L(\frac{1}{\mu })=\frac{1}{7}.$$
(3.38)
It would be interesting to clarify whether this identity is related to the Watson identities.
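The status of (3.38) can at least be confirmed numerically (an illustrative sketch added here): bisect the sextic (3.37) for its two real roots, cross-check them against the TBA equations (2.1) for the matrix (3.36) (assumed here in the form $`x=(1-x)^{2a}(1-y)^{2b}`$, $`y=(1-x)^{2b}(1-y)^{2d}`$), and evaluate the dilogarithms:

```python
import math

def rogers_L(x, terms=400):
    # Normalized Rogers dilogarithm with L(1) = 1.
    li2 = sum(x ** n / n ** 2 for n in range(1, terms + 1))
    return (li2 + 0.5 * math.log(x) * math.log(1.0 - x)) / (math.pi ** 2 / 6.0)

def sextic(t):
    return ((((((t - 7.0) * t + 19.0) * t - 28.0) * t + 20.0) * t - 7.0) * t + 1.0)

def bisect(f, lo, hi):
    # assumes f(lo) and f(hi) have opposite signs
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

nu = bisect(sextic, 0.0, 1.0)   # root in (0, 1)
mu = bisect(sextic, 2.0, 4.0)   # root above 1

# cross-check: x = 1/mu, y = 1 - nu solve the TBA system for A = (1/6)[[8,1],[1,2]]
x, y = 1.0 / mu, 1.0 - nu
res1 = x - (1.0 - x) ** (8.0 / 3.0) * (1.0 - y) ** (1.0 / 3.0)
res2 = y - (1.0 - x) ** (1.0 / 3.0) * (1.0 - y) ** (2.0 / 3.0)
diff = rogers_L(nu) - rogers_L(1.0 / mu)
print(res1, res2, diff)  # residuals ~ 0, diff agrees with 1/7
```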
The list is completed with two matrices $`A`$ such that $`d=0`$. As was remarked above, in such a case equations (2.1) have an extra solution $`x=0`$, $`y=1`$. We however will focus on the ‘regular’ solution, $`0<x,y<1`$.
$$A=\frac{1}{4}\left(\begin{array}{cc}1& 1\\ 1& 0\end{array}\right),c[A]=8/7.$$
(3.39)
Solving the corresponding equations (2.1), we find that $`y`$ satisfies the cubic (3.7) and $`x=1-y^2`$. Therefore, $`y=\alpha `$, $`x=1-\alpha ^2`$ and the dilogarithm identity yielding the value of $`c[A]`$ in (3.39) is equivalent to the first identity in (3.6):
$$L(x)+L(y)=L(1-\alpha ^2)+L(\alpha )=1+L(\alpha )-L(\alpha ^2)=\frac{8}{7}.$$
(3.40)
Notice that this is the central charge of the $`Z_5`$ parafermionic model. Let us remark that the dual matrix would have $`c[\frac{1}{4}A^{-1}]=\frac{6}{7}`$ (which is the central charge of the $`(6,7)`$ minimal model) but it does not satisfy (2.3) and thus Proposition 2 is not applicable.
$$A=\frac{1}{18}\left(\begin{array}{cc}8& 3\\ 3& 0\end{array}\right),c[A]=6/5.$$
(3.41)
Solving the corresponding equations (2.1), we find that $`y`$ satisfies the quartic (3.18) and $`x=1-y^3`$. Therefore, $`y=\delta `$, $`x=1-\delta ^3`$ and the dilogarithm identity yielding the value of $`c[A]`$ in (3.41) is equivalent to the Gordon-McIntosh identity (3.21):
$$L(x)+L(y)=L(1-\delta ^3)+L(\delta )=1+L(\delta )-L(\delta ^3)=\frac{6}{5}.$$
(3.42)
The dual matrix would have $`c[\frac{1}{4}A^{-1}]=\frac{4}{5}`$ (which is the central charge of the $`(5,6)`$ minimal model) but it does not satisfy (2.3) and thus Proposition 2 is not applicable.
## 4 Discussion.
To summarize, we studied admissible $`2\times 2`$ matrices $`A`$ such that $`c[A]`$ (or $`c[\frac{1}{4}A^{-1}]=2-c[A]`$) computed via the corresponding TBA equations (2.1) is the effective central charge (1.7) of a minimal Virasoro model. Certain properties of such matrices have been established. In particular, we have described the classes of $`A`$ that have $`c[A]`$ less than, equal to, or greater than 1. Some upper and lower bounds for $`c[A]`$ have been obtained. Several continuous families and a ‘discrete’ set of admissible matrices $`A`$ have been found. The corresponding two-term dilogarithm identities have been obtained. Some of them ((3.16), (3.20), (3.31), (3.35), (3.38)) are quite non-trivial and appear to be new. All the found identities but (3.31) and (3.38) have been proved directly by exploiting the functional dilogarithm relations or shown to be equivalent to the Watson and Gordon-McIntosh identities. This serves as a proof that the matrices presented in section 3 (some of them were found by computer-based search) are indeed admissible. As for the two unproven identities, the structure of (3.31) suggests that it presumably can be treated by the standard technique, whereas the status of (3.38) is less clear.
The presented set of matrices $`A`$ presumably exhausts the admissible matrices with not too fractional entries having $`c[A]`$ of the form (1.7). This can be claimed thanks to Proposition 5 and the fact that the spectrum of $`c_{st}`$ is separated from 0 and 2. However, the question whether the set is complete remains open. If the set is complete (or can be completed), it can be used for a classification of massive $`(1+1)`$-dimensional integrable models with diagonal scattering by the admissible values of the effective central charge $`c_{\mathrm{eff}}`$ for the corresponding $`S`$-matrices. In particular, our results imply that such a model with two massive particles may have in the ultra-violet limit (if the standard TBA analysis applies) $`c_{\mathrm{eff}}`$ of the form (1.7) given by (2.4) or $`c=\frac{2}{5},\frac{1}{2},\frac{4}{7},\frac{3}{5},\frac{7}{10},\frac{5}{7},\frac{3}{4},\frac{6}{7},\frac{8}{7}`$.
Let us remark that a search for $`r=2`$ admissible matrices corresponding to other forms of $`c[A]`$ will be more involved. For instance, the spectrum of $`c_n`$ given by (1.11) is ‘gapless’ (i.e., not separated from 2). Therefore, according to Propositions 2 and 3, we will have to consider $`A`$ with very small and very large entries.
It is interesting to understand whether the found admissible matrices can be employed in (1.5) to construct Virasoro characters. This would allow us to apply the quasi-particle representations to the corresponding conformal models.
Acknowledgments: I am grateful to K. Kokhas for helpful discussions. This work has been completed during the workshop “Applications of integrability” at the Erwin Schrödinger Institute, Vienna and my visit to the Institut für Theoretische Physik, Freie Universität Berlin. I thank the organizers of the workshop, the members of the ESI and the members of the ITP, FU-Berlin for warm hospitality. This work was supported in part by the grant RFFI-99-01-00101.
## Appendix.
Proof of Proposition 1. Eliminating $`x`$ in (2.1), we obtain
$$y^{\frac{1}{2b}}(1-y)^{-\frac{d}{b}}+y^{\frac{a}{b}}(1-y)^{-\frac{2}{b}D}=1.$$
(A.1)
Let $`f(y)`$ denote the l.h.s. of (A.1). For $`D\ge 0`$ the uniqueness of the solution is obvious since $`f(y)`$ is monotonic (strictly increasing for $`b>0`$ and strictly decreasing for $`b<0`$) on the interval $`[0,1]`$. Consider now the case of $`D<0`$ (which implies $`b>0`$ because of (2.3)). We have $`f(0)=0`$, $`f(1)=\mathrm{\infty }`$ and $`f(y)`$ is a smooth (but not necessarily monotonic) function for $`0<y<1`$. Eq. (A.1) can have several solutions if $`f^{\prime }(y)\equiv df(y)/dy`$ has roots on this interval. The explicit form of $`f^{\prime }(y)`$ shows that this can occur only for $`y>y_{\mathrm{min}}=a(a-2D)^{-1}`$. Furthermore, if (A.1) has several solutions, then among the roots of $`f^{\prime }(y)`$ there must be at least one, denote it $`y_0`$, such that $`f(y_0)<1`$. As seen from (A.1), the necessary condition for this is $`y_0<\kappa (d)`$. If this relation is incompatible with the condition $`y_0>y_{\mathrm{min}}`$, i.e. $`-2D\le a(\frac{1}{\kappa (d)}-1)`$, then the solution of (A.1) and hence of (2.1) is unique. Considering in the same way the counterpart of (A.1) for $`x`$, we obtain the condition $`-2D\le d(\frac{1}{\kappa (a)}-1)`$. Clearly, we can take the lowest of the two bounds.
Proof of Proposition 2. Taking the logarithm of the equations in (1.6), multiplying the resulting system with $`\frac{1}{2}A^{-1}`$ from the left, taking exponents of the new equations, and replacing all $`x_i`$ by $`(1-x_i)`$, we obtain exactly equations (1.6) for $`\frac{1}{4}A^{-1}`$. Exploiting the property (1.2), we infer that $`c[\frac{1}{4}A^{-1}]=\sum _{i=1}^{r}L(1-x_i)=\sum _{i=1}^{r}(1-L(x_i))=r-c[A]`$.
Proof of Proposition 3. In the case of $`b>\frac{1}{2}`$ we have $`x<(1-x)^{2a}(1-y)\le 1-y`$. Therefore $`c[A]=L(x)+L(y)<L(1-y)+L(y)=1`$. The analogous consideration for $`b=\frac{1}{2}`$ shows that $`x+y=1`$ (and hence $`c[A]=1`$) only if $`a=0`$ or $`d=0`$. Otherwise $`x+y<1`$ and hence $`c[A]<1`$.
Consider now the $`b<\frac{1}{2}`$ case. Let $`4ad=(2b-1)^2`$. Divide the first equation in (2.1) by $`(1-y)`$ and take its $`(2b-1)`$-th power. Divide the second equation in (2.1) by $`(1-x)`$ and take its $`2a`$-th power. The r.h.s. of the resulting equations coincide. Thus, we obtain
$$\left(\frac{1-y}{x}\right)^{1-2b}=\left(\frac{y}{1-x}\right)^{2a},$$
(A.2)
where the powers on both sides are positive. An assumption that $`1-y>x`$ leads to a contradiction since then the l.h.s. and the r.h.s. of (A.2) are, respectively, greater and smaller than 1. An assumption that $`1-y<x`$ leads to an analogous contradiction. Thus, we conclude that $`1-y=x`$. Moreover, any matrix $`A`$ such that $`c[A]=1`$ necessarily satisfies (2.8). Indeed, $`c[A]=1`$ implies the relation $`x+y=1`$. Substituting it into (2.1), we obtain the conditions $`4ad=(1-2b)^2`$ and $`b\le \frac{1}{2}`$ (the latter one guarantees the existence of a solution on the interval $`[0,1]`$).
The hyperbola $`4ad=(1-2b)^2`$ divides the quadrant $`a\ge 0`$, $`d\ge 0`$ into two disjoint parts. Since $`c[A]`$ is a continuous function of $`a`$ and $`d`$, we infer that $`c[A]<1`$ for $`4ad>(1-2b)^2`$ (because $`x`$ and $`y`$ are small for large $`a`$ and $`d`$) and $`c[A]>1`$ for $`4ad<(1-2b)^2`$ (because $`x\to 1`$ and $`y\to 1`$ for small $`a`$ and $`d`$).
Proof of Proposition 4. Equation (A.1) in the $`a=d`$ case coincides with its $`x`$ counterpart, that is $`x`$ and $`y`$ obey the same equation. This implies $`x=y`$ since we required the uniqueness of the solution. The ‘only if’ part of the proposition is obvious, it suffices to substitute the relation $`x=y`$ into (2.1).
Proof of Proposition 5. Let $`b>0`$. Notice that $`a\ge d`$ implies $`x\le y`$. Indeed, for $`d`$ and $`b`$ finite and $`a\gg d`$, it follows from (2.1) that $`x\to 0`$ whereas $`y`$ is finite. Together with Proposition 4 this implies that $`x<y`$ for all $`a>d`$ since $`x`$ and $`y`$ are continuous functions of $`a`$, $`b`$, $`d`$ (cf. (A.1)). Thus, we have $`1-x\ge 1-y`$. Substituting this inequality into (2.1), we obtain
$$(1-y)^{2(a+b)}\le x\le \kappa (a+b),\qquad \kappa (b+d)\le y\le (1-x)^{2(b+d)}.$$
(A.3)
This provides the upper bound for $`x`$ and the lower bound for $`y`$. In order to find an upper bound for $`y`$ we can simply notice that the second equation in (2.1) implies $`y<\kappa (d)`$. Alternatively, we can first employ (2.1) to express $`y`$ as follows: $`y=(1-y)^{2D/a}x^{b/a}`$. Together with $`x<y`$ this yields $`y<\kappa (\frac{D}{a-b})`$. Comparing the values of $`\frac{D}{a-b}`$ and $`d`$, we infer that the first upper bound for $`y`$ is better if $`d<b`$. Now, if $`y<\kappa (t)`$, then the definition (2.2) implies also that $`1-y>\kappa (t)^{\frac{1}{2t}}`$. Substituting this relation (with $`t=d`$ or $`t=\frac{D}{a-b}`$) into the first inequality in (A.3), we obtain the corresponding lower bounds for $`x`$. Having found the upper and lower bounds for $`x`$ and $`y`$, we obtain the estimates (2.13) and (2.14) simply exploiting that $`L(t)`$ and hence $`\delta (t)`$ are strictly monotonic.
The estimates in (2.15) are derived by similar considerations in the $`b<0`$ case. |
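As an illustration of Proposition 5 (a sketch added here, not part of the original appendix), the bounds (2.13) can be evaluated for the matrix (3.1), where $`a=2`$, $`b=d=1`$ and $`c[A]=\frac{4}{7}`$; here $`\kappa (t)`$ is obtained by bisection and $`\delta (t)=L(\kappa (t))`$:

```python
import math

def rogers_L(x, terms=300):
    # Normalized Rogers dilogarithm with L(1) = 1.
    li2 = sum(x ** n / n ** 2 for n in range(1, terms + 1))
    return (li2 + 0.5 * math.log(x) * math.log(1.0 - x)) / (math.pi ** 2 / 6.0)

def kappa(t):
    # kappa(t) solves kappa = (1 - kappa)^(2t) on (0, 1); residual increases in kappa
    lo, hi = 1e-12, 1.0 - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid - (1.0 - mid) ** (2.0 * t) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def delta(t):
    return rogers_L(kappa(t))

a, b, d = 2.0, 1.0, 1.0          # the matrix (3.1); here d <= b, so (2.13) applies
lower = delta(b + d) + rogers_L(kappa(d) ** ((a + b) / d))
upper = delta(a + b) + delta(d)
print(lower, 4.0 / 7.0, upper)   # lower <= 4/7 <= upper
```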
The low-mass σ-meson: Is it an eyewitness of confinement?
## 1 Dispersion relation $`N/D`$-solution for the $`\pi \pi `$-scattering amplitude below 900 MeV
The pion-pion scattering partial amplitude, being a function of the invariant energy squared $`s=M_{\pi \pi }^2`$, can be represented as the ratio $`N(s)/D(s)`$, where $`N(s)`$ has a left-hand cut, which is due to the ”forces” (the interactions due to $`t`$\- and $`u`$-channel exchanges), while the function $`D(s)`$ is determined by the rescatterings in the $`s`$-channel. $`D(s)`$ is given by the dispersion integral along the right-hand cut in the complex-$`s`$ plane:
$$A(s)=\frac{N(s)}{D(s)},\qquad D(s)=1-\underset{4\mu _\pi ^2}{\overset{\mathrm{\infty }}{\int }}\frac{ds^{\prime }}{\pi }\frac{\rho (s^{\prime })N(s^{\prime })}{s^{\prime }-s+i0}.$$
(1)
Here $`\rho (s)`$ is the invariant $`\pi \pi `$ phase space, $`\rho (s)=(16\pi )^{-1}\sqrt{(s-4\mu _\pi ^2)/s}`$. It is supposed in (1) that $`D(s)\to 1`$ as $`s\to \mathrm{\infty }`$ and CDD-poles are absent (a detailed presentation of the $`N/D`$-method can be found in ).
The $`N`$-function can be written as an integral along the left-hand cut as follows:
$$N(s)=\underset{-\mathrm{\infty }}{\overset{s_L}{\int }}\frac{ds^{\prime }}{\pi }\frac{L(s^{\prime })}{s^{\prime }-s},$$
(2)
where the value $`s_L`$ marks the beginning of the left-hand cut. For example, for the one-meson exchange diagram $`g^2/(m^2-t)`$, the left-hand cut starts at $`s_L=4\mu _\pi ^2-m^2`$, and the $`N`$-function in this point has a logarithmic singularity; for the two-pion exchange, $`s_L=0`$.
Below we work with the amplitude $`a(s)`$, which is defined as follows:
$$a(s)=\frac{N(s)}{8\pi \sqrt{s}\left(1-P\underset{4\mu _\pi ^2}{\overset{\mathrm{\infty }}{\int }}\frac{ds^{\prime }}{\pi }\frac{\rho (s^{\prime })N(s^{\prime })}{s^{\prime }-s}\right)}.$$
(3)
The amplitude $`a(s)`$ is related to the scattering phase shift: $`a(s)\sqrt{s/4\mu _\pi ^2}=\mathrm{tan}\delta _0^0`$. In Eq. (3) the threshold singularity is singled out explicitly, so the function $`a(s)`$ contains the left-hand cut only, as well as the poles corresponding to zeros of the denominator of the right-hand side of (3): $`1=P\underset{4\mu _\pi ^2}{\overset{\mathrm{\infty }}{\int }}(ds^{\prime }/\pi )\rho (s^{\prime })N(s^{\prime })/(s^{\prime }-s)`$. The pole of $`a(s)`$ at $`s>4\mu _\pi ^2`$ corresponds to the phase shift value $`\delta _0^0=90^{\circ }`$. The phase of the $`\pi \pi `$ scattering reaches the value $`\delta _0^0=90^{\circ }`$ at $`\sqrt{s}=M_{90}\simeq 850`$ MeV. Because of that, the amplitude $`a(s)`$ may be represented in the form:
$$a(s)=\underset{-\mathrm{\infty }}{\overset{s_L}{\int }}\frac{ds^{\prime }}{\pi }\frac{\alpha (s^{\prime })}{s^{\prime }-s}+\frac{C}{s-M_{90}^2}+D.$$
(4)
For the reconstruction of the low-mass amplitude, the parameters $`D,C,M_{90}`$ and $`\alpha (s)`$ have been determined by fitting to the experimental data. In the fit we have used the method which was previously tested in the analysis of the low-energy nucleon-nucleon amplitudes . Namely, the integral in the right-hand side of (4) has been replaced by the sum
$$\underset{-\mathrm{\infty }}{\overset{s_L}{\int }}\frac{ds^{\prime }}{\pi }\frac{\alpha (s^{\prime })}{s^{\prime }-s}\to \underset{n}{\sum }\frac{\alpha _n}{s_n-s}$$
(5)
with $`-\mathrm{\infty }<s_n\le s_L`$.
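To make the parameterization (4)-(5) concrete, the toy sketch below assembles an amplitude $`a(s)`$ from a few pole terms in pion-mass units. All numerical values here ($`\alpha _n`$, $`s_n`$, $`C`$, $`D`$, and the conversion of $`M_{90}`$) are invented for illustration only; the actual fitted parameters are those of Table 1. The phase shift follows from $`\mathrm{tan}\delta _0^0=a(s)\sqrt{s/4\mu _\pi ^2}`$:

```python
import math

MU_PI = 1.0                      # pion-mass units
M90_SQ = (850.0 / 139.57) ** 2   # s at which the phase passes 90 degrees

# hypothetical left-hand-cut pole parameters (illustration only)
ALPHAS = [0.3, -0.2, 0.15]
S_POLES = [-5.0, -10.0, -30.0]   # all below s_L, on the negative real axis
C, D = -1.5, 0.1                 # hypothetical residue and constant term

def a_amp(s):
    left = sum(al / (sn - s) for al, sn in zip(ALPHAS, S_POLES))
    return left + C / (s - M90_SQ) + D

def phase_deg(s):
    return math.degrees(math.atan(a_amp(s) * math.sqrt(s / (4.0 * MU_PI ** 2))))

below, above = a_amp(M90_SQ - 0.01), a_amp(M90_SQ + 0.01)
print(below, above)  # a(s) changes sign through the pole at s = M90^2
```

The essential feature reproduced here is that $`a(s)`$ has a pole at $`s=M_{90}^2`$, so the phase passes through $`90^{\circ }`$ there.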
The description of data within the $`N/D`$-solution, which uses six terms in the sum (5), is demonstrated in Fig. 1a. Parameters of the solution are given in Table 1. The scattering length in this solution is equal to $`a_0^0=0.22\mu _\pi ^{-1}`$, the Adler zero is at $`s=0.12\mu _\pi ^2`$. The $`N/D`$-amplitude is sewn with the $`K`$-matrix amplitude of Refs. , and Fig. 1b demonstrates the level of coincidence of the amplitudes $`a(s)`$ for both solutions (the values of $`a(s)`$ which correspond to the $`K`$-matrix amplitude are shown with error bars determined in ).
The dispersion relation solution has a correct analytic structure in the region $`|s|<1`$ GeV<sup>2</sup>. The amplitude has no poles on the first sheet of the complex-$`s`$ plane; the left-hand cut of the $`N`$-function after the replacement given by Eq. (5) is transformed into a set of poles on the negative piece of the real $`s`$-axis: six poles of the amplitude (at $`s/\mu _\pi ^2=-5.2,-9.6,-10.4,-31.6,-36.0,-40.0`$) represent the left-hand singularity of $`N(s)`$. On the second sheet (under the $`\pi \pi `$-cut) the amplitude has two poles: at $`s\simeq (4-i14)\mu _\pi ^2`$ and $`s\simeq (70-i34)\mu _\pi ^2`$ (see Fig. 2). The second pole, at $`s\simeq (70-i34)\mu _\pi ^2`$, is located beyond the region under consideration, $`|s|<1`$ GeV<sup>2</sup> (nevertheless, let us stress that the $`K`$-matrix amplitude has a set of poles just in the region of the second pole of the $`N/D`$-amplitude). The pole near threshold, at
$$s\simeq (4-i14)\mu _\pi ^2,$$
(6)
is what we discuss. The $`N/D`$-amplitude has no poles at $`\mathrm{Re}\sqrt{s}\simeq 600-900`$ MeV even though the phase shift $`\delta _0^0`$ reaches $`90^{\circ }`$ here.
The data do not fix the $`N/D`$-amplitude rigidly. The position of the low-mass pole can be easily varied in the region $`\mathrm{Re}\,s\simeq (0-4)\mu _\pi ^2`$, with simultaneous variations of the scattering length in the interval $`a_0^0\simeq (0.21-0.28)\mu _\pi ^{-1}`$ and of the Adler zero at $`s\simeq (0-1)\mu _\pi ^2`$.
The problem of the low-mass $`\sigma `$-meson was discussed previously. In the approaches which take into account the left-hand cut, the following positions of the pole singularities were found:
(i) dispersion relation approach, $`s\simeq (0.2-i22.5)\mu _\pi ^2`$ ,
(ii) meson exchange models, $`s\simeq (3.0-i17.8)\mu _\pi ^2`$ , $`s\simeq (0.5-i13.2)\mu _\pi ^2`$ ,
$`s\simeq (2.9-i11.8)\mu _\pi ^2`$ ,
(iii) linear $`\sigma `$-model, $`s\simeq (2.0-i15.5)\mu _\pi ^2`$ ,
which are in agreement with our result.
However, in Refs. , the pole singularities were obtained in the region of higher masses, at $`\mathrm{Re}\,s\simeq (7-10)\mu _\pi ^2`$.
## 2 Low-mass pole as the eyewitness of confinement
We believe that the existence of the low-mass pole in the $`00^{++}`$-amplitude is not accidental. The creation of a composite scalar/isoscalar particle with small mass should be associated with a certain fundamental phenomenon at separations which are large on the hadronic scale: such a phenomenon may be the confinement.
The confinement interaction is modelled by the scalar potential in the colour octet state, $`V_8(r)`$; the potential increases infinitely at large distances, $`V_8(r)\sim r`$ at $`r\to \mathrm{\infty }`$. The formation of the confinement potential is directly related to the creation of a set of $`q\overline{q}`$-pairs, or a $`q\overline{q}`$-chain. The examples are provided by the multihadron production at $`\mu ^+\mu ^{-}`$-annihilation (the transition $`\gamma ^{\ast }\to q\overline{q}`$) or highly excited meson decay (the transition $`M^{\ast }\to q\overline{q}`$, see Fig. 3a). In the decay process of Fig. 3a, the flying away colour quarks produce a chain of the $`q\overline{q}`$-pairs due to which the colour of the upper quark flows to the bottom quark. The process of Fig. 3a results in cutting the self-energy diagram of Fig. 3b. The interaction block inside the quark loop is responsible for the formation of the colour potential $`V_8(r)`$; this block is shown separately in Fig. 3c.
The increase of $`V_8(r)\sim r`$ at large $`r`$ means the existence of a strong singularity in the momentum representation at small $`|\stackrel{\to }{q}|`$: $`V_8(\stackrel{\to }{q}^2)\sim 1/\stackrel{\to }{q}^4`$. Using the notations of Fig. 3c, one has $`\stackrel{\to }{q}^2=s`$. So, the set of diagrams of Fig. 3c–type, which is responsible for the confinement forces, has a strong singularity near $`s=0`$.
The infinite increase of $`V_8(r)\sim r`$ at $`r\to \mathrm{\infty }`$, which is reproduced in the singular behaviour $`V_8(q^2)\sim 1/q^4`$ at $`q^2\to 0`$, represents an idealized picture of the confinement. In this picture the limit $`V_8(q^2)\sim 1/q^4`$ at $`q^2\to 0`$ can be interpreted as an exchange by a massless composite colour-octet particle with a coupling growing infinitely: $`g^2(q^2)/q^2`$ with $`g^2(q^2)\sim 1/q^2`$ at small $`q^2`$. This composite colour-octet particle interacts as a whole system only at large distances: its compositeness guarantees the return to standard QCD at small $`r`$. At large distances, the strongly interacting colour-octet massless particle provides colour neutralization of the quark/gluon systems.
However, the colour screening effects (in particular, encountered in the production of white particles) cut the infinite growth of $`V_8(r)`$ at very large $`r`$, allowing the behaviour $`V_8(r)\sim r`$ at $`r<R_{confinement}`$ only (for example, see Ref. ). It results in a softening of the singular behaviour of $`V_8(q^2)`$ at $`q^2\to 0`$. Nevertheless, the $`s`$-channel pole singularities in the block of Fig. 3c–type may survive. Our point is that in this case the singularities at $`s\to 0`$ reveal themselves not only in the colour-octet state but in the colour-singlet one as well, i.e. in the $`\pi \pi `$-scattering amplitude.
To clarify this point, let us re-write the block of Fig. 3c in terms of the colour/flavour propagators (’t Hooft–Veneziano diagram). In Fig. 3d, the ’t Hooft–Veneziano diagram which corresponds to the block of Fig. 3c is shown: the solid line represents the propagation of colour and the dashed line describes that of flavour. It is seen that
(i) in the $`s`$-channel the block contains two open colour lines, $`c=3`$ and $`\overline{c}=\overline{3}`$, so one has two colour states in the $`s`$-channel, octet and singlet: $`c\overline{c}=1+8`$.
(ii) the two $`s`$-channel colour amplitudes, the octet and singlet ones, are not suppressed in the $`1/N`$-expansion ($`N=N_c=N_f`$), which means that the same leading-$`N`$ subgroups of diagrams are responsible for the formation of the octet and singlet amplitudes. Therefore, the positions of the pole singularities in both amplitudes coincide in the leading-$`N`$ terms. However, the next-to-leading terms (such as the decay of the white state into $`\pi \pi `$) lift the rigid degeneracy of the octet and singlet states.
Summing up, in the leading-$`N`$ terms the pole structure near $`q^2=0`$ is the same for both the colour–singlet and colour–octet amplitudes. The colour–octet amplitude, which provides the growth of the potential $`V_8(r)\sim r`$ at $`r\lesssim R_{confinement}`$, can have singularities near $`q^2=0`$. This means that the colour–singlet amplitude has similar singularities. Because of that, the corresponding white state can be named the eyewitness of confinement.
The existence of states which may be called the eyewitnesses of confinement was discussed in Ref. , though in Ref. the scalar quartet $`(f_0(980),a_0(980))`$ was considered.
## 3 Conclusion
We have analysed the structure of the low-mass $`\pi \pi `$-amplitude in the region $`M_{\pi \pi }\lesssim 900`$ MeV using the dispersion relation $`N/D`$-method, which provides us with the possibility to take the left-hand singularities into consideration. The dispersion relation $`N/D`$-amplitude is sewn to that given by the $`K`$-matrix analysis performed at $`M_{\pi \pi }=450`$-1950 MeV . The $`N/D`$-amplitude obtained in this way has a pole on the second sheet of the complex-$`s`$ plane near the $`\pi \pi `$ threshold. This pole corresponds to the existence of a low–energy bound state.
On the basis of the conventional confinement model, we argue that the confinement interaction can produce low-mass pole singularities for both the colour–octet and colour–singlet states. Then the colour–singlet scalar state, which reveals itself in the low-mass $`\pi \pi `$-amplitude, may be named the eyewitness of confinement.
## Acknowledgement
We thank A.V. Anisovich, Ya.I. Azimov, D.V. Bugg, Yu.L. Dokshitzer, H.R. Petry, A.V. Sarantsev and V.V. Vereschagin for fruitful discussions. The article is supported by the RFBR grant N 98-02-17236. |
# Masses of composite fermions carrying two and four flux quanta: Differences and similarities
## Abstract
This study provides a theoretical rationalization for the intriguing experimental observation that the “normalized” masses of composite fermions carrying two and four flux quanta are equal. It also demonstrates that the mass of the latter type of composite fermion has a substantial filling factor dependence in the filling factor range $`4/17>\nu >1/5`$, in agreement with experiment, originating from the relatively strong inter-composite-fermion interactions here.
This work has been motivated by certain perplexing experimental results obtained recently by Yeh et al. and Pan et al. regarding the composite fermion mass. Composite fermions are formed when electrons confined to two dimensions are exposed to a strong magnetic field, since they best screen the repulsive Coulomb interaction by effectively absorbing an even number of magnetic flux quanta each to turn into composite fermions . Numerous properties of composite fermions have been investigated over the past decade; in particular, their Fermi sea, Shubnikov-de Haas oscillations, cyclotron orbits and quantized Landau levels have been observed, and their charge, spin, statistics, mass, g factor and thermopower have been measured . The subject of this article is the masses of composite fermions carrying two and four flux quanta, denoted by <sup>2</sup>CFs and <sup>4</sup>CFs, respectively. The authors of Refs. and define a dimensionless “normalized” mass $`m_{nor}^{*}=m^{*}/(m_e\sqrt{B[T]})`$ where $`m^{*}`$ is the composite fermion (CF) mass, $`m_e`$ is the electron mass in vacuum and $`B[T]`$ is the magnetic field quoted in Tesla. (The electron mass in vacuum is chosen only as a convenient reference; the CF mass is not related to it in any way.) One unexpected outcome of the experiments was that the mass of composite fermions carrying four flux quanta at $`\nu =n/(4n+1)`$ behaves qualitatively differently from the mass of composite fermions carrying two flux quanta at $`\nu =n/(2n+1)`$. Specifically, whereas for the latter the mass is approximately constant sufficiently far from the composite fermion Fermi sea at $`\nu =1/2`$, for the former the mass increases substantially as the filling factor decreases from $`\nu =4/17`$ to $`\nu =1/5`$. This behavior was attributed to the presence of the localized Wigner crystal state at still smaller filling factors. However, one may ask: Is this effect intrinsic or driven by disorder?
If intrinsic, can our present theoretical understanding account for it? What is its physical origin? Another remarkable finding was that the values of $`m_{nor}^{*}`$ for composite fermions carrying two and four flux quanta are quite close in regions where they do not have strong filling factor dependence. This was entirely unforeseen and rather difficult to justify in general terms; in fact, for a strictly two-dimensional system, the $`m_{nor}^{*}`$ at $`\nu =1/3`$ and $`\nu =1/5`$ can be determined from the known gaps to differ by a factor of 2.5. These experimental results thus pose well defined questions for the theory to explain. It is clear that the explanation will not involve any universal symmetry principles, but rather a detailed, quantitative understanding of the physics.
In the limit of very large magnetic fields, it is a good approximation (made throughout this work) to restrict the Hilbert space of states to the lowest Landau level of electrons. The kinetic energy degree of freedom of the electron is then quenched, and the defining problem contains no mass parameter. The physical picture that we wish to present here is as follows. When electrons transform into composite fermions, the interaction energy of electrons is converted into two parts: the kinetic energy of composite fermions, which defines a “bare” mass for the composite fermion, and a residual interaction between composite fermions. The experimentally measured mass is of course the fully renormalized CF mass. When the inter-CF interaction is weak, the measured mass is roughly equal to the bare mass, and therefore approximately constant as a function of the filling factor. However, when the inter-CF interaction is non-negligible, the measured mass can be substantially different from the bare mass and also have significant filling factor dependence. Both the bare mass and its renormalization are formally of the same order, originating from the same underlying electron-electron interaction, and it is not obvious a priori how to separate them theoretically. Nonetheless, we will see that, on the basis of general considerations, it is possible to deduce theoretically a bare mass for composite fermion. We find that (i) the regions where the composite fermion mass has significant filling factor dependence coincide with the regions where the residual interactions are strong; and (ii) the bare masses of composite fermions carrying two and four flux quanta are approximately equal.
The mass of composite fermions carrying $`2p`$ flux quanta (<sup>2p</sup>CFs) at $`\nu =n/(2pn+1)`$ will be defined by equating the activation energy $`\mathrm{\Delta }`$ measured in transport experiments to an effective cyclotron energy of the composite fermion : $`\mathrm{\Delta }[n/(2pn+1)]=\hbar eB^{*}/m^{*}c=\hbar ^2/[(2pn+1)m^{*}l_0^2]`$ where $`l_0=\sqrt{\hbar c/eB}`$ is the magnetic length, and $`B^{*}`$ is the effective magnetic field for composite fermions, given by $`B^{*}=B/(2pn+1)`$ at $`\nu =n/(2pn+1)`$. On the other hand, since the gaps are determined entirely by the Coulomb interaction, the only energy in the lowest Landau level constrained problem, they must be proportional to $`e^2/ϵl_0`$, $`ϵ`$ being the dielectric constant of the background material. Defining $`\mathrm{\Delta }=\delta (e^2/ϵl_0),`$ one obtains $`m^{*}=\hbar ^2ϵ/[(2pn+1)\delta e^2l_0].`$ For GaAs, with $`ϵ=12.8`$ and band mass $`=0.07m_e`$, the dimensionless mass is related to $`\delta `$ as $`m_{nor}^{*}=0.0264/[(2pn+1)\delta ].`$
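As an arithmetic cross-check (ours, not part of the paper), the quoted coefficient 0.0264 follows from standard CGS-Gaussian constants; a minimal Python sketch:

```python
# Cross-check of m*_nor = 0.0264/[(2pn+1)*delta] for GaAs (epsilon = 12.8).
# Since l0 ~ 1/sqrt(B), the combination below is independent of B, so we may
# evaluate it at B = 1 T = 1e4 gauss.
hbar = 1.0546e-27   # erg s
e = 4.8032e-10      # esu
m_e = 9.1094e-28    # g (electron mass in vacuum)
c = 2.9979e10       # cm/s
eps = 12.8          # dielectric constant of GaAs

B = 1.0e4                              # 1 tesla in gauss
l0 = (hbar * c / (e * B)) ** 0.5       # magnetic length, cm (~2.57e-6 cm)
coef = hbar**2 * eps / (e**2 * l0 * m_e)
print(round(coef, 4))                  # -> 0.0264

def m_nor(p, n, delta):
    """Normalized CF mass m*/(m_e sqrt(B[T])) from the dimensionless gap delta."""
    return coef / ((2 * p * n + 1) * delta)
```

The same function then converts any calculated or measured dimensionless gap into a normalized mass.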
Experimentally, rather than using the gaps directly, the mass of composite fermion is often determined from an analysis of the temperature dependence of the Shubnikov-de Haas (SdH) oscillations of composite fermions, or from the slope of the activation gaps plotted as a function of the effective magnetic field, with the expectation that the mass thus obtained is less affected by the disorder, and is consequently closer to the intrinsic mass than the mass obtained directly from the experimental activation gaps. Incidentally, a different mass of composite fermion is the “polarization mass,” determined from the values of the Zeeman energy at which transitions between ground states of different polarizations take place; its normalized value for <sup>2</sup>CFs was estimated theoretically to be $`0.6`$, in good agreement with subsequent experiments .
To investigate the findings of Refs. and , we have computed the gaps for fractional quantum Hall states at $`\nu =n/(4n+1)`$ in a Monte Carlo scheme using the Jain wave functions, and employing the lowest-Landau-level projection technique described in Ref. . In each case, the thermodynamic limit for the gap is obtained by a consideration of systems with up to 100 particles. Up to 10 million Monte Carlo steps are performed for each energy (20 million for relatively large systems, containing 70-100 particles); the error in the thermodynamic gap is a combination of the uncertainty in the gaps of finite systems and the uncertainty in their extrapolation to the thermodynamic limit. The wave function in the transverse direction, needed for the determination of the effective two-dimensional inter-electron interaction, is evaluated as a function of the two-dimensional density ($`\rho `$) in a self-consistent local density approximation (LDA), for a heterojunction sample. $`V_{eff}(r)`$ differs from the Coulomb interaction at short distances, causing a reduction in the value of the gap. Disorder is known to further reduce the gap, but in the absence of a reliable theoretical approach to deal with it, it has been set to zero in the following. Landau level (LL) mixing is also neglected, but it is known to make only a slight correction to the value of the gap after the non-zero thickness is incorporated . There is no adjustable parameter in the calculations. For further details, the reader is referred to Ref. and the references therein.
The gap to charged excitations was calculated for $`\nu =1/5`$, $`2/9`$, $`3/13`$, and $`4/17`$ as a function of the density. Fig. (1) shows the mass deduced from the gap, in units of $`m_e\sqrt{B[T]}`$, and Fig. (2) shows the $`n`$ dependence of the mass for certain typical densities. The corresponding results for <sup>2</sup>CFs are also shown in these figures for comparison . In the following, we discuss the conclusions that emerge from these results.
The reader may note that for a given filling factor, $`m_{nor}^{*}`$ is only very weakly dependent on density, as also found experimentally . For example, while the density changes by a factor of 20 from $`0.5\times 10^{11}`$ cm<sup>-2</sup> to $`1.0\times 10^{12}`$ cm<sup>-2</sup>, $`m_{nor}^{*}`$ changes only by $`\sim `$ 30% or less; e.g., for <sup>2</sup>CFs, it goes from approximately 0.12 to 0.13-0.16 for the filling factors shown. At first sight, this may seem precisely as expected, since for strictly zero thickness, for which the interparticle interaction is proportional to $`e^2/l_0\propto \sqrt{B}`$, the factor $`\sqrt{B}`$ accounts for the entire density dependence of the mass (remember, the density and $`B`$ are related for a given filling factor). However, in reality, there is another length scale controlled by the density, namely the width of the wave function in the direction perpendicular to the plane, which also affects the interaction between electrons and gives a density dependence to $`m_{nor}^{*}`$. To estimate the size of the correction, consider the widely used model potential $`V_{ZDS}(r)=(e^2/ϵ)(\lambda ^2+r^2)^{-1/2}`$. The relevant quantity is the dimensionless ‘thickness’ parameter $`\lambda /l_0`$, which changes by a factor of $`\sim `$ 4.5 over the density range stated above, assuming a constant $`\lambda `$; for a change of $`\lambda /l_0`$ from 1.0 to 4.5, the results in Ref. imply a change of $`m_{nor}^{*}`$ by a factor of 4 to 6 at filling factors $`\nu =1/3`$, $`2/5`$ and $`3/7`$. This underlines the importance of the use of a realistic interaction, which leads to a much smaller variation of the normalized mass as a function of the density. Some insight into the issue of the density dependence can be gained from the Fang-Howard variational form for the transverse wave function , $`\xi _{FH}\propto z\mathrm{exp}[-bz/2]`$, where $`z`$ is the distance in the direction perpendicular to the plane and $`b\propto \rho ^{1/3}`$, neglecting the depletion charge density .
The maximum of the wave function is reached at $`w=2/b`$, which is a measure of the effective width in the transverse direction. Noting that $`l_0\propto \rho ^{-1/2}`$ for a fixed filling factor, we obtain for the dimensionless width $`w/l_0\propto \rho ^{1/6}`$; the smallness of the exponent gives an indication of why the density dependence of the mass is so weak. Coming back to $`V_{ZDS}(r)`$, the above discussion implies that the relationship between $`\lambda `$ and the “thickness” (or the density) is not straightforward for the heterojunction geometry.
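As a quick arithmetic illustration of this statement (our arithmetic, not the paper's):

```python
# With w/l0 ~ rho^(1/6), the 20-fold density change quoted above rescales
# the dimensionless transverse width by only 20^(1/6):
factor = 20.0 ** (1.0 / 6.0)
print(round(factor, 2))   # -> 1.65, i.e. a ~65% change for a 20x density change
```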
A clear qualitative difference between the behaviors of the masses of <sup>2</sup>CFs and <sup>4</sup>CFs as a function of $`n`$, the number of filled CF-LLs, is manifest in Fig. (2), for the pure two-dimensional case as well as after including the finite thickness correction. The mass is seen to increase significantly in going from $`\nu =4/17`$ to $`\nu =2/9`$, in agreement with the experiment . What is the physical origin of this behavior? By definition, for strictly non-interacting composite fermions the $`m_{nor}^{*}`$ is a constant equal to the bare $`m_{nor}^{*}`$, implying that any variation in the mass originates from a renormalization of the bare mass by the residual interaction between composite fermions. One therefore expects to see a filling factor dependence of the CF mass only in regions where the inter-CF interaction is strong compared to the “bare” gap. One such region is in the vicinity of the composite fermion Fermi sea, where the bare gap becomes rather small. The interaction between composite fermions also becomes increasingly significant at small filling factors. A recent work has found that in the filling factor range $`1/4>\nu >1/5`$ the interaction between composite fermions is strong, in fact comparable to the effective CF Fermi energy, causing a spontaneous magnetization for <sup>4</sup>CFs even when the Zeeman energy is negligible. We believe that the same interaction between composite fermions causes the anomalous filling factor dependence of $`m_{nor}^{*}`$ in this region. This is indeed closely related to the physics attributed to this effect in Ref. , since the inter-CF interactions are also responsible for an instability into the Wigner crystal state at still lower filling factors . Further, it is intuitively quite plausible that the mass is enhanced, rather than diminished, by the inter-CF interaction, because the interaction will likely broaden the CF-LLs, leading to a gap reduction and hence a mass increase.
Fig. (2) also gives a clue to how one may understand theoretically the experimentally discovered equality of the masses of <sup>2</sup>CFs and <sup>4</sup>CFs. In the two-dimensional limit, the mass ratio $`m_{nor}^{*}[\nu =n/(4n+1)]/m_{nor}^{*}[\nu =n/(2n+1)]`$ is $`\sim `$ 2.5, 2.0, 1.7, and 1.4 for $`n=1`$, 2, 3, and 4, respectively. One of the original motivations behind our calculations was to investigate if the finite thickness correction might bring the ratios closer to unity, but they turn out to be rather insensitive to the density. These ratios, however, are not really inconsistent with experiment, since the mass equality pertains to the masses outside the interaction dominated regions, i.e., to the bare masses, which raises the question of how the bare masses can be deduced from the fully renormalized masses obtained in our calculations. For <sup>2</sup>CFs, it is natural to fix the bare mass by extrapolating the mass to $`n\to \infty `$ in Fig. (2) while leaving out the masses at $`\nu =5/11`$ and $`\nu =4/9`$, where the interaction effects become important. For <sup>4</sup>CFs, it would be appropriate to determine masses from the states $`\nu =n/(4n-1)`$, following Refs. and , where the filling factor dependence is small. However, these states are accessed from $`n`$ filled Landau levels by reverse flux attachment , the technical complications arising from which preclude us from carrying out calculations directly at these filling factors. We instead propose to determine the bare mass of <sup>4</sup>CFs by performing an extrapolation of the masses at $`\nu =2/9`$, $`\nu =3/13`$ and $`\nu =4/17`$ to $`\nu =1/4`$ in Fig. (2). Quite remarkably, this procedure produces very similar values of $`m_{nor}^{*}`$ for <sup>4</sup>CFs and <sup>2</sup>CFs, e.g., $`m_{nor}^{*}\approx 0.12`$-0.13 at $`\rho =0.5\times 10^{11}`$ cm<sup>-2</sup>.
Furthermore, the bare $`m_{nor}^{*}`$ of <sup>4</sup>CFs is approximately one half of the $`m_{nor}^{*}`$ at $`\nu =1/5`$ in our calculations. In experiments, the interpretation of the mass at $`\nu =1/5`$ is complicated by the proximity to strong localization; the mass at $`\nu =4/5`$, which is $`\nu =1/5`$ of holes, may provide a more meaningful comparison to theory. The mass at $`\nu =4/5`$ was found to be $`m_{nor}^{*}=0.53`$-0.58, also approximately twice the value of the experimental bare mass $`m_{nor}^{*}=0.25`$-0.27. This general consistency with experiment provides an a posteriori justification for our method of obtaining the bare mass of <sup>4</sup>CFs.
As mentioned earlier, there is no fundamental principle that forces the equality of the bare masses; it just turns out to be so for the real situation. It is in fact straightforward to construct artificial potentials for which the bare masses are not equal. For example, consider a model in which only two pseudopotentials, $`V_1`$ and $`V_3`$, are non-zero, with $`V_1>>V_3.`$ For <sup>2</sup>CFs the gaps are determined by the full interaction, i.e., effectively by $`V_1`$. For <sup>4</sup>CFs, a good approximation to the various eigenstates (exact in the limit $`V_3/V_1\to 0`$) is obtained by multiplying the corresponding <sup>2</sup>CF eigenstates by $`\prod _{j<k}(z_j-z_k)^2`$, where $`z_j=x_j+iy_j`$ is the coordinate of the $`j`$th electron. This eliminates states containing particles with relative angular momentum equal to unity, making the eigenenergies independent of $`V_1`$. Thus, in this model the mass of <sup>2</sup>CFs is controlled by $`V_1`$ whereas that of <sup>4</sup>CFs by $`V_3`$, and consequently the two can be tuned independently of one another.
While we have obtained the qualitative behavior seen in experiment, the explanation is incomplete until the value of the experimental mass itself can be accounted for theoretically. At the moment, the theoretical mass differs from the experimental one by roughly a factor of two. A proper understanding of the discrepancy would ultimately require a quantitative theory of Shubnikov-de Haas (SdH) oscillations of the composite fermions, and of how the disorder affects the mass. The deviation between the theoretical and experimental masses is at least consistent with what one might expect from disorder. First, the experimental bare mass for <sup>2</sup>CFs and <sup>4</sup>CFs ($`m_{nor}^{*}\approx 0.25`$-0.27) is bigger than the theoretical one ($`m_{nor}^{*}\approx 0.12`$-0.13). Second, the values of the experimental and theoretical bare masses differ by roughly the same amount (a factor of two) for both <sup>2</sup>CFs and <sup>4</sup>CFs.
In conclusion, the theory is able to reproduce certain qualitative features of the dependence of the composite fermion mass on various parameters. The following overall picture has emerged: The bare masses of <sup>2</sup>CFs and <sup>4</sup>CFs are quite close, but are significantly renormalized in regions where the inter-CF interactions are non-negligible, specifically, close to the Fermi sea as well as for $`\nu <1/4`$. We thank V. Scarola, R. Shankar, and H. Stormer for useful discussions and comments, and gratefully acknowledge support by the National Science Foundation under grant no. DMR-9615005, and by the National Center for Supercomputing Applications at the University of Illinois (Origin 2000), as well as the Numerically Intensive Computing Group at the Penn State University-CAC. |
# Bound states of interacting helium atoms
Stefan V. Mashkevich<sup>a</sup> and Stanislav I. Vilchynskyy<sup>b</sup>
<sup>a</sup>Institute for Theoretical Physics, 252143 Kiev, Ukraine<sup>*</sup>
<sup>*</sup>Email: mash@mashke.org; sivil@ap3.bitp.kiev.ua
<sup>b</sup>Chair of Quantum Field Theory, Physics Department,
Taras Shevchenko Kiev University, 252127 Kiev, Ukraine<sup>*</sup>
We study the possibility of existence of bound states for two interacting <sup>4</sup>He atoms. It is shown that for some potentials, there exist not only discrete levels but also bands akin to those in the Kronig–Penney model.
1. INTRODUCTION
It is known that most of the paradoxes of modern microscopic theory of superfluidity have to do with the commonly accepted assumption of the single-particle Bose condensate (SBC) prevailing in the quantum structure of the superfluid component. In the <sup>4</sup>He Bose liquid, because of strong interaction, the SBC is strongly suppressed (it comprises no more than 10% of all <sup>4</sup>He atoms ), consequently it cannot be the basis of the superfluid component. If in some domain of momentum space there is strong enough attraction between atoms, bound pairs of bosons can form, whence a pair coherent condensate appears, and it is this condensate that will be the basis of the superfluid component.
The hypothesis of coupling of helium atoms below the $`\lambda `$ point may resolve some of those paradoxes .
The main purpose of this paper is to explore the possibility of existence of bound states for two interacting <sup>4</sup>He atoms for different theoretical and phenomenological interatomic potentials. We will show that for some of the potentials, discrete levels do exist. Besides, we will analyze certain “quasi-crystallic” models, motivated by the conjecture that at temperatures below the $`\lambda `$ point, helium possesses a structure close to that of quantum crystals. Specifically, we will consider a model of atoms “smeared” over a sphere and a Kronig–Penney-type model. These models also allow for pairing of helium atoms; moreover, the Kronig–Penney-type model leads to a band of allowed energies.
2. TWO INTERACTING ATOMS
During several decades, different phenomenological potentials of interatomic interaction in gaseous and liquid <sup>4</sup>He have been constructed, based on empirical data on thermodynamic, kinetic and quantum mechanical properties of helium \[4–8\].
Consider the Schrödinger equation for two helium atoms ($`\mathrm{\Psi }`$ being the radial part of the wave function, $`L=0`$)
$$-\frac{\hbar ^2}{2m}\frac{1}{r^2}\frac{\partial }{\partial r}\left(r^2\frac{\partial }{\partial r}\mathrm{\Psi }\right)+\mathrm{\Phi }(r)\mathrm{\Psi }=E\mathrm{\Psi },$$
(1)
$`\mathrm{\Phi }(r)`$ being the potential and $`m`$ the reduced mass ($`2m=m_{\mathrm{He}}=6.6466\times 10^{-24}`$ g). For a given $`\mathrm{\Phi }(r)`$, we analyze Eq. (1) for discrete levels by solving it with the Runge–Kutta method from the left and from the right and trying to choose the energy $`E`$ such that the logarithmic derivatives of the two solutions match at some midpoint. If this fails, it is concluded that there is no level.
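The two-sided shooting procedure just described can be sketched in a few lines of Python. This is an illustrative stand-in, not the authors' code: instead of a helium pair potential we use the 1D harmonic oscillator, $`u^{\prime \prime }=(x^2-2E)u`$ in units $`\hbar =m=\omega =1`$, whose exact ground level $`E=1/2`$ makes the method easy to verify; the function names are ours.

```python
# Integrate from both ends with fixed-step RK4, then bisect in energy on the
# mismatch of logarithmic derivatives at the midpoint x = 0.

def log_derivative(E, x0, x1, n=500):
    """Integrate u'' = (x^2 - 2E) u from x0 to x1, starting from the
    decaying-tail approximation u ~ exp(-x^2/2); return u'/u at x1."""
    def rhs(x, u, v):                 # y = (u, u'), y' = (u', (x^2 - 2E) u)
        return v, (x * x - 2.0 * E) * u
    h = (x1 - x0) / n
    x, u, v = x0, 1.0, -x0            # u' = -x u for the Gaussian tail
    for _ in range(n):
        k1u, k1v = rhs(x, u, v)
        k2u, k2v = rhs(x + h / 2, u + h / 2 * k1u, v + h / 2 * k1v)
        k3u, k3v = rhs(x + h / 2, u + h / 2 * k2u, v + h / 2 * k2v)
        k4u, k4v = rhs(x + h, u + h * k3u, v + h * k3v)
        u += h / 6 * (k1u + 2 * k2u + 2 * k3u + k4u)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        x += h
    return v / u

def mismatch(E):
    """Left minus right logarithmic derivative at x = 0; zero at a level."""
    return log_derivative(E, -5.0, 0.0) - log_derivative(E, 5.0, 0.0)

lo, hi = 0.3, 0.7                     # energy bracket around the level
for _ in range(50):                   # bisection on the mismatch
    mid = 0.5 * (lo + hi)
    if mismatch(lo) * mismatch(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
print(round(0.5 * (lo + hi), 4))      # -> 0.5
```

For the helium problem one would replace the right-hand side by $`(2m/\hbar ^2)[\mathrm{\Phi }(r)-E]u`$ and the tail conditions by the appropriate asymptotics.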
Table 1
Energy levels for two helium atoms assuming various interaction potentials
| Potential | $`\mathrm{\Phi }(r)`$, $`10^{-12}`$ erg (r in Å) | Level, K |
| --- | --- | --- |
| Rosen–Margenau | $`925\mathrm{exp}(-4.4r)-560\mathrm{exp}(-5.33r)-\frac{1.39}{r^6}-\frac{3}{r^8}`$ | None |
| Slater–Kirkwood | $`770\mathrm{exp}(-4.6r)-\frac{1.49}{r^6}`$ | None |
| Yntema–Schneider | $`1200\mathrm{exp}(-4.72r)-\frac{1.24}{r^6}-\frac{1.89}{r^8}`$ | None |
| Lennard–Jones | $`4ϵ\left[\left(\frac{\sigma }{r}\right)^{12}-\left(\frac{\sigma }{r}\right)^6\right]`$ | |
| | Case 1: $`\sigma =2.556`$ Å, $`ϵ=10.22\text{K}`$ | None |
| | Case 2: $`\sigma =2.642`$ Å, $`ϵ=10.80\text{K}`$ | 0.0201 |
| Buckingham | $`\{\begin{array}{cc}770\mathrm{exp}(-4.6r)-\frac{1.49}{r^6},\hfill & r\le 2.61;\hfill \\ 977\mathrm{exp}(-4.6r)-\frac{1.50}{r^6}-\frac{2.51}{r^8},\hfill & r\ge 2.61\hfill \end{array}`$ | 0.00632 |
| Massey–Buckingham | $`1000\mathrm{exp}(-4.6r)-\frac{1.91}{r^6}`$ | 0.0622 |
| Buckingham–Hamilton | $`977\mathrm{exp}(-4.6r)-\frac{1.5}{r^6}-\frac{2.51}{r^8}`$ | 0.00229 |
3. AN ATOM AT THE CENTER OF A LATTICE
At temperatures below the $`\lambda `$ point, the structure and dynamical properties of helium are close to those of quantum crystals (e.g., Putterman pointed out a possibility of transverse oscillations in superfluid helium). Motivated by this, let us study the behavior of a helium atom in the center of a lattice. For qualitative purposes, we replace the actual cubic lattice, with the lattice constant $`d=4.5`$ Å, by $`c=8`$ atoms “smeared” over a sphere of radius $`a=d\sqrt{3}/2`$. The potential corresponding to such a model is
$$U(r)=\frac{c}{2}\int _0^\pi \mathrm{\Phi }(\sqrt{r^2+a^2-2ra\mathrm{cos}\theta })\mathrm{sin}\theta d\theta ,$$
(2)
where $`\mathrm{\Phi }`$ is taken to be the Lennard–Jones potential (case 2 above). We then have to find a level in the potential $`U(r)`$, depicted below.
Obviously, since $`U(r)`$ is infinite at $`r=a`$, there will be infinitely many levels. We find the ground level to be $`E=-11.45`$ K, and the (certainly unphysical) first excited level to be $`E=+35.64`$ K.
Fig. 1. The ground level for the atom in the center of a lattice.
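A minimal numerical sketch (our illustration, not the authors' code) of evaluating the angular average in Eq. (2) for the case-2 Lennard–Jones parameters. At $`r=0`$ every smeared atom sits at distance $`a`$, so $`U(0)=c\mathrm{\Phi }(a)`$ exactly, which serves as a built-in check:

```python
import math

sigma, eps = 2.642, 10.80      # Lennard-Jones case 2: sigma in A, eps in K
d = 4.5                        # lattice constant, A
a = d * math.sqrt(3) / 2       # radius of the "smeared" sphere
c = 8                          # number of smeared atoms

def phi(r):
    """Lennard-Jones pair potential, in K."""
    x = (sigma / r) ** 6
    return 4.0 * eps * (x * x - x)

def U(r, n=2000):
    """Angular average of Eq. (2) by Simpson's rule (n must be even)."""
    h = math.pi / n
    total = 0.0
    for k in range(n + 1):
        th = k * h
        w = 1 if k in (0, n) else (4 if k % 2 == 1 else 2)
        s = math.sqrt(r * r + a * a - 2.0 * r * a * math.cos(th))
        total += w * phi(s) * math.sin(th)
    return (c / 2.0) * (h / 3.0) * total

print(round(U(0.0), 2))   # ~ -30.3 K for these parameters
```

The divergence of $`U(r)`$ at $`r=a`$ noted in the text shows up here as the integrand blowing up when $`s\to 0`$ at $`\theta =0`$.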
4. TWO NEIGHBORING CELLS AND A BAND
Now, consider a “quasi-crystallic” model. A more realistic potential, without an infinite barrier, is
$$\mathrm{\Phi }_2(r)=\{\begin{array}{cc}\mathrm{\Phi }(r),\hfill & 0<r\le a-d,\hfill \\ \mathrm{\Phi }(a-d),\hfill & a-d<r\le a+d,\hfill \\ \mathrm{\Phi }(2a-r),\hfill & a+d<r\le 2a.\hfill \end{array}$$
(3)
The parameter $`d`$ is taken to be 2.2 Å, which corresponds to twice the quantum chemical radius of the helium atom.
We get two levels: an even one with $`E=-13.862`$ K and an odd one with $`E=-13.8486`$ K.
Fig. 2. The potential and the levels (the splitting is not noticeable) in the model of two neighboring cells.
Finally, consider a periodic potential:
$`\mathrm{\Phi }_P(r)`$ $`=`$ $`\mathrm{\Phi }_2(r),0<r\le 2a;\mathrm{\Phi }_P(r+2a)=\mathrm{\Phi }_P(r).`$ (4)
We find the allowed band by a well-known method : The wave function is represented as
$$\psi (r)=\mathrm{e}^{\mathrm{i}Kr}u_K(r),$$
(5)
where $`u_K(r)`$ is a periodic function. Matching the logarithmic derivative leads to
$$\mathrm{cos}2Ka=\frac{[u_1(0)u_2^{\prime }(a)+u_1(a)u_2^{\prime }(0)]-[u_2(0)u_1^{\prime }(a)+u_2(a)u_1^{\prime }(0)]}{2[u_1(0)u_2^{\prime }(0)-u_2(0)u_1^{\prime }(0)]},$$
(6)
where $`u_1(r)`$ and $`u_2(r)`$ are two arbitrary linearly independent solutions of the Schrödinger equation with energy $`E`$. The values of $`E`$ for which this equation can be solved for $`K`$ (that is, for which the right-hand side is not greater than 1 in absolute value) form an allowed band. Solving the Schrödinger equation numerically and plotting the $`E`$ dependence of the right-hand side, one finds the band and its boundaries at different values of $`d`$. The maximal width of the band, at $`d\approx 2.34`$, is about 1 K. Depicted below are the potential and the band at $`d=2.33`$.
Fig. 3. The allowed band in the periodic potential model.
When $`d`$ is further increased, the upper edge of the band will “overflow” the potential and merge into the continuous spectrum. In fact, the band edges coincide with the even and odd levels in the two-well problem considered above. The reason is that the even and odd (with respect to $`a`$) wave functions of that problem (the boundary conditions for which are $`\psi (0)=\psi (2a)=0`$) can be extended into the antiperiodic ($`\mathrm{cos}2Ka=-1`$) and periodic ($`\mathrm{cos}2Ka=1`$) wave functions, respectively, of the periodic potential. According to the above, these functions correspond to the band edges.
For a rectangular potential (Kronig–Penney model ) of wells of width $`a`$ divided by barriers of height $`V_0`$ and width $`b`$ with the lattice constant being fixed: $`a+b=4.5`$ Å, the band is determined by the equation
$$\frac{\beta ^2-\alpha ^2}{2\alpha \beta }\mathrm{sinh}\beta b\mathrm{sin}\alpha a+\mathrm{cosh}\beta b\mathrm{cos}\alpha a=\mathrm{cos}k(a+b),$$
(7)
where $`\alpha =\sqrt{2mE}/\hbar ,\beta =\sqrt{2m(V_0-E)}/\hbar `$. The values of $`E`$ for which the left-hand side lies between $`-1`$ and 1 form the allowed band.
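The band search can be sketched as follows. This is our illustration, not the authors' code: it works in dimensionless units $`\hbar ^2/2m=1`$ (so $`\alpha =\sqrt{E}`$, $`\beta =\sqrt{V_0-E}`$) with illustrative parameter values, and therefore does not reproduce the Table 2 numbers; the self-check uses the exactly solvable $`V_0=0`$ limit, where Eq. (7) reduces to $`\mathrm{cos}\alpha (a+b)`$.

```python
# Scan E and collect the first interval where |f(E)| <= 1, f being the
# left-hand side of Eq. (7).  Complex square roots let the same expression
# cover E > V0, where beta becomes imaginary and sinh/cosh turn into sin/cos.
import cmath, math

def f(E, a, b, V0):
    """Left-hand side of Eq. (7); E lies in an allowed band iff |f| <= 1."""
    alpha = cmath.sqrt(complex(E))
    beta = cmath.sqrt(complex(V0 - E))
    return ((beta ** 2 - alpha ** 2) / (2 * alpha * beta)
            * cmath.sinh(beta * b) * cmath.sin(alpha * a)
            + cmath.cosh(beta * b) * cmath.cos(alpha * a)).real

def lowest_band(a, b, V0, emax, n=20000):
    """Return (E_low, E_high) of the first allowed band found on the grid."""
    step = emax / n
    lo = hi = None
    for k in range(1, n + 1):
        E = k * step
        if abs(f(E, a, b, V0)) <= 1.0:
            if lo is None:
                lo = E
            hi = E
        elif lo is not None:
            break                  # the first band has ended
    return lo, hi

# Empty-lattice self-check: for V0 = 0 the condition is always satisfied.
assert abs(f(2.0, 1.5, 0.5, 0.0) - math.cos(math.sqrt(2.0) * 2.0)) < 1e-9

band = lowest_band(1.5, 0.5, 8.0, 8.0)   # illustrative a, b, V0, scan range
print(band)
```

Refining the edges (where $`f=\pm 1`$) by bisection instead of a plain grid would give the band width to arbitrary accuracy.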
Table 2
Results ($`E_{\mathrm{low}}`$, $`E_{\mathrm{high}}`$ and $`\mathrm{\Delta }E`$ — lower edge, upper edge, and band width, respectively, all energies are in K):
| $`a`$ | $`V_0`$ | $`E_{\mathrm{low}}`$ | $`E_{\mathrm{high}}`$ | $`\mathrm{\Delta }E`$ |
| --- | --- | --- | --- | --- |
| 2.5 | 10 | 2.15 | 2.35 | 0.20 |
| 3.5 | 10 | 1.17 | 1.62 | 0.45 |
| 3.5 | 5 | 0.78 | 1.56 | 0.78 |
| 3.5 | 2 | 0.38 | 1.52 | 1.14 |
| 4.0 | 2 | 0.20 | 1.92 | 1.72 |
5. CONCLUSION
Thus, we have shown that in a superfluid Bose liquid, formation of bound pairs of bosons is possible — for some potentials, there are discrete levels. Moreover, it has been shown that bound states can also exist in more realistic models — the model of atoms “smeared” over a sphere and the periodic Kronig–Penney-type model. Note in particular that the Kronig–Penney-type model allows for the existence of bands. The presence of such bands may explain the observed critical velocities and the energy gap in the excitation spectrum curve (“S curve”) of superfluid helium, which will be discussed in subsequent papers.
We are grateful to P.I. Fomin for suggesting the problem and constant encouragement and to A. Kostyuk for an interesting discussion and advice. Calculations have been made using Mathematica.
REFERENCES
1. E.A. Pashytskiy, Fiz. Nizk. Temp., 25 (1999) 115.
2. H.W. Jackson, Phys. Rev. A10 (1974) 278.
3. S.I. Vilchinskii, P.I. Fomin, Sov. J. Low Temp. Phys., 21 (1995) 7.
4. E.A. Mason, Journ. Chem. Phys., 22 (1954) 1678.
5. P. Rosen, Journ. Chem. Phys., 18 (1950) 1182; H. Margenau, Phys. Rev., 56 (1939) 1000; C.H. Page, Phys. Rev., 53 (1938) 426.
6. J.C. Slater, J.G. Kirkwood, Phys. Rev., 37 (1931) 682.
7. J.O. Hirschfelder, Ch.F. Curtiss, R.B. Bird, Molecular Theory of Gases and Liquids, New York, 1954.
8. I.G. Kaplan, Introduction to the Theory of Intermolecular Interaction, Nauka, Moscow, 1982.
9. B. Pullman (ed.), Intermolecular Interactions: From Diatomics to Biopolymers, Mir, Moscow, 1981.
10. R.A. Buckingham, J. Hamilton, H.S.W. Massey, Proc. Roy. Soc., A179 (1941) 103.
11. S. Putterman, Superfluid Hydrodynamics, American Elsevier Publishing Company, Inc, New York, 1974.
12. S. Flügge, Problems in Quantum Mechanics, Mir, Moscow, 1974, vol. 1, p. 75.
13. R. Kronig, W. Penney, Proc. Roy. Soc., 130 (1931) 499. |
# Stripes disorder and correlation lengths in doped antiferromagnets
## I Introduction
Recent experiments shed new light on, and conjure up new questions about, the glassy state of doped antiferromagnets. There is growing experimental evidence, and there are theoretical arguments, that electronic charge and spin “topological stripes” density wave correlations are a generic feature of lightly doped, strongly correlated antiferromagnetic materials, which need not be superconducting (as evidenced in both Copper-Oxide and non-superconducting Nickel-Oxide systems).
Moreover, there is preliminary evidence for stripes throughout the glassy state region in $`La_{2-x}Sr_xCuO_4`$, which extends into part of the superconducting phase. Thus, our analysis of stripe disorder is relevant for understanding the general phase diagram of High-Tc materials. (Though the role that stripes play in the particular phenomenon of superconductivity is still controversial.)
In all experimental systems, where the full charge & spin stripes structure was observed, it was found that both spin and charge order are glassy . An intrinsic finite stripes correlation length signature of disorder is consistently observed. In this paper, based on our analysis of general types of disorder in stripes, we are able to determine unambiguously the nature of the glassy state in these systems. Our analysis is rather simple, based on elementary general principles, and independent of model details and parameters.
The canonical two-dimensional stripes structure comprises hole-rich lines (charge stripes) which form anti-phase domain walls between antiferromagnetically (AFM) correlated spin regions (see fig.2a). As a result, the spin periodicity $`a_s`$ is twice the charge periodicity $`a_c`$. The charge and spin correlations (in the periodic direction) are hence inextricably connected. It is thus quite surprising that, experimentally, the correlation lengths of charge and spin differ substantially; e.g., in the Copper-Oxide $`(LaNd)_{2-x}Sr_xCuO_4`$ (with $`x>0.1`$) the ratio of the stripes spin correlation length $`\xi _s`$ to the charge correlation length $`\xi _c`$ is $`\xi _s/\xi _c\simeq 4/1`$ .
We highlight a distinction between two ways by which a disorder potential can lead to a finite correlation length. Conventional models attribute glassiness to the existence of topological defects. In contrast, Giamarchi & LeDoussal recently argued for the existence of a thermodynamically distinct zero-temperature glassy state - labeled “Bragg Glass” - due purely to elastic deformation. In two dimensions it is reduced to a “quasi Bragg glass” state, where disorder by elastic deformations results in a correlation length $`\xi `$ smaller than the typical distance $`R_D`$ between unpaired defects, $`\xi \ll R_D`$. Since the typical distinctions between the two glassy states are not easily probed by experiments, the Bragg glass state has been a subject of continuing controversy.
Kivelson & Emery speculated that a stripes Bragg glass state may occur as part of a general stripes phase diagram. In stripes, due to the unique feature of having two distinct and coupled periodic order parameters (spin and charge), we argue that the ratio of spin and charge correlation lengths, $`\xi _s/\xi _c`$, provides a novel, sharp criterion for determining the dominant form of disorder in the system. We conclude that all current experiments on stripes are compatible only with the conjecture that stripe correlations are disordered primarily by non-topological elastic deformations (see fig.2c) .
The line of argument is the following: We find that disorder dominated by topological defects can only make the charge stripes correlation length $`\xi _c`$ larger than the spin stripes correlation length $`\xi _s`$, i.e., $`\xi _s/\xi _c<1`$. Consequently, we establish a robust qualitative distinction: topological defects always lead to $`\xi _s/\xi _c<1`$, while predominant elastic deformations always lead to $`\xi _s/\xi _c>1`$ (and expectedly $`\xi _s/\xi _c=4`$). On this basis, we argue that the observation of $`\xi _s/\xi _c\simeq 4`$ in $`(LaNd)_{2-x}Sr_xCuO_4`$ unequivocally indicates a quasi Bragg glass type state. It implies that topological defects are much less relevant than commonly assumed for the glassy properties of Copper-Oxide systems. Expected spectral properties are discussed.
The charge stripes are modeled as classical elastic charge density wave structures. Similarly, the AFM spin regions are described by a classical two-dimensional Heisenberg model. Disorder is then incorporated as acting directly on these classical objects. The observed correlation length should not be confused with the size of the stripes domains, where stripes orientations in distinct domains differ by 90°. All of our analysis concerns the disorder within a single stripes domain (i.e., all stripes have the same general orientation).
## II Disorder by topological defects
The topology of the stripes charge lines is similar to that of smectic liquid crystals . Since in this paper we are not interested in significant changes of the stripes orientation, we ignore disclination defects and focus on dislocations (fig.1). Though each topological defect in the charge-line order also induces a topological defect in the spin order, the respective disordering effects of the defects are different.
The charge lines are antiphase domain walls of the spin order, i.e., the local AFM order parameter undergoes a local $`\pi `$ phase slip across each charge stripe line. Therefore, if a round trip passes through an odd number of charge lines, then it encloses a $`\pi `$ vortex of the spin order. Such is the case for going around a line-ending or a line-splitting dislocation defect, which are highlighted by colored circles in fig.1 (the defects are drawn schematically with rather sharp line angles, but a more realistic softening of the curves would not soften our conclusions). One should not confuse dislocations in the charge stripe lines with any kind of dislocations in the 2D Heisenberg spin lattice points (i.e., the underlying Cu-O plane). The magnetic $`\pi `$-vortex defects are associated purely with rotations in spin space. The resulting spin texture can be either an XY-model vortex (if spins remain confined to the plane), or otherwise a Skyrmion in an $`O\left(3\right)`$ model.
A connection with the more familiar spin glass model of frustrated/non-frustrated plaquettes in a 2D Heisenberg model can be made by mapping onto a square lattice where every bond cut by a charge stripe is a ferromagnetic bond and every other bond is an AFM bond. Frustrated/non-frustrated plaquettes are those with an odd/even number of AFM bonds, respectively.
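The bond-counting rule above can be checked with a small sketch (illustrative code, not from the paper): vertical stripes cut horizontal bonds of a square lattice, cut bonds are taken ferromagnetic (FM), and a plaquette is frustrated iff it holds an odd number of AFM bonds. A defect-free stripe frustrates nothing, while a line ending frustrates exactly one plaquette: the magnetic $`\pi `$ vortex.

```python
def frustrated_plaquettes(L, cut_bonds):
    """cut_bonds: set of (gap_x, y) horizontal bonds crossed by a vertical stripe.
    A plaquette is frustrated iff it contains an odd number of AFM bonds."""
    frustrated = []
    for x in range(L - 1):
        for y in range(L - 1):
            # the two vertical bonds are never cut by a vertical stripe -> AFM
            n_afm = 2
            # the two horizontal bonds, at rows y and y+1, across gap x
            n_afm += ((x, y) not in cut_bonds) + ((x, y + 1) not in cut_bonds)
            if n_afm % 2 == 1:
                frustrated.append((x, y))
    return frustrated

L = 8
# a defect-free vertical stripe between columns 3 and 4: cuts every row
full_stripe = {(3, y) for y in range(L)}
# the same stripe terminating after row 4 (a line-ending dislocation)
ended_stripe = {(3, y) for y in range(5)}

print(len(frustrated_plaquettes(L, full_stripe)))   # 0: no frustration
print(len(frustrated_plaquettes(L, ended_stripe)))  # 1: pi vortex at the line end
```

The lattice size and stripe position are arbitrary; any terminated stripe produces exactly one frustrated plaquette at its endpoint.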
To recapitulate, every topological defect in the charge smectic order is simultaneously associated with a topological defect in the spin order. Therefore, it seems at first that disorder by topological defects would inevitably lead to equal charge and spin correlation lengths (i.e., $`\xi _s/\xi _c\simeq 1`$).
In fig.1, we show a possible arrangement of two dislocation defects (the full 2D plane can be understood as composed of similar arrangements at a given density of defects). The associated $`\pi `$-vortex spin defects are known to limit the spin correlation length in the plane, yet the effect on charge correlations is less trivial. While any dislocation defect in a smectic topology destroys the correlation along the lines ($`x`$-axis in fig.1), the effect on correlations perpendicular to the lines ($`z`$-axis in fig.1) is more subtle. The experimentally measured stripes correlations which are discussed throughout this paper are the ones perpendicular to the stripe lines.
To be precise, imagine an x-ray scattering experiment performed on the “window” of the charge stripes configuration depicted in fig.1. If the beam’s spot size is narrow (e.g., like the dashed yellow line) then obviously the observed peak width will not differ from that of perfectly ordered stripes (within the size of the depicted stripes figure). If the beam’s spot size is widened to the size of the window of fig.1, then there will be additional second-harmonic satellite peaks, but the width of all peaks will still remain that of the perfectly ordered stripes system (with an added weak background due to the relatively small deformation regions).
In conclusion, due to the different character of the charge and spin order parameters, the distance between topological defects limits the spin correlation length, but the charge stripes correlation length (in the modulation direction) is less affected. Therefore, disorder dominated by topological defects always leads to $`\xi _s/\xi _c<1`$.
## III Disorder by elastic deformations
In the previous section we established the effect of topological defects. We now examine the extreme opposite case, where topological defects are excluded and only elastic deformations dominate the stripes disorder (fig.2c), i.e., a perfect Bragg glass stripes state. We show that, unlike the case of disorder by topological defects, a Bragg glass stripes state allows for $`\xi _s/\xi _c>1`$; moreover, $`\xi _s/\xi _c=4`$ is expected.
### A Correlation length ratio $`\xi _s/\xi _c`$
First, we elucidate how a finite correlation length arises due to static elastic deformations. Elastic deformations of the stripes look like fig.2c. For clarity, we elaborate an approximate simpler model of disorder (depicted in fig.2b): Picture the charge stripes as an array of parallel rigid rods (fig.2a) with period $`a_c=u_0^{j+1}-u_0^j`$ (in the absence of disorder), and suppose that the effect of the external disorder potential is only to locally perturb the stripes separation.
The disorder potential induces deviations $`\mathrm{\Delta }a_c^j`$ in the local separation between stripes, $`u^{j+1}-u^j=a_c+\mathrm{\Delta }a_c^j`$, with equal probability of local compression and extension, i.e., $`\mathrm{\Delta }a_c^j=\pm \left|\mathrm{\Delta }a_c\left(P\right)\right|`$, possibly with a distribution of deviation sizes $`\left|\mathrm{\Delta }a_c\left(P\right)\right|`$ occurring with probability $`P`$. The typical local deformation may be small, $`\left|\mathrm{\Delta }a_c\left(P\right)\right|\ll a_c`$. Yet, when statistically accumulated over a length of $`N`$ stripes, the average fluctuation of the distance $`u^{j+N}-u^j`$ about its ordered value (where $`u_0^{j+N}-u_0^j=Na_c`$) grows typically like $`\sqrt{N}\left|\mathrm{\Delta }a_c\right|`$, where $`\left|\mathrm{\Delta }a_c\right|\equiv \sqrt{\overline{\left|\mathrm{\Delta }a_c\left(P\right)\right|^2}}`$.
The Bragg glass correlation length $`\xi _c`$ is the scale on which relative displacements of the stripes from their ordered positions become of the order of the bare stripes spacing $`a_c`$. In the ordered state (fig.2a), a Bragg peak results from the constructive interference of reflections from each stripe line (at the stripes wave-vector). Due to disorder, when the accumulated deviation from the ordered state, $`\sum _{j=1}^{N_a}\mathrm{\Delta }a_c^j`$, gets to be about half the bare periodicity (see fig.2b), the constructive interference is lost, which gives rise to a finite peak width in a scattering experiment, by which the correlation length is defined. Thus, we define
$$\xi _cN_aa_c=\left(\frac{a_c}{2\left|\mathrm{\Delta }a_c\right|}\right)^2a_c$$
(1)
where $`N_a`$ is given by the condition
$$\frac{a_c}{2}\simeq \left|\sum _{j=1}^{N_a}\mathrm{\Delta }a_c^j\right|\simeq \sqrt{N_a}\left|\mathrm{\Delta }a_c\right|$$
(2)
In the relevant materials, the disorder potential (due mostly to the dopant $`Sr`$ charge impurities out of the stripe plane) couples directly only to the charge stripes. What is the effect on the spin correlation length? The key observation is that, in the stripe structure, the spin order deviations are forced to follow the charge lines, since the charge lines are anti-phase domain walls irrespective of their position (elsewhere we explicitly defend this statement). In other words, the spin order is enslaved to the charge order and has no independent intrinsic elastic stiffness (in real space motion, not in spin space rotations). This is an essential outcome of the frustrated phase separation view of stripes.
Therefore, for the resulting spin correlation length, it is only the spin period $`a_s`$ that replaces $`a_c`$ on the left side of (2) in the defining condition which relates the bare period to the associated finite correlation length due to elastic deformations, while the average disorder “steps” remain $`\left|\mathrm{\Delta }a_c\right|`$ per charge line also for the spin order. Thus,
$`\xi _s`$ $`=`$ $`\left({\displaystyle \frac{a_s}{2\left|\mathrm{\Delta }a_c\right|}}\right)^2a_c`$ (3)
$`\xi _s/\xi _c`$ $`=`$ $`\left({\displaystyle \frac{a_s}{a_c}}\right)^2=4.`$ (4)
This is exactly the experimental observation, with no free adjustable parameters. The above conclusion is insensitive to the details of the proper elastic model of stripes (and also to the characteristic deformation size $`\left|\mathrm{\Delta }a_c\right|`$). It is the outcome of the generic stripes structure, where $`a_s/a_c=2`$, and of the generic origin of a finite correlation length due to random small deformations in periodic systems.
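The random-walk argument behind Eqs. (1) and (2) can be checked numerically; the sketch below uses a made-up deformation size, $`\left|\mathrm{\Delta }a_c\right|=0.05\,a_c`$. The mean number of stripes needed for the accumulated $`\pm \left|\mathrm{\Delta }a_c\right|`$ deviations to first reach $`a_c/2`$ is $`(a_c/2\left|\mathrm{\Delta }a_c\right|)^2`$, and doubling the dephasing threshold to $`a_s/2=a_c`$ quadruples it, giving $`\xi _s/\xi _c=4`$.

```python
import random

def mean_hitting_steps(m, trials, rng):
    """Average number of +-1 steps for a symmetric random walk to first
    reach magnitude m (the expectation is exactly m**2)."""
    total = 0
    for _ in range(trials):
        s = n = 0
        while abs(s) < m:
            s += rng.choice((-1, 1))
            n += 1
        total += n
    return total / trials

rng = random.Random(0)
# with |Delta a_c| = 0.05 a_c (illustrative), charge dephasing needs a_c/2,
# i.e. 10 unit steps; the spin order dephases only at a_s/2 = a_c, i.e. 20
m_charge = 10
m_spin = 2 * m_charge
xi_c = mean_hitting_steps(m_charge, 2000, rng)  # ~ (a_c/2|da|)^2 = 100 (units of a_c)
xi_s = mean_hitting_steps(m_spin, 500, rng)     # ~ 400
print(round(xi_s / xi_c, 1))  # ~4, independent of the assumed |Delta a_c|
```

The ratio stays near 4 for any small step size, because only the threshold doubles between charge and spin.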
Once the correct framework for analyzing the glassy state is identified, we are in a position to derive various implications. For example, we estimate the strength of disorder as characterized by the ratio $`\left(\frac{va_c^2}{2\left|V_q\right|}\right)^{-1}`$, where $`v`$ is the elastic coefficient associated with compressing or stretching the spacing of the stripes with respect to their preferred spacing $`a_c`$, and $`V_q`$ is the disorder potential strength.
Properly, the charge stripes should be described by some form of anisotropic 2D elastic model (and moreover coupled to an underlying lattice which breaks the rotation symmetry in the plane). The full elastic model is very complicated, but a qualitative estimate can be derived using the limit model of stripes as a one dimensional CDW (as in fig.2b). Using the method of for weak disorder, the ensuing correlation length $`\xi _c`$ is estimated,
$$\frac{\xi _c}{a_c}=\left(\frac{va_c^2}{2\left|V_q\right|}\right)^{2/3}.$$
(5)
For $`\left(LaNd\right)_{7/8}Sr_{1/8}CuO_4`$, where $`a_c\simeq 16`$Å and $`\xi _c\simeq 100`$Å, we find $`\left(\frac{va_c^2}{2\left|V_q\right|}\right)^{-1}\simeq 1/15`$, which indicates that the assumption of weak disorder is at least self-consistent.
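Inverting Eq. (5) for the numbers quoted here reproduces the stated disorder-strength estimate (a simple arithmetic check):

```python
# xi_c / a_c = (v a_c^2 / 2|V_q|)^(2/3)  =>  2|V_q| / (v a_c^2) = (a_c / xi_c)^(3/2)
a_c, xi_c = 16.0, 100.0               # angstroms, values quoted in the text
disorder_ratio = (a_c / xi_c) ** 1.5  # dimensionless disorder strength
print(round(1 / disorder_ratio, 1))   # 15.6, i.e. 2|V_q|/(v a_c^2) ~ 1/15
```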
### B Charge and spin glassiness without topological defects?
Since the disorder potential due to the dopant $`Sr`$ impurities couples only to the charge stripes, it is not a priori clear how a concomitant spin disorder character can be obtained other than through topological defects.
A glassy state is characterized by a wide distribution of activation energies. For the frequency-dependent conductivity $`\sigma \left(\omega \right)`$ associated with various fluctuation modes of the charge stripes density wave, we can use the calculations of . $`\sigma \left(\omega \right)`$ has the general form of a peak at energy $`\omega _0\sim v/\xi _c`$ with a peak width of similar magnitude. The scale $`v/\xi _c`$ can be interpreted as the excitation energy of a fluctuation mode on the order of the pinning localization length “domain”. The peak shape is not Gaussian, and has power-law tails. In contrast with the case of topological defects, there is no contribution to the glassy spin characteristics from static non-topological deformations of the charge stripes. Yet, spin dynamics is affected through its coupling to the charge stripes dynamics. Though an explicit microscopic theory of spin-charge coupling in stripes is still missing, it is expected that the distribution of charge fluctuation frequencies translates into a distribution of local spin fluctuation energies (possibly with similar characteristics). Thus, when topological defects are dilute, glassy spin behavior may be dominated by the contribution of non-topological stripe fluctuations.
## IV Conclusions
To summarize, our main conclusion is the following: In stripes, disorder purely by topological defect configurations leads to $`\xi _s/\xi _c<1`$. Therefore, the seemingly minute detail of the experimental finding that $`\xi _s/\xi _c\simeq 4`$ in $`\left(LaNd\right)_{7/8}Sr_{1/8}CuO_4`$ is enough to exclude the possibility of disorder dominated by topological defects. Moreover, we find that for disorder by pure elastic deformations it is expected that $`\xi _s/\xi _c=4`$, quite insensitive to the details of the elastic stripe models. Thus, any observation of $`1<\xi _s/\xi _c<4`$ implies that there is a relatively dilute concentration of topological defects (with typical distance $`R_D`$), where $`\xi _c<R_D<4\xi _c`$, which cuts the spin correlation length $`\xi _s`$ to below its expected value in the absence of defects.
Hence we proclaim that $`\left(LaNd\right)_{7/8}Sr_{1/8}CuO_4`$ at low temperatures is a quasi Bragg glass type disordered system, in the sense that it is disordered mainly by elastic deformations resulting in a stripes correlation length $`\xi _c`$ much smaller than the typical distance $`R_D`$ between unpaired topological defects, i.e., $`4\xi _c<R_D`$. Thus, theoretical approaches to, and interpretations of, the glassy properties of the known experimental stripe systems should not blindly fit parameters to conventional theory based on topological defects (dressed in stripes costume).
Our analysis can be applied, for example, to the effect of Zinc doping on stripes. It is not clear whether the main effect is of local pinning of the stripes or also as nucleation centers for stripe defects. We propose that an examination of the ratio $`\xi _s/\xi _c`$ will give the answer.
Following our analysis, non-superconducting $`\left(LaNd\right)_{7/8}Sr_{1/8}CuO_4`$ becomes the first doped Copper-Oxide material whose electronic ground state, including interactions and disorder, is delineated. It also provides some of the first strong evidence for the realization of the controversial quasi Bragg glass state.
###### Acknowledgement 1
I thank Jose Torres, A. Nersesyan, J. Zaanen, and S. Kivelson for discussions. |
# Toward gravitational wave detection
## I Detector Characterization
Quality data analysis requires quality data. Part of the process of producing quality data is identifying and, as far as possible, removing instrumental and environmental artifacts. Here we illustrate, using data taken during November 1994 at the LIGO 40 M prototype, the identification and removal, through linear regression, of artifacts due to harmonics of the 60 Hz power mains.
A power spectrum (psd) of the LIGO 40 M IFO\_DMRO (interferometer differential-mode read-out; hereafter, “gravity-wave”) channel shows a series of narrow spectral features at 60 Hz and its harmonics. Similar narrow spectral features are evident in the magnetometer channel, IFO\_MAGX, which was recorded simultaneously with the gravity wave channel.
Focus attention, in both the magnetometer and gravity-wave channel, on a narrow band about one of the harmonics. We suppose that, in this narrow band, the gravity wave channel $`h`$ is related to the magnetometer channel $`M`$ through an expression of the form
$$h[k]=\frac{B(q)}{A(q)}M[k-n]+\frac{C(q)}{D(q)}e[k],$$
(1)
where the index $`k`$ indicates the sample number, the residual $`e[k]`$ is white and $`A(q),B(q),C(q)`$ and $`D(q)`$ are polynomials in the lag operator $`q^1`$,
$$q^1M[k]:=M[k1].$$
(2)
The ratio $`B(q)/A(q)`$ is a linear filter that can be thought of as the transfer function between the magnetometer and the gravity wave channel; similarly, the ratio $`D(q)/C(q)`$ can be thought as a filter that whitens that part of the gravity wave channel not explained by the magnetometer channel.
Using a small sample of data we find the “best” filters $`B(q)/A(q)`$ and $`C(q)/D(q)`$, where better choices yield smaller residuals and have fewer poles and zeros. (Fewer poles and zeros are desired because we don’t want to overfit the data; smaller residuals are desired because we want to identify everything in $`h`$ that can be explained by $`M`$.)
To illustrate, we focus on the 540 Hz harmonic in an approximately 2666 second continuous stretch of LIGO 40 M data taken on 19 November 1994. We mix this harmonic down to zero frequency and down-sample the data to a 4 Hz bandwidth. Using a $`100`$ second segment of data from both the magnetometer and gravity wave channels, we find the filters $`B(q)/A(q)`$ and $`C(q)/D(q)`$ and the lag $`n`$. In this case the best filters have six zeros and one pole each. The quantity
$$h[k]-\frac{B(q)}{A(q)}M[k-n]$$
(3)
is then as free from the effects of the 540 Hz harmonic as we can make it, under the hypotheses of this model.
Figure 1 shows the effectiveness of this analysis. The top two panels show the gravity wave and magnetometer channel psd, with the 60 Hz harmonic features marked with asterisks. In the left bottom panel one curve shows one quadrature of the mixed and decimated gravity wave channel, a second shows the prediction that comes from applying the filter $`B(q)/A(q)`$ to the lagged magnetometer channel, and the third is their difference (cf. eq. 3). The final panel shows the psd of this difference, superposed with the psd of the original $`h[k]`$ and the magnetometer prediction. The magnetometer channel explains 40 dB of the contamination of the gravity wave channel by the 540 Hz harmonic.
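The regression scheme of eq. (1) can be illustrated with a toy example (not the actual 40 M data): a witness channel leaks into a "gravity-wave" channel with an unknown gain and lag, and a least-squares scan over candidate lags recovers both and removes the leaked power. For simplicity the witness here is broadband white noise, and the narrowband mixing-down step described above is omitted; all names and numbers below are illustrative.

```python
import random

rng = random.Random(1)
N, true_gain, true_lag, max_lag = 4000, 0.8, 3, 10
# witness channel (a stand-in for a magnetometer), here broadband white noise
M = [rng.gauss(0.0, 1.0) for _ in range(N)]
# contaminated channel: intrinsic noise plus lagged, scaled witness leakage
h = [rng.gauss(0.0, 1.0) + true_gain * M[k - true_lag] for k in range(max_lag, N)]

def variance(x):
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)

# scan candidate lags; for each, the least-squares gain is <h, M_l> / <M_l, M_l>
best = None
for lag in range(max_lag + 1):
    Ml = [M[k - lag] for k in range(max_lag, N)]
    gain = sum(a * b for a, b in zip(h, Ml)) / sum(b * b for b in Ml)
    resid_var = variance([a - gain * b for a, b in zip(h, Ml)])
    if best is None or resid_var < best[0]:
        best = (resid_var, lag, gain)

resid_var, lag, gain = best
print(lag)                             # recovers the true lag of 3
print(resid_var < 0.75 * variance(h))  # leaked power removed: True
```

A full ARX fit as in eq. (1) would replace the single gain by short polynomials in the lag operator, but the lag-and-gain scan already captures the idea.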
## II Benchmarks for detector design
Gravitational wave detectors are built to detect gravitational waves. Better detectors do a better job of detecting gravitational waves. But, what are the relative advantages of, e.g., better sensitivity in a narrow band as opposed to somewhat worse sensitivity, but over a broader band? How do we quantify better? To aid in answering this question we have developed a Matlab model, bench, that calculates different figures of merit, based on source science, for use in detector design and configuration trade studies.
An interferometer is described to bench in terms of its laser, optical surface, suspension and substrate properties, since it is these that determine the dominant contributions to the detector’s noise performance.<sup>1</sup><sup>1</sup>1Gross parameters, such as arm length, may also be varied. From this characterization bench determines the detector’s expected thermal noise in the mirror suspensions and substrates, radiation pressure and laser shot noise. Using this idealized noise model bench calculates two different figures of merit: the first, an effective distance to which inspiraling binary neutron stars can be observed above a fixed threshold signal-to-noise, and the second, a measure of the upper bound that can be placed on the intensity of a stochastic gravitational-wave background in the LIGO detector system, assuming identical interferometers installed at each observatory.
The bench model for the principal interferometer noise sources has the following features:
* Radiation pressure and laser shot noise expressions support interferometer configurations including power recycling and resonant sideband extraction through the specification of three mirror transmittances and associated losses in the optical system. Thermal lensing effects are estimated and a warning issued if the laser power on the beam splitter exceeds the bounds permitted by the losses in the optical system.
* The suspension thermal noise model includes thermoelastic and structural damping for ribbon and cylindrical suspensions composed of different materials (and, for ribbon suspensions, different aspect ratios) rowan97a ; rowan97b ;
* Thermal noise in the (cylindrical) mirror substrates depends on substrate dimensions, material properties (Young modulus, Poisson ratio, loss angle), and the incident (laser) beam radius bondu98a .
The binary inspiral “effective distance” figure of merit is a distance $`r_0`$ such that the observed rate of inspiraling binary neutron star systems with S/N greater than 8 is equal to $`4\pi r_0^3\dot{n}/3`$, where $`\dot{n}`$ is the rate per unit volume of inspiraling binary systems finn96a . The stochastic signal sensitivity benchmark determines a threshold on the cross-correlation between two identical detectors (located and oriented like the LIGO detectors) such that, in the absence of a stochastic signal (or any other cross-correlated noise), the cross-correlation estimated using 1/3 yr data would exceed this threshold in only one of every one hundred trials. Other benchmarks are planned.
Figure 2 shows a sample noise model produced by bench for an interferometer whose parameters are those described in gustafson99a as a possible LIGO II goal, but whose mirror reflectivities are optimized to maximize the distance to which neutron star binary inspirals could be observed. In this configuration, a single interferometer could survey an effective volume of radius 290 Mpc for neutron star binary inspirals: a volume large enough to expect an event rate of just less than one per month. The two LIGO interferometers operating in this configuration would observe an effective volume $`2^{1/2}`$ times larger, with an expected event rate of just less than one every two weeks.
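The effective-distance figure of merit translates directly into an event rate via $`4\pi r_0^3\dot{n}/3`$. As a sketch, the snippet below adopts a hypothetical rate density $`\dot{n}\sim 10^{-7}`$ inspirals per Mpc³ per year (an assumption for illustration, not a number from the text) together with the $`r_0=290`$ Mpc quoted above:

```python
import math

n_dot = 1e-7   # mergers / Mpc^3 / yr -- ASSUMED value, for illustration only
r0 = 290.0     # Mpc, the effective survey radius from the bench example
rate_per_year = 4.0 * math.pi * r0**3 * n_dot / 3.0
print(round(rate_per_year, 1))  # ~10 per year, i.e. just under one per month
```

Any other assumed rate density simply rescales the result linearly.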
## III Mock data for mock data challenges
Reliable analysis software is a prerequisite for reliable data analysis. Validating the performance of analysis system software will involve “Mock-Data Challenges” (MDCs). In a MDC, “mock data” — artificially generated time-series whose statistical character and signal content is known exactly — is passed through an analysis pipeline.
MDCs take two forms. In the first, idealizations of the detector noise, for which the pipeline response can be anticipated, are constructed and passed through the analysis pipeline. Agreement between the anticipated and actual system response validates the analysis system implementation. In the second form, more faithful simulations of detector noise are used to calibrate the analysis system: i.e., determine, in a realistic but controlled environment, the detection efficiency and false alarm frequency as a function of the pipeline thresholds associated with the selections and data cuts.
In either form, mock data always includes the fundamental noise sources that contribute the greatest part of the detector noise power. In existing and planned interferometric detectors these fundamental contributions arise from radiation pressure noise, laser shot noise, and suspension and substrate thermal noise. The thermal noise contributions have the character of structurally damped harmonic oscillators with small loss angles. The significant contribution from the substrate thermal noise arises from the low-frequency tail of the noise distribution, whose power spectral density (psd) is proportional to $`f^{-1}`$. The significant contribution from the suspension thermal noise arises from the high-frequency tail of the suspension pendulum mode, where the psd is proportional to $`f^{-5}`$. Additionally, the resonant peaks associated with the weakly damped suspension violin modes contribute important instrumental artifacts that must be part of a realistic noise simulation.
The general plan of our noise simulator is to find a combination of linear filters, acting in parallel on independent white noise sequences, whose sum gives rise to a sequence whose power spectral density (psd) has the desired form. The design of short, effective linear filters that capture either the odd-power dependence on $`f`$ characteristic of the thermal noise tail of structurally damped oscillators, or the strong resonant peaks of the weakly damped systems, has been a stumbling block in this program. We have overcome those difficulties by developing a physical model of a structurally damped system whose noise psd has the desired in-band character. Arising from a physical model, the psd can be factored into a real, linear, zero-pole-gain filter that is stable and invertible (i.e., has all of its poles in the left half-plane and zeros in the right half-plane), and with the required magnitude response. The filter’s zeros, poles and gain are determined uniquely and directly by location and quality factor of the resonance, and the desired simulation bandwidth.
The first panel of figure 3 shows the psd of noise simulated to have the character of a structurally damped harmonic oscillator over three decades in frequency above and below the resonance. The simulation psd is overlaid with the spectrum of an idealized structurally damped harmonic oscillator with the same loss angle and resonance frequency. This model involved thirteen poles and an equal number of zeros. They agree to better than 1% over the detector bandwidth. The second panel shows the noise psd of the other components of the simulation (radiation pressure, shot, internal thermal and pendulum mode suspension thermal noise) for a LIGO II like interferometer without resonant sideband extraction.<sup>2</sup><sup>2</sup>2When simulating noise at the LIGO I sample rate of 16.384 KHz the simulator, currently implemented as an interpreted Matlab program, produces mock data at the rate of 81,920 samples per second, or 5$`\times `$ the real-time detector’s sample rate, on a Sun Ultra-30 workstation. The inverse is the true, not amortized, cost per simulated sample and holds for any number of samples.
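The general plan (stable filters shaping white noise to a target psd) can be sketched in miniature. The snippet below is not the thirteen-pole filter described above; it uses Paul Kellett's well-known three-pole approximation to a $`1/f`$ psd (standard published coefficients, an assumption here) and then checks the spectral slope by comparing average periodogram power at two frequencies a factor of four apart, where a $`1/f`$ psd predicts a power ratio of about 4.

```python
import cmath, math, random

rng = random.Random(2)

def pink_segment(n, state):
    """Shape unit white noise with three one-pole sections whose outputs sum
    to an approximately 1/f psd (Paul Kellett's 'economy' coefficients)."""
    b0, b1, b2 = state
    out = []
    for _ in range(n):
        w = rng.gauss(0.0, 1.0)
        b0 = 0.99765 * b0 + w * 0.0990460
        b1 = 0.96300 * b1 + w * 0.2965164
        b2 = 0.57000 * b2 + w * 1.0526913
        out.append(b0 + b1 + b2 + w * 0.1848)
    return out, (b0, b1, b2)

def bin_power(x, k):
    """Periodogram power of segment x at DFT bin k."""
    n = len(x)
    z = sum(v * cmath.exp(-2j * math.pi * k * m / n) for m, v in enumerate(x))
    return abs(z) ** 2

nseg, nfft = 200, 1024
state, p_lo, p_hi = (0.0, 0.0, 0.0), 0.0, 0.0
for _ in range(nseg):
    seg, state = pink_segment(nfft, state)   # carry filter state across segments
    p_lo += bin_power(seg, 8)                # f1 = 8/1024 cycles/sample
    p_hi += bin_power(seg, 32)               # f2 = 4 f1
ratio = p_lo / p_hi
print(round(ratio, 1))  # ~4 for a 1/f power spectral density
```

The same pole-zero idea extends to the $`f^{-5}`$ tails and resonant peaks by adding sections with the appropriate corner frequencies and quality factors.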
## IV Gravitational waves and $`\gamma `$-ray bursts
Gamma-ray bursts (GRBs) are likely triggered by the violent formation of a solar mass black hole, surrounded by a debris torus, at cosmological distances. Given the distance, the violence of the formation event, and the range of possible progenitors, waveforms from events like these cannot be predicted a priori, nor the gravitational radiation associated with an individual burst detected directly.
Nevertheless, if GRBs are accompanied by gravitational wave bursts (GWBs) the correlated output of two gravitational wave detectors evaluated in the moments just prior to a GRB will differ from that evaluated at other times. This difference can be detected, with increasing sensitivity as the number of detector observations coincident with GRBs increases. Observations at the two LIGO observatories, operating at the anticipated LIGO I sensitivity and coincident with 1000 GRBs, can be used to set a 95% confidence upper limit of $`h_{\text{RMS}}\lesssim 1.7\times 10^{-22}`$ on the gravitational waves associated with GRBs. (See finn99f for more details.)
Consider the correlation $`X`$ between the output $`h_1`$ and $`h_2`$ of two LIGO gravitational wave detectors:
$`X`$ $`:=`$ $`\left(x_1,x_2\right)={\displaystyle \int _0^T𝑑t𝑑t^{}x_1(t)Q(|t-t^{}|)x_2(t^{})},`$ (4)
where we have adjusted the origin of time in each detector so that plane gravitational waves from a direction $`\stackrel{}{n}`$ arrive “simultaneously” in the two detectors. Assuming that GWB signals from GRBs are broadband bursts, take the Fourier transform of $`Q`$ to be $`\stackrel{~}{Q}(f)=\left(S_1(|f|)S_2(|f|)\right)^{-1}`$, where $`S_i(f)`$ is the power spectral density (psd) of detector $`i`$, for $`f`$ in the detector band, and $`0`$ otherwise.
Every time a GRB occurs (say, at time $`t_0`$) adjust the origin of time so that $`\stackrel{}{n}`$ points towards the GRB and form $`X`$ with the interval $`(t_0T,t_0)`$ of data from the two detectors. The duration $`T`$ of this interval we choose large enough so that we are likely to have included in the interval any associated GWB. For current models of gamma-ray bursts this is no longer than several hundred seconds (where we have accounted for the cosmological redshift of these distant sources).
For each observed GRB we thus have an $`X`$. Collect these $`X`$ into the on-source observation set $`𝒳_{\mathrm{on}}`$. Similarly, we build an off-source observation set $`𝒳_{\mathrm{off}}`$ following the same procedure but choosing random times $`t_0`$, not associated with any GRBs, and random directions $`\stackrel{}{n}`$ in the sky.
Assuming that GRB signals are weak compared to the detector noise, the sample sets $`𝒳_{\mathrm{off}}`$ and $`𝒳_{\mathrm{on}}`$ differ only in their means. This difference, $`\overline{s}`$, is just the average over the source population of $`\left<h_1,h_2\right>`$, where $`h_k`$ is the GWB signal in detector $`k`$. For the two LIGO detectors $`h_1`$ and $`h_2`$ are, to a good approximation, identical, and $`\overline{s}`$ is proportional to the mean-square amplitude of the wave $`h`$ over the source population.
We can test for the difference in the means of the two distributions $`𝒳_{\mathrm{on}}`$ and $`𝒳_{\mathrm{off}}`$ using Student’s $`t`$-test snedecor67a , a standard test for difference in means. This provides a simple, yes/no answer to the question of whether GWBs are associated with GRBs.
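The two-sample $`t`$ statistic for the difference in means between $`𝒳_{\mathrm{on}}`$ and $`𝒳_{\mathrm{off}}`$ is straightforward to compute. A minimal equal-variance sketch (the function name is ours):

```python
import math

def t_statistic(on, off):
    """Equal-variance two-sample Student t for the difference between
    the means of the on-source and off-source correlation sets."""
    n1, n2 = len(on), len(off)
    m1, m2 = sum(on) / n1, sum(off) / n2
    # pooled (unbiased) variance estimate
    s2 = (sum((x - m1)**2 for x in on)
          + sum((x - m2)**2 for x in off)) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(s2 * (1.0 / n1 + 1.0 / n2))
```

A large positive $`t`$ signals an excess mean correlation in the on-source set, i.e. a GWB/GRB association.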
Alternatively, we can use the value of the $`t`$-statistic to set an upper bound on $`\overline{s}`$. To assess the strength of the upper bound, assume that there is no gravitational radiation associated with GRBs. In this case the ensemble mean, median and mode of the $`t`$ statistic are zero. Assuming that we actually observed $`t`$ equal to zero, we would obtain the 95% upper bound
$`h_{\text{RMS},95\%}^2`$ $`\le `$ $`\left[9.4\times 10^{-22}\right]^2\left({\displaystyle \frac{T}{500\text{s}}}{\displaystyle \frac{1000}{N_{\text{on}}}}\right)^{1/2}{\displaystyle \frac{S_0}{\left(3\times 10^{-23}\text{Hz}^{-1/2}\right)^2}}\left({\displaystyle \frac{\mathrm{\Delta }f}{100\text{Hz}}}\right)^{3/2},`$ (5)
where, for convenience, we have modeled the LIGO I detector noise as approximately constant with power spectral density $`S_0`$ over the bandwidth $`\mathrm{\Delta }f`$, and much higher elsewhere. The value of $`T`$ adopted here is consistent with external shock models of GRBs; if, on the other hand, it becomes clear that internal shock models are more appropriate (as is becoming more likely), then $`T`$ will be reduced by a factor of 1000 and the limit will improve by a factor of nearly six.
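Eq. (5) is easy to evaluate numerically. A sketch (the function name is ours; the default arguments simply restate the reference values appearing in the equation):

```python
def h_rms_95(T=500.0, N_on=1000, S0=(3e-23)**2, df=100.0):
    """Eq. (5): 95% upper bound on h_RMS.
    T    : on-source window per GRB [s]
    N_on : number of GRBs observed in coincidence
    S0   : flat detector noise psd over the band [1/Hz]
    df   : detector bandwidth [Hz]"""
    h2 = (9.4e-22)**2 * ((T / 500.0) * (1000.0 / N_on))**0.5 \
         * (S0 / (3e-23)**2) * (df / 100.0)**1.5
    return h2**0.5
```

Since $`h\propto T^{1/4}`$, shortening $`T`$ by the factor of 1000 quoted above improves the bound by $`1000^{1/4}\approx 5.6`$, consistent with the "factor of nearly six" in the text.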
This upper limit is remarkably strong, especially because it arises without assuming any model for the GWB source or waveform, or the detector noise. The $`t`$ statistic is a robust one; correspondingly, the binary test (did we/didn’t we detect) is insensitive to the actual distribution of the $`X`$ in the sets $`𝒳_{\mathrm{on}}`$ and $`𝒳_{\mathrm{off}}`$. Additionally, since each $`X`$ is a sum over many statistically independent random variables, the noise contribution to each $`X`$ is, by the Central Limit Theorem, normal. Focusing on the difference in the population means has the important consequence that noise correlated between the detectors, but not associated with gravitational waves from GRBs, does not affect the difference in the means. Correspondingly, statistical tests built around the difference in the means are insensitive to noise correlated between the two gravitational wave detectors. Observations with this sensitivity will have important astrophysical consequences, either confirming or constraining the black hole model for GRBs, neither of which can be done with strictly electromagnetic observations.
## V Acknowledgments
We are grateful to the LIGO Laboratory for permitting the use of LIGO 40M prototype data in this work. The research described here is funded by United States National Science Foundation awards PHY 98-00111, 99-6213, 98-00970, 96-02157, 96-30172; the University of Glasgow and the United Kingdom funding agency PPARC.
# Time-resolved optical observation of spin-wave dynamics
## Abstract
We have created a nonequilibrium population of antiferromagnetic spin-waves in $`\mathrm{Cr}_2\mathrm{O}_3`$, and characterized its dynamics, using frequency- and time-resolved nonlinear optical spectroscopy of the exciton-magnon transition. We observe a time-dependent pump-probe line shape, which results from excitation induced renormalization of the spin-wave band structure. We present a model that reproduces the basic characteristics of the data, in which we postulate the optical nonlinearity to be dominated by interactions with long-wavelength spin-waves, and the dynamics to be due to spin-wave thermalization.
Optical magnetic excitations have been studied extensively in several magnetic oxides, and have recently attracted interest in studies of low-dimensional correlated electron systems. In this letter we demonstrate how time-resolved, nonlinear optical spectroscopy (TR-NLOS) of optical magnetic excitations may be used to investigate the interactions and dynamics of short-wavelength magnetic modes in strongly correlated systems. These short-wavelength excitations are the most difficult to treat theoretically, and even in three dimensions their mutual interaction is not fully understood. We present results of femtosecond pump-probe spectroscopy of the exciton-magnon ($`X`$-$`M`$) absorption feature in the antiferromagnetic oxide $`\mathrm{Cr}_2\mathrm{O}_3`$, with both temporal and spectral resolution. This optical absorption feature allows us to excite antiferromagnetic spin-waves directly, with nonequilibrium occupation distributions weighted toward large momenta, high energies, and with sufficient density to observe interaction effects. We have observed a novel nonlinear optical effect associated with the nonequilibrium occupation dependence of the spin-wave (SW) dispersion relation. In semiconductor physics, it has been demonstrated that important and nontrivial physics of correlated many-particle systems can be clarified through careful attention to dynamics, using TR-NLOS. Our work indicates that TR-NLOS can be used to directly manipulate and study magnetic excitations in strongly correlated insulators, in addition to the charged excitations usually probed with optics.
In the periodic lattice of a magnetic crystal the concept of a SW appears naturally when the fermionic spin-Hamiltonian is expressed in terms of boson operators. Controlled approximations exist for distributions of long-wavelength SWs at low excitation densities, but away from these conditions theory must be guided by experimental observations. Neutron spectroscopy and linear optical spectroscopy have provided some of the best evidence for the existence of spin wave renormalization (SWR) at elevated temperatures, but as previous measurements were largely limited to thermally occupied SWs, not much is known about the interactions among excitations at short wavelengths. Theory suggests that the interactions may undergo qualitative changes as the zone boundary is approached. In our experiments, using laser excitation of the $`X`$-$`M`$ absorption, we are able to macroscopically occupy a strongly nonthermal population of SWs, heavily weighted toward the zone boundary.
The $`X`$-$`M`$ transition may be understood as a magnon sideband to an exciton, and is similar in character to the well-known two-magnon absorption. In a cubic environment, the three electrons per $`\mathrm{Cr}^{3+}`$ site possess a ground state with $`{}^{4}\mathrm{A}_{2}`$ symmetry and a lowest excited $`{}^{2}\mathrm{E}`$ state at $`\sim 1.7`$ eV. In a lattice, these levels couple via superexchange interactions $`J`$ and $`J^{}`$, of order 50 meV. The ground state multiplet develops into antiferromagnetic SWs, and the excited state multiplets into “magnetic exciton” bands. The total spin projection $`S_z`$ is preserved by the two spin excitations, so the spin selection rule $`\mathrm{\Delta }m=\mathrm{\Delta }m_X+\mathrm{\Delta }m_M=0`$ is satisfied. To conserve momentum, the photon is absorbed by creating an exciton and a magnon of equal and opposite momentum, $`𝐤_X+𝐤_M\approx 0`$. The excitation is largely localized to neighboring sites, and consequently draws its spectral weight from states over the entire Brillouin zone. Neglecting interaction between the nearby exciton and magnon for clarity, the $`X`$-$`M`$ absorption line shape is given approximately by the joint density of states (JDOS):
$$\rho _{em}(\omega )=\sum _𝐤\delta (\omega -\omega _𝐤^e-\omega _𝐤^m).$$
(1)
The SW dispersion is stronger than that of the exciton, and provides the dominant contribution to the line shape. The absorption feature may be qualitatively understood as the SW DOS, shifted up rigidly in energy by the exciton energy. This allows us to excite SWs and subsequently monitor their density of states (DOS), using a pulsed near-infrared laser.
We have performed pump-probe spectroscopy of the $`X`$-$`M`$ transition, using 100 fs pulses emanating at 76 MHz from a Ti:sapphire laser tuned to $`\mathrm{\hbar }\omega \approx 1.765`$ eV. The beam is split in a 10:1 (pump:probe) ratio and the time delay, $`\mathrm{\Delta }t=t_{probe}-t_{pump}`$, is controlled with a delay line. Both pump and probe are focused through a microscope objective to a 6 $`\mu `$m diameter spot at the sample, which is held at 10 K with a cold finger cryostat. We estimate the average temperature at the sample to be $`\sim 30`$ K. The sample is a $`2.2\mu `$m thick film of $`\mathrm{Cr}_2\mathrm{O}_3`$ grown epitaxially on sapphire, by evaporation of Cr in a reactive oxygen environment. The $`c`$ axis of both $`\mathrm{Cr}_2\mathrm{O}_3`$ and sapphire is normal to the film surface. At the peak of the $`X`$-$`M`$ absorption line, the internal transmissivity of the sample is 52%.
After the sample we measure $`\mathrm{\Delta }\mathrm{T}\times I_{inc}=I_{on}-I_{off}`$, the change in the transmitted probe intensity as the pump is chopped mechanically. In the small signal regime, the normalized change in transmission reproduces the differential absorption of the sample, via the relation $`\mathrm{\Delta }\mathrm{T}/\mathrm{T}\approx -\mathrm{\Delta }\alpha L`$. We measure $`\mathrm{\Delta }\mathrm{T}/\mathrm{T}=(I_{on}-I_{off})/I_{off}`$ as a function of wavelength and time delay, and subtract from this a small background contribution which persists at negative time delay because of the finite repetition rate. At excitation densities of $`10^{-3}/\mathrm{Cr}`$, we observe the complex spectral feature shown in Fig. 1 for three different time delays.
The data are well described in terms of two components: a spectrally featureless photoinduced absorption (PIA), which shifts the overall $`\mathrm{\Delta }\mathrm{T}/\mathrm{T}`$ toward negative values and is relatively time independent over 100 ps, and a derivative-like line shape, which is weakly evident at $`\mathrm{\Delta }t=0`$ and grows as a spectral unit as $`\mathrm{\Delta }t`$ increases.
The magnitude and qualitative line shape shown in Fig. 1 may be explained by recognizing that the absorption of each photon creates a SW, which in turn renormalizes the overall SW band structure through interaction. In a cubic ferromagnet with occupation at long wavelengths, this renormalization takes the form
$`\omega _𝐤(\{n_𝐤^{}\})`$ $`=`$ $`\omega _𝐤^0[1-{\displaystyle \frac{1}{zJS^2N}}{\displaystyle \sum _{𝐤^{}}}n_𝐤^{}\omega _𝐤^{}]`$ (2)
$`=`$ $`\omega _𝐤^0[1-\kappa \mathcal{E}_{tot}],`$ (3)
where $`\mathcal{E}_{tot}`$ is the total energy of the excited SWs. Similar, more complicated expressions hold for antiferromagnets and for different crystal structures. In our experiments, the $`X`$-$`M`$ line shape reflects this renormalization, as $`\alpha (\omega ,\{n_𝐤\})\propto \rho _{em}(\omega ,\{n_𝐤\})`$ now depends on the time dependent distribution of photo-excited SWs, and the pump-probe experiment measures the time evolution of $`\sum _𝐤\frac{d\alpha }{dn_𝐤}n_𝐤(t)`$. In principle, the exciton dispersion relation should also be renormalized for the same reasons, but the exciton bandwidth is a factor of 2 narrower than that of the SWs, and its dispersion is weakest near the zone boundary, so this effect contributes little to the overall line shape change. Numerical simulations confirm these arguments. Since $`\alpha `$ is proportional to the integrated JDOS given in Eq. (1), the total derivative $`\frac{d\alpha }{dn_𝐤}`$ includes two distinct contributions, one from the level shifts $`\frac{d\omega _𝐤}{dn_𝐤}`$ and the other from the change in the integration volume associated with the level shifts. Qualitatively, the level shifts produce an overall redshift in the energy, while the change in the integration volume reduces the spectral weight.
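The renormalization of Eq. (2) can be applied to any sampled dispersion. A first-order sketch in which $`\kappa `$ lumps the lattice-dependent prefactor and the total energy is computed from the bare frequencies (an assumption; the equation as written is self-consistent):

```python
import numpy as np

def renormalized_dispersion(w0, n, kappa):
    """First-order form of Eqs. (2)-(3): w_k = w0_k * (1 - kappa * E_tot),
    with E_tot = sum_k n_k w0_k evaluated from the bare frequencies."""
    E_tot = np.sum(n * w0)
    return w0 * (1.0 - kappa * E_tot)
```

Exciting any finite SW population ($`\mathcal{E}_{tot}>0`$) softens every level uniformly, which is the overall redshift of the line shape discussed above.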
It is interesting to compare the pump-probe signal at long times to the change in the linear absorption induced by raising the temperature. The SW energy absorbed from the laser is only $`\sim 2`$% of the total absorbed energy, so from the total incident energy density of 5.6 J/$`\mathrm{cm}^3`$ only $`\sim 100`$ $`\mathrm{mJ}/\mathrm{cm}^3`$ is absorbed by the SW system. From the known SW dispersion relation measured by neutrons, we may calculate the SW contribution to the specific heat, $`C_p\approx C_v=\frac{2\pi }{15}\frac{k_B^4}{D^3}T^3`$, where $`D\approx 1.25\times 10^{-28}`$ J cm. The energy density of one laser pulse thus corresponds to a temperature change of $`\sim 30`$ K in the magnetic system. In Fig. 2, we compare the saturated pump-probe line shape to that expected from a 30 K temperature change, based on the measured temperature dependence of the linear absorption.
For additional comparison, we also show the line shape calculated from equations (1) and (2), using a scale factor $`\kappa `$ derived from the temperature dependent absorption spectra. The pump-probe spectrum has been scaled up by a factor of three, which may reasonably be attributed to the simplifications of Eq. (2).
The agreement among these curves confirms our assignment of the line shape to SWR, and shows that the energy density in the magnetic system at long time delay is comparable to that absorbed directly by the spin system from the pump beam. The magnetic excitons absorb the majority of the laser energy, serving as a reservoir. The excess energy is transferred rapidly to quenching sites, whereupon it is dissipated over 100 ns to several microseconds via nonradiative processes. In the steady state, a large number of these defect states will be excited, together with the steady-state phonon and SW distributions. As a test for indirect energy transfer to the magnetic system via defects and phonons, we have performed two-color pump-probe experiments using a Coherent RegA 9000 regenerative amplifier system to create high intensity pump pulses at 1.5 eV, well away from the $`X`$-$`M`$ absorption feature. We focused a portion of this beam onto a 3 mm sapphire crystal to generate a white light continuum probe, and used an interference filter to select a 15 meV spectral range spanning the $`X`$-$`M`$ absorption. For absorbed pulse energy densities ranging from 1 to 100 times those used in the degenerate pump-probe experiments, we observed no measurable change in the $`X`$-$`M`$ absorption, indicating that the mechanisms for energy transfer from defect absorption into the magnetic system occur on time scales much longer than those of interest here. We conclude that the magnetic system behaves as a quasi-closed system at least during the first nanosecond, and that the dynamics which we observe is related to an intrinsic internal thermalization of the optically induced, nonequilibrium SW population.
We show the time evolution directly in Fig. 3, where we plot the response at different wavelengths, keeping the laser center wavelength fixed.
As in Fig. 1, we have subtracted a small background component present at negative time delay, due to the steady state heating of the sample. When the probe frequency is outside the $`X`$-$`M`$ line, $`\mathrm{\Delta }\mathrm{T}/\mathrm{T}`$ exhibits prompt PIA with a decay time longer than 500 ps, not shown here. When the probe frequency is inside the $`X`$-$`M`$ line, $`\mathrm{\Delta }\mathrm{T}/\mathrm{T}`$ exhibits both prompt PIA and picosecond dynamics. The initial distribution of SWs created by the laser is weighted heavily toward the zone boundary, because of the factor of $`4\pi k^2`$ in the JDOS integral given in Eq. (1). If all SWs contributed equally to the renormalization, as suggested by Eq. (2), one would expect the SWR and hence the $`X`$-$`M`$ line shape to undergo an abrupt change at $`\mathrm{\Delta }t=0`$, and remain unchanged during the internal thermalization of the SW system. This clearly is not the case: the initial population of $`k\approx \pi /a`$ SWs contributes little to the $`X`$-$`M`$ renormalization line shape.
The exciton and magnon are initially created on neighboring sites and should interact moderately with each other, but their relative group velocity of $`\sim `$10 Å/ps indicates that they should be well separated after a picosecond or less, so we do not believe that the observed change is associated with the decay of the $`X`$-$`M`$ composite. It is also possible that the process of optical absorption in the presence of large $`k`$ excitations is not described well by the particular approximation used here, but must include additional many-particle interactions. Such effects, however, would need to suppress the contribution of SWR to the pump-probe line shape by an order of magnitude. The simplest explanation for our result is that occupation at large $`k`$ produces weaker overall SWR than occupation at the zone center. Such strong $`k`$ dependence in SW interactions has long been indicated theoretically, and the variation of the interaction at short wavelengths may be so strong that the overall interaction effects cancel.
Regardless of the underlying reason for the difference between zone center and zone boundary SW occupation, we can describe the dynamical response phenomenologically by dividing the occupied SW states into two different populations, those at the zone boundary (b) and those at the zone center (c), with a boundary in reciprocal space chosen to reproduce the experiments. We assume that the decay of the initial nonequilibrium energy density $`\mathcal{E}_b=\sum _{𝐤\in 𝐤_b}n_𝐤ϵ_𝐤`$ into the thermalized energy density $`\mathcal{E}_c=\sum _{𝐤\in 𝐤_c}n_𝐤ϵ_𝐤`$ is governed by a single thermalization time, $`\tau `$, and the decay over long times is set by an overall energy decay time, $`𝒯`$. Clearly, $`\tau `$ provides a measure of the interactions coupling the zone boundary spin-waves to those at lower energy, both directly and via phonons. This process may be described by the following phenomenological rate equations:
$`{\displaystyle \frac{d\mathcal{E}_b}{dt}}`$ $`=`$ $`-\mathcal{E}_b/\tau -\mathcal{E}_b/𝒯,`$ (4)
$`{\displaystyle \frac{d\mathcal{E}_c}{dt}}`$ $`=`$ $`\mathcal{E}_b/\tau -\mathcal{E}_c/𝒯.`$ (5)
If we further assume that only the energy density due to zone center SWs, $`\mathcal{E}_c`$, is involved in the renormalization line shape, we obtain the following equation for the time-dependent pump-probe signal, valid for each energy within the $`X`$-$`M`$ absorption region:
$$\mathrm{\Delta }\mathrm{T}/\mathrm{T}=a_1\mathrm{\Theta }(t)+a_2[1-\mathrm{exp}(-t/\tau )]\mathrm{exp}(-t/𝒯),$$
(6)
where $`a_1`$ and $`a_2`$ are prefactors which depend on the spectral region of interest. The step function $`\mathrm{\Theta }(t)`$ is required to account for the PIA, which we have taken to be time independent, consistent with experiments away from the $`X`$-$`M`$ peak. The weak structure at early times, due to SWR, is also included in $`a_1`$, though in principle this may be accounted for by an additional term. We have divided the 10 nm spectral range given by our laser spectrum into ten ranges of equal width, separated by 1 nm, and measured the temporal pump-probe response in each range. We then found the best global fits to the data of Eq. (6), in which $`a_1`$ and $`a_2`$ are allowed to vary with probe wavelength, and $`\tau =40\pm 10`$ ps and $`𝒯=2\pm 1`$ ns are constrained to be the same for all ten wavelengths. The results for four of them are shown by the theoretical curves in Fig. 3. This simple model captures very well the observed dynamics.
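The rate equations (4)-(5) integrate in closed form to $`\mathcal{E}_c(t)=\mathcal{E}_0(1-e^{-t/\tau })e^{-t/𝒯}`$, exactly the time dependence appearing in Eq. (6). A sketch that cross-checks the closed form against a forward-Euler integration, using the fitted $`\tau =40`$ ps and $`𝒯=2`$ ns:

```python
import numpy as np

def thermalization(t, E0=1.0, tau=40.0, T_dec=2000.0):
    """Closed-form solution of rate equations (4)-(5), times in ps:
    E_b(t) = E0 exp(-t/tau - t/T),
    E_c(t) = E0 (1 - exp(-t/tau)) exp(-t/T)."""
    Eb = E0 * np.exp(-t / tau - t / T_dec)
    Ec = E0 * (1.0 - np.exp(-t / tau)) * np.exp(-t / T_dec)
    return Eb, Ec

# Forward-Euler integration of Eqs. (4)-(5) out to t = 200 ps
dt = 0.01
Eb, Ec = 1.0, 0.0
for _ in range(20000):
    dEb = -Eb / 40.0 - Eb / 2000.0
    dEc = Eb / 40.0 - Ec / 2000.0
    Eb += dEb * dt
    Ec += dEc * dt
```

After a few multiples of $`\tau `$ essentially all of the remaining energy resides in the zone-center population, whose slow $`e^{-t/𝒯}`$ decay sets the long-time behavior of the signal.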
In summary, we have generated a macroscopic, nonequilibrium population of SWs and observed its dynamics, using pulsed laser spectroscopy. At long times, the change in the absorption line shape is well understood by assuming that the photogenerated SWs induce a renormalization of the SW dispersion relation. At short times, our results deviate sharply from the predictions of this model, indicating that short wavelength SWs do not contribute to the renormalization line shape. The SWs form a quasi-closed system over 100 ps time scales. The evolution of this nonequilibrium population is consistent with a simple model of SW thermalization, and the thermalization time characterizes intrinsic spin-wave coupling. The effects reported here provide us with information on elementary magnetic excitations that is inaccessible through conventional techniques, which typically probe only thermally occupied magnetic excitations. Moreover, the technique may be used quite generally in magnetic insulators, and may be applied to a wide variety of optical magnetic excitations, including two-magnon excitations and the recently assigned phonon-bimagnon feature in the undoped cuprates. By using multiple wavelength laser sources, this technique may be used in conjunction with excitations across the Mott gap to probe directly the interactions between charge and spin excitations in magnetic solids.
We would like to acknowledge L. Sham for critical reading of the manuscript. This work was supported by the Director, Office of Energy Research, Office of Basic Energy Sciences, Division of Materials Sciences of the U. S. Department of Energy under Contract No. DE-AC03-76SF00098, and by the Stanford NSF-MRSEC Program. A. B. S. acknowledges support by the German National Merit Foundation.
# Measurement of Longitudinal Spin Transfer to Λ Hyperons in Deep-Inelastic Lepton Scattering
## ACKNOWLEDGMENTS
We gratefully acknowledge the DESY management for its support and the DESY staff and the staffs of the collaborating institutions. This work was supported by the FWO-Flanders, Belgium; the Natural Sciences and Engineering Research Council of Canada; the INTAS, HCM, and TMR network contributions from the European Community; the German Bundesministerium für Bildung und Forschung; the Deutscher Akademischer Austauschdienst (DAAD); the Italian Istituto Nazionale di Fisica Nucleare (INFN); Monbusho, JSPS, and Toray Science Foundation of Japan; the Dutch Foundation for Fundamenteel Onderzoek der Materie (FOM); the U.K. Particle Physics and Astronomy Research Council; and the U.S. Department of Energy and National Science Foundation.
## Appendix
As described above, the procedure used to extract the longitudinal spin transfer coefficient $`D_{LL^{}}`$ \[Eq. (5)\] is based on the assumption that the selected spin quantization axis $`L^{}`$ is indeed the direction of the $`\mathrm{\Lambda }`$ polarization. However, if other components of the polarization exist, the extracted result for $`D_{LL^{}}`$ may be contaminated via interference of certain additional terms in the cross section with higher-order terms in the HERMES angular acceptance.
Let us introduce three perpendicular axes in the $`\mathrm{\Lambda }`$ center of mass frame, defined by two chosen unit vectors $`\widehat{J}`$ and $`\widehat{T}`$: $`\widehat{e}_1\equiv \widehat{J}`$, $`\widehat{e}_2\equiv \widehat{J}\times \widehat{T}/|\widehat{J}\times \widehat{T}|`$, $`\widehat{e}_3\equiv \widehat{e}_1\times \widehat{e}_2`$. Further, let the symbol $`D_{Li}`$ refer to the probability for spin transfer from a longitudinally polarized virtual photon to a $`\mathrm{\Lambda }`$ baryon with polarization along the axis $`i`$; the symbol $`D_{Ui}`$ denotes the probability for $`\mathrm{\Lambda }`$ polarization along the axis $`i`$ given an unpolarized beam. We take the vector $`\widehat{J}`$ to represent our direction of interest for longitudinal spin transfer to the $`\mathrm{\Lambda }`$, namely the direction of the virtual photon. The quantity $`D_{L1}`$ is thus identical to the quantity $`D_{LL^{}}^\mathrm{\Lambda }`$ defined previously \[Eq. (3)\]. The vector $`\widehat{T}`$ may be either of the other two vectors available: the electron beam direction or the momentum of the final state $`\mathrm{\Lambda }`$. The second axis $`\widehat{e}_2`$ thus represents the direction normal to the production plane. The number of interfering terms is greatly restricted by applying parity and rotational invariance to a general angular decomposition of the cross section, and by the fact that the HERMES spectrometer is symmetric in the vertical coordinate. Finally one is left with only two terms:
$$\alpha D_{U2}\mathrm{cos}\mathrm{\Theta }_2(P_B\mathrm{sin}(n\mathrm{\Phi }))$$
(7)
and
$$\alpha P_BD(y)D_{L3}\mathrm{cos}\mathrm{\Theta }_3(1+C_n\mathrm{cos}(n\mathrm{\Phi })).$$
(8)
In obtaining these expressions, it is important to note that the axis $`\widehat{e}_2`$, which represents the direction normal to the reaction plane, transforms as a pseudo-vector, while $`\widehat{e}_1`$ and $`\widehat{e}_3`$ transform as vectors. The first contribution \[Eq. (7)\] depends entirely on a non-zero $`P_B\mathrm{sin}(n\mathrm{\Phi })`$ azimuthal moment in $`\mathrm{\Lambda }`$ production, where $`\mathrm{\Phi }`$ denotes the angle between the $`\mathrm{\Lambda }`$ and the electron scattering plane, around the $`\stackrel{}{q}`$ vector. Such moments have been measured to be small in pion production. In addition, they are coupled here with a transverse $`\mathrm{\Lambda }`$ polarization and can only appear in the cross section at higher twist (i.e. they are suppressed at order $`p_T/Q`$). The second term \[Eq. (8)\] corresponds to the other component of $`\mathrm{\Lambda }`$ spin transfer in the production plane, and could contribute if the chosen spin quantization axis differs dramatically from the true $`\mathrm{\Lambda }`$ polarization direction. Monte Carlo studies reveal that even if either of the coefficients $`D_{U2}`$ or $`D_{L3}`$ were of the same magnitude as $`D_{L1}`$, they would contribute to the extracted component at a level of less than 10% of its value.
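The orthonormal triad $`(\widehat{e}_1,\widehat{e}_2,\widehat{e}_3)`$ defined above is straightforward to construct numerically. A sketch, assuming NumPy; the function name is ours:

```python
import numpy as np

def spin_axes(J, T):
    """Triad of the appendix: e1 = J-hat, e2 = J x T / |J x T|,
    e3 = e1 x e2.  J and T must not be parallel."""
    e1 = np.asarray(J, dtype=float)
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(e1, np.asarray(T, dtype=float))
    e2 /= np.linalg.norm(e2)
    return e1, e2, np.cross(e1, e2)
```

For example, with $`\widehat{J}`$ along the virtual-photon direction and $`\widehat{T}`$ along the beam, $`\widehat{e}_2`$ comes out normal to the production plane, as required.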
* References are available as HERMES internal notes, and can be accessed on the HERMES web pages:
http://hermes.desy.de/notes/
# Melting of Moving Vortex Lattices in Systems With Periodic Pinning
## Abstract

We study numerically the effects of temperature on moving vortex lattices interacting with periodic pinning arrays. For low temperatures the vortex lattice flows in channels, forming a hexatic structure with long range transverse and longitudinal ordering. At higher temperatures, a transition to a smectic state occurs where vortices wander between channels and longitudinal order is lost while transverse order remains. At the highest temperatures the vortex lattice melts into an isotropic liquid.
Driven vortex lattices interacting with quenched and thermal disorder are an ideal system in which to study the nonequilibrium phases and transitions that arise from the interplay of competing interactions. Transport measurements, neutron scattering, and Bitter decoration experiments have provided strong evidence for transitions between different vortex dynamic phases, including creep, plastic flow, and ordered (elastic) flow. Theoretical work and simulations suggested that for a moving vortex lattice, at high velocities the effect of disorder can be represented via a shaking temperature $`T_{sh}`$, inversely proportional to the velocity. At high velocities $`T_{sh}`$ decreases below the melting temperature $`T_m`$, and the vortices reorder. $`T_m`$ of the ordered moving vortex lattice is near the equilibrium $`T_m`$ of the disorder-free stationary lattice. In the highly driven, solidified state, theoretical work suggested that the vortex lattice forms a moving Bragg-glass (MBG), or a strongly anisotropic moving smectic (MS) or moving transverse glass (MTG). In the MBG the vortices move in correlated channels with few defects, producing quasi-long range order. In the MS/MTG vortices move in uncorrelated channels, so although power law transverse order is present, the longitudinal order is short range only. Strongly anisotropic ordering, consistent with a MS/MTG, as well as more ordered phases, consistent with a MBG and vortex channeling, have been seen in simulations and Bitter decoration experiments.
Despite the considerable work done in the case of random quenched disorder, the effects of thermal disorder and melting for the interesting case of a moving vortex lattice interacting with a periodic pinning substrate have been far less studied. Periodic pinning substrates in superconductors can be created with arrays of microholes and magnetic dots. In all such systems, the pin radius is smaller than the distance between pins, so that a moving vortex spends the largest fraction of its time in the unpinned area. A crucial difference from the random pinning case is that the effect of the periodic pinning cannot be represented by a shaking temperature. The same periodicity also induces true long range transverse or longitudinal order in the moving lattice.
We report a numerical study of the melting of moving vortices in square periodic pinning arrays. With no driving, the system with pinning melts at a higher temperature than the pin free system. For moving vortices at low temperatures, we observe a triangular lattice flowing in strict 1D correlated channels, with transverse and longitudinal long range order. The transverse order is greater than the longitudinal order, and the anisotropy increases with temperature. At higher temperatures near the melting temperature of the clean equilibrium system, we observe a transition to a moving smectic (MS) state, where transverse vortex wandering between channels occurs. At even higher temperatures near the melting temperature of the pinned equilibrium system, the moving smectic melts into a moving isotropic liquid (ML). We present the dynamic phase diagram and explain its features in terms of the pinned and unpinned equilibrium melting transitions.
We use finite temperature overdamped molecular dynamics simulations in two dimensions, $`𝐟_i=\eta 𝐯_i=𝐟_i^{vv}+𝐟_i^{vp}+𝐟_d+𝐟_i^T`$. We impose periodic boundary conditions in $`x`$ and $`y`$. The force between vortices at $`𝐫_i`$ and $`𝐫_j`$ is $`𝐟_i^{vv}=\sum _{j=1}^{N_v}f_0A_vK_1(|𝐫_i-𝐫_j|/\lambda )\widehat{𝐫}_{ij}`$, where $`K_1(r/\lambda )`$ is a modified Bessel function, $`f_0=\mathrm{\Phi }_0^2/8\pi ^2\lambda ^3`$, $`\lambda `$ is the penetration depth, and the parameter $`A_v`$ tunes the vortex-vortex interaction strength. For most of the results here $`A_v=1.0`$. The driving force $`𝐟_d`$, representing a Lorentz force, is in the $`x`$ direction. The pinning is modeled as attractive parabolic traps of maximum strength $`f_p`$ and a range $`r_p`$ which is less than the distance $`a`$ between pins. The thermal force $`f_i^T`$ has the properties $`<f_i^T>=0`$ and $`<f_i^T(t)f_j^T(t^{^{}})>=2\eta k_BT\delta _{ij}\delta (t-t^{^{}})`$. The temperature is $`T=(1/2\eta k_B)(Af_0)^2\delta t`$, where $`\delta t`$ is the time step in the simulation and $`A`$ is the number we tune to vary $`T`$, with $`f_i^T=Af_0`$. We take $`f_0=k_B=\eta =1`$ and use $`\delta t=0.04`$. We explore the phase diagram by conducting constant $`f_d`$ or constant $`T`$ sweeps in the $`f_d`$-$`T`$ plane. Our model is most relevant to superconductors with periodic arrays of columnar defects or to thin-film superconductors, where the vortices can be approximated as 2D objects. A realistic 3D model would be needed to address the exact nature of the liquid phase, such as whether it is a line liquid. In
this work we consider only the commensurate case where the number of vortices $`N_v`$ equals the number of pinning sites $`N_p`$. Results for the incommensurate cases will be presented elsewhere. The initial vortex positions are generated by simulated annealing with each pinning site capturing one vortex. For finite size scaling we analyze system sizes with $`N_v\propto L^2`$ for $`N_v=224`$ to $`N_v=2112`$.
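One time step of the overdamped dynamics described above can be sketched as follows. This is a sketch only, assuming NumPy: $`K_1`$ is replaced by its large-argument asymptote $`\sqrt{\pi /2x}\,e^{-x}`$ to avoid a special-function dependency, and `f_pin` is a user-supplied callable returning the pinning force at each vortex position:

```python
import numpy as np

def langevin_step(r, f_pin, f_d=0.0, T=0.0, eta=1.0, dt=0.04, lam=1.0, A_v=1.0):
    """One overdamped MD step, eta v_i = f_vv + f_vp + f_d + f_T,
    for an (N, 2) array of vortex positions r."""
    N = len(r)
    F = np.zeros_like(r)
    for i in range(N):
        d = r[i] - r                      # vectors from every vortex j to i
        dist = np.hypot(d[:, 0], d[:, 1])
        dist[i] = np.inf                  # exclude self-interaction
        # large-x asymptote of K_1 as a stand-in for the Bessel function
        K1 = np.sqrt(np.pi * lam / (2.0 * dist)) * np.exp(-dist / lam)
        F[i] = np.sum((A_v * K1 / dist)[:, None] * d, axis=0)  # repulsive
    F += f_pin(r)                         # periodic pinning force
    F[:, 0] += f_d                        # Lorentz drive along x
    F += np.sqrt(2.0 * eta * T / dt) * np.random.randn(N, 2)  # thermal kicks
    return r + F * dt / eta
```

The thermal-force amplitude $`\sqrt{2\eta k_BT/\delta t}`$ reproduces the correlator $`<f_i^Tf_j^T>=2\eta k_BT\delta _{ij}\delta (t-t^{})`$ in discrete time (with $`k_B=1`$).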
We first establish the melting temperature in the pin free and pinned systems without external drive ($`f_d=0`$) with the vortex displacements $`d_r=<|𝐫(t)-𝐫(0)|^2>`$, and the structure factor $`S(𝐤)=\frac{1}{L^2}\sum _{i,j}e^{i𝐤[𝐫_i(t)-𝐫_j(t)]}.`$ In the pinned system the vortex lattice has the same square symmetry as the pinning lattice. The melting temperature is determined from the simultaneous onset of diffusion and a drop in the peaks in $`S(k)`$. In Fig. 1 we show that the melting temperature $`T_m`$ is higher in the pinned system, with $`T_m^p\approx 0.03`$, than in the unpinned system, $`T_m^{np}\approx 0.0058`$. This is reasonable since at commensuration the pins stiffen the vortex lattice.
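As a concrete illustration of this diagnostic, the structure factor can be evaluated directly for a small configuration, using the fact that the double sum over pairs factorizes into the squared modulus of a single sum. This is our own toy code, not the paper's:

```python
import cmath
import math

def structure_factor(positions, k, L):
    """S(k) = (1/L^2) sum_{i,j} exp(i k.(r_i - r_j))
            = (1/L^2) |sum_i exp(i k.r_i)|^2  (real and non-negative)."""
    kx, ky = k
    amp = sum(cmath.exp(1j * (kx * x + ky * y)) for x, y in positions)
    return abs(amp) ** 2 / L ** 2

# perfect 4x4 square lattice, lattice constant a = 1, N_v = L^2 = 16
lattice = [(float(x), float(y)) for x in range(4) for y in range(4)]
peak = structure_factor(lattice, (2.0 * math.pi, 0.0), L=4)  # Bragg peak
off = structure_factor(lattice, (math.pi, 0.0), L=4)         # off-peak
```

At a reciprocal-lattice vector the peak equals $`N_v`$ (here 16); away from it $`S`$ vanishes for the perfect lattice.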
Next we explore the dynamic phases of the system. For $`f_p=0.22f_0`$ the $`T=0`$ depinning occurs at $`f_d=0.22f_0`$. For fixed $`f_d=0.45`$ we perform a $`T`$ sweep, and monitor $`S(𝐤)`$. Every 400 MD steps we measure the transverse displacements $`d_y=<|y(T)-y(0)|^2>`$ from the initial positions of the vortices at $`T=0`$. For low drives the vortices form a pinned square vortex lattice. The pinned phase is defined by measuring $`V_x=(1/N_v)\sum _{i=1}^{N_v}𝐯_i\widehat{𝐱}`$. In the inset of Fig. 1 we show the typical $`V_x`$ versus $`f_d`$ curves for two different temperatures. The transition from the pinned to the moving phase is marked by a jump in $`V_x`$ at a well defined $`f_d`$. From the $`V_x`$ versus $`f_d`$ curves we found little evidence for plastic or collective creep behavior for temperatures below $`T_m^p`$; however, much longer time scales would be necessary to explore the creep behavior.
Above some $`T`$ dependent driving force, we find that the vortices form a triangular lattice, with the principal lattice vector aligned with the direction of motion, since the vortices spend part of their time between pins where
the vortex-vortex interaction dominates.
In Fig. 2(a) we plot the transverse $`S(k_T)`$ and longitudinal $`S(k_L)`$ structure function peaks for increasing $`T`$. For low $`T`$, $`S(k_T)`$ is only slightly larger than $`S(k_L)`$. This anisotropy between the peaks becomes more pronounced as $`T`$ is increased. The ordering can be analyzed from the finite size scaling of the structure factor as $`S(k)/L^2\propto L^{-\eta }`$ . A solid with long range order will have $`\eta =0`$ while a system with short range order such as a liquid will have $`\eta =2`$. In the inset of Fig. 2(a), $`S(k_T)/L^2`$ and $`S(k_L)/L^2`$ (where $`N_v\propto L^2`$) are plotted for different system sizes in the low temperature regime ($`T=0.0025`$). Both peaks scale with $`\eta \approx 0`$, indicative of long range order. Near $`T=0.0095`$, which we label $`T_{MS}`$, $`S(k_L)`$ drops precipitously, indicating complete loss of longitudinal order, while $`S(k_T)`$ retains a significant finite value. Gradually $`S(k_T)`$ drops to the level of $`S(k_L)`$ near $`T=0.03`$, which we label $`T_{ML}`$. In this $`T>T_{MS}`$ regime the longitudinal peak $`S(k_L)/L^2`$ scales as $`\eta =1.95\pm 0.03`$, consistent with an $`\eta =2`$ scaling behavior indicating the loss of longitudinal order. At this same temperature the transverse peak $`S(k_T)/L^2`$ shows a $`L^{0.0}`$ form (triangles in upper inset of Fig. 2), indicating that long range transverse order is still present. The behavior of the two peaks for $`T_{MS}<T<T_{ML}`$ indicates the presence of a moving smectic phase. For $`T>T_{ML}`$ both $`S(k_L)/L^2`$ and $`S(k_T)/L^2`$ scale as $`L^{-2}`$ as the system becomes an isotropic liquid. We label these three phases the moving crystal
(MC), the moving smectic (MS) and the moving liquid phase (ML). $`\eta `$ has some temperature dependence near the transitions which we will examine elsewhere. In the inset to Fig. 2(b) we plot the fraction of sixfold-coordinated vortices $`P_6`$ versus $`T`$ as obtained from the Voronoi construction. The proliferation of defects occurs at $`T=T_{MS}`$ as seen by the drop in $`P_6`$. In Fig. 2(b) we plot the transverse displacement $`d_y`$. For $`T<T_{MS}`$, $`d_y\approx 0`$, indicating that the vortices are moving in straight 1D channels. For $`T>T_{MS}`$, $`d_y`$ increases, indicating that the onset of vortex wandering in the $`y`$ direction is correlated with the loss of longitudinal order and the proliferation of defects.
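The exponents $`\eta `$ quoted above come from fitting $`S(k)/L^2`$ against $`L`$ on a log-log scale. A sketch of such a fit on synthetic liquid-like data (illustrative numbers only, not the paper's data):

```python
import math

def fit_eta(sizes, s_over_l2):
    """Least-squares slope of log(S/L^2) versus log(L); returns eta = -slope."""
    xs = [math.log(l) for l in sizes]
    ys = [math.log(s) for s in s_over_l2]
    n = float(len(xs))
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return -num / den

# synthetic liquid-like data, S(k)/L^2 = 0.7 * L^(-2)  ->  expect eta = 2
sizes = [15, 21, 28, 46]              # roughly sqrt(N_v) for N_v = 224..2112
data = [0.7 * l ** -2.0 for l in sizes]
eta = fit_eta(sizes, data)
```

For a clean power law the fit recovers the exponent exactly; on real peak heights the scatter of the points around the line gives the quoted uncertainty.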
To examine the individual vortex behavior in the different dynamic phases in Fig. 3 we plot snapshots of the vortex positions and trajectories for MC at $`T=0.0025`$, MS at $`T=0.015`$, and ML at $`T=0.035`$. In the MC phase, the vortices form an ordered triangular lattice \[Fig.3(a)\], and move in correlated 1D paths \[Fig.3(b)\], which agrees
with the zero transverse wandering $`d_y\approx 0`$ in Fig. 2(b). As $`T`$ is increased in the MC phase the channel width increases; however, vortices do not cross from one channel to another. The pinning also induces the long-range order seen from the scaling of $`S(k)`$, whereas algebraic decay would be expected since the system is in 2D.
In the MS phase, shown in Fig. 3(c), the vortices are much less ordered than in the MC phase; however, some order remains. The vortex trajectories in the MS phase, Fig. 3(d), reveal that although some channeling occurs along the pinning, there is considerable vortex motion between and across the channels. This interchannel vortex motion accounts for the increase of the transverse displacements, $`d_y`$, at the onset of the MS phase in Fig. 2(b). The residual vortex channeling accounts for the finite value of $`S(k_T)`$ in the MS phase, but the vortex positions are uncorrelated between channels so there is no longitudinal order and the vortex lattice is highly defected. Since the residual channels have the same period as the pinning lattice the transverse scaling in $`S(k)`$ gives long range order. Finally in the ML phase, shown in Fig.3(e,f), the vortex positions are disordered and the channeling behavior is lost.
In Fig. 4 we present the central result of this work, the dynamic phase diagram as a function of $`f_d`$ and $`T`$ obtained from fixed drive increasing $`T`$ simulations (denoted by solid circles) and from fixed temperature increasing drive simulations (denoted by open circles and squares). The depinning line is determined from the pronounced upward curvature in the $`V(I)`$ curves or the onset of displacements. The MC-MS line is found from the saturation of $`S(k_L)`$ to a minimum value as well as the onset of the transverse displacements. The MS-ML line is obtained from the point at which the transverse $`S(k_T)`$ peak drops to a minimal value and exhibits a scaling of $`\eta \approx 2`$. The MS-ML line is roughly independent of drive above $`f_d=0.25f_0`$. The MC-MS line shows some curvature toward lower temperatures for drives near the depinning line. The MC-MS transition occurs slightly above $`T_m^{np}=0.0058`$ but approaches this value for the lower drives. The MS-ML transition line coincides with the pinned (zero driving value) melting temperature $`T_m^p`$. The pinned phase vanishes at $`T_m^p`$.
The onset of these different phases can be understood by considering that for higher drives the moving vortices spend a great part of their time outside the pinning sites because $`r_p\ll a`$. For temperatures above $`T_m^{np}`$ the vortices enter a molten state while moving between pinning sites since they are essentially moving in a clean system. In this melted state, the thermal fluctuations overcome the vortex-vortex interaction and the correlation of vortices in adjacent channels as well as the longitudinal order is lost. The vortices start diffusing at random, leading to a large increase in $`d_y`$. Unlike the case of random pinning, which induces an additional shaking temperature that effectively lowers the temperature at which the vortices disorder or melt, vortices moving in a periodic pinning array will not experience a shaking temperature. As long as $`T`$ is less than the melting temperature of the non-driven clean system $`T_m^{np}`$, at any drive the overall moving vortex lattice remains ordered. Further there is still a pinning effect in the transverse direction. This transverse pinning which causes the channeling has been seen in simulations with random pinning and is particularly large for simulations with periodic pinning arrays . During the time the vortices are in the pinning sites they still feel a transverse pinning force until $`T>T_m^p`$, so some vortex channeling persists and finite transverse ordering appears. Above $`T_m^p`$ the pinning is no longer effective so channeling and transverse order are both lost.
A test of the above interpretation is that $`T_{MS}`$ is related to $`T_m^{np}`$. $`T_m^{np}`$ can be tuned by changing the vortex-vortex interaction prefactor $`A_v`$. As $`A_v`$ is lowered $`T_m^{np}`$ will also be lowered. In the inset of Fig. 4 we show the melting lines for the clean non-moving system and the moving system with pinning ($`f_d=0.45f_0`$) for $`T`$ versus $`A_v`$. As $`A_v`$ is increased the $`T_m^{np}`$ and $`T_{MS}`$ lines increase at the same rate. This correlation is encouraging evidence for the above interpretation. For the stiffer lattices (high $`A_v`$) the MC phase is larger.
In summary, we have studied numerically the melting and dynamic phase diagram of a vortex lattice interacting with a square periodic pinning array. The melting temperature for the non-driven pinned system is higher than that for the equivalent clean system. For moving vortices at low $`T`$ the vortex lattice moves in correlated 1D channels and has long range order with some anisotropy between the longitudinal and transverse peaks. At a higher temperature there is a transition to a Moving Smectic phase where longitudinal order is lost while long-range transverse order remains. In the Moving Smectic phase the channel flow still persists but vortices diffuse between channels. At high temperatures the transverse order and channeling are also lost. The Moving Crystal-Moving Smectic transition corresponds roughly to the melting temperature of the zero drive clean system.
We thank C.J. Olson and R.T. Scalettar for useful discussions and critical reading of the manuscript. Funding provided by CLC/CULAR.
Note Added: After submission we became aware of the paper by V. Marconi and D. Dominguez , in which they also study the melting of moving vortex lattices interacting with a periodic substrate in a periodic Josephson-junction array, and with which we find several overlapping results. They, however, do not find a moving smectic phase, which we attribute to their pinning being modeled as an egg-carton potential, unlike our model in which the space between the pinning sites is essential for the smectic phase to occur.
# Single Production of Leptoquarks at the Tevatron
## I Introduction
Leptoquarks are an undeniable signal of physics beyond the standard model (SM), and consequently, there have been several direct searches for them in accelerators. In fact, many theories that treat quarks and leptons on the same footing, like composite models , technicolor , and grand unified theories , predict the existence of new particles, called leptoquarks, that mediate quark-lepton transitions. Since leptoquarks couple to a lepton and a quark, they are color triplets under $`SU(3)_C`$, carry simultaneously lepton and baryon number, have fractional electric charge, and can be of scalar or vector nature.
From the experimental point of view, leptoquarks possess the striking signature of a peak in the invariant mass of a charged lepton with a jet, which makes their search rather simple without the need of intricate analyses of several final state topologies. So far, all leptoquark searches led to negative results. At hadron colliders, leptoquarks can be pair produced by gluon–gluon and quark–quark fusions, as well as singly produced in association with a lepton in gluon–quark reactions. At the Tevatron, the CDF and DØ collaborations studied the pair production of leptoquarks which decay into electron-jet pairs . The combined CDF and DØ limit on the leptoquark mass is $`M_{\text{lq}}>242`$ GeV for scalar leptoquarks decaying exclusively into $`e^\pm `$–jet pairs. At HERA, first generation leptoquarks are produced in the $`s`$–channel through their Yukawa couplings, and the HERA experiments placed limits on their masses and couplings, establishing that $`M_{lq}>215`$–275 GeV depending on the leptoquark type.
In this work we studied the single production of first generation leptoquarks ($`S`$) in association with a charged lepton at the Tevatron , i.e.
$$p\overline{p}\to Se^\pm \to e^+e^{-}\text{ jets}.$$
(1)
We performed a careful analysis of all possible QCD and electroweak backgrounds for the topology exhibiting jets plus a $`e^+e^{-}`$ pair using the event generator PYTHIA . The signal was also generated using this package. We devised a series of cuts not only to reduce the background, but also to enhance the signal. Since the available phase space for single production is larger than the one for double production, we show that the single leptoquark search can considerably extend the Tevatron bounds on these particles. Our results indicate that the combined results of the Tevatron experiments can exclude the existence of leptoquarks with masses up to 260–285 (370–425) GeV at the RUN I (RUN II), depending on their type, for Yukawa couplings of the electromagnetic strength.
It is interesting to notice that pair production of scalar leptoquarks in a hadronic collider is essentially model independent since the leptoquark–gluon interaction is fixed by the $`SU(3)_C`$ gauge invariance. On the other hand, single production is model dependent because it takes place via the unknown leptoquark Yukawa interactions. Notwithstanding, these two signals for scalar leptoquarks are complementary because they allow us not only to reveal their existence but also to determine their properties such as mass and Yukawa couplings to quarks and leptons. In this work, we also studied the region in the parameter space where the single leptoquark production can be isolated from the pair production.
The outline of this paper is as follows. In Sec. II we introduce the $`SU(3)_CSU(2)_LU(1)_Y`$ invariant effective Lagrangians that we analyzed. We also discuss in this section the main features of the leptoquark signal and respective backgrounds. We present our results in Sec. III while our conclusions are drawn in Sec. IV.
## II Leptoquark signals and backgrounds
We assumed that scalar–leptoquark interactions are $`SU(3)_CSU(2)_LU(1)_Y`$ gauge invariant above the electroweak symmetry breaking scale $`v`$. Moreover, leptoquarks must interact with a single generation of quarks and leptons with chiral couplings in order to avoid the low energy constraints . The most general effective Lagrangian satisfying these requirements and baryon number (B), lepton number (L), electric charge, and color conservations is
$`\mathcal{L}_{eff}`$ $`=`$ $`\mathcal{L}_{F=2}+\mathcal{L}_{F=0},`$ (2)
$`\mathcal{L}_{F=2}`$ $`=`$ $`g_{1L}\overline{q}_L^ci\tau _2\ell _LS_{1L}+g_{1R}\overline{u}_R^ce_RS_{1R}+\stackrel{~}{g}_{1R}\overline{d}_R^ce_R\stackrel{~}{S}_1`$ (4)
$`+g_{3L}\overline{q}_L^ci\tau _2\stackrel{\to }{\tau }\ell _L\cdot \stackrel{\to }{S}_3,`$
$`\mathcal{L}_{F=0}`$ $`=`$ $`h_{2L}R_{2L}^T\overline{u}_Ri\tau _2\ell _L+h_{2R}\overline{q}_Le_RR_{2R}+\stackrel{~}{h}_{2L}\stackrel{~}{R}_2^T\overline{d}_Ri\tau _2\ell _L,`$ (5)
where $`F=3B+L`$, $`q`$ ($`\ell `$) stands for the left-handed quark (lepton) doublet, and we omitted the flavor indices of the leptoquark couplings to fermions. The leptoquarks $`S_{1R(L)}`$ and $`\stackrel{~}{S}_1`$ are singlets under $`SU(2)_L`$, while $`R_{2R(L)}`$ and $`\stackrel{~}{R}_2`$ are doublets, and $`S_3`$ is a triplet.
From the above interactions we can see that first generation leptoquarks can decay into the pairs $`e^\pm q`$ and $`\nu _eq^{\prime }`$, thus giving rise to a $`e^\pm `$ plus a jet, or a jet plus missing energy. However, the branching ratio of leptoquarks into these final states depends on the existence of further decays, e.g. into new particles. In this work we considered only the $`e^\pm q`$ decay mode, treating the branching ratio into this channel ($`\beta `$) as a free parameter. As we can see from Eqs. (4) and (5), only the leptoquarks $`R_{2L}^2`$, $`\stackrel{~}{R}_2^2`$ and $`S_3^{-}`$ decay exclusively into a jet and a neutrino, and consequently cannot give rise to the topology that we are interested in.
The event generator PYTHIA assumes that the leptoquark interaction with quarks and leptons is described by
$$\overline{e}(a+b\gamma _5)q,$$
(6)
and the leptoquark cross sections are expressed in terms of the parameter $`\kappa `$ defined as
$$\kappa \alpha _{\text{em}}=\frac{a^2+b^2}{4\pi }$$
(7)
with $`\alpha _{\text{em}}`$ being the fine structure constant. We present our results in terms of the leptoquark mass $`M_{lq}`$ and $`\kappa `$, being trivial to translate $`\kappa `$ into the coupling constants appearing in Eqs. (4) and (5); see Table I. The subprocess cross section for the associated production of a leptoquark and a charged lepton
$$q+g\to S+\ell ,$$
(8)
depends linearly on the parameter $`\kappa `$ defined in Eq. (7). For the range of leptoquark masses accessible at the Tevatron, leptoquarks are rather narrow resonances with their width given by
$$\mathrm{\Gamma }(S\to \ell q)=\frac{\kappa \alpha _{\text{em}}}{2}M_{\text{lq}}.$$
(9)
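Numerically, Eq. (9) confirms the narrow-resonance statement. The sketch below takes $`\alpha _{\text{em}}=1/137.036`$ (the low-energy value, our assumption) and, for the coupling, a single chiral Yukawa $`g`$ so that $`\kappa \alpha _{\text{em}}=g^2/4\pi `$; for $`\kappa =1`$ this is a coupling of electromagnetic strength.

```python
import math

ALPHA_EM = 1.0 / 137.036     # fine structure constant, low-energy value

def width(kappa, m_lq):
    """Gamma(S -> l q) = (kappa * alpha_em / 2) * M_lq  [Eq. (9)]."""
    return 0.5 * kappa * ALPHA_EM * m_lq

def yukawa(kappa):
    """|g| from kappa * alpha_em = g^2 / (4*pi), single chiral coupling."""
    return math.sqrt(4.0 * math.pi * ALPHA_EM * kappa)

gamma = width(1.0, 300.0)    # ~1.09 GeV for a 300 GeV leptoquark
g = yukawa(1.0)              # ~0.303, i.e. the size of the electric charge
```

A width of about 1 GeV for a 300 GeV state gives $`\mathrm{\Gamma }/M_{\text{lq}}<1\%`$, well below the $`e`$–jet mass resolution of the analysis.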
At the parton level, the single production of leptoquarks leads to a final state exhibiting a pair $`e^+e^{-}`$ and $`q`$ ($`\overline{q}`$). After the parton shower and hadronization the final state usually contains more than one jet. An interesting feature of the final state topology $`e^+e^{-}`$ and jets is that the double production of leptoquarks also contributes to it. Consequently, the topology $`e^+e^{-}`$–jets has a cross section larger than the pair or single leptoquark productions alone, increasing the reach of the Tevatron. In principle we can separate the single from the double production, for instance by requiring the presence of a single jet in the event. However, in the absence of any leptoquark signal, it is interesting not to impose this cut since in this case the signal cross section gets enhanced, leading to more stringent bounds.
We exhibit in Table II the total cross section for the single production of leptoquarks that couple only to $`e^\pm u`$ or $`e^\pm d`$ pairs, assuming $`\kappa =1`$ and $`\beta =1`$ and requiring one electron with $`p_T>50`$ GeV, another one with $`p_T>20`$ GeV, and $`|\eta |<4.2`$ for both $`e^\pm `$. Notice that the cross sections for the single production of $`e^+q`$ and $`e^{-}q`$ leptoquarks, that is $`|F|=0`$ or $`2`$, are equal at the Tevatron. Furthermore, the cross section for the single production of a leptoquark coupling only to $`u`$ quarks is approximately twice the one for leptoquarks coupling only to $`d`$ quarks, in agreement with a naive valence–quark–counting rule. We display in Table III the production cross section of leptoquark pairs for the same choice of the parameters and cuts used in Table II. The small difference between the cross sections for the production of $`e^\pm u`$ and $`e^\pm d`$ leptoquarks is due to the exchange of a $`e^\pm `$ in the $`t`$–channel of the reaction $`q\overline{q}\to S\overline{S}`$.
In our analyses we kept track of the $`e^\pm `$ (jet) carrying the largest transverse momentum, that we denoted by $`e_1`$ ($`j_1`$), and the $`e^\pm `$ (jet) with the second largest $`p_T`$, that we called $`e_2`$ ($`j_2`$). The reconstruction of the jets in the final state was done using the subroutine LUCELL of PYTHIA, requiring the transverse energy of the jet to be larger than 7 GeV inside a cone $`\mathrm{\Delta }R=\sqrt{\mathrm{\Delta }\eta ^2+\mathrm{\Delta }\varphi ^2}=0.7`$.
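The $`\mathrm{\Delta }R`$ separation used for the jet cone (and for the isolation cone below) must handle the periodicity of the azimuthal angle; a small sketch of our own, not code from the analysis:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Delta R = sqrt(d_eta^2 + d_phi^2), with d_phi wrapped into (-pi, pi]."""
    deta = eta1 - eta2
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(deta, dphi)
```

The wrap matters: two objects at $`\varphi =0.1`$ and $`\varphi =2\pi -0.1`$ are separated by 0.2 in azimuth, not $`2\pi -0.2`$.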
The transverse momentum distributions of the $`e_1`$ and $`j_1`$ originating from leptoquarks are shown in Fig. 1, where we required that $`p_T^{e_1}>50`$ GeV, $`p_T^{e_2}>20`$ GeV, and $`|\eta |<4.2`$ for both $`e^\pm `$. In this figure, we added the contributions from single and pair production of $`ue^{-}`$ leptoquarks of mass $`M_{\text{lq}}=300`$ GeV for $`\sqrt{s}=2.0`$ TeV. We can see from this figure that the $`e_1`$ and $`j_1`$ spectra are peaked approximately at $`M_{\text{lq}}/2`$ and exhibit a large fraction of hard leptons, and consequently the $`p_T`$ cut on $`e_1`$ does not significantly reduce the signal.
Within the scope of the SM, there are many sources of backgrounds leading to jets accompanied by a $`e^+e^{-}`$ pair. We divided them into three classes:
* QCD processes: The reactions included in the QCD class are initiated by hard scatterings proceeding exclusively through the strong interaction. In this class of processes, the main source of hard $`e^\pm `$ is the semileptonic decay of hadrons containing $`c`$ or $`b`$ quarks, which are produced in the hard scattering or in the parton shower through the splitting $`g\to c\overline{c}`$ ($`b\overline{b}`$). Important features of the events in this class are that close to the hard $`e^\pm `$ there is a substantial amount of hadronic activity and that the $`e^\pm `$ transverse momentum spectrum is peaked at small values.
* Electroweak processes: This class contains the Drell-Yan production of quark/lepton pairs and the single and pair productions of electroweak gauge bosons. It is interesting to notice that by far the main backgrounds in this class are $`q_ig(\overline{q}_i)\to Zq_i(g)`$. This suggests that we should veto events where the invariant mass of the $`e^+e^{-}`$ pair is around the $`Z`$ mass; however, even after such a cut, these backgrounds remain important due to the production of off-shell $`Z`$'s.
* Top production: The production of top quark pairs takes place through quark–quark and gluon–gluon fusions. In general, the $`e^\pm `$ produced in the leptonic top decay $`t\to be\nu _e`$ are rather isolated and energetic. Fortunately, the top production cross section at the Tevatron is rather small.
As an illustration, we present in Table IV the total cross section of the above background classes requiring the events to exhibit a $`e^\pm `$ with $`p_T>50`$ GeV and a second $`e^{\mp }`$ having $`p_T>20`$ GeV, with the invariant mass of this pair differing from the $`Z`$ mass by more than 5 GeV. As we can see from this table, the introduction of this $`p_T`$ cut already reduces the QCD backgrounds to a level below the electroweak processes without on–mass-shell production of $`Z`$'s. As we naively expect, the increase in the center–of–mass energy has a great impact in the top production cross section.
## III Results
Taking into account the features of the signal and backgrounds, we imposed the following set of cuts:
* (C1) We required the events to exhibit a pair $`e^+e^{-}`$ and one or more jets.
* (C2) We introduced typical acceptance cuts – that is, the $`e^\pm `$ are required to be in the rapidity region $`|\eta _e|<2.5`$ and the jet(s) in the region $`|\eta _j|<4.2`$.
* (C3) One of the $`e^\pm `$ should have $`p_T>50`$ GeV and the other $`p_T>20`$ GeV.
* (C4) The $`e^\pm `$ should be isolated from hadronic activity. We required that the transverse energy deposited in a cone of size $`\mathrm{\Delta }R=0.5`$ around the $`e^\pm `$ direction to be smaller than 10 GeV.
* (C5) We rejected events where the invariant mass of the pair $`e^+e^{-}`$ ($`M_{e_1e_2}`$) is close to the $`Z`$ mass, i.e. we rejected events with $`|M_{e_1e_2}-M_Z|<30`$ GeV. This cut reduces the backgrounds coming from $`Z`$ decays into a pair $`e^+e^{-}`$.
* (C6) We required that all the invariant masses $`M_{e_ij_k}`$ ($`i`$, $`k=1`$, $`2`$) are larger than 10 GeV.
* (C7) We accepted only the events which exhibit a pair $`e^\pm `$–jet with an invariant mass $`M_{ej}`$ in the range $`|M_{ej}-M_{\text{lq}}|<30`$ GeV. An excess of events signals the production of a leptoquark.
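The purely kinematic part of this selection can be sketched as a predicate over reconstructed objects. The event layout (a plain dictionary) and the restated thresholds below are ours; the isolation requirement (C4) and the pairwise-mass floor (C6) are omitted since they need more detailed event information.

```python
M_Z = 91.19  # GeV

def passes_kinematic_cuts(ev, m_lq):
    """ev: {'electrons': [(pT, eta), ...] sorted by decreasing pT,
            'jet_etas': [...], 'm_ee': float, 'm_ej': [e-jet pair masses]}."""
    e1, e2 = ev['electrons'][:2]
    if e1[0] <= 50.0 or e2[0] <= 20.0:             # (C3) pT thresholds
        return False
    if abs(e1[1]) >= 2.5 or abs(e2[1]) >= 2.5:     # (C2) electron rapidity
        return False
    if any(abs(eta) >= 4.2 for eta in ev['jet_etas']):
        return False                                # (C2) jet rapidity
    if abs(ev['m_ee'] - M_Z) < 30.0:               # (C5) Z-window veto
        return False
    # (C7) at least one e-jet pair within 30 GeV of the trial leptoquark mass
    return any(abs(m - m_lq) < 30.0 for m in ev['m_ej'])
```

A signal-like event with an $`e`$–jet mass near the trial $`M_{\text{lq}}`$ passes, while one with $`M_{e_1e_2}`$ on the $`Z`$ peak is vetoed.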
In Fig. 2 we present the $`M_{ej}`$ spectrum after the cuts (C1)–(C6) originating from the background and the production of a $`e^+u`$ leptoquark of mass 250 GeV with $`\kappa =\beta =1`$. The largest invariant mass of the four possible combinations is plotted both for background (dashed line) and signal (solid line). The signal peak is clearly seen out of the background.
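The $`M_{ej}`$ variable plotted in Fig. 2 is just the two-body invariant mass built from the electron and jet four-momenta; a sketch for massless objects (our own illustration):

```python
import math

def inv_mass(p1, p2):
    """M^2 = (E1+E2)^2 - |p1+p2|^2 for four-vectors (E, px, py, pz)."""
    e = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
    m2 = e * e - (px * px + py * py + pz * pz)
    return math.sqrt(max(m2, 0.0))

# back-to-back massless electron and jet, 125 GeV each -> M = 250 GeV,
# i.e. a pair sitting on the signal peak of Fig. 2
electron = (125.0, 125.0, 0.0, 0.0)
jet = (125.0, -125.0, 0.0, 0.0)
m_ej = inv_mass(electron, jet)
```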
### A Pair production
At this point it is interesting to obtain the attainable bounds on leptoquarks springing from the search of leptoquark pairs. In this case we required, in addition to cuts (C1)–(C7), that the events present two $`e^\pm `$–jet pairs with invariant masses satisfying $`|M_{ej}-M_{\text{lq}}|<30`$ GeV. Our results show that CDF and DØ should be able to constrain the leptoquark masses to be heavier than 225 (350) GeV at the RUN I (RUN II) for $`\beta =1`$ and $`\kappa =0`$, assuming that only the background is observed. When the data of both experiments are combined, the limit changes to 250 (375) GeV. It is interesting to notice that our results for the RUN I are compatible with the ones obtained by the Tevatron collaborations . Moreover, taking into account the single production of leptoquarks changes these constraints only by a few GeV for $`\kappa =1`$.
### B Single production
We display in Fig. 3 the total background cross section and its main contributions as a function of $`M_{lq}`$ after applying the cuts (C1)–(C7) for center–of–mass energies of 1.8 and 2.0 TeV. We can see from this figure that the number of expected background events per experiment at the RUN I (II) is 4 (102) for $`M_{\text{lq}}=200`$ GeV dropping to 0 (8) for $`M_{\text{lq}}=400`$ GeV. For the sake of comparison, we display in Fig. 4 the total cross section for the production of $`e^+u`$ and $`e^{}d`$ leptoquarks assuming $`\kappa =1`$ and $`\beta =1`$ for the same cuts and center–of–mass energies.
We estimated the capability of the Tevatron to exclude regions of the $`\kappa \beta `$–$`M_{\text{lq}}`$ plane assuming that only the background events were observed. We present in Fig. 5a the projected 95% CL exclusion region for $`e^+u`$ and $`e^+d`$ leptoquarks at the RUN I with an integrated luminosity of 110 pb<sup>-1</sup> per experiment. From our results we can learn that the search for single $`e^\pm u`$ ($`e^\pm d`$) leptoquarks in each experiment can exclude leptoquark masses up to 265 (245) GeV for $`\kappa \beta =1`$. Combining the results of CDF and DØ expands this range of excluded masses to 285 (260) GeV respectively. The corresponding results for the RUN II with an integrated luminosity of 2 fb<sup>-1</sup> per experiment are presented in Fig. 5b. Here we can see that the combined CDF and DØ data will allow us to rule out $`e^\pm u`$ ($`e^\pm d`$) leptoquarks with masses up to 425 (370) GeV, assuming that $`\beta =\kappa =1`$.
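The logic behind such an exclusion region can be illustrated with a bare Poisson counting experiment: find the smallest signal expectation $`s`$ for which observing no more than the background-like count has probability below 5%. This is only a sketch of the statistical idea; the experiments' actual limit setting includes efficiencies and systematics.

```python
import math

def poisson_cdf(n, mu):
    """P(N <= n) for N ~ Poisson(mu)."""
    return math.exp(-mu) * sum(mu ** k / math.factorial(k)
                               for k in range(n + 1))

def s95(n_obs, b, step=1e-3):
    """Smallest signal mean s with P(N <= n_obs | s + b) <= 0.05."""
    s = 0.0
    while poisson_cdf(n_obs, s + b) > 0.05:
        s += step
    return s

limit = s95(n_obs=0, b=0.0)   # the familiar ~3-event rule: ln(20) ~ 2.996
sigma_95 = limit / 110.0      # pb, for 110 pb^-1 and assuming 100% efficiency
```

For zero observed events and negligible background the excluded signal yield is about 3 events; dividing by the integrated luminosity turns this into a cross-section limit, which the mass dependence of the signal cross section then converts into a mass bound.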
It is important to stress that events exhibiting a pair of leptoquarks also contribute to our single leptoquark search. This is the reason why lighter leptoquarks can be observed even for arbitrarily small $`\kappa `$; see Figs. 5. However, the maximum mass that can be excluded for $`\kappa =0`$ is smaller than the limit coming from the search for leptoquark pairs since the requirement of an additional $`e^\pm `$–jet pair with invariant mass close to $`M_{\text{lq}}`$ helps to further reduce the backgrounds. For instance the single leptoquark search can rule out leptoquarks with masses up to 330 GeV for $`\kappa =0`$ at RUN II while the search for leptoquark pairs leads to a lower bound of 375 GeV.
In principle we can separate the double production of leptoquarks from the single one. An efficient way to extract the single leptoquark events is to require that just one jet is observed. At the RUN I this search leads to an observable effect only for rather large values of $`\kappa `$. On the other hand, this search can be done at the RUN II; however, the bounds coming from this analysis are weaker than the ones obtained above; see Fig. 6. We can interpret this figure as the region of the $`\kappa \beta `$–$`M_{\text{lq}}`$ plane where we can isolate the single leptoquark production and study this process in detail.
## IV Conclusions
The analysis of the single production of leptoquarks at the Tevatron RUN I allows us to extend the range of excluded masses beyond the present limits stemming from the search of leptoquark pairs. We showed that in the absence of any excess of events CDF and DØ individually should be able to probe $`e^\pm u`$ ($`e^\pm d`$) leptoquark masses up to 265 (245) GeV for Yukawa couplings of the electromagnetic strength and $`\beta =1`$. In the case $`\beta =0.5`$ these limits reduce to 250 (235) GeV. Moreover, combining the results from both experiments can further increase the Tevatron reach for leptoquarks. Assuming that leptoquarks decay exclusively into the known quarks and leptons and $`\kappa =1`$, the combined Tevatron results can exclude $`S_{1L}`$ and $`S_3^0`$ leptoquarks with masses up to 270 GeV, $`S_{1R}`$, $`R_{2L}^1`$, and $`R_{2R}^1`$ leptoquarks with masses up to 285 GeV, and $`\stackrel{~}{S}_{1R}`$, $`S_3^+`$, $`R_{2R}^2`$, and $`\stackrel{~}{R}_2^1`$ with masses up to 260 GeV. These results represent an improvement over the present bounds obtained at the Tevatron ; however, the bounds are similar to the ones obtained by the HERA collaborations .
At the RUN II, the search for the single production of leptoquarks will be able to rule out leptoquarks with even larger masses. For instance, the CDF and DØ combined results can probe $`e^\pm u`$ ($`e^\pm d`$) leptoquark masses up to 425 (370) GeV for $`\kappa \beta =1`$. In the case $`\kappa \beta =0.5`$, these bounds reduce to 385 (350) GeV. However, even these improved limits will not reach the level of the indirect bounds ensuing from low energy physics . Direct limits more stringent than the indirect ones will only be available at the LHC or future $`e^+e^{-}`$ colliders .
## Acknowledgments
This work was supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), by Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP), and by Programa de Apoio a Núcleos de Excelência (PRONEX). |
Figure 1: Chew-Frautschi plot for the six low-lying 𝐼=1 parity even mesons (𝜌-trajectory). The masses of the resonances were taken from .
## Abstract
A model for a Regge trajectory compatible with the threshold behavior required by unitarity and asymptotics in agreement with Mandelstam analyticity is analyzed and confronted with the experimental data on the spectrum of the $`\rho `$ trajectory as well as those on the $`\pi ^{}p\pi ^0n`$ charge-exchange reaction. The fitted trajectory deviates considerably from a linear one both in the space-like and time-like regions, matching nicely between the two.
PACS numbers: 12.40.Nn, 13.85.Ni.
Analytic model of a Regge trajectory in the space-like and time-like regions<sup>1</sup><sup>1</sup>1Presented at the Bogolyubov Conference, Moscow-Dubna-Kiev, 27 Sept.-6 Oct., 1999.
R. Fiore<sup>a</sup>, L.L. Jenkovszky<sup>b</sup>, V. Magas<sup>b,c</sup>, F. Paccanoni<sup>d</sup>, A. Papa<sup>a</sup>
<sup>a</sup> Dipartimento di Fisica, Università della Calabria,
Istituto Nazionale di Fisica Nucleare, Gruppo collegato di Cosenza
I-87036 Arcavacata di Rende, Cosenza, Italy
<sup>b</sup> Bogolyubov Institute for Theoretical Physics,
Academy of Sciences of Ukraine
252143 Kiev, Ukraine
<sup>c</sup> Department of Physics, Bergen University,
Allegaten 55, N-5007, Norway
<sup>d</sup> Dipartimento di Fisica, Università di Padova,
Istituto Nazionale di Fisica Nucleare, Sezione di Padova
via F. Marzolo 8, I-35131 Padova, Italy
Regge trajectories may be considered as building blocks in the framework of the analytic $`S`$-matrix theory. We dedicate this contribution to the late N.N. Bogolyubov, whose contribution to this field is enormous, on the occasion of his 90th anniversary. The model to be presented is an example of the realization of the ideas of the analytic $`S`$-matrix theory.
There is a renewed interest in the studies of the dynamics of the Regge trajectories . There are various reasons for this phenomenon.
The hadronic string model (see e.g. ) was successful as a mechanical analogy, generating a spectrum similar to that of a linear trajectory, but it fails to incorporate the interaction between the strings. Although intuitively it seems clear that hadron production corresponds to breakdown of the strings, the theory of interacting strings faces many problems. Paradoxically, the final goal of the hadronic string theory and, in a sense, of the modern strong interaction theory, is the reconstruction of the dual (e.g. Veneziano) amplitude from the interacting strings, originated by the former.
Non-linear trajectories were derived also from potential models. The saturation of the spectrum of resonances was shown to be connected to a screening quark-antiquark potential.
A relatively new development is that connected with various quantum deformations, although the relation between $`q`$ deformations and non-linear (logarithmic) trajectories was first derived by Baker and Coon . $`q`$-deformations of the dual amplitudes (or harmonic oscillators) resulted in deviations from linear trajectories, although the results are rather ambiguous. By a different, so-called $`k`$-deformation, the authors arrived at rather exotic hyperbolic trajectories.
All these developments were preceded by earlier studies of general properties of the trajectories , that culminated in classical papers of the early 1970s by E. Predazzi and co-workers , followed by the paper of the late A.A. Trushevsky , who were able to show, on quite general grounds, that the asymptotic rise of the Regge trajectories cannot exceed $`|t|^{1/2}`$. This result, later confirmed in the framework of dual amplitudes with Mandelstam analyticity , is of fundamental importance. Moreover, wide-angle scaling behavior of the dual amplitudes imposes an even stronger, logarithmic asymptotic upper bound on the trajectories. The combination of a rapid, nearly linear rise at small $`|t|`$ with the logarithmic asymptotics may be comprised in the following form of the trajectory:
$$\alpha (t)=\alpha (0)-\gamma \mathrm{ln}(1-\beta t),$$
(1)
where $`\gamma `$ and $`\beta `$ are constants.
The threshold behavior of the trajectories is constrained by unitarity:
$$\mathrm{Im}\,\alpha _n(t)\sim (t-t_n)^{\mathrm{Re}\,\alpha (t_n)+1/2},$$
(2)
where $`t_n`$ is the mass of the $`n`$th threshold. The combination of this threshold behavior with the square-root and/or logarithmic behavior is far from trivial, unless one assumes a simplified square root threshold behavior that, combined with the logarithmic asymptotics, results in the following form
$$\alpha (t)=\alpha _0-\gamma \mathrm{ln}(1+\beta \sqrt{t_n-t}).$$
(3)
The next question is how do various thresholds enter the trajectory. In a long series of papers N.A. Kobylinsky with his co-workers advocated the additivity idea
$$\alpha (t)=\alpha (0)+\underset{n}{\sum }\alpha _n(t),$$
(4)
with $`\alpha _n(t)`$ having only one threshold branch point on the physical sheet. The choice of the threshold masses is another controversial problem. Kobylinsky et al. assumed that the thresholds are made only of the lowest-lying particles (and their antiparticles), appearing in the $`SU(3)`$ octet and decuplet: $`\pi `$ and $`K`$ mesons and baryons ($`N`$, eventually $`\mathrm{\Sigma }`$ and/or $`\mathrm{\Xi }`$). We prefer to include the physical $`4m_\pi `$ threshold, an intermediate one at $`\sim 1`$ GeV, as well as a heavy one accounting for the observed (nearly linear) spectrum of resonances on the $`\rho `$ trajectory. The masses of the latter will be fitted to the data.
Fig. 1 shows the Chew-Frautschi plot with the trajectory (3), (4) and four thresholds included. This trajectory matches well with the scattering data , as shown in Fig. 2, where fits to the scattering data based on the model are presented.
The construction of a trajectory with a correct threshold behaviour and Mandelstam analyticity, or its reconstruction from a dispersion relation is a formidable challenge for the theory. This problem can be approached by starting from the following simple analytical model where the imaginary part of the trajectory is chosen as a sum of terms like
$$\mathrm{Im}\,\alpha _n(t)=\gamma _n\left(\frac{t-t_n}{t}\right)^{\mathrm{Re}\,\alpha (t_n)+1/2}\theta (t-t_n).$$
(5)
A rough estimate of $`\mathrm{Re}\,\alpha (t_n)`$ can be obtained from a linear trajectory adapted to the experimental data. We have checked this approximation a posteriori and found that it works. It could be improved by iterating the zeroth order approximation. From the dispersion relation for the trajectory, the real part can be easily calculated
$$\mathrm{Re}\,\alpha (t)=\alpha (0)+\frac{t}{\sqrt{\pi }}\underset{n}{\sum }\gamma _n\frac{\mathrm{\Gamma }(\lambda _n+3/2)}{\sqrt{t_n}\,\mathrm{\Gamma }(\lambda _n+2)}\,{}_{2}F_{1}\left(1,\frac{1}{2};\lambda _n+2;\frac{t}{t_n}\right)\theta (t_n-t)+$$
$$+\frac{2}{\sqrt{\pi }}\underset{n}{\sum }\gamma _n\frac{\mathrm{\Gamma }(\lambda _n+3/2)}{\mathrm{\Gamma }(\lambda _n+1)}\sqrt{t_n}\,{}_{2}F_{1}\left(\lambda _n,1;\frac{3}{2};\frac{t_n}{t}\right)\theta (t-t_n),$$
(6)
where $`\lambda _n=\mathrm{Re}\,\alpha (t_n)`$. Work in this direction is in progress.
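Eq. (6) is straightforward to evaluate numerically; a minimal sketch is given below. The hypergeometric series is summed directly, which is valid for argument $`|z|<1`$, i.e. away from the immediate vicinity of each threshold in the time-like region. All numerical values here (intercept, threshold positions, couplings $`\gamma _n`$) are illustrative placeholders, not the fitted parameters of the paper; $`\lambda _n`$ is estimated from a standard linear $`\rho `$ trajectory, as the text suggests.

```python
from math import gamma, sqrt, pi

def hyp2f1(a, b, c, z, nterms=400):
    """Gauss hypergeometric series 2F1(a, b; c; z), valid for |z| < 1."""
    term, total = 1.0, 1.0
    for n in range(nterms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        total += term
    return total

# Illustrative placeholder parameters (NOT the fitted values of the paper):
alpha0 = 0.48                            # trajectory intercept, assumed
t_thr = [0.078, 1.0, 6.0]                # thresholds t_n in GeV^2, assumed
gammas = [0.2, 0.3, 0.7]                 # couplings gamma_n, assumed
lams = [0.5 + 0.9 * tn for tn in t_thr]  # lambda_n from a linear rho trajectory

def re_alpha(t):
    """Re alpha(t) from eq. (6): below- and above-threshold contributions."""
    s = alpha0
    for tn, gn, ln in zip(t_thr, gammas, lams):
        if t < tn:
            s += (t / sqrt(pi)) * gn * gamma(ln + 1.5) / (sqrt(tn) * gamma(ln + 2)) \
                 * hyp2f1(1.0, 0.5, ln + 2.0, t / tn)
        else:
            # series converges when t_n/t is sufficiently below 1
            s += (2.0 / sqrt(pi)) * gn * gamma(ln + 1.5) / gamma(ln + 1.0) \
                 * sqrt(tn) * hyp2f1(ln, 1.0, 1.5, tn / t)
    return s
```

At $`t=0`$ the sum collapses to the intercept, $`\mathrm{Re}\,\alpha (0)=\alpha (0)`$, which provides a quick sanity check of any implementation.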
Acknowledgment
One of us (L.L.J.) is grateful to the Dipartimento di Fisica dell’Università della Calabria and to the Istituto Nazionale di Fisica Nucleare - Sezione di Padova e Gruppo Collegato di Cosenza for their warm hospitality and financial support. |
# Short-scale gravitational instability in a disordered Bose gas
## I Introduction
Gravitational (Jeans ) instability is believed to be at the root of all the large-scale structure that we observe in the universe. (For a review on structure formation, see ref. .) The instability is a direct consequence of gravity being an attractive force: particles will accrue on small clumps, and the clumps will get bigger. Jeans’s original calculation was based on a description of matter via classical hydrodynamics. This description applies only at spatial scales larger than the mean-free path of the particles (which is defined here with respect to some interaction other than gravity). Evolution of density perturbations with scales smaller than the mean-free path can for many purposes be described by the collisionless Vlasov approximation—unless the scale is so small that this semiclassical method breaks down. In the collisionless approximation, a self-gravitating gas is unstable with respect to sufficiently long-wave perturbations, and the instability is of the same type as Jeans’s. (We will define later more precisely what “the same type” means.)
For a wide class of initial distributions, however, the collisionless instability disappears for short-scale perturbations. Consider, for example, a homogeneous (on average) nonrelativistic gas of particles of mass $`m`$ in a box of fixed size, in the Newtonian approximation to gravity (in addition, we assume that the uniform component of density does not gravitate). We show in the Appendix that, for the class of momentum distributions defined there, the collisionless instability exists only for density perturbations with wavenumbers
$$k<Ck_J^2/k_0\equiv k_{*},$$
(1)
where $`k_0`$ is a typical momentum of the particles, and $`C`$ is a numerical constant depending on their momentum distribution. (We use a system of units with $`\hbar =1`$.) The wavenumber $`k_J`$ is determined by the average number density $`n_{\mathrm{ave}}`$:
$$k_J^4=16\pi Gm^3n_{\mathrm{ave}},$$
(2)
and will be called the Jeans wavenumber. One may wonder if any gravitational instability exists for perturbations with $`k>k_{*}`$, that is at shorter scales.
The condition (1) suggests that, depending on the relation between $`k_0`$ and $`k_J`$, a self-gravitating gas can be in one of two different regimes. When
$$k_J\gg k_0,$$
(3)
eq. (1) shows that collisionless instability occurs for perturbations with all wavenumbers up to $`k\sim k_0`$, at which point the semiclassical description of particles breaks down, and the Vlasov equation can no longer be used. In at least one case, however, one can show that a linear instability exists even for $`k>k_0`$ (although of course the argument cannot be based on the Vlasov equation). This case is a perfectly ordered Bose gas, i.e. a gas in which all particles are Bose-Einstein condensed in the $`k=0`$ mode. Elementary excitations of this Bose-Einstein condensate are Bogoliubov’s quasiparticles, and their dispersion law is known, from Bogoliubov’s work , for arbitrary two-body potential. When the only interaction between particles is Newtonian gravity, the dispersion law (for a nonrelativistic gas) reads
$$\omega (k)=(1/2m)\left[k^4-k_J^4\right]^{1/2},$$
(4)
which means that the perfectly ordered Bose gas is gravitationally unstable for all $`k<k_J`$. This result had been reported previously in a number of papers . (We have presented it here for a region that is out of the Hubble expansion; a treatment in the expanding universe is also available .)
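As a quick numerical check (not from the paper), the dispersion law (4) can be evaluated directly; with arbitrary placeholder values for $`G`$, $`m`$ and $`n_{\mathrm{ave}}`$, the $`k\to 0`$ growth rate reproduces the Jeans value $`k_J^2/2m`$:

```python
import numpy as np

# Illustrative check of eq. (4); G, m, n_ave are arbitrary placeholder values.
G, m, n_ave = 1.0, 1.0, 1.0
kJ = (16 * np.pi * G * m**3 * n_ave) ** 0.25        # Jeans wavenumber, eq. (2)

def omega(k):
    # omega(k) = (1/2m) [k^4 - kJ^4]^(1/2): purely imaginary (growing mode)
    # for k < kJ, purely real (oscillating mode) for k > kJ.
    return np.sqrt(complex(k**4 - kJ**4)) / (2 * m)

rate_k0 = omega(0.0).imag                           # growth rate of the k -> 0 mode
jeans_rate = np.sqrt(4 * np.pi * G * m * n_ave)     # (4 pi G m n_ave)^(1/2)
```

The two rates agree because $`k_J^2/2m=(4\pi Gmn_{\mathrm{ave}})^{1/2}`$ by the definition (2).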
The perfectly ordered state is just one possible initial state of a Bose gas. A generic initial state is a patchwork of correlated domains, rather than one such domain. If we describe a nonrelativistic Bose gas by a complex field $`\psi `$, correlated regions are those in which the phase of $`\psi `$ is more or less constant in space. The typical size of correlated regions is $`2\pi /k_0`$. If such a region contains many particles, we can apply formula (4) to it individually, but only for wavenumbers $`k\gg k_0`$: only these sample a more or less ordered background. Eq. (4) then predicts an instability for $`k`$ in the range $`k_0\ll k<k_J`$. At $`k\sim k_0`$, we expect this instability to cross over smoothly to the instability found in the Vlasov approximation. Thus, we expect that a nonrelativistic Bose gas satisfying the condition (3) is unstable with respect to density perturbations with all wavenumbers $`k<k_J`$ (although the theoretical description of the instability has to be switched at $`k\sim k_0`$).
We can now formulate more precisely what we mean by saying that the gravitational instability in the perfectly ordered Bose gas is of the same type as the instability found by Jeans for matter described by classical hydrodynamics. The similarity has two closely related aspects. First, in either case, the instability is linear, i.e. density perturbations with different wavenumbers do not have to interact with one another for it to occur. Second, the growth rate of the instability, for low-$`k`$ modes, scales as square root of the average number density:
$$\mathrm{Im}\,\omega (0)\propto \sqrt{n_{\mathrm{ave}}}$$
(5)
(see e.g. eq. (4)). One can say that this square-root scaling law defines a certain “universality class”, to which the original Jeans instability, the collisionless instability, and the instability of the ordered Bose gas all belong.
In this paper, we want to consider a nonrelativistic Bose gas in which the relation between $`k_0`$ and $`k_J`$ is reversed compared to (3):
$$k_J\ll k_0.$$
(6)
Our motivation is simply that we do not know at present which type of initial state is more likely to be realized in the early universe. We will call the case (6) disordered (because correlated domains are initially small), as opposed to the case (3), which we will call ordered (initial correlated domains are large). According to eq. (1), the collisionless instability will occur in a disordered gas only at large spatial scales. So, if there is a gravitational instability on shorter scales, it is caused by gravitational scattering of the particles, rather than by collisionless effects. Our goal was to determine effects of gravitational scattering on the evolution of a disordered Bose gas.
We imagine the following sequence of events in the early universe. A disordered Bose gas first becomes unstable with respect to large-scale density perturbations via the collisionless (Vlasov) mechanism. As clumps formed by this mechanism grow denser, collisionless instability can develop on shorter scales. We expect that this process will give rise to clusters of matter, each cluster including a number of smaller clumps and some inter-clump gas. Gravitational scattering, which is a slower process, will operate inside those clusters, after they have already fallen out of the Hubble expansion. Accordingly, we considered a disordered Bose gas in a box of fixed size. In addition, we used Newtonian gravity, which we expect to be good for sufficiently short-scale perturbations.
Our results, obtained by numerically following the evolution of a random classical field, apply when the initial state has large (compared to unity) occupation numbers at low momenta and little occupation at high momenta. These results are as follows. (i) Starting from a disordered state, we have observed a well-defined crossover from collisionless instability to a slower growth of the density contrast. This slower growth can be viewed as a gravitational instability of a different type than the linear instabilities discussed above. We confirm the existence of this new type of gravitational instability by running simulations in smaller boxes, so as to remove the collisionless instability altogether. In this case, any growth of the density contrast will have to be attributed to some new type of instability. We indeed observe such a growth, and we interpret it as a result of gravitational scattering of the bosons. The scaling of the rate of the growth with $`n_{\mathrm{ave}}`$, in a small box, is quite distinct from the square-root law (5). In this sense, the new instability is in a different “universality class” than the linear instabilities. (ii) Clumps that result from this nonlinear instability are phase-coherent; in other words, the instability leads to formation of Bose stars. (By a Bose star we mean any coherent gravitationally bound configuration of a Bose field; ground states of gravitating bosons were originally discussed in ref. .) It has been long considered possible that Bose stars will form as result of the linear instability in an ordered Bose gas (although, to our knowledge, that had not been actually demonstrated). Formation of Bose stars in an initially disordered gas is a different effect. We attribute it to gravitational scattering in our case being stimulated: as an effect of Bose statistics, particles that scatter into regions of high density will prefer to end up in phase with particles that are already there.
The rest of the paper is organized as follows. Sect. 2 contains formulation of the problem in terms of a random classical field. In Sect. 3 we present numerical results. Our conclusions, and connections to earlier work, are given in Sect. 4. Some technical details concerning the collisionless instability are given in Appendix.
## II Setting up a random classical field
We consider a nonrelativistic Bose gas described by the following equations:
$`i{\displaystyle \frac{\partial \psi }{\partial t}}`$ $`=`$ $`-{\displaystyle \frac{\nabla ^2}{2m}}\psi +m\mathrm{\Phi }\psi +g_4\psi ^{*}\psi ^2;`$ (7)
$`\nabla ^2\mathrm{\Phi }`$ $`=`$ $`4\pi Gm(\psi ^{*}\psi -n_{\mathrm{ave}}).`$ (8)
Here $`\psi `$ is a complex Bose field, and $`\mathrm{\Phi }`$ is the (real) gravitational potential. Number density of bosons is $`n=\psi ^{*}\psi `$, and its volume average is called $`n_{\mathrm{ave}}`$. For reasons given in the introduction, we assume it sufficient to consider the gas in a box of fixed size (rather than in an expanding universe) and to use the Newtonian approximation to gravity.
All numerical results presented in the next section are for $`g_4=0`$, i.e. for the case when gravity is the only interaction between the particles. The method that we use can be applied equally well to the case $`g_4\ne 0`$. In this paper, however, we limit ourselves to only a cursory discussion (in the concluding section) of possible effects of short-range interactions.
The field $`\psi `$ is in principle a quantum operator but here we treat it as a random classical field. Usefulness of this classical approximation has been recently emphasized in connection with a number of problems in nonequilibrium statistics of Bose fields. Among these problems, the one closest to the problem at hand is phase separation in a nonequilibrium Bose gas with a local attractive interaction (and a repulsive interaction at larger densities) . Some of the results that we obtain here, notably, the coherent nature of the clumps, are similar to those for a local interaction, but due to the long-range nature of gravity there are also some differences. For example, for a short-range interaction, one can also identify ordered and disordered types of initial states. Results of ref. correspond to a disordered initial state. For neither type of state, however, we expect a large-distance crossover to collisionless (Vlasov) dynamics.
The system (7)–(8) can be viewed as a modification, required to include Newtonian gravity, of the Gross-Pitaevskii (GP) equation familiar from the theory of laboratory Bose gases . The system (7)–(8) is universal in the same sense as the GP equation is: at low gas densities, the only parameter needed to characterize short-range interactions is the scattering length, which is encoded in the coupling constant $`g_4`$.
The classical approximation is good when the field $`\psi `$ is large and its power spectrum is contained at small momenta: speaking in terms of particles, the occupation numbers in low-momentum modes are much larger than unity, while those in high-momentum modes are small and rapidly decrease with momentum. We stress, however, that when scattering is strong, as it may become at later stages of clumping, the very notion of a particle, as an entity propagating for relatively large times without collisions, becomes ill-defined. One of the useful features of the classical approximation is that it allows one to avoid dealing with that notion altogether. In addition, provided the power spectrum of $`\psi `$ satisfies the necessary conditions, the classical approximation will apply equally well either in the ordered or in the disordered case. We also stress that, although this may sound paradoxical, the classical approximation does take into account effects of the quantum Bose statistics of the particles. Indeed, the very possibility for $`\psi `$ to become (almost) classical, in the limit of large occupation numbers, is one such effect.
For numerical simulations presented in this paper, we have chosen the initial power spectrum of $`\psi `$ in the following simple form (cf. ref. ):
$$|\psi _\text{k}|^2=A\mathrm{exp}(-k^2/k_0^2),$$
(9)
where $`\psi _\text{k}`$ are Fourier components of the field $`\psi `$:
$$\psi (\text{r})=V^{-1/2}\underset{\text{k}}{\sum }\psi _\text{k}\mathrm{exp}(i\text{k}\cdot \text{r}),$$
(10)
$`V`$ is the integration volume. For an ideal gas, $`|\psi _\text{k}|^2`$ would be the conventionally defined occupation numbers; we will often use the same terminology for the interacting gas. For the classical approximation to apply, it is sufficient that $`A\gg 1`$. Although the distribution (9) may look like an equilibrium Maxwell distribution, it is totally unrelated to the latter. The Maxwell distribution follows from the Bose distribution in the limit when the occupation numbers are small. In our case, $`A\gg 1`$, and the occupation numbers (at low wavenumbers) are large. Thus, the spectrum (9) is highly nonthermal. Eq. (9) determines the magnitudes of $`\psi _\text{k}`$. The phases of $`\psi _\text{k}`$ are chosen as uncorrelated random numbers. The initial number density contrast corresponding to these initial conditions is $`\delta n/n\sim 1`$.
Bose gases with highly nonthermal spectra can form in the early universe through various nonequilibrium processes, such as spinodal decomposition or postinflationary reheating. (Formation of nonthermal spectra in the latter case is reviewed in .) We do not pretend that the specific form (9) is in any way realistic. We only use it as a representative of spectra satisfying the conditions of applicability of the classical approximation. We have also considered some different forms of the initial spectra and have convinced ourselves that these modifications do not alter the results in any significant way.
Finally, for comparison with the classical method used here, we review results obtained in the collisionless (Vlasov) approximation. Evolution of density perturbations with wavenumbers $`k\ll k_0`$ can be studied by considering particles as semiclassical wave packets. We stress that this semiclassical approximation for the particles is entirely different from the classical approximation for the field $`\psi `$. In particular, it does not require that the occupation numbers be large but requires that perturbations be slowly varying. (It is the other way around for the approach based on a random classical field.) When the semiclassical view of the particles applies (i.e. a perturbation is slowly varying), one can describe them with a classical distribution function $`f(\text{r},\text{p};t)`$, which is proportional to the Fourier transform, with respect to k, of the quantum average of $`\psi _{\text{p}-\text{k}/2}^{\dagger }\psi _{\text{p}+\text{k}/2}`$. The distribution function, together with the gravitational potential $`\mathrm{\Phi }`$, can be used to construct a collisionless approximation (which, if needed, can be augmented by a collision integral). The construction is entirely analogous to the derivation of the Vlasov equation for plasmas, see e.g. . In the collisionless approximation, we find that frequencies $`\omega (\text{k})`$ of small perturbations of a homogeneous state with a distribution function $`f_0(\text{p})`$ are determined by the dispersion equation
$$1+\frac{4\pi Gm^2}{k^2}\int \frac{\text{k}\cdot \partial f_0/\partial \text{p}}{\text{k}\cdot \text{v}-\omega -i0}d^3p=0,$$
(11)
where $`\text{v}=\text{p}/m`$. When $`kv`$, for a typical $`v`$, is much smaller than $`|\omega |`$, the dispersion equation reduces to
$$1+4\pi Gmn_{\mathrm{ave}}/\omega ^2=0.$$
(12)
Eq. (12) shows that there is a linear instability with the growth rate
$$\omega _i(0)=(4\pi Gmn_{\mathrm{ave}})^{1/2}=k_J^2/2m.$$
(13)
For a wide class of initial distribution functions, which is specified in the Appendix, the collisionless instability disappears at $`k\sim k_{*}`$, where $`k_{*}`$ is estimated as in the condition (1). The precise value for $`k_{*}`$ is obtained by setting $`\omega =0`$ in eq. (11) and solving for $`k`$. (This procedure is justified in the Appendix.) For example, in an initial state with power spectrum (9) the distribution function is
$$f_0(\text{p})=\frac{A}{(2\pi )^3}\mathrm{exp}(-p^2/p_0^2),$$
(14)
where $`p_0=k_0`$ in the system of units with $`\hbar =1`$. In this case, collisionless instability occurs only for
$$k<k_{*}=(\xi /2)^{1/2}k_0,$$
(15)
where $`\xi =k_J^4/k_0^4`$. Eq. (15) is equivalent to (1) with $`C=1/\sqrt{2}`$.
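The origin of the value $`C=1/\sqrt{2}`$ can be made explicit in one line (a consistency sketch using only the definitions above). For the Gaussian distribution (14), $`\partial f_0/\partial \text{p}=-(2\text{p}/p_0^2)f_0`$, so at $`\omega =0`$ the integrand of eq. (11) reduces to $`-(2m/p_0^2)f_0`$, and the remaining integral just gives $`n_{\mathrm{ave}}`$:

$$1-\frac{4\pi Gm^2}{k_{*}^2}\frac{2m}{p_0^2}\int f_0d^3p=1-\frac{8\pi Gm^3n_{\mathrm{ave}}}{k_{*}^2k_0^2}=1-\frac{k_J^4}{2k_{*}^2k_0^2}=0,$$

which yields $`k_{*}=(\xi /2)^{1/2}k_0`$, in agreement with eq. (15).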
## III Numerical results
For numerical work, it is convenient to rescale the fields of our model, so that we have to deal with a smaller number of parameters. Define new fields $`\chi `$ and $`\varphi `$ as
$`\chi `$ $`=`$ $`(8\pi Gm^3)^{1/2}\psi ,`$ (16)
$`\varphi `$ $`=`$ $`2m^2\mathrm{\Phi }.`$ (17)
Then, eqs. (7)–(8) with $`g_4=0`$ take the form
$`2mi{\displaystyle \frac{\partial \chi }{\partial t}}`$ $`=`$ $`-\nabla ^2\chi +\varphi \chi ,`$ (18)
$`\nabla ^2\varphi `$ $`=`$ $`\chi ^{*}\chi -\nu _{\mathrm{ave}},`$ (19)
where $`\nu _{\mathrm{ave}}`$ is the average of $`\nu =\chi ^{*}\chi `$ over space. Initial condition (9) takes the form
$$|\chi _\text{k}|^2=B\mathrm{exp}(-k^2/k_0^2),$$
(20)
where $`B=8\pi Gm^3A`$, and $`\chi _\text{k}`$ are related to $`\chi (\text{r})`$ in the same way as $`\psi _\text{k}`$ to $`\psi (\text{r})`$, see eq. (10). Finally, the expression (2) for the Jeans wavenumber reduces to
$$k_J^4=2\nu _{\mathrm{ave}}.$$
(21)
All data presented in the figures were obtained by numerically integrating the system (18)–(19), with initial power spectra of the form (20) and uncorrelated random initial phases for $`\chi _\text{k}`$, on $`64^3`$ cubic lattices with periodic boundary conditions. We have chosen the unit of length so that $`k_0=2\pi `$, and the unit of time so that $`2m=1`$. The integrations were done using a second-order in time operator-splitting algorithm; updates corresponding to the operator $`\nabla ^2`$ were carried out by a spectral method based on the fast Fourier transform. The algorithm conserves the number of particles exactly; energy nonconservation was below 1% for all the data sets represented in the figures.
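A minimal sketch of such a split-step scheme is given below; it is not the production code, and the grid size, time step, and parameters are placeholders chosen for a fast run. The kinetic update is a phase rotation in Fourier space and the potential update is a phase rotation in real space, with $`\varphi `$ obtained spectrally from the Poisson equation (19), so the particle number is conserved to machine precision.

```python
import numpy as np

# Sketch of a second-order (Strang) split-step integrator for eqs. (18)-(19),
# in units 2m = 1, k0 = 2*pi. All sizes below are placeholders.
N, L, dt, steps = 16, 3.5, 1e-3, 50
B, k0 = 10.0, 2 * np.pi

k1d = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
k2 = kx**2 + ky**2 + kz**2

# Initial condition (20): Gaussian power spectrum, uncorrelated random phases.
rng = np.random.default_rng(0)
chi_k = np.sqrt(B * np.exp(-k2 / k0**2)) * np.exp(2j * np.pi * rng.random((N, N, N)))
chi = np.fft.ifftn(chi_k) * N**3 / L**1.5   # chi(r) = V^(-1/2) sum_k chi_k e^{ik.r}

def potential(chi):
    """Solve nabla^2 phi = |chi|^2 - nu_ave spectrally (zero mode set to 0)."""
    nu = np.abs(chi) ** 2
    rho_k = np.fft.fftn(nu - nu.mean())
    phi_k = np.where(k2 > 0, -rho_k / np.where(k2 > 0, k2, 1.0), 0.0)
    return np.fft.ifftn(phi_k).real

n_init = np.sum(np.abs(chi) ** 2)           # particle number, up to a constant factor
for _ in range(steps):
    chi = chi * np.exp(-0.5j * dt * potential(chi))               # half potential kick
    chi = np.fft.ifftn(np.exp(-1j * dt * k2) * np.fft.fftn(chi))  # full kinetic drift
    chi = chi * np.exp(-0.5j * dt * potential(chi))               # half potential kick

nu = np.abs(chi) ** 2
contrast = np.sqrt(np.mean((nu - nu.mean()) ** 2)) / nu.mean()    # eq. (22)
```

Both sub-steps conserve $`\chi ^{*}\chi `$ summed over the grid exactly (Parseval for the drift, a pure phase for the kicks), which is the exact number conservation mentioned above; energy conservation, by contrast, must be monitored.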
As we have mentioned in the introduction, there are basically two choices for the size of the integration box. One possibility is to choose the box large enough to activate collisionless instability, in order to see a crossover from the collisionless instability to a different regime. While it is clear, even a priori, that a linear instability cannot carry on forever and will have to end somehow, it is still interesting to see explicitly how that happens and what the different regime is. Representative results are shown in Fig. 1. These results are for $`B=20`$ and box size $`L=4.25`$. A crossover, around $`t=1.2`$, from a more rapid to a slower overall growth is seen both in the density contrast and, especially, in the ratio of the total potential and kinetic energies. The density contrast is defined as
$$\delta n/n=\langle (\nu -\nu _{\mathrm{ave}})^2\rangle ^{1/2}/\nu _{\mathrm{ave}},$$
(22)
where the brackets denote averaging over space. It is significant that the growth of the density contrast continues after the linear instability apparently terminates. This allows us to speak about a nonlinear instability, which we attribute to gravitational scattering. The overall growth of the density contrast is superimposed on rapid oscillations, which are especially large at the later, scattering stage. We attribute these oscillations in the density contrast to oscillations of matter about a forming star.
To confirm the existence of a nonlinear instability, it is useful to run simulations in a smaller box, so that the long-wave modes that could undergo the collisionless (linear) instability are eliminated. For a cubic box with periodic boundary conditions, this means choosing the side length $`L`$ so that $`2\pi /L`$ is larger than the wavenumber $`k_{*}`$ that appears on the right-hand side of (1) (there are no density perturbations with $`k=0`$ because the total number of particles is conserved). Any growth of the density contrast in an initially disordered gas in such a box will have to be attributed to some new type of gravitational instability. We can find out if this instability belongs to a new “universality class” by studying how its rate depends on the initial density of the gas.
When gravity is the only interaction between the particles, the rate of clumping, during an initial stage when the gas is still nearly uniform, can be written in general as
$$t_c^{-1}=\frac{k_0^2}{2m}F(k_J^4/k_0^4,k_{*}L).$$
(23)
The function $`F`$ depends on two parameters that measure the strength of gravitational interaction between the particles:
$$\xi =k_J^4/k_0^4$$
(24)
measures the strength of interactions for momentum transfers of order $`2\pi /k_0`$, while $`k_{*}L`$ measures that for momentum transfers of order $`k_{\mathrm{min}}=2\pi /L`$. In the limit $`k_{*}L\to \infty `$, when the initial clumping is due to collisionless instability, $`F`$ becomes a function of its first argument only, and
$$F(\xi )\propto \sqrt{\xi },$$
(25)
see (12). In an ordered Bose gas, dependence of the rate on $`L`$ disappears whenever $`k_J\gg k_{\mathrm{min}}`$, and its scaling with $`\xi `$ is then given by the same square-root law (25), see (4).
In an initially disordered gas, the situation is more complicated. The condition (6) (i.e. $`\xi \ll 1`$) guarantees only that interactions in the initial state are weak for momentum transfers of order $`2\pi /k_0`$. In a small box, we also have the condition $`k_{*}<k_{\mathrm{min}}`$, which guarantees that interactions are at most of moderate strength even for the smallest momentum transfers. In this case, the initial gas can be considered weakly interacting, and one may contemplate applying kinetic theory. However, even if suitable kinetic equations exist, they will, in general, not reduce to a Boltzmann equation (properly modified to include the effect of large occupation numbers). Indeed, as we have already noted, fluctuations with spatial scales $`\ell \lesssim 2\pi /k_0`$ cannot be described using the conventional semiclassical distribution function. If such fluctuations are important (and we will argue that in the present case they are), the Boltzmann equation will not apply.
The simplest way to see that the Boltzmann equation is in general insufficient for studying evolution of a Bose gas, either for long-range or for short-range interactions, is to note that it includes interaction only via the scattering probability and so does not distinguish between attractive and repulsive interactions. On the other hand, on physical grounds we expect dynamics for these two types of interactions to be vastly different.
Let us refer to the rate predicted by the Boltzmann equation for changes in the distribution function as the Boltzmann rate. For a short-range attractive interaction, it has been found numerically in ref. that scaling of the clumping rate with $`\xi `$ agrees to a good accuracy with scaling of the Boltzmann rate. In general, however, there is no reason to expect such agreement. For gravitational scattering, the Boltzmann rate (for a gas with large occupation numbers) is of the form (23) with
$$F(\xi ,k_{*}L)\propto \xi ^2\mathrm{ln}(k_0L).$$
(26)
The logarithm here is analogous to the Coulomb logarithm in plasma kinetics (cf. ref. ); it is due to an infrared divergence in the collision integral. In a small box, this divergence is cut off at wavenumber transfers of order $`k_{\mathrm{min}}=2\pi /L`$. Now, consider the case when $`k_0`$ and $`L`$ are fixed. The scaling predicted by (26) is $`F(\xi )\propto \xi ^2`$. This is not quite what we see numerically. Instead, a good fit to the rate is obtained with
$$F(\xi )\propto \xi ^\alpha ,$$
(27)
where $`\alpha `$ is noticeably larger than 2.
We attribute this large deviation of $`\alpha `$ from 2 to an important role played by fluctuations with spatial scales much smaller than $`2\pi /k_0`$. One can readily invent a reason why such fluctuations are more important in the presence of a long-range interaction than in the absence of one: even a short-scale fluctuation will influence many particles when there is a long-range force. We thus arrive at a picture of short-scale fluctuations constantly appearing and disappearing, until one of them is able to start a growing clump.
To extract the scaling law from our numerical results, we have used data corresponding to $`L=3.5`$ and three different values of $`B`$: $`B=7`$, $`B=10`$ and $`B=20`$. The corresponding values of $`\xi =k_J^4/k_0^4`$ are 0.050, 0.071, and 0.143. For these values of $`L`$ and $`\xi `$, the right-hand side of (15) is smaller than $`2\pi /L`$, so collisionless instability does not occur. Consequently, the density contrast’s growth, which was observed in all three cases, is a signature of a new type of gravitational instability.
The growth of the density contrast should be more accurately referred to as a growth of some average with respect to time, because it is superimposed on rapid oscillations similar to those seen at later stages in Fig. 1. We expect that such an average will depend on time and $`\xi `$ mainly through a dependence on the ratio $`t/t_c`$, or, because $`k_0`$ and $`L`$ are kept constant, through the product $`t\xi ^\alpha `$, where $`\alpha `$ is the power that we want to determine:
$$\overline{\delta n/n}\approx f(t\xi ^\alpha ).$$
(28)
The overline denotes averaging over several oscillations, and $`f`$ is some function. When we plot the averaged density contrast, for the above three values of $`B`$, against the rescaled time variable $`t\xi ^\alpha `$, we find that a good collapse of the plots is achieved for
$$\alpha =2.7,$$
(29)
see Fig. 2. The collapse is not nearly as good for $`\alpha =2`$, the value characteristic of a Boltzmann rate. Our interpretation is that the instability is a result of gravitational scattering, in which fluctuations with spatial scales $`\ell \lesssim 2\pi /k_0`$ play an important role.
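The collapse test itself is easy to reproduce on synthetic data (an illustration of the method only; the master curve below is a stand-in, not the measured contrast): generate curves obeying $`f(t\xi ^\alpha )`$ with a known exponent, rescale time by $`\xi ^\alpha `$ for trial values of $`\alpha `$, and pick the value minimizing the spread between the rescaled curves.

```python
import numpy as np

# Synthetic-data illustration of the collapse used to extract alpha.
alpha_true = 2.7
xis = [0.050, 0.071, 0.143]                  # the three values of xi used in the runs
f = lambda x: np.tanh(3 * x)                 # stand-in master curve (assumed shape)
t = np.linspace(0, 5, 200)
curves = {xi: f(t * xi**alpha_true) for xi in xis}

def spread(alpha):
    """Average pointwise dispersion of the curves on a common rescaled-time grid."""
    s = np.linspace(0, 5 * min(xis) ** alpha, 100)
    vals = [np.interp(s, t * xi**alpha, curves[xi]) for xi in xis]
    return np.std(vals, axis=0).mean()

alphas = np.linspace(2.0, 3.5, 151)
best = alphas[np.argmin([spread(a) for a in alphas])]   # recovers alpha_true
```

On exact synthetic data the spread vanishes (up to interpolation error) at the true exponent, so the minimum is sharp; on noisy simulation data the same procedure gives the best-collapse $`\alpha `$ with a visually estimated uncertainty.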
There is an approximately linear stage in the growth of $`\overline{\delta n/n}`$ in Fig. 2, from about 0.5 to about 1.2 in units of $`100t\xi ^{2.7}`$. Fitting this linear growth with a time dependence of the form
$$\overline{\delta n/n}=t/t_c+\mathrm{const},$$
(30)
provides one possible definition of the rate $`t_c^1`$. Numerical fitting gives
$$t_c^{-1}\approx 250\xi ^{2.7}.$$
(31)
This corresponds to eq. (23) with $`F(\xi )\approx 6\xi ^{2.7}`$.
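The fit of eq. (30) is an ordinary straight-line fit over the linear stage; a minimal sketch with noiseless synthetic data standing in for the simulation output:

```python
import numpy as np

# Sketch of the rate extraction in eqs. (30)-(31): fit the linear stage of
# the averaged density contrast with t/t_c + const and read off the slope.
# The data below are synthetic stand-ins for the simulation output.

xi = 0.071
rate_true = 250.0 * xi**2.7                  # eq. (31), used to fake the data
t = np.linspace(0.0, 1.0 / rate_true, 50)    # the "approximately linear stage"
contrast = rate_true * t + 0.3               # delta-n/n = t/t_c + const

slope, intercept = np.polyfit(t, contrast, 1)   # slope estimates 1/t_c
print(abs(slope - rate_true) < 1e-6)
```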
The estimate (31) has been obtained for initial power spectra of the form (9). It is natural to expect that a different form of the initial power spectrum will lead to a different numerical coefficient in (31). That is indeed so, although in the cases that we have considered the difference is not overwhelming. For instance, one can generate a random field with a power spectrum of the form (9) and then make, by hand, the magnitudes of $`\chi `$ on all lattice sites equal to some fixed value, while preserving the phases (in coordinate space). One can then use this new field as an initial condition that has a power spectrum different from (9). We have done that for $`L=3.5`$, and $`B=10`$ and $`20`$. In this case, the density contrast is initially zero. However, it rapidly develops to a nonzero value, as the dynamics of the density is activated by the dynamics of the phase of $`\psi `$. The subsequent evolution of the density contrast is fairly similar to that seen in Fig. 2. The scaling of the rate is consistent with $`\alpha =2.7`$, but the rate itself is about half of that given by eq. (31).
Finally, Fig. 3 shows the profile of the star at different moments of time, and Fig. 4 illustrates the star’s phase-correlated (Bose-Einstein condensed) nature. (Only one large star has formed per box, in addition to a number of much smaller clumps.)
## IV Discussion
The main result of this work is numerical evidence for a new type of gravitational instability. We have observed this instability in simulations of a disordered Bose gas in a box of a fixed size. We interpret it as being due to collisions between the particles, or, in terminology more suitable for a system with large occupation numbers, between classical waves. We have found that the scaling of the instability rate with the average density deviates from the behavior characteristic of changes described by the Boltzmann equation. We interpret this deviation as an indicator of an important role played by short-scale fluctuations of the field. We have also presented evidence (Fig. 1) that this instability will take over from a collisionless instability in a large system. This means that it could contribute to formation of structures inside regions that had already fallen out of the Hubble expansion. In addition, we have found that this collisional instability leads to formation of a phase-correlated clump of matter (a Bose star) in an initially disordered Bose gas. Hence, Bose stars can form through gravity alone, even in the absence of a large preexisting correlation length.
These results were obtained using the classical approximation for a Bose field. The classical approximation applies to a field that has a certain type of nonthermal power spectrum, namely, occupation numbers are large at small wavenumbers but rapidly decrease towards the ultraviolet. We do not exclude, however, that a similar nonlinear instability exists in matter with different distributions of particles with respect to momenta, for example, in a thermal or nearly thermal gas (for which the classical approximation does not apply).
Our results may apply to nonbaryonic dark matter (if the dark-matter particle is a boson) and to low-density hydrogen. (At high densities, when inelastic processes are important, the simple description (7) is no longer adequate.) It is fascinating to think that some of the matter in the universe may be in the form of superfluid hydrogen. However, there may be interesting implications for structure formation even if our results apply only during initial stages of clumping, when the density is still low.
We should also comment on a possible role of short-range interactions in a disordered self-gravitating Bose gas. As we have already mentioned, at low enough densities all the short-range interactions will be encoded in a single coefficient $`g_4`$ of the cubic term in eq. (7). The strength of this local interaction relative to the strength of gravitational scattering in the initial state is measured by the parameter $`\eta =g_4k_0^2/4\pi Gm^2`$. This parameter can be small even when $`g_4`$ is much larger than $`G/c^2`$ ($`c`$ is the speed of light), provided the ratio $`k_0/mc`$ is small enough, i.e. the initial state is sufficiently nonrelativistic. On the other hand, a not-too-small $`\eta `$ can lead to interesting effects. A repulsive interaction ($`g_4>0`$) in a Bose gas causes coarsening—growth of the sizes of correlated domains. If the interaction is strong enough, we expect that the gas will become sufficiently ordered before the nonlinear instability has a chance to develop, and clumping will take place via the corresponding linear instability. In addition, we expect that repulsion will make emerging Bose stars larger in size and smaller in density. (For equilibrium configurations, it has long been known that even a small repulsion makes a large difference.) An attractive interaction ($`g_4<0`$) produces correlated clumps all by itself; we expect that it will help gravity along when both are present.
Finally, we discuss the relation of our work to earlier work in the literature. We can discern two trends in the earlier work on formation of Bose stars. Some researchers considered gravitational stability and evolution of already coherent configurations. That work included linear stability analysis of a homogeneous Bose-Einstein condensate and a study of spherically symmetric collapse of an already coherent spherically symmetric configuration. It has been suggested that the Jeans-type instability of a homogeneous Bose-Einstein condensate may lead to formation of Bose stars. Although we do not present corresponding data in this paper, we have confirmed that a Bose star indeed forms in an ordered Bose gas. However, we have given evidence that a large preexisting correlation length, i.e. a high initial degree of spatial coherence, is not necessary for Bose star formation. A Bose star can form even in an initially incoherent (disordered) gas via a new type of gravitational instability that we have identified here. The second trend in the earlier work was to consider ordering effects of local interactions. In particular, it has been argued that, due to effects of Bose statistics, even a very weak interaction, such as that between axions, can result in a relaxation time comparable to the age of the universe, and, as a consequence, a Bose star may form out of an initially incoherent clump. Here we have shown that phase coherence can develop due to gravity alone, in the absence of any additional interaction.
The author thanks F. Finelli and I. Tkachev for discussions. This work was supported in part by the U.S. Department of Energy under Grant DE-FG02-91ER40681 (Task B).
## More on the collisionless instability
In this appendix we will show that for a wide class of initial distribution functions the collisionless instability (such as found in the Vlasov approximation) disappears at sufficiently large wavenumbers. Whether the instability is present or absent is determined from the dispersion relation (11), which we rewrite here as
$$\gamma (\omega ,k)=0,$$
(32)
where $`\gamma `$ is the “gravitational permittivity”:
$$\gamma (\omega ,k)=1+\frac{4\pi Gm^3}{k^2}\int _{-\infty }^{\infty }dp_x\frac{g^{\prime }(p_x)}{p_x-m\omega /k-i0}.$$
(33)
We have oriented coordinate axes so that $`\text{k}=(k,0,0)`$, with $`k>0`$, and defined a distribution function with respect to $`p_x`$:
$$g(p_x)=\int dp_y\,dp_z\,f_0(p_x,p_y,p_z).$$
(34)
In an isotropic medium, the form of $`g(p_x)`$ would not depend on the direction of $`x`$ (that is the direction of k), but we do not assume isotropy here.
By its physical meaning, $`g(p_x)`$ is nonnegative. We assume that for any direction of k the function $`g(p_x)`$ satisfies the following conditions: (i) $`g(p_x)`$ is smooth (infinitely differentiable) and decreases with $`|p_x|`$ rapidly enough for the integral in (33) to be convergent for any $`\omega `$ with $`\mathrm{Im}\omega >0`$; (ii) $`g(p_x)`$ is even; (iii) $`g(p_x)`$ is monotonically decreasing for all $`p_x>0`$. From these conditions, two more follow: (iv) $`g^{\prime }(0)=0`$; and (v) $`g^{\prime \prime }(0)<0`$. Primes denote derivatives with respect to $`p_x`$.
Eq. (33) defines an analytic function in the upper half-plane of complex $`\omega `$; in our definition, the upper half-plane does not include the real axis. To obtain $`\gamma (\omega ,k)`$ in the lower half-plane, we have to analytically continue it there from the upper half-plane. (Using eq. (33) directly in the lower half-plane would give another, unphysical sheet of $`\gamma (\omega ,k)`$.)
Because $`g(p_x)`$ is even, $`g^{\prime }(p_x)`$ is odd. From this, it follows that in the upper half-plane $`\gamma (\omega ,k)`$ can have zeroes only on the imaginary axis. For $`\omega =i\omega _i`$ with $`\omega _i>0`$, $`\gamma `$ takes the form
$$\gamma (i\omega _i,k)=1+4\pi Gm^3\int _{-\infty }^{\infty }dp_x\frac{p_xg^{\prime }(p_x)}{k^2p_x^2+m^2\omega _i^2}$$
(35)
(in particular, it is purely real). The conditions on $`g(p_x)`$ guarantee that $`p_xg^{\prime }(p_x)<0`$ for any nonzero $`p_x`$. That means that the integral in (35) is negative, so that at any fixed $`k>0`$, $`\gamma (i\omega _i,k)`$ is a monotonically increasing function of $`\omega _i`$. Consequently, at a given $`k`$ it has at most one zero in the upper half-plane of $`\omega `$. This zero, when it occurs, signals an instability of the initial state with respect to fluctuations with that $`k`$. For example, such a zero always exists for $`k\to 0`$; its position, $`\omega _i(0)`$, is given by eq. (13).
Now let us see what happens as $`k`$ increases. Zeroes of $`\gamma (\omega ,k)`$, i.e. roots of eq. (32), define a dispersion law $`\omega (k)`$. As long as, for a given $`k`$, the zero is still in the upper half-plane, we can continue to use eq. (35). First, it follows from (35) that $`\omega _i(0)`$ is an upper bound on $`\omega _i(k)`$. So, as we change $`k`$, a root cannot appear from infinity. Second, differentiating (32) with respect to $`k`$ and using (35) again, we find that, when $`g(p_x)`$ obeys the stated conditions, $`d\omega _i(k)/dk`$ is necessarily negative. So, as $`k`$ increases, the root moves down towards the real axis. It reaches the real axis at the value $`k=k_{\ast }`$ determined by setting $`\omega _i=0`$ in (35):
$$k_{\ast }^2=4\pi Gm^3\left|\int _{-\infty }^{\infty }dp_x\,g^{\prime }(p_x)/p_x\right|.$$
(36)
(Note that there is no divergence at $`p_x=0`$ here, because $`g^{\prime }(0)=0`$; see condition (iv).) We have already established that any root of (32) in the upper half-plane is on the imaginary axis, and that such a root cannot appear from infinity. As a consequence, the point $`\omega =0`$ is the only point where the root of (32) can leave or enter the upper half-plane of $`\omega `$. Because the corresponding value of $`k`$ is unique, the root will not reappear in the upper half-plane at any $`k>k_{\ast }`$. We conclude that the collisionless instability is absent for all $`k\ge k_{\ast }`$.
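These statements can be checked numerically for a concrete distribution. The sketch below uses a Gaussian $`g(p_x)`$, which satisfies conditions (i)-(iii), in units where $`4\pi Gm^3=1`$ and $`m=1`$ with an invented width $`\sigma `$; it evaluates the critical wavenumber of eq. (36) and verifies that the root of eq. (35) exists below it and disappears above it:

```python
import numpy as np

# Numerical check of the appendix result for a Gaussian g(p), which obeys
# conditions (i)-(iii). Units are chosen so that 4*pi*G*m^3 = 1 and m = 1;
# sigma = 1 is an arbitrary illustrative width. For this g, g'(p)/p =
# -g(p)/sigma^2, so the integral in eq. (36) is 1/sigma^2 analytically.

sigma = 1.0
p = np.linspace(-12.0, 12.0, 48001)           # grid wide enough that g ~ 0
g = np.exp(-p**2 / (2.0 * sigma**2)) / (np.sqrt(2.0 * np.pi) * sigma)

def integrate(y):
    """Trapezoidal rule on the fixed p grid."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(p)) / 2.0)

k_crit_sq = abs(integrate(-g / sigma**2))     # eq. (36) in these units

def gamma_imag_axis(omega_i, k):
    """Eq. (35); for the Gaussian, p*g'(p) = -p^2 g / sigma^2."""
    return 1.0 + integrate(-p**2 * g / sigma**2 / (k**2 * p**2 + omega_i**2))

k_crit = np.sqrt(k_crit_sq)
print(abs(k_crit_sq - 1.0 / sigma**2) < 1e-6)      # matches analytic value
print(gamma_imag_axis(1e-6, 0.8 * k_crit) < 0.0)   # sign change: root exists
print(gamma_imag_axis(1e-6, 1.2 * k_crit) > 0.0)   # no sign change: stable
```

Because $`\gamma (i\omega _i,k)\to 1`$ as $`\omega _i\to \infty `$, a negative value at small $`\omega _i`$ guarantees a root on the imaginary axis, while a positive value (monotonicity) rules one out.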
A detailed picture of how the instability disappears can be obtained by expanding the original expression for $`\gamma `$, eq. (33), in an asymptotic series about $`\omega =0`$. The $`i0`$ prescription in (33) will ensure that we are expanding on the correct sheet of $`\gamma (\omega ,k)`$:
$$\gamma (\omega ,k)=1+\frac{4\pi Gm^3}{k^2}\int _{-\infty }^{\infty }dp_x\frac{g^{\prime }(p_x)}{p_x-i0}\left(1+\frac{m\omega /k}{p_x-i0}+O(\omega ^2)\right).$$
(37)
For a smooth even $`g(p_x)`$, this evaluates to
$$\gamma (\omega ,k)=1-\frac{k_{\ast }^2}{k^2}\left(1+\frac{i\pi \omega mg^{\prime \prime }(0)/k}{\int _{-\infty }^{\infty }dp_x\,g^{\prime }(p_x)/p_x}+O(\omega ^2)\right).$$
(38)
Writing $`k=k_{\ast }+\mathrm{\Delta }k`$ and expanding in small $`\mathrm{\Delta }k`$, we find that the root of the dispersion equation has the form
$$\omega (\mathrm{\Delta }k)=-i\frac{2\mathrm{\Delta }k}{\pi m}\left|\frac{\int _{-\infty }^{\infty }dp_x\,g^{\prime }(p_x)/p_x}{g^{\prime \prime }(0)}\right|+O((\mathrm{\Delta }k)^2).$$
(39)
This expression shows explicitly that, as $`k`$ increases past $`k_{\ast }`$, the root moves from the upper half-plane into the lower half-plane.
The structure of higher-order terms in (38) is as follows: the coefficients of even powers of $`\omega `$ are all real, while the coefficients of odd powers are all imaginary. So, to any finite order in $`\mathrm{\Delta }k`$, the root of the dispersion relation will remain purely imaginary even after it moves into the lower half-plane, and the mode corresponding to that $`\mathrm{\Delta }k`$ will be overdamped (non-oscillatory). This conclusion may be obviated by terms that are “nonperturbative” in $`\mathrm{\Delta }k`$, i.e. are not seen in the asymptotic expansion (37). We also stress that it applies only to fluctuations about a homogeneous state, not to a state that already contains clumps.
# The Lithium-Rotation Correlation in the Pleiades Revisited
## 1 Introduction
Standard stellar models (e.g., Bahcall & Ulrich (1988)) make predictions about the surface abundances of elements in stars. However, there are indications that such models are incomplete. A case in point is the surface abundance of the element lithium (Li) in low mass stars, which is observed to decrease with time.
The solar meteoritic value for the Li abundance is 3.31$`\pm `$0.04 (Anders & Grevesse (1989)). A study of Li abundances in young, pre-main-sequence (PMS) T Tauri Stars (TTS) suggests a value of log N(Li) = 3.2$`\pm `$0.3 (Magazzu, Rebolo, & Pavlenko (1992)), consistent with the meteoritic value. While this is one indicator of the initial Li abundance, TTS abundance determinations are beset by complications due to their youth, such as uncertain $`T_{\mathrm{eff}}`$ estimates and the presence of a circumstellar accretion disk. The study of stars in open clusters of different ages, like $`\alpha `$ Persei (50 Myr) and the Pleiades (70–100 Myr), shows that there is a nearly uniform Li abundance of 3.2 for high mass stars ($`T_{\mathrm{eff}}\gtrsim 7000`$ K).
It is known that surface Li depletion takes place during the PMS evolution of low mass stars due to Li burning via ($`p`$, $`\alpha `$) reactions at low temperatures of $`T\sim 2.6\times 10^6`$ K. Surface depletion can occur in standard models through convective mixing if the base of the convection zone is hot enough to burn Li (Bodenheimer (1965), Pinsonneault (1997)). Because PMS stars have deep convection zones, they burn Li during the PMS. As the depth of the convection zone is a function of mass (increasing with lower mass), Li is depleted on the main sequence only in lower mass stars ($`\lesssim 0.9M_{\mathrm{\odot }}`$). However, the Pleiades evinces a large dispersion in surface Li abundance at a given color for $`T_{\mathrm{eff}}\lesssim 5500`$ K (e.g., Soderblom et al. 1993b ). Standard stellar models are unable to reproduce this dispersion. Furthermore, open cluster observations indicate some depletion is observed on the main sequence as well, which is in conflict with standard models.
Because these models cannot fully explain the observed depletion patterns, additional mixing mechanisms seem necessary. Rotation provides one driving mechanism for such non-convective mixing, through meridional circulation (Tassoul (1978), Zahn (1992)) and instabilities caused by differential rotation (Zahn (1983)). Hence, rotation in stars has received much scrutiny as a possible agent of Li depletion and of the observed scatter in open cluster Li abundances at a given mass. Models which include rotational mixing (Pinsonneault, Kawaler, & Demarque (1990)) are able to predict the dispersion seen in older systems, but not at young ages like that of the Pleiades (Chaboyer, Demarque, & Pinsonneault (1995)). The study of Li abundances is a rich and vast field, and there have been several efforts to study the correlation of surface Li abundances with rotation using stars in open clusters. Here, we concentrate on the connection between surface Li abundance and rotation using data in the young Pleiades cluster.
Because the Pleiades Li scatter is such a difficult obstacle in our understanding of early stellar evolution, a historical summary seems in order. Butler et al. (1987) studied a sample of 11 K-stars in the Pleiades and determined that four rapid rotators had higher Li abundances than four slow rotators. They believed this to be consistent with the evolutionary picture that, on arrival on the main sequence, stars had high rotation rates and high Li abundances (i.e., they arrived on the main sequence before there was time for rotational braking or Li depletion). As a star spun down, its Li abundance decreased as well. They therefore concluded that the faster rotators were younger than the slower ones, and hence less depleted.
A study of the distribution of rotational velocities of low-mass stars in the Pleiades by Stauffer & Hartmann (1987) revealed that there was a wide range of rotation velocities in the Pleiades K and M dwarfs. They showed that the distribution of rotation velocities in the Pleiades could be reproduced quite well by invoking angular momentum loss, without having to resort to a large age spread, which would also be in conflict with the narrow main sequence seen among the low-mass Pleiades stars.
Soderblom et al. (1993b) carried out an extensive study of Li abundances in the Pleiades. They considered several possible explanations for the dispersion in the observed abundances, including observational errors and the effect of starspots. They concluded that the spread in Li abundances was real and not an artifact of other physical conditions. They found that the Li abundance was well correlated with both rotation and chromospheric activity, and speculated that rapid rotation was somehow able to preserve Li in stars. While they found some low $`v\mathrm{sin}i`$ systems with high Li abundances, it was possible that these stars were faster rotators simply seen at low inclination.
Balachandran et al. (1988) studied a sample of stars in the younger $`\alpha `$ Persei cluster (50 Myr) and concurred with the picture of Li-poor stars as slow rotators. However, a comparison of the Pleiades to $`\alpha `$ Per by Soderblom et al. (1993b) showed that while most stars had similar abundances, a significant number of stars in $`\alpha `$ Per had abundances that were less than that in the Pleiades by 1 dex or more. This was difficult to understand until Balachandran et al. (1996) published a corrected list of Li abundances that culled all non-members from the sample, bringing consistency to the Pleiades and $`\alpha `$ Per abundances.
Garcia Lopez et al. (1991a,b) added seven stars to the Butler et al. sample in the range $`4500\le T_{\mathrm{eff}}\le 5500`$ K and asserted a clear connection between Li abundance and $`v\mathrm{sin}i`$. They also found that the correlation breaks down for temperatures cooler than 4500 K. In a subsequent paper in 1994, they enlarged their sample and further studied the correlation, concluding that their earlier assertion was correct – there were no rapid rotators with low Li abundances, and there was a clear relationship between log N(Li) and $`v\mathrm{sin}i`$. They did note three stars (H II 320, 380 and 1124) having low $`v\mathrm{sin}i`$ values and Li abundances comparable to those of the rapid rotators as counterexamples, but speculated these objects were rapid rotators seen at low inclination angles. It is to be noted that, when determining mean abundances for their rapid and slow rotator populations, they included the 3 stars with low $`v\mathrm{sin}i`$ and high Li abundances in their rapid rotator sample. While this does not change the qualitative result they obtained, it does affect the magnitude of the difference in abundance between the two populations. It also demonstrates that there is a range of abundances for slow rotators.
Jones et al. (1996a) derived Li abundances and rotation velocities for 15 late-K Pleiades dwarfs, and also found that the correlation between Li abundance and rapid rotation breaks down for cooler stars ($`T_{\mathrm{eff}}\lesssim 4400`$ K). Jones et al. (1997) determined rotational velocities and Li abundances in the 250 Myr old cluster M34, intermediate in age between the young Pleiades (70–100 Myr) and the Hyades (500 Myr). They concluded that the Li depletion and rotation velocities were in between the Pleiades and Hyades values, and that the pattern seen in these clusters suggested an evolutionary sequence for angular momentum loss and Li depletion.
One could speculate that some of the Li dispersion in the Pleiades and $`\alpha `$ Per may be due to NLTE effects and unknown effects of stellar activity on the Li I line formation (Houdebine & Doyle (1995); Russell (1996); but see Soderblom et al. 1993b). The structural effects of rotation might also be responsible for the Li depletion pattern. Martin & Claret (1996) included this ingredient in their models for masses of 0.7 and 0.8 $`M_{\mathrm{\odot }}`$ and were able to produce enhanced Li abundances for stars with high initial angular momenta as a result of less effective PMS Li destruction in rapid rotators (i.e., their models imply initially rapid rotators will have high Li abundances relative to the other stars at the same $`T_{\mathrm{eff}}`$ at young ages). Angular momentum loss as well as rotationally-induced mixing could affect these models.
The questions we consider here are whether there truly is a correlation between Li abundance and rotation rate in the Pleiades, what the nature of any such correlation is, and, if there is none, what might explain the abundance scatter. We start with a careful sample selection for our analysis, and examine various possible causes that might contribute to the dispersion (including errors in abundance determination). We then proceed to an analysis of the Li-rotation correlation and explore other possible correlations that might be masquerading as a Li-rotation correlation.
## 2 Pleiades Lithium Abundances
### 2.1 Sample Selection and Definition
Lithium abundances were derived from the datasets in the studies of Soderblom et al. (1993a; S93a) and Jones et al. (1996). The two studies were merged to form our starting sample, with the latter data preferred in cases of overlap. Secondary stellar companions can affect the photometric colors from which $`T_{\mathrm{eff}}`$ values are derived, measured line strengths, and spectroscopically deduced activity levels. Thus, in order to look at the intrinsic Pleiades Li abundance dispersion unrelated (directly or indirectly) to the presence of a stellar companion, binary systems were excised from our sample. Cluster and interloping field binaries identified by Mermilliod et al. (1992) and Bouvier et al. (1997) were removed from the starting sample. Two spectroscopic binaries not in these lists, but identified as such by S93a, were also removed.
The color-magnitude diagram of this refined sample was then inspected to photometrically identify binaries using the dereddened $`BVI`$ photometry described by Pinsonneault et al. (1998). We found H II 739 to be an obviously overluminous (or overly red) outlier in the $`V`$ vs. $`B-V`$, $`V`$ vs. $`V-I`$, and $`I`$ vs. $`V-I`$ diagrams, and eliminated it from the refined sample. Finally, all stars with upper limits on the $`\lambda 6707`$ Li I line’s equivalent width were eliminated. These upper limits, as censored data, complicate the ensuing statistical analysis. These stars are also the very hottest and very coolest in the sample. Their photometrically-inferred $`T_{\mathrm{eff}}`$ values and model atmospheres may be slightly more uncertain than those of the other objects in the sample. Their elimination simplifies the analysis and reduces possible additional sources of uncertainty.
This final sample of 76 Pleiades stars is listed in the first column of Table 1. The extinction-corrected $`V`$ magnitude and reddening-corrected $`(B-V)`$ and $`(V-I)`$ colors are given in the second, third, and fourth columns. The color-magnitude diagram of these stars evinces a tight main sequence, and is shown in Figure 1 (open circles) with a 100 Myr isochrone described in Pinsonneault et al. (1998), assuming a distance modulus of 5.63 (Pinsonneault et al. 1998). The only possibly discrepant outliers remaining are: a) H II 686, which appears overluminous in the $`V`$ vs. $`B-V`$ plane, but not in $`V-I`$; b) H II 676, which appears underluminous (or too blue) in the $`V-I`$ plane and perhaps in $`B-V`$ also; and c) H II 2034, which appears underluminous (or too blue) in the $`B-V`$ plane, but not in $`V-I`$. There is no convincing evidence that these slight discrepancies are related to binarity. Rather, they may be due to relatively large photometric errors in one passband or to physical effects (e.g., increased red flux from spots) unrelated to binarity. Stars rejected as binaries are plotted as filled triangles; their general propensity to reside above the main sequence is evident.
### 2.2 Stellar $`T_{\mathrm{eff}}`$ and Activity Measures
S93a and Jones et al. (1996) provide photometric $`T_{\mathrm{eff}}`$ estimates for all of the stars in Table 1. We re-examine these for comparison and because of concern that chromospheric activity or starspots might affect the colors of young stars. We calculated $`T_{\mathrm{eff}}`$ using our $`(B-V)_\mathrm{o}`$ values and the relation from Soderblom et al. (1993b, equation 3): $`T_{\mathrm{eff}}=1808(B-V)_\mathrm{o}^2-6103(B-V)_\mathrm{o}+8899`$. Temperatures were also derived from our $`(V-I)_\mathrm{o}`$ colors using the relation from Randich et al. (1997): $`T_{\mathrm{eff}}=9900-8598(V-I)_\mathrm{o}+4246(V-I)_\mathrm{o}^2-755(V-I)_\mathrm{o}^3`$. Both of these relations are based on the data from Bessell (1979), and should provide self-consistent temperatures given self-consistent photospheric colors. Columns 5, 6 and 9 give the $`T_{\mathrm{eff}}`$ values of S93a, and those derived here from $`(B-V)`$ and $`(V-I)`$.
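Transcribed as code (the example colors are invented, roughly solar):

```python
# The two color-temperature relations quoted above, as simple functions.
# Inputs are dereddened colors; both calibrations trace to Bessell (1979).

def teff_from_bv(bv):
    """Soderblom et al. (1993b, eq. 3): T_eff from (B-V)_0."""
    return 1808.0 * bv**2 - 6103.0 * bv + 8899.0

def teff_from_vi(vi):
    """Randich et al. (1997): T_eff from (V-I)_0."""
    return 9900.0 - 8598.0 * vi + 4246.0 * vi**2 - 755.0 * vi**3

# Roughly solar colors, for illustration:
print(round(teff_from_bv(0.65)))   # 5696
print(round(teff_from_vi(0.75)))   # 5521
```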
We adopt the H$`\alpha `$\- and Ca II infrared triplet-based chromospheric emission measurements from S93a as stellar activity indicators. These are the ratios of the flux (relative to an inactive star of similar color) in the H$`\alpha `$ and Ca II lines to the total stellar bolometric flux. Given canonical views of a relation between stellar mass and chromospheric emission on the main sequence, it is also of interest to measure the residual H$`\alpha `$ and Ca II flux ratios. That is, we wish to detrend the general relation between stellar mass and activity such that activity differences unrelated to large-scale mass differences can be quantified. This was done by fitting the H$`\alpha `$ and Ca II flux ratios as a function of $`(V-I)`$ color temperature with a linear relation (quantitative comparison of the resulting $`\chi ^2`$ values indicated that fits with higher-order functions did not yield statistically improved descriptions of the flux ratio-color relations), and subtracting this fitted flux ratio (computed at a given $`V-I`$) from the measured flux ratio of each star. The relation for the fitted H$`\alpha `$ flux ratio (used below) was found to be $`\mathrm{log}R_{\mathrm{H}\alpha ,\mathrm{fit}}=(-0.00044742\times T_{\mathrm{eff}}(V-I))-2.12515`$. The relation for the Ca II flux ratio was found to be $`\mathrm{log}R_{\mathrm{CaII},\mathrm{fit}}=(-0.00021017\times T_{\mathrm{eff}}(V-I))-3.50280`$.
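The detrending step itself is just a least-squares line fit followed by a subtraction; a sketch on synthetic stars, with an invented underlying trend of roughly the right size rather than the fitted coefficients:

```python
import numpy as np

# Sketch of the activity-detrending step: fit flux ratios against the
# (V-I) color temperature with a straight line, then subtract the fitted
# value to obtain a mass-independent residual for each star. The stars
# and the underlying trend below are synthetic placeholders.

rng = np.random.default_rng(0)
teff = rng.uniform(4500.0, 6000.0, 60)
log_r = -4.5e-4 * teff - 2.1 + rng.normal(0.0, 0.15, 60)  # invented trend

slope, intercept = np.polyfit(teff, log_r, 1)
residual = log_r - (slope * teff + intercept)   # detrended activity measure

# A least-squares line with an intercept leaves zero-mean residuals:
print(abs(residual.mean()) < 1e-8)
```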
We find strong evidence that our $`T_{\mathrm{eff}}`$ values (hence, assuming self-consistency of the color-$`T_{\mathrm{eff}}`$ relations, the photometric colors) are affected by activity level. Figure 2 shows the difference between the $`(B-V)`$\- and $`(V-I)`$-based $`T_{\mathrm{eff}}`$ values versus the H$`\alpha `$ flux ratios (top panel) and the mass-independent residual H$`\alpha `$ flux ratios (bottom panel). A relation is seen in both panels, such that the lowest $`T_{\mathrm{eff}}`$ differences are seen predominantly for the lowest flux ratios, while the largest $`T_{\mathrm{eff}}`$ differences are seen predominantly for stars having the largest flux ratios. The ordinary linear correlation coefficients are significant above the 99.9% confidence level in both panels.
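The quoted significance is that of an ordinary (Pearson) correlation coefficient; a sketch of such a test on synthetic stand-in data, using the standard $`t`$-statistic with $`n-2`$ degrees of freedom:

```python
import numpy as np

# Sketch of the significance test behind Fig. 2: a Pearson correlation
# between Teff differences and H-alpha flux ratios, with significance
# from the t-statistic. The data here are synthetic stand-ins.

rng = np.random.default_rng(1)
n = 70
flux_ratio = rng.uniform(-5.0, -4.0, n)                    # log R_Halpha
dteff = 150.0 * (flux_ratio + 5.0) + rng.normal(0, 20, n)  # correlated dTeff

r = np.corrcoef(flux_ratio, dteff)[0, 1]
t = r * np.sqrt((n - 2) / (1 - r**2))    # t-statistic with n-2 dof
print(r > 0.6 and t > 3.5)               # well beyond the 99.9% threshold
```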
The binary stars (filled triangles) behave similarly to the single stars in both panels; on average, though, the binaries exhibit larger $`T_{\mathrm{eff}}`$ residuals than the single stars. This systematic offset likely reflects the additional influence of fainter (hence cooler and redder) companions on the photometric colors. If this interpretation is correct, it could suggest that the handful of inactive single stars with significant $`T_{\mathrm{eff}}`$ residuals in the upper left portion of both panels are unrecognized binaries.
Significant differences between the $`(B-V)`$\- and $`(V-I)`$-based $`T_{\mathrm{eff}}`$ estimates, the slight propensity for $`(B-V)`$ to yield larger temperatures, and the association of these properties with stellar activity seem to be common properties of young stars noted and discussed by others (e.g., Garcia Lopez et al. 1994; Randich et al. 1997; King 1998; Soderblom et al. 1999). Explanations for these observed properties in young stars are at least twofold: a) increased $`B`$-band flux due to boundary layer emission associated with a circumstellar disk (presumably not applicable for our near-ZAMS Pleiads), and b) increased $`I`$-band flux due to the presence of cool spots. It is straightforward to associate increased prevalence and surface coverage of spots with increasing activity, and H$`\alpha `$ emission (used here to quantify activity) has been associated with accretion of circumstellar material in young stars. Given that the Pleiades age ($`\sim 100`$ Myr) is an order of magnitude larger than inferred disk lifetimes for solar-type stars (Skrutskie et al. 1990), spots are the more likely cause of the temperature difference-activity relation in our Pleiades sample.
In their recent study of the effects of activity on Pleiades Li abundances, Stuik et al. (1997) find that activity (specifically the presence of spots and plages) may significantly alter photospheric colors. Indeed, they suggest that the resulting changes in color may be a more dominant contributor to the Pleiades Li spread than line strength differences. Additionally, they find that such color variations are both surprising and complex. Their empirical solar-based activity models indicated that both spots and plages lead to increased $`(B-V)`$ colors; in contrast, their “best-effort” theoretical stellar models indicate a decrease in $`(B-V)`$. Sorting out which (if either) set of models is appropriate for specific Pleiads (in addition to other empirical details such as specific spot/plage coverage and ratio) might be further illuminating. Because the direction of changes in $`(V-I)`$ also flips in their models, Stuik et al. (1997) note that spot/plage-related changes in color may not be ideally identified in two-color plots (e.g., Soderblom et al. 1993).
### 2.3 Lithium Abundance Determinations and Detrending
Li abundances for all our Pleiads were determined from the measured $`\lambda 6707`$ line strengths and our preferred $`T_{\mathrm{eff}}`$ values. A blending complex lies some 0.4 Å blueward of the Li doublet. In our stars, the typical contribution of these blending features is $`\sim 10`$ mÅ, which is significantly smaller than the typical Li line strength of $`\sim 100`$ mÅ. The blending contribution was subtracted following the approach of S93a, who parameterized the contaminating line strength as a function of $`(B-V)`$ color. (Deblending corrections were not applied to any stars taken from Jones et al. (1996), following their claim that instrumental resolution was sufficient to separate the Li line and blending complex. While it is not clear to us that this is true, given that some of their objects have appreciable rotation (see their Figure 1), it does not affect the present results inasmuch as the Jones et al. rapid rotators have Li line strengths significantly larger than those expected of the blending complex.) Here, we recast this parameterization as a function of $`T_{\mathrm{eff}}`$ so that differences in our $`(B-V)`$\- and $`(V-I)`$-based temperatures were consistently accounted for in the analysis.
Given the $`T_{\mathrm{eff}}`$ values and the corrected Li line strengths, abundances were calculated using Table 2 from S93a. This was done by fitting a surface map of the equivalent width-temperature-abundance grid of S93a using high order polynomials. The internal interpolation accuracy is generally a few thousandths of a dex<sup>3</sup><sup>3</sup>3The two hottest single stars have $`T_{\mathrm{eff}}`$ values significantly outside the curve of growth grids provided by S93a. Extrapolation to these temperatures with high order polynomials leads to erroneously low abundances by a few tenths of a dex. To the extent that we are mainly interested in the cooler Pleiads and that we are only interested in the differential star-to-star Li abundances (i.e., large scale abundance morphology with $`T_{\mathrm{eff}}`$ is removed later), these known errors are unimportant for the present analysis. In the case of these two stars (H II 133 and 470), we simply caution those who would use our absolute abundances, and also note that the few much smaller extrapolations to lower $`T_{\mathrm{eff}}`$ outside the S93a grid are not believed to be affected by any substantial amount. Columns 8 and 11 of Table 1 give the derived Li abundances<sup>4</sup><sup>4</sup>4by number, relative to hydrogen, on the usual scale with log $`N`$(H)$`=12.`$ for our $`(BV)`$-based $`T_{\mathrm{eff}}`$, and for our $`(VI)`$-based $`T_{\mathrm{eff}}`$.
At the Pleiades age, PMS Li burning has significantly depleted the initial photospheric Li content of many of our stars. Moreover, this PMS depletion is a sensitive function of mass (or $`T_{\mathrm{eff}}`$) with less massive stars having depleted more Li due to deeper convection zones and longer PMS evolutionary timescales. In examining star-to-star Li abundance differences connected with parameters such as rotation or activity, we must remove this general large-scale trend in the abundance vs. $`T_{\mathrm{eff}}`$ plane.
The procedure is illustrated in Figures 3 and 4, which show the Pleiades Li abundances versus our $`(BV)`$- and $`(VI)`$-based $`T_{\mathrm{eff}}`$. The familiar and large (3 orders of magnitude) abundance depletion over 2000 K of $`T_{\mathrm{eff}}`$ is seen in Figure 3. The mean trend is shown by the dashed line, which is a fourth order Legendre polynomial fitted to the single star data after rejecting $`\pm 3\sigma `$ outliers. Fourth order fits were also conducted for the data based on the S93a temperatures and our $`(BV)`$-based values. These fits provide a mean fiducial Li abundance at any given $`T_{\mathrm{eff}}`$ to which observed abundances (calculated assuming the same source of $`T_{\mathrm{eff}}`$) can be compared to infer and measure a relative Li “enhancement” or “depletion” for each star as shown in Figure 4. For $`(VI)`$-based temperatures, the approximation to the fit shown in Figure 3 is given by: $`\mathrm{log}N(\mathrm{Li})_{\mathrm{fit}}=-10.8602+(2.9785\times 10^{-3}\times T)+(1.1736\times 10^{-7}\times T^2)-(3.8181\times 10^{-11}\times T^3)`$. For $`(BV)`$-based temperatures, the approximation to the fit shown in Figure 3 is given by: $`\mathrm{log}N(\mathrm{Li})_{\mathrm{fit}}=-10.9959+(2.8360\times 10^{-3}\times T)+(1.7354\times 10^{-7}\times T^2)-(4.2744\times 10^{-11}\times T^3)`$ margin: Fig. 3 margin: Fig. 4
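As a concrete sketch (function names are ours, not the paper's), the cubic trend approximations can be evaluated directly; the coefficient signs and negative exponents are taken from the reading of the quoted fits that reproduces the roughly 3 dex depletion between 6000 K and 4000 K seen in Figure 3.

```python
def li_fit(teff, color="VI"):
    """Mean Pleiades Li-abundance trend versus Teff (K), using the cubic
    approximations to the fourth-order Legendre fits quoted in the text."""
    if color == "VI":
        c0, c1, c2, c3 = -10.8602, 2.9785e-3, 1.1736e-7, -3.8181e-11
    else:  # (B-V)-based temperatures
        c0, c1, c2, c3 = -10.9959, 2.8360e-3, 1.7354e-7, -4.2744e-11
    return c0 + c1 * teff + c2 * teff**2 + c3 * teff**3


def differential_li(logn_obs, teff, color="VI"):
    """Relative Li 'enhancement' (+) or 'depletion' (-) of a star with
    observed abundance logn_obs, measured against the fitted mean trend."""
    return logn_obs - li_fit(teff, color)
```

For example, `li_fit(6000.0) - li_fit(4000.0)` recovers roughly 2.5 dex of depletion over 2000 K, consistent with the trend in Figure 3.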
### 2.4 Errors
Uncertainties in the Li abundances were estimated from those in $`T_{\mathrm{eff}}`$ and equivalent width. Here, we are only interested in the internal errors which affect the star-to-star Li abundances. A measure of the internal uncertainties in the $`T_{\mathrm{eff}}`$ estimates is provided by the estimates from the $`(BV)`$ and $`(VI)`$ colors. For the single stars, the difference in the two color-temperatures exhibits a per star standard deviation of 108 K. Assuming equal contributions from both $`(BV)`$ and $`(VI)`$, this suggests an internal error of $`\pm 76`$ K (i.e., 108/$`\sqrt{2}`$ K) in the $`T_{\mathrm{eff}}`$ of any one Pleiad derived from any one color. This uncertainty was translated to a Li abundance error by re-deriving abundances with $`T_{\mathrm{eff}}`$ departures of this size. The adoption of identical errors in the $`(BV)`$- and $`(VI)`$-based $`T_{\mathrm{eff}}`$ values is a simplifying assumption (though one likely true within a few tens of K). Inasmuch as our conclusions are the same using either the $`(BV)`$ or $`(VI)`$ colors, it is not a critical one for this work.
The other significant source of uncertainty is in the Li line measurements. For S93a line strengths (the majority of our sample), uncertainties come from their own quality measures: a ($`\pm 12`$ mÅ), b ($`\pm 18`$ mÅ), c ($`\pm 25`$ mÅ), and d ($`\pm 40`$ mÅ). Jones et al. (1996) state that their uncertainties range from 5 to 20 mÅ and depend largely on projected rotational velocity. Assuming this range and the stars’ $`v\mathrm{sin}i`$ values, we have adopted the reasonable values shown in column 12 of Table 1. Given S93a’s note that the equivalent widths of Butler et al. (1987) may have to be regarded with caution, and the typical uncertainties expected from Poisson noise given their S/N and resolution, we have assigned an uncertainty of $`\pm 30`$ mÅ to these line strengths. Based on the S/N, resolution, and $`v\mathrm{sin}i`$ values of the observations from Boesgaard et al. (1988), we have adopted a conservative uncertainty of $`\pm 5`$ mÅ for their data. In a similar fashion, we assigned uncertainties of $`\pm 25`$ mÅ to the equivalent widths from Pilachowski et al. (1987). The line strength uncertainties were translated to Li uncertainties by re-deriving abundances with the adopted equivalent width departures.
Final Li abundance uncertainties, shown in Figures 3 and 4, are calculated by summing the two errors in quadrature, and listed in columns 13 and 14 of Table 1. We emphasize that for the purpose of looking at the star-to-star Li abundance differences in cool Pleiads, the effects of $`T_{\mathrm{eff}}`$ errors are minimized. This is because the movement of a star in the $`T_{\mathrm{eff}}`$-Li plane due to $`T_{\mathrm{eff}}`$ errors is very nearly along the cool star depletion trend for $`T_{\mathrm{eff}}5800`$ K. To take into account this correlation in looking at the differential Li abundances (i.e., the actual values versus an expected value from a fitted trend to the data), the abundance errors due to departures in $`T_{\mathrm{eff}}`$ were combined with the slope of the fitted Li versus $`T_{\mathrm{eff}}`$ trend at the $`T_{\mathrm{eff}}`$ of each star. The total uncertainties in the differential Li abundances are given in the final two columns of Table 1.
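The error combination described above can be sketched as follows; the function and the illustrative slope values are ours, not from the paper. The key point is that, for the differential abundances, the $`T_{\mathrm{eff}}`$ term enters only through the difference between the star's abundance sensitivity and the local slope of the fitted trend, which is why $`T_{\mathrm{eff}}`$ errors are minimized for cool stars.

```python
import math


def diff_li_uncertainty(sigma_ew_dex, dlogn_dteff_star, dlogn_dteff_trend, sigma_teff):
    """Total uncertainty in a *differential* Li abundance (dex).

    sigma_ew_dex:      abundance error from the equivalent-width uncertainty
    dlogn_dteff_star:  sensitivity of the derived abundance to Teff (dex/K)
    dlogn_dteff_trend: local slope of the fitted Li-Teff trend (dex/K)
    sigma_teff:        adopted internal Teff error (K), e.g. ~76 K

    Because a Teff error moves both the derived abundance and the fiducial
    trend value, only the net slope difference shifts the differential
    abundance; the two error terms then add in quadrature.
    """
    teff_term = (dlogn_dteff_star - dlogn_dteff_trend) * sigma_teff
    return math.hypot(sigma_ew_dex, teff_term)
```

With hypothetical slopes of 1.5e-3 and 1.3e-3 dex/K and a 76 K error, the $`T_{\mathrm{eff}}`$ contribution is only about 0.015 dex and the total is dominated by the line-strength term.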
#### 2.4.1 Li Abundance Scatter
Large scatter in the star-to-star Li abundances is apparent in Figures 3 and 4. Comparison of the observed scatter with that expected from the estimated uncertainties indicates that the spread is statistically significant. The presence of real global scatter was considered by comparing the variance of the differential Li abundances with that based on the uncertainties given in Table 1. The sizable reduced chi-squared statistic ($`\chi _\nu ^2=12.78`$, $`\nu =72`$) indicates that the probability of the observed variance ($`s(\mathrm{Li})^2\sim 0.13`$ dex<sup>2</sup>) occurring by chance is infinitesimal.
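The variance comparison amounts to a reduced chi-squared test of the differential abundances against their quoted uncertainties. A minimal sketch (function name ours):

```python
def reduced_chi_squared(diff_abundances, uncertainties):
    """chi^2_nu of differential Li abundances under the null hypothesis that
    all scatter comes from the quoted measurement uncertainties.  Values much
    larger than 1 indicate real star-to-star scatter (the text quotes
    chi^2_nu = 12.78 for nu = 72)."""
    nu = len(diff_abundances) - 1
    chi_sq = sum((d / s) ** 2 for d, s in zip(diff_abundances, uncertainties))
    return chi_sq / nu
```

The chance probability is then the chi-squared survival function evaluated at $`\chi _\nu ^2\times \nu `$; for $`\chi _\nu ^2=12.78`$ with $`\nu =72`$ it is vanishingly small.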
Additional analysis was carried out by binning in $`T_{\mathrm{eff}}`$. For both the $`(BV)`$ and $`(VI)`$ based results, we broke the data up into 5 $`T_{\mathrm{eff}}`$ ranges following natural breaks in the estimated $`T_{\mathrm{eff}}`$ values which yielded comparable sample sizes (10-15 stars) in each bin. The results for both the $`(BV)`$ and $`(VI)`$ data are similar.
We find that stars in the hottest bin (bin ‘E’: 6172-6984 K and 6107-6928 K for $`BV`$ and $`VI`$) exhibit a variance that is larger than the expected value at only the 94% confidence level. The stars in the adjacent cooler bin (bin ‘D’: 5567-6048 K and 5521-6021 K for $`BV`$ and $`VI`$) exhibit a variance in the Li abundances significantly larger than expected from the uncertainties at the $`99.93`$% confidence level. The differential Li abundances in the remaining three cooler bins (bin ‘C’: 4899-5477 K and 4996-5452 K; bin ‘B’: 4507-4815 K and 4542-4746 K; bin ‘A’: 3955-4332 K and 3867-4343 K) all show observed variances significant at considerably higher confidence levels.
An important claim by Jones et al. (1996) in their study of Pleiades Li abundances is a progressive decline in the dispersion of the Li abundances as one proceeds from the late G dwarfs, to the early-to-mid K dwarfs, and finally to the later K dwarfs. We find, however, that quantitative analysis fails to provide firm support for such a conclusion. F-tests of the observed variances indicate that differences of the differential Li abundance dispersions in our cooler three bins are statistically indistinguishable. The differences between the bin B and bin A stars’ variances are significant at only the 71.5% and 78.0% confidence levels for the $`BV`$ and $`VI`$ datasets. Differences between the bin C and bin B stars’ variances are different at only the 72.7% and 75.0% confidence levels. It should be noted that these comparisons ignore the observed Li upper limits prevalent for the coolest (bin A) Pleiads. The stars with upper limits lie at the lower edge of the observed Li abundances (figure 4 of Jones et al.). Ignoring this censored data may lead to an underestimate of the true dispersion for the coolest Pleiads– making our conclusion of no significant difference in the magnitude of star-to-star abundance scatter for the late G to late K Pleiads a conservative one. Larger samples and improved upper limits (or detections) would clarify this important issue.
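The bin-to-bin comparisons above use F-tests on sample variances. The statistic is simply the ratio of unbiased variances (larger over smaller), referred to an F distribution with $`(n_a-1,n_b-1)`$ degrees of freedom; a stdlib sketch with our own naming:

```python
def sample_variance(x):
    """Unbiased sample variance."""
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / (len(x) - 1)


def f_statistic(sample_a, sample_b):
    """F statistic for a test of equal variances: larger variance over
    smaller.  Values near 1 (relative to the F distribution's critical value
    for the sample sizes) mean the variances are statistically
    indistinguishable, as found here for the three cool Teff bins."""
    va, vb = sample_variance(sample_a), sample_variance(sample_b)
    return max(va, vb) / min(va, vb)
```

In practice the resulting statistic would be converted to a confidence level with an F-distribution table or a statistics library.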
## 3 Nature of the Li-rotation correlation
### 3.1 Projected rotational velocity
Extant studies of Li-rotation correlations have employed $`v\mathrm{sin}i`$ measurements, which yield only a lower limit to the rotational velocity due to the unknown angle of inclination, $`i`$. In Figure 5, our $`VI`$-based absolute and differential Li abundances are plotted against the projected velocity measurement $`v\mathrm{sin}i`$. The top two panels (a and b) show the data for all $`T_{\mathrm{eff}}`$. The bottom two panels (c and d) show data with $`4500\leq T_{\mathrm{eff}}\leq 5500`$ K, which is the temperature range in which a clear connection between Li abundance and $`v\mathrm{sin}i`$ was asserted by Garcia Lopez et al. (1994). It is seen that while there is a range of abundances at the lower values of the rotation velocity, the rapid rotators ($`v\mathrm{sin}i\gtrsim 30`$ km s<sup>-1</sup>) do show a tendency to have higher Li abundances in the intermediate $`T_{\mathrm{eff}}`$ range (panel d). margin: Fig. 5
### 3.2 Rotational Period
The Pleiades now has many members with photometrically-determined rotation periods (Krishnamurthi et al. 1998), which are free of the ambiguity associated with inclination angle. Hence, it is now possible to consider the true nature of the correlation between rotation and Li abundance. For example, we find a slowly rotating star (H II 263, $`P=4.8`$ d) that has a high Li abundance. Furthermore, two of the three stars (H II 320 and 1124) in the Garcia Lopez et al. (1994) study with low $`v\mathrm{sin}i`$ and high Li abundances also have measured rotation periods of 4.58 d and 6.05 d, respectively. Thus, there are several cases where high Li abundance in stars with low $`v\mathrm{sin}i`$ is not due to inclination angle effects– Li overabundances are not solely restricted to rapid rotators. This is apparent in Figure 6, where the surface Li abundance is plotted against rotation period, P<sub>rot</sub>, instead of $`v\mathrm{sin}i`$. In particular, we draw attention to the large range in abundances seen at longer rotation periods ($`>`$4.0 days; panels b and d). Thus there exist at least a few Pleiads which are true slow rotators, but have high Li abundances. margin: Fig. 6
We next examined the proposal by Garcia Lopez et al. (1994) that there is a very clear relationship between rotation and log N(Li) for stars with M $`\approx `$ 0.7-0.9 M<sub>⊙</sub>. Figure 7 shows the $`VI`$-based Li abundances versus mass for Pleiads with photometrically-measured rotation rates. The symbol size is proportional to the rotation period. When rotational periods are considered rather than $`v\mathrm{sin}i`$ measurements, a true range of Li abundances with rotation is seen in the mass range 0.7-0.9 M<sub>⊙</sub>– there are genuine slow rotators with abundances similar to the fast rotators. Thus, there appears to be a range of rotation at all abundances. Hence, P<sub>rot</sub>, rather than $`v\mathrm{sin}i`$, is essential for studying the true rotation-Li correlation. margin: Fig. 7
### 3.3 Structural effects of rotation
Rapid rotation affects the structure of a star and hence the derived mass at a given $`T_{\mathrm{eff}}`$ (Endal & Sofia 1979). The structural effects of rotation would alter a star’s color such that rapidly rotating objects would be redder, hence perceived as cooler, and thus be assigned a lower mass. We therefore examined the abundances as a function of mass rather than effective temperature.
To investigate this issue, it was necessary to construct stellar models for different disk lifetimes (Krishnamurthi et al. 1997). We ran models for $`\omega _{crit}`$ = 5$`\times \omega _{\odot }`$ and 10$`\times \omega _{\odot }`$ to represent the fast and slow rotators. The rotation-corrected masses were derived by interpolation in the models across effective temperature and rotation velocity for different disk lifetimes. We found that the percent change in mass is small ($`\sim `$5%) even for the most rapidly rotating star in the Pleiades (H II 1883, $`v\mathrm{sin}i`$=140 km s<sup>-1</sup>). The change is between 1% and 2% for stars with $`v\mathrm{sin}i`$ in the 50-100 km s<sup>-1</sup> range, and $`<1`$% for $`v\mathrm{sin}i\lesssim 50`$ km s<sup>-1</sup>. These small alterations fail to eliminate the Li dispersion, which sets in at M$`<`$0.9 M<sub>⊙</sub> in the Li-mass plane. Thus, the structural effects of rotation on the derived mass-temperature relation are not large enough to account for the Pleiades Li abundance dispersion.
These results differ from those of Martin & Claret (1996), who also explored the structural effects of rotation and found enhanced Li abundances for stars with high initial angular momentum. This is not seen in our models, which predict small rotational structure effects on the Pleiades Li abundances, similar to the models of Pinsonneault et al. (1990). Mendes et al. (1999) have noted the conflict between the results of Martin & Claret (1996) and Pinsonneault et al. (1990), and considered the hydrostatic effects of rotation on stellar structure and Li depletion using their own stellar models. Their results are in agreement with Pinsonneault et al. (1990), and they too find that hydrostatic effects are too small to explain the observed Li abundance spread in the Pleiades.
## 4 Li and Stellar Activity
### 4.1 Li and chromospheric emission
Since the large Pleiades Li spread occurs in such a young cluster, one may wonder if its long-sought explanation is related to stellar activity. Additionally, since rotation and activity are well-correlated, a Li-activity relation may be masquerading as a Li-rotation relation instead. Here, we examine whether magnetic activity indicators such as chromospheric emission (CE) are correlated with the Li abundance. Several studies have pointed out that activity is correlated with the Li abundance (e.g., Soderblom et al. 1993b, Jones, Fischer & Stauffer 1996b). There have also been some studies speculating that CE affects the apparent abundance of Li (e.g., Houdebine & Doyle 1995) and hence may be at least partly responsible for the dispersion.
Figure 8 contains our results, and shows the $`VI`$-based differential Li abundances versus the Ca II infrared triplet fluxes (top panel) and residual fluxes (bottom panel). A relation is seen in both panels, such that the lowest log N(Li) differences are seen predominantly for the lowest flux ratios while the largest log N(Li) differences are seen predominantly for stars having the largest flux ratios. The ordinary correlation coefficients are significant at the 99.7% and $`99.9`$% confidence levels for the chromospheric Ca fluxes and residual fluxes, suggesting a significant relation between chromospheric activity differences and Li abundance differences (though not necessarily causal). margin: Fig. 8
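The quoted confidence levels follow from ordinary (Pearson) correlation coefficients; significance can be assessed by mapping $`r`$ onto a Student-t statistic with $`n-2`$ degrees of freedom. A self-contained sketch (function names ours):

```python
import math


def pearson_r(x, y):
    """Ordinary (Pearson) correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)


def r_to_t(r, n):
    """t = r*sqrt((n-2)/(1-r^2)); comparing against Student-t with n-2
    degrees of freedom yields the one-sided confidence level."""
    return r * math.sqrt((n - 2) / (1.0 - r * r))
```

The resulting $`t`$ value would then be converted to a confidence level with a Student-t table or a statistics library.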
### 4.2 Spreads in Other Elements
Important clues to the cause of the Pleiades Li abundance scatter can be found from examination of elements that, unlike <sup>7</sup>Li, are not destroyed in stellar interiors. Variations in such abundances may signal effects other than differential Li processing, and perhaps point to an illusory difference caused by inadequate treatment of line formation.
#### 4.2.1 Potassium
One of the most useful features for this purpose is the $`\lambda 7699`$ K I line. The usefulness of this feature is two-fold. First, there is the similarity in electronic configuration with the Li I atom and the fact that this particular K transition and the $`\lambda 6707`$ Li I transition are both neutral resonance features. Second, the interplay of abundance and ionization effects leads to the happy circumstance that the line strengths of these two features are comparable in Pleiades dwarfs. Thus, line formation for both features should be similar in many respects.
The K I line strengths were taken from S93a and Jones et al. (1996). These were then plotted versus $`T_{\mathrm{eff}}`$ as derived from both $`BV`$ and $`VI`$. The relation was well fit by a 4th order Legendre polynomial approximated by the relation: $`\mathrm{EW}(\mathrm{K})=9098.151-(4.126458\times T)+(6.381173\times 10^{-4}\times T^2)-(3.308942\times 10^{-8}\times T^3)`$ for the $`BV`$ colors and by the relation: $`\mathrm{EW}(\mathrm{K})=8926.171-(4.170605\times T)+(6.702972\times 10^{-4}\times T^2)-(3.642286\times 10^{-8}\times T^3)`$ for the $`VI`$ colors. These fits showed considerable scatter – the line strength dispersion was $`\sim 55`$ mÅ, which is considerably larger (and statistically significant) than even the maximum equivalent width errors estimated by S93a. So scatter is present in the potassium data as well.
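As with the Li trend, the K I fits can be evaluated numerically. This is an illustrative sketch (names ours): the coefficient signs and negative exponents follow the reading of the quoted cubics under which the fitted equivalent width decreases smoothly with $`T_{\mathrm{eff}}`$, and `differential_ew` implements the (observed−fitted)/fitted quantity used below.

```python
def ew_k_fit(teff, color="BV"):
    """Mean K I 7699 equivalent width (mA) versus Teff (K), from the cubic
    approximations quoted in the text."""
    if color == "BV":
        c0, c1, c2, c3 = 9098.151, -4.126458, 6.381173e-4, -3.308942e-8
    else:  # (V-I)-based temperatures
        c0, c1, c2, c3 = 8926.171, -4.170605, 6.702972e-4, -3.642286e-8
    return c0 + c1 * teff + c2 * teff**2 + c3 * teff**3


def differential_ew(ew_obs, teff, color="BV"):
    """Fractional K I excess/deficit: (observed - fitted) / fitted."""
    fit = ew_k_fit(teff, color)
    return (ew_obs - fit) / fit
```

For instance, `ew_k_fit(5000.0)` gives a fitted line strength of roughly 280 mÅ, comparable to the Li line strengths in these stars as noted above.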
Differential K I equivalent widths ( \[observed$``$fitted\]/fitted ) are plotted against the differential Li abundances in Figure 9. A correlation between the values is present, though with considerable scatter. The one-sided correlation coefficients are significant at the 99.0 and 98.3% confidence levels for the $`(BV)`$ and $`(VI)`$-based results. Such a correlation (of whatever magnitude), however, may arise not from some physical mechanism; instead, it may simply reflect correlated measurement errors. margin: Fig. 9
Like the differential Li abundances, the differential K I line strengths are correlated with activity measures. Figure 10 shows the differential K I equivalent width versus the Ca II fluxes (top panel) and residual fluxes (bottom panel). The correlations are analogous to those seen for the differential Li abundances in Figure 8, and are significant at the $`98.5`$% (top panel) and $`99.9`$% (bottom panel) confidence levels. margin: Fig. 10
#### 4.2.2 Calcium
To examine the possibility of correlated measurement errors, we considered the line strengths of the $`\lambda `$6717 Ca I feature taken from S93a. These were fitted against $`T_{\mathrm{eff}}`$ in the same manner as the K I equivalent widths. The relations are given by: $`\mathrm{EW}(\mathrm{Ca})=4203.706-(1.859218\times T)+(2.888087\times 10^{-4}\times T^2)-(1.538325\times 10^{-8}\times T^3)`$ for the $`BV`$ colors and by $`\mathrm{EW}(\mathrm{Ca})=2978.740-(1.249348\times T)+(1.887209\times 10^{-4}\times T^2)-(9.973237\times 10^{-9}\times T^3)`$ for the $`VI`$ colors. The scatter associated with these fits is $`\sim 20`$ mÅ, which is consistent with the S93a uncertainties. Interestingly, unlike Li and K, there is no evidence for scatter in the calcium data above the measurement uncertainties. Differential Ca I equivalent widths are plotted against the differential Li abundances in Figure 11. The relation is flat. Unlike K, there is no significant correlation– the ordinary correlation coefficients are significant at only the $`80`$% confidence level. margin: Fig. 11
This indicates to us that the correlated scatter in Li and K line strengths is not due to measurement errors. Rather, we suggest that some physical mechanism affecting the details of line formation not included in standard LTE model photosphere analyses is the cause. Such a mechanism, if having star-to-star differences, may be the dominant source of the Li abundance scatter in Pleiades dwarfs. Since activity evinces such differences, it may naturally provide such a mechanism.
In an important theoretical study, Stuik et al. (1997) have urged similar caution in regarding Pleiades Li scatter as solely due to genuine abundance differences. These authors consider the photospheric effects of activity on Pleiades Li I and K I line strengths by modeling surface spots and plages. They can neither exclude nor confirm these particular manifestations of magnetic activity as the cause of the problematic and important K I variations in cool Pleiads. Their extensive efforts, though, do open the door for future improvement.
First, it seems important to establish whether their empirical solar-based spot/plage models or their “best-effort” theoretical stellar models are more nearly correct, and if one or the other model set is indeed applicable to all Pleiads, since the two model sets produce color and line strength changes opposite in sign. Second, Stuik et al. (1997) note that their radiative equilibrium and mixed activity calculations depart from observations with increasing $`(BV)`$. As they acknowledge, such disparities may signal other effects not yet considered: UV “line haze”, which may depend on the presence and structure of an overlying chromosphere, impacting the details of line formation; unknown properties and effects of Pleiads’ granulation patterns; and the influence of so-called solar-like “abnormal granulation” within plages. Third, other sources of non-thermal heating of the photosphere by chromospheric ‘activity’ may need to be considered. Finally, simply relating colors or effective temperatures (from color-$`T_{\mathrm{eff}}`$ conversions) of Pleiads having different activity levels may be more problematic than realized.
Houdebine & Doyle (1995; HD95) demonstrate that formation of the $`\lambda `$6707 Li I line is sensitive to activity in M dwarfs. The extent of these effects depends on the relative coverage of plages and spots. HD95 note the particular importance of the role of ionization in reducing the resonance line’s optical depth. In late G and K dwarfs like those showing scatter in the Pleiades, star-to-star variations in departures of both photoionization and collisional ionization from that predicted by model photospheres might introduce significant star-to-star variations in the derived Li abundance<sup>5</sup><sup>5</sup>5Overionization from photospheric convective inhomogeneities has been discussed in the context of Population II star Li abundances by Kurucz (1995). Interestingly, King et al. (1999) find element-to-element abundance differences in two cool ($`T_{\mathrm{eff}}\sim 4500`$ K) Pleiades dwarfs and a similarly cool NGC 2264 PMS member which are ionization potential dependent. We suggest that current evidence may implicate non-photospheric ionization differences as a likely source of star-to-star Li variations in the Pleiades.
## 5 Other Mechanisms and Concerns
### 5.1 Metal abundance variations
Variations in abundances of other elements can affect stellar Li depletion via the effects of stellar structure on PMS Li burning. For example, Figure 3 of Chaboyer, Demarque, & Pinsonneault (1995) indicates that very small metal abundance differences of, say, 0.03 dex lead to substantial (0.3-0.4 dex) differences in PMS Li burning for $`T_{\mathrm{eff}}\sim 4500`$ K.
Extant studies (Boesgaard & Friel 1990; Cayrel, Cayrel, & Campbell 1988) of Pleiades F- and G-star iron abundances (which cannot simply be equated with “metallicity” when it comes to PMS Li depletion; Swenson et al. 1994) suggest no intrinsic scatter larger than 0.06-0.10 dex. The photometric scatter of the single stars in the color-magnitude diagram might allow a metallicity (or, perhaps more properly, those elements which are dominant electron donors in the stellar photospheres) spread of 0.05 dex. While small, these constraints would still permit substantial Li abundance spreads for cool Pleiads. Additionally, abundances of elements which may have a large impact on PMS Li depletion but little effect on atmospheric opacity (e.g., oxygen) have yet to be determined in cool Pleiades dwarfs.
However, the Li spread in the Pleiades extends to $`T_{\mathrm{eff}}`$ values substantially hotter than $`4500`$ K. At hotter $`T_{\mathrm{eff}}`$ values, model PMS Li-burning is less sensitive to metallicity. For example, in the range 5000-5200 K, the observed Li abundance spread would require “metallicity” differences approaching a factor of two. Such spreads would be surprising indeed, and not expected based on the limited results of extant Fe analyses of hotter cluster dwarfs. Abundance differences (of a large number of elements) of this size would not be difficult to exclude with good quality spectra as part of future studies.
### 5.2 Magnetic Fields
Ventura et al. (1998) have recently investigated the effects of magnetic fields in stellar models and PMS Li depletion. They find that even small fields are able to inhibit convection, and thus PMS Li depletion. They suggest that a dynamo generated magnetic field linked to rotational velocity (thus, presumably yielding an association between activity and rotation given conventional wisdom) would result in ZAMS star-to-star Li variations that mirror differences in star-to-star rotational (and presumably activity) differences. As these authors admit, the fits of their magnetic models to the Li-$`T_{\mathrm{eff}}`$ morphology and significant scatter of the Pleiades observations are not “perfect” or “definitive”; however, the qualitative agreement and ability to produce star-to-star scatter and general relations between Li abundance and rotation and activity are encouraging. Continued observations (especially detailed spectroscopic abundance determinations of various elements in numerous Pleiads) and theoretical work will be needed to establish the degree to which the Pleiades Li spread is illusory or real and, if the latter, its cause(s).
## 6 Summary and Conclusions
The very large dispersion in Li abundances at fixed $`T_{\mathrm{eff}}`$ in cool ($`T\lesssim 5400`$ K) Pleiads is a fundamental challenge for stellar evolution because standard stellar models of uniform age and abundance are unable to reproduce it. A variety of mechanisms (rotation, activity, magnetic fields, and incomplete knowledge of line formation) have been proposed to account for this scatter. Here, we construct a sample of likely single Pleiads and consider this problem with: a) differential Li abundances relative to a mean $`T_{\mathrm{eff}}`$ trend, b) rotational periods instead of projected rotational velocities, c) chromospheric emission indicators, and d) line strengths of other elements.
We calculated $`T_{\mathrm{eff}}`$ values from both $`(BV)`$ and $`(VI)`$ on a self-consistent scale based on the calibrations of Bessell (1979). We find differences between the two $`T_{\mathrm{eff}}`$ values, and these are significantly correlated with both general activity level and with differences in activity, suggesting that surface inhomogeneities may noticeably affect stellar colors. Our results are consistent with a growing body of evidence of significant differences between $`(BV)`$- and $`(VI)`$-based $`T_{\mathrm{eff}}`$ values, a propensity for $`(BV)`$ to yield larger $`T_{\mathrm{eff}}`$ values, and a relation of these characteristics with activity in young stars, from 5 Myr old PMS stars in NGC 2264 (Soderblom et al. 1999) to PMS stars in the 30 Myr old IC 2602 (Randich et al. 1997) to ZAMS stars in the $`\sim 100`$ Myr old Pleiades.
However, the similarity between the sensitivity of the derived Li abundance to $`T_{\mathrm{eff}}`$ and the clusters’ physical Li-$`T_{\mathrm{eff}}`$ morphology means that even substantial $`T_{\mathrm{eff}}`$ errors are not a significant source of star-to-star Li scatter. Nor are observational errors. Comparison of the scatter in the differential Li abundances with errors from $`T_{\mathrm{eff}}`$ and line strength uncertainties indicates an infinitesimal probability that the observed scatter occurs by chance. We find significant scatter in the Li abundances below $`6000`$ K; it is significantly larger, though, below $`5500`$ K. Statistical analysis fails to support previous claims of smaller scatter in the late K dwarfs relative to the late G and early-mid K dwarfs.
There is a spread of Li abundance at low $`v\mathrm{sin}i`$, whereas the rapid projected rotators tend to have larger differential Li abundances in the range $`4500\leq T_{\mathrm{eff}}\leq 5500`$ K. However, use of photometric rotation periods (free from uncertainties in the inclination angle $`i`$) indicates there is not a one-to-one mapping between differential Li abundance and rotation. The stars H II 263, 320, and 1124 are examples of stars with Li excesses but slow rotation (P = 4.6-6.1 d). In contrast to previous claims based on $`v\mathrm{sin}i`$, the rotation periods indicate a true range of Li abundance with rotation in the mass bin 0.7-0.9 M<sub>⊙</sub>.
Using the theoretical framework of Krishnamurthi et al. (1997), we constructed stellar models to investigate the hydrostatic effects of rotation on stellar structure and PMS Li burning. As shown in Figure 8, these models fail to account for the Pleiades Li dispersion, which is in agreement with the independent conclusions of Mendes et al. (1999).
We find that the star-to-star differences in Pleiades Li abundances are correlated with activity differences, as measured from Ca II infrared triplet flux ratios, at a statistically significant level. Moreover, the Li differences are significantly correlated with differences in the strengths of the $`\lambda `$7699 K I resonance feature. This does not seem to be due to correlated measurement errors, since the Li differences show no correlation with the $`\lambda `$6717 Ca I line strength residuals. This is a significant result given similarities in the Li and K features’ atomic properties and line strengths. We suggest that incomplete treatment of line formation, related to activity differences, plays a significant role in the Li dispersion– i.e., that part of the dispersion is illusory. As emphasized by Houdebine & Doyle (1995), the formation of the Li I feature is sensitive to ionization conditions. If chromospheric activity variations can produce significant variations in photo- and collisional-ionization in the Li I line formation region not accounted for by LTE analyses using model photospheres, this may lead to errors in the inferred abundance. Relatedly, we note the results of King et al. (1999), who found ionization potential-dependent effects in the elemental abundances of two cool ($`T_{\mathrm{eff}}\sim 4500`$ K) Pleiads and a similarly cool NGC 2264 PMS star.
If such conjecture is correct, we expect that somewhat older (less active) cluster stars will exhibit less Li dispersion. This seems to be the case for the $`\sim 800`$ Myr old Hyades cluster (Thorburn et al. 1993) and perhaps also for the mid-G to mid-K stars in M34 (Jones et al. 1997). These clusters still exhibit scatter, and this may be real and due to differences in depletion from structural effects of rotation (Mendes et al. 1999), magnetic fields (Ventura et al. 1998), small metallicity variations (§5.1), main sequence depletion due to angular momentum transport from spin-down (Pinsonneault et al. 1990; Charbonnel et al. 1992) or a planetary companion (Cochran et al. 1997), and photospheric accretion of circumstellar/planetary material (Alexander 1967; Gonzalez 1998). The amount of scatter expected in even older clusters is less clear. If, e.g., rotationally-induced mixing acts over longer timescales, then the scatter may well increase again; indeed, the substantial Li scatter in M67 solar-type stars observed by Jones et al. (1999) could indicate that this is the case.
These authors called attention to the possible pattern of very large Li scatter in young clusters, considerably reduced scatter in intermediate-age clusters, and increased scatter in older clusters. In the scenario we envision, variations in activity-regulated ionization of the Li I atom may be responsible for the majority of (mostly illusory) star-to-star Li differences in near-ZAMS and younger stars; of course, this does not exclude a (lesser) role from other variable mechanisms influencing PMS Li burning. If the decline in the activity level of intermediate-age stars reduces the importance of variable ionization, then the (smaller) Li scatter in these stars could arise from variations in PMS Li burning due to, e.g., the hydrostatic effects of rotation on stellar structure, inhibition of convection by magnetic fields, and small metallicity variations; additional contributions may come from processes just beginning to become effective for ZAMS stars such as rotationally-induced mixing and planetary/circumstellar accretion. In older stars such as M67, the increase in scatter (and overall Li depletion) is then a product of processes efficiently acting on the main-sequence proper such as rotationally-induced mixing and/or photospheric accretion.
Distinguishing specific mechanisms and their relative importance for Li depletion and scatter at a given age will require continuing observational and theoretical efforts. Important advances on the theoretical front are at least three-fold: continued investigation of the role of magnetic fields in PMS Li depletion, realistic model atmospheres which include chromospheres, and detailed NLTE abundance calculations which employ these to extend extant sophisticated modeling attempts (e.g., Stuik et al. 1997). On the observational front, continued observations of Li in a variety of clusters spanning a range in age and metallicity are needed. We believe that particularly important observational work to be accomplished includes the determination of photometric periods in more cluster stars, detailed abundances of numerous elements (in particular, using both ionization sensitive and ionization insensitive features and elements) in cluster stars, quantification of even small “metal” (not just Fe) abundance spreads in cluster stars, and the association between planetary systems and parent star Li and light metal abundances.
AK acknowledges support from NASA grant H-04630D to the University of Colorado.
# Band structure from random interactions
## Abstract
The anharmonic vibrator and rotor regions in nuclei are investigated in the framework of the interacting boson model using an ensemble of random one- and two-body interactions. We find a predominance of $`L^P=0^+`$ ground states, as well as strong evidence for the occurrence of both vibrational and rotational band structures. This remarkable result suggests that such band structures represent a far more general (robust) property of the collective model space than is generally thought.
A recent analysis of experimental energy systematics of medium and heavy even-even nuclei suggests a tripartite classification of nuclear structure into seniority, anharmonic vibrator and rotor regions . Plots of the excitation energies of the yrast states with $`L^P=4^+`$ against $`L^P=2^+`$ show a characteristic slope for each region: 1.00, 2.00 and 3.33, respectively. In each of these three regimes, the energy systematics is extremely robust. Moreover, the transitions between different regions occur very rapidly, typically with the addition or removal of only one or two pairs of nucleons. The transition between the seniority region (either semimagic or nearly semimagic nuclei) and the anharmonic vibrator regime (either vibrational or $`\gamma `$ soft nuclei) was addressed in a simple schematic shell model calculation and attributed to the proton-neutron interaction . The empirical characteristics of the collective regime which consists of the anharmonic vibrator and the rotor regions, as well as the transition between them, have been studied in the framework of the interacting boson model (IBM) . An analysis of phase transitions in the IBM has shown that the collective region is characterized by two basic phases (spherical and deformed) with a sharp transition region, rather than a gradual softening which is traditionally associated with the onset of deformation in nuclei .
In a separate development, the characteristics of low-energy spectra of many-body even-even nuclear systems have been studied recently in the context of the nuclear shell model with random two-body interactions . Despite the random nature of the interactions, the low-lying spectra still show surprisingly regular features, such as a predominance of $`L^P=0^+`$ ground states separated by an energy gap from the excited states, and the evidence of phonon vibrations. The occurrence of these pairing effects cannot be explained by the time-reversal symmetry of the random interactions . A subsequent analysis of the pair transfer amplitudes has shown that pairing is a robust feature of the general two-body nature of shell model interactions and the structure of the model space . On the other hand, no evidence was found for rotational band structures.
The existence of robust features in the low-lying spectra of medium and heavy even-even nuclei suggests that there exists an underlying simplicity of low-energy nuclear structure never before appreciated. In order to address this point we carry out a study of the systematics of collective levels in the framework of the IBM with random interactions. In an analysis of energies and quadrupole transitions we show that despite the random nature (both in size and sign) of the interaction terms, regular features characteristic of the anharmonic vibrator and rotor regions emerge. Our results imply that these features are, to a certain extent, independent of the specific character of the interaction, and probably arise from the two-body nature of the Hamiltonian and the structure of the collective model space.
In the IBM, collective nuclei are described as a system of $`N`$ interacting monopole and quadrupole bosons. We consider the most general one- and two-body IBM Hamiltonian $`H=H_1+H_2`$. The one- and two-body matrix elements are chosen independently using a Gaussian distribution of random numbers with zero mean and variances
$`H_{1,\alpha \alpha ^{\prime }}^2`$ $`=`$ $`v^2(1+\delta _{\alpha \alpha ^{\prime }}),`$ (1)
$`H_{2,\beta \beta ^{\prime }}^2`$ $`=`$ $`{\displaystyle \frac{1}{(N-1)^2}}v^2(1+\delta _{\beta \beta ^{\prime }}).`$ (2)
Since the matrix elements of $`H_1`$ and $`H_2`$ are proportional to $`N`$ and $`N(N-1)`$, respectively, we have introduced a relative scaling between the one- and two-body interaction terms of $`1/(N-1)`$. The coefficient $`v^2`$ is independent of the angular momentum and represents an overall energy scale. The ensemble defined by Eq. (2) is similar, but not identical, to the Two-Body Random Ensemble of . In all calculations we take $`N=16`$ bosons and 1000 runs. For each set of randomly generated one- and two-body matrix elements we calculate the entire energy spectrum and the $`B(E2)`$ values between the yrast states.
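As an illustration of how the ensemble of Eq. (2) can be realized numerically, the sketch below draws symmetric Gaussian matrices with variance $`v^2(1+\delta )`$ on the matrix elements and the two-body scale reduced by $`1/(N-1)`$. The dimensions `n1` and `n2` are placeholders for the sizes of the one- and two-body blocks, which are not specified here.

```python
import numpy as np

def draw_random_interaction(n1, n2, v=1.0, N=16, seed=None):
    """Draw one- and two-body matrix elements as real symmetric matrices with
    Gaussian entries of zero mean and variance v^2 (1 + delta), the two-body
    scale reduced by 1/(N-1) as in Eq. (2)."""
    rng = np.random.default_rng(seed)

    def symmetric(n, scale):
        A = rng.normal(0.0, scale, size=(n, n))
        # (A + A.T)/sqrt(2): off-diagonal variance scale^2, diagonal 2*scale^2
        return (A + A.T) / np.sqrt(2.0)

    return symmetric(n1, v), symmetric(n2, v / (N - 1))
```

For each draw one would then diagonalize the resulting Hamiltonian and record the ground-state angular momentum, repeating (as in the text) over many runs.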
Just as in the case of the nuclear shell model, we find a predominance (63.4 $`\%`$) of $`L^P=0^+`$ ground states; in 13.8 $`\%`$ of the cases the ground state has $`L^P=2^+`$, and in 16.7 $`\%`$ it has the maximum value of the angular momentum $`L^P=32^+`$. For the cases with a $`L^P=0^+`$ ground state we have calculated the probability distribution of the energy ratio $`R=[E(4^+)-E(0^+)]/[E(2^+)-E(0^+)]`$. Fig. 1 shows a remarkable result: the probability distribution $`P(R)`$ has two very pronounced peaks, one at $`R\approx 1.95`$ and a narrower one at $`R\approx 3.35`$. These values correspond almost exactly to the harmonic vibrator and rotor values (see the results for the $`U(5)`$ and $`SU(3)`$ limits in Table I). No such peak is observed for the $`\gamma `$ unstable or deformed oscillator case ($`SO(6)`$ limit).
Energies by themselves are not sufficient to decide whether or not there exists band structure. Levels belonging to a collective band are connected by strong electromagnetic transitions. In Fig. 2 we show a correlation plot between the ratio of $`B(E2)`$ values for the $`4^+\rightarrow 2^+`$ and $`2^+\rightarrow 0^+`$ transitions and the energy ratio $`R`$. For the $`B(E2)`$ values we use the quadrupole operator
$`\widehat{Q}_\mu (\chi )`$ $`=`$ $`(s^{\dagger }\stackrel{~}{d}+d^{\dagger }s)_\mu ^{(2)}+\chi (d^{\dagger }\stackrel{~}{d})_\mu ^{(2)},`$ (3)
with $`\chi =-\sqrt{7}/2`$. For completeness, in Table I we show the results for the three symmetry limits of the IBM. In the large $`N`$ limit, the ratio of $`B(E2)`$ values is 2 for the harmonic oscillator ($`U(5)`$ limit) and 10/7 both for the deformed oscillator ($`SO(6)`$ limit) and the rotor ($`SU(3)`$ limit). There is a strong correlation between the first peak in the energy ratio and the vibrator value for the ratio of $`B(E2)`$ values (the concentration of points in this region corresponds to about 50 $`\%`$ of all cases), as well as for the second peak and the rotor value (about 25 $`\%`$ of all cases). For the region $`2.3\stackrel{<}{\sim }R\stackrel{<}{\sim }3.0`$ one can see a concentration of points around the value 1.4, which reflects the transition between the deformed oscillator and the rotor limits (see Table I). Calculations for different values of the number of bosons $`N`$ show the same results.
Despite the randomness of the interactions, these results constitute strong evidence for the occurrence of both vibrational and rotational band structures. Since the results presented in Figs. 1 and 2 were obtained with random interactions, with no restriction on the sign or size of the one- and two-body matrix elements, it is of interest to compare them with a calculation in which the parameters are restricted to the ‘physically’ allowed region. To this end we consider the consistent Q-formulation which uses the same form for the quadrupole operator, Eq. (3), i.e. with the same value of $`\chi `$, for the $`E2`$ operator and the Hamiltonian
$`H`$ $`=`$ $`ϵ\widehat{n}_d-\kappa \widehat{Q}(\chi )\widehat{Q}(\chi ).`$ (4)
The parameters $`ϵ`$ and $`\kappa `$ are restricted to be positive, whereas $`\chi `$ can be either positive or negative: $`-\sqrt{7}/2\le \chi \le \sqrt{7}/2`$. The properties of the Hamiltonian of Eq. (4) can be investigated by taking the scaled parameters $`\eta =ϵ/[ϵ+4\kappa (N-1)]`$ and $`\overline{\chi }=2\chi /\sqrt{7}`$ randomly on the intervals $`0\le \eta \le 1`$ and $`-1\le \overline{\chi }\le 1`$ (these coefficients have been used as control parameters in a study of phase transitions in the IBM). In Figs. 3 and 4 we show the corresponding probability distribution and correlation plot for the consistent Q-formulation of the IBM with realistic interactions. Although in this case the points are concentrated in a smaller region of the plot than before, the results show the same qualitative behavior as for the IBM with random one- and two-body interactions. In Fig. 4 we have identified each of the dynamical symmetries of the IBM (and the transitions between them). There is a large overlap between the regions with the highest concentration of points in Figs. 2 and 4.
In conclusion, we have studied the IBM using random ensembles of one- and two-body Hamiltonians. It was found that despite the randomness of the interactions the ground state has $`L^P=0^+`$ in 63.4 $`\%`$ of the cases. For this subset, the analysis of both energies and quadrupole transitions shows strong evidence for the occurrence of both vibrational and rotational band structure. These features arise from a much wider class of Hamiltonians than are generally considered to be ‘realistic’. This suggests that these band structures arise, at least in part, as a consequence of the one- and two-body nature of the interactions and the structure of the collective model space, and hence represent a far more general and robust property of collective Hamiltonians than is commonly thought. This is in qualitative agreement with the empirical observations of robust features in the low-lying spectra of medium and heavy even-even nuclei .
A similar situation has been observed in the context of the nuclear shell model with respect to the pairing properties which were formerly exclusively attributed to the particular form of the nucleon-nucleon force. On the other hand, the random IBM Hamiltonians studied in this Letter display not only vibrational-like phonon collectivity but, in contrast to the results in , also imply the emergence of rotational bands. The IBM is based on the assumption that low-lying collective excitations in nuclei can be described as a system of interacting monopole and quadrupole bosons, which in turn are associated with generalized pairs of like-nucleons with angular momentum $`L=0`$ and $`L=2`$. It would be very interesting to establish whether rotational features can also arise from ensembles of random interactions in the nuclear shell model, if appropriate (minimal) restrictions are imposed on the parameter space.
It is a pleasure to thank Stuart Pittel and Rick Casten for interesting discussions. This work was supported in part by DGAPA-UNAM under project IN101997.
# The precession of eccentric discs in close binaries
## 1 Introduction
Superhumps are now commonly found in short period cataclysmic variables (Patterson 1998) and X-ray binaries (O’Donoghue & Charles 1996). The most likely explanation is that these periodic luminosity variations are caused by the tidal stresses on an eccentric, precessing accretion disc (Whitehurst 1988; Hirose & Osaki 1990; Lubow 1991). In this model, the superhump period, $`P_{\mathrm{sh}}`$, equates to the period of the disc’s precession, $`P_\mathrm{d}`$, as measured in the binary frame. Although reliable measurements of $`P_{\mathrm{sh}}`$ have been made for at least 53 separate systems (Patterson 1998), the observations have yet to be compared properly with the model’s theoretical predictions.
In this paper we consider the factors determining the precession rate of an eccentric disc, and compare our theoretical understanding, simulation results and the observations.
## 2 Disc precession periods
It is now well established that in binaries with mass ratios $`q\stackrel{<}{\sim }1/4`$ it is possible for eccentricity to be excited in the accretion disc at the $`3:1`$ eccentric inner Lindblad resonance (Lubow 1991). Here we define $`q=m_2/m_1`$ to be the mass of the donor star divided by the mass of the accreting star. Under certain circumstances, eccentricity may be excited in systems with mass ratios as large as $`1/3`$ (Murray, Warner & Wickramasinghe 1999).
How an eccentric disc precesses coherently has been a source of confusion. For a single particle, the rate at which an elliptical orbit precesses is

$$\omega _{\mathrm{dyn}}=\mathrm{\Omega }-\kappa .$$

(1)
Now, for a given gravitational potential, a particle’s angular frequency, $`\mathrm{\Omega }`$, and radial or epicyclic frequency, $`\kappa `$, are both functions of $`r`$. Hence the precession rate, $`\omega `$, also depends upon the mean radius of the orbit. Yet neither the observations (Patterson 1998) nor the simulations (Murray 1998) yield any indication of differential precession. So how does a disc that extends over a range of radii organize itself to precess with a unique frequency?
The answer is that we are not dealing with a collection of isolated test particles, but with a gaseous disc. The eccentricity is excited at the Lindblad resonance, and then propagates inwards through the disc as a wave, getting wrapped into a spiral by the differential rotation of the gas (Lubow 1992). If the spiral is wound up on a length scale much smaller than the radius (the tight winding limit), then $`\omega `$, the rate of azimuthal advance of the eccentricity, is governed by the dispersion relation
$$(\mathrm{\Omega }-\omega )^2=\kappa ^2+k^2c^2,$$
(2)
where $`c`$ is the sound speed of the gas, and $`k`$ is the radial wavenumber of the spiral (see e.g. Binney & Tremaine 1987). Now, as $`\omega `$ is much less than $`\mathrm{\Omega }`$ and $`\kappa `$ we have
$`\omega `$ $`\approx `$ $`\mathrm{\Omega }-\kappa -k^2c^2/(2\mathrm{\Omega })`$ (3)
$`=`$ $`\omega _{\mathrm{dyn}}-k^2c^2/(2\mathrm{\Omega }).`$ (4)
$`\omega `$ is determined for the wave in the region in which it is launched. As the wave propagates inwards, the frequency remains constant but the wavelength changes in response to the changing environment through which it moves.
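To make Eq. (4) concrete, the sketch below evaluates it for illustrative numbers only (the values of $`\mathrm{\Omega }`$, $`\kappa `$, pitch angle and sound speed are assumptions, not fits to any system). Near the $`3:1`$ resonance $`\mathrm{\Omega }\approx 3\mathrm{\Omega }_{\mathrm{orb}}`$, and in the tight-winding limit the wavenumber satisfies $`|kr|=\mathrm{cot}i`$ for a spiral of pitch angle $`i`$ (see section 4).

```python
import numpy as np

def precession_rate(Omega, kappa, k, c):
    """Tight-winding estimate, Eq. (4): omega = (Omega - kappa) - k^2 c^2 / (2 Omega)."""
    return (Omega - kappa) - k ** 2 * c ** 2 / (2.0 * Omega)

# Illustrative numbers in units d = Omega_orb = 1: Omega ~ 3 near the 3:1
# resonance, a dynamical precession rate of 0.1, and a spiral of pitch 16 deg.
Omega, kappa = 3.0, 2.9
r, pitch, c = 0.477, np.radians(16.0), 0.05
k = 1.0 / (r * np.tan(pitch))
omega = precession_rate(Omega, kappa, k, c)
print(omega)   # the gas disc precesses more slowly than the ballistic rate 0.1
```

The pressure term is retrograde, so the computed rate always falls below the ballistic value, as argued in the text.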
The above arguments are based upon the analysis of Lubow (1992). He also identified a third factor that contributed to the precession when the magnitude of the eccentricity was changing secularly. In this paper we are interested in the long term mean value for $`\omega `$ and so this extra term need not be considered further.
Equation 4 tells us that a gaseous disc will precess more slowly than a ballistic particle at the resonance radius. Lubow (1992) estimated that pressure effects could reduce $`\omega `$ by approximately one percent of $`\mathrm{\Omega }`$. Now $`\omega _{\mathrm{dyn}}`$ is itself of the order of a few per cent of $`\mathrm{\Omega }`$, so the reduction is significant. (Simpson & Wood (1998) misinterpreted Lubow’s work and ignored the pressure term as being a few percent of the dynamical precession).
Hirose & Osaki (1993) obtained estimates for the superhump period by solving the eigenvalue problem of a linear one-armed mode in an inviscid disc. Their arguments correspond with those given above and their results agree very well with those of Lubow (1992). They showed that an increase in disc temperature allowed the eccentric mode to propagate further into the disc, with the precession rate being reduced as a result. Lubow (1992) also completed several two dimensional hydrodynamic disc simulations, in which he isolated the various contributing factors to the precession. His numerical results clearly showed pressure forces to be important. In fact, for those particular simulations, the pressure contributions reduced $`\omega `$ to half the dynamical value. Until now, observational data have only been compared with dynamical estimates of the precession, which we know to be inadequate.
## 3 The theory and simulations compared
The numerical simulations of Whitehurst (1988) led directly to the eccentric disc model for superhumps. In this section we compare subsequent numerical results (Hirose & Osaki 1990; Kunze, Speith & Riffert 1997; Murray 1998; Simpson & Wood 1998; Murray et al. 1999) with the dynamical equation for precession (equation 1). We do not attempt a comparison with the hydrodynamic equation as the simulations of Hirose & Osaki were completed using the sticky particle method due to Lin & Pringle (1976) which does not account for pressure forces.
For approximately circular orbits, the rate of dynamical precession is
$$\omega _{\mathrm{dyn}}=a(r)\frac{q}{\sqrt{1+q}}\mathrm{\Omega }_{\mathrm{orb}},$$
(5)
where
$$a(r)=\frac{1}{4r^{1/2}}\frac{d}{dr}(r^2\frac{db_{1/2}^{(0)}}{dr}).$$
(6)
$`b_s^{(j)}`$ is the standard notation for a Laplace coefficient from celestial mechanics. We have made use of the hypergeometric series expression for $`b_s^{(j)}`$ found in Brouwer & Clemence (1961) (equation 42, Chapter 15) to evaluate $`a(r)`$ as a function of radius (see figure 1). Clearly, the precession rate of a ballistic particle is an increasing function of radius. But as mentioned above, differential precession is not observed. Thus in previous applications of the dynamical theory to discs, it was assumed that the entire disc precessed as if it were a single ballistic particle at the resonance radius, $`r_{\mathrm{res}}\approx 0.477`$$`d`$ (see e.g. Hirose & Osaki). Unfortunately, most of these comparisons are of little value because they either failed to correctly evaluate $`a(r)`$ (e.g. Patterson 1998) or they made use of an incorrect equation for the precession that first appeared in Whitehurst & King (1991).
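Equation 6 is straightforward to evaluate numerically. The sketch below uses the hypergeometric series for $`b_{1/2}^{(0)}`$ (of the form quoted from Brouwer & Clemence 1961), differentiated term by term; at the resonance radius it reproduces a coefficient of roughly 0.4, the value referred to in section 3.

```python
import numpy as np

def a_of_r(r, nmax=60):
    """Evaluate a(r) = (1/(4 sqrt(r))) d/dr [ r^2 d b_{1/2}^{(0)}/dr ] using the
    series b_{1/2}^{(0)}(r) = 2 sum_n [(2n-1)!!/(2n)!!]^2 r^{2n} (valid for r < 1),
    differentiated term by term."""
    total, coeff = 0.0, 1.0          # coeff holds ((2n-1)!!/(2n)!!)^2
    for n in range(1, nmax + 1):
        coeff *= ((2 * n - 1) / (2 * n)) ** 2
        total += 2 * n * (2 * n + 1) * 2.0 * coeff * r ** (2 * n)
    return total / (4.0 * np.sqrt(r))

r_res = 0.477                        # 3:1 resonance radius in units of d
print(round(a_of_r(r_res), 2))       # ~0.40, the coefficient quoted in section 3
```

The series converges rapidly for $`r<1`$, and $`a(r)`$ is indeed an increasing function of radius, as in figure 1.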
Observational data is usually presented in terms of the superhump period excess
$$ϵ=\frac{P_{\mathrm{sh}}-P_{\mathrm{orb}}}{P_{\mathrm{orb}}}.$$
(7)
In terms of the disc precession rate $`\omega `$,
$$ϵ=\frac{\omega }{\mathrm{\Omega }_{\mathrm{orb}}-\omega }.$$
(8)
So in figure 2 we have plotted as a function of mass ratio the superhump period excesses obtained from the disc simulations of several different authors. The solid curve shows the superhump period excess estimated by equation 5 with $`r=r_{\mathrm{res}}`$.
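Equations (7) and (8) are trivially invertible, which is how observed superhump excesses are converted to precession rates and back. A minimal sketch (in units where $`\mathrm{\Omega }_{\mathrm{orb}}=1`$):

```python
def excess_from_precession(omega, Omega_orb=1.0):
    """Eq. (8): epsilon = omega / (Omega_orb - omega)."""
    return omega / (Omega_orb - omega)

def precession_from_excess(eps, Omega_orb=1.0):
    """Eq. (8) inverted: omega = Omega_orb * eps / (1 + eps)."""
    return Omega_orb * eps / (1.0 + eps)

# e.g. a typical period excess of 0.03 corresponds to a disc precessing at
# roughly 0.029 Omega_orb
print(precession_from_excess(0.03))
```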
All authors used the smooth particle hydrodynamics (SPH) technique, except Hirose & Osaki who used the sticky particle technique. The pressure contribution to the disc precession is thus absent from their simulations. The various implementations of SPH are described in Flebbe et al. (1994), Simpson & Wood (1998), and Murray (1996).
The calculations completed for Murray (1998) and Murray et al. (1999) were of very cool isothermal discs. In units of the binary separation, $`d`$, and the inverse orbital angular frequency, $`\mathrm{\Omega }_{\mathrm{orb}}^{-1}`$, we set the sound speed $`c=0.02`$$`d\mathrm{\Omega }_{\mathrm{orb}}`$. The pressure contribution is proportional to the square of the sound speed and so was very small. We also used a very large value for the shear viscosity. This reduced the viscous time scale to a value that enabled us to follow the evolution of discs to steady state in a reasonable amount of computing time. However, with a larger shear viscosity it was easier for material to penetrate the resonance. Significantly eccentric discs and very strong superhump signals were obtained as a result. Furthermore, the large shear viscosity inhibited the inward propagation of the eccentricity and further reduced the effectiveness of the pressure term in equation 4.
Without the benefit of any significant pressure contribution to the disc precession, the points from Murray (1998) and Hirose & Osaki lie very close to the dynamical precession curve. Some are marginally above the curve. As mentioned above, the large shear viscosities used allowed the discs to effectively penetrate the resonance and extend to somewhat larger radii. On the other hand the four points from Murray et al. (1999) are all well above the curve, and cannot be explained in terms of large discs. For these mass ratios the resonance lies beyond the tidal truncation radius. That is, for $`q\stackrel{>}{\sim }1/4`$ simply periodic orbits begin intersecting one another inside $`r_{\mathrm{res}}`$. The orbits in this region will no longer be approximately circular, in contradiction of the assumption underlying the analytical expressions in Hirose & Osaki, and Lubow (1992). We conclude therefore that, gas pressure considerations aside, expressions such as equation 5 will only be valid for mass ratios less than approximately 1/4.
The discs of Simpson & Wood, and of Kunze et al. precessed significantly more slowly than expected from purely dynamical considerations (these points lie below the dynamical curve in figure 2). Kunze et al. assumed each element of the disc radiated as a blackbody and set the sound speed according to the blackbody temperature. Simpson & Wood used a polytropic equation of state with index $`\gamma =1.01`$ and integrated the internal energy equation for each particle. Although a comparison is not straightforward, these discs would have been somewhat hotter than the ones constructed for this paper. As a consequence, the retrograde contribution to the precession due to pressure forces would have been larger, and the values for $`P_\mathrm{d}`$ obtained by Simpson & Wood and Kunze et al. are closer to observed $`P_{\mathrm{sh}}`$.
We are currently running a set of simulations with the shear viscosity ten times smaller than in Murray (1998), but with other parameters unchanged. In particular, the sound speed $`c=0.02`$$`d\mathrm{\Omega }_{\mathrm{orb}}`$. This gives an effective value of the Shakura-Sunyaev parameter $`\alpha \approx 0.16`$ at the resonance (as opposed to $`1.6`$ in the previous calculations). At present we have results only for $`q=0.10`$ (the simulations now take much longer to complete). This particular calculation ran for 100 orbital periods. As is to be expected, the superhump amplitude was reduced. At the conclusion of the calculation the superhump period (averaging over 25 superhump cycles) was $`P_{\mathrm{sh}}=(1.0295\pm 0.0005)P_{\mathrm{orb}}`$. This period is significantly below the dynamical precession curve even though the disc is very cool, and it corresponds very well with the results of Simpson & Wood, and Kunze et al. Our result has been included in figure 2 but we emphasize that at the conclusion of the simulation, the disc mass was still very slowly increasing, and the superhump period had not completely stabilised.
In the above discussion we did not refer to the calculations of Whitehurst (1994) simply because he used an initial mass transfer burst to set up his discs, and the superhump periods were not obviously steady state values. The simulations described in section 4.2 illustrate the time scale upon which resonant discs adjust to changes in the mass transfer rate. Before we leave this subject, a word of warning. We discovered in one particular trial simulation that once mass return from the outer edge of the disc to the secondary star occurred, the superhump period excess was reduced by approximately $`5\%`$. The explanation is simple. Material at the outer edge of the disc precesses most rapidly. Remove that material and you slow the disc precession. This effect can hinder comparison with theory and other simulations. Such mass return occurred in the simulations of Hirose & Osaki and of Simpson & Wood.
We conclude that for mass ratios less than $`1/4`$, equation 5 provides a useful upper limit for the steady state precession rate of a gaseous disc. However, for systems with $`q\stackrel{>}{\sim }1/4`$, the intersection of simply periodic orbits near the resonance radius renders equation 5 invalid.
The next step is to compare simulation results with eigenmode calculations such as those performed by Hirose & Osaki (1993). They tabulated disc precession rates as a function of the sound speed at the resonance for systems with mass ratios $`q=0.15`$ and $`0.05`$. Both the eigenmode calculations and the hydrodynamic simulations show similar magnitude pressure contributions to the precession. However, as yet the assumptions made in making the two sets of calculations (e.g. equation of state) differ sufficiently as to prevent a more detailed comparison.
## 4 Theory and observation compared
Patterson (1998) tabulated the superhump period excesses for 53 systems. In this section we will compare this data with both the dynamical and hydrodynamical equations for precession. In order to do this we require an independent means of estimating a system’s mass ratio. Unfortunately, although the mass of the secondary star as a function of orbital period is reasonably well constrained, there is evidence of considerable variation in the white dwarf masses of otherwise similar systems (see e.g. Smith & Dhillon 1998, figure 5). The resulting uncertainty in $`q`$ is large enough to interfere with our comparisons with theory. An early attempt to compare theory and observation by Molnar & Kobulnicky (1992), was hampered in exactly this fashion (see their figure 2).
In figure 3 we have plotted the superhump data from Patterson (1998) against theoretical predictions for the dynamical precession (made using equation 5 with $`r=r_{\mathrm{res}}`$). In order to estimate the mass ratio of a system of given orbital period we have used the observationally derived stellar masses obtained by Smith & Dhillon (1998). They found the mean white dwarf mass for all cataclysmic variables to be $`0.76\pm 0.22M_{\odot }`$, and they estimated that $`M_{\mathrm{sec}}/M_{\odot }=(0.038\pm 0.003)P^{(1.58\pm 0.09)}`$. The central (darker) curve in figure 3 represents the best theoretical estimate. The outer two curves show the consequences of the uncertainty in the Smith & Dhillon values for the theory. In fact, Smith & Dhillon found the mean white dwarf mass for systems below the period gap to be $`0.69\pm 0.13M_{\odot }`$, and to be $`0.80\pm 0.22M_{\odot }`$ for systems above the period gap. However, precession rates recalculated using the adjusted white dwarf masses differed only marginally from the original estimates.
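The theoretical curves of figure 3 can be sketched directly: the mass ratio follows from the Smith & Dhillon (1998) relation with their mean white-dwarf mass, and the period excess from equations (5) and (8). The coefficient $`a(r_{\mathrm{res}})\approx 0.40`$ is the value quoted in section 3; the sample periods below are arbitrary, chosen only for illustration.

```python
def mass_ratio(P_orb_hr, M_wd=0.76):
    """q = M_sec / M_wd, with M_sec/M_sun = 0.038 P^1.58 (P in hours)
    from Smith & Dhillon (1998) and an assumed mean white-dwarf mass."""
    return 0.038 * P_orb_hr ** 1.58 / M_wd

def dynamical_excess(q, a_res=0.40):
    """Eq. (5) at r = r_res combined with Eq. (8), in units Omega_orb = 1."""
    omega = a_res * q / (1.0 + q) ** 0.5
    return omega / (1.0 - omega)

for P in (1.5, 2.0, 3.0):
    q = mass_ratio(P)
    print(f"P = {P} hr: q = {q:.3f}, dynamical excess = {dynamical_excess(q):.4f}")
```

As the text argues, these dynamical values overestimate the observed excesses; the retrograde pressure term must be subtracted before comparing with the data.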
Assumptions about stellar masses aside, figure 3 shows that the superhump observations cannot be adequately explained in terms of simple dynamical precession. The retrograde precession due to pressure forces is necessary to bring closer agreement between theory and observations. Previously published figures showing closer agreement between dynamical precession estimates and observations only did so because the dynamical precession was calculated incorrectly. For example Patterson (1998) used the equation
$$\omega _{\mathrm{dyn}}/\mathrm{\Omega }_{\mathrm{orb}}=0.23\frac{q}{\sqrt{1+q}}.$$
(9)
The coefficient should be approximately $`0.4`$ (see Hirose & Osaki 1990; Lubow 1992; figure 1 above). In order to explain superhumps in long period systems with equation 9, mass ratios as high as $`0.5`$ are required. These $`q`$ values are clearly incompatible with eccentricity excitation at the $`3:1`$ inner Lindblad resonance. However, if we use the more defensible equation 4 for the precession, then smaller mass ratios are obtained.
Our confidence in the comparison between theory and observation is limited by uncertainty in $`q`$. The inadequacy of the dynamical expression for precession is much more clearly apparent when we consider the eclipsing systems OY Car, HT Cas and Z Cha. Very accurate determinations of the mass ratios of these systems are respectively obtained in Wood et al. (1989), Horne, Wood & Stiening (1991), and Wood et al. (1986). In Table 1 we list for each system the observed $`\omega `$ and the dynamical precession rate. In each case the difference between the two values is too large to be explained by uncertainties in either $`q`$ or equation 5. The difference is simply the (retrograde) pressure contribution to the precession.
We can check whether the inferred $`\omega _{\mathrm{pr}}`$ is consistent with the assumption that the eccentricity is tightly wound. The pitch angle $`i`$ of the spiral wave is given by $`\mathrm{cot}i=|kr|`$. Thus, substituting for $`k`$ in equation 4, we find that the pressure contribution to the precession rate at the $`3:1`$ resonance
$$\omega _{\mathrm{pr}}\approx \frac{2}{3}\mathrm{\Omega }_{\mathrm{orb}}\left(\frac{c}{\mathrm{\Omega }_{\mathrm{orb}}d}\frac{1}{\mathrm{tan}i}\right)^2.$$
(10)
Note that Lubow (1992) had the constant of proportionality inverted in his equation 21. If we assume the sound speed at the resonance radius $`c=0.05d\mathrm{\Omega }_{\mathrm{orb}}`$, then we obtain pitch angles of $`17^{\circ }`$, $`16^{\circ }`$ and $`15^{\circ }`$ for OY Car, Z Cha and HT Cas respectively. These are certainly consistent with the tight winding approximation.
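Equation (10) is easily inverted for the pitch angle. A sketch with illustrative numbers (the sound speed and $`\omega _{\mathrm{pr}}`$ values below are assumptions in the spirit of the estimates above, not fits to the eclipsing systems):

```python
import numpy as np

def pitch_angle_deg(omega_pr, c, Omega_orb=1.0, d=1.0):
    """Invert Eq. (10): cot(i) = (Omega_orb d / c) sqrt(3 omega_pr / (2 Omega_orb))."""
    cot_i = (Omega_orb * d / c) * np.sqrt(1.5 * omega_pr / Omega_orb)
    return np.degrees(np.arctan(1.0 / cot_i))

# e.g. c = 0.05 d Omega_orb and a pressure contribution of 2 per cent of
# Omega_orb give a tightly wound spiral with i of order 16 degrees
print(round(pitch_angle_deg(0.02, 0.05), 1))
```

A larger pressure contribution implies a more tightly wound (smaller pitch angle) spiral, consistent with the tight winding approximation used throughout.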
We complete this section with a comparison of the observations with the predictions of the hydrodynamic theory. As it is not clear what value $`k`$ should take in equation 4, we make the naive assumption that the pressure contribution to precession in all 53 systems tabulated in Patterson (1998) will be similar to that of OY Car, HT Cas and Z Cha. We take a mean value for $`\omega _{\mathrm{pr}}=1.9\mathrm{rad}\mathrm{day}^{-1}`$, and plot predicted precession rates against the observations in figure 4. A much improved fit is achieved. As with figure 3, three curves are drawn to show the influence of uncertainty in our knowledge of white dwarf and secondary masses. Much of the difference between the curves is due to the uncertainty in white dwarf mass, with the uppermost curve corresponding to $`M_{\mathrm{wd}}=0.54M_{\odot }`$. The observational data is perhaps best fit with a curve generated assuming a white dwarf mass $`0.65M_{\odot }`$.
We recall that equation 5 underestimates the dynamical precession rate for systems with mass ratios $`\gtrsim 1/4`$. Thus the theoretical curves should rise more steeply longwards of, say, $`P_{\mathrm{orb}}=3`$ hr. The implication then is that long period superhumpers have significantly more massive white dwarfs than do their shorter period counterparts. As a consequence the mass ratios of the long period systems will be smaller than previously estimated. This is qualitatively consistent with the eccentricity being excited at the $`3:1`$ Lindblad resonance, and with our previous result (Murray et al. 1999) that the excitation can occur at mass ratios $`\lesssim 1/3`$. In other words, of those systems with say $`P_{\mathrm{orb}}\gtrsim 3`$ hrs, only those with more massive white dwarfs will exhibit superhumps. This then is distinct from any effect caused by systematic variation in white dwarf masses in the cataclysmic variable population as a whole.
## 5 Conclusions
We have compared the theoretical predictions for the precession rates of eccentric discs with simulation results and with observed superhump periods. Comparison with disc simulations showed that the dynamical equation provided a useful upper limit for the disc precession rate of systems with mass ratios $`\lesssim 1/4`$.
Previous papers showing good agreement between observations and a simple dynamical model for precession are based on incorrectly calculated dynamical expressions. The consequence of using these models is that very large mass ratios are predicted for the long period superhumpers. At such large mass ratios the resonance lies well beyond the truncation radius of the disc and eccentricity cannot be excited.
We show that when the correct expression for dynamical precession is used, there is poor agreement with the observations. This can be seen even though there is considerable uncertainty in estimating the mass ratio of a system of given orbital period.
The observed superhump periods of the eclipsing systems OY Car, Z Cha and HT Cas are shown to be significantly less than predicted by the dynamical theory, demonstrating that the retrograde contribution of pressure forces is important.
The inclusion of a retrograde pressure contribution to the precession rate not only improves the fit to the data but also requires that long period superhumpers have smaller mass ratios than previously thought. This in turn is more consistent with the eccentricity being generated at the $`3:1`$ Lindblad resonance.
## Acknowledgements
The author would like to thank Brian Warner for many useful discussions; and Steve Lubow and Jim Pringle for helpful remarks. JRM is employed on a grant funded by the Leverhulme Trust.
# Magnetic order in ferromagnetically coupled spin ladders
## I Introduction
One of the main topics in condensed matter physics in recent years has been the study of low-dimensional antiferromagnetic spin systems. The strong interest in this field has been sparked by the realization that CuO<sub>2</sub> planes play an essential role in high-T<sub>c</sub> superconductors, which was followed by the appearance of several compounds characterized by the presence of strong electronic correlations. These compounds include many cuprates, nickelates, vanadates and manganites, and display important and unique properties. In most of them the proximity of low-dimensional antiferromagnetic (AF) phases is the key to understanding these properties.
At the same time, the concept of the spin ladder, originally introduced to explain the presence of a spin gap in (VO)<sub>2</sub>P<sub>2</sub>O<sub>7</sub> and later in layered cuprates like Sr<sub>n-1</sub>Cu<sub>n+1</sub>O<sub>2n</sub>, became an important theoretical tool to understand the behavior of strongly correlated systems. The physics of the two-leg spin ladder is characterized by the existence of a singlet-triplet spin gap and an exponential decay of correlation functions. The ground state, which can be thought of to a good approximation as a product of singlets living on the rungs, is now well understood. However, the above mentioned cuprates, the important case of Sr<sub>14</sub>Cu<sub>24</sub>O<sub>41</sub>, as well as many other compounds like CaV<sub>2</sub>O<sub>5</sub>, actually contain layers of coupled two-leg ladders. These ladders are coupled by frustrated interactions in a trellis lattice, which makes their study with analytical or numerical techniques quite difficult. In principle, in the absence of frustration, a reduction of the gap as the interladder coupling increases is expected. Eventually, the system becomes gapless at a quantum critical point (QCP) and for larger coupling it behaves essentially as a two-dimensional (2D) spin-1/2 square antiferromagnet. Much less is known for the trellis lattice, although a Schwinger boson study suggests a transition from a spin liquid to a possible spiral order as the interladder coupling (ILC) increases. Quantum Monte Carlo (QMC) studies of this frustrated system are hampered by the “minus sign problem” and the possibility of using this powerful technique is severely reduced. In this sense, one of the objectives of the present work is to study a model in which the frustrated AF interladder couplings of the trellis lattice are replaced by much simpler ferromagnetic (FM) couplings in a square lattice.
We expect that some of the physics of the frustrated system can be captured by this effective simplified model. Besides, there are compounds which consist of FM coupled ladders, like SrCu<sub>2</sub>O<sub>3</sub>. The results of the present work could be relevant to other FM coupled gapped systems like the dimerized chains in (VO)<sub>2</sub>P<sub>2</sub>O<sub>7</sub>. Previous studies have compared the behavior of AF and FM frustrated and nonfrustrated coupled gapless spin systems (spin chains).
Alternatively, a renewed interest in coupled ladders comes from the high-T<sub>c</sub> cuprate superconductors themselves. A number of recent experiments, mainly neutron scattering studies, indicate the presence of incommensurate spin correlations, which in turn have been interpreted as coming from the segregation of charge carriers into 1D domain walls or “stripes”, leaving the regions between them as undoped antiferromagnets. There are several theoretical scenarios that have predicted or that attempt to explain this stripe order, but the origin of this order and its relation to superconductivity is still controversial. In particular, the problem of stripe formation in a 2D microscopic model like the t-J or Hubbard models is extremely difficult to study with analytical or numerical techniques. In principle, the inclusion of charge and spin degrees of freedom is essential for the understanding of this problem. However, it has been suggested that, assuming the presence of a stripe structure, it is very instructive to study its magnetic properties by using a model with spin degrees of freedom only. In this simplified model, the AF insulating regions between the charged stripes are considered as $`n`$-leg isotropic ladders coupled by an effective interaction. In one such study, following the initial picture from Ref. , the insulating regions were considered as 3-leg ladders coupled by AF interactions. However, a numerical study of the 2D t-J model, as well as early studies of charge inhomogeneities in Hubbard and t-J models, indicates the formation of “bond-centered” stripes, i.e. doped two-leg ladders alternating with undoped two-leg ladders. Coupled spin two-leg ladders were also studied, but their relevance to the physics of Cu-O planes is limited since they miss the essential ingredient that the magnetic order of the AF slices is $`\pi `$-phase shifted, as emphasized in Ref. .
Hence, the second motivation for the present work comes from the assumption that this $`\pi `$-phase-shift between ladders can be modeled by taking a ferromagnetic coupling between them.
In summary, the purpose of the present work is to study ferromagnetically coupled two-leg ladders and compare their behavior with the case of AF coupling. If this model is considered as an approximation of AF systems in the trellis lattice, the FM coupling is an effective interaction coming from the frustrated interactions between ladders. If this model is considered as an approach to the stripe phase of the cuprates, the FM interaction comes from a collective effect determined by the competition of charge and spin degrees of freedom. In both cases, the results of the present study lead to predictions which can be tested experimentally. We use essentially numerical techniques: QMC (world-line algorithm), which allows us to reach temperatures low enough to capture ground state properties, and exact diagonalization with the Lanczos algorithm (LD), complemented by the continued fraction formalism to compute dynamical properties.
## II Quasi-one dimensional study.
To gain insight about the effects of FM interladder couplings we start from the case of FM coupled AF dimers which are the simplest systems with a spin gap. We are thus led to 1D or quasi-1D systems which are much easier to study from the numerical point of view. Besides, systems with a random distribution of AF and FM couplings have received some theoretical attention and their possible physical realization in SrCuPt<sub>1-p</sub>Ir<sub>p</sub>O<sub>6</sub> has been discussed. The Hamiltonian is given by:
$`\mathcal{H}=\mathcal{H}_{dimer}+\mathcal{H}_{inter}`$ (1)
where:
$`\mathcal{H}_{dimer}`$ $`=`$ $`J{\displaystyle \sum _a}𝐒_{a;1}\cdot 𝐒_{a;2},`$ (2)
$`\mathcal{H}_{inter}`$ $`=`$ $`{\displaystyle \sum _{a,b,i,j}}J_{inter,a,b,i,j}𝐒_{a;i}\cdot 𝐒_{b;j},`$ (3)
where $`a`$ is a dimer index and $`i=1,2`$ labels the sites in a dimer. $`J=1`$ for simplicity. Periodic boundary conditions are imposed in the longitudinal direction. There are several ways of coupling dimers. We consider here the simplest case of dimers forming a FM-AF alternating chain (Fig. 1(a)). Another possibility is that of dimers forming a two-leg ladder with FM leg and AF rung interactions. This second case has already been studied numerically but it is not relevant for the problems we wish to address. Besides, we consider the case of FM-coupled AF plaquettes instead of dimers in which case we have a two-leg ladder with FM-AF alternating interactions along the legs (Fig. 1(b)).
Our first concern in this section is the behavior of the spin gap starting from the situation of isolated dimers or plaquettes. Using exact diagonalization we computed the spin gap $`\mathrm{\Delta }`$ by subtracting the energies in the $`S=0`$ and $`S=1`$ subspaces on finite clusters with up to 24 sites. The extrapolation to the bulk limit was done using the law $`\mathrm{\Delta }_{\infty }+b\mathrm{exp}(-L/L_0)/L`$. The final result is shown in Fig. 2. We notice that in the limit $`J_{inter}\rightarrow -\infty `$ we
recover the cases of a spin-1 chain for the coupled dimer system and a spin-1 ladder for the coupled plaquette one. It is easy to realize (by solving a two-dimer system and a two-site spin-1 system) that the gap for the spin-1 chain is four times larger than the gap obtained by the coupled dimer system when $`J_{inter}\rightarrow -\infty `$, and the gap for the spin-1 ladder is twice as large as that of the coupled plaquette system in this limit. Thus, in the former case we obtain a gap $`\mathrm{\Delta }_{cd}\times 4=0.410`$, coincident with the value already reported in the literature. For the spin-1 ladder we would obtain a gap $`\mathrm{\Delta }_{cp}\times 2=0.290\pm 0.008`$, smaller than the gap for the S=1 chain as predicted theoretically. Qualitatively, the important feature here is that the gap decreases monotonically from the isolated dimers (or plaquettes) case as $`J_{inter}\rightarrow -\infty `$. In the case of FM coupled dimers, a monotonic behavior could be guessed from the fact that this system continuously evolves towards the valence-bond-solid picture of a spin-1 chain in that limit.
On the other hand, the behavior of the coupled plaquettes system is not obviously predictable.
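As a concrete illustration of this kind of calculation, here is a minimal exact-diagonalization sketch (our own toy code, not the one used for Fig. 2) for the FM-AF alternating chain of Fig. 1(a): it builds the Heisenberg Hamiltonian in fixed-$`S_z`$ sectors of a small periodic chain and returns the singlet-triplet gap. With only 8 sites the result is of course not converged to the bulk values quoted above.

```python
import numpy as np
from itertools import combinations

def heisenberg_gap(N, J_dimer=1.0, J_inter=-0.5):
    """Singlet-triplet gap of a periodic chain of AF dimers (coupling
    J_dimer > 0) joined by alternating bonds J_inter (< 0 means FM)."""
    bonds = [((i, (i + 1) % N), J_dimer if i % 2 == 0 else J_inter)
             for i in range(N)]

    def sector_ground_energy(n_up):
        # basis of states with a fixed number of up spins (fixed Sz)
        states = [sum(1 << i for i in occ)
                  for occ in combinations(range(N), n_up)]
        index = {s: k for k, s in enumerate(states)}
        H = np.zeros((len(states), len(states)))
        for k, s in enumerate(states):
            for (i, j), J in bonds:
                if ((s >> i) & 1) == ((s >> j) & 1):
                    H[k, k] += 0.25 * J               # SzSz, aligned pair
                else:
                    H[k, k] -= 0.25 * J               # SzSz, anti-aligned
                    flipped = s ^ (1 << i) ^ (1 << j)
                    H[index[flipped], k] += 0.5 * J   # spin-flip term
        return np.linalg.eigvalsh(H)[0]

    return sector_ground_energy(N // 2 + 1) - sector_ground_energy(N // 2)

print(heisenberg_gap(8, 1.0, 0.0))    # isolated dimers: gap = J = 1
print(heisenberg_gap(8, 1.0, -0.5))   # FM coupling reduces the gap
```

For $`J_{inter}=0`$ the gap is exactly the singlet-triplet splitting of a single dimer, which provides a simple check of the code.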
The second point we want to examine is the behavior of the excitations of these systems, in particular the $`S=1`$ excitations, as can be measured by neutron scattering experiments. For this purpose, using conventional Lanczos techniques with the continued fractions formalism, we have computed the dynamical structure function ($`zz`$-component) $`S(𝐪,\omega )`$, which is shown in Fig. 3. Already for the simplest case of coupled dimers (Fig. 3(a),(b)) one can see an interesting feature which we will observe also for the coupled ladders case in the next section. The position of the gap, which is at $`q=\pi `$ for AF dimerized chains, shifts to $`\pi /2`$ for the case of FM coupled dimers. A similar behavior is observed for coupled plaquettes (Fig. 3(c),(d)), where the lowest energy peak changes from $`(q_x,q_y)=(\pi ,\pi )`$ to $`(\pi /2,\pi )`$ ($`x`$ is the longitudinal direction) by switching from AF to FM interplaquette couplings. In both cases, there is a transfer of spectral weight from the original AF peak to the FM one. This behavior is independent of the absolute value of $`J_{inter}`$, except for finite size effects.
## III Coupled ladders.
We have now arrived at the central part of this work. The Hamiltonian for the system we consider now (Fig. 1(c)) is essentially the same as (3) which we rewrite here for clarity:
$`\mathcal{H}=\mathcal{H}_{ladder}+\mathcal{H}_{inter}`$ (4)
where:
$`\mathcal{H}_{ladder}`$ $`=`$ $`J{\displaystyle \sum _{a,l,i}}𝐒_{a;l,i}\cdot 𝐒_{a;l,i+1}+J{\displaystyle \sum _{a,i}}𝐒_{a;1,i}\cdot 𝐒_{a;2,i},`$ (5)
$`\mathcal{H}_{inter}`$ $`=`$ $`J_{inter}{\displaystyle \sum _{a,i}}𝐒_{a;2,i}\cdot 𝐒_{a+1;1,i},`$ (6)
where $`a`$ stands now for a ladder index and $`l=1,2`$ labels the two legs in a ladder. We have taken the case of an isotropic ladder for simplicity. $`J`$ is again taken as the unit of energy.
We start with the study of the ground state energy for both FM and AF coupled ladders. To this purpose we have performed standard QMC simulations for $`L\times L`$ lattices with $`L=4,6,8,12,`$ and 16. Periodic boundary conditions in both directions are considered. For each lattice and set of coupling constants, we took $`\mathrm{T}/J=0.125,0.100`$ and 0.07, and the Trotter number $`M`$ at each temperature such that the error due to the time discretization is comparable to or smaller than the statistical error. Typically, $`M=140`$ for $`\mathrm{T}/J=0.07`$. We made runs of up to $`10^6`$ MC steps for both thermalization and measurement. We computed the energy in the total $`S_z=0`$ and $`S_z=1`$ subspaces ($`E_0`$ and $`E_1`$ respectively). For each set of coupling constants, we extrapolated the corresponding energies per site $`e_0`$ and $`e_1`$ to the bulk limit using the law $`e_{\infty }+b\mathrm{exp}(-L/L_0)/L^2`$ in the gapped region and $`e_{\infty }+b/L^3`$ in the gapless case. We obtained very close values for $`e_{\infty ,0}`$ and $`e_{\infty ,1}`$, which gives an indication of the good quality of the fits. The results for the ground state energy for FM and AF ladder couplings are shown in Fig. 4. An interesting feature can be noticed: the energies for both signs of $`J_{inter}`$ are degenerate within numerical errors for $`|J_{inter}|<0.3`$. This situation corresponds to a physics governed mainly by singlet dimers on the ladder rungs, and in this case the sign of the coupling between these relatively isolated dimers is irrelevant. On the other hand, for $`|J_{inter}|>0.3`$, for AF interladder coupling it is possible that the singlets delocalize from a single ladder and finally form a “resonant valence bond” state or that the singlet-triplet excitations be replaced by gapless magnon excitations.
Previous numerical studies precisely locate at $`J_{inter}\simeq 0.3`$ the position of the QCP at which the ladder-like spin liquid is replaced by a long range 2D-like AF order, thus choosing the second possibility. The important point we want to suggest is that in both cases the energy of the system would be lower than for the case of FM ILC, where the singlets on the ladders still persist. To illustrate this scenario we have computed on the $`16\times 16`$ cluster the spin-spin correlation functions $`S(𝐫)=\langle S_0^zS_𝐫^z\rangle `$ for $`𝐫=(1,0)`$ (leg direction), (0,1) (rung), and (1,1), inside a ladder, and $`𝐫=(0,1)`$ between two ladders. These correlations, normalized in such a way that $`S(0)=1`$, are shown in absolute value in the inset of Fig. 4 as a function of $`|J_{inter}|`$. The differences between FM and AF ILC appear in the (0,1) (rung) correlations, which remain stronger in the former case, and most importantly, in the interladder
(0,1) correlation which increases faster in the latter. Of course, this correlation is negative (positive) in the AF (FM) case.
This indication of a difference between the two ILC cases can be traced to a more intimate level which would also provide experimentally measurable features. To this end, let us examine now the static structure factor $`S(𝐪)`$ obtained by Fourier transforming the spin-spin correlations obtained by QMC at the lowest temperature attained. In the case of AF ILC the peak is at $`(\pi ,\pi )`$ in all the range from the isolated ladder, which corresponds to the gapped “quantum disordered” region, to the isotropic square lattice, but the extrapolation of its intensity to the bulk limit becomes nonzero only for $`J_{inter}>J_{inter,cr}`$, i.e. in the “renormalized classical” region. For FM ILC, as shown in Fig. 5(a) for $`J_{inter}=-0.2`$, the situation is qualitatively different. The peak in $`S(𝐪)`$ is now at $`(\pi ,\pi /2)`$, a feature which is similar to the one seen in the simpler cases of coupled chains and plaquettes. This behavior has been found for all clusters considered, and for all $`J_{inter}<0`$ except for finite size effects: the smaller $`|J_{inter}|`$ the larger the size needed to reach the bulk behavior. This is illustrated for the $`4\times 4`$ cluster.
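The location of this peak can be understood from a classical caricature (ours, not part of the QMC calculation): AF legs, AF rungs and FM interladder bonds select a reference spin pattern that repeats as $`+,-,-,+`$ transverse to the ladders, i.e. with period 4, whose Fourier transform peaks at $`q_y=\pi /2`$.

```python
import numpy as np

L = 8                                      # linear size, multiple of 4
x, y = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
rung = np.array([1.0, -1.0, -1.0, 1.0])    # +,-,-,+ pattern across ladders
s = (-1.0) ** x * rung[y % 4]              # AF along the legs (x direction)

Sq = np.abs(np.fft.fft2(s)) ** 2 / L**2    # static structure factor
kx, ky = np.unravel_index(int(np.argmax(Sq)), Sq.shape)
q_peak = (2.0 * np.pi * int(kx) / L, 2.0 * np.pi * int(ky) / L)
print(q_peak)                              # peak sits at (pi, pi/2)
```

The symmetry-related peak at $`(\pi ,3\pi /2)`$ carries equal weight; `argmax` simply returns the first of the two.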
A highly nontrivial behavior is found if the temperature dependence is analyzed. In Fig. 5(b) the evolution of the structure factor for the $`20\times 20`$ cluster and $`J_{inter}=-0.2`$ is shown at $`\mathrm{T}=0.2,0.3,0.4,0.5,0.6,0.8,1.0,1.5`$ and 2.0. In the high temperature region ($`\mathrm{T}>0.8`$) the peak is located at $`(\pi ,\pi )`$. As T is lowered the peak starts to shift towards the zero temperature peak $`(\pi ,\pi /2)`$, which is reached at $`\mathrm{T}=0.3`$. We found almost no variation with cluster size of these two crossover temperatures at this value of $`J_{inter}`$. The fact that the peak of the magnetic structure factor becomes incommensurate only at a finite temperature is reminiscent of the behavior first found in La<sub>1.6-x</sub>Nd<sub>0.4</sub>Sr<sub>x</sub>CuO<sub>4</sub> ($`x0.8`$), where a charge-stripe order develops at $`\mathrm{T}_\mathrm{c}=65\mathrm{K}`$ followed by a spin-stripe order at a lower temperature $`\mathrm{T}_\mathrm{s}=50\mathrm{K}`$.
An important quantity to compute is the bulk limit of the peak of the structure factor. This quantity is related to the behavior of spin-spin correlations at the maximum distance along the ladder direction ($`x`$-axis), along the direction transversal to the ladders ($`y`$-axis) and at the maximum distance of the 2D cluster. In the case of the AF square lattice, this latter quantity is proportional in the bulk limit to the squared staggered magnetization and it should be equal in that limit to the static structure factor at momentum $`(\pi ,\pi )`$. The finite size scaling of $`S(\pi ,\pi /2)`$ is shown in Fig. 6(a) for $`J_{inter}=-0.2`$ and $`-0.6`$. We have attempted extrapolations to the bulk limit using both exponential and power laws. Due to the fact that clusters with $`L=4,8,16`$ and $`L=6,12`$ belong to two different sets (which is more noticeable for large values of $`|J_{inter}|`$), the extrapolation procedure is not very reliable. However, as shown in Fig. 6(a), one can conclude that $`S(\pi ,\pi /2)`$ is zero for $`J_{inter}=-0.2`$ and nonzero for $`J_{inter}=-0.6`$. The finite size behavior of the spin-spin correlation at the maximum distance along the ladder direction, $`S_{max,x}`$, which is the one with smaller errors in our simulations, is similar to the one for $`S(\pi ,\pi /2)`$ and the extrapolated values are also zero (nonzero) for $`J_{inter}=-0.2`$ ($`J_{inter}=-0.6`$).
This crossover in the behavior of $`S(\pi ,\pi /2)`$ as a function of $`J_{inter}`$ raises the question of the existence of a point analogous to the QCP in the AF ILC case. In the limit $`J_{inter}\rightarrow -\infty `$ the coupled ladder system becomes equivalent to a system of AF coupled spin-1 chains, where a finite coupling is necessary to change to a gapless regime. To answer this question, let us now examine the behavior of the singlet-triplet spin gap as a function of $`J_{inter}`$. Although this is not a convenient quantity to compute with QMC, since it involves a difference between absolute values of the energies and then for large clusters the error becomes comparable to its value, we could get an indication of the presence or absence of a gapless region. The gap was computed for finite clusters and then extrapolated to the bulk using the law $`\mathrm{\Delta }_{\infty }+b\mathrm{exp}(-L/L_0)/L`$ (or $`a/L^2`$ for the gapless case). The results are depicted in Fig. 6(b). For the AF case, a rapid decrease of $`\mathrm{\Delta }`$ as $`J_{inter}`$ is increased can be seen, confirming earlier predictions and calculations. The gap vanishes at the QCP, $`J_{inter,cr}\simeq 0.3`$. For FM ILC we also obtain a monotonically decreasing behavior, similar to the one found in the previous section for coupled dimers and plaquettes. The gap seems larger than for AF ILC but it could vanish at $`J_{inter}\simeq -0.4`$ within error bars. The calculation of other quantities like correlation lengths, and/or the use of more powerful techniques, would be necessary to obtain a reliable estimation of $`J_{inter,cr}`$.
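The fitting step of such finite-size extrapolations is easy to reproduce. The sketch below applies the gap law used in the text to synthetic data (generated from the law itself plus noise, purely to illustrate the procedure; all parameter values are made up):

```python
import numpy as np
from scipy.optimize import curve_fit

def gap_law(L, delta_inf, b, L0):
    # the finite-size law used in the text for the spin gap
    return delta_inf + b * np.exp(-L / L0) / L

L = np.array([4.0, 6.0, 8.0, 12.0, 16.0])   # cluster sizes as in the text
rng = np.random.default_rng(0)
gap = gap_law(L, 0.30, 2.0, 6.0) + rng.normal(0.0, 1e-3, L.size)

popt, pcov = curve_fit(gap_law, L, gap, p0=(0.2, 1.0, 5.0))
print(f"extrapolated gap = {popt[0]:.3f} +/- {np.sqrt(pcov[0, 0]):.3f}")
```

With only five sizes and three parameters the fit is well determined here only because the data follow the law by construction; for real QMC data the cluster-set issue mentioned above makes the extrapolation far less reliable.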
The final part of our study which can eventually lead to a deeper understanding of the excitations involved in this system is the analysis of the dynamical structure factor $`S(𝐪,\omega )`$ which has been done with LD as in the previous section. In this case, we have to limit ourselves to somewhat smaller clusters but we hope that the qualitative features we found will survive in the bulk limit. Results obtained for the $`4\times 4`$ cluster are shown in Fig. 7. For AF ILC (Fig. 7(a)) the peak in $`S(𝐪,\omega )`$ is located at $`(\pi ,\pi )`$, as expected in the bulk limit for an AF order. When a FM ILC is involved (Fig. 7(b)) it can be seen that considerable spectral weight is transferred to the peak at $`(\pi ,\pi /2)`$, which becomes also the lowest energy excitation. Results for the $`6\times 4`$ cluster are quite similar and it is quite reassuring that these results are consistent with the ones obtained with QMC and shown in Fig. 5.
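For readers unfamiliar with the continued fraction formalism mentioned here and in Section II, the sketch below shows the generic Lanczos scheme for a spectral function of the $`S(𝐪,\omega )`$ type. It is our own illustration on a small dense symmetric matrix standing in for the Hamiltonian, with a random vector playing the role of $`S_𝐪^z`$ applied to the ground state; it is not the production code.

```python
import numpy as np

def lanczos_spectral(H, phi, omegas, eta=0.5, m=60):
    """A(w) = -(1/pi) Im <phi|(w + E0 + i*eta - H)^(-1)|phi>
    via Lanczos tridiagonalization and a continued fraction."""
    norm2 = float(phi @ phi)
    v, v_prev, beta = phi / np.sqrt(norm2), np.zeros_like(phi), 0.0
    a, b = [], []
    for _ in range(min(m, H.shape[0])):
        w = H @ v - beta * v_prev
        alpha = float(v @ w)
        w = w - alpha * v
        a.append(alpha)
        beta = float(np.linalg.norm(w))
        b.append(beta)
        if beta < 1e-10:                 # Krylov space exhausted
            break
        v_prev, v = v, w / beta

    E0 = np.linalg.eigvalsh(H)[0]        # ground state energy
    A = np.empty(len(omegas))
    for k, omega in enumerate(omegas):
        z = omega + E0 + 1j * eta
        g = z - a[-1]                    # evaluate the fraction bottom-up
        for alpha, beta in zip(a[-2::-1], b[-2::-1]):
            g = z - alpha - beta**2 / g
        A[k] = -(norm2 / g).imag / np.pi
    return A

rng = np.random.default_rng(1)
M = rng.normal(size=(200, 200))
H = (M + M.T) / 2.0                      # dense symmetric stand-in
phi = rng.normal(size=200)
omegas = np.linspace(-5.0, 45.0, 501)
A = lanczos_spectral(H, phi, omegas)
```

The approximant is a positive sum of Lorentzians whose total weight equals the norm of the initial vector, which provides a convenient sanity check even when the number of Lanczos steps is far smaller than the Hilbert space dimension.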
## IV Conclusions
In summary, we have numerically obtained some exact (except for extrapolation procedures) results for ferromagnetically coupled systems, in particular two-leg ladders. Our main results are embodied in Fig. 5 and Fig. 7, i.e. at zero temperature the peak of the structure factor is located at $`(\pi ,\pi /2)`$ and it corresponds to the lowest energy excitation. Fig. 6(a) also suggests finite values of this peak of the structure factor and of the spin-spin correlation at the maximum distance along the ladder axis in the bulk limit for strong $`J_{inter}`$. There are also two crossover temperatures, a higher one at which the peak starts to shift away from $`(\pi ,\pi )`$ and a lower one at which the peak reaches its zero temperature position. Besides the intrinsic interest for the theoretical understanding of spin-1/2 ladder systems, we will try to emphasize in this section their possible relevance for realistic compounds containing ladders. As mentioned in the introduction, we should consider in the first place ladder compounds like SrCu<sub>2</sub>O<sub>3</sub>, Sr<sub>14</sub>Cu<sub>24</sub>O<sub>41</sub> (which upon Ca-doping and under pressure becomes superconducting) and CaV<sub>2</sub>O<sub>5</sub>. In these compounds, the ladders are coupled forming a trellis lattice. In the former case the interladder couplings are actually ferromagnetic. In the others, due to the frustrated nature of the AF ILC, one could speculate that to some extent they could be modeled effectively by FM couplings. For all these compounds, then, we predict that neutron scattering experiments would show peaks at $`(q_x,q_y)=(\pi ,\pi /2)`$, where the $`x`$ ($`y`$) axis is in the direction parallel (perpendicular) to the ladders.
As also suggested in the introduction, our results could be related to the striped structure which dynamically appears in the Cu-O planes of high-T<sub>c</sub> cuprates. In this case we are able to trace the origin of the neutron scattering peaks observed away from $`(\pi ,\pi )`$ to an effective ferromagnetic interaction between $`\pi `$-shifted insulating spin ladders. In fact, this behavior can be observed already for the simplest case of ferromagnetically coupled AF dimers, as shown in Section II. In this case, it is easy to verify on small chains by LD or QMC that the peak moves continuously from $`q=\pi /2`$ to $`\pi `$ as some of the FM couplings are replaced by AF ones. If this picture could translate to coupled ladders, then one would be led to the conclusion that some of the spin two-leg ladders are $`\pi `$-shifted while others are in phase in order to reproduce the experimentally observed incommensurate peaks. Another feature we want to emphasize is the temperature evolution of the structure factor (Fig. 5(b)): the peak at $`(\pi ,\pi /2)`$ is reached at a finite temperature and there is a range of temperatures in which an incommensurate peak is present. As mentioned in the previous section, this is reminiscent of the order in which the charge and spin stripes appear in the cuprates as the temperature is decreased. The fact that the spin gap possibly remains finite for somewhat strong values of $`|J_{inter}|`$ is also interesting for the stripe scenario of cuprates, although in this case our results for the bulk limit are affected by large error bars. Of course, the question of to what extent this model of FM coupled ladders could apply to this scenario should come from a detailed comparison with more realistic models like the 2D t-J or Hubbard models.
###### Acknowledgements.
We wish to acknowledge many interesting discussions with A. Dobry, A. Greco and A. Trumper. We thank the Supercomputer Computations Research Institute (SCRI) and the Academic Computing and Network Services at Tallahassee (Florida) for allowing us to use their computing facilities.
# Neutrino Mass Anarchy
## Abstract
What is the form of the neutrino mass matrix which governs the oscillations of the atmospheric and solar neutrinos? Features of the data have led to a dominant viewpoint where the mass matrix has an ordered, regulated pattern, perhaps dictated by a flavor symmetry. We challenge this viewpoint, and demonstrate that the data are well accounted for by a neutrino mass matrix which appears to have random entries.
preprint: UCB-PTH-99/51, LBNL-44551
1 Neutrinos are the most poorly understood among known elementary particles, and have important consequences in particle and nuclear physics, astrophysics and cosmology. Special interest is devoted to neutrino oscillations, which, if they exist, imply physics beyond the standard model of particle physics, in particular neutrino masses. The SuperKamiokande data on the angular dependence of the atmospheric neutrino flux provides strong evidence for neutrino oscillations, with $`\nu _\mu `$ disappearance via large, near maximal mixing, and $`\mathrm{\Delta }m_{atm}^2\sim 10^{-3}`$ eV<sup>2</sup>. Several measurements of the solar neutrino flux can also be interpreted as neutrino oscillations, via $`\nu _e`$ disappearance. While a variety of $`\mathrm{\Delta }m_{\odot }^2`$ and mixing angles fit the data, in most cases $`\mathrm{\Delta }m_{\odot }^2`$ is considerably lower than $`\mathrm{\Delta }m_{atm}^2`$, and even in the case of the large angle MSW solution, the data typically require $`\mathrm{\Delta }m_{\odot }^2\lesssim 0.1\mathrm{\Delta }m_{atm}^2`$. The neutrino mass matrix apparently has an ordered, hierarchical form for the eigenvalues, even though it has a structure allowing large mixing angles.
All attempts at explaining atmospheric and solar neutrino fluxes in terms of neutrino oscillations have resorted to some form of ordered, highly structured neutrino mass matrix. These structures take the form $`M_0+ϵM_1+\cdots `$, where the zeroth order mass matrix, $`M_0`$, contains the largest non-zero entries, but has many zero entries, while the first order correction terms, $`ϵM_1`$, have their own definite texture, and are regulated in size by a small parameter $`ϵ`$. Frequently the pattern of the zeroth order matrix is governed by a flavor symmetry, and the hierarchy of mass eigenvalues results from carefully-chosen, small, symmetry-breaking parameters, such as $`ϵ`$. Such schemes are able to account for both a hierarchical pattern of eigenvalues and order unity, sometimes maximal, mixing. Mass matrices have also been proposed where precise numerical ratios of different entries lead to the desired hierarchy and mixing.
In this letter we propose an alternative view. This new view selects the large angle MSW solution of the solar neutrino problem, which is preferred by the day to night time flux ratio at the $`2\sigma `$ level. While the masses and mixings of the charged fermions certainly imply regulated, hierarchical mass matrices, we find the necessity for an ordered structure in the neutrino sector to be less obvious. Large mixing angles would result from a random, structureless matrix, and such large angles could be responsible for solar as well as atmospheric oscillations. Furthermore, in this case the hierarchy of $`\mathrm{\Delta }m^2`$ need only be an order of magnitude, much less extreme than for the charged fermions. We therefore propose that the underlying theory of nature has dynamics which produces a neutrino mass matrix which, from the viewpoint of the low energy effective theory, displays anarchy: all entries are comparable, no pattern or structure is easily discernible, and there are no special precise ratios between any entries. Certainly the form of this mass matrix is not governed by approximate flavor symmetries.
There are four simple arguments against such a proposal:
* The neutrino sector exhibits a hierarchy with $`\mathrm{\Delta }m_{\odot }^2\sim 10^{-5}`$–$`10^{-3}\text{ eV}^2`$ for the large mixing angle solution, while $`\mathrm{\Delta }m_{atm}^2\sim 10^{-3}`$–$`10^{-2}\text{ eV}^2`$,
* Reactor studies of $`\overline{\nu }_e`$ at the CHOOZ experiment have indicated that mixing of $`\nu _e`$ in the $`10^{-3}\text{ eV}^2`$ channel is small, requiring at least one small angle,
* Even though large mixing would typically be expected from anarchy, maximal or near maximal mixing, as preferred by SuperKamiokande data, would be unlikely,
* $`\nu _e,\nu _\mu `$ and $`\nu _\tau `$ fall into doublets with $`e_L`$, $`\mu _L`$ and $`\tau _L`$, respectively, whose masses are extremely hierarchical ($`m_e:m_\mu :m_\tau \sim 10^{-4}:10^{-1}:1`$).
By studying a sample of randomly generated neutrino mass matrices, we demonstrate that each of these arguments is weak, and that, even when taken together, the possibility of neutrino mass anarchy still appears quite plausible.
2 We have performed an analysis of a sample of random neutrino matrices. We investigated three types of neutrino mass matrices: Majorana, Dirac and seesaw. For the Majorana type, we considered $`3\times 3`$ symmetric matrices with 6 uncorrelated parameters. For the Dirac type, we considered $`3\times 3`$ matrices with 9 uncorrelated parameters. Lastly, for the seesaw type, we considered matrices of the form $`M_DM_{RR}^{-1}M_D^T`$, where $`M_{RR}`$ is of the former type and $`M_D`$ is of the latter. We ran one million sample matrices with independently generated elements, each with a uniform distribution in the interval $`[-1,1]`$, for each matrix type: Dirac, Majorana and seesaw.
To check the robustness of the analysis, we ran smaller sets using a distribution with the logarithm base ten uniformly distributed in the interval $`[-1/2,1/2]`$ and with random sign. We further checked both of these distributions but with a phase uniformly distributed in $`[0,2\pi ]`$. Introducing a logarithmic distribution and phases did not significantly affect our results (within a factor of two), and hence we discuss only matrices with a linear distribution and real entries.
We make no claim that our distribution is somehow physical, nor do we make strong quantitative claims about the confidence intervals of various parameters. However, if the basic prejudices against anarchy fail in these simple distributions, we see no reason to cling to them.
In each case we generated a random neutrino mass matrix, which we diagonalized with a matrix $`U`$. We then investigated the following quantities:
$`R`$ $`\equiv `$ $`\mathrm{\Delta }m_{12}^2/\mathrm{\Delta }m_{23}^2,`$ (1)
$`s_C`$ $`\equiv `$ $`4|U_{e3}|^2(1-|U_{e3}|^2),`$ (2)
$`s_{atm}`$ $`\equiv `$ $`4|U_{\mu 3}|^2(1-|U_{\mu 3}|^2),`$ (3)
$`s_{\odot }`$ $`\equiv `$ $`4|U_{e2}|^2|U_{e1}|^2,`$ (4)
where $`\mathrm{\Delta }m_{12}^2`$ is the smallest splitting and $`\mathrm{\Delta }m_{23}^2`$ is the next largest splitting. What ranges of values for these parameters should we demand from our matrices? We could require they lie within the experimentally preferred region. However, as experiments improve and these regions contract, the probability that a random matrix will satisfy this goes to zero. Thus we are instead interested in mass matrices that satisfy certain qualitative properties. For our numerical study we select these properties by the specific cuts
* $`R<1/10`$ to achieve a large hierarchy in the $`\mathrm{\Delta }m^2`$.
* $`s_C<0.15`$ to enforce small $`\nu _e`$ mixing through this $`\mathrm{\Delta }m^2`$.
* $`s_{atm}>0.5`$ for large atmospheric mixing.
* $`s_{\odot }>0.5`$ for large solar mixing.
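As a rough numerical sketch of this kind of scan (the sample size, the random seed, and the convention of labeling states by increasing $`|mass|`$ are illustrative assumptions, not taken from the paper), one can generate seesaw matrices with uniform $`[-1,1]`$ entries and apply the four cuts:

```python
# Sketch: estimate how often random ("anarchic") seesaw mass matrices pass
# the qualitative cuts listed above.  Uniform [-1, 1] entries follow the text;
# sample size, seed, and mass ordering are arbitrary choices for illustration.
import numpy as np

def sample_seesaw(rng):
    """One random seesaw matrix M_D M_RR^{-1} M_D^T with entries in [-1, 1]."""
    m_d = rng.uniform(-1.0, 1.0, (3, 3))
    m_rr = rng.uniform(-1.0, 1.0, (3, 3))
    m_rr = (m_rr + m_rr.T) / 2.0          # the Majorana block is symmetric
    return m_d @ np.linalg.inv(m_rr) @ m_d.T

def observables(m):
    """Return (R, s_C, s_atm, s_sol) of eqs. (1)-(4) for a real symmetric m."""
    w, u = np.linalg.eigh(m)
    order = np.argsort(np.abs(w))         # label states by increasing |mass|
    w, u = w[order], u[:, order]
    m2 = w ** 2
    r = abs(m2[1] - m2[0]) / abs(m2[2] - m2[1])
    s_c = 4.0 * u[0, 2] ** 2 * (1.0 - u[0, 2] ** 2)
    s_atm = 4.0 * u[1, 2] ** 2 * (1.0 - u[1, 2] ** 2)
    s_sol = 4.0 * u[0, 1] ** 2 * u[0, 0] ** 2
    return r, s_c, s_atm, s_sol

rng = np.random.default_rng(0)
n = 5000
passed = 0
for _ in range(n):
    r, s_c, s_atm, s_sol = observables(sample_seesaw(rng))
    if r < 0.1 and s_c < 0.15 and s_atm > 0.5 and s_sol > 0.5:
        passed += 1
print(f"fraction of seesaw samples passing all four cuts: {passed / n:.3f}")
```

With these conventions a few percent of the seesaw samples survive all four cuts, in line with the counts quoted below for the full million-matrix run.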
The results of subjecting our $`10^6`$ sample matrices, of Dirac, Majorana and seesaw types, to all possible combinations of these cuts are shown in Table I.
First consider making a single cut. As expected, for all types of matrices, a large percentage (from 18% to 21%) of the random matrices pass the large mixing angle solar cut, and similarly for the large mixing angle atmospheric cut (from 59% to 71%). Much more surprising, and contrary to conventional wisdom, is the relatively large percentage passing the individual cuts for $`R`$ (from 10% to 64%) and for $`s_C`$ (from 12% to 18%). The distribution for $`R`$ is shown in Figure 1.
Naively, one might expect that this would peak at $`R=1`$, which is largely the case for Dirac matrices, although with a wide peak. In the Majorana case there is an appreciable fraction $`(20\%)`$ that have a splitting $`R\le 1/10`$, while in the seesaw scenario the majority of cases $`(64\%)`$ have a splitting $`R\le 1/10`$ — it is not at all unusual to generate a large hierarchy.
We can understand this simply: first a splitting of a factor of $`10`$ in the $`\mathrm{\Delta }m^2`$’s corresponds to only a factor of $`3`$ in the masses themselves if they happen to be hierarchically arranged. Secondly, in the seesaw scenario, taking the product of three matrices spreads the $`\mathrm{\Delta }m^2`$ distribution over a wide range.
While one would expect random matrices to typically give large atmospheric mixing, is it plausible that they would give near-maximal mixing, as required by the SuperKamiokande data? In Figure 2 we show distributions of $`s_{atm}`$, which actually peak in the $`0.95<s_{atm}<1.0`$ bin. We conclude that it is not necessary to impose a precise order on the mass matrix to achieve this near-maximal mixing.
Finally, we consider correlations between the various cuts. For example, could it be that the cuts on $`R`$ and $`s_C`$ selectively pass matrices which accidentally have a hierarchical structure, such that $`s_{atm}`$ and $`s_{\odot }`$ are also small in these cases? From Table I we see that there is little correlation of $`s_{atm}`$ with $`s_C`$ or $`R`$: the fraction of matrices passing the $`s_{atm}`$ cut is relatively insensitive to whether or not the $`s_C`$ or $`R`$ cuts have been applied. However, there is an important anticorrelation between the $`s_{\odot }`$ and $`s_C`$ cuts; for example, in the seesaw case roughly half of the matrices satisfying the $`s_C`$ cut satisfy the $`s_{\odot }`$ cut, compared with $`20\%`$ of the original set. This anticorrelation is shown in more detail in Figure 3, which illustrates how the $`s_C`$ cut serves to produce a peak at large mixing angle in the $`s_{\odot }`$ distribution.
For random matrices we expect the quantity
$$s_C+s_{\odot }=4(|U_{e1}U_{e2}|^2+|U_{e1}U_{e3}|^2+|U_{e2}U_{e3}|^2)$$
(5)
to be large, since otherwise $`\nu _e`$ would have to be closely aligned with one of the mass eigenstates. Hence, when we select matrices where $`s_C`$ happens to be small, we are selecting ones where $`s_{\odot }`$ is expected to be large.
3 We have argued that the neutrino mass matrix may follow from complete anarchy, however the electron, muon, tau mass hierarchies imply that the charged fermion mass matrix has considerable order and regularity. What is the origin for this difference? The only answer which we find plausible is that the lepton doublets, $`(\nu _l,l)_L`$, appear randomly in mass operators, while the lepton singlets, $`l_R`$, appear in an orderly way, for example, regulated by an approximate flavor symmetry. This idea is particularly attractive in SU(5) grand unified theories where only the 10-plets of matter feel the approximate flavor symmetry, explaining why the mass hierarchy in the up quark sector is roughly the square of that in the down quark and charged lepton sectors. Hence we consider a charged lepton mass matrix of the form
$$M_l=\widehat{M}_l\left(\begin{array}{ccc}\lambda _e& 0& 0\\ 0& \lambda _\mu & 0\\ 0& 0& \lambda _\tau \end{array}\right)$$
(6)
where $`\lambda _{e,\mu ,\tau }`$ are small flavor symmetry breaking parameters of order the corresponding Yukawa couplings, while $`\widehat{M}_l`$ is a matrix with randomly generated entries. We generated one million neutrino mass matrices and one million lepton mass matrices, and provide results for the mixing matrix $`U=U_l^{\dagger }U_\nu `$, where $`U_\nu `$ and $`U_l`$ are the unitary transformations on $`\nu _l`$ and $`l_L`$ which diagonalize the neutrino and charged lepton mass matrices. We find that the additional mixing from the charged leptons does not substantially alter any of our conclusions – this is illustrated for the case of seesaw matrices in Table II. The mixing of charged leptons obviously cannot affect $`R`$, but it is surprising that the distributions for the mixings $`s_{atm,\odot ,C}`$ are not substantially changed.
4 All neutrino mass matrices proposed for atmospheric and solar neutrino oscillations have a highly ordered form. In contrast, we have proposed that the mass matrix appears random, with all entries comparable in size and no precise relations between entries. We have shown, especially in the case of seesaw matrices, that not only are large mixing angles for solar and atmospheric oscillations expected, but $`\mathrm{\Delta }m_{\odot }^2\simeq 0.1\,\mathrm{\Delta }m_{atm}^2`$, giving an excellent match to the large angle solar MSW oscillations, as preferred at the $`2\sigma `$ level in the day/night flux ratio. In a sample of a million random seesaw matrices, 40% have such mass ratios and a large atmospheric mixing. Of these, about 10% also have large solar mixing while having small $`\nu _e`$ disappearance at reactor experiments. Random neutrino mass matrices produce a narrow peak in atmospheric oscillations around the observationally preferred case of maximal mixing. In contrast to flavor symmetry models, there is no reason to expect $`U_{e3}`$ is particularly small, and long baseline experiments which probe $`\mathrm{\Delta }m_{atm}^2`$, such as K2K and MINOS, will likely see large signals in $`\overline{\nu }_e`$ appearance. If $`\mathrm{\Delta }m_{atm}^2`$ is at the lower edge of the current SuperKamiokande limit, this could be seen at a future extreme long baseline experiment with a muon source. Furthermore, in this scheme $`\mathrm{\Delta }m_{\odot }^2`$ is large enough to be probed at KamLAND, which will measure large $`\overline{\nu }_e`$ disappearance.
###### Acknowledgements.
This work was supported in part by the Director, Office of Science, Office of High Energy and Nuclear Physics, Division of High Energy Physics of the U.S. Department of Energy under Contract DE-AC03-76SF00098 and in part by the National Science Foundation under grant PHY-95-14797.
# Krein’s method in inverse scattering

The work on this paper was started while the author visited BGU in Beer-Sheva. The author thanks BGU for hospitality and Professor D. Alpay for useful discussions. Key words: inverse scattering, Krein’s method. Math subject classification: 34B25, 34R30. PACS: 02.30.Hq, 03.40.Kf, 03.65.Nk
## 1 Introduction
The inverse scattering problem on half-axis consists of finding
$$q(x)\in L_{1,1}=\{q:q=\overline{q},\ \int _0^{\infty }x|q(x)|\,dx<\infty \}$$
from the knowledge of the scattering data
$$\mathcal{S}:=\{S(k),k_j,s_j,1\le j\le J\}.$$
(1.1)
Here
$$S(k):=\frac{f(-k)}{f(k)}$$
is the $`S`$-matrix, $`f(k)`$ is the Jost function
$$f(k):=f(0,k),$$
(1.2)
$`f(x,k)`$ is the solution to the equation
$$\ell f:=f^{\prime \prime }+k^2f-q(x)f=0$$
(1.3)
which is uniquely defined by the condition
$$f(x,k)=e^{ikx}+o(1),\quad x\to +\infty ,$$
(1.4)
$`ik_j`$, $`k_j>0`$, are the (only) zeros of $`f(k)`$ in the region $`\mathbb{C}_+:=\{k:\text{ Im }k>0\}`$, $`-k_j^2`$ are the negative eigenvalues of the Dirichlet operator $`-\frac{d^2}{dx^2}+q(x)`$ on $`(0,\infty )`$, $`s_j>0`$ are the norming constants:
$$s_j=-\frac{2ik_j}{\dot{f}(ik_j)f^{\prime }(0,ik_j)},$$
(1.5)
and $`J\ge 0`$ is the number of the negative eigenvalues.
For simplicity we assume that there are no bound states. This assumption is removed in section 4.
This paper is a commentary on Krein’s paper . It contains not only a detailed proof of the results announced in , but also a proof of new results not mentioned in . In particular, it contains an analysis of the invertibility of the steps in the inversion procedure based on Krein’s results, and a proof of the consistency of this procedure, that is, a proof of the fact that the reconstructed potential generates the scattering data from which it was reconstructed. Basic results are stated in Theorems 1.1 – 1.4 below.
Consider the equation
$$(I+H_x)\mathrm{\Gamma }_x:=\mathrm{\Gamma }_x(t,s)+\int _0^xH(t-u)\mathrm{\Gamma }_x(u,s)\,du=H(t-s),\quad 0\le t,s\le x.$$
(1.6)
Equation (1.6) shows that $`\mathrm{\Gamma }_x=(I+H_x)^{-1}H=I-(I+H_x)^{-1}`$, so
$$(I+H_x)^{-1}=I-\mathrm{\Gamma }_x$$
(1.6’)
in operator form, and
$$H=(I-\mathrm{\Gamma }_x)^{-1}-I.$$
(1.6”)
Let us assume that $`H(t)`$ is a real-valued even function:
$$H(-t)=H(t),\quad H(t)\in L^1(\mathbb{R})\cap L^2(\mathbb{R}),$$
$$1+\tilde{H}(k)>0,\quad \tilde{H}(k):=\int _{-\infty }^{\infty }H(t)e^{ikt}\,dt=2\int _0^{\infty }\mathrm{cos}(kt)H(t)\,dt.$$
(1.7)
Then (1.6) is uniquely solvable for any $`x>0`$, and there exists a limit
$$\mathrm{\Gamma }(t,s)=\underset{x\to \infty }{lim}\mathrm{\Gamma }_x(t,s):=\mathrm{\Gamma }_{\infty }(t,s),\quad t,s\ge 0,$$
(1.8)
where $`\mathrm{\Gamma }(t,s)`$ solves the equation
$$\mathrm{\Gamma }(t,s)+\int _0^{\infty }H(t-u)\mathrm{\Gamma }(u,s)\,du=H(t-s),\quad 0\le t,s<\infty .$$
(1.9)
Given $`H(t)`$, one solves (1.6), finds $`\mathrm{\Gamma }_{2x}(s,0)`$, then defines
$$\psi (x,k):=\frac{E(x,k)-E(x,-k)}{2i},$$
(1.10)
where
$$E(x,k):=e^{ikx}\left[1-\int _0^{2x}\mathrm{\Gamma }_{2x}(s,0)e^{-iks}\,ds\right].$$
(1.11)
Formula (1.11) gives a one-to-one correspondence between $`E(x,k)`$ and $`\mathrm{\Gamma }_{2x}(s,0)`$.
###### Remark 1.1.
In $`\mathrm{\Gamma }_{2x}(0,s)`$ is used in place of $`\mathrm{\Gamma }_{2x}(s,0)`$ in the definition of $`E(x,k)`$. By formula (2.21) (see section 2 below) one has $`\mathrm{\Gamma }_x(0,x)=\mathrm{\Gamma }_x(x,0)`$, but $`\mathrm{\Gamma }_x(0,s)\mathrm{\Gamma }_x(s,0)`$ in general.
Note that
$$E(x,\pm k)=e^{\pm ikx}f(\mp k)+o(1),\quad x\to +\infty ,$$
(1.12)
where
$$f(k):=1-\int _0^{\infty }\mathrm{\Gamma }(s)e^{iks}\,ds,$$
(1.13)
$$\mathrm{\Gamma }(s):=\underset{x\to +\infty }{lim}\mathrm{\Gamma }_x(s,0):=\mathrm{\Gamma }_{\infty }(s,0),$$
(1.14)
and
$$\psi (x,k)=\frac{e^{ikx}f(-k)-e^{-ikx}f(k)}{2i}+o(1),\quad x\to +\infty .$$
(1.15)
Note that
$$\psi (x,k)=|f(k)|\mathrm{sin}(kx+\delta (k))+o(1),\quad x\to +\infty ,$$
where
$$f(k)=|f(k)|e^{-i\delta (k)},\quad \delta (-k)=-\delta (k),\quad k\in \mathbb{R}.$$
The function $`\delta (k)`$ is called the phase shift. One has $`S(k)=e^{2i\delta (k)}`$.
We have changed the notations from in order to show the physical meaning of the quantity (1.12): $`f(k)`$ is the Jost function of the scattering theory. The function $`\psi (x,k)/f(k)`$ is the solution to the scattering problem (see equation (2.39) below). Krein calls
$$S(k):=\frac{f(-k)}{f(k)}$$
(1.16)
the $`S`$-function, and we will show that (1.16) is the $`S`$-matrix used in physics.
Assuming no bound states, one can solve the inverse scattering problem (ISP):
$$\text{given }S(k)\ \forall k>0,\ \text{ find }q(x).$$
A solution of the ISP based on the results of consists of four steps:
1) Given $`S(k)`$, find $`f(k)`$ by solving the Riemann problem (2.37).
2) Given $`f(k)`$, calculate $`H(t)`$ using the formula
$$1+\tilde{H}(k)=1+\int _{-\infty }^{\infty }H(t)e^{ikt}\,dt=\frac{1}{|f(k)|^2}.$$
(1.17)
3) Given $`H(t)`$, solve (1.6) and find $`\mathrm{\Gamma }_x(t,s)`$ and then $`\mathrm{\Gamma }_{2x}(2x,0)`$, $`0\le x<\infty `$.
4) Define
$$A(x)=2\mathrm{\Gamma }_{2x}(2x,0),$$
(1.18)
where
$$A(0)=2H(0),$$
(1.18’)
and calculate the potential
$$q(x)=A^2(x)+A^{\prime }(x),\quad A(0)=2H(0).$$
(1.19)
One can also calculate $`q(x)`$ by the formula:
$$q(x)=2\frac{d}{dx}\left[\mathrm{\Gamma }_{2x}(2x,0)-\mathrm{\Gamma }_{2x}(0,0)\right].$$
Indeed, $`2\mathrm{\Gamma }_{2x}(2x,0)=A(x)`$ (see (1.18)), $`\frac{d}{dx}\mathrm{\Gamma }_{2x}(0,0)=-2\mathrm{\Gamma }_{2x}(2x,0)\mathrm{\Gamma }_{2x}(0,2x)`$ (see (2.22)), and $`\mathrm{\Gamma }_{2x}(2x,0)=\mathrm{\Gamma }_{2x}(0,2x)`$ (see (2.21)).
There is an alternative (known) way, based on the Wiener-Levy theorem, to do step 1):
Given $`S(k)`$, find $`\delta (k),`$ the phase shift, then calculate the function
$$g(t):=-\frac{2}{\pi }\int _0^{\infty }\delta (k)\mathrm{sin}(kt)\,dk,$$
and finally calculate
$$f(k)=\mathrm{exp}\left(\int _0^{\infty }g(t)e^{ikt}\,dt\right).$$
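The two quadratures of this alternative route can be checked on a toy example where everything is known in closed form; the test phase shift $`\delta (k)=-k/(1+k^2)`$ (which, with the convention $`f=|f|e^{-i\delta }`$, corresponds to $`g(t)=e^{-t}`$ and $`\mathrm{ln}f(k)=1/(1-ik)`$) is an illustrative assumption, not data from the paper:

```python
# Sketch: the route delta(k) -> g(t) -> f(k) on a toy closed-form example.
import numpy as np
from scipy.integrate import quad

def g_from_delta(delta, t):
    """g(t) = -(2/pi) int_0^inf delta(k) sin(kt) dk, via the oscillatory
    QUADPACK rule (weight='sin' handles the slow 1/k decay of the integrand)."""
    val, _ = quad(lambda k: -delta(k), 0.0, np.inf, weight='sin', wvar=t)
    return 2.0 * val / np.pi

def f_from_g(g, k, tmax=40.0):
    """f(k) = exp(int_0^inf g(t) e^{ikt} dt), truncated where g is negligible."""
    re, _ = quad(lambda t: g(t) * np.cos(k * t), 0.0, tmax, limit=200)
    im, _ = quad(lambda t: g(t) * np.sin(k * t), 0.0, tmax, limit=200)
    return np.exp(re + 1j * im)

# toy data: g(t) = e^{-t} gives ln f(k) = 1/(1 - ik), so delta(k) = -k/(1+k^2)
delta = lambda k: -k / (1.0 + k ** 2)
print(g_from_delta(delta, 1.0))   # close to e^{-1} ~ 0.3679
```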
The potential $`q\in L_{1,1}`$ generates the $`S`$-matrix $`S(k)`$ with which we started provided that the following conditions (1.20)-(1.22) hold:
$$S(k)=\overline{S(-k)}=S^{-1}(-k),\quad k\in \mathbb{R},$$
(1.20)
the overbar stands for complex conjugation,
and
$$\text{ind}_{\mathbb{R}}S(k)=0,$$
(1.21)
$$\|F(x)\|_{L^{\infty }(\mathbb{R}_+)}+\|F(x)\|_{L^1(\mathbb{R}_+)}+\|xF^{\prime }(x)\|_{L^1(\mathbb{R}_+)}<\infty ,$$
(1.22)
where
$$F(x):=\frac{1}{2\pi }\int _{-\infty }^{\infty }[1-S(k)]e^{ikx}\,dk.$$
(1.23)
By the index (1.21) one means the increment of the argument of $`S(k)`$ (when $`k`$ runs from $`-\infty `$ to $`+\infty `$ along the real axis) divided by $`2\pi `$. The function (1.10) satisfies the equation
$$\psi ^{\prime \prime }+k^2\psi -q(x)\psi =0,\quad x\in \mathbb{R}_+.$$
(1.24)
Recall that we have assumed that there are no bound states.
In section 2 the above method is justified and the following theorems are proved:
###### Theorem 1.1.
If (1.20)-(1.22) hold, then $`q(x)`$ defined by (1.19) is the unique solution to ISP and this $`q(x)`$ has $`S(k)`$ as the scattering matrix.
###### Theorem 1.2.
The function $`f(k)`$, defined by (1.13), is the Jost function corresponding to potential (1.19).
###### Theorem 1.3.
Condition (1.7) implies that equation (1.6) is solvable for all $`x\ge 0`$ and its solution is unique.
###### Theorem 1.4.
If condition (1.7) holds, then relation (1.14) holds and $`\mathrm{\Gamma }(s):=\mathrm{\Gamma }_{\mathrm{}}(s,0)`$ is the unique solution to the equation
$$\mathrm{\Gamma }(s)+\int _0^{\infty }H(s-u)\mathrm{\Gamma }(u)\,du=H(s),\quad s\ge 0.$$
(1.25)
The diagram explaining the inversion method for solving ISP, based on Krein’s results from , can be shown now:
$$S(k)\underset{s_1}{\overset{(2.38)}{\Longrightarrow }}f(k)\underset{s_2}{\overset{(1.17)}{\Longrightarrow }}H(t)\underset{s_3}{\overset{(1.6)}{\Longrightarrow }}\mathrm{\Gamma }_x(t,s)\underset{s_4}{\overset{\text{(trivial)}}{\Longrightarrow }}\mathrm{\Gamma }_{2x}(2x,0)\underset{s_5}{\overset{(1.18)}{\Longrightarrow }}A(x)\underset{s_6}{\overset{(1.19)}{\Longrightarrow }}q(x).$$
(1.26)
In this diagram $`s_m`$ denotes step number $`m`$. Steps $`s_2`$, $`s_4`$, $`s_5`$ and $`s_6`$ are trivial. Step $`s_1`$ is almost trivial: it requires solving a Riemann problem with index zero and can be done analytically, in closed form. Step $`s_3`$ is the basic (non-trivial) step which requires solving a family of Fredholm-type linear integral equations (1.6). These equations are uniquely solvable if assumption (1.7) holds, or if assumptions (1.20)-(1.22) hold.
We analyze in section 2 the invertibility of the steps in diagram (1.26). Note also that, if one assumes (1.20)-(1.22), diagram (1.26) can be used for solving the inverse problems of finding $`q(x)`$ from the following data:

a) from $`f(k)`$, $`k>0`$,

b) from $`|f(k)|^2`$, $`k>0`$, or

c) from the spectral function $`d\rho (\lambda )`$.
Indeed, if (1.20)-(1.22) hold, then a) and b) are contained in diagram (1.26), and c) follows from the known formula (e.g., , p.256)
$$d\rho (\lambda )=\{\begin{array}{cc}\hfill \frac{\sqrt{\lambda }}{\pi }\frac{d\lambda }{|f(\sqrt{\lambda })|^2},& \lambda >0,\hfill \\ \hfill 0,& \lambda <0.\hfill \end{array}$$
(1.27)
Let $`\lambda =k^2`$. Then (still assuming (1.21)) one has:
$$d\rho =\frac{2k^2}{\pi }\frac{1}{|f(k)|^2}dk,k>0.$$
(1.28)
Note that the general case of the inverse scattering problem on the half-axis, when
$$\text{ind}_{\mathbb{R}}S(k):=\nu \neq 0,$$
can be reduced to the case $`\nu =0`$ by the procedure described in section 4, provided that $`S(k)`$ is the $`S`$-matrix corresponding to a potential $`q\in L_{1,1}(\mathbb{R}_+)`$. Necessary and sufficient conditions for such an $`S(k)`$ are conditions (1.20)-(1.22) (see ).
Section 3 contains a discussion of the numerical aspects of the inversion procedure based on Krein’s method. There are advantages in using this procedure (as compared with the Gelfand-Levitan procedure): integral equation (1.6), solving of which constitutes the basic step in the Krein inversion method, is a Fredholm convolution-type equation. Solving such an equation numerically leads to inversion of Toeplitz matrices, which can be done efficiently and with much less computer time than solving the Gelfand-Levitan equation (5.3). Combining Krein’s and Marchenko’s inversion methods yields the most efficient way to solve inverse scattering problems.
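A minimal sketch of this Toeplitz structure, assuming a simple even test kernel $`H(u)=0.3e^{-|u|}`$ (chosen only so that $`1+\tilde{H}(k)>0`$ holds) and a plain rectangle-rule discretization of (1.6):

```python
# Sketch: the discretized Krein equation (1.6) on [0, x] is the symmetric
# Toeplitz system (I + h*H(t_i - t_j)) Gamma = H(t_i - s), which a Levinson-type
# solver handles in O(n^2) rather than the O(n^3) of a generic dense solve.
import numpy as np
from scipy.linalg import solve_toeplitz

def krein_column(H, x, n, s=0.0):
    """Solve the discretized (1.6) for Gamma_x(t_i, s) on an n-point grid."""
    t = np.linspace(0.0, x, n)
    h = t[1] - t[0]
    c = h * H(t)                          # first column of the Toeplitz part
    c[0] += 1.0                           # the identity on the diagonal
    return t, solve_toeplitz((c, c), H(t - s))   # symmetric: column == row

H = lambda u: 0.3 * np.exp(-np.abs(u))    # arbitrary even test kernel
t, gam = krein_column(H, x=2.0, n=400)
```

From the column with $`s=0`$ one reads off $`\mathrm{\Gamma }_{2x}(2x,0)`$ and hence $`A(x)`$ and $`q(x)`$ by (1.18)-(1.19); the rectangle rule and grid size here are arbitrary choices.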
Indeed, for small $`x`$ equation (1.6) can be solved by iterations since the norm of the integral operator in (1.6) is less than 1 for sufficiently small $`x`$, say $`0<x<x_0`$. Thus $`q(x)`$ can be calculated for $`0\le x\le \frac{x_0}{2}`$ by diagram (1.26).
For $`x>0`$ one can solve by iterations Marchenko’s equation for the kernel $`A(x,y)`$:
$$A(x,y)+F(x+y)+\int _x^{\infty }A(x,s)F(s+y)\,ds=0,\quad 0\le x\le y<\infty ,$$
(1.29)
where, if (1.21) holds, the known function $`F(x)`$ is defined by the formula:
$$F(x):=\frac{1}{2\pi }\int _{-\infty }^{\infty }[1-S(k)]e^{ikx}\,dk.$$
(1.30)
Indeed, for $`x>0`$ the norm of the operator in (1.29) is less than 1 () and it tends to $`0`$ as $`x\to +\infty `$.
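A sketch of this iteration for a separable toy kernel $`F(u)=ce^{-u}`$, for which the solution $`A(x,y)=a(x)e^{-y}`$ with $`a(x)=-ce^{-x}/(1+\frac{c}{2}e^{-2x})`$ is known in closed form (the constant $`c`$, the grid, and the iteration count are arbitrary choices, not from the paper):

```python
# Sketch: fixed-point iteration for the Marchenko equation (1.29) at fixed x,
# A <- -F(x+y) - int_x^inf A(x,s) F(s+y) ds, on a truncated rectangle-rule grid.
import numpy as np

def marchenko_fixed_x(F, x, ymax=15.0, n=1500, iters=80):
    """Iterate the Marchenko map on y in [x, ymax]; contraction for small F."""
    y = np.linspace(x, ymax, n)
    h = y[1] - y[0]
    K = h * F(y[:, None] + y[None, :])    # kernel F(s + y) times the grid step
    A = np.zeros(n)
    for _ in range(iters):
        A = -F(x + y) - K @ A
    return y, A

c = 0.3
F = lambda u: c * np.exp(-u)              # small kernel -> contraction
y, A = marchenko_fixed_x(F, x=0.5)
```

For this $`F`$ the operator norm is about $`\frac{c}{2}e^{-2x}\ll 1`$, so the iteration converges geometrically, in line with the norm estimate quoted above.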
Finally let us discuss the following question: in the justification of both the Gelfand-Levitan and Marchenko methods, the eigenfunction expansion theorem and the Parseval relation play the fundamental role. In contrast, the Krein method apparently does not use the eigenfunction expansion theorem and the Parseval relation. However, implicitly, this method is also based on such relations. Namely, assumption (1.7) implies that the function $`S(k)`$, that is, the $`S`$-matrix corresponding to the potential (1.19), has index $`0`$. If, in addition, this potential is in $`L_{1,1}(\mathbb{R}_+)`$, then conditions (1.20) and (1.22) are satisfied as well, and the eigenfunction expansion theorem and Parseval’s equality hold. Necessary and sufficient conditions, imposed directly on the function $`H(t)`$, which guarantee that conditions (1.20)-(1.22) hold, are not known. However, from the results of section 2 it follows that conditions (1.20)-(1.22) hold if and only if $`H(t)`$ is such that diagram (1.26) leads to a $`q(x)\in L_{1,1}(\mathbb{R}_+)`$. Alternatively, conditions (1.20)-(1.22) hold (and consequently, $`q(x)\in L_{1,1}(\mathbb{R}_+)`$) if and only if condition (1.7) holds and the function $`f(k)`$, which is uniquely defined as the solution to the Riemann problem
$$\mathrm{\Phi }_+(k)=[1+\tilde{H}(k)]^{-1}\mathrm{\Phi }_{-}(k),\quad k\in \mathbb{R},$$
(1.31)
by the formula
$$f(k)=\mathrm{\Phi }_+(k),$$
(1.32)
generates the $`S`$-matrix $`S(k)`$ (by formula (1.16)), and this $`S(k)`$ satisfies conditions (1.20)-(1.22). Although the above conditions are verifiable in principle, they are not satisfactory because they are implicit: they are not formulated in terms of structural properties of the function $`H(t)`$ (such as smoothness, rate of decay, etc.).
In section 2 Theorems 1.1 – 1.4 are proved. In section 3 numerical aspects of the inversion method based on Krein’s results are discussed. In section 4 the ISP with bound states is discussed. In section 5 a relation between Krein’s and Gelfand-Levitan’s methods is explained.
## 2 Proofs
###### Proof of Theorem 1.3.
If $`v\in L^2(0,x)`$, then
$$(v+H_xv,v)=\frac{1}{2\pi }[(\tilde{v},\tilde{v})_{L^2(\mathbb{R})}+(\tilde{H}\tilde{v},\tilde{v})_{L^2(\mathbb{R})}],$$
(2.1)
where the Parseval equality was used,
$`\tilde{v}:=`$ $`{\displaystyle \int _0^x}v(s)e^{iks}\,ds,`$ (2.2)
$`(v,v)=`$ $`{\displaystyle \int _0^x}|v|^2\,ds=\frac{1}{2\pi }(\tilde{v},\tilde{v})_{L^2(\mathbb{R})}.`$
Thus $`I+H_x`$ is a positive definite selfadjoint operator in the Hilbert space $`L^2(0,x)`$ if (1.7) holds. Note that, since $`H(t)\in L^1(\mathbb{R})`$, one has $`\tilde{H}(k)\to 0`$ as $`|k|\to \infty `$, so (1.7) implies
$$1+\tilde{H}(k)\ge c>0.$$
(2.3)
A positive definite selfadjoint operator in a Hilbert space is boundedly invertible. Theorem 1.3 is proved. $`\mathrm{\Box }`$
Note that our argument shows that
$$\|(I+H_x)^{-1}\|_{L^2(0,x)}\le c^{-1}.$$
(2.4)
Before we prove Theorem 1.4, let us prove a simple lemma. For results of this type, see .
###### Lemma 2.1.
The operator
$$H\phi :=\int _0^{\infty }H(t-u)\phi (u)\,du$$
(2.5)
is a bounded operator in $`L^p(\mathbb{R}_+)`$, $`p=1,2,\infty `$.
For $`\mathrm{\Gamma }_x(u,s)\in L^1(\mathbb{R}_+)`$ one has
$$\left\|\int _x^{\infty }H(t-u)\mathrm{\Gamma }_x(u,s)\,du\right\|_{L^2(0,x)}^2\le c_1\left(\int _x^{\infty }|\mathrm{\Gamma }_x(u,s)|\,du\right)^2.$$
(2.6)
###### Proof.
Let $`\|\phi \|_p:=\|\phi \|_{L^p(\mathbb{R}_+)}`$. One has
$$\|H\phi \|_1\le \underset{u\in \mathbb{R}_+}{sup}\int _0^{\infty }dt\,|H(t-u)|\int _0^{\infty }|\phi (u)|\,du\le \int _{-\infty }^{\infty }|H(s)|\,ds\,\|\phi \|_1=2\|H\|_1\|\phi \|_1,$$
(2.7)
where we have used the assumption $`H(-t)=H(t)`$. Similarly,
$$\|H\phi \|_{\infty }\le 2\|H\|_1\|\phi \|_{\infty }.$$
(2.8)
Finally, using Parseval’s equality, one gets:
$$\|H\phi \|_2^2=2\pi \|\tilde{H}\tilde{\phi }_+\|_{L^2(\mathbb{R})}^2\le 2\pi \underset{k\in \mathbb{R}}{sup}|\tilde{H}(k)|^2\|\phi \|_2^2,$$
(2.9)
where
$$\phi _+(x):=\{\begin{array}{cc}\hfill \phi (x),& x\ge 0,\hfill \\ \hfill 0,& x<0.\hfill \end{array}$$
(2.10)
Since $`|\tilde{H}(k)|\le 2\|H\|_1`$ one gets from (2.9) the estimate:
$$\|H\phi \|_2\le 2\sqrt{2\pi }\|H\|_1\|\phi \|_2.$$
(2.11)
To prove (2.6), one notes that
$$\int _0^xdt\left|\int _x^{\infty }du\,H(t-u)\mathrm{\Gamma }_x(u,s)\right|^2\le \underset{u,v\ge x}{sup}\int _0^xdt\,|H(t-u)H(t-v)|\left(\int _x^{\infty }|\mathrm{\Gamma }_x(u,s)|\,du\right)^2$$
$$\le c_1\left(\int _x^{\infty }du\,|\mathrm{\Gamma }_x(u,s)|\right)^2.$$
Estimate (2.6) is obtained. Lemma 2.1 is proved. $`\mathrm{\Box }`$
###### Proof of Theorem 1.4.
Define $`\mathrm{\Gamma }_x(t,s)=0`$ for $`t`$ or $`s`$ greater than $`x`$. Let $`w:=\mathrm{\Gamma }_x(t,s)-\mathrm{\Gamma }(t,s)`$. Then (1.6) and (1.9) imply
$$(I+H_x)w=\int _x^{\infty }H(t-u)\mathrm{\Gamma }(u,s)\,du:=h_x(t,s).$$
(2.12)
If condition (1.7) holds, then equations (1.9) and (1.25) have solutions in $`L^1(\mathbb{R}_+)`$, and, since $`\underset{t}{sup}|H(t)|<\infty `$, it is clear that this solution belongs to $`L^{\infty }(\mathbb{R}_+)`$ and consequently to $`L^2(\mathbb{R}_+)`$, because $`\|\phi \|_2^2\le \|\phi \|_{\infty }\|\phi \|_1`$. The proof of Theorem 1.3 shows that such a solution is unique and does exist. From (2.4) one gets
$$\underset{x\ge 0}{sup}\|(I+H_x)^{-1}\|_{L^2(0,x)}\le c^{-1}.$$
(2.13)
For any fixed $`s>0`$ one sees that $`\underset{x\ge y}{sup}\|h_x(t,s)\|\to 0`$ as $`y\to \infty `$, where the norm here stands for any of the three norms $`L^p(0,x)`$, $`p=1,2,\infty `$. Therefore (2.12) and (2.13) imply
$`\|w\|_{L^2(0,x)}^2`$ $`\le c^{-2}\|h_x\|_{L^2(0,x)}^2`$ (2.14)
$`\le c^{-2}\left\|\int _x^{\infty }H(t-u)\mathrm{\Gamma }(u,s)\,du\right\|_{L^1(0,x)}\left\|\int _x^{\infty }H(t-u)\mathrm{\Gamma }(u,s)\,du\right\|_{L^{\infty }(0,x)}`$
$`\le \text{ const }\|\mathrm{\Gamma }(u,s)\|_{L^1(x,\infty )}^2\to 0\text{ as }x\to \infty ,`$
since $`\mathrm{\Gamma }(u,s)\in L^1(\mathbb{R}_+)`$ for any fixed $`s>0`$ and $`H(t)\in L^1(\mathbb{R})`$.
Also
$`\|w(t,s)\|_{L^{\infty }(0,x)}^2\le 2(\|h_x\|_{L^{\infty }(0,x)}^2+\|H_xw\|_{L^{\infty }(0,x)}^2)`$ (2.15)
$`\le c_1\|\mathrm{\Gamma }(u,s)\|_{L^1(x,\infty )}^2+c_2\underset{t}{sup}\|H(t-u)\|_{L^2(0,x)}^2\|w\|_{L^2(0,x)}^2,`$
where $`c_j>0`$ are some constants. Finally, by (2.6), one has:
$`\|w(t,s)\|_{L^2(0,x)}^2`$ $`\le c_3\left({\displaystyle \int _x^{\infty }}|\mathrm{\Gamma }(u,s)|\,du\right)^2\to 0\text{ as }x\to +\infty .`$ (2.16)
From (2.15) and (2.16) relation (1.14) follows. Theorem 1.4 is proved. $`\mathrm{\Box }`$
Let us now prove Theorem 1.2. We need several lemmas.
###### Lemma 2.2.
The function (1.11) satisfies the equations
$$E^{\prime }=ikE-A(x)E_{-},\quad E(0,k)=1,\quad E_{-}:=E(x,-k),$$
(2.17)
$$E_{-}^{\prime }=-ikE_{-}-A(x)E,\quad E_{-}(0,k)=1,$$
(2.18)
where $`E^{\prime }=\frac{dE}{dx}`$, and $`A(x)`$ is defined in (1.18).
###### Proof.
Differentiate (1.11) and get
$$E^{\prime }=ikE-e^{ikx}\left(2\mathrm{\Gamma }_{2x}(2x,0)e^{-2ikx}+2\int _0^{2x}\frac{\partial \mathrm{\Gamma }_{2x}(s,0)}{\partial (2x)}e^{-iks}\,ds\right).$$
(2.19)
We will check below that
$$\frac{\partial \mathrm{\Gamma }_x(t,s)}{\partial x}=-\mathrm{\Gamma }_x(t,x)\mathrm{\Gamma }_x(x,s)$$
(2.20)
and
$$\mathrm{\Gamma }_x(t,s)=\mathrm{\Gamma }_x(x-t,x-s).$$
(2.21)
Thus, by (2.20),
$$\frac{\partial \mathrm{\Gamma }_{2x}(s,0)}{\partial (2x)}=-\mathrm{\Gamma }_{2x}(s,2x)\mathrm{\Gamma }_{2x}(2x,0).$$
(2.22)
Therefore (2.19) can be written as
$$E^{\prime }=ikE-e^{-ikx}A(x)+A(x)e^{ikx}\int _0^{2x}\mathrm{\Gamma }_{2x}(s,2x)e^{-iks}\,ds.$$
(2.23)
By (2.21) one gets
$$\mathrm{\Gamma }_{2x}(s,2x)=\mathrm{\Gamma }_{2x}(2x-s,0).$$
(2.24)
Thus
$`e^{ikx}{\displaystyle \int _0^{2x}}\mathrm{\Gamma }_{2x}(s,2x)e^{-iks}\,ds`$ $`={\displaystyle \int _0^{2x}}\mathrm{\Gamma }_{2x}(2x-s,0)e^{ik(x-s)}\,ds`$ (2.25)
$`=e^{-ikx}{\displaystyle \int _0^{2x}}\mathrm{\Gamma }_{2x}(y,0)e^{iky}\,dy.`$
From (2.23) and (2.25) one gets (2.17).
Equation (2.18) can be obtained from (2.17) by changing $`k`$ to $`-k`$. Lemma 2.2 is proved if formulas (2.20)-(2.21) are checked.
To check (2.21), use $`H(-t)=H(t)`$ and compare the equation for $`\mathrm{\Gamma }_x(x-t,x-s):=\phi `$,
$$\mathrm{\Gamma }_x(x-t,x-s)+\int _0^xH(x-t-u)\mathrm{\Gamma }_x(u,x-s)\,du=H(x-t-x+s)=H(t-s),$$
(2.26)
with equation (1.6). Let $`u=x-y`$. Then (2.26) can be written as
$$\phi +\int _0^xH(t-y)\phi \,dy=H(t-s),$$
(2.27)
which is equation (1.6) for $`\phi `$. Since (1.6) has at most one solution, as we have proved above, formula (2.21) is proved.
To prove (2.20), differentiate (1.6) with respect to $`x`$ and get:
$$\mathrm{\Gamma }_x^{\prime }(t,s)+\int _0^xH(t-u)\mathrm{\Gamma }_x^{\prime }(u,s)\,du=-H(t-x)\mathrm{\Gamma }_x(x,s)$$
(2.28)
Set $`s=x`$ in (1.6), multiply (1.6) by $`-\mathrm{\Gamma }_x(x,s)`$, compare with (2.28) and use again the uniqueness of the solution to (1.6). This yields (2.20).
Lemma 2.2 is proved. $`\mathrm{\Box }`$
###### Lemma 2.3.
Equation (1.24) holds.
###### Proof.
From (1.10) and (2.17)-(2.18) one gets
$$\psi ^{\prime \prime }=\frac{E^{\prime \prime }-E_{-}^{\prime \prime }}{2i}=\frac{(ikE-A(x)E_{-})^{\prime }-(-ikE_{-}-A(x)E)^{\prime }}{2i}.$$
(2.29)
Using (2.17)-(2.18) again one gets
$$\psi ^{\prime \prime }=-k^2\psi +q(x)\psi ,\quad q(x):=A^2(x)+A^{\prime }(x).$$
(2.30)
Lemma 2.3 is proved. $`\mathrm{\Box }`$
###### Proof of Theorem 1.2.
The function $`\psi `$ defined in (1.10) solves equation (1.24) and satisfies the conditions
$$\psi (0,k)=0,\quad \psi ^{\prime }(0,k)=k.$$
(2.31)
The first condition is obvious (in there is a misprint: it is written that $`\psi (0,k)=1`$), and the second condition follows from (1.10) and (2.17)-(2.18):
$$\psi ^{\prime }(0,k)=\frac{E^{\prime }(0,k)-E_{-}^{\prime }(0,k)}{2i}=\frac{ikE-AE_{-}-(-ikE_{-}-AE)}{2i}\Big|_{x=0}=\frac{2ik}{2i}=k.$$
Let $`f(x,k)`$ be the Jost solution to (1.24) which is uniquely defined by the asymptotics
$$f(x,k)=e^{ikx}+o(1),\quad x\to +\infty .$$
(2.32)
Since $`f(x,k)`$ and $`f(x,-k)`$ are linearly independent, one has
$$\psi =c_1f(x,k)+c_2f(x,-k),$$
(2.33)
where $`c_1`$, $`c_2`$ are some constants independent of $`x`$ but depending on $`k`$.
From (2.31) and (2.33) one gets
$$c_1=\frac{f(-k)}{2i},\quad c_2=-\frac{f(k)}{2i};\quad f(k):=f(0,k).$$
(2.34)
Indeed, the choice of $`c_1`$ and $`c_2`$ guarantees that the first condition (2.31) is obviously satisfied, while the second follows from the Wronskian formula:
$$f^{\prime }(0,k)f(-k)-f(k)f^{\prime }(0,-k)=2ik.$$
(2.35)
From (2.32), (2.33) and (2.34) one gets:
$$\psi (x,k)=\frac{e^{ikx}f(-k)-e^{-ikx}f(k)}{2i}+o(1),\quad x\to +\infty .$$
(2.36)
Comparing (2.36) with (1.15) yields the conclusion of Theorem 1.2. $`\mathrm{\Box }`$
## Invertibility of the steps of the inversion procedure and proof of Theorem 1.1
Let us start with a discussion of the inversion steps 1) – 4) described in the introduction.
Then we discuss the uniqueness of the solution to ISP and the consistency of the inversion method, that is, the fact that $`q(x)`$, reconstructed from $`S(k)`$ by steps 1) – 4), generates the original $`S(k)`$.
Let us go through steps 1) – 4) of the reconstruction method and prove their invertibility. The consistency of the inversion method follows from the invertibility of the steps of the inversion method.
Step 1. $`S(k)\Rightarrow f(k)`$.
Assume $`S(k)`$ satisfying (1.20)-(1.22) is given. Then solve the Riemann problem
$$f(-k)=S(k)f(k),\quad k\in \mathbb{R}.$$
(2.37)
Since $`\text{ind}_{\mathbb{R}}S(k)=0`$, one has $`\text{ind}_{\mathbb{R}}S(-k)=0`$. Therefore the problem (2.37) of finding an analytic function $`f_+(k)`$ in $`\mathbb{C}_+:=\{k:\text{ Im }k>0\}`$, $`f(k):=f_+(k)`$ in $`\mathbb{C}_+`$, and an analytic function $`f_{-}(k):=f(-k)`$ in $`\mathbb{C}_{-}:=\{k:\text{ Im }k<0\}`$ from equation (2.37) can be solved in closed form. Namely, define
$$f(k)=\mathrm{exp}\left\{-\frac{1}{2\pi i}\int _{-\infty }^{\infty }\frac{\mathrm{ln}S(y)\,dy}{y-k}\right\},\quad \text{ Im }k>0.$$
(2.38)
Then $`f(k)`$ solves (2.37), $`f_+(k)=f(k)`$, $`f_{-}(k)=f(-k)`$. Indeed,
$$\mathrm{ln}f_+(k)-\mathrm{ln}f_{-}(k)=-\mathrm{ln}S(k),\quad k\in \mathbb{R},$$
(2.39)
by the known jump formula for the Cauchy integral. Integral (2.38) converges absolutely at infinity and $`\mathrm{ln}S(y)`$ is differentiable with respect to $`y`$, so the Cauchy integral in (2.38) is well defined.
To justify the above claims, one uses the known properties of the Jost function
$$f(k)=1+\int _0^{\infty }A(0,y)e^{iky}\,dy:=1+\int _0^{\infty }A(y)e^{iky}\,dy,$$
(2.40)
where (see )
$$|A(y)|\le c\int _{\frac{y}{2}}^{\infty }|q(t)|\,dt,$$
(2.41)
$$\left|\frac{dA(y)}{dy}+\frac{1}{4}q\left(\frac{y}{2}\right)\right|\le c\int _{\frac{y}{2}}^{\infty }|q(t)|\,dt,$$
(2.42)
$`c>0`$ stand for various constants and $`A(y)`$ is a real-valued function. Thus
$$f(k)=1-\frac{A(0)}{ik}-\frac{1}{ik}\int _0^{\infty }A^{\prime }(t)e^{ikt}\,dt,$$
(2.43)
$$S(k)=\frac{f(-k)}{f(k)}=\frac{1+\frac{A(0)}{ik}+\frac{1}{ik}\tilde{A^{\prime }}(-k)}{1-\frac{A(0)}{ik}-\frac{1}{ik}\tilde{A^{\prime }}(k)}=1+O\left(\frac{1}{k}\right).$$
(2.44)
Therefore
$$\mathrm{ln}S(k)=O\left(\frac{1}{k}\right)\quad \text{as}\quad |k|\to \infty ,\ k\in \mathbb{R}.$$
(2.45)
Also
$$\dot{f}(k)=i\int _0^{\infty }A(y)ye^{iky}\,dy,\quad \dot{f}:=\frac{\partial f}{\partial k}.$$
(2.46)
Estimate (2.41) implies
$$\int _0^{\infty }y|A(y)|\,dy\le 2\int _0^{\infty }t|q(t)|\,dt<\infty ,$$
(2.47)
so that $`\dot{f}(k)`$ is bounded for all $`k\in \mathbb{R}`$, as claimed. Note that
$$f(-k)=\overline{f(k)},\quad k\in \mathbb{R}.$$
(2.48)
The converse step
$$f(k)\Rightarrow S(k)$$
is trivial:
$$S(k)=\frac{f(-k)}{f(k)}.$$
If $`\text{ind}_{\mathbb{R}}S=0`$, then $`f(k)`$ is analytic in $`\mathbb{C}_+`$, $`f(k)\neq 0`$ in $`\mathbb{C}_+`$, $`f(k)=1+O\left(\frac{1}{k}\right)`$ as $`|k|\to \infty `$, $`k\in \mathbb{C}_+`$, and (2.48) holds.
Step 2. $`f(k)\Rightarrow H(t)`$.
This step is done by formula (1.17):
$$H(t)=\frac{1}{2\pi }\int _{-\infty }^{\infty }e^{ikt}\left(\frac{1}{|f(k)|^2}-1\right)dk.$$
(2.49)
The integral in (2.49) converges in $`L^2(\mathbb{R})`$. Indeed, it follows from (2.43) that
$$|f(k)|^2-1=-\frac{2}{k}\int _0^{\infty }A^{\prime }(t)\mathrm{sin}(kt)\,dt+O\left(\frac{1}{|k|^2}\right),\quad |k|\to \infty ,\ k\in \mathbb{R}.$$
(2.50)
The function
$$w(k):=\frac{1}{k}\int_0^{\infty }A^{\prime }(t)\mathrm{sin}(kt)\,dt$$
(2.51)
is continuous because $`tA^{\prime }(t)\in L^1(\mathbb{R}_+)`$ by (2.42), and $`w\in L^2(\mathbb{R})`$ since $`w=o\left(\frac{1}{|k|}\right)`$ as $`|k|\to \infty `$, $`k\in \mathbb{R}`$.
The converse step
$$H(t)\Longrightarrow f(k)$$
(2.52)
is also done by formula (1.17): Fourier inversion gives $`|f(k)|^2=f(k)f(-k),`$ and factorization yields the unique $`f(k)`$, since $`f(k)`$ does not vanish in $`\mathbb{C}_+`$ and tends to $`1`$ at infinity.
Step 3. $`H\Longrightarrow \mathrm{\Gamma }_x(s,0)\Longrightarrow \mathrm{\Gamma }_{2x}(2x,0).`$
This step is done by solving equation (1.6). By Theorem 1.3 equation (1.6) is uniquely solvable since condition (1.7) is assumed. Formula (1.17) holds and the known properties of the Jost function (1.4) are used: $`f(k)\to 1`$ as $`k\to \pm \infty `$, $`f(k)\ne 0`$ for $`k\ne 0`$, $`k\in \mathbb{R}`$, $`f(0)\ne 0`$ since $`\text{ind}_{\mathbb{R}}S(k)=0`$.
The converse step $`\mathrm{\Gamma }_x(s,0)\Longrightarrow H(t)`$ is done by formula (1.6”). The converse step
$$\mathrm{\Gamma }_{2x}(2x,0)\Longrightarrow \mathrm{\Gamma }_x(s,0)$$
(2.53)
constitutes the essence of the inversion method.
This step is done as follows:
$$\mathrm{\Gamma }_{2x}(2x,0)\stackrel{(1.18)}{\Longrightarrow }A(x)\stackrel{(2.17)-(2.18)}{\Longrightarrow }E(x,k)\stackrel{(1.11)}{\Longrightarrow }\mathrm{\Gamma }_x(s,0).$$
(2.54)
Given $`A(x)`$, system (2.17)-(2.18) is uniquely solvable for $`E(x,k)`$.
Note that the step $`q(x)\Longrightarrow f(k)`$ can be done by solving the uniquely solvable integral equation (see ):
$$f(x,k)=e^{ikx}+\int_x^{\infty }\frac{\mathrm{sin}[k(y-x)]}{k}q(y)f(y,k)\,dy$$
(2.55)
with $`q\in L_{1,1}(\mathbb{R}_+)`$, and then calculating $`f(k)=f(0,k)`$.
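Since (2.55) is a Volterra-type equation, successive approximations converge. The iteration can be sketched in a few lines of pure Python (everything below is illustrative: the grid size, the wavenumber, and the small bump potential are made-up, and $`q`$ is taken compactly supported so the upper limit of the integral can be truncated):

```python
import cmath, math

def jost_by_iteration(q, L, k, n=200, iters=25):
    """Successive approximations for eq. (2.55),
    f(x,k) = e^{ikx} + int_x^L sin[k(y-x)]/k q(y) f(y,k) dy,
    assuming q vanishes beyond y = L so the upper limit is L."""
    h = L / n
    xs = [i * h for i in range(n + 1)]
    f = [cmath.exp(1j * k * x) for x in xs]        # zeroth iterate
    for _ in range(iters):
        new = []
        for i, x in enumerate(xs):
            acc = 0j                                # trapezoid rule
            for j in range(i, n):
                fa = math.sin(k * (xs[j] - x)) / k * q(xs[j]) * f[j]
                fb = math.sin(k * (xs[j + 1] - x)) / k * q(xs[j + 1]) * f[j + 1]
                acc += 0.5 * h * (fa + fb)
            new.append(cmath.exp(1j * k * x) + acc)
        f = new
    return xs, f

xs, f_free = jost_by_iteration(lambda y: 0.0, L=2.0, k=1.5)
xs, f_pert = jost_by_iteration(lambda y: 0.3 if y < 1.0 else 0.0,
                               L=2.0, k=1.5)
```

For $`q\equiv 0`$ the iteration reproduces the free solution $`e^{ikx}`$ exactly, and a small bump potential perturbs the Jost function $`f(k)=f(0,k)`$ slightly, as expected from the Volterra structure.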
Step 4. $`A(x):=\mathrm{\Gamma }_{2x}(2x,0)\Longrightarrow q(x).`$
This step is done by formula (1.19). The converse step
$$q(x)\Longrightarrow A(x)$$
can be done by solving the Riccati problem (1.19) for $`A(x)`$ given $`q(x)`$ and the initial condition $`A(0)=2H(0)`$. Given $`q(x)`$, one can find $`2H(0)`$ as follows: one finds $`f(x,k)`$ by solving equation (2.55), which is uniquely solvable if $`q\in L_{1,1}(\mathbb{R}_+)`$, then one gets $`f(k):=f(0,k)`$, and then calculates $`2H(0)`$ using formula (2.49) with $`t=0`$:
$$2H(0)=\frac{1}{\pi }\int_{-\infty }^{\infty }\left(\frac{1}{|f(k)|^2}-1\right)dk.$$
###### Proof of Theorem 1.1.
If (1.20)-(1.22) hold, then, as has been proved in (and earlier in a different form in ), there is a unique $`q(x)\in L_{1,1}(\mathbb{R}_+)`$ which generates the given $`S`$-matrix $`S(k)`$.
It is not proved in that $`q(x)`$ defined in (1.19) (and obtained as a final result of steps 1) – 4)) generates the scattering matrix $`S(k)`$ with which we started the inversion.
Let us now prove this. We have already discussed the following diagram:
$$S(k)\underset{(1.16)}{\overset{(2.38)}{\Longleftrightarrow }}f(k)\stackrel{(1.17)}{\Longrightarrow }H(t)\stackrel{(1.6)}{\Longrightarrow }\mathrm{\Gamma }_x(s,0)\Longrightarrow \mathrm{\Gamma }_{2x}(2x,0)\stackrel{(1.18)}{\Longrightarrow }A(x)\stackrel{(1.19)}{\Longrightarrow }q(x).$$
(2.56)
To close this diagram and therefore establish the basic one-to-one correspondence
$$S(k)\Longleftrightarrow q(x)$$
(2.57)
one needs to prove
$$\mathrm{\Gamma }_{2x}(2x,0)\Longrightarrow \mathrm{\Gamma }_x(s,0).$$
This is done by the scheme (2.54).
Note that the step $`q(x)\Longrightarrow A(x)`$ requires solving Riccati equation (1.19) with the boundary condition $`A(0)=2H(0)`$. Existence of the solution to this problem on all of $`\mathbb{R}_+`$ is guaranteed by the assumptions (1.20)-(1.22). The fact that these assumptions imply $`q(x)\in L_{1,1}(\mathbb{R}_+)`$ is proved in and . Theorem 1.1 is proved. $`\mathrm{}`$
Uniqueness theorems for the inverse scattering problem are not given in , . They can be found in -.
###### Remark 2.1.
From our analysis one gets the following result:
###### Proposition 2.1.
If $`q(x)\in L_{1,1}(\mathbb{R}_+)`$ and has no bound states and no resonance at zero, then Riccati equation (1.19) with the initial condition (1.18) has the solution $`A(x)`$ defined for all $`x\in \mathbb{R}_+`$.
## 3 Numerical aspects of the Krein inversion procedure.
The main step in this procedure from the numerical viewpoint is to solve equation (1.6) for all $`x>0`$ and all $`0<s<x`$, which are the parameters in equation (1.6).
Since equation (1.6) is an equation with the convolution kernel, its numerical solution involves inversion of a Toeplitz matrix, which is a well developed area of numerical analysis. Moreover, such an inversion requires much less computer memory and time than the inversion based on the Gelfand-Levitan or Marchenko methods. This is the main advantage of Krein’s inversion method.
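To make the Toeplitz remark concrete, here is a minimal pure-Python sketch of the discretized solve. The right-hand side of (1.6) is fixed earlier in the paper, so we assume the generic convolution form $`\mathrm{\Gamma }(t)+\int_0^xH(t-u)\mathrm{\Gamma }(u)\,du=h(t)`$ with a hypothetical kernel and right-hand side, purely for illustration (a Levinson-type solver would exploit the Toeplitz structure; here we solve naively):

```python
import math

def solve_krein_equation(H, h, x, n=81):
    """Discretize G(t) + int_0^x H(t-u) G(u) du = h(t) with the
    trapezoid rule; the matrix entries depend only on t_i - t_j,
    i.e. the system is Toeplitz (solved here by plain Gaussian
    elimination for simplicity)."""
    dt = x / (n - 1)
    t = [i * dt for i in range(n)]
    w = [0.5 * dt if j in (0, n - 1) else dt for j in range(n)]
    M = [[(1.0 if i == j else 0.0) + w[j] * H(t[i] - t[j])
          for j in range(n)] for i in range(n)]
    b = [h(ti) for ti in t]
    for col in range(n):                 # Gaussian elimination
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= fac * M[col][c]
            b[r] -= fac * b[col]
    g = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * g[c] for c in range(r + 1, n))
        g[r] = (b[r] - s) / M[r][r]
    return t, w, g

# hypothetical kernel and right-hand side, for illustration only
H = lambda s: 0.2 * math.exp(-abs(s))
h = lambda t: 1.0
t, w, gamma = solve_krein_equation(H, h, x=1.0)
```

For this (made-up) positive kernel the operator norm is below one, so the solution stays between 0 and 1, consistent with the contraction property discussed below.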
This method may become even more attractive if it is combined with the Marchenko method. In the Marchenko method the equation to be solved is (, ):
$$A(x,y)+F(x+y)+\int_x^{\infty }A(x,s)F(s+y)\,ds=0,\qquad y\ge x\ge 0,$$
(3.1)
where $`F(x)`$ is defined in (1.23) and is known if $`S(k)`$ is known, the kernel $`A(x,y)`$ is to be found from (3.1) and if $`A(x,y)`$ is found then the potential is recovered by the formula:
$$q(x)=-2\frac{dA(x,x)}{dx}.$$
(3.2)
Equation (3.1) can be written in operator form:
$$(I+F_x)A=-F.$$
(3.3)
The operator $`F_x`$ is a contraction mapping in the Banach space $`L^1(x,\infty )`$ for $`x>0.`$ The operator $`H_x`$ in (1.6) is a contraction mapping in $`L^{\infty }(0,x)`$ for $`0<x<x_0`$, where $`x_0`$ is chosen so that
$$\int_0^{x_0}|H(t-u)|\,du<1.$$
(3.4)
Therefore it seems reasonable from the numerical point of view to use the following approach:
Given $`S(k)`$, calculate $`f(k)`$ and $`H(t)`$ as explained in Steps 1 and 2, and also $`F(x)`$ by formula (1.23).
Solve by iterations equation (1.6) for $`0<x<x_0`$, where $`x_0`$ is chosen so that the iteration method for solving (1.6) converges rapidly. Then find $`q(x)`$ as explained in Step 4.
Solve equation (3.1) for $`x>x_0`$ by iterations. Find $`q(x)`$ for $`x>x_0`$ by formula (3.2).
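The fixed-point iteration for (3.1) at a fixed $`x`$ can be sketched as follows. The kernel $`F`$ below is a made-up decaying function, not one derived from an actual $`S`$-matrix; it is chosen because for $`F(t)=ce^{-t}`$ the equation has a closed-form solution that serves as a check:

```python
import math

def marchenko_iterate(F, x, L=8.0, n=120, iters=25):
    """Fixed-point iteration A <- -F - F_x A (eq. 3.3) for A(x, y)
    at fixed x, on the truncated grid y, s in [x, x + L] (assumes
    F decays fast enough for the tail beyond x + L to be
    negligible)."""
    h = L / n
    s = [x + i * h for i in range(n + 1)]
    A = [0.0] * (n + 1)
    diffs = []                      # sup-norm change per iteration
    for _ in range(iters):
        B = []
        for y in s:
            integ = sum(0.5 * h * (A[j] * F(s[j] + y)
                                   + A[j + 1] * F(s[j + 1] + y))
                        for j in range(n))
            B.append(-F(x + y) - integ)
        diffs.append(max(abs(B[i] - A[i]) for i in range(n + 1)))
        A = B
    return s, A, diffs

# made-up decaying kernel (not derived from a real S-matrix)
c, x0 = 0.4, 1.0
s, A, diffs = marchenko_iterate(lambda t: c * math.exp(-t), x0)
```

Because $`F_x`$ is a contraction here, the sup-norm change per iteration shrinks geometrically, which is exactly why iterating (3.1) at large $`x`$ is cheap.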
## 4 Discussion of the ISP when the bound states are present.
If the data are (1.1) then one defines
$$w(k)=\prod_{j=1}^{J}\frac{k-ik_j}{k+ik_j}\quad \text{ if }\quad \text{ind}_{\mathbb{R}}S(k)=-2J$$
(4.1)
and
$$W(k)=\frac{k}{k+i\gamma }w(k)\quad \text{ if }\quad \text{ind}_{\mathbb{R}}S(k)=-2J-1,$$
(4.2)
where $`\gamma >0`$ is chosen so that $`\gamma \ne k_j`$, $`1\le j\le J`$.
Then one defines
$$S_1(k):=S(k)w^2(k)\quad \text{ if }\quad \text{ind}_{\mathbb{R}}S=-2J$$
(4.3)
or
$$S_1(k):=S(k)W^2(k)\quad \text{ if }\quad \text{ind}_{\mathbb{R}}S=-2J-1.$$
(4.4)
Since $`\text{ind}_{\mathbb{R}}w^2(k)=2J`$ and $`\text{ind}_{\mathbb{R}}W^2(k)=2J+1`$, one has
$$\text{ind}_{\mathbb{R}}S_1(k)=0.$$
(4.5)
The theory of section 2 applies to $`S_1(k)`$ and yields $`q_1(x)`$. From $`q_1(x)`$ one gets $`q(x)`$ by adding bound states $`-k_j^2`$ and norming constants $`s_j`$ using the known procedure (e.g. see ).
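The factors in (4.1) are Blaschke-type: on the real axis each is unimodular, so multiplying $`S(k)`$ by $`w^2(k)`$ changes only the phase (and hence the index) while $`|S_1(k)|=|S(k)|=1`$. A quick numerical check of these properties (the $`k_j`$ values below are arbitrary illustrations):

```python
def blaschke_w(k, kjs):
    """w(k) = prod_j (k - i k_j)/(k + i k_j), eq. (4.1)."""
    w = complex(1.0, 0.0)
    for kj in kjs:
        w *= (k - 1j * kj) / (k + 1j * kj)
    return w

kjs = [0.5, 1.3]               # illustrative bound-state parameters
w_real = blaschke_w(2.0, kjs)  # on the real axis: |w| = 1
w_zero = blaschke_w(0.5j, kjs) # zero at k = i k_j in the upper half-plane
```

The zeros at $`k=ik_j`$ are what remove the winding contributed by the bound states, while $`w(k)\to 1`$ at infinity leaves the large-$`k`$ behavior of $`S_1`$ unchanged.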
## 5 Relation between Krein’s and GL’s methods.
The GL (Gelfand-Levitan) method consists of the following steps (see , for example):
Step 1. Given $`f(k)`$, the Jost function, find
$`L(x,y)`$ $`:={\displaystyle \frac{2}{\pi }}{\displaystyle \int_0^{\infty }}dk\,k^2\left({\displaystyle \frac{1}{|f(k)|^2}}-1\right){\displaystyle \frac{\mathrm{sin}\,kx}{k}}{\displaystyle \frac{\mathrm{sin}\,ky}{k}}`$ (5.1)
$`={\displaystyle \frac{1}{\pi }}{\displaystyle \int_0^{\infty }}dk\left(|f(k)|^{-2}-1\right)\left(\mathrm{cos}[k(x-y)]-\mathrm{cos}[k(x+y)]\right)`$
$`:=M(x-y)-M(x+y),`$
where
$$M(x):=\frac{1}{\pi }\int_0^{\infty }dk\left(|f(k)|^{-2}-1\right)\mathrm{cos}(kx).$$
(5.2)
Step 2. Solve the integral equation for $`K(x,y)`$:
$$K(x,y)+L(x,y)+\int_0^xK(x,s)L(s,y)\,ds=0,\qquad 0\le y\le x.$$
(5.3)
Step 3. Find
$$q(x)=2\frac{dK(x,x)}{dx}.$$
(5.4)
Krein’s function (see (1.17)) can be written as follows:
$$H(t)=\frac{1}{2\pi }\int_{-\infty }^{\infty }\left(|f(k)|^{-2}-1\right)e^{ikt}\,dk=\frac{1}{\pi }\int_0^{\infty }\left(|f(k)|^{-2}-1\right)\mathrm{cos}(kt)\,dk.$$
(5.5)
Thus the relation between the two methods is given by the known formula:
$$M(x)=H(x).$$
(5.6)
In fact, the GL method deals with the inversion of the spectral function $`d\rho `$ of the operator $`-\frac{d^2}{dx^2}+q(x)`$ defined in $`L^2(\mathbb{R}_+)`$ by the Dirichlet boundary condition at $`x=0`$. However, if $`\text{ind}_{\mathbb{R}}S(k)=0`$ (no bound states and no resonance at $`k=0`$), then (see e.g. ):
$$d\rho =\begin{cases}\dfrac{2k^2\,dk}{\pi |f(k)|^2},& \lambda >0,\ \lambda =k^2,\\ 0,& \lambda <0,\end{cases}$$
so $`d\rho `$ in this case is uniquely defined by $`f(k)`$.
# A New View on Young Pulsars in Supernova Remnants: Slow, Radio-quiet & X-ray Bright
## 1. Where Are All The Young Neutron Stars?
For the last 30 years it has been understood that young neutron stars (NSs) are created during Type II/Ib supernova explosions involving a massive star. Common wisdom holds that these neutron stars are born as rapidly rotating ($`\sim 10`$ ms) Crab-like pulsars. Furthermore, pulsars and their accompanying supernova remnants are thought to be highly visible for tens of thousands of years, the former via radio-loud, Crab-like “plerionic” (Weiler & Sramek 1988) pulsar nebulae, the latter as distinctive X-ray and radio shell-type remnants. So where are all the young ($`<10^4`$ yr) neutron stars? Of the 220 known Galactic SNR (Green 1998) and over 1100 detected radio pulsars (Camilo et al., this volume), few associations between the two populations have been identified with any certainty.
The current paradigm rests on the discoveries in the 1960’s of the Crab and Vela pulsars in their respective supernova nebulae. These were taken as spectacular confirmation for the existence of neutron stars postulated much earlier by Baade & Zwicky (1934) based on theoretical arguments. The connection seems firm as the properties and energetics of these pulsars could be uniquely explained in the context of rapidly rotating, magnetized neutron stars emitting beamed non-thermal radiation. Their fast rotation rates and large magnetic fields ($`10^{12}`$ G) are consistent with those of a main-sequence star collapsed to NS dimension and density. A fast period essentially precluded all but a NS hypothesis and thus provided direct evidence for the reality of NSs. Furthermore, their inferred age and association with SNRs provided strong evidence that NSs are indeed born in supernova explosions.
So it is quite remarkable that, despite detailed radio searches, few Galactic SNR have yielded a NS candidate over the years since the initial discoveries. A recent census tallied only $`\sim 10`$ SNRs with pulsed central radio sources (Helfand 1998). Furthermore, comprehensive radio surveys suggest that most radio pulsars near SNRs shells can be attributed to chance overlap (e.g. Gaensler & Johnston 1995). With the results of these new surveys, traditional arguments for the lack of observed radio pulsars associated with SNR, such as those invoking beaming and large “kick” velocities, become less compelling, and perhaps even circular.
It is now clear that this discrepancy is an important and vexing problem in current astrophysics.
## 2. The Revolution Evolution: Slowly Rotating Young X-ray Pulsars
Progress in resolving this mystery is suggested by X-ray observations of young SNRs. These are revealing X-ray bright, but radio-quiet compact objects at their centers. It is now understood that these objects form a distinct class of radio-quiet neutron stars (Caraveo et al. 1996, Gotthelf, Petre, & Hwang 1997 and refs. therein). Often these objects have been labeled “cooling neutron stars”, mainly because of their lack of optical counterparts.
Some of these sources have been found to be slowly rotating pulsars with unique properties. Their temporal signal is characterized by spin periods in the range of $`5-12`$ s, steady spin-down rates, and highly modulated sinusoidal pulse profiles ($`\sim 30\%`$). They have steep X-ray spectra (photon index $`>3`$) with X-ray luminosities of $`\sim 10^{35}`$ erg s<sup>-1</sup>. As a class, these pulsars are currently referred to as the anomalous X-ray pulsars (AXP; see refs. in Gotthelf & Vasisht 1997). Nearly half are located at the centers of SNRs, suggesting that they are relatively young ($`<10^5`$ yrs-old). And so far, no counterparts at other wavelengths have been identified for these X-ray bright objects. The prototype for this class, the 7 s pulsar 1E 2259+586 in the $`\sim 10^4`$ yrs-old SNR CTB 109, has been known for nearly two decades (Gregory & Fahlman 1980).
There are now almost a dozen slow radio-quiet X-ray pulsars apparently associated with young SNRs. These include the four known soft $`\gamma `$-ray repeaters (SGR), which have recently been confirmed as slow rotators (Kouveliotou et al. 1998) and are likely associated with young SNRs (e.g. Cline et al. 1982; Kouveliotou et al. 1998). The census of these radio-quiet objects now approaches in number that estimated for candidate young radio-bright objects connected with SNRs.
## 3. A Key Object: the Radio-quiet Slow X-ray Pulsar in Kes 73
Given the latest X-ray results, it now appears likely that at least half of the observed young neutron stars follow an evolutionary path quite distinct from that of the Crab pulsar. An understanding of such alternative paths for young NS evolution is suggested by 1E 1841-045, the remarkable 12-s anomalous X-ray pulsar in the center of the SNR Kes 73. This young object has the longest period and most rapid spin-down rate of any known isolated young pulsar. A recent comprehensive study of the long term spin history of 1E 1841-045 indicated steady braking on a timescale of $`\tau _s\sim 4\times 10^3`$ yrs, consistent with the inferred age of Kes 73 (Vasisht et al. 2000). The similarity in age along with the central location of the pulsar strongly suggests that the two objects are related.
If the Kes 73 pulsar and other NS candidates like them were indeed born as fast rotators, then a mechanism must be found to slow them down to their currently observed rates. The rapid but steady spin-down of the Kes 73 pulsar suggests a possibility. The equivalent magnetic field for a rotating dipole is $`B_{dipole}\simeq 3.2\times 10^{19}(P\dot{P})^{1/2}\simeq 8\times 10^{14}`$ G, one of the highest magnetic fields observed in nature. Theory describing a NS with such an enormous field, a “magnetar”, has been worked out by Duncan & Thompson (1992). Vasisht & Gotthelf (1997) suggest that the Kes 73 pulsar was born as a magnetar $`\sim 2\times 10^3`$ years ago and has since spun down to a long period due to rapid dipole radiation losses. The pulsar in Kes 73 provided the first direct evidence for a magnetar through spin-down measurements and the apparent consistency in age between the pulsar and SNR (see Gotthelf, Vasisht & Dotani 1999 for details).
In the magnetar model, the enormous magnetic field provides a natural mechanism for braking the pulsar and spinning it down so quickly. Pulsar spin-down via mass accretion, for any reasonable accretion rate, would require longer than a Hubble time to reach the observed values. Furthermore, the observed luminosities of these slow X-ray pulsars are consistent with them being powered by magnetic field decay. If the total X-ray emission were powered by rotational energy loss, as is the case for the radio pulsars, the available energy would be far too small. The maximum luminosity derivable just from spin-down is $`L_X<4\pi ^2I\dot{P}/P^3\simeq 10^{34}\mathrm{erg}\,\mathrm{s}^{-1}`$, well below the measured value of $`L_X\simeq 3\times 10^{35}\mathrm{erg}\,\mathrm{s}^{-1}`$. On the other hand, the measured luminosity is appreciably low for an accretion-powered binary system ($`10^{36-38}\mathrm{erg}\,\mathrm{s}^{-1}`$). These facts, along with a lack of stochastic variability and a steep spectrum, make an accretion scenario unlikely.
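The arithmetic above is easy to redo directly. The inputs below are assumptions for illustration: $`P\approx 12`$ s, $`\dot{P}`$ obtained by reading $`\tau _s\approx 4\times 10^3`$ yr as the characteristic age $`P/(2\dot{P})`$, and a standard neutron-star moment of inertia $`I=10^{45}`$ g cm<sup>2</sup>:

```python
import math

# illustrative inputs (see text): P ~ 12 s, tau_s ~ 4e3 yr,
# standard neutron-star moment of inertia I = 1e45 g cm^2
P = 12.0                                  # spin period, s
tau_s = 4.0e3 * 3.156e7                   # spin-down timescale, s
Pdot = P / (2.0 * tau_s)                  # assuming tau_s = P / (2 Pdot)
I = 1.0e45                                # g cm^2

B = 3.2e19 * math.sqrt(P * Pdot)          # dipole field, G
L_sd = 4.0 * math.pi**2 * I * Pdot / P**3 # spin-down power, erg/s
```

This gives $`B\approx 8\times 10^{14}`$ G and a spin-down power well below $`10^{34}`$ erg s<sup>-1</sup>, more than two orders of magnitude short of the measured $`L_X\simeq 3\times 10^{35}`$ erg s<sup>-1</sup>, as argued above.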
In conclusion, it now seems likely that at least half the population of young neutron stars in SNR evolve as slow AXP-like pulsars, as exemplified by Kes 73. The Crab-like pulsars, highly visible via their radio nebulae, are thus a less common manifestation of young NS evolution. We note that we do not need to invoke a substantial space velocity for the former NSs, as those X-ray pulsars within known SNRs typically lie at their centers. The SGRs may represent an evolutionary stage during which young NSs are likely to produce bursts. Under this scenario, the AXP and SGR phenomena are closely related, linked by their strong magnetic field. We consider that many of the young NSs “missing” in radio surveys can be accounted for by the above-discussed radio-quiet NSs. As their evolution along the $`P-\dot{P}`$ diagram cannot intersect the bulk of the aged radio pulsar phase-space, AXP-like pulsars thus require the existence of a vast population of previously unappreciated NSs.
### Acknowledgments.
This research is supported by the NASA LTSA grant NAG 5-7935. E.V.G. thanks Steven Lawrence for comments.
## References
Baade, W. & Zwicky, F. 1934, Phys. Rev., 45, 138
Caraveo, P. A., Bignami, G. F., Trumper, J. 1996, AARv, 7, 209
Cline, T. L., et al. 1982, ApJ, 255, L45
Duncan, R. C. & Thompson, C. 1992, ApJ, 392, L9
Gaensler, B. & Johnston, S. 1995, MNRAS, 277, 1243
Gotthelf, E. V. & Vasisht, G. 1997, ApJ, 486, L133
Gotthelf, E. V., Petre, R. & Hwang, U. 1997, ApJ, 487, L175
Gotthelf, E. V., G. Vasisht, & Dotani, T. 1999, ApJ, 522, L49
Gregory, P. C. & Fahlman, G. G. 1980, Nature, 287, 805
Green D.A., 1998, http://www.mrao.cam.ac.uk/surveys/snrs
Helfand, D. J. 1998, Mem. Soc. Astron. Ital., 69, 791
Kouveliotou, C. et al. 1998, Nature, 391, 235
Vasisht, G. & Gotthelf, E. V. 1997, ApJ, 486, L129
Vasisht, G., Gotthelf, E. V., Torii, K., & Gaensler, B. 2000, in press
Weiler, K. W. & Sramek, R. A. 1988, ARA&A, 26, 295
# Centers of Barred Galaxies: Secondary Bars and Gas Flows
## 1. Introduction
Active Galactic Nuclei (AGN) require mass accretion onto the central engine. On large galactic scales, torques from a stellar bar can efficiently remove angular momentum from gas, and cause it to move inwards along two hydrodynamical shocks on the leading edges of the bar. Inflowing gas settles on near-circular orbits around the Inner Lindblad Resonance (ILR), and forms a nuclear ring, about 1 kpc in size. A secondary bar inside the main one, with its own pair of shocks, has been proposed to drive further inflow, and thus feed the AGN in a manner similar to the inflow on large scales. Here, we report results of high resolution hydrodynamical simulations, where we examine the nature of the nuclear ring, and check how efficient double bars can be in fueling AGNs.
## 2. Hydrodynamical models with a single bar
All calculations have been done with a grid-based PPM code in 2 dimensions, for isothermal, non-selfgravitating gas, and point symmetry has been imposed. Excellent resolution near the galaxy center (better than 20 pc inside the nuclear ring) was achieved by using a polar grid. In order to trace shocks better, we calculated the value $`\mathrm{div}^2𝐯`$ throughout the grid, where $`\mathrm{div}\,𝐯<0`$, and displayed it next to the density diagrams.
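The shock tracer can be illustrated with a few lines of code. This is a schematic central-difference sketch on a Cartesian grid, not the PPM scheme itself; the test flow is a made-up uniform inflow:

```python
def shock_tracer(vx, vy, h):
    """Return div^2 v where div v < 0 (converging flow) and 0
    elsewhere, using central differences on a uniform grid
    (interior points only)."""
    ny, nx = len(vx), len(vx[0])
    tr = [[0.0] * nx for _ in range(ny)]
    for i in range(1, ny - 1):
        for j in range(1, nx - 1):
            div = ((vx[i][j + 1] - vx[i][j - 1])
                   + (vy[i + 1][j] - vy[i - 1][j])) / (2.0 * h)
            if div < 0.0:
                tr[i][j] = div * div
    return tr

# test flow: uniform radial inflow v = (-x, -y) has div v = -2,
# so the tracer equals 4 at every interior point
n, h = 11, 0.1
xs = [(j - n // 2) * h for j in range(n)]
ys = [(i - n // 2) * h for i in range(n)]
vx = [[-xj for xj in xs] for _ in ys]
vy = [[-yi for _ in xs] for yi in ys]
tr = shock_tracer(vx, vy, h)
```

Restricting the tracer to converging flow is what isolates shocks: smooth rotation or expanding regions contribute nothing.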
The nuclear ring at low gas sound speeds ($`c_s=5`$ km s<sup>-1</sup>) is made of a tightly wound spiral (Piner, Stone & Teuben 1995 ApJ 449, 508) with no shocks (Fig.1 left). The pair of straight main shocks in the bar ends clearly at the outer edge of the nuclear ring, and only a weak, tightly wound sound wave propagates through gas inside the ring. There is no significant gas inflow to the center.
The exceptionally high resolution of our method allowed us to study the structure of the gas flow at high sound speed (20 km s<sup>-1</sup>). Instead of the nuclear ring, a nuclear spiral with higher pitch angle develops (Fig.1 center). Its presence on the $`\mathrm{div}^2𝐯`$ plot indicates a spiraling shock, along which the gas falls towards the center. We predict a significant gas inflow here, which can fuel an AGN.
## 3. Results for a dynamically possible double bar
In a dynamically possible double bar, the primary bar must have an ILR, and the secondary bar must end well within the outer ILR of the main bar (Maciejewski & Sparke 1999 MNRAS in print). Resonant coupling favors the existence of stable orbits supporting double bars (Tagger et al. 1987 ApJ 318, L43). Coupling the corotation resonance of the secondary bar with the outer ILR of the main bar causes the dynamically possible secondary bar to end well inside its own corotation, i.e. it rotates slowly.
A snapshot of gas flow in a dynamically possible system with two independently rotating bars (Fig.1 right) shows the nuclear ring widened or destroyed by the secondary bar. An elliptical ring develops around the inner bar, with a size that is largely independent of the sound speed: the flow is mainly elliptical with weak transient shocks. Straight shocks form in fast rotating bars only — they curl around a slow bar and turn into a ring (Athanassoula 1992, MNRAS 259, 345). No significant gas inflow to the center is seen, even at high sound speed.
## 4. Conclusions
In a singly barred galaxy both near-circular motion of gas in the nuclear ring, and a spiraling shock extending towards the galaxy center are possible, depending on the sound speed in the gas. A dynamically possible doubly barred galaxy is likely to have a slowly rotating secondary bar, which neither creates shocks in the gas flow, nor enhances gas inflow to the galaxy center.
# The cluster M-T relation from temperature profiles observed with ASCA and ROSAT
## 1. Introduction
Galaxy clusters are the largest gravitationally bound objects in the universe, and thereby provide information on cosmic structure formation. The mass distribution of virialized objects can be predicted for different cosmologies and different initial density fluctuation spectra. By comparing such predictions to the observed cluster mass function, one can constrain cosmological parameters. Cosmological parameters most strongly affect mass function predictions for large masses that correspond to the cluster scale, and therefore clusters are of special importance.
Accurate measurements of the cluster total mass, dominated by dark matter, are challenging at present and possible only for a limited number of clusters. For this reason there is currently insufficient data for a direct derivation of the mass function. A more practical way of determining the mass function is to observe the distribution of readily available average cluster gas temperatures and to convert this to a mass function, taking advantage of the tight mass - temperature correlation predicted by hydrodynamic cluster formation simulations (e.g. Evrard, Metzler & Navarro 1996). The mass and temperature are predicted to scale as $`M\propto T^{3/2}`$. Although different simulations and observations are in general qualitative agreement, there are significant disagreements on the details of the gas temperature profiles (Frenk et al. 1999, Markevitch et al. 1998). Therefore, this relation needs observational confirmation and calibration.
In addition to being useful for providing a link between cosmological predictions and observations, the M-T relation is also interesting in itself, because any deviations from the predicted self-similar scaling of $`M\propto T^{3/2}`$ would indicate that additional physical processes are at play than gravity alone. The detailed behaviour of the M-T relation provides information about the process of cluster formation and energy input into the gas. One particularly important energy input source is preheating of the intergalactic gas by early supernova driven galactic winds (e.g. David, Forman & Jones (1991), Evrard & Henry (1991), Kaiser (1991), Loewenstein & Mushotzky (1996)).
Steps toward calibrating the M-T relation with observations have been made by measuring cluster masses using gravitational lensing (Hjorth, Oukbir & van Kampen 1998) and the hydrostatic equilibrium approach assuming isothermal gas (Neumann & Arnaud 1999). Horner, Mushotzky & Scharf (1999) derived the M-T relation using several independent methods: virial theorem for cluster galaxies and hydrostatic equilibrium applied to the X-ray emitting gas, both assuming isothermality and using published mass values derived from measured temperature profiles.
As in the latter works, in this paper, we derive a cluster mass - temperature relation under the assumption of hydrostatic equilibrium. We use the published total mass profiles and X-ray emission-weighted temperatures derived from temperature profiles of hot clusters measured with ASCA. For cooler groups, using the published ROSAT cluster temperature profiles, we compute the corresponding temperature values, and the total mass profiles in cases where masses are not published in sufficient detail. We incorporate several improvements over the work of Horner et al. (1999). Our ASCA cluster sample is homogeneous, the temperature profiles are all determined using the same method that accounts for the ASCA PSF (Markevitch et al. 1998) so that the resulting mass values and their errors are directly comparable. The Horner et al. (1999) sample contains mass profiles derived both with ASCA PSF correction (e.g. A2256 and A2029) as well as without it (e.g. A496 and A2199). For A496 and A2199 the PSF correction is not large for the central pointing used in Horner et al. (1999), but we measure the temperature profile to a larger radius using offset pointings where the PSF correction is significant. Another important difference is that Horner et al. (1999) extrapolate mass profiles to an overdensity radius $`r_{200}`$ assuming $`\rho _{dark}r^{2.4}`$, whereas we use the measured mass profiles up to $`r_{1000}`$. At this radius relaxed clusters are unlikely to experience significant residual turbulence that would violate the hydrostatic equilibrium assumption (e.g. Evrard et al. 1996). Our sample contains only clusters without signs of disturbance. Thirdly, we combine our ASCA sample with a low-temperature subsample of galaxy groups and galaxies, whose temperature profiles have been measured with ROSAT, in order to study the M-T relation over a temperature range of 1 - 10 keV.
We use $`H_0=50h_{50}\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$, $`\mathrm{\Omega }=1`$ and report 90% confidence intervals throughout the paper, except where stated otherwise.
## 2. DATA
### 2.1. The sample
The sample consists of 9 clusters, groups and galaxies with published (see Section 2.3) relatively accurate spatially resolved temperature profiles measured with ASCA (the 6 hotter ones) and ROSAT (the 3 cooler ones) up to radii of overdensity of 1000 - 500, comparable to the virial radii. Hereafter we use the usual notation $`r_i`$ and $`M_i`$ where $`r_i`$ is a radius of overdensity (the mean interior density with respect to the critical density) of $`i`$, and $`M_i`$ is the mass within that radius. For our sample, we required the objects to be apparently relaxed, with no substructure or deviation from azimuthal symmetry in ROSAT PSPC images, or in the ASCA temperature maps in Markevitch et al. (1998). Hence, the effects of bulk motions, and the deviation from hydrostatic equilibrium are minimized. The requirement of apparent relaxation limits severely the number of suitable cluster candidates for our analysis.
Most relaxed clusters have strong cooling flows which complicate the ASCA spatially resolved analysis. Since the ASCA PSF has a half-power diameter comparable to the angular size of a typical cooling flow in a nearby cluster, the cooling flow parameters can not be constrained adequately. However, when modeling the temperature structure of the cluster, the (uncertain) cooling flow model must be included as a component. The wide energy-dependent PSF scattering of the photons from a strong cooling flow therefore increases the uncertainty of the non-cooling-flow model temperatures even at a large radius. Therefore we mostly selected relaxed clusters with only moderate cooling flows (and therefore accurate temperature profiles), which further limited the number of suitable sample clusters.
We seek relaxed systems which span the 1-10 keV temperature range. The clusters with the lowest temperatures are also the faintest and accurate spectroscopy is possible only for the nearest ones. The obvious problem is that in nearby clusters the virial radius corresponds to a large angular distance, usually beyond the ASCA and ROSAT field of view. This furthermore limits the number of available objects at low temperatures (kT $`<`$ 4 keV). There are several nearby cool groups and galaxies, for which the analysis with the needed accuracy is feasible with ROSAT, with a slight extrapolation of the mass profiles (see 2.4.).
Our search in the archives and literature resulted in the following sample of objects suitable for our analysis: A496, A2199, A401, A3571, A2256 and A2029 studied with ASCA, and NGC5044, NGC507 and HCG62 with ROSAT.
### 2.2. The mass errors
Total masses are determined assuming hydrostatic equilibrium using the measured gas temperature and density profiles. For some clusters the formal uncertainty of the resulting mass values is small. However, hydrodynamic simulations show that the systematic uncertainties inherent in the hydrostatic mass determination method, such as deviations from spherical symmetry or hydrostatic equilibrium due to incomplete thermalization of the gas, will lead to about a 10 - 30% uncertainty in the calculated total masses (e.g. Evrard et al. 1996; Schindler 1996; Roettiger, Burns & Loken 1996). Since these simulations include also mergers, whereas our sample contains only relaxed clusters, our uncertainties would be towards the low end of the above interval. However, to be conservative, if any formal mass error from the literature is smaller than 20% of the mass value we use a 20% uncertainty instead. The errors on the emission-weighted temperatures are much smaller than the mass errors. For example, at the most interesting radius $`r_{1000}`$ (see below) most of the relative temperature errors are smaller than 0.2 times the relative $`M_{1000}`$ errors wherefore we ignore the temperature errors while deriving the M-T relation.
### 2.3. The ASCA subsample
For the hotter ($`>`$ 4 keV) clusters (A496, A2199, A401, A3571, A2256, A2029) we use the emission weighted cooling-flow corrected ASCA temperatures from Markevitch et al. (1998). For these clusters we use the published total mass profiles obtained using observed ASCA temperature profiles and ROSAT surface brightness profiles. The masses of A496 and A2199 (Markevitch et al. (1999)) have been derived by modeling the temperature profile with a polytropic form and the mass for A2029 (Sarazin, Wise & Markevitch (1998)) has been derived by modeling a temperature profile as a linear function of the radius. For A401 (Nevalainen, Markevitch & Forman 1999a), A3571 (Nevalainen, Markevitch & Forman 1999b) and A2256 (Markevitch & Vikhlinin 1997) the masses have been obtained modelling the dark matter density with different functional forms (see above papers for the details), computing the corresponding temperature profiles and fitting the dark matter distribution parameters. For all these clusters the temperature profile is determined relatively accurately out to $`r_{500}`$. All these clusters exhibit a temperature decline with radius, which results in smaller total mass values at our radii of interest, compared to the usual isothermal analysis. The “universal” NFW dark matter profile (Navarro, Frenk & White 1997) provides a good description of these profiles over a large range of radii.
### 2.4. The ROSAT subsample
For the cooler objects, we used the published ROSAT temperature and surface brightness profiles to compute the emission-weighted mean temperatures outside the cooling flow regions, to be directly comparable to the cooling flow-corrected ASCA temperatures for hotter clusters. We wish to do all the analysis at $`r_{500}`$, but the ROSAT PSPC data do not extend to such large radii. We extrapolate the mass profiles beyond the radii of measured temperatures (except in the case of NGC507, where the temperature profile extends sufficiently far), but not beyond the radii to which the cluster surface brightness is significantly detected. For the group HCG62 and the galaxy NGC507 this allows us to reach $`r_{1000}`$, but for the NGC5044 group we reach only $`r_{1500}`$ (see Figure 1). While for the hotter clusters we use the published mass and temperature values, ROSAT data need some additional analysis, as described in detail below.
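For reference, emission weighting amounts to averaging the temperature over shells with weights proportional to the emission measure. The simplified sketch below weights each shell by $`n_e^2V`$ (ignoring the band emissivity, which the full calculation would fold in); the profile numbers are made up, not measurements:

```python
import math

def emission_weighted_T(r_edges, T, n_e, r_cool):
    """Emission-weighted mean temperature outside the cooling
    radius; each spherical shell is weighted by n_e^2 * V, a
    simplified stand-in for the X-ray emission measure."""
    num = den = 0.0
    for i, Ti in enumerate(T):
        r_in, r_out = r_edges[i], r_edges[i + 1]
        if r_out <= r_cool:               # skip the cooling flow
            continue
        V = 4.0 / 3.0 * math.pi * (r_out**3 - r_in**3)
        wgt = n_e[i]**2 * V
        num += wgt * Ti
        den += wgt
    return num / den

# made-up declining profile (NOT the measured HCG62 values)
edges = [0.0, 0.1, 0.2, 0.3, 0.4]         # Mpc
T = [0.9, 1.3, 1.2, 1.0]                  # keV
ne = [1e-2, 5e-3, 2e-3, 1e-3]             # cm^-3
Tew = emission_weighted_T(edges, T, ne, r_cool=0.1)
```

Because the density falls steeply, the weighted mean is dominated by the inner (non-cooling-flow) bins, which is why a declining outer profile can still yield an average above the outermost bin temperature.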
#### 2.4.1 Group HCG62
Using the ROSAT PSPC temperature profile from Ponman & Bertram (1993) we obtain the X-ray emission-weighted temperature $`T`$ = 1.16 $`\pm `$ 0.08 keV beyond the cooling radius of $`r=2.4^{\prime }`$. For comparison, a published single component temperature of the ASCA data for HCG62 gives a smaller value of $`T`$ = 1.05 $`\pm `$ 0.02 keV (Fukazawa et al. 1998). However, in the ASCA analysis the central 0.15$`h_{50}^{-1}`$ Mpc was excluded, which corresponds to $`6.4^{\prime }`$ for HCG62. Thus the ASCA value integrates over the outer parts of HCG62. In that region, the declining ROSAT temperature profile is consistent with the above ASCA value. Therefore, we use the value derived from the profile of Ponman & Bertram (1993).
To derive the mass profile, we fitted the above ROSAT temperature profile with a polytropic form ($`T(r)\propto \rho _{gas}(r)^{\gamma -1}`$), fixing the gas density using the results of the ROSAT surface brightness analysis (Ponman & Bertram 1993), and found the best fit with $`\gamma =1.09_{-0.14}^{+0.15}`$. For this fit, we excluded the cooling flow region $`r<2.4^{\prime }`$. The total mass for a polytropic temperature profile inside a radius $`r=xa_x`$ is given by
$$M_{tot}(r)=3.70\times 10^{13}M_{\odot }T(r)a_x\frac{3\beta \gamma x^2}{1+x^2}\left(\frac{\mu }{0.60}\right)^{-1},$$
(1)
where $`\mu `$ is the mean molecular weight, $`\beta `$ and $`a_x`$ are the slope parameter and core radius of the gas density profile, T is expressed in keV and $`a_x`$ in Mpc. We propagated the errors of $`\gamma `$ and T to the total mass.
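Since the mean-molecular-weight factor in eq. (1) is unity for $`\mu =0.60`$, the mass profile reduces to a one-line computation. The sketch below is our own illustration; the numbers in the usage example are placeholders of roughly the right magnitude for a cool group, not the paper's fitted values.

```python
def m_tot(T_keV, a_x_mpc, x, beta, gamma):
    """Hydrostatic mass (solar masses) inside r = x * a_x for a polytropic
    beta-model temperature profile, following eq. (1) with mu = 0.60."""
    return 3.70e13 * T_keV * a_x_mpc * 3.0 * beta * gamma * x**2 / (1.0 + x**2)

# Illustrative placeholder values only: T(r) ~ 1.1 keV, a_x = 0.05 Mpc,
# beta = 0.54, gamma = 1.09, evaluated at r = 10 a_x = 0.5 Mpc.
mass = m_tot(1.1, 0.05, 10.0, 0.54, 1.09)
```

Propagating the $`\gamma `$ and T uncertainties through this expression gives the quoted mass errors.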
#### 2.4.2 Galaxy NGC507
Kim & Fabbiano (1995) obtained a temperature profile for this galaxy with the ROSAT PSPC. Excluding the data from the cooling flow area ($`r<2.0^{\prime }`$), we obtain a mean temperature of $`T`$ = 1.09 $`\pm `$ 0.03 keV. The corresponding value obtained by ASCA is significantly higher, $`T`$ = 1.26 $`\pm `$ 0.07 keV (Fukazawa et al. 1998). We are not strongly concerned about this disagreement, because the ROSAT and ASCA energy bands are very different and there may be differences in the average temperature due to, for example, nonisothermality and a possible nonthermal component. We assume that for low temperature systems ($`T\sim 1`$ keV) ROSAT gives a more accurate value, and use the above ROSAT value in our analysis. We show below that if we use the higher ASCA temperature value instead, our conclusions will only be strengthened.
The total mass profile is directly taken from Kim & Fabbiano (1995), who used a polytropic model for the temperature profile, as described in (2.4.1) above. The errors of temperature and the polytropic index $`\gamma `$ are incorporated in the mass values.
#### 2.4.3 Group NGC5044
Using the ROSAT temperature profile of David et al. (1994), we obtained the emission-weighted temperature $`T`$ $`=1.09\pm 0.03`$ keV beyond the cooling radius $`r=4.0^{\prime }`$. The ROSAT temperatures beyond 0.15$`h_{50}^{-1}`$ Mpc are consistent with the corresponding ASCA value ($`T`$ = 1.07 $`\pm `$ 0.01 keV; Fukazawa et al. 1998).
We use the analytical form for the best fit total mass given in David et al. (1994), obtained using a power-law model for the temperature profile. We propagate the errors of temperature and the exponent of the temperature profile to the total mass errors. Because the X-ray emission is not detected beyond $`r_{1500}`$ for this nearest group, we will not use this group in our analysis beyond that radius.
#### 2.4.4 NFW mass profile
The NFW model for the dark matter density profile describes well the mass distribution in the hot ASCA clusters in our sample (see e.g. Markevitch & Vikhlinin 1997, Sarazin et al. 1998, Markevitch et al. 1999, Nevalainen et al. 1999a, Nevalainen et al. 1999b). Furthermore, a similar result was found for Coma in an optical study of galaxies (Geller, Diaferio & Kurtz 1999). Therefore, we compared the NFW model with the derived mass profiles for the cool groups and galaxies HCG62, NGC507 and NGC5044. We fixed the gas density profile to the ROSAT values and modeled the dark matter density distribution with an NFW profile, selecting its parameters so that gas plus dark matter approximate the derived total mass profiles. For all three systems, the NFW profile provides a good description of the dark matter distribution over the range of interesting radii (see Figure 1). Such an underlying universal dark matter profile is consistent with the observed similarity of gas density profiles (Neumann & Arnaud 1999, Vikhlinin, Forman & Jones 1999) and temperature profiles (Markevitch et al. 1998) in different clusters, when scaled to physical radii.
## 3. RESULTS
For each object, using its measured total mass profile, we computed the overdensity, or the mean interior density in units of the critical density, as a function of radius: $`M_{tot}(r)/(\frac{4}{3}\pi r^3\rho _c)`$, where $`\rho _c=3H_0^2(1+z)^3/8\pi G`$. We then calculated the masses within several radii of fixed overdensity (see Table 1). To remove the different amounts of evolution in our sample temperatures due to different values of $`z`$, we scaled the observed mean temperatures to $`z=0`$ according to the predicted scaling $`T_{gas}(z)\propto (1+z)`$ for a given mass (e.g. Eke, Cole & Frenk 1996). To be consistent with this, we should also compute the overdensities at $`z=0`$, but this would require the further assumption that we observe the clusters just after their collapse. However, due to the low $`z`$ values in our sample, our results do not change significantly whether we compute the overdensities at $`z=0`$ or at the observed redshifts, and we report our results using the latter method.
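This step can be sketched numerically as follows (our own illustration; it assumes H0 = 50 h50 km/s/Mpc, as implied by the $`h_{50}`$ scalings used throughout, and the standard value of G in Mpc (km/s)² per solar mass):

```python
import math

def overdensity(m_tot_msun, r_mpc, z, h50=1.0):
    """Mean interior density in units of the critical density,
    M(<r) / ((4/3) pi r^3 rho_c), with rho_c = 3 H0^2 (1+z)^3 / (8 pi G)."""
    G = 4.301e-9                 # Mpc (km/s)^2 / Msun
    H0 = 50.0 * h50              # km/s/Mpc
    rho_c = 3.0 * H0 ** 2 * (1.0 + z) ** 3 / (8.0 * math.pi * G)  # Msun/Mpc^3
    return m_tot_msun / (4.0 / 3.0 * math.pi * r_mpc ** 3 * rho_c)
```

A radius such as $`r_{1000}`$ is then the root of overdensity(M(r), r, z) = 1000 along the measured mass profile.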
Figure 2 shows that $`M_{1000}`$ is strongly correlated with $`T`$, the X-ray emission weighted cooling-flow corrected temperature. We fit the masses with a function
$$M_i=n_i\left(\frac{T_{z=0}}{10\mathrm{keV}}\right)^{\alpha _i},$$
(2)
where i denotes different values of overdensity.
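A simplified stand-in for this fit (our own sketch: weighted least squares in log space with symmetric mass errors and no temperature errors, not the exact χ² procedure used in the paper) can be written as:

```python
import math

def fit_powerlaw(temps_keV, masses, mass_errs):
    """Fit M = n * (T / 10 keV)**alpha by weighted least squares in log space.
    Returns (n, alpha)."""
    xs = [math.log(t / 10.0) for t in temps_keV]
    ys = [math.log(m) for m in masses]
    ws = [(m / e) ** 2 for m, e in zip(masses, mass_errs)]  # ~ 1/sigma_logM^2
    W = sum(ws)
    xbar = sum(w * x for w, x in zip(ws, xs)) / W
    ybar = sum(w * y for w, y in zip(ws, ys)) / W
    alpha = (sum(w * (x - xbar) * (y - ybar) for w, x, y in zip(ws, xs, ys))
             / sum(w * (x - xbar) ** 2 for w, x in zip(ws, xs)))
    n = math.exp(ybar - alpha * xbar)
    return n, alpha
```

On data drawn exactly from a power law this recovers the slope and normalization; on real data the error treatment matters and the full χ² fit should be preferred.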
We are able to obtain interesting constraints on the above relation for radii up to $`r_{1000}`$. We exclude the low temperature data from the fits at larger radii, because the gas in the nearby ROSAT objects is not detected to such low overdensities. At $`r_{1000}`$ we get an acceptable fit with $`n_{1000}=1.23_{-0.18}^{+0.21}\times 10^{15}h_{50}^{-1}M_{\odot }`$, $`\alpha _{1000}=1.79_{-0.13}^{+0.14}`$, $`\chi ^2=6.4`$ for 6 d.o.f. The self-similar prediction $`\alpha =3/2`$ differs by 3.7 $`\sigma `$. If we use the higher ASCA temperature for NGC507 (see 2.4.2), the relation steepens by a few per cent and the difference with the self-similar slope increases slightly. If we were to include the NGC5044 data in the fit at $`r_{1000}`$ (see 2.4.3), the results would not change significantly, since that data point is consistent with those of HCG62 and NGC507, but the constraint would improve. Therefore, our choices of using the lower temperature for NGC507 and excluding the NGC5044 data at $`r_{1000}`$ in all our fits are conservative. Fixing the slope $`\alpha _{1000}`$ to 1.5, the best fit gives $`\chi ^2=29.4`$ for 7 d.o.f.; thus $`\alpha _{1000}`$ = 3/2 is ruled out at 99.99% confidence. In the radial range $`r_{2000}\leq r\leq r_{1000}`$, the values of $`\alpha `$ are consistent with a constant and significantly larger than 1.5 (see Table 2). However, if only the 6 hotter ASCA clusters are fitted, the slope at $`r_{1000}`$ is $`\alpha _{1000}=1.8\pm 0.5`$, consistent with 3/2 within the large errors.
## 4. DISCUSSION
### 4.1. The slope of the $`MT`$ relation
Our main result is that the M-T scaling in the 1-10 keV range is inconsistent with the self-similar prediction. A possible explanation for the steeper slope of the M-T relation, compared to the self-similar one, is preheating of the intracluster gas by supernova-driven galactic winds before the clusters collapse, as proposed by e.g. David et al. (1991), Evrard & Henry (1991), Kaiser (1991) and Loewenstein & Mushotzky (1996) to explain other X-ray data, such as the $`L_X`$-T relation and cluster elemental abundances. If supernovae release a similar amount of energy per unit gas mass in hot and cool clusters, the coolest clusters would be affected more significantly and would exhibit a stronger shift to higher temperatures in the M-T diagram (see Figure 2) than the hotter clusters. This will steepen the M-T relation. The cluster formation simulations by Metzler & Evrard (1997), which include supernova heating, produce a slightly steeper slope (M$`\propto T^{1.61}`$) compared to the self-similar slope of 1.5 in their simulations with no winds. They show that if all the wind energy is thermalized and retained within the virial radius, the temperature at masses of $`10^{13}M_{\odot }`$ may increase by 100%, which would totally break the scaling at low temperatures. They note that in reality the effect should not be that dramatic, since the extra energy is spent on work to lift the gas within the cluster potential well.
Our results are consistent with this explanation. Fitting only the ASCA data points (kT $`>`$ 4 keV) at $`r_{1000}`$ with $`\alpha _{1000}`$ fixed to 3/2 leads to an acceptable fit with $`n_{1000}=1.06_{-0.09}^{+0.08}`$ and $`\chi ^2=6.7`$ for 5 d.o.f. (see Figure 2). HCG62, NGC507 and NGC5044 are then 50%, 30% and 25% hotter than what the extrapolation of the self-similar fit would predict for their masses. These amounts are reasonable in the supernova heating scheme, according to the above simulations (Metzler & Evrard 1997).
Other work is also consistent with the supernova heating scenario. Horner et al. (1999) examined the cluster M-T relation using several methods and found that a spatially resolved temperature profile analysis for a sample of clusters with kT $`>`$ 3 keV gives $`M\propto T^{3/2}`$ scaling, whereas their isothermal analysis sample, which includes some cooler groups with kT $`>`$ 1 keV, gives a steeper slope, consistent with ours. Hjorth et al. (1998) compare temperatures and gravitational lensing masses of clusters with kT $`>`$ 5 keV and obtain an M-T slope consistent with the self-similar one. The analysis of a sample of clusters with kT $`>`$ 4 keV (Neumann & Arnaud 1999) shows that the self-similar slope of 3/2, which follows from their isothermal assumption, is consistent with their data. These observations, together with our present results, are consistent with the hypothesis that at high temperatures the M-T scaling is self-similar, but breaks down at low temperatures ($`\sim 1`$ keV). However, Horner et al. (1999) also compare temperatures with masses determined from velocity dispersions using the virial theorem. That dataset appears to be consistent with the self-similar scaling, even though their sample contains clusters with temperatures as low as $`\sim `$ 2 keV. The source of this disagreement is unclear (see 4.2 for more discussion of the virial masses).
Also supportive of energy injection is the work of Ponman, Cannon & Navarro (1999), which shows that cool (T $`<`$ 4 keV) clusters observed with ROSAT and GINGA have entropies higher than achievable through gravitational collapse alone; the authors explain this by preheating from strong galactic winds.
In the hydrostatic equilibrium scheme, since approximately $`M_{tot}\propto T\times \beta `$ at a given radius, for a given object with a certain total mass a temperature rise due to preheating will be compensated by a shallower gas density profile. If heating is more prominent at small temperatures, one would then expect lower values of $`\beta `$ in cooler objects. This seems to be consistent with observations (e.g. Mohr & Evrard 1997, Vikhlinin et al. 1999, Ponman et al. 1999). Our sample also exhibits this behaviour. The average values of $`\beta `$ with rms errors for the ROSAT and the ASCA subsamples are $`0.54\pm 0.09`$ and $`0.70\pm 0.06`$, respectively.
### 4.2. The normalization
The $`M_{1000}`$ values given by our best fit model with the exponent as a free parameter are 2.8 and 1.4 times smaller than the corresponding values obtained in the Evrard et al. (1996) $`\mathrm{\Omega }=1`$ simulations at T = 1 keV and 10 keV, respectively. Note that both the ASCA and ROSAT data, which have been obtained using independent temperature measurement methods and different instruments, give smaller values than the simulations. The above mentioned temperature profile analysis by Horner et al. (1999) gives a normalization 40% lower than Evrard et al. (1996) at $`r_{200}`$, which is the same difference as we find between the normalization of our $`\alpha =3/2`$ fit and Evrard et al. (1996) at $`r_{1000}`$. The normalization of the isothermal sample results of Neumann & Arnaud (1999) is also 30% lower than the Evrard et al. (1996) values at $`r=0.3r_{200}`$. It is not inconceivable that the X-ray-measured masses are a factor of $`\sim `$2 lower than the true masses (in simulations), due, for example, to significant gas turbulence or magnetic fields (e.g. Loeb & Mao 1994) invoked to explain the difference between the X-ray and lensing mass measurements. However, this seems increasingly unlikely, as the cooling flow and temperature gradient effects appear to account for most of that disagreement (Allen 1998; Markevitch et al. 1999).
The gravitational lensing mass analysis by Hjorth et al. (1998) gives a 12% lower $`MT`$ normalization than Evrard et al. (1996), while Sadat, Blanchard & Oukbir (1998) found gravitational lensing masses in their sample to be 36% below the Evrard et al. (1996) scaling. Lensing masses are highly uncertain at present (see e.g. Hjorth et al. 1998). Within the errors, both above results are in agreement with our and other X-ray results.
On the other hand, the virial sample in Horner et al. (1999) gives a normalization consistent with Evrard et al. (1996). There are limitations to each of the three mass measurement methods (virial, X-ray, lensing). For example, the virial masses may be inflated by the inclusion of background and foreground galaxies. The comparison is best done on a case-by-case basis, which is beyond the scope of this paper. The qualitative agreement of all the other results suggests that it is the simulated values of Evrard et al. (1996) that may be incorrect. These simulations also produce gas density profiles that are too steep and temperature profiles that are too shallow compared to observations (e.g. Vikhlinin et al. 1999 and Markevitch et al. 1998, respectively). A comparison of several independent cluster formation codes indicates that gas temperature and density profiles are areas where different simulations disagree significantly (Frenk et al. 1999). For example, simulations by Bryan & Norman (1997) predict temperature profiles similar to those observed and should therefore produce an M-T relation closer to that observed.
## 5. CONCLUSIONS
We studied a sample of 9 relaxed galaxy clusters, galaxy groups and galaxies whose temperatures range from 1 to 10 keV and for which accurate temperature profiles are available. For the hotter subsample, the hydrostatic total mass profiles have been accurately determined up to $`r_{500}`$ using gas temperature profiles measured with ASCA. For the cooler subsample, the mass profiles are determined from spatially resolved spectroscopy with the ROSAT PSPC up to radii of $`r_{1500}`$–$`r_{1000}`$. The mass profiles of the cool subsample are consistent with the “universal” NFW model, as has earlier been found for the hotter subsample. We derived the mass-temperature relation at $`r_{1000}`$ over the 1-10 keV temperature range as
$$M_{1000}=(1.23\pm 0.20)\times 10^{15}h_{50}^{-1}M_{\odot }\left(\frac{T_{z=0}}{10\mathrm{keV}}\right)^{1.79\pm 0.14}.$$
(3)
The normalization is significantly smaller than that predicted by the $`\mathrm{\Omega }=1`$ simulations of Evrard et al. (1996). Our relation is significantly steeper than the self-similar one (slope of 3/2). However, fitting only our ASCA data (kT $`>`$ 4 keV) at $`r_{1000}`$ with $`\alpha _{1000}`$ fixed to 3/2 leads to an acceptable fit
$$M_{1000}=(1.06\pm 0.09)\times 10^{15}h_{50}^{-1}M_{\odot }\left(\frac{T_{z=0}}{10\mathrm{keV}}\right)^{1.5},$$
(4)
with a 2.6 $`\sigma `$ lower normalization than in the above-mentioned simulations (Evrard et al. 1996). Although we cannot exclude a single power law slope over the whole temperature range, this behaviour is consistent with a break in the self-similar scaling at low temperatures, as expected if preheating of the intracluster gas by supernova-driven galactic winds has taken place. Most independent M-T relation observations are consistent with our results and with the preheating scenario. The gas density slopes in our low temperature sample are smaller than those in the hot subsample, consistent with this supernova heating scheme.
The XMM and Chandra missions will be useful for extending the sample size. Their large effective area, excellent angular resolution and better energy resolution over a wide energy range will improve the accuracy of the spatially resolved spectroscopy many-fold compared to the presently available data.
JN thanks the Harvard-Smithsonian Center for Astrophysics for its hospitality. JN thanks the Smithsonian Institution for a Predoctoral Fellowship, and the Finnish Academy for a supplementary grant. WF and MM acknowledge support from NASA contract NAS8-39073. We thank the referee for useful comments.
no-problem/9911/math9911136.html | ar5iv | text | # Untitled Document
COALESCENCE OF SKEW BROWNIAN MOTIONS
Martin Barlow,<sup>1</sup> Krzysztof Burdzy,<sup>2</sup> Haya Kaspi<sup>3</sup> and Avi Mandelbaum<sup>3</sup>

1. Research partially supported by an NSERC (Canada) grant. 2. Research partially supported by NSF grant DMS-9700721. 3. Research partially supported by the Fund for the Promotion of Research at the Technion.
The purpose of this short note is to prove the almost sure coalescence of two skew Brownian motions starting from different initial points, assuming that they are driven by the same Brownian motion. The result is very simple, but we would like to record it in print as it has already become the foundation of a research project of Burdzy and Chen (1999). Our theorem is a by-product of an investigation of variably skewed Brownian motion; see Barlow et al. (1999).
Suppose that $`B_t`$ is the standard Brownian motion with $`B_0=0`$ and consider the equation
$$X_t^x=x+B_t+\beta L_t^x,t0,$$
$`(1)`$
where $`X_t^x`$ satisfies the initial condition $`X_0^x=x`$. Here $`\beta `$ is a fixed number in $`[-1,1]`$ and $`L_t^x`$ is the symmetric local time of $`X_t^x`$ at $`0`$. Harrison and Shepp (1981) proved that (1) has a unique strong solution, which is skew Brownian motion. One way to define skew Brownian motion in the case $`\beta \geq 0`$ is to start with a standard Brownian motion $`B_t^{\prime }`$ and flip every excursion of $`B_t^{\prime }`$ below 0 to the positive side with probability $`\beta `$, independently of what happens to other excursions. See Itô and McKean (1965) or Walsh (1978) for more information.
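The excursion-flipping construction is easy to mimic with a simple random walk. The following discrete sketch (our own illustration, for β ≥ 0) assigns each excursion of the reflected walk an independent sign that is positive with probability (1 + β)/2, which is exactly what flipping negative excursions upward with probability β achieves.

```python
import random

def skew_walk(n_steps, beta, rng):
    """Discrete approximation of skew Brownian motion via excursion flipping:
    run a reflected simple random walk and give each excursion an independent
    sign, positive with probability (1 + beta)/2."""
    path = [0]
    m = 0        # magnitude of the reflected walk
    sign = 1     # sign attached to the current excursion
    for _ in range(n_steps):
        if m == 0:
            # a new excursion starts; draw its sign
            sign = 1 if rng.random() < (1.0 + beta) / 2.0 else -1
            m = 1
        else:
            m += rng.choice((-1, 1))
        path.append(sign * m)
    return path
```

For beta = 0 this reduces to an ordinary simple random walk in distribution; for beta = 1 the walk is reflected at 0.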
Theorem. If $`X_t^x`$ and $`X_t^y`$ are solutions of (1) with the same $`\beta \in [-1,1]\setminus \{0\}`$, relative to the same Brownian motion $`B_t`$, then $`X_t^x=X_t^y`$ for some $`t<\mathrm{\infty }`$, a.s.
Proof. For simplicity assume that $`\beta >0`$ and $`0=x<y`$. Let $`\widehat{L}_t^0=\beta L_t^0`$, $`\widehat{L}_t^y=y+\beta L_t^y`$, and
$$\begin{array}{cc}\hfill T_0& =0,\hfill \\ \hfill S_k& =inf\{t>T_k:B_t=-\widehat{L}_t^y\},k\geq 0,\hfill \\ \hfill T_k& =inf\{t>S_{k-1}:B_t=-\widehat{L}_t^0\},k\geq 1,\hfill \\ \hfill W_k& =\frac{\widehat{L}_{S_{k-1}}^y-\widehat{L}_{S_{k-1}}^0}{\widehat{L}_{T_{k-1}}^y-\widehat{L}_{T_{k-1}}^0},k\geq 1,\hfill \\ \hfill V_k& =\frac{\widehat{L}_{T_k}^y-\widehat{L}_{T_k}^0}{\widehat{L}_{S_{k-1}}^y-\widehat{L}_{S_{k-1}}^0},k\geq 1,\hfill \\ \hfill M_k& =\widehat{L}_{T_k}^y-\widehat{L}_{T_k}^0,k\geq 0.\hfill \end{array}$$
We will first find the distributions of $`W_k`$’s and $`V_k`$’s using excursion theory. Recall the fundamentals of excursion theory for the standard Brownian motion from, e.g., Karatzas and Shreve (1991). The Brownian excursions from $`0`$ form a Poisson point process whose clock can be identified with the local time of Brownian motion at $`0`$. The intensity of excursions on the positive side of $`0`$ whose height is greater than $`h`$ is equal to $`1/(2h)`$.
The stopping time $`S_0`$ may be described as the first time when an excursion of $`-B_t-\widehat{L}_t^0`$ above $`0`$ hits the level $`y-\widehat{L}_t^0`$. These excursions can be identified with the excursions of the skew Brownian motion $`X_t^0`$ below $`0`$. They form a Poisson point process $`𝒫`$ similar to the Poisson point process of excursions of the standard Brownian motion from 0. The intensity of $`𝒫`$-excursions above $`0`$ with height greater than $`h`$ is equal to $`(1-\beta )/(2h)`$. Note the extra factor $`1-\beta `$ as compared to the analogous formula for the excursions of the standard Brownian motion. The factor can be explained using the excursion flipping construction of skew Brownian motion mentioned in the introduction—in a sense, the fraction of excursions flipped to the other side is equal to $`\beta /2`$. When the clock $`L_t^0`$ for the Poisson point process $`𝒫`$ takes a value $`u`$, the instantaneous intensity of excursions with height greater than $`y-\widehat{L}_t^0`$ is equal to $`(1-\beta )/(2(y-\beta u))`$. We have $`\widehat{L}_{S_0}^y-\widehat{L}_{S_0}^0<a`$ if no $`𝒫`$-excursion with height greater than $`y-\widehat{L}_t^0`$ occurs before the time $`s`$ when $`y-\widehat{L}_s^0=a`$, i.e., when $`L_s^0=(y-a)/\beta `$. Thus excursion theory enables us to write the probability of this event using Poisson probabilities as follows,
$$P(\widehat{L}_{S_0}^y-\widehat{L}_{S_0}^0<a)=\mathrm{exp}\left(-\int _0^{(y-a)/\beta }\frac{1-\beta }{2(y-\beta u)}𝑑u\right)=\left(\frac{a}{y}\right)^{(1-\beta )/(2\beta )}.$$
Recall that $`\widehat{L}_{T_0}^y-\widehat{L}_{T_0}^0=y`$. We have
$$P(W_1y<a)=P(W_1(\widehat{L}_{T_0}^y-\widehat{L}_{T_0}^0)<a)=P(\widehat{L}_{S_0}^y-\widehat{L}_{S_0}^0<a)=(a/y)^{(1-\beta )/(2\beta )}.$$
By changing the variable we obtain for $`w\in (0,1)`$,
$$P(W_1<w)=w^{(1-\beta )/(2\beta )}.$$
By the strong Markov property, $`P(W_k<w)=w^{(1-\beta )/(2\beta )}`$ for $`w\in (0,1)`$ and every $`k\geq 1`$.
A totally analogous argument shows that $`P(V_k>v)=v^{-(1+\beta )/(2\beta )}`$ for $`v\geq 1`$ and $`k\geq 1`$.
Note that, by the strong Markov property, all random variables $`V_k,W_k,k\geq 1`$, are jointly independent.
Next we will show that the process $`M_k`$ is a martingale and converges to 0. First, note that $`M_k=M_{k-1}W_kV_k`$. It is elementary to check that $`EW_k=(1-\beta )/(1+\beta )`$ and $`EV_k=(1+\beta )/(1-\beta )`$. By the joint independence of the $`W_k`$’s and $`V_k`$’s,
$$E(M_k\mid M_{k-1},M_{k-2},\mathrm{\dots })=M_{k-1}EW_kEV_k=M_{k-1},$$
which shows that $`M_k`$ is a martingale. As a positive martingale, the process $`M_k`$ must converge with probability 1 to a random variable $`M_{\mathrm{\infty }}`$. Since for every $`k`$, $`M_k`$ is the product of $`M_{k-1}`$ and an independent random variable $`W_kV_k`$, the limit $`M_{\mathrm{\infty }}`$ can take only the values $`0`$ or $`\mathrm{\infty }`$. By Fatou’s Lemma, $`EM_{\mathrm{\infty }}\leq EM_0=y`$, so $`M_{\mathrm{\infty }}=0`$ a.s.
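The identities $`EW_kEV_k=1`$ and $`E(\mathrm{log}W_k+\mathrm{log}V_k)<0`$ (the latter is used below) are easy to check numerically by inverse-CDF sampling from the laws derived above; this is a sanity-check sketch of our own, not part of the proof.

```python
import math
import random

def sample_wv(beta, rng):
    """Draw independent (W, V) by inverse CDF from
    P(W < w) = w**((1-beta)/(2*beta)) on (0, 1) and
    P(V > v) = v**(-(1+beta)/(2*beta)) on (1, infinity)."""
    w = rng.random() ** (2.0 * beta / (1.0 - beta))
    v = rng.random() ** (-2.0 * beta / (1.0 + beta))
    return w, v
```

For beta = 0.2, the sample mean of W V is close to 1 and the sample mean of log(W V) is close to -1/6, matching EW = (1-beta)/(1+beta), EV = (1+beta)/(1-beta) and the negative drift of the logarithm.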
On every interval $`[T_k,S_k]`$ the process $`\widehat{L}_t^y-\widehat{L}_t^0`$ is non-increasing, but it is non-decreasing on intervals of the form $`[S_k,T_{k+1}]`$. Thus
$$\underset{t\in [T_k,T_{k+1}]}{sup}\widehat{L}_t^y-\widehat{L}_t^0\leq \mathrm{max}(M_k,M_{k+1}).$$
In view of the convergence of $`M_k`$ to 0, we must have a.s. convergence of $`\widehat{L}_t^y-\widehat{L}_t^0`$ to $`0`$ as $`t\to \mathrm{\infty }`$. It remains to show that the convergence does not take an infinite amount of time.
Let $`T_{\mathrm{\infty }}=lim_{k\to \mathrm{\infty }}T_k`$. In view of the remarks in the last paragraph, it is not hard to see that the value of $`\widehat{L}_{T_{\mathrm{\infty }}}^0`$ is bounded by $`\sum _{k=1}^{\mathrm{\infty }}M_k`$. Since $`\widehat{L}_{\mathrm{\infty }}^0=\mathrm{\infty }`$, it will suffice to show that $`\sum _{k=1}^{\mathrm{\infty }}M_k<\mathrm{\infty }`$ in order to conclude that $`T_{\mathrm{\infty }}<\mathrm{\infty }`$. We have for $`k\geq 1`$,
$$M_k=y\underset{j=1}{\overset{k}{\prod }}W_jV_j.$$
We can write
$$y\underset{j=1}{\overset{k}{\prod }}W_jV_j=\mathrm{exp}\left(\mathrm{log}y+\underset{j=1}{\overset{k}{\sum }}\left[\mathrm{log}W_j+\mathrm{log}V_j\right]\right).$$
One can directly check that the distribution of $`-\mathrm{log}W_j`$ is exponential with mean $`2\beta /(1-\beta )`$, while the distribution of $`\mathrm{log}V_j`$ is exponential with mean $`2\beta /(1+\beta )`$. Thus, $`E(\mathrm{log}W_j+\mathrm{log}V_j)<0`$. It follows that for some $`a>0`$, we eventually have
$$\underset{j=1}{\overset{k}{\sum }}\left[\mathrm{log}W_j+\mathrm{log}V_j\right]\leq -ak.$$
Hence, for some random $`c_1`$ and all $`k`$ we have $`M_k\leq c_1e^{-ak}`$ and so $`\sum _{k=1}^{\mathrm{\infty }}M_k<\mathrm{\infty }`$, a.s.
References
Barlow, M., Burdzy, K., Kaspi, H. and Mandelbaum, A. (1999), Variably skewed Brownian motion (preprint)
Burdzy, K. and Chen, Z.-Q. (1999) Local time flow related to skew Brownian motion (preprint)
Harrison, J.M. and Shepp, L.A. (1981), On skew Brownian motion, Ann. Probab. 9 (2), 309–313.
Itô, K. and McKean, H.P. (1965), Diffusion Processes and Their Sample Paths, Springer, New York.
Karatzas, I. and Shreve, S.E. (1991), Brownian Motion and Stochastic Calculus, 2nd Edition, Springer Verlag, New York.
Walsh, J.B. (1978), A diffusion with discontinuous local time, Temps Locaux, Astérisque 52–53, 37–45.
Martin Barlow: University of British Columbia, Vancouver, BC V6T 1Z2, Canada barlow@math.ubc.ca
Krzysztof Burdzy: University of Washington, Seattle, WA 98195-4350, USA burdzy@math.washington.edu
Haya Kaspi and Avi Mandelbaum: Technion Institute, Haifa, 32000, Israel iehaya@tx.technion.ac.il, avim@tx.technion.ac.il |
no-problem/9911/cs9911001.html | ar5iv | text | # Semantics of Programming Languages: A Tool-Oriented ApproachThis research was supported in part by the Telematica Instituut under the Domain-Specific Languages project.
## 1 The Role of Programming Language Semantics
Programming language semantics has lost touch with large groups of potential users . Among the reasons for this unfortunate state of affairs, one stands out. Semantic results are rarely incorporated in practical systems that would help language designers to implement and test a language under development, or assist programmers in answering their questions about the meaning of some language feature not properly documented in the language’s reference manual. Nevertheless, such systems are potentially more effective in bringing semantics-based formalisms and techniques to the places they are needed than their dissemination in publications, courses, or even exemplary (but little-used) programming languages.
The current situation in which semantics, languages, and tools are drifting steadily further apart is shown in Figure 2. The tool-oriented approach to semantics aims at making semantics definitions more useful and productive by generating as many language-based tools from them as possible. This will, we expect, reverse the current trend as shown in Figure 2. The goal is to produce semantically well-founded languages and tools. Ultimately, we envision the emergence of “Language Design Assistants” incorporating substantial amounts of semantic knowledge.
Table 1 lists the semantics definition methods we are aware of. Examples of their use can be found in . Petri nets, process algebras, and other methods that do not specifically address the semantics of programming languages, are not included. Dating back to the sixties, attribute grammars and denotational semantics are among the oldest methods, while abstract state machines (formerly called evolving algebras), coalgebra semantics, and program algebra are the latest additions to the field. Ironically, while attribute grammars are popular with tool builders, semanticists do not consider them a particularly interesting definition method. Since we will only discuss the various methods in general terms without going into technical details, the reader need not be familiar with them. In any case, the differences between them, while often hard to decipher because the field is highly fragmented and appropriate “dictionaries” are lacking, do not affect our main argument.
Table 2 lists a representative language development system (if any) for the semantics definition methods of Table 1. The last entry, Software Refinery, which has its origins in knowledge-based software environments research at Kestrel Institute, does not fit any of the current semantics paradigms. The pioneering Semanol system is, to the best of our knowledge, no longer in use and is not included. The systems listed have widely different capabilities and are in widely different stages of development. Before discussing their characteristics and applications in Section 3, we first explain the general ideas underlying the tool-oriented approach to programming language semantics. These were shaped by our experiences with the ASF+SDF Meta-Environment (Table 2) over the past ten years. Finally, we discuss Language Design Assistants in Section 4.
## 2 A Tool-Oriented Approach to Semantics
The tool-oriented approach to semantics aims at making semantics definitions more useful and productive by generating as many language-based tools from them as possible. This affects many aspects of the way programming language semantics is practiced and upsets some of its dogmas.
Table 3 lists some of the tools that might be generated. In principle, the language definition has to be augmented with suitable tool-specific information for each tool to be generated, and this may require tool-specific language extensions to the core semantics definition formalism. In practice, this is not always necessary since semantics definitions tend to contain a good deal of implicit information that may be extracted and used for tool generation.
The first entry of Table 3, scanner and parser generation, is standard technology. Lex and Yacc are well-known examples of stand-alone generators for this purpose. Their input formalisms are close to regular expressions and BNF, the de facto standard formalisms for regular and context-free grammars, respectively. Unfortunately, for most of the other tools in Table 3 there are no such standard formalisms.
The key features of the tool-oriented approach are:
* Language definitions are primarily tool generator input. They do not have to provide any kind of theoretical “explanation” of the constructs of the language in question nor do they have to become part of a language reference manual.
* An interpreter that can act, among other things, as an “oracle” to programmers needing help will be among the first tools to be generated.
* Writing (large) language definitions loses its esoteric character and becomes similar to any other kind of programming. Semantics formalisms tend to do best on small examples, but lose much of their power as the language definitions being written grow. In the tool-oriented approach, semantics formalisms have to be modular and separate generation (the analogue of separate compilation) has to be supported. Libraries of language constructs become important.
* The tool-oriented approach may require addition of tool-specific features to the core formalism. This leads to an open-ended rather than a “pure” style of semantics description.
* The scope of the tool-oriented approach includes, for instance,
+ Domain-specific and little languages . Many of the tools in Table 3 are as useful for DSLs as they are for programming languages.
+ Software maintenance and renovation tools . Some of these are included at the end of Table 3.
+ Compiler toolkits such as CoSy , Cocktail , OCS , SUIF , and PIM .
## 3 Existing Language Development Systems
Table 4 summarizes the tool generation capabilities of the representative language development systems listed in Table 2. All of them can generate lexical scanners, parsers, and prettyprinters, many of them can produce syntax-directed editors, typecheckers, and interpreters, and a few can produce various kinds of software renovation tools. To this end, they support one or more specification formalisms, but these differ in generality and application domain.
For instance, the Synthesizer Generator supports attribute grammars with incremental attribute evaluation, which is particularly suitable for typechecking, static analysis and translation, but less suitable for dynamic semantics. The ASF+SDF Meta-Environment supports conditional rewrite rules rather than attribute grammars, and these can be used for defining dynamic semantics as well. Software Refinery comes with a full-blown functional language in which a wide range of computations on programs can be expressed. Other systems provide more specialized specification formalisms. PSG, for instance, uses context relations to describe incremental typechecking (even for incomplete program fragments) and denotational definitions for dynamic semantics. Gem-Mex supports a semi-visual formalism optimized for the definition of programming language semantics and tool generation. It can generate a typechecker, an interpreter, and a debugger.
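The flavor of rewrite-rule-based semantics can be conveyed in a few lines. The following Python stand-in (not ASF+SDF notation; the rules and term shapes are invented for illustration) represents terms as tuples and applies rules until a normal form is reached:

```python
def normalize(term, rules):
    """Innermost rewriting: normalize subterms first, then try each rule in order."""
    if not isinstance(term, tuple):
        return term                      # atoms are already normal forms
    term = (term[0],) + tuple(normalize(t, rules) for t in term[1:])
    for rule in rules:
        reduct = rule(term)              # a rule returns None when it does not apply
        if reduct is not None:
            return normalize(reduct, rules)
    return term

# Toy dynamic semantics of an expression language, one rule per construct.
rules = [
    lambda t: t[1] + t[2] if t[0] == "add" else None,
    lambda t: t[1] * t[2] if t[0] == "mul" else None,
    lambda t: (t[2] if t[1] else t[3]) if t[0] == "if" else None,
]
```

Defining an interpreter then amounts to listing such rules, which is what makes rewrite-based formalisms attractive for dynamic semantics.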
Table 4 is far from complete. Some other language development systems are SIS, PSP, GAG, SPS, MESS, Actress, Pregmatic, LDL, and Eli. Many of the tools listed in Table 3 are not generated by any current system. Ample opportunities for tool generation still exist in areas like optimization, dynamic program analysis, testing, and maintenance.
## 4 Toward Language Design Assistants
The logical next step beyond semantics-based tool generation would lead to a situation similar to that of computer algebra. Large parts of mathematics are being incorporated in computer algebra systems. Conversely, computer algebra itself has become a fruitful mathematical activity, yielding new results of general mathematical interest. In the case of semantics, we see opportunities for “Language Design Assistants” incorporating a substantial amount of both formal and informal semantic knowledge. The latter is found, for instance, in language design rationales and discussion documents produced by standardization bodies. Development of such assistants will not only push semantics even further toward practical application, but also give rise to new theoretical questions.
The Language Design Assistants we have in mind would support the human language designer by providing design choices and performing consistency checks during the design process. Operational knowledge about typical issues like typing rules, scope rules, and execution models should be incorporated in them. Major research questions arise here regarding the acquisition, representation, organization, and abstraction level of the required knowledge. For instance, should it be organized according to any of the currently known paradigms of object-oriented, functional, or logic programming? Or should a higher level of abstraction be found from which these and other, new, paradigms can be derived? How can constraints on the composition of certain features be expressed and checked? Another key question is how to construct a collection of “language feature components” that are sufficiently general to be reusable across a wide range of languages.
Similar considerations apply to tool development. By incorporating knowledge about tool generation in the Language Design Assistant we can envision a Tool Generation Assistant that helps in constructing tools in a more advanced way than the tool generation we had in mind in the previous sections.
To make this perspective somewhat more tangible, consider the relatively simple case of an if-then-else-like conditional construct that has to be modelled as a language feature component. Table 5 gives an impression of the wide range of issues that has to be addressed before such a generic conditional construct can be specialized into a concrete if-then-else-statement or conditional expression in a specific language. It is a research question to design an abstract framework in which these and similar questions can be expressed and answered.
Another major question is how to organize the specialization process from language feature component to concrete language construct. The main alternatives are parameterization and transformation. Using parameterization, specialization of the component in question amounts to instantiating its parameters. Since parameters have to be identified beforehand and instantiation is usually a rather simple mechanism, the adaptability/reusability of a parameterized component is limited. Using transformations, on the other hand, a language feature component is designed without explicit parameters. Specialization is achieved by applying appropriate transformation rules to it to obtain the desired specific case. Clearly, this approach is more flexible since any part of the language feature component can be modified by the transformation rules and can thus effectively act as a parameter. The relation between this approach of meta-level transformation and parameterized modules is largely unexplored.
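To make the parameterization route concrete, here is a deliberately simplified sketch (a hypothetical design, not the API of any system in Table 4): a conditional feature component whose variation points (strict versus lazy branches, mandatory versus optional else) are constructor parameters, instantiated into two different concrete constructs.

```python
class Conditional:
    """A generic conditional as a language feature component.

    Variation points that Table 5-style questions would fix are parameters here.
    """
    def __init__(self, lazy_branches=True, else_required=True):
        self.lazy_branches = lazy_branches
        self.else_required = else_required

    def evaluate(self, cond, then, orelse=None):
        if self.else_required and orelse is None:
            raise ValueError("this instantiation requires an else branch")
        chosen = then if cond else orelse
        if chosen is None:               # statement form without an else part
            return None
        # Branches are passed as thunks when lazy, as plain values when strict.
        return chosen() if self.lazy_branches else chosen

# Two specializations of the same component:
if_expression = Conditional(lazy_branches=True, else_required=True)
if_statement  = Conditional(lazy_branches=True, else_required=False)
```

The transformational route would instead start from one fixed definition and rewrite it; the trade-off discussed above is that the parameter list here had to be anticipated in advance.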
Although we are not aware of research on Language Design Assistants from the broad perspective sketched here, there is some work pointing in the same general direction:
* The Language Designer’s Workbench sketched as future work elsewhere has some of the same goals.
* Action semantics also emphasizes libraries of reusable language constructs.
* Plans (no longer pursued) for the Language Development Laboratory included a library of reusable language constructs, a knowledge base containing knowledge of languages and their compilers/interpreters, and a tool for language design.
* The “design and implementation by selection” of languages described elsewhere is a case study in high-level interactive composition of predefined language constructs.
### Acknowledgements
We would like to thank Jan Bergstra, Mark van den Brand, Arie van Deursen, Ralf Lämmel, and Jan Rutten for useful comments on earlier versions.
# MHD Turbulence in Star-Forming Clouds
## 1 Introduction
A fundamental unanswered question in star formation is why stars do not form faster than they are currently thought to. The free-fall times $`t_{\mathrm{ff}}`$ of molecular clouds with typical densities are
$$t_{\mathrm{ff}}=(3\pi /32G\rho )^{1/2}=(1.2\times 10^6\text{ yr})(n/10^3\text{ cm}^{-3})^{-1/2},$$
(1)
where $`n`$ is the number density of the cloud, and I take the mean molecular mass $`\mu =3.32\times 10^{-24}`$ g.
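Equation (1) is easy to check numerically (the CGS constants below are standard values, not taken from the text):

```python
from math import pi, sqrt

G  = 6.674e-8     # gravitational constant [cm^3 g^-1 s^-2]
mu = 3.32e-24     # mean molecular mass [g], as assumed above
yr = 3.156e7      # seconds per year

def t_ff(n):
    """Free-fall time of Eq. (1) for number density n [cm^-3], in years."""
    rho = n * mu
    return sqrt(3 * pi / (32 * G * rho)) / yr

print(t_ff(1e3))   # close to the quoted 1.2e6 yr
```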
In contrast to these Myr collapse times, molecular clouds are commonly thought to be tens of Myr old (e. g. Blitz & Shu 1980). This lifetime is derived from such considerations as their locations downstream from spiral arms, the ages of stars apparently associated with them, and their overall frequency in the galaxy.
Either molecular cloud lifetimes are much shorter than commonly supposed, or they are supported against gravitational collapse by some mechanism, presumably related to the supersonic random velocities observed in them. The first possibility has very recently received an intriguing examination by Ballesteros-Paredes, Hartmann, & Vázquez-Semadeni (1999); however the remainder of this paper will concern itself with the second possibility.
Support mechanisms that have been proposed over the years have included decaying turbulence from the formation of the clouds, magnetic fields and MHD waves, and continuously driven turbulence. Each of these raises questions: how can the decay of decaying turbulence be drawn out over such long periods; can magnetohydrostatically supported regions collapsing by ambipolar diffusion reproduce the observations of molecular cloud cores (Nakano 1998); and what could be the energy source for continuously driven turbulence?
The usual formulation of the problem with maintaining turbulence arising from initial conditions is that the turbulence is measured to be strongly supersonic, and shocks are well known to dissipate energy quickly. Arons & Max (1975) were among the first to suggest that magnetic fields might solve this problem if they were strong enough to reduce shocks to Alfvén waves, since ideal linear Alfvén waves lose energy only to resistive dissipation. As we will review in more detail below, Mac Low et al. (1998) showed that a more realistically computed mix of MHD waves is not nearly so cooperative, a result confirmed by Stone, Ostriker, & Gammie (1998).
On the other hand, if the observed motions come from driving, then the energy source needs to be identified, the amount of energy it is contributing must be determined, and how to couple the energy source to the motions of the dense gas must be explained. Any clues we can derive from comparison of turbulence simulations to observations are helpful (see Mac Low & Ossenkopf 1999 and Rosolowsky et al. 1999 for recent attempts to do that).
## 2 Computations
In the rest of this paper we will present computations of compressible turbulence with and without magnetic fields and self-gravity. For most of the models we use the astrophysical MHD code ZEUS-3D (Stone & Norman 1992a, b; Clarke 1994). This is a second-order code using Van Leer (1977) advection that evolves magnetic fields using a constrained transport technique (Evans & Hawley 1988) as modified by Hawley & Stone (1995), and that resolves shocks using a von Neumann artificial viscosity. We also use a smoothed particle hydrodynamics (SPH) code (e. g. Benz 1990; Monaghan 1992) with a different formulation of the von Neumann viscosity as a comparison to our hydrodynamical models.
All of our models are set up in cubes with periodic boundary conditions and initially uniform density and, in MHD cases, magnetic field. We use an isothermal equation of state for the gas, which is a good approximation for molecular gas between number densities of $`10^2`$ cm<sup>-3</sup> and $`10^9`$ cm<sup>-3</sup> typical of molecular clouds. A pattern of Gaussian velocity perturbations is then imposed on the gas, with the spectrum defined in wavenumber space as desired. Decaying models are then left to evolve (Mac Low et al. 1998), while driven models have the same fixed pattern added in every time step with a varying amplitude computed to ensure a constant rate of kinetic energy input over time (Mac Low 1999). Models with self-gravity use an FFT solver to integrate the Poisson equation.
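The amplitude adjustment mentioned for the driven models amounts to solving a quadratic equation every time step: if the fixed pattern δ is added as v → v + Aδ, the injected kinetic energy is quadratic in A. A minimal sketch of that bookkeeping (uniform density and unit cell masses; this is our reading of the procedure, not the actual ZEUS-3D driver):

```python
import random
from math import sqrt

def driving_amplitude(v, delta, e_in):
    """Amplitude A such that v -> v + A*delta injects kinetic energy e_in.

    Solves 0.5*sum((v + A*d)**2) - 0.5*sum(v**2) = e_in for the positive root.
    """
    a = 0.5 * sum(d * d for d in delta)
    b = sum(vi * di for vi, di in zip(v, delta))
    # a*A**2 + b*A - e_in = 0; the discriminant is positive for e_in > 0
    return (-b + sqrt(b * b + 4.0 * a * e_in)) / (2.0 * a)

random.seed(1)
v     = [random.gauss(0.0, 1.0) for _ in range(1000)]
delta = [random.gauss(0.0, 1.0) for _ in range(1000)]
A     = driving_amplitude(v, delta, e_in=5.0)

kinetic  = lambda u: 0.5 * sum(x * x for x in u)
injected = kinetic([vi + A * di for vi, di in zip(v, delta)]) - kinetic(v)
print(injected)   # 5.0 up to rounding
```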
## 3 Decaying Turbulence
We used models of decaying turbulence to address the question of whether magnetic fields could significantly decrease the dissipation rate of supersonic turbulence, as described in Mac Low et al. (1998). For these models, we set up an initial velocity perturbation spectrum that was flat in $`k`$-space and extended from $`k=2`$ to $`k=8`$. Although $`k^{-2}`$ spectra are often used to drive supersonic turbulence, a spectrum of Gaussian perturbations with this $`k`$-dependence is not a good match to a box full of shocks with a $`k^{-2}`$ spectrum—in the latter case the dependence is just the Fourier transform of a step function.
In Figure 1 we show examples of cuts through 3D models of decaying turbulence computed with ZEUS at resolutions of $`128^3`$ and $`256^3`$ (Mac Low 1999). Although these are cuts rather than column density images, the tendency for the shock waves to form filamentary structures reminiscent of molecular clouds can be clearly seen.
We measured the total kinetic energy on the grid over time for these models, as shown in Figure 2(a). For comparison, we also performed a resolution study using SPH, shown in Figure 2(b). We found that the kinetic energy decays as $`t^{-\eta }`$, with $`0.85<\eta <1.1`$ for models in the supersonic regime (Mach numbers in the range from roughly 1 to 5). This decay rate is actually somewhat slower than the decay rate for incompressible, subsonic turbulence, which, according to the theory of Kolmogorov (1941), decays with $`\eta \simeq 5/3`$.
We then added initially uniform magnetic fields to see if they could damp the decay rate. First we chose a field strong enough for the thermal sound speed and the Alfvén speed to be equal. As shown in Figure 2(c), the decay rate changed only very slightly, to $`\eta =0.91`$. Raising the field strength so that the initial Alfvén velocity is unity (Fig. 2(d)), we find only slight further change, to $`\eta =0.87`$. (These results have been fundamentally confirmed by Stone et al. 1998.) While this small decrease in the decay rate is indeed interesting to turbulence theorists, it by no means fulfills the expectations that magnetic fields would markedly reduce the energy dissipation from supersonic random motions.
## 4 Driven Turbulence
In order to try to quantify the decay rate of turbulence, we moved to models of driven turbulence, as described by Mac Low (1999). Because the wavelength of driving strongly influences the behavior of the turbulence, we used driving functions incorporating only a narrow range of wavenumbers from $`k_D-1`$ to $`k_D`$, where we only quote the dimensionless driving wavenumber $`k_D`$. In Figure 3 we show cuts through two driven models with different wavelengths.
To measure how strongly equilibrium turbulence dissipated energy, we drove the turbulence with a known, fixed, kinetic energy input rate, $`\dot{E}_K`$, and measured the resulting rms velocity $`v`$. In the hydrodynamic case, we found that these quantities excellently followed the relation
$$\dot{E}_K\approx (0.21/\pi )m\kappa v^3$$
(2)
where $`m`$ is the mass of the region, and $`\kappa =2\pi /\lambda _D`$ is the dimensionalized wavenumber (for our case, with box-size two, $`\kappa =\pi k_D`$), using $`\lambda _D`$ as the dimensional driving wavelength. Although there is some divergence in the MHD case, this relation is still good to within a factor of two even there.
From this relation, we can compute the decay rate in comparison to the free-fall time $`t_{\mathrm{ff}}`$ of the region (Mac Low 1999). If we make the assumption that $`E_K=\frac{1}{2}\int \rho (\vec{x})v^2(\vec{x})d\vec{x}\approx (1/2)mv^2`$ (noting that $`v`$ is the rms velocity in the region), we can compute a formal decay time $`t_d=E_K/\dot{E}_K`$ for the turbulence, by substituting in from equations 2 and 1 to find
$$\frac{t_d}{t_{\mathrm{ff}}}=1.2\pi \frac{\lambda _D}{\lambda _J}\frac{1}{M},$$
(3)
where $`M=v/c_s`$ is the rms Mach number. Bonazzola et al. (1987, 1992) have suggested that $`\lambda _D<\lambda _J`$ is required for turbulent support to be effective in preventing gravitational collapse; observations show that $`M\gg 1`$ in typical molecular clouds (e.g. Blitz 1993), so turbulence appears likely to decay in rather less than a free-fall time, providing no help in explaining the apparent long lives of molecular clouds.
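Equation (3) is pure algebra on Eqs. (1) and (2) and can be verified numerically. The Jeans length is taken below as λ_J = c_s (π/Gρ)^(1/2), which is an assumption of this check; with it, the directly computed ratio agrees with the 1.2π prefactor to within the rounding of the 0.21 coefficient:

```python
from math import pi, sqrt

G = 6.674e-8                                   # CGS
rho, c_s = 3.32e-21, 2.0e4                     # sample molecular-cloud values
lam_D, M = 3.0e17, 10.0                        # driving wavelength [cm], rms Mach number

v     = M * c_s
kappa = 2.0 * pi / lam_D
t_ff  = sqrt(3.0 * pi / (32.0 * G * rho))
lam_J = c_s * sqrt(pi / (G * rho))             # assumed Jeans length definition

t_d     = (0.5 * v**2) / ((0.21 / pi) * kappa * v**3)   # E_K/Edot_K per unit mass
formula = 1.2 * pi * (lam_D / lam_J) / M                # Eq. (3)

print(t_d / t_ff, formula)   # agree to a few percent
```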
The observed random supersonic motions are likely therefore to be driven. Four energy sources suggest themselves as possible drivers. First, differential rotation of the galactic disk (Fleck 1981) is attractive as it should apply even to clouds without active star formation. Furthermore, support of clouds against collapse by shear could explain the observation that smaller dwarf galaxies, with lower shear, have larger star-formation regions (Hunter 1998). However, the question arises whether this large-scale driver can actually couple efficiently down to molecular cloud scales. Balbus-Hawley instabilities might play a role here (Balbus & Hawley 1998).
Second, turbulence driven by gravitational collapse has the attractive feature of being universal: there is no need for any additional outside energy source, as the supporting turbulence is driven by the collapse process itself. Unfortunately, it has been shown by Klessen, Burkert, & Bate (1998) not to work for gas dynamics in a periodic domain. The turbulence dissipates on the same time scale as collapse occurs, without markedly impeding the collapse. The computations reported below suggest that magnetic fields do not markedly change this conclusion.
Third, ionizing radiation (McKee 1989, Bertoldi & McKee 1997, Vázquez-Semadeni, Passot & Pouquet 1995), winds, and supernovae from massive stars provide another potential source of energy to support molecular clouds. Here the problem may be that they are too destructive, tending rather to destroy the molecular cloud they act on rather than merely stirring it up. If the clouds are coupled to a larger-scale interstellar turbulence driven by massive stars, however, perhaps this problem can be avoided. Ballesteros-Paredes et al. (1999) even suggest that they are both formed and destroyed on short time-scales by this turbulence, a possibility well worth further study.
A final suspect for the driving mechanism is jets and outflows from the common low-mass protostars that should naturally form in any collapsing molecular cloud (McKee 1989, Franco & Cox 1983, Norman & Silk 1980), allowing the attractive possibility of star-formation being a self-limiting process. It has recently become clear that these jets can reach lengths of several parsecs (Bally, Devine, & Alten 1996), implying total energies of order the stellar accretion energy, as suggested by Shu et al. (1988) on theoretical grounds. However, it remains unclear whether space-filling turbulence can be driven by sticking needles into the molecular clouds.
## 5 Self Gravity
We have begun to investigate directly the support of supersonically turbulent regions against self-gravity by including self-gravity in our models of driven turbulence with and without magnetic fields. Analytic and 2D numerical work by Bonazzola et al. (1987, 1992) and Léorat, Passot, & Pouquet (1990) suggested that a turbulent Jeans wavelength could be defined as $`\lambda _{J,t}\simeq \sqrt{v^2/G\rho }`$, where $`v`$ is again the rms velocity in the region. They furthermore specified that the rms velocity differences must be measured at wavenumbers contained in the region in question.
It was already noted by Gammie & Ostriker (1996) in their 1D MHD computations that driving could promote collapse as well as preventing it. We find that this effect is very significant in our 3D models where shocks can intersect at multiple angles. Shocks in isothermal gas compress the gas by a factor of the square of the Mach number, so local densities in a region being supported by supersonic turbulence can exceed the average density by orders of magnitude. The free-fall time and Jeans length drop accordingly in these regions, leaving them no longer supported by the global motions. In Figure 4 we show examples of such collapsed regions in turbulence driven with low and high wavenumbers. The only way to prevent the collapse would be to drive the turbulence at such high power and wavenumber that even regions compressed orders of magnitude above the average were still supported, which appears astrophysically unlikely (driving wavelengths would have to be under $`10^3`$ AU if we take typical molecular cloud parameters for our models). Adding magnetic fields with strengths insufficient to allow magnetostatic support so far appears to make no qualitative difference to these results.
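The mechanism in this paragraph is simple scaling arithmetic: an isothermal shock of Mach number M raises the density by M², and since both the Jeans length and the free-fall time scale as ρ^(-1/2), each drops by a factor of M in the post-shock gas:

```python
def post_shock_factors(mach):
    """Isothermal jump condition: density x M^2; lambda_J and t_ff scale as rho**-0.5."""
    compression = mach ** 2
    shrink = compression ** -0.5     # = 1/M
    return {"density": compression, "jeans_length": shrink, "t_ff": shrink}

print(post_shock_factors(10.0))   # density x100; Jeans length and t_ff both /10
```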
From our preliminary models it appears that global collapse with high star-formation efficiency can be at least strongly delayed, if not prevented, by driven turbulence, but local collapse with low star-formation efficiency will be forced. This leads us to speculate that regions of isolated star formation may correspond to regions supported by supersonic turbulence, while regions of clustered star formation may correspond to regions where the turbulence has been overwhelmed, either by the decay of the local turbulent motions or by the accretion of additional mass due to large-scale flows (e.g. in spiral arms or starburst regions).
###### Acknowledgements.
Computations discussed here were performed at the Rechenzentrum Garching of the Max-Planck-Gesellschaft, at the National Center for Supercomputing Applications, and at the Hayden Planetarium of the American Museum of Natural History. I have used the NASA Astrophysical Data System Abstract Service in the preparation of this paper.
# Influence of surface irregularities on barriers for vortex entry in type-II superconductors
## Abstract
A theoretical description of the influence of surface irregularities (such as wedge-like cracks) on the Bean-Livingston energy barrier is presented. A careful quantitative estimate of the field of the first vortex entry $`H^{}`$ into a homogeneous bulk superconductor with crack-like defects is obtained. This estimate is based on an exact analytical solution for the Meissner current distribution in a bulk superconductor with a single two-dimensional infinitely thin crack. This thin crack is characterized by the lowest $`H^{}`$ value, and, thus, appears to be the best gate for vortex entry.
Experimental and theoretical investigations of the influence of the properties of a sample surface on the Bean-Livingston (BL) barrier have recently attracted a great deal of attention (see, e.g., ). Surface imperfections can result in a decrease of the critical field $`H^{*}`$ of the first vortex entry into a superconductor: $`H^{*}\approx \gamma H_p^{*}`$, where $`H_p^{*}`$ is the critical field corresponding to the BL barrier suppression in ideal type-II superconductors, $`\gamma <1`$.
Provided we neglect the thermal activation effects (see, e.g., ), the vortex penetration into a superconductor is possible, if two conditions are fulfilled:
(i) the current density should be equal to the depairing (Ginzburg-Landau) current density $`j_{GL}`$ to form the normal core;
(ii) the energy of the vortex should decrease with an increase of the distance from the surface, otherwise the vortex could not penetrate into the bulk.
It is obvious that the properties of the near-surface region strongly affect these conditions. In earlier work, the BL barrier suppression has been analyzed for the case of small ($`L<\lambda `$) or smooth asperities ($`L`$ is the depth of the defects, $`\lambda `$ is the London penetration depth). In this case the suppression of the BL barrier was shown to be moderate ($`\gamma \simeq 0.9`$). We believe the barrier suppression to be strongest when the surface defects have the form of thin deep cracks with dimension $`L\gtrsim \lambda `$, oriented across the Meissner current lines and stretched along an external magnetic field $`H`$. The existence of such cracks leads to enhancement of the current density near the crack edge and, therefore, facilitates the creation of vortices and their further penetration into the bulk.
The purpose of this paper is to evaluate the factor $`\gamma `$ for a single wedge-like crack with $`L\gg \lambda `$ (the base wedge angle in the plane $`(x,y)`$ is $`\theta _0`$, the wedge edge is parallel to the direction of an external magnetic field $`H`$, and the $`z`$-axis is along the wedge edge). The equation for the magnetic field inside a superconductor with a single vortex line reads:
$$\mathrm{\Delta }B_z-\frac{1}{\lambda ^2}B_z=-\frac{\mathrm{\Phi }_0}{\lambda ^2}\delta (𝐫-𝐚),\qquad B_z|_\mathrm{\Gamma }=H.$$
(1)
We will use a cylindrical coordinate system $`(r,\theta ,z)`$ with the origin at the wedge edge; $`\mathrm{\Gamma }`$ is the contour chosen along both sides of the wedge ($`\theta =0`$ and $`\theta =2\pi -\theta _0`$), and the point ($`r=a`$, $`\theta =\theta _v`$) is the vortex position ($`a\ll L`$). For small distances $`r\ll \lambda `$, $`a\ll \lambda `$ we can neglect the screening effects and omit the second term on the left-hand side of Eq. (1). Using the conformal mapping method, we get the magnetic field distribution near the wedge edge:
$$B_z(r,\theta )=\frac{\mathrm{\Phi }_0}{4\pi \lambda ^2}\mathrm{ln}\frac{R^\mu +R^{-\mu }-2\mathrm{cos}\mu (\theta +\theta _v)}{R^\mu +R^{-\mu }-2\mathrm{cos}\mu (\theta -\theta _v)}+H\left(1-\beta \left(\frac{r}{\lambda }\right)^\mu \mathrm{sin}\mu \theta \right),$$
(2)
where $`\mu =\pi /(2\pi -\theta _0)`$, $`R=r/a`$, and $`\beta =\beta (\theta _0)`$ is a dimensionless factor. The Gibbs energy of the vortex per unit length is given by the expression
$$G(a,\theta _v)=\frac{\mathrm{\Phi }_0}{4\pi }\left[-\beta H\left(\frac{a}{\lambda }\right)^\mu \mathrm{sin}(\mu \theta _v)+\frac{\mathrm{\Phi }_0}{4\pi \lambda ^2}\mathrm{ln}\left(\frac{a}{\mu \lambda }\right)+\frac{\mathrm{\Phi }_0}{4\pi \lambda ^2}\mathrm{ln}(2\mathrm{sin}(\mu \theta _v))+H_{c1}\right].$$
(3)
The most energetically favorable direction of vortex penetration corresponds to $`\theta _v^{*}=(2\pi -\theta _0)/2`$. For this direction the energy $`G(a,\theta _v^{*})`$ has a maximum at $`a_{max}=\lambda (4\pi \mu \beta \lambda ^2H/\mathrm{\Phi }_0)^{-1/\mu }.`$ The critical magnetic field $`H^{*}`$ corresponding to vanishing of the energy barrier is
$$H_\mu ^{*}=\frac{1}{\mu \beta }\left(\frac{\xi }{\lambda }\right)^{1-\mu }H_p^{*}.$$
(4)
At fields $`H\simeq H^{*}`$ the current density near the wedge edge is of the order of the depairing current density $`j_{GL}`$, which is necessary to create a normal core. Note that maximum suppression of the BL energy barrier occurs for a thin crack ($`\theta _0\ll 1`$), when $`\mu =1/2`$ and the first vortex entry takes place in an external magnetic field
$$H_0^{*}=\frac{2}{\beta }\frac{1}{\sqrt{\kappa }}H_p^{*},$$
(5)
where $`\kappa =\lambda /\xi `$.
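The position of the barrier maximum follows from Eq. (3) by differentiating the a-dependent part of the bracket, giving a_max = λ(4πμβλ²H/Φ₀)^(−1/μ); a brute-force scan confirms this (dimensionless units Φ₀ = λ = 1; the value of H is arbitrary within the validity range of the check):

```python
from math import pi, log, sqrt

mu   = 0.5               # thin crack, theta_0 -> 0
beta = 2.0 / sqrt(pi)
H    = 0.1               # external field in units of Phi_0/lambda^2

def bracket(a):
    """a-dependent part of the Gibbs-energy bracket in Eq. (3) at theta_v = theta_v*."""
    return -beta * H * a**mu + log(a / mu) / (4.0 * pi)

a_grid     = [1e-3 * k for k in range(1, 100001)]          # scan 0 < a <= 100
a_numeric  = max(a_grid, key=bracket)
a_analytic = (4.0 * pi * mu * beta * H) ** (-1.0 / mu)     # barrier position

print(a_numeric, a_analytic)   # both near 2 for these parameters
```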
Note that the factor $`\beta `$ is of fundamental interest for explaining the large reduction of the BL barrier observed experimentally, and it can be obtained correctly only on the basis of a detailed solution of the screening problem. The authors of earlier estimates have concluded that $`\beta `$ is of the order of unity, and so their estimate of the entrance field $`H^{*}\sim (\xi /\lambda )^{1-\mu }H_p^{*}`$ is valid only to within an arbitrary dimensionless factor. We will obtain the exact value of $`\beta `$ for the most interesting case $`\theta _0\to 0`$, which is necessary to find the ultimate suppression of the Bean-Livingston barrier.
To obtain the solution it is useful to note that the London equation (1) can be rewritten in the form of the integral equation with auxiliary vortex sources at the border of a superconductor (see also Ref.):
$$\frac{\mathrm{\Phi }_0}{2\pi \lambda ^2}\int _0^{\infty }K_0(|r-\rho |/\lambda )n(r)dr=H\qquad (\rho >0),$$
(6)
where $`K_0(r)`$ is the zero-order McDonald function (the modified Bessel function of the second kind). Using the Wiener-Hopf method, we get the following result:
$$n(r)=\frac{2\lambda H}{\mathrm{\Phi }_0}\left(\frac{e^{-r/\lambda }}{\sqrt{\pi r/\lambda }}+\frac{2}{\sqrt{\pi }}\int _0^{\sqrt{r/\lambda }}e^{-u^2}du\right).$$
(7)
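Expression (7) interpolates between a 1/√r current crowding at the crack tip and the uniform flux spacing 2λH/Φ₀ far along the crack, which is easy to see numerically (units with 2λH/Φ₀ = 1 and λ = 1; `math.erf` is the error function, i.e. the integral appearing in Eq. (7)):

```python
from math import pi, sqrt, exp, erf

def n(r):
    """Vortex-sheet density of Eq. (7) in units where 2*lambda*H/Phi_0 = 1, lambda = 1."""
    return exp(-r) / sqrt(pi * r) + erf(sqrt(r))

# Near the tip the first term dominates: n(r) ~ (pi*r)**-0.5.
print(n(1e-4) * sqrt(pi * 1e-4))   # close to 1
# Far from the tip n -> 1, i.e. the uniform spacing of a field-filled slot.
print(n(20.0))                     # close to 1
```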
Using the expression
$$\frac{\lambda ^2}{r}\left(\frac{\partial B}{\partial \theta }\bigg|_{\theta =0}-\frac{\partial B}{\partial \theta }\bigg|_{\theta =2\pi -\theta _0}\right)=\mathrm{\Phi }_0n(r)$$
for small $`r\ll \lambda `$ one can obtain the factor $`\beta =2/\sqrt{\pi }`$.
Thus, we obtain the ultimate suppression of the BL barrier. Vortex penetration starts when an external magnetic field oriented along the crack achieves the following value:
$$H_0^{*}=\frac{\sqrt{\pi }}{\sqrt{\kappa }}H_p^{*}.$$
(8)
For HTSC $`\kappa \sim 10^2`$, hence $`H_0^{*}=H_p^{*}\sqrt{\pi /\kappa }\approx 0.18H_{cm}\approx 3.9H_{c1}`$, where $`H_{c1}`$ is the lower critical field.
To summarize, we have found that surface irregularities strongly suppress the BL barrier for vortex penetration into the bulk, and that the rate of suppression depends on the type of the defects. The maximum suppression occurs for thin wedge-like cracks oriented parallel to $`H`$, and in this case we have found the analytical expression for the critical field $`H^{*}`$ of the first vortex entry.
We are grateful to J.R.Clem, A.A.Andronov, I.L.Maksimov, D.Yu.Vodolazov, N.B.Kopnin for useful discussions and valuable comments, and to A.I.Buzdin for correspondence. This work was supported, in part, by the Russian Foundation for Basic Research (Grant No. 99-02-16188) and by the International Center for Advanced Studies (INCAS 99-2-03).
# Feedback Effect on Landau-Zener-Stückelberg Transitions in Magnetic Systems
## Abstract
We examine the effect of the dynamics of the internal magnetic field on the staircase magnetization curves observed in large-spin molecular magnets. We show that the size of the magnetization steps depends sensitively on the intermolecular interactions, even if these are very small compared to the intra-molecular couplings.
Magnetization dynamics of nanoscale magnets, i.e. systems like Mn<sub>12</sub>-acetate and Fe<sub>8</sub>, have been studied experimentally and theoretically lately. At sufficiently low temperatures quantum effects are observed, due to the discreteness of the energy levels involved. When the magnetization of a crystal of such molecules is measured during a sweep of the external magnetic field, a staircase hysteresis loop is obtained. The steep parts of the staircase correspond to the values of the external magnetic field where there is a crossing of adiabatic energy levels. Several aspects of this quantum effect have been studied previously. In a zero-temperature calculation, one finds that the magnetization can only change in steps, very similar to the steps observed in recent experiments on the high-spin molecules Mn<sub>12</sub>-acetate and Fe<sub>8</sub>. At every crossing, only two levels play a role and the transition probability can be calculated using the Landau-Zener-Stückelberg (LZS) mechanism. Two parameters determine the LZS transition: the energy splitting at the crossing and the sweep rate of the magnetic field.
The size of the energy splitting, which determines the LZS transition probability, is set by the off-diagonal terms in the Hamiltonian describing the system. A straightforward perturbative calculation shows that this splitting roughly scales like $`\mathrm{\Gamma }^{2|\mathrm{\Delta }m|}`$, where $`\mathrm{\Gamma }`$ determines the magnitude of the off-diagonal terms and $`\mathrm{\Delta }m`$ denotes the difference in magnetization of the two relevant levels. In the absence of a transverse applied field the energy-level splittings in the high-spin molecules mentioned above are so small that the probability for a single LZS transition is effectively zero, unless the applied longitudinal field is rather large (see for example ).
In the crystal the magnetic field felt by a particular molecule is the sum of the external field and the internal field due to the presence of other magnetic molecules. As the inter-molecular magnetic couplings in these materials are weak compared to the intra-molecular interaction between the spins, it seems reasonable to consider the former as a perturbation. The purpose of this paper is to demonstrate that this argument fails in the case of LZS transitions. The point is that the LZS transition probability depends on the rate of change of the effective magnetic field at the crossing, which can be changed significantly by the presence of the internal magnetic field. The magnetization steps are found to be strongly affected by the type of interactions among molecules. We call this mechanism Feedback Effect on Magnetization Steps (FEMS).
We first illustrate the effect for the case of the Mn<sub>12</sub>-acetate molecules. As a model Hamiltonian for this $`S=10`$ system we take
$$\mathcal{H}=-D_1S_z^2-D_4(S_x^4+S_y^4+S_z^4)-ct\mathrm{sin}\theta S_x-(ct\mathrm{cos}\theta +\lambda \langle S_z\rangle )S_z.$$
(2)
Compared to the model of , the extra feature in Hamiltonian (2) is the presence of a mean-field term, the strength of which we parametrize by $`\lambda `$. It is clear that in this mean-field approach any new effect appears as a result of global changes of the internal field generated by all the molecules and is not due to local fluctuations, which should be treated separately.
Quantitative results for the zero-temperature non-equilibrium dynamics of model (2) can only be obtained through a numerical integration of the Schrödinger equation. Using standard techniques we compute the magnetization steps for several values of $`\lambda `$. The results for $`D_1=0.64`$, $`D_4=0.004`$, tilt angle $`\theta =1^{\circ }`$ and sweep rate $`c=0.001`$ (see ; we use dimensionless units throughout this paper) are shown in Fig. 1.
It is clear that the dynamics of the internal field can change the magnetization steps considerably. FEMS is observed for all $`\lambda \ne 0`$. Note that the values of $`|\lambda |`$ we used are not unrealistic ($`|\lambda |\sim D_4\ll D_1`$), but rather small if we relate $`\lambda `$ to the dipole-dipole interaction, which would yield a $`\lambda `$ that is 10–100 times larger.
At very low temperatures experiments show steps at lower values of $`H`$ than the ones at which we observe steps in our calculation. In fact, for the set of model parameters given in (2) a much slower sweep rate $`c`$, much too slow for numerical calculations, is required if we want to study the effect of the internal field at all level crossings.
Therefore it is expedient to turn to a toy model inspired by the one used to describe Fe<sub>8</sub> . We take an $`S=2`$ model with the following Hamiltonian:
$`H`$ $`=`$ $`-DS_z^2+E(S_x^2-S_y^2)`$ (4)
$`+\mathrm{\Gamma }S_x-(ct+\lambda \langle S_z\rangle )S_z,`$
where we take $`D=1`$, $`E=0.08`$ and $`\mathrm{\Gamma }=0.08`$. These parameters are chosen such that we get two steps with a probability of about one half.
In Fig. 2 we show the magnetization during a sweep of the magnetic field, with a sweep rate $`c=0.01`$, for several values of $`\lambda `$. We see that the FEMS effect is large.
The transition probabilities are given in Table I. We clearly see a large change in the transition probabilities due to the presence of the internal field.
A deeper understanding of the origin of the FEMS effect can be obtained by considering the system of $`N`$ $`S=1/2`$ molecules described by the Hamiltonian
$$\mathcal{H}=\underset{i=1}{\overset{N}{\sum }}\left[-\mathrm{\Gamma }\sigma _i^x-J\underset{j>i}{\overset{N}{\sum }}\sigma _i^z\sigma _j^z+ct\sigma _i^z\right],$$
(5)
where $`c`$ is the sweep rate, $`\mathrm{\Gamma }`$ is the transverse field and $`J`$ determines the interaction strength between the molecules ($`|J|\ll \mathrm{\Gamma }`$). For simplicity we consider couplings between $`z`$-components only and assume the coupling between the molecules to be the same. Since $`|J|`$ is small, we assume that we can make a mean-field-like approximation. The occurrence of FEMS does not depend on these simplifications (see below). This yields a Hamiltonian of a single molecule in a background field:
$$\mathcal{H}=-\mathrm{\Gamma }\sigma _x-(ct+\lambda \langle \sigma _z\rangle )\sigma _z,$$
(6)
where $`\lambda \propto J`$ is an effective interaction. The system is prepared in the ground state, corresponding to a large negative time $`t`$, and the magnetic field is swept with constant velocity, until a large positive time is reached. Then, in the LZS case with $`\lambda =0`$, the transition probability is given by the well-known LZS formula $`p=1-\mathrm{exp}(-\pi \mathrm{\Gamma }^2/c)`$. For $`\lambda \ne 0`$ we write the Schrödinger equation corresponding to (6) in component form:
$`iu^{\prime }`$ $`=`$ $`-\left(ct+\lambda (2|u|^2-1)\right)u-\mathrm{\Gamma }d,`$ (7)
$`id^{\prime }`$ $`=`$ $`\left(ct+\lambda (2|u|^2-1)\right)d-\mathrm{\Gamma }u,`$ (8)
where we also have the normalization condition $`|u|^2+|d|^2=1`$. From numerical simulations (see below) we find that the tunneling is suppressed (enhanced) by the presence of a feedback term with positive (negative) $`\lambda `$. This can be understood in terms of a changed effective sweep rate at the point of the transition. Because the effective magnetic field at the position of the molecule is given by $`ct+\lambda (2|u|^2-1)`$, the effective sweep rate would be $`c+\lambda \,d\langle \sigma _z\rangle /dt`$.
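This suppression/enhancement can be checked directly by integrating Eqs. (7) and (8). Below is a minimal numerical sketch (the parameter values $`\mathrm{\Gamma }=0.2`$, $`c=0.2`$, $`\lambda =\pm 0.5`$ are illustrative only, not those used in the figures): the system starts in the $`t\to -\infty `$ ground state, is propagated with a fixed-step fourth-order Runge-Kutta scheme, and the final population $`|u|^2`$ is the transition probability.

```python
import math

def lzs_feedback(lam, Gamma=0.2, c=0.2, T=60.0, dt=0.005):
    """Integrate Eqs. (7)-(8) with a fixed-step RK4 scheme.

    Starts with all population in d (the ground state at t = -T) and
    returns the transition probability |u|^2 at t = +T.
    """
    u, d = 0.0 + 0.0j, 1.0 + 0.0j

    def rhs(t, u, d):
        field = c * t + lam * (2.0 * abs(u) ** 2 - 1.0)  # effective field
        du = -1j * (-field * u - Gamma * d)              # Eq. (7)
        dd = -1j * (field * d - Gamma * u)               # Eq. (8)
        return du, dd

    t = -T
    for _ in range(int(round(2.0 * T / dt))):
        k1u, k1d = rhs(t, u, d)
        k2u, k2d = rhs(t + dt / 2, u + dt / 2 * k1u, d + dt / 2 * k1d)
        k3u, k3d = rhs(t + dt / 2, u + dt / 2 * k2u, d + dt / 2 * k2d)
        k4u, k4d = rhs(t + dt, u + dt * k3u, d + dt * k3d)
        u += dt / 6 * (k1u + 2 * k2u + 2 * k3u + k4u)
        d += dt / 6 * (k1d + 2 * k2d + 2 * k3d + k4d)
        t += dt
    return abs(u) ** 2

p0 = lzs_feedback(0.0)        # close to the LZS value 1 - exp(-pi Gamma^2 / c)
p_plus = lzs_feedback(0.5)    # suppressed
p_minus = lzs_feedback(-0.5)  # enhanced
```

For $`\lambda =0`$ the result agrees with the LZS formula to within the finite-time accuracy of the integration, and one finds $`p_{+}<p_0<p_{}`$, i.e. positive (negative) $`\lambda `$ suppresses (enhances) the transition, in line with the renormalized-sweep-rate picture above.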
If $`\lambda `$ is small but non-zero, the mean field term only contributes at the point of the crossing. So we look at a Taylor expansion around the point of the transition $`t_c`$ (to be determined later),
$$u(t)=u_0+u_1(t-t_c)+𝒪((t-t_c)^2),$$
(9)
and a similar expression for $`d(t)`$. We insert this expansion in (7) and (8) and obtain
$`i\stackrel{~}{u}^{\prime }(\stackrel{~}{t})`$ $`=`$ $`-\stackrel{~}{c}\stackrel{~}{t}\stackrel{~}{u}(\stackrel{~}{t})-\mathrm{\Gamma }\stackrel{~}{d}(\stackrel{~}{t}),`$ (10)
$`i\stackrel{~}{d}^{\prime }(\stackrel{~}{t})`$ $`=`$ $`\stackrel{~}{c}\stackrel{~}{t}\stackrel{~}{d}(\stackrel{~}{t})-\mathrm{\Gamma }\stackrel{~}{u}(\stackrel{~}{t}),`$ (11)
with $`\stackrel{~}{t}=t-t_c`$, where $`\stackrel{~}{c}=c+4\lambda \mathrm{Re}(u_0u_1^{\ast })`$ is the renormalized sweep rate and $`\mathrm{Re}(u)`$ denotes the real part of $`u`$. We define $`t_c`$ as the point at which $`ct+\lambda \langle \sigma _z\rangle `$ changes sign, so
$$t_c=\frac{\lambda }{c}\left(1-2|u_0|^2\right).$$
(12)
This enables us to write $`\stackrel{~}{u}_0=u_0`$ and $`\stackrel{~}{u}_1=u_1`$. To determine these constants we use Zener’s solution and the properties of Weber functions. We find
$$|u_0|^2=\frac{\pi }{4}\delta e^{-\pi \delta /4}\left|\frac{1}{\mathrm{\Gamma }(1+\frac{i\delta }{4})}\right|^2=\frac{1}{2}\left(1-(1-p)^2\right),$$
(13)
where $`p`$ is the new probability for crossing, i.e. $`p=1-\mathrm{exp}(-\pi \mathrm{\Gamma }^2/\stackrel{~}{c})`$ and $`\delta =\mathrm{\Gamma }^2/\stackrel{~}{c}`$, and $`t_c=\lambda (1-p)^2/c`$. The shift of the field at which the transition occurs can be written as $`\mathrm{\Delta }H=\lambda (1-p)^2`$. To determine $`\stackrel{~}{c}`$ we calculate
$$\mathrm{Re}(u_0u_1^{\ast })=\sqrt{\stackrel{~}{c}}\frac{\pi \delta }{2}e^{-\delta \pi /4}\mathrm{Re}\frac{e^{i\pi /4}}{\mathrm{\Gamma }\left(1+\frac{i\delta }{4}\right)\mathrm{\Gamma }\left(\frac{1}{2}-\frac{i\delta }{4}\right)}.$$
(14)
We find that (14) can be approximated by
$$\mathrm{Re}(u_0u_1^{\ast })\approx \sqrt{\frac{\pi \stackrel{~}{c}}{8}}\,\delta e^{-\delta \pi /4},$$
(15)
with an error of maximally $`10\%`$ (see Fig. 3). Within this approximation, $`\stackrel{~}{c}`$ is given by the implicit equation
$$\stackrel{~}{c}\approx c+\lambda \mathrm{\Gamma }^2\sqrt{2\pi /\stackrel{~}{c}}\,e^{-\mathrm{\Gamma }^2\pi /4\stackrel{~}{c}}.$$
(16)
A simple relation can be obtained by replacing $`\stackrel{~}{c}`$ by $`c`$ on the right hand side. Then
$$\stackrel{~}{c}\approx c+\lambda \mathrm{\Gamma }^2\sqrt{2\pi /c}\,e^{-\pi \mathrm{\Gamma }^2/4c},$$
(17)
and
$$p\approx 1-\mathrm{exp}\left(-\frac{\pi \mathrm{\Gamma }^2}{c}\frac{1}{1+\lambda \mathrm{\Gamma }^2\sqrt{2\pi /c^3}e^{-\mathrm{\Gamma }^2\pi /4c}}\right).$$
(18)
The resulting probabilities are shown in Fig. 3. The resulting probabilities based on a numerical solution of (15) or (16) for $`\stackrel{~}{c}`$ show similar behavior. Also shown are the results obtained from the exact numerical solution of the Schrödinger equation (6). As a test of the validity of the mean-field approximation we also show the result for four interacting $`S=1/2`$ spins, where we assumed $`\lambda =(N-1)J`$. Clearly the exact results confirm the validity of the mean-field approximation and of the simple analytic expression (18).
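For orientation, the closed expression (18) is easy to evaluate directly; the values of $`\mathrm{\Gamma }`$ and $`c`$ in the sketch below are illustrative and are not those of Fig. 3:

```python
import math

def p_lzs(Gamma, c):
    """Standard LZS transition probability."""
    return 1.0 - math.exp(-math.pi * Gamma ** 2 / c)

def p_fems(Gamma, c, lam):
    """Approximate transition probability with feedback, Eq. (18)."""
    denom = 1.0 + lam * Gamma ** 2 * math.sqrt(2.0 * math.pi / c ** 3) \
        * math.exp(-Gamma ** 2 * math.pi / (4.0 * c))
    return 1.0 - math.exp(-(math.pi * Gamma ** 2 / c) / denom)

# lam = 0 recovers the LZS formula; positive (negative) lam gives a larger
# (smaller) renormalized sweep rate and hence a smaller (larger) probability.
```

The vanishing of the denominator at a sufficiently negative $`\lambda `$ is the singularity responsible for the breakdown of the single-crossing picture discussed next.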
For values of $`\lambda `$ below approximately $`-2.0`$ the description in terms of a renormalized sweep rate breaks down, which can also be seen from the singularity in the argument of the exponent in Eq. (18). This is because the picture of a simple, single crossing breaks down and the effective magnetic field at the position of the spin is no longer a strictly increasing function of time. We conclude that expression (18) captures the main features of FEMS at a single crossing.
The relevant parameters controlling the size of FEMS are $`\mathrm{\Gamma }/\sqrt{c}`$ and $`\lambda /\sqrt{c}`$. Only for $`S=1/2`$ is the energy-level splitting directly proportional to $`\mathrm{\Gamma }^2`$. For the high-spin molecules this is not the case (see above), in particular for the levels with large $`|m|`$. Although in these cases the effective energy-level splitting that enters the approximate two-level description can be small, a rather small value of $`\lambda `$ can nevertheless change the transition probability significantly.
We have shown that the magnetization steps in the hysteresis loops of clusters of high-spin molecules may depend sensitively on the change of the internal magnetic field at these steps. This implies that the dynamics of this internal field has to be incorporated in a description of the magnetization dynamics, even if its magnitude appears to be small compared to the other model parameters (for large spin). At finite temperatures the effect described in this paper will be enhanced further due to the thermalization to states with lower energy and larger magnetization .
We would like to thank W. Wernsdorfer for illuminating discussions. Support from the Dutch “Stichting Nationale Computer Faciliteiten (NCF)” and from the Grant-in-Aid for Research of the Japanese Ministry of Education, Science and Culture is gratefully acknowledged. |
# Asymmetric Unimodal Maps: Some Results from $`q`$-generalized Bit Cumulants
## Acknowledgments
I acknowledge the partial support of BAYG-C program of TUBITAK (Turkish agency) and CNPq (Brazilian agency) as well as the support from Ege University Research Fund under the project number 98FEN025.
Figure Captions
Figure 1 : The scaling between the standard second cumulant $`C_2^{(1)}`$ and the generalized second cumulant $`C_2^{(q)}`$ for a representative ($`z_1,z_2`$) pair.

Figure 2 : The behaviour of the slope as a function of the $`q`$ index for a representative ($`z_1,z_2`$) pair.

Figure 3 : The behaviour of the slope as a function of $`z_2-z_1`$ for a family of four different ($`z_1,z_2`$) pairs.
# Far-infrared spectroscopy of nanoscopic InAs rings
Recent progress in nanofabrication techniques has made it possible to construct self-assembled nanoscopic InGaAs quantum rings occupied with one or two electrons each, and subjected to perpendicular magnetic fields $`(B)`$ of up to 12 T. These are the first spectroscopic data available on rings in the scatter-free, few-electron limit in which quantum effects are best manifested. Previous spectroscopic studies dealt with microscopic rings in GaAs-Ga<sub>x</sub>Al<sub>1-x</sub>As heterostructures, fairly well reproduced by classical or hydrodynamical models.
In spite of the lack of experimental information, the study of nanoring structures has already attracted strong theoretical interest. We recall that due to the non-applicability of the generalized Kohn theorem, a very rich spectroscopic structure is expected to appear in few-electron nanorings, as anticipated by Halonen et al and also found in recent works.
In this paper we attempt a quantitative description of some spectroscopic and ground state (gs) properties of the experimentally studied nanorings using current-density (CDFT) and time-dependent local-spin density (TDLSDFT) functional theories. The reason for such an attempt is twofold. On the one hand, to help put on a firmer basis the interpretation of current experiments as a manifestation of actual properties of few-electron ring-shaped nanostructures. On the other hand, to disclose the capabilities and limitations of density functional methods in describing such small systems.
Following Ref. , we have modeled the ring confining potential by a parabola
$$V^+(r)=\frac{1}{2}m\omega _0^2(r-R_0)^2$$
(1)
with $`R_0`$= 14 nm and the frequency $`\omega _0`$ fixed to reproduce the high energy peak found in the far-infrared (FIR) transmission spectrum at $`B`$= 0. For $`N`$= 2 electrons this yields $`\omega _0\approx `$ 12.3 meV. The electron effective mass $`m^{\ast }`$= 0.063 (we write $`m=m^{\ast }m_e`$ with $`m_e`$ being the physical electron mass) and effective gyromagnetic factor $`g^{\ast }=0.43`$ have been taken from the experiments, and the value of the dielectric constant has been taken to be $`ϵ`$= 12.4.
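As a rough order-of-magnitude check (our own estimate, not a number quoted in the experiments), the oscillator length $`l_0=\sqrt{\hbar /(m\omega _0)}`$ implied by these parameters comes out close to 10 nm, i.e. comparable to $`R_0`$= 14 nm:

```python
# Oscillator length l0 = sqrt(hbar / (m * omega0)) for hbar*omega0 = 12.3 meV
# and m = 0.063 m_e (SI units throughout).
hbar = 1.0546e-34           # J s
m_e = 9.1094e-31            # kg
eV = 1.6022e-19             # J
omega0 = 12.3e-3 * eV / hbar        # angular frequency, s^-1
m = 0.063 * m_e
l0 = (hbar / (m * omega0)) ** 0.5   # ~1e-8 m, i.e. about 10 nm
```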
To obtain the structure of the gs we have resorted to CDFT as described in Refs. , and to obtain the charge density response we have used TDLSDFT as described in Ref. , which has been recently applied to the ring geometry . It is worthwhile to point out that we have not found any significant difference between using CDFT or LSDFT to describe the gs of the studied rings in the range of $`B`$ values of the present work. The suitability of CDFT to describe such a small electronic system has been shown by Ferconi and Vignale comparing the results obtained for a dot with $`N`$= 2 electrons with exact and Hartree-Fock calculations. We refer the reader to the mentioned references for a detailed exposure of the methods.
The results obtained for the $`N`$= 2 ring are presented in Figs. 1-4. We have used a small temperature $`T`$= 0.1 K to work them out. Figure 1 shows that the ring becomes polarized near $`B`$= 3 T. Besides, two other $`B`$-induced changes arise in the gs at $`B`$ 8 T and, more weakly, at $`B`$ 14 T. These changes can be traced back to sp level crossings . As displayed in Fig. 2, the changes in the $`B`$-slope appear when an occupied sp level is substituted by an empty one. At $`B`$ 8 T, this involves the substitution of the $`l`$= 0 sp level by the $`l`$= 2 one, and at $`B`$ 14 T the $`l`$= 1 sp level is substituted by the $`l`$= 3 one. Other level crossings do not involve such substitutions, but a different ordering of the occupied levels and do not seem to produce a substantial effect (see for instance the crossings at $`B`$ 6 and $``$ 11.5 T).
The experimentally observed change in the FIR spectrum around $`B`$= 8 T has been attributed to the crossing of $`l`$= 0 and 1 sp levels on the basis of a simple single-electron model (see also Fig. 5). A realistic description of the crossings requires incorporating in the theoretical description the spin degree of freedom, which single-electron or Hartree models lack, whereas CDFT or LSDFT do not. Yet, we confirm the finding that a magnetic induced transition takes place in the gs when approximately 1 flux quantum penetrates the effective interior area of the ring at $`B\approx 8`$ T, and predict another one at $`B\approx 14`$ T when this area is penetrated by $`\approx `$ 2 flux quanta.
The changes in $`B`$-slope of the total energy correlate well with those in the electronic chemical potential (the energy of the last occupied sp level in Fig. 2). The gross structure of the chemical potential and total energy displays the well known periodic, Aharonov-Bohm-type oscillation found in extreme sp models:
$$ϵ_l=\frac{\hbar ^2}{2mR_0^2}\left(l-\frac{e}{\hbar c}R_0^2B\right)^2.$$
(2)
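The Aharonov-Bohm periodicity of Eq. (2) is easy to make explicit in dimensionless form: with $`\mathrm{\Phi }`$ the flux through the ring in units of the flux quantum, the sp energies are proportional to $`(l-\mathrm{\Phi })^2`$, and the orbital momentum of the lowest level jumps by one unit each time $`\mathrm{\Phi }`$ crosses a half-integer. The sketch below is schematic (single particle, no spin or interactions):

```python
def eps(l, phi):
    """Dimensionless sp energy of Eq. (2): (l - phi)^2, phi in flux quanta."""
    return (l - phi) ** 2

def ground_l(phi, lmax=5):
    """Orbital angular momentum of the lowest sp level at flux phi."""
    return min(range(-lmax, lmax + 1), key=lambda l: eps(l, phi))

# the lowest level changes from l = 0 to l = 1 at phi = 0.5, where the
# two levels are degenerate; each additional flux quantum shifts l by one
```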
The experimental FIR resonances have been grouped into different modes using a different symbol for each group. Here, we have used the same symbol to represent the experimental resonances in Figs. 3 and 4. Figure 3 shows the dipole charge density strength function in an arbitrary logarithmic scale as a function of the excitation energy. The curves have been offset for clarity. Charge density excitations (CDE) can be identified as ‘ridges’ in the plot, allowing a sensible comparison with experiment not only of the peak energies themselves, but also of the way the experimental modes have been grouped. A plot of the more intense CDE’s is presented in Fig. 4 as a function of $`B`$, which is qualitatively similar to that of Halonen et al for an $`N`$= 2 quantum dot with a repulsive gaussian impurity in its center. For completeness, we also show in Fig. 3 the longitudinal spin density strength function for the cases in which the ring is not fully polarized. As both strengths coincide in the non-interacting case, the observed shifts are a measure of the importance of the electron-electron interaction, which affects the low energy peaks more than the high energy ones.
These figures show that the FIR dipole response is split into two large groups of peaks. The low energy peaks correspond to transitions involving only $`n`$= 0 sp levels and are $`\mathrm{\Delta }n`$= 0 transitions, whereas the high energy peaks involve $`n`$= 0 and 1 sp levels and are $`\mathrm{\Delta }n`$= 1 transitions. One can easily distinguish two sets of resonances, a low-lying $`\mathrm{\Delta }n`$= 0 one, and a high-lying $`\mathrm{\Delta }n`$= 1 one exhibiting the usual Zeeman splitting when a magnetic field is applied. The intensity of the high energy resonance is more than one order of magnitude smaller than that of the low energy one. Experimentally, both sets have similar oscillator strengths, whereas TDLSDFT yields a $`\sim `$ 90-10 % share at most. The calculations in Ref. also yield rather different absorption intensities for these resonances. We have checked that the computed spectrum fulfills the $`f`$-sum rule to within $`98\%`$, thus leaving no room for higher energy, $`\mathrm{\Delta }n>`$ 1 peaks to appear within TDLSDFT.
Besides these Zeeman-split resonances, several others show up in the spectrum. We have identified with a $`+`$ ($``$) sign those involving changes $`\mathrm{\Delta }L=+1`$ ($`-1`$) in the total orbital angular momentum with respect to that of the gs.
At $`B\approx 8`$ T, the positive $`B`$-dispersion branch of the $`\mathrm{\Delta }n`$= 0 resonance disappears, and a very low-lying, positive $`B`$-dispersion branch shows up. The origin of this transition is the magnetic-induced change in the gs, as can easily be inferred by looking at the $`n`$= 0 sp levels plotted in Fig. 2 and using the dipole selection rule to identify the ones involved in the non spin-flip excitation. A similar transition occurs at $`B\approx 14`$ T. They are the microscopic explanation of the appearance and disappearance of the ‘ridges’ shown in Fig. 3, also found for few electron nanorings. It is worthwhile to notice that the rich structure appearing in these nanorings (see below the $`N`$= 1 case) is a peculiarity that has its origin in the smallness of $`N`$. When $`N`$ is just a few tens, many electron-hole pairs contribute to the building of the resonances and no drastic changes appear in the FIR spectrum.
We have also looked at the $`N`$= 1 ring, for which some experimental information is also available. As in the $`N`$= 2 case, we have fixed $`\omega _0`$ so as to reproduce the high energy resonance at $`B`$= 0. This yields $`\omega _0\approx `$ 13.5 meV.
Figure 5 shows several $`n`$= 0 sp levels of the $`N`$= 1 ring. For this system, the total energy $`E(1)`$ is simply the energy of the lowest sp level, $`E(1)=\mu (1)`$. This has been used to calculate the addition energy $`\mu (2)=E(2)E(1)`$ shown in Fig. 8. The dipole charge density strength and the energy of the more intense $`N`$= 1 CDE’s are plotted in Figs. 6 and 7 as a function of $`B`$.
We thus see that the experimental data on FIR transmission spectroscopy reflect that the surface ring morphology of the experimental samples has indeed been translated into a true underlying electronic ring structure, and that a fair quantitative agreement can be found between TDLSDFT calculations and experiment. Our calculations also give support to the way the experimental resonances have been grouped, with the only doubt being the ’dot’ peak at $`B`$= 6 T and $`\omega \approx `$ 16.1 meV, which could also be a ’triangle’ peak of $`(-)`$ character because in this region both branches merge. To unambiguously arrange the peaks into branches and disentangle the $`B`$ dispersion of the modes, it would be essential to experimentally assign the polarization state to the main CDE’s, as has been done for antidot arrays. This is crucial in the analysis of the theoretical FIR response, which otherwise does not allow us to distinguish between peak fragmentation and different plasmon branches in some cases.
From the theory viewpoint, the main shortcomings are the ’cross’ peaks appearing at around 8 T for $`N`$= 2, and 10 T for $`N`$= 1, as well as the clear overestimation of the peak energy of the $`(-)`$ high energy $`\mathrm{\Delta }n`$= 1 mode, which also lacks some strength. These drawbacks are also qualitatively present in the calculations of Ref. . It is unlikely that using other possible confining potentials, like a jellium ring or that of Ref. , which yields analytical sp wave functions in the non-interacting case, would improve the agreement, in view of other possible sources of uncertainty, as for example the precise value of the ring radius $`R_0`$ (we have taken that of Ref. , but larger values would also be acceptable), and the values of $`m^{\ast }`$, $`g^{\ast }`$ and $`ϵ`$ corresponding to InAs. In particular, the effective mass value seems to depend on whether it is extracted from capacitance or from FIR spectroscopy. We have checked that if we take $`m^{\ast }`$= 0.08 we achieve a better description of the ’dot’ peaks in Figs. 3 and 6, at the price of spoiling the description of the ’diamond’ and ’triangle’ peaks. Yet, the patterns look qualitatively similar.
Finally, we show in Fig. 8 the addition energies of both rings as compared with the gate voltage shift of the lowest capacitance maximum. It can be seen that the agreement between theory and experiment is rather poor. At $`B\approx 12`$ T the calculations underestimate the voltage shift by around a factor of 3 for $`N`$= 2, and of 2 for $`N`$= 1. We recall that the agreement between capacitance spectroscopy experiments and exact diagonalization calculations of few electron quantum dots is also only qualitative . We cannot discard that using a different radius $`R_0`$ for each ring would improve the agreement, but we have not tried this possibility, to avoid too much parameter fitting in the calculation. The electron-electron interaction determines the energy difference between $`\mu (1)`$ and $`\mu (2)`$ at $`B`$= 0. A small bump in $`\mu (2)`$ at $`B\approx `$ 2-3 T is the signature of full polarization. A similar structure shows up in the experimental points, but between 3-4 T. Interestingly however, the change in the electronic structure at $`B\approx 8`$ T is visible in the calculated addition energy $`\mu (2)`$.
This work has been performed under grants PB95-1249 from CICYT, Spain, and 1998SGR00011 from Generalitat of Catalunya. A. E. acknowledges support from the DGES (Spain), and A. L. from the German Ministry of Science (BMBF). |
# Reassessment of the Collins Mechanism for Single-spin Asymmetries and the behavior of $`\mathrm{\Delta }d(x)`$ at large $`x`$

hep-ph/9911207, VUTH 99-24
## I Introduction
One of the major challenges to the QCD-parton model is the explanation of the large (20-40%) single-spin asymmetries found in many semi-inclusive hadron-hadron reactions, of which the most dramatic are the polarization of the lambda in $`pp\to \mathrm{\Lambda }^{\uparrow }X`$ and the asymmetry under reversal of the transverse spin of the proton in $`p^{\uparrow }p\to \pi X`$. The challenge arises from the fact that in the standard approach the basic “two parton $`\to `$ two parton” reactions involved in the perturbatively treated hard part of the scattering do not possess this kind of asymmetry.
Already some time ago, Efremov and Teryaev suggested a mechanism for these asymmetries utilizing “three parton $`\to `$ two parton” amplitudes for the hard scattering. This, however, necessitates the introduction of a new unknown soft two-parton density, namely the correlated probability of finding in the polarized proton a quark with momentum fraction $`x_1`$ and a gluon with fraction $`x_2`$. This quark-gluon correlator contains the dependence on the transverse spin of the proton. A fully consistent application of the approach has not yet been carried out, though a significant step in this direction has been taken recently by Sterman and Qiu .
Some time ago Sivers and Collins suggested mechanisms for the asymmetries which are within the framework of the standard “two parton $`\to `$ two parton” picture, but in which the transverse momentum $`𝐤_𝐓`$ of the quark in the hadron plays an essential role. Sivers introduces a parton density which depends on the transverse spin $`𝐒_𝐓`$ of the proton in the form $`𝐒_𝐓\cdot (𝐤_𝐓\times 𝐩)`$, where $`𝐩`$ is the momentum of the polarized proton. However such a structure is forbidden by time reversal invariance . Nonetheless, on the grounds that the time-reversal argument could be invalidated by initial or final state interactions, Anselmino, Boglione and Murgia have applied the Sivers mechanism to $`p^{\uparrow }p\to \pi X`$ and shown that a very good fit to the data at large $`x_F`$ can be achieved. The problem is that this approach raises a fundamental question: if the initial and final state interactions are important, then it is hard to see why the underlying factorization into hard and soft parts is valid. (The advantage of the quark-gluon correlator approach is that it effectively provides initial and final state interactions calculable at the parton level.) In the Collins mechanism the asymmetry arises in the fragmentation of a polarized quark into a pion with transverse momentum $`p_T`$, described by a fragmentation function of the form $`\mathrm{\Delta }^ND(z,p_T)`$ (see Section II) which is convoluted with the transverse spin dependent parton density $`\mathrm{\Delta }_Tq`$, about which nothing is known experimentally. This structure is not in conflict with time reversal invariance. The definition of transverse spin dependent parton densities and possible methods of determining them have been studied by Artru and Mekhfi , Cortes, Pire and Ralston and Jaffe and Ji . Note that in the latter they are referred to as “transversity distributions” and that notation differs amongst all these papers.
An estimate of the size of the Collins effect was first made by Artru, Czyzewski and Yabuki , but more recently Anselmino, Boglione and Murgia have demonstrated that an excellent fit to the data on $`p^{}p\pi X`$ can be obtained with the Collins mechanism. However their fit is problematic since the $`\mathrm{\Delta }_Tq`$’s used in the fit violate the Soffer bound
$$|\mathrm{\Delta }_Tq(x)|\le \frac{1}{2}[q(x)+\mathrm{\Delta }q(x)]$$
(1)
at large $`x`$ when $`\mathrm{\Delta }q(x)`$, the usual longitudinal polarized parton density, is taken from any of the standard parametrizations of the longitudinal and unpolarized densities:
* GRSV-GRV = Glück, Reya, Stratmann and Vogelsang \+ Glück, Reya, Vogt ,
* GS-GRV = Gehrmann and Stirling \+ Glück, Reya, Vogt ,
* LSS-MRST = Leader, Sidorov and Stamenov \+ Martin, Roberts, Stirling and Thorne .
Note that in the above parametrizations the polarized densities are linked to particular parametrizations of the unpolarized densities, as indicated.
The key point is that the $`\pi ^{-}`$ data demand a large magnitude for $`\mathrm{\Delta }_Td`$ at large $`x`$, whereas $`\mathrm{\Delta }d(x)`$ is almost universally taken to be negative for all $`x`$, thereby making the Soffer bound much more restrictive for the $`d`$ than for the $`u`$ quark.
This raises an intriguing question. There is an old perturbative QCD argument that strictly in the limit $`x\to 1`$
$$\frac{\mathrm{\Delta }q(x)}{q(x)}\to 1,$$
(2)
which would imply that $`\mathrm{\Delta }d(x)`$ has to change sign and become positive at large $`x`$. The polarized DIS data certainly require a negative $`\mathrm{\Delta }d(x)`$ in the range $`0.004<x0.75`$ where data exist, but there is no reason why $`\mathrm{\Delta }d(x)`$ should not change sign near or beyond $`0.75`$. Indeed there is a parameterization of the parton densities by Brodsky, Burkhardt and Schmidt \[BBS\] which has this feature built into it. The original BBS fit is not really competitive since evolution in $`Q^2`$ was not taken into account, but a proper QCD fit based on the BBS parameterization was shown by Leader, Sidorov and Stamenov to give an adequate fit to the polarized DIS data.
In this paper we address the question of the correct use of the Collins mechanism in which the Soffer bound is respected. We find that it is impossible to get a good fit to the $`\pi ^\pm `$ data when the magnitude of $`\mathrm{\Delta }_Td`$ is controlled by (1) in which $`\mathrm{\Delta }d(x)`$ from any of the standard forms given above is used. On the contrary, and most surprisingly, we find that parametrizations in which $`\mathrm{\Delta }d(x)/d(x)\to 1`$ as $`x\to 1`$ allow a $`\mathrm{\Delta }_Td(x)`$ that leads to an excellent fit to most of the pion data.
In Section II we briefly describe the Collins mechanism, and present our results in Section III. Conclusions follow in Section IV.
## II The Model
As mentioned in the Introduction, we require the Soffer inequality, Eq.(1), to be respected by the distribution functions $`\mathrm{\Delta }_Tu`$ and $`\mathrm{\Delta }_Td`$ determined by our fit. Besides, the positivity constraint $`\mathrm{\Delta }^ND(z)\le 2D(z)`$ must hold, since $`D(z)=\frac{1}{2}[D^{\uparrow }(z)+D^{\downarrow }(z)]`$ and $`\mathrm{\Delta }^ND(z)=[D^{\uparrow }(z)-D^{\downarrow }(z)]`$. Therefore, the parametrizations are set so that these conditions are automatically fulfilled, in the following way. First we build a simple function of the form $`x^a(1-x)^b`$ (or $`z^\alpha (1-z)^\beta `$, as appropriate), where the powers $`a,b`$ or $`\alpha ,\beta `$ are $`\ge 0`$, and we divide by its maximum value. By allowing an extra multiplicative constant factor to vary from $`-1`$ to $`1`$, we obtain a function which, in modulus, will never be larger than 1. Then we parameterize $`\mathrm{\Delta }_Tq(x)`$ and $`\mathrm{\Delta }^ND(z)`$ by multiplying the functions we built by the constraint given by the Soffer inequality or the positivity limit. In this way we make sure that the bounds are never broken.
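A minimal sketch of this construction (the exponent values below are arbitrary positive samples; the case $`a=b=0`$ simply reduces the prefactor to the constant $`N`$):

```python
def bounded_factor(x, N, a, b):
    """N x^a (1-x)^b divided by its maximum on [0,1]; |result| <= |N|."""
    norm = (a ** a) * (b ** b) / (a + b) ** (a + b)  # value at x = a/(a+b)
    return N * x ** a * (1.0 - x) ** b / norm

# the normalized prefactor never exceeds 1 in modulus for |N| <= 1
xs = [i / 1000.0 for i in range(1001)]
for a, b in [(0.5, 2.0), (1.0, 3.0), (2.0, 2.0)]:
    assert max(abs(bounded_factor(x, -1.0, a, b)) for x in xs) <= 1.0 + 1e-9
```

Multiplying this bounded prefactor by $`\frac{1}{2}[q(x)+\mathrm{\Delta }q(x)]`$ (or by $`2D(z)`$) then enforces the Soffer bound (or the positivity limit) point by point.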
For the transversity distribution functions we set
$$\mathrm{\Delta }_Tu(x)=N_u\frac{x^a(1-x)^b}{\frac{a^ab^b}{(a+b)^{a+b}}}\left\{\frac{1}{2}[u(x)+\mathrm{\Delta }u(x)]\right\},$$
(3)
$$\mathrm{\Delta }_Td(x)=N_d\frac{x^c(1-x)^d}{\frac{c^cd^d}{(c+d)^{c+d}}}\left\{\frac{1}{2}[d(x)+\mathrm{\Delta }d(x)]\right\},$$
(4)
with $`|N_{u,d}|\le 1`$. Here $`q(x)`$ and $`\mathrm{\Delta }q(x)`$ are the whole distribution functions, i.e. they contain valence and sea contributions (but this is irrelevant at large $`x`$ since there the contribution of the sea is negligible). As in the previous calculation, only $`u`$ and $`d`$ contributions are taken into account in the polarized proton, so that
$$\mathrm{\Delta }_T\overline{u}(x)=\mathrm{\Delta }_T\overline{d}(x)=\mathrm{\Delta }_Ts(x)=\mathrm{\Delta }_T\overline{s}(x)=0.$$
(5)
For the functions $`q(x)`$ and $`\mathrm{\Delta }q(x)`$ we use, for comparison, the “standard” parton parametrizations mentioned in Section I and two further parametrizations, one due to Brodsky, Burkhardt and Schmidt (BBS) which ignores $`Q^2`$-evolution, and a more consistent version of this, due to Leader, Sidorov and Stamenov (LSS)<sub>BBS</sub> which includes the $`Q^2`$-evolution. These will be explained in more detail in Section III.
For the fragmentation function we have
$$\mathrm{\Delta }^ND(z)=N_F\frac{z^\alpha (1-z)^\beta }{\frac{\alpha ^\alpha \beta ^\beta }{(\alpha +\beta )^{\alpha +\beta }}}\left[\mathrm{\hspace{0.17em}2}D(z)\right],$$
(6)
with $`|N_F|\le 1`$. Here we take into account only valence contributions, so that isospin symmetry and charge conjugation give
$$\mathrm{\Delta }^ND_{\pi ^+}^u=\mathrm{\Delta }^ND_{\pi ^{-}}^d=\mathrm{\Delta }^ND(z),$$
(7)
$$\mathrm{\Delta }^ND_{\pi ^+}^{\overline{u}}=\mathrm{\Delta }^ND_{\pi ^{-}}^{\overline{d}}=0$$
(8)
and
$$\mathrm{\Delta }^ND_{\pi ^0}^u=\mathrm{\Delta }^ND_{\pi ^0}^d=\mathrm{\Delta }^ND_{\pi ^0}^{\overline{u}}=\mathrm{\Delta }^ND_{\pi ^0}^{\overline{d}}=\frac{1}{2}\mathrm{\Delta }^ND(z).$$
(9)
Notice that $`\mathrm{\Delta }^ND`$ is, in fact, a function of the intrinsic transverse momentum $`𝒑_T`$. $`\mathrm{\Delta }^ND_{h/a^{\uparrow }}(z,𝒑_T)`$ is defined as the difference between the number densities of hadrons $`h`$, a pion in our case, with longitudinal momentum fraction $`z`$ and transverse momentum $`𝒑_T`$, originating from a transversely polarized parton $`a`$ with spin either $`\uparrow `$ or $`\downarrow `$, respectively
$`\mathrm{\Delta }^ND_{h/a^{\uparrow }}(z,𝒑_T)`$ $`\equiv `$ $`\widehat{D}_{h/a^{\uparrow }}(z,𝒑_T)-\widehat{D}_{h/a^{\downarrow }}(z,𝒑_T)`$ (10)
$`=`$ $`\widehat{D}_{h/a^{\uparrow }}(z,𝒑_T)-\widehat{D}_{h/a^{\uparrow }}(z,-𝒑_T),`$ (11)
where the second line follows from the first one by rotational invariance. Details on the integration over the transverse momentum $`𝒑_T`$ and its dependence on $`z`$ are given in Ref. (see Eqs. (17) and (19)). The unpolarized fragmentation function for pions, $`D(z)`$, is taken from Binnewies et al. .
With these ingredients we are now ready to calculate, in complete analogy with Ref. , the $`p^{}p\pi X`$ single spin asymmetry
$$A_N=\frac{d\sigma ^{}d\sigma ^{}}{d\sigma ^{}+d\sigma ^{}}.$$
(12)
Here
$`d\sigma ^{\uparrow }-d\sigma ^{\downarrow }`$ $`=`$ $`{\displaystyle \underset{a,b,c,d}{\sum }}{\displaystyle \int \frac{dx_adx_b}{\pi z}d^2𝒑_T}`$ (13)
$`\times `$ $`\mathrm{\Delta }_Tq^a(x_a)q^b(x_b)\mathrm{\Delta }_{NN}\widehat{\sigma }^{ab\to cd}(x_a,x_b,𝒑_T)\mathrm{\Delta }^ND_{\pi /c}(z,𝒑_T),`$ (14)
where
$$\mathrm{\Delta }_{NN}\widehat{\sigma }^{ab\to cd}=\frac{d\widehat{\sigma }^{a^{\uparrow }b\to c^{\uparrow }d}}{d\widehat{t}}-\frac{d\widehat{\sigma }^{a^{\uparrow }b\to c^{\downarrow }d}}{d\widehat{t}},$$
(15)
and
$`d\sigma ^{\uparrow }+d\sigma ^{\downarrow }`$ $`=`$ $`2d\sigma ^{unp}=2{\displaystyle \underset{a,b,c,d}{\sum }}{\displaystyle \int \frac{dx_adx_b}{\pi z}}`$ (16)
$`\times `$ $`q^a(x_a)q^b(x_b){\displaystyle \frac{d\widehat{\sigma }^{ab\to cd}}{d\widehat{t}}}(x_a,x_b)D_{\pi /c}(z).`$ (17)
All details about the calculation can be found in Ref. . The relation between the above notation and that of Ref. is: $`q^a(x)=f_{a/p}(x)`$ and $`\mathrm{\Delta }_Tq^a(x)=P^{a/p^{\uparrow }}f_{a/p}(x)`$.
## III Results
We start by running two fits to the $`E704`$ experimental data using the popular GS polarized densities in conjunction with the GRV unpolarized densities, and the latest LSS polarized densities in conjunction with the MRST unpolarized densities. It should be noted that the 1996 analysis of Gehrmann and Stirling was done prior to the publication of a great deal of new, high precision data on polarized DIS, whereas the Leader, Sidorov and Stamenov analysis includes all of the present world data. Fig. 1 shows the complete failure to fit the data when the Soffer bound is implemented using the GS-GRV parametrizations ($`\chi ^2/DOF=25`$!). The corresponding transverse densities $`\mathrm{\Delta }_Tu(x)`$ and $`\mathrm{\Delta }_Td(x)`$ are shown in Fig. 2. In this fit only $`|N_F|=1`$ is possible and Fig. 2 corresponds to $`N_F=-1`$. The sign is discussed later.
A somewhat better picture emerges when using the LSS-MRST results to implement the Soffer bound: Fig. 3. The fit looks reasonable out to $`x_F=0.4`$ but fails beyond that. One finds $`\chi ^2/DOF=6.12`$. The transverse densities are shown in Fig. 4, where the curves correspond to negative $`N_F`$.
Note that in both these fits one finds $`\alpha =\beta =0`$ in Eqn. (6), showing that the magnitude of $`\mathrm{\Delta }^ND(z)`$ is maximized at each $`z`$-value.
The reason for the failure of the GS-GRV case and for the relative success of LSS-MRST can be understood by observing in Fig. 5 that the Soffer bound on $`\mathrm{\Delta }_Td(x)`$ is much more restrictive at large $`x`$ in the GS-GRV case. Comparison of Fig. 5 with Fig. 6 also indicates the source of the problem. The asymmetries for $`\pi ^\pm `$ are of roughly equal magnitude, whereas the Soffer bound restrictions are much more severe for the $`d`$ quark as a consequence of $`\mathrm{\Delta }d(x)`$ being negative for all $`x`$.
This suggests an intriguing possibility. The polarized DIS data only exist for $`x\lesssim 0.75`$ and there is really very little constraint from DIS on the $`\mathrm{\Delta }q(x)`$ for $`x`$ near to and beyond this value. At the same time there are perturbative QCD arguments which suggest that
$$\frac{\mathrm{\Delta }q(x)}{q(x)}\to 1\text{ as }x\to 1$$
(18)
and, indeed, even more precisely, that
$$q(x)-\mathrm{\Delta }q(x)\sim (1-x)^2q(x)\text{ as }x\to 1.$$
(19)
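A toy example (illustrative shapes and constant $`C`$ only, not the actual BBS fit) shows how a $`\mathrm{\Delta }d(x)`$ that is negative at small $`x`$ can satisfy both (18) and (19), changing sign at large $`x`$:

```python
# Toy d-quark densities (schematic shapes, not real fits) illustrating how
# constraints (18)-(19) force a sign change in Delta d(x) at large x.
def d_unpol(x):
    return x**-0.5 * (1.0 - x) ** 4

def delta_d(x, C=4.0):
    # Delta d = d * (1 - C*(1-x)^2): negative at small x for C > 1,
    # Delta d/d -> 1 as x -> 1, and d - Delta d = C*(1-x)^2 * d, as in (19)
    return d_unpol(x) * (1.0 - C * (1.0 - x) ** 2)

ratio = lambda x: delta_d(x) / d_unpol(x)
assert ratio(0.1) < 0.0         # negative at small/moderate x
assert abs(ratio(0.5)) < 1e-12  # sign change at x = 1 - 1/sqrt(C)
assert ratio(0.99) > 0.99       # ratio -> 1 near the exclusive boundary
```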
This constraint is almost universally ignored in parameterizing the $`\mathrm{\Delta }q(x)`$, on the grounds that (19) is incompatible with the evolution equations. But this is a “red herring” since the evolution equations do not hold in the region where (19) is valid, approaching the border of the exclusive region.
The imposition of (18) is exactly what we need for $`\mathrm{\Delta }q(x)`$ to change sign and become positive at large $`x`$, thereby diminishing the restrictive power of the Soffer bound on $`\mathrm{\Delta }_Td(x)`$.
In fact there does exist a parametrization of the $`\mathrm{\Delta }q(x)`$ which respects (18) and (19), namely that of Brodsky, Burkhardt and Schmidt (BBS) . Unfortunately BBS did not include any $`Q^2`$-evolution when determining the numerical values of their parameters from the DIS data, so their fit is not really adequate.
However, Leader, Sidorov and Stamenov made an extensive study, using the BBS functional forms, but including $`Q^2`$-evolution, and found a very good fit (LSS)<sub>BBS</sub> to the polarized DIS data available in 1997.
It can be seen in Fig. 5 that the Soffer bound on $`\mathrm{\Delta }_Td(x)`$ is much less restrictive for the BBS case and that the (LSS)<sub>BBS</sub> bound is rather similar to that of the LSS-MRST case, but is less restrictive for $`x\gtrsim 0.7`$. It is important to realize that although the $`\mathrm{\Delta }_Td(x_a)`$ needed in (13) are tiny for such large values of $`x_a`$, this is compensated for by the fact that large $`x_F`$ then demands very small $`x_b`$, where the unpolarized densities grow very large.
Note that the relative signs of $`N_u`$ and $`N_d`$ are opposite, but their absolute signs are not determined since, in principle, $`N_F`$ can be positive or negative. However, if one uses an $`SU(6)_F`$ wave function for the proton, one finds $`\mathrm{\Delta }_Tu`$ positive and $`\mathrm{\Delta }_Td`$ negative, so it seems reasonable to hypothesize that $`N_u>0`$ and $`N_d<0`$. For this reason we have chosen $`N_F`$ to be negative in the above. Note that $`N_u`$ and $`N_d`$ are not a direct measure of the magnitudes of $`\mathrm{\Delta }_Tu`$ and $`\mathrm{\Delta }_Td`$. Their role is linked specifically to the Soffer bound. The relative behavior of $`\mathrm{\Delta }_Tu`$ and $`\mathrm{\Delta }_Td`$ can be seen in Figs. 2, 4, 9, 10.
Indeed, as expected, we find that a significantly better fit to the asymmetry data is achieved using the BBS and the (LSS)<sub>BBS</sub> parametrizations, with $`\chi ^2/DOF=1.45`$ and $`\chi ^2/DOF=2.41`$ respectively, as can be seen in Figs. 7 and 8. The curves reproduce the trends in the data right out to $`x_F\simeq 0.7`$. Figs. 9 and 10 show how similar the allowed ranges of transverse polarized densities are in the two cases. In Fig. 9, $`0.88\le |N_F|\le 1`$, whereas in Fig. 10 $`0.91\le |N_F|\le 1`$. As before the curves correspond to negative $`N_F`$.
The parameter values for all the parametrizations are shown in Table 1, where it should be recalled that $`|N_F|`$, $`|N_u|`$ and $`|N_d|\le 1`$.
## IV Conclusions
We have demonstrated that the Collins mechanism is able to explain much of the data on the transverse single spin asymmetries in $`p^{\uparrow }p\to \pi X`$, namely the data in the region $`x_F\lesssim 0.7`$, if, and only if, the longitudinally polarized $`d`$-quark density, which is negative for small and moderate $`x`$, changes sign and becomes positive at large $`x`$. There is hopeless disagreement when using the longitudinal polarized densities due to Gehrmann and Stirling , and matters are significantly better when using the most up to date parameterization of Leader, Sidorov and Stamenov . But the most successful fits arise from parametrizations which respect the PQCD condition $`\mathrm{\Delta }d(x)/d(x)\to 1`$ as $`x\to 1`$.
For parametrizations of $`\mathrm{\Delta }d(x)`$ with this property there are interesting consequences in polarized DIS, namely, the neutron longitudinal asymmetry $`A_1^n(x)`$ should change sign and tend to $`1`$ as $`x\to 1`$ (see Fig. 11). The region of large $`x`$ has hardly been explored in polarized DIS up to the present. Clearly a study of this region might turn out to be very interesting.
There remains, however, the problem of the $`p^{\uparrow }p\to \pi X`$ data at the largest values of $`x_F`$ so far measured, i.e. $`0.7\lesssim x_F\lesssim 0.82`$. It does not seem possible to account for these asymmetries within the framework of the Collins mechanism. On the other hand Qiu and Sterman , using a “three parton $`\to `$ two parton” amplitude for the hard partonic scattering and a “gluonic pole” mechanism, claim to be able to reproduce the very large asymmetries at $`x_F\simeq 0.8`$. However their study must be considered as preliminary, since it relies on a completely ad hoc assumption that the essential new twist-three quark-gluon-quark correlator function $`T_F^{(v)}(x,x)`$, for given flavour $`f`$, is proportional to $`q_f(x)`$, and no attempt is made to fit the detailed $`x_F`$-dependence of the data.
ACKNOWLEDGMENTS
The authors are grateful to M. Anselmino and F. Murgia for general comments, and to S. Brodsky, R. Jakob, P. Kroll and I. Schmidt for discussions concerning the condition $`\mathrm{\Delta }q(x)/q(x)1`$ as $`x1`$. E. L. is grateful for the hospitality of the Division of Physics and Astronomy of the Vrije Universiteit, Amsterdam. This research project was supported by the Foundation for Fundamental Research on Matter (FOM) and the Dutch Organization for Scientific Research (NWO). |
# Weak Gravitational Lensing by the Nearby Cluster Abell 3667
## 1 Introduction
Weak gravitational lensing—measurement of the induced shear of distant galaxy images to infer the foreground mass distribution—has now become a standard tool to probe dark matter in galaxy clusters. Since the work of Tyson et al. (1990), weak lensing studies of over twenty galaxy clusters have been published (Mellier, 1999). In all cases to date, the observations have been confined to moderate to high-redshift clusters, $`z_{cl}>0.15`$ (lensed arcs have been detected in two low-redshift clusters (Blakeslee & Metzger, 1998; Campusano et al., 1998), but such instances of strong lensing only probe the cores of clusters), for two reasons: (i) the cluster lensing strength is maximized if the angular diameter distance to the foreground cluster is roughly half that of the background source galaxies, which at a limiting depth of $`R\sim 25`$ are typically at $`z_s\sim 1`$; (ii) a distant cluster subtends an area on the sky which can be roughly encompassed by a single CCD chip, allowing deep exposures of the entire field in moderate observing time. Unfortunately, most of our knowledge of cluster properties comes from X-ray and dynamical (optical redshift) studies, which are much more easily conducted for nearby clusters. This mismatch between the observing requirements for weak lensing and dynamical studies has hampered direct comparison of these mass measuring techniques. It would be beneficial to have weak lensing observations for a sample of nearby clusters. This would enable more detailed studies of the cluster binding mass, baryon fraction, and morphology.
For weak lensing studies, nearby clusters have the advantage that their background galaxies are relatively well resolved (reducing the effect of PSF smearing) and their inferred projected mass distributions are relatively insensitive to the uncertain background source galaxy redshift distribution in the limit $`z_{cl}\ll z_s`$. The advent of CCD mosaic cameras on 4m telescopes, coupled with developments in weak lensing analysis, now make possible wide-field studies of weak lensing in nearby clusters (Joffre et al., 1999; Stebbins et al., 1996).
With these benefits in mind, we have begun a complete, X-ray luminosity-selected weak lensing survey of 19 nearby ($`z_{cl}<0.1`$) Southern clusters (Joffre et al., 1999); this sample, drawn from the XBACS (Ebeling et al., 1996), is also being targeted by the Viper telescope for Sunyaev-Zel’dovich observations (Romer et al., 1999). As of October 1999, we have completed imaging observations of 9 clusters in the sample, using the BTC and Mosaic II cameras at the 4m telescope at CTIO. In this Letter, we present results for our first target, the $`z=0.055`$ cluster Abell 3667.
The major obstacle to weak lensing studies of nearby clusters is the small shear signal they produce. Shear estimates are limited by the intrinsic ellipticity of the background galaxies (shape noise) and PSF anisotropies. To create reliable mass maps we must therefore measure ellipticities for a large number of background galaxies and correct for the PSF anisotropy very accurately. Given that we expect cluster-induced shears of at most $`5\%`$, we have reduced systematic sources of shear, which include atmospheric turbulence, telescope shake, and optical aberration, to less than $`0.5\%`$ across the images (based on the corrected stellar ellipticities). As proof of our ability to make these measurements, we present weak lensing analyses of A3667 based on two sets of observations taken $`\sim 1`$ year apart under different seeing conditions and with different observing strategies.
## 2 Observations
Much ancillary information is available for Abell 3667. Based on redshifts for 154 cluster members (Sodre et al., 1992; Katgert et al., 1996), an optical velocity dispersion in the range $`\sigma _v=970-1200`$ km/s has been inferred (Sodre et al., 1992; Fadda et al., 1996). From the ROSAT All-Sky Survey, the X-ray luminosity $`L_X=6.5\times 10^{44}`$ ergs/s (Ebeling et al., 1996), among the strongest X-ray sources in the southern sky. The radio map shows a double halo structure (Rottgering et al. 1997). The A3667 field contains two dominant D galaxies; the ROSAT PSPC image (Knopp et al., 1996) and ASCA temperature map (Markevitch et al., 1998) indicate the cluster is undergoing a merger along the direction joining them (Knopp et al., 1996).
We observed A3667 on two separate runs with the BTC (Wittman et al., 1998) at the CTIO 4m telescope. The BTC is a $`4096\times 4096`$ pixel camera with a pixel scale of $`0.43^{\prime \prime }`$; there are significant gaps between the four chips that must be removed from the final combined image by dithering exposures. In June 1997, two of us (PF and JM) observed A3667 in the $`R`$ and $`B_J`$ bands in relatively poor seeing (combined stellar FWHM of $`1.55^{\prime \prime }\pm 0.15^{\prime \prime }`$ in R). The combined image in each filter covers an approximate area $`42^{\prime }\times 42^{\prime }`$, with a maximum surface brightness limit of 28.6 mag arcsec<sup>-2</sup> in $`R`$ and 28.9 mag arcsec<sup>-2</sup> in $`B_J`$, corresponding to $`1\sigma `$ in the sky. Total observing times were 5500s in $`B_J`$ and 15500s in $`R`$. We denote these the ‘$`\alpha `$’ set of images in our analysis. In September 1998, three of us (JF, TM, RN) observed A3667 under better seeing conditions (combined stellar FWHM $`=1.23^{\prime \prime }\pm 0.07^{\prime \prime }`$ in $`R`$), covering a $`44^{\prime }\times 44^{\prime }`$ area to a maximum surface brightness limit of 28.2 mag arcsec<sup>-2</sup> in $`R`$ and 27.9 mag arcsec<sup>-2</sup> in $`B`$. This corresponded to a total observing time of 12600s in $`R`$ and 2250s in $`B`$. We also obtained $`I`$-band images which we have used to construct a color-magnitude diagram to remove cluster members from the background sample. (Due to fringing effects, the $`I`$-band data was not used in the lensing analysis itself.) We denote these better-seeing images the ‘$`\beta `$’ set.
After debiasing and flatfielding of the frames, the frames were coadded. BTC observations of USNO astrometric fields were used to remove field distortion from the images. Objects were detected using SExtractor (v2.1.0) (Bertin & Arnouts, 1996). Our own code measures quadrupole moments $`Q_{ij}`$ of each object using the adaptive Gaussian weighting scheme of Bernstein et al. (2000): the image is multiplied by an elliptical Gaussian weight function, the shape of which is successively iterated to match the object’s ellipticity. This routine returns estimates of the ellipticity vectors $`e_1=(Q_{11}-Q_{22})/(Q_{11}+Q_{22})`$ and $`e_2=2Q_{12}/(Q_{11}+Q_{22})`$ and their uncertainties. The unsaturated bright stars in each image were used to characterize the PSF anisotropy as a function of position. Following Fischer & Tyson (1997), we convolved the images with a spatially varying kernel which circularizes the stellar images. We then repeat the detection and measurement of background galaxies in the PSF-corrected images. The moments of each galaxy are finally corrected for isotropic PSF dilution using the simulations and analytic results of Bernstein et al. (2000); the correction factor depends upon galaxy image size relative to the PSF and upon the image profile. Objects in the background galaxy samples for the lensing analysis are selected to have magnitudes $`R<24.75`$, $`B<24.5`$, and $`B_J<25.5`$. They are also required to have half-light radii at least $`1.5`$ times that of the PSF. The $`R-I`$ color-magnitude diagram is used to remove red cluster members from the background sample brighter than $`R=22`$. These cuts ensure that the object moments can be accurately measured ($`S/N>8`$–$`10\sigma `$), that stellar contamination is minimal, and that the vast majority of the sample lies well behind the cluster.
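The moment-based ellipticity definitions above can be sketched as follows; note this is a simplified one-pass version with a fixed circular Gaussian weight, whereas the scheme of Bernstein et al. (2000) iterates an adaptive elliptical weight and corrects for PSF dilution:

```python
import numpy as np

def weighted_ellipticity(img, x0, y0, sigma_w):
    """One-pass (e1, e2) from Gaussian-weighted quadrupole moments.
    The text iterates an adaptive *elliptical* weight; this circular,
    non-iterated version only illustrates the definitions of e1 and e2."""
    y, x = np.indices(img.shape, dtype=float)
    dx, dy = x - x0, y - y0
    w = img * np.exp(-(dx**2 + dy**2) / (2.0 * sigma_w**2))
    norm = w.sum()
    Qxx = (w * dx * dx).sum() / norm
    Qyy = (w * dy * dy).sum() / norm
    Qxy = (w * dx * dy).sum() / norm
    T = Qxx + Qyy
    return (Qxx - Qyy) / T, 2.0 * Qxy / T

# elongated Gaussian test image: expect e1 > 0 (x-axis stretch) and e2 = 0
y, x = np.indices((65, 65), dtype=float)
img = np.exp(-((x - 32) ** 2 / (2 * 6.0**2) + (y - 32) ** 2 / (2 * 3.0**2)))
e1, e2 = weighted_ellipticity(img, 32.0, 32.0, 8.0)
```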
The resulting samples contain approximately 11,000 background galaxies for the $`\alpha `$ R image, 30,000 for the $`\beta `$ R, 18,000 for the $`\alpha `$ $`B_J`$, and 11,000 objects for the $`\beta `$ $`B`$ image. We note that the $`\beta `$ R image has a larger number of galaxies than the $`\alpha `$ set, although the latter has a longer exposure time. This is due to the fact that the $`\beta `$ images covered a larger area on the sky and the poor seeing of the $`\alpha `$ R image makes galaxies and stars difficult to distinguish. Many of these smaller galaxies are lost when we cut on half-light radius.
## 3 Comparison
To study the robustness of the mass maps derived from the two sets of observations, we first examine the consistency of the measured ellipticities of background galaxies. We focus on the shear measurements (as opposed to the mass maps themselves) because we expect the errors to be uncorrelated in different regions of the sky. In principle, differences in seeing, imaging depth, and filters will lead to differences between the shear fields estimated from the two observations. However, after applying the corrections discussed above, the derived shear fields should be strongly correlated.
We trimmed the R and B band images in the $`\alpha `$ and $`\beta `$ datasets to the regions common to both and calculated the mean ellipticities of the background galaxies in $`100`$ angular bins of width $`25^{\prime \prime }`$. To quantify the consistency of the two fields we calculate the $`\chi ^2`$ value for the ellipticities for each filter,
$$\chi ^2=\underset{i=1}{\overset{2}{\sum }}\underset{N_b=1}{\overset{100}{\sum }}\frac{(e_{i\alpha }-e_{i\beta })^2}{(\sigma _{e_{i\alpha }}^2+\sigma _{e_{i\beta }}^2)}.$$
The estimate of the ellipticity variance in a spatial bin for a given data set is
$$\sigma _{ei}^2=(N_c/N^2)\langle \sigma _m^2\rangle _c+(N_d/N)^2\sigma _{rms}^2/(N_d-1),$$
where $`N`$ is the total number of background galaxies in the bin and $`N_c`$ and $`N_d`$ are the numbers of galaxies in the bin with and without counterparts in the other data set. $`\langle \sigma _m^2\rangle _c`$ is the average measurement error in $`e_i`$ derived from our measurement uncertainties in $`Q_{ij}`$ for the galaxies common to both sets; $`\sigma _{rms}`$ is the rms spread in the ellipticities of the $`N_d`$ galaxies in the bin not found in the other set. That is, for galaxies measured in both data sets, we use the measurement uncertainty rather than the rms per bin to take into account the correlation between the data sets.
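A direct transcription of these formulas (the helper names are ours, not from the original pipeline, and the angular binning itself is omitted):

```python
import numpy as np

def bin_variance(sig_m_common, e_distinct):
    """Per-bin ellipticity variance: measurement errors for the N_c galaxies
    common to both data sets, rms scatter for the N_d galaxies seen in only
    one, following the sigma_ei^2 formula in the text."""
    N_c, N_d = len(sig_m_common), len(e_distinct)
    N = N_c + N_d
    var_c = (N_c / N**2) * np.mean(np.square(sig_m_common)) if N_c else 0.0
    var_d = (N_d / N) ** 2 * np.var(e_distinct) / (N_d - 1) if N_d > 1 else 0.0
    return var_c + var_d

def chi2(e_a, e_b, var_a, var_b):
    """chi^2 between the binned ellipticity components of the two data sets."""
    return float(np.sum((e_a - e_b) ** 2 / (var_a + var_b)))
```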
Figure 1a shows the binned ellipticities of background galaxies found in both data sets in the R band; the reduced $`\chi ^2`$ is 1.11 for 200 d.o.f. Fig. 1b compares the ellipticities for all galaxies: in this case, the scatter between the datasets is significantly larger, as expected since they are no longer confined to the same galaxies; the reduced $`\chi ^2`$ for all the objects is 1.19, implying a probability $`P(\chi ^2|200)=3.5\%`$ in the case of random errors. The average ellipticity in Fig. 1b is $`2.57\pm 0.22\%`$ for the $`\alpha `$ set and $`2.42\pm 0.16\%`$ for the $`\beta `$ set. As expected for low redshift cluster lensing, the maximum tangential ellipticity is only $`\sim 4\%`$, while the majority of the signal is $`\sim 1\%`$. The major contribution to $`\chi ^2`$ comes from a few bins located at the edge of the images; removing the two worst bins drops $`\chi ^2`$ substantially, raising the probability that the ellipticities are consistent within the errors to $`\sim 30\%`$. The edges of our images have the largest field distortions and shallowest coverage and are therefore where we expect the largest discrepancy between the data sets. For the blue filters, due to the shallowness of the $`\beta `$ set and the fact that the $`B`$ and $`B_J`$ filters are not identical, there are very few background objects in common ($`\sim `$ 1000 vs. 9000 in the R images). The error in $`\chi ^2`$ is dominated by the rms ellipticity, giving a value of $`\chi ^2=1.125`$ or $`P(\chi ^2|200)=10.9\%`$.
We performed mass reconstruction on a $`60\times 60`$ grid, applying a version of the Kaiser and Squires algorithm (Squires et al., 1996) separately to the blue and red catalogs and combining them by weighting the mass at each gridpoint by its S/N. We have chosen to plot the S/N of each mass map pixel as this gives a direct picture of which mass peaks are significant. We estimated the noise in each pixel for this map as follows: we rotated each galaxy orientation through an arbitrary angle and then computed the resulting mass map; we then repeated this procedure 100 times and estimated the noise from the variance of these 100 noise maps. As a check of systematics, we produced mass maps for each image with all galaxy orientations rotated by 45 degrees and found no significant features. In the absence of residual systematic effects such a map should be consistent with noise (Stebbins et al., 1996; Kaiser et al., 1994). As an additional systematics check, we have made shear maps for the stars in each filter; they are consistent with noise, as expected.
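The random-rotation noise estimate exploits the spin-2 nature of the ellipticity: rotating a galaxy by $`\varphi `$ rotates $`(e_1,e_2)`$ by $`2\varphi `$. A sketch of one noise realization (the mass inversion step is omitted, and the sign convention is an assumption):

```python
import numpy as np

def rotate_ellipticities(e1, e2, phi):
    """Ellipticity is spin-2: rotating galaxy orientations by phi rotates
    (e1, e2) by 2*phi.  The sign convention here is an assumption."""
    c, s = np.cos(2.0 * phi), np.sin(2.0 * phi)
    return c * e1 + s * e2, -s * e1 + c * e2

rng = np.random.default_rng(1)
e1 = rng.normal(0.0, 0.3, size=10000)   # mock catalog ellipticities
e2 = rng.normal(0.0, 0.3, size=10000)

# random orientations erase any coherent shear while preserving |e| per
# galaxy; feeding many such catalogs through the reconstruction yields the
# per-pixel noise variance used in the text
phi = rng.uniform(0.0, np.pi, size=e1.size)
n1, n2 = rotate_ellipticities(e1, e2, phi)
```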
The combined convergence maps are shown in Figures 2 and 3 for the $`\alpha `$ and $`\beta `$ data sets. There is remarkable agreement in morphology of the two maps, and the largest mass features appear to be robust. To quantify this correlation, we calculate Pearson’s r between the two mass maps:
$$r=\frac{\sum ^N(m_\alpha -\langle m_\alpha \rangle )(m_\beta -\langle m_\beta \rangle )}{\sqrt{\sum ^N(m_\alpha -\langle m_\alpha \rangle )^2}\sqrt{\sum ^N(m_\beta -\langle m_\beta \rangle )^2}},$$
where $`m`$ is the value of a mass map pixel and $`N`$ is the total number of gridpoints. $`r=-1`$ for completely anticorrelated data, 0 for uncorrelated data, and 1 for completely correlated data. We find a value of $`r=0.6`$ between the two maps, indicating that they are in fact correlated. The formal probability of achieving such a high value of $`r`$ by chance for uncorrelated maps is $`\mathrm{erfc}(|\mathrm{r}|\sqrt{\mathrm{N}/2})`$, negligibly small for 3600 gridpoints. To further quantify the degree of correlation, we calculated Pearson’s r between the 100 noise maps described above and found these maps gave a value of $`0.009\pm 0.089`$. To determine the correlations between the entire maps, rather than the correlation introduced by the coincidence of the central peak, we masked out a box centered on the two maps with a size of $`10^{\prime }`$. When Pearson’s r was computed on the remaining unmasked areas, we still found a value of $`r=0.40`$. This value remained fairly constant as we increased the masked region until the masked region’s size approached that of the two images. Inspection of the maps indicates that even the relatively low significance ($`3\sigma `$) mass peaks which correspond to a convergence of $`0.02`$ are reproducible. We quantified this by masking regions greater than $`3\sigma `$ in either map and computing Pearson’s r on the unmasked portions. The value of the correlation only dropped to $`r=0.27`$. We note that r dropped quickly to zero if we masked out all regions of significance lower than $`3\sigma `$. We have also implemented the mass reconstruction algorithm of Seitz & Schneider (1998) and find the same correlation between features.
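Pearson’s r and the formal chance probability can be computed as below; synthetic maps with a shared component stand in for the real reconstructions, and the mask mimics excising a box around the central peak:

```python
import numpy as np
from math import erfc, sqrt

def pearson_r(m_a, m_b, mask=None):
    """Pearson's r between two mass maps; an optional boolean mask drops
    pixels (e.g. a box around the central peak) before correlating."""
    a, b = np.ravel(m_a), np.ravel(m_b)
    if mask is not None:
        keep = ~np.ravel(mask)
        a, b = a[keep], b[keep]
    a, b = a - a.mean(), b - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

def chance_probability(r, N):
    """Formal probability of |r| this large for uncorrelated maps."""
    return erfc(abs(r) * sqrt(N / 2.0))

# synthetic 60x60 maps sharing a common component; expect r near
# 1/(1 + 0.5^2) = 0.8 for this noise level
rng = np.random.default_rng(0)
common = rng.normal(size=(60, 60))
map_a = common + 0.5 * rng.normal(size=(60, 60))
map_b = common + 0.5 * rng.normal(size=(60, 60))
r = pearson_r(map_a, map_b)

mask = np.zeros((60, 60), dtype=bool)
mask[25:35, 25:35] = True                  # excise a central box, as in the text
r_off_peak = pearson_r(map_a, map_b, mask)
```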
The mass map is strongly peaked around the central D galaxy and generally correlates well with both the cluster light and ROSAT X-ray flux distributions (Joffre et al., 2000). At lower significance, there also appears to be mass associated with the second bright D galaxy in the NW of the image and with cluster galaxies in the N and SE of the central D.
## 4 Conclusion
We have detected weak lensing at the $`0.5`$–$`4\%`$ level in a nearby galaxy cluster using two separate sets of images. Despite differences in depth, seeing, and filters, the shear maps are consistent within the errors, and the reconstructed mass maps are strongly correlated. This reproducibility demonstrates the feasibility of using weak lensing to probe the mass distribution in low-redshift clusters. In future work, we will apply these methods to all the clusters in our survey and compare the resulting mass maps with X-ray, optical, and Sunyaev-Zel’dovich data to obtain a more detailed picture of the properties of nearby clusters.
This work is supported in part by Chandra Fellowship grant PF8-1003, awarded through the Chandra Science Center. The Chandra Science Center is operated by the Smithsonian Astrophysical Observatory for NASA under contract NAS8-39073. This research was supported in part by the DOE and by NASA grant NAG5-7092 at Fermilab and NSF PECASE grant AST 9703282 at the University of Michigan. |
# Black Hole Entropy in Induced Gravity and Information Loss
## Abstract
The basic assumption of the induced gravity approach is that Einstein theory is an effective, low-energy form of a quantum theory of constituents. In this approach the Bekenstein-Hawking entropy $`S^{BH}`$ of a black hole can be interpreted as a measure of the loss of information about constituents inside the black hole horizon. To be more exact, $`S^{BH}`$ is determined by quantum correlations between “observable” and “non-observable” states with positive and negative energy $`ℰ`$, respectively. It is important that for non-minimally coupled constituents $`ℰ`$ differs from the canonical Hamiltonian $`ℋ`$. This explains why previous definitions of the entanglement entropy in terms of $`ℋ`$ failed to reproduce $`S^{BH}`$.
The idea of induced gravity has been formulated first by A.D.Sakharov . Its basic assumption is that the dynamical equations for the gravitational field $`g_{\mu \nu }`$ are related to properties of the physical vacuum which has a microscopic structure like a usual medium. A suitable example for the comparison is a crystal lattice. The metric $`g_{\mu \nu }`$ plays a role similar to a macroscopic displacement in the crystal and gravitons are collective excitations of the microscopic constituents of the vacuum analogous to phonons. In the crystal, a deformation results in the change of the energy expressed in terms of displacements. In a similar way, the physical vacuum responses to variations of $`g_{\mu \nu }`$ by changing its energy due to polarization effects. These quantum effects can be described by the effective action $`\mathrm{\Gamma }[g_{\mu \nu }]`$ for the metric
$$e^{i\mathrm{\Gamma }[g_{\mu \nu }]}=[D\mathrm{\Phi }]e^{iI[\mathrm{\Phi },g_{\mu \nu }]}.$$
(1)
The notation $`\mathrm{\Phi }`$ stands for the constituent fields, and $`I[\mathrm{\Phi },g_{\mu \nu }]`$ denotes their classical action. Sakharov’s idea is based on the observation that one of leading contributions to $`\mathrm{\Gamma }[g_{\mu \nu }]`$ has a form of the Einstein action.
Why does the idea of induced gravity have something to do with the problem of black hole entropy? The Bekenstein-Hawking entropy
$$S^{BH}=\frac{1}{4G}𝒜,$$
(2)
where $`𝒜`$ is the area of the horizon and $`G`$ is the Newton constant, appears in the analysis of the laws of black hole mechanics in classical Einstein theory. Induced gravity is endowed with a microscopic structure, constituents, which can be related naturally to degrees of freedom responsible for $`S^{BH}`$. As was first pointed out in , the mechanism which makes an explanation of $`S^{BH}`$ in induced gravity possible is quantum entanglement . The constituents in the black hole exterior are in a mixed state due to the presence of the horizon. This state is characterized by the entropy $`S`$ of entanglement. In a local field theory, correlations between the external and internal regions are restricted to the horizon surface, and so $`S\sim 𝒜/l^2`$. The constant $`l`$ is the same ultraviolet cutoff which is required to make effective action (1) finite . Because the induced Newton constant $`G`$ in $`\mathrm{\Gamma }[g_{\mu \nu }]`$ is proportional to $`l^2`$, the entropy $`S`$ is comparable to $`S^{BH}`$.
To understand whether $`S^{BH}`$ is a “purebred” entanglement entropy one has to avoid the cutoff procedure and consider a theory which is free from divergences. The Bekenstein-Hawking entropy is a purely low-energy quantity. It was suggested in that the mechanism of generation of $`S^{BH}`$ is universal for theories where gravity is completely induced by quantum effects, as in Sakharov’s approach or in string theory, and which have an ultraviolet-finite induced Newton constant $`G`$. The last requirement makes the theory self-consistent and enables one to carry out the computations. The universality means that $`S^{BH}`$ may be understood without knowing the details of the quantum gravity theory (see also for a different realization of this idea).
The models which obey these requirements were constructed in . The constituents are chosen there as non-interacting scalar, spinor and vector fields. They have different masses comparable to the Planck mass $`m_{Pl}`$. The masses and other parameters obey certain constraints which provide cancellation of the leading (or all) divergences in the effective gravitational action (1). In the low-energy limit, when the curvature $`R`$ of the background is small as compared to $`m_{Pl}^2`$,
$$\mathrm{\Gamma }[g_{\mu \nu }]\simeq \frac{1}{16\pi G}\int \sqrt{-g}d^Dx(R-2\mathrm{\Lambda }).$$
(3)
The induced Newton constant $`G`$ and the cosmological constant $`\mathrm{\Lambda }`$ are the functions of parameters of the constituents and can be computed. In four dimensions one imposes further restrictions to eliminate the cosmological term. To study charged black holes one has to generalize Sakharov’s proposal and consider the induced Einstein-Maxwell theory (see the details in ).
All above models include bosonic constituents (scalars or vectors) with a positive non-minimal coupling to the background curvature. The presence of such couplings is unavoidable if one wants to eliminate the divergences in $`G`$. The Bekenstein-Hawking entropy of a non-extremal stationary black hole has the following form
$$S^{BH}=\frac{1}{4G}𝒜=S-𝒬,$$
(4)
$$S=-\text{Tr}\rho \mathrm{ln}\rho ,\rho =𝒩e^{-\beta _Hℋ}.$$
(5)
Here $`S`$ is the thermal entropy of the constituents propagating in Planck region near the black hole horizon and $`\beta _H^1=\frac{\kappa }{2\pi }`$ is the Hawking temperature. The operator $``$ is the canonical Hamiltonian which generates translations along the Killing vector field $`\xi `$ ($`\xi ^2=0`$ on the horizon). Quantity $`𝒬`$ is the vacuum average of an operator which has the form of the integral over the horizon $`\mathrm{\Sigma }`$. $`𝒬`$ is determined only by the non-minimal couplings of the constituents and it can be interpreted as a Noether charge on $`\mathrm{\Sigma }`$ defined with respect to field $`\xi `$ . Formula (4) is absolutely universal in a sense it depends neither on the choice of constituent species nor on a type of a black hole. The entropy $`S`$ is always positive and divergent and so does $`𝒬`$ in the considered models. Subtracting $`𝒬`$ in (4) compensates divergences of $`S`$ rendering $`S^{BH}`$ finite. Thus, non-minimal couplings provide both finiteness of the effective action and of $`S^{BH}`$.
The physical meaning of $`𝒬`$ becomes evident if we analyze the classical energy $`ℰ`$ of the non-minimally coupled constituents, defined on a space-like hypersurface $`\mathrm{\Sigma }_t`$ which crosses the bifurcation surface $`\mathrm{\Sigma }`$
$$ℰ=\int _{\mathrm{\Sigma }_t}T_{\mu \nu }\xi ^\mu d\mathrm{\Sigma }^\nu .$$
(6)
Here $`T_{\mu \nu }`$ is the stress-energy tensor and $`d\mathrm{\Sigma }^\mu `$ is the future-directed vector of the volume element on $`\mathrm{\Sigma }_t`$. For the fields in black hole exterior
$$ℰ=ℋ-\frac{\kappa }{2\pi }𝒬,$$
(7)
where $`ℋ`$ is the canonical Hamiltonian. The difference between the two energies is a total divergence, which results in boundary terms. $`𝒬`$ is the contribution from the horizon $`\mathrm{\Sigma }`$, which is an internal boundary of $`\mathrm{\Sigma }_t`$. The energies $`ℰ`$ and $`ℋ`$ play different roles: $`ℋ`$ is the generator of canonical transformations along $`\xi `$, while $`ℰ`$ is the observable energy which is related to the thermodynamical properties of a black hole. Indeed, there is the classical variational formula
$$\delta M=\frac{\kappa }{2\pi }\delta S^{BH}+\delta ℰ$$
(8)
which relates the change of the black hole mass at infinity, $`\delta M`$, to the change of the entropy $`\delta S^{BH}`$ under a small perturbation of the energy $`\delta ℰ`$ of the matter fields.
Suppose now that the total mass $`M`$ is fixed. Then the change of $`S^{BH}`$ is completely determined by the change of the energy $`ℰ`$ in the black hole exterior. In classical processes matter falls into the black hole. This decreases the energy ($`\delta ℰ<0`$) and increases the black hole entropy $`S^{BH}`$. In quantum theory $`S^{BH}`$ may decrease due to the Hawking effect. In such a process $`\delta ℰ>0`$, and the black hole absorbs a negative energy.
We can now make two observations. First, according to (5), changes of $`S`$ are related to changes of the canonical energy $`ℋ`$, and because $`ℋ`$ and $`ℰ`$ are different, the two entropies, $`S^{BH}`$ and $`S`$, should be different as well. Second, whether a physical process results in a loss of information inside the black hole can be decided by analyzing the sign of the energy change $`\delta ℰ`$. Note that using the sign of the canonical energy for this purpose would result in a mistake. For instance, if $`𝒬>0`$, there may be classical processes where $`ℋ`$ and $`S^{BH}`$ increase simultaneously, see (7), (8).
It is known that the thermal entropy $`S`$, defined in (5), can be interpreted as an entanglement entropy . This is true when one is interested in correlations (entanglement) of states characterized by a definite canonical energy $`ℋ`$. However, to describe a real information loss one has to study correlations between observable and non-observable states, which have positive and negative energy $`ℰ`$, respectively.
Consider a pair creation process near the black hole horizon. The antiparticle of a pair carries a negative energy. It tunnels through the horizon and becomes non-observable. As a consequence, the observed particle is in a mixed state. The entropy which describes the corresponding information loss can be defined by the Shannon formula
$$S^I=-\sum _ℰp(ℰ)\mathrm{ln}p(ℰ).$$
(9)
Here $`p(ℰ)`$ is the probability to observe a particle with a definite energy $`ℰ`$. Obviously, $`p(ℰ)`$ coincides with the probability for the antiparticle with energy $`-ℰ`$ to tunnel through the horizon. This process can also be described in another form. Consider a Kruskal extension of the black hole space-time near the horizon. Then the surface $`\mathrm{\Sigma }_t`$ consists of two parts, $`\mathrm{\Sigma }_t^R`$ (right) and $`\mathrm{\Sigma }_t^L`$ (left), which are separated by the bifurcation surface $`\mathrm{\Sigma }`$. The creation of a pair is equivalent to quantum tunneling of a particle moving backward in time from the left region to the right. The corresponding amplitude is the evolution operator from $`\mathrm{\Sigma }_t^L`$ to $`\mathrm{\Sigma }_t^R`$ in Euclidean time $`\tau `$. In the Hartle-Hawking vacuum $`\tau =\beta _H/2`$.
How can the tunneling probability be described? First, note that the energy $`ℰ`$ has the form of the Hamiltonian $`ℋ`$ modified by a “perturbation” $`𝒬`$. Such a modified Hamiltonian may be considered as an operator which generates the evolution of states having definite energies. Then the operator
$$T=e^{-\frac{\beta _H}{2}ℰ}$$
(10)
corresponds to the tunneling transition. Second, because a particle on $`\mathrm{\Sigma }_t^L`$ is moving backward in time, $`ℰ`$ should be defined by (6) with the past-oriented Killing field $`\xi `$. It means that relation (7) should now be written as
$$ℰ=ℋ+\frac{\kappa }{2\pi }𝒬,$$
(11)
if one keeps for $`𝒬`$ the former expression. (So $`𝒬`$ is positive for positive non-minimal couplings.) The same relation is true for the energy and Hamiltonian of an antiparticle tunneling from $`\mathrm{\Sigma }_t^R`$ to $`\mathrm{\Sigma }_t^L`$.
The probability $`p(ℰ)`$, which involves transitions from all states in one region to a state with the energy $`ℰ`$ in the other region, is
$$p(ℰ)=𝒩\langle ℰ|e^{-(\beta _Hℋ+𝒬)}|ℰ\rangle .$$
(12)
Here $`𝒬`$ is the operator which corresponds to the Noether charge. We now have to calculate $`S^I`$ by using (9) and (12). To this end it is convenient to introduce a free energy $`F^I(\beta )`$
$$e^{-\beta F^I(\beta )}=\text{Tr}e^{-\beta ℰ},$$
(13)
which makes it possible to rewrite $`S^I`$ as
$$S^I=\beta ^2\partial _\beta F^I(\beta )|_{\beta =\beta _H}.$$
(14)
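Relations (13) and (14) are just the standard thermodynamic identity $`S=\beta ^2\partial _\beta F`$, which for any discrete spectrum reproduces the Shannon form (9) with Gibbs probabilities. A minimal numerical check of this identity (the three-level spectrum and the inverse temperature are arbitrary illustrations, not taken from the text):

```python
import math

def free_energy(beta, levels):
    # e^{-beta F} = sum_E e^{-beta E}: discrete analog of eqs. (13), (16)
    Z = sum(math.exp(-beta * E) for E in levels)
    return -math.log(Z) / beta

def shannon_entropy(beta, levels):
    # S = -sum p ln p with Gibbs probabilities: analog of eq. (9)
    Z = sum(math.exp(-beta * E) for E in levels)
    p = [math.exp(-beta * E) / Z for E in levels]
    return -sum(q * math.log(q) for q in p)

def entropy_from_F(beta, levels, h=1e-6):
    # S = beta^2 dF/dbeta (eq. 14), via a central finite difference
    dF = (free_energy(beta + h, levels) - free_energy(beta - h, levels)) / (2 * h)
    return beta ** 2 * dF

levels = [0.0, 1.0, 2.5]   # hypothetical spectrum
beta = 0.7
assert abs(entropy_from_F(beta, levels) - shannon_entropy(beta, levels)) < 1e-6
```

The agreement holds for any spectrum and temperature, since $`\beta ^2\partial _\beta F=\mathrm{ln}Z+\beta \langle E\rangle =-\sum p\mathrm{ln}p`$.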
Our last step is to calculate $`F^I(\beta )`$. This can be done by using our interpretation of $`𝒬`$ as a perturbation. Then, to linear order,
$$F^I(\beta )=F(\beta )+\beta _H^{-1}\langle 𝒬\rangle _\beta ,$$
(15)
$$e^{-\beta F(\beta )}=\text{Tr}e^{-\beta ℋ},$$
(16)
$$\langle 𝒬\rangle _\beta =\text{Tr}(𝒬e^{-\beta ℋ})e^{\beta F(\beta )}.$$
(17)
Here $`F(\beta )`$ is the standard free energy for a canonical ensemble at the temperature $`\beta ^{-1}`$, and $`\langle 𝒬\rangle _\beta `$ is the thermal average of the Noether charge in this canonical ensemble. As has been argued in , this average is determined by contributions of particles with negligibly small canonical energies $`\omega `$ (the soft modes). The density $`n_\omega `$ of Bose particles at the temperature $`\beta ^{-1}`$ is singular and proportional to the temperature, $`n_\omega \simeq (\beta \omega )^{-1}`$. Thus,
$$\langle 𝒬\rangle _\beta =\frac{\beta _H}{\beta }\langle 𝒬\rangle _{\beta _H},$$
(18)
where $`\langle 𝒬\rangle _{\beta _H}\equiv 𝒬`$ is the average in the Hartle-Hawking vacuum. By using (14), (15) and (18) one finds
$$S^I=\beta ^2\partial _\beta F(\beta )|_{\beta =\beta _H}-\langle 𝒬\rangle _{\beta _H}=S-𝒬.$$
(19)
Here we used the fact that $`F(\beta )`$ determines the thermal entropy (5). Equation (19) proves our interpretation of the Bekenstein-Hawking entropy in induced gravity, (4), as the entropy related to the information loss.
We conclude with several remarks.
1) In induced gravity $`S^{BH}`$ counts only the states of the constituents located near the horizon, the “surface states”. Thus, as was argued in , $`S^{BH}`$ should not be related to internal states of the black hole, which cannot influence the exterior.
2) Our arguments also apply to the entropy of Rindler observers and give for this quantity the finite value $`(4G)^{-1}`$ per unit surface.
3) It was suggested in to relate $`S^{BH}`$ to the degeneracy of the spectrum of the black hole mass $`M_H`$ measured at the horizon. If the mass at infinity $`M`$ is fixed, the change of $`M_H`$ coincides with the energy of an antiparticle absorbed by the black hole in the course of the pair creation process. According to the arguments above, this energy has to be determined by (11), and it is this operator that appears in the density matrix (12). Thus, our present interpretation of $`S^{BH}`$ agrees with .
4) Density matrix (12) differs from the canonical density matrix by the insertion of the operator $`𝒬`$ on $`\mathrm{\Sigma }`$. This insertion changes only the distribution of the soft modes. The divergence of the thermal entropy $`S`$ in induced gravity can be related to the degeneracy of states with respect to soft modes, because adding a soft mode does not change the canonical energy of the state. The insertion of $`𝒬`$ in (12) removes this degeneracy. At the same time, averages of operators located outside $`\mathrm{\Sigma }`$ do not depend on this insertion.
Acknowledgements: This work is supported in part by the RFBR grant N 99-02-18146. |
no-problem/9911/astro-ph9911156.html | ar5iv | text | # Formation and loss of hierarchical structure in two-dimensional MHD simulations of wave-driven turbulence in interstellar clouds
## 1 Introduction
The hierarchical or fractal nature of interstellar clouds has been proposed to result from turbulence because of the expected compression from pervasive supersonic flows, and because of the similarity between cloud structure and the morphology of laboratory turbulence (von Weizsacker 1951; Sasao 1973; Dickman 1985; Scalo 1985, 1987, 1990; Falgarone 1989; Falgarone & Phillips 1990; Falgarone, Phillips, & Walker 1991; Stutzki, et al. 1991; Mandelbrot 1983; Sreenivasan 1991).
There are several important differences between interstellar and laboratory fluids, however (Scalo 1987; Elmegreen 1993a). The interstellar structure comes from variations in the total gas density, while the laboratory structure is usually in some tracer of fluid motions, such as smoke particles or droplets, with the background density nearly uniform. The interstellar case is also highly magnetic, with the likely inhibition of some torsional motions from magnetic tension. Laboratory turbulence is highly vortical. Thus, the analogy between interstellar and laboratory turbulence is not perfect, even though the resulting structures are somewhat similar.
The purpose of this paper is to demonstrate that interacting non-linear magnetic waves provide a physical mechanism for interstellar turbulence that generates whole clouds and the fractal structures inside of clouds without internal sources or any special (e.g., power-law) initial conditions. Computer simulations in two dimensions show cloud and clump formation at a rapidly cooled, compressed interface between incoming streams of shear Alfvén waves. The resulting structures have power-law characteristics in both space and velocity. Goldreich & Sridhar (1995) also considered turbulence driven by shear Alfvén wave interactions, but treated only the incompressible case. Our results imply that molecular cloud turbulence and clumpy structures do not depend on star formation for their generation. They can arise instead from a variety of energy sources outside the cloud, and get transmitted to the region where the cloud forms along mildly non-linear magnetic waves.
The results also imply that a magnetic field is important for general interstellar turbulence: it distributes the energy from stars and other sources over large regions at supersonic speeds, and it converts this energy via non-linear wave mixing into clouds and clumps spanning a wide range of scales. Intercloud wave damping should not prevent this energy redistribution because the intercloud wave speed is fast and the energy sources are relatively infrequent, making the intercloud wavelength long, perhaps many tens of parsecs. Non-linear damping tends to occur on the scale of a wavelength when the perturbation speed is comparable to the Alfvén speed (Zweibel & Josafatsson 1983; Goldreich & Sridhar 1995). It occurs on longer scales when the perturbation speeds are sub-Alfvénic, as is the case for the models here.
The magnetic field is more important for simulations of interstellar turbulence that use a time-dependent energy equation, as is the case here, than it is for simulations that use a fixed adiabatic index to relate pressure and density. The fixed index has an artificial energy source upon decompression that is not present when the full energy equation is used. As a result, waves and turbulent energy can travel great distances without needing a magnetic field when there is a fixed adiabatic index, but they can hardly disperse at all in the field-free case when the full energy equation is used (Elmegreen 1997b, hereafter Paper I).
In a second part of this paper, we experiment with the hierarchical structures that result from turbulence to try to understand how diffuse and translucent clouds, which always seem to have tiny clumps down to substellar scales, make the conversion to star-forming clouds that have stellar-mass dense cores. Of course, self-gravity is ultimately important for this conversion, but the presence of sub-stellar clumps in all diffuse clouds implies that as long as turbulence is strong, the stellar-mass clumps that form can always be sheared and sub-divided into even smaller pieces, preventing self-gravity from dominating turbulent pressures on stellar scales (e.g., Padoan 1995). This effect of turbulence is most clearly revealed by the systematic decrease in the ratio of the clump mass to the turbulent Jeans mass on decreasing scales in molecular clouds (Bertoldi & McKee 1992; Falgarone, Puget, & Pérault 1992; Vazquez-Semadeni, Ballesteros-Paredes, & Rodriguez 1997). Such a decrease follows directly from the Larson (1981) scaling laws (see Elmegreen 1998). How can stars ever form in a turbulent medium if non-linear motions always break up the structures into smaller, more weakly self-gravitating pieces? The answer must be that star formation begins only after the sub-stellar pieces lose their turbulence and smooth out into larger mass clumps.
One implication of this model is that fragmentation is not the key to star formation: star-forming clouds are already fragmented as a remnant of their pre-star formation turbulence. The key to star formation is the smoothing of these fragments into stagnant pools larger than a thermal Jeans mass (Elmegreen 1999). In an extended, well-connected, magnetic medium, turbulent dissipation alone is not enough to initiate this process, because there is a constant flux of turbulent energy into any particular region from waves and motions on larger scales. Only when this external flux stops can the agitated motions begin to decay locally, and only after this decay can the density substructure begin to disappear, allowing stars to form at the thermal Jeans mass and larger.
With this star formation model in mind, we ran several simulations that experiment with possible causes of increased cloud smoothing. One lowers the ionization fraction and thereby decouples the magnetic waves from the gas (Mouschovias 1991; Myers 1998). Another introduces a density gradient from a fixed gravitational potential in order to exclude externally generated waves. The results of these experiments are discussed in Sections 4 and 5. It seems that the mere condensation of a cloud resulting from bulk self-gravity is enough to initiate star formation in Jeans-mass or larger pieces by excluding external non-linear waves.
Aside from these applications to star formation, the primary goal of the present work is to demonstrate that interstellar clouds and their hierarchically clumpy substructures can arise as transient objects in the converging parts of supersonic turbulent flows. The general concept that supersonic turbulence can produce density structure has been around for a long time, but specific applications to interstellar clouds were not taken seriously until recently. The early work by Sasao (1973) on this topic was overwhelmed by the more obvious notion that HII regions (Hills 1972; Bania & Lyon 1980), supernovae (Cox & Smith 1974; Salpeter 1976; McKee & Ostriker 1977) and combinations of these pressures (Tenorio-Tagle & Bodenheimer 1988) directly make clouds behind moving shock fronts. Indeed there is a lot of evidence for the bubble structures that are expected from these centralized pressure sources (Brand & Zealey 1975). In addition, thermal instabilities (Field, Goldsmith & Habing 1969) were proposed to make diffuse clouds, while magnetic and gravitational instabilities (Parker 1966; Goldreich & Lynden-Bell 1965) were proposed to make giant clouds, particularly the kpc-size condensations in spiral arms (Elmegreen 1979). Everything in between was supposed to be made by the build-up and dispersal cycle of collisional agglomeration (e.g., Field & Saslaw 1965; Kwan 1979).
The importance of turbulence as a structuring agent has now reemerged in the astronomical literature. This change slowly followed several key observations, including the recognition of correlated motions in molecular clouds (Larson 1981) and of pervasive small scale structure in diffuse (Low et al. 1984; Scalo 1990), ionized (Spangler 1998), and molecular clouds (e.g., Falgarone, Phillips, & Walker 1991; Stutzki et al. 1998; Falgarone et al. 1998). Correlated motions and power-law structures also appear in HI surveys (Green 1993; Lazarian 1995; Stanimirovic et al. 1999; Lazarian & Pogosyan 1999). LaRosa, Shore & Magnani (1999) found correlated motions in a translucent cloud and suggested the driving source was outside, as in the present model.
Astrophysical models of cloud formation also reflected this change by including turbulence compression as one of several mechanisms (see reviews in Elmegreen 1993a, 1996). Star formation theory changed as well, e.g., by considering not only the influence of converging turbulent flows on the gravitational stability of clumps (Hunter & Fleck 1982), but also the stability of clumps that specifically formed by this compression and whose lifetimes were consequently very short (Elmegreen 1993b). Recent emphasis on cloud formation in turbulence-induced flows is in Ballesteros-Paredes, Vazquez-Semadeni & Scalo (1999), with an application to the Taurus clouds by Hartmann, Ballesteros-Paredes, & Vazquez-Semadeni (1999). The high latitude clouds (Magnani, Blitz, & Mundy 1995) may be examples as well, because of their short lifetimes. A generalization of turbulence structures to include both cloud and intercloud media is in Elmegreen (1997a).
Other astrophysical turbulence simulations have addressed different problems. In a series of papers, Vazquez-Semadeni, Passot & Pouquet (1995, 1996), Passot, Vazquez-Semadeni,& Pouquet (1995), and Vazquez-Semadeni, Ballesteros-Paredes, & Rodriguez (1997) studied the properties of clouds and star formation in a turbulence model with star formation heating, gaseous gravity, rotation, magnetism, and other effects, to simulate a galactic-scale piece of the interstellar medium.
Gammie & Ostriker (1996) used a 1D simulation to show that forces parallel to the mean field can support a cloud against self-gravity. Stone, Ostriker, & Gammie (1998) and Mac Low et al. (1998) found the rate of energy dissipation in 3D supersonic MHD turbulence to be about the same as in turbulence without magnetic fields, and suggested that energy input from embedded stars or other sources was necessary to maintain cloud structure for more than a crossing time. Vazquez-Semadeni (1994), Nordlund & Padoan (1998), and Scalo et al. (1998) found a log-normal probability density distribution generated by compressible turbulence. Hierarchical turbulent structures like those produced here have also been generated in the wakes behind shocked clouds (Klein, McKee & Colella 1994; Mac Low et al. 1994; Xu & Stone 1995).
Padoan (1998) proposed that interstellar turbulence is super-Alfvénic in order to get significant compression. This is different from the cases considered here, where all the cloud motions are sub-Alfvénic. We believe his application of MHD results to interstellar turbulence is inappropriate because the interstellar structure is hierarchically clumped, and super-Alfvénic turbulence can only be super-Alfvénic for one level in a hierarchy of structures. Once a region is compressed, it becomes sub-Alfvénic because the magnetic pressure increases much more than the thermal. Thus the process of turbulence compression would stop after only one level in his model and could never get the observed hierarchical structures. In the MHD simulations discussed here, the hierarchical structure comes from sonic and slightly supersonic motions along the field, driven by transverse magnetic waves. Such sonic disturbances can divide the gas into very small pieces regardless of the local field strength.
The present work in two-dimensions is an extension of the same cloud formation problem considered in one-dimension earlier, first with the simple demonstration that non-linear transverse Alfvén waves push the gas along and make high density structures when they interact (Elmegreen 1990), and then with randomly driven transverse waves at the ends of a 1D grid making hierarchical density structures in the remote regions between (Elmegreen 1991, and Paper I). The first 2D simulations of clumpy structure from interacting magnetic waves were in Yue et al. (1993).
This 1D compression between wave sources is analogous to the “turbulent cooling flow” discussed by Myers & Lazarian (1998), but it is not exactly the same. Here a dense region forms because of a convergence and high pressure from external magnetic waves and the thermal cooling that follows at the compressed interface. Turbulent dissipation at the interface is not as important as thermal cooling for this density enhancement. The turbulent cooling envisioned by Myers & Lazarian leads to a macroscopic thermal instability (Struck-Marcell & Scalo 1984; Tomisaka 1987; Elmegreen 1989). If this were to operate in addition to the thermal variations modeled here, then the density enhancement in the converging region would be even larger than what thermal effects alone give. However, the turbulent energy in the cloud is continuously resupplied from outside, and it gets in easily until the cloud density contrast becomes large (Sect. 5). As a result, the internal turbulent energy does not decrease much for small perturbations in the density, and there is no runaway turbulent cooling in our models.
In what follows, we discuss the MHD algorithm and tests of the numerical code in Section 2.1. This method of solving fluid equations is new to the astronomy community, so we summarize the essential points in some detail (it was used also in Paper I for 1D turbulence, but the equations were not given there). In section 3, the basic MHD simulation that generates hierarchical and scale-free structure from turbulence is discussed. Models with enhanced magnetic diffusion are in section 4 and models with imposed, plane-parallel gravity are in section 5.
## 2 The relaxation algorithm of Jin & Xin
### 2.1 Introduction
The relaxation algorithm developed by Jin & Xin (1995) has been adapted here to include magnetic fields. We also added heating and cooling rates to the energy equation to give two stable thermal states. This is analogous to what we did for the one dimensional problem (Paper I).
The Jin & Xin method is a way to solve systems of conservation equations with no need for artificial viscosity in the equation of motion. It does this by dividing the primary equations of motion and continuity into three equations,
$`{\displaystyle \frac{\partial 𝐒}{\partial t}}+{\displaystyle \frac{\partial 𝐯}{\partial x}}+{\displaystyle \frac{\partial 𝐰}{\partial y}}=0,`$ (1)
$`{\displaystyle \frac{\partial 𝐯}{\partial t}}+A{\displaystyle \frac{\partial 𝐒}{\partial x}}=-{\displaystyle \frac{1}{ϵ}}\left(𝐯-𝐅_x\left[𝐒\right]\right)`$ (2)
$`{\displaystyle \frac{\partial 𝐰}{\partial t}}+B{\displaystyle \frac{\partial 𝐒}{\partial y}}=-{\displaystyle \frac{1}{ϵ}}\left(𝐰-𝐅_y\left[𝐒\right]\right).`$ (3)
The vector S is a vector of physical variables consisting of density, momentum density, magnetic field strength, and energy density, as written below. The vectors $`𝐅_x`$ and $`𝐅_y`$ are the two spatial components of the total forces on each physical variable, which are, in general, functions of the physical variables. The scalar $`ϵ`$ is a relaxation time, taken to be a small positive parameter with the dimensions of time, and $`A`$ and $`B`$ are diagonal matrices with the dimensions of velocity squared. Vectors v and w are intermediate variables that “relax” to the force vectors, $`𝐅_x`$ and $`𝐅_y`$, respectively, on the time scale $`ϵ`$.
To ensure stability, $`A>\left(\partial 𝐅_x\left[𝐒\right]/\partial 𝐒\right)^2`$ and $`B>\left(\partial 𝐅_y\left[𝐒\right]/\partial 𝐒\right)^2`$ for all S, which means that each element of $`A`$ and $`B`$ has to equal or exceed the square of the maximum velocity that occurs in the simulation. For our calculations, we take a constant $`a\equiv A^{1/2}=B^{1/2}`$ equal to several times the Alfvén speed in the unperturbed gas.
The Jin & Xin method requires that the physical equations for the problem be written in conservative form (see also Dai & Woodward 1998). It also prescribes how to discretize the spatial and time steps in an efficient and stable way.
### 2.2 Spatial Discretization
The spatial discretization is done in the manner suggested by Jin & Xin (1995). This is second-order accurate, and follows from van Leer (1979), using a piecewise linear interpolation. The resulting discrete forms of equations 1–3 are, for each component of the vectors S, v, w, $`𝐅_x`$, and $`𝐅_y`$:
$`{\displaystyle \frac{\partial }{\partial t}}S+D_x(v,S)+D_y(w,S)=0`$ (4)
$`{\displaystyle \frac{\partial }{\partial t}}v+a^2D_x(v,S)=-{\displaystyle \frac{1}{ϵ}}\left(v-F_x\left[S\right]\right),`$ (5)
$`{\displaystyle \frac{\partial }{\partial t}}w+a^2D_y(w,S)=-{\displaystyle \frac{1}{ϵ}}\left(w-F_y\left[S\right]\right),`$ (6)
where the derivatives for $`v`$ and $`w`$ are
$`D_x(v_{i,j},S_{i,j})={\displaystyle \frac{1}{2\mathrm{\Delta }}}\left(v_{i+1,j}-v_{i-1,j}\right)-{\displaystyle \frac{a}{2\mathrm{\Delta }}}\left(S_{i+1,j}-2S_{i,j}+S_{i-1,j}\right)`$
$`-{\displaystyle \frac{1}{4}}\left(\sigma _{x;i+1,j}^{-}-\sigma _{x;i,j}^{-}-\sigma _{x;i,j}^{+}+\sigma _{x;i-1,j}^{+}\right),`$ (7)
$`D_y(w_{i,j},S_{i,j})={\displaystyle \frac{1}{2\mathrm{\Delta }}}\left(w_{i,j+1}-w_{i,j-1}\right)-{\displaystyle \frac{a}{2\mathrm{\Delta }}}\left(S_{i,j+1}-2S_{i,j}+S_{i,j-1}\right)`$
$`-{\displaystyle \frac{1}{4}}\left(\sigma _{y;i,j+1}^{-}-\sigma _{y;i,j}^{-}-\sigma _{y;i,j}^{+}+\sigma _{y;i,j-1}^{+}\right).`$ (8)
The indices $`(i,j)`$ represent the $`(x,y)`$ positions in the computational grid, $`\mathrm{\Delta }`$ is a constant spatial step size, $`a`$ is the maximum velocity for the problem, and $`\sigma `$ is a derivative correction, given by
$`\sigma _{x;i,j}^\pm ={\displaystyle \frac{1}{\mathrm{\Delta }}}\left(v_{i+1,j}\pm aS_{i+1,j}-\left[v_{i,j}\pm aS_{i,j}\right]\right)\varphi \left(\theta _{x;i,j}^\pm \right)`$ (9)
$`\sigma _{y;i,j}^\pm ={\displaystyle \frac{1}{\mathrm{\Delta }}}\left(w_{i,j+1}\pm aS_{i,j+1}-\left[w_{i,j}\pm aS_{i,j}\right]\right)\varphi \left(\theta _{y;i,j}^\pm \right)`$ (10)
where
$`\theta _{x;i,j}^\pm ={\displaystyle \frac{v_{i,j}\pm aS_{i,j}-\left[v_{i-1,j}\pm aS_{i-1,j}\right]}{v_{i+1,j}\pm aS_{i+1,j}-\left[v_{i,j}\pm aS_{i,j}\right]}},`$ (11)
$`\theta _{y;i,j}^\pm ={\displaystyle \frac{w_{i,j}\pm aS_{i,j}-\left[w_{i,j-1}\pm aS_{i,j-1}\right]}{w_{i,j+1}\pm aS_{i,j+1}-\left[w_{i,j}\pm aS_{i,j}\right]}},`$ (12)
and
$`\varphi \left(\theta \right)=\mathrm{max}(0,\mathrm{min}[1,\theta ])`$ (13)
is a relatively smooth, slope-limiting function.
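The limiter (13) and the limited characteristic slopes (9) and (11) are straightforward to code. A minimal 1D sketch (function names and the test profile are our own; the 2D indexing above is reduced to one dimension, and only interior points are evaluated):

```python
import numpy as np

def phi(theta):
    """Slope limiter of eq. (13): phi(theta) = max(0, min(1, theta))."""
    return np.clip(theta, 0.0, 1.0)

def sigma_plus(v, S, a, dx):
    """Limited slope of the right-moving characteristic variable v + a*S,
    a 1D analog of eqs. (9) and (11), evaluated at interior points."""
    w = v + a * S                       # characteristic variable
    dw_fwd = w[2:] - w[1:-1]            # forward difference
    dw_bwd = w[1:-1] - w[:-2]           # backward difference
    # theta is the ratio of consecutive differences (eq. 11)
    with np.errstate(divide="ignore", invalid="ignore"):
        theta = np.where(dw_fwd != 0, dw_bwd / dw_fwd, 0.0)
    return dw_fwd * phi(theta) / dx     # eq. (9)

# Opposing slopes are fully limited and steep ratios saturate at one:
assert phi(-2.0) == 0.0 and phi(0.5) == 0.5 and phi(3.0) == 1.0

# On a smooth ramp the consecutive differences are equal, so theta = 1
# and the limited slope equals the exact slope:
x = np.linspace(0.0, 1.0, 11)
sig = sigma_plus(2.0 * x, np.zeros_like(x), a=1.0, dx=0.1)
assert np.allclose(sig, 2.0)
```

The clipping at zero is what suppresses new extrema near sharp fronts, while $`\varphi =1`$ on smooth data preserves second-order accuracy.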
### 2.3 Time Discretization
The time stepping is done with a second-order, total variation diminishing (TVD), Runge-Kutta splitting time discretization that keeps the convection terms explicit and the lower order terms implicit (Jin 1995). We write this discretization method in terms of the current time step, with superscript $`n`$, the next time step, with index $`n+1`$, and intermediate values with superscripts $`*`$, $`**`$, (1), and (2) (see also Jin & Xin 1995):
$`𝐒^{*}=𝐒^n`$ (14)
$`𝐯^{*}={\displaystyle \frac{𝐯^n-𝐅_x\left(𝐒^{*}\right)dt/ϵ}{1-dt/ϵ}}`$ (15)
$`𝐰^{*}={\displaystyle \frac{𝐰^n-𝐅_y\left(𝐒^{*}\right)dt/ϵ}{1-dt/ϵ}}`$ (16)
$`𝐒^{(1)}=𝐒^{*}-D_x(𝐯^{*},𝐒^{*})dt-D_y(𝐰^{*},𝐒^{*})dt`$ (17)
$`𝐯^{(1)}=𝐯^{*}-a^2D_x(𝐯^{*},𝐒^{*})dt`$ (18)
$`𝐰^{(1)}=𝐰^{*}-a^2D_y(𝐰^{*},𝐒^{*})dt`$ (19)
$`𝐒^{**}=𝐒^{(1)}`$ (20)
$`𝐯^{**}={\displaystyle \frac{𝐯^{(1)}+𝐅_x\left(𝐒^{**}\right)dt/ϵ-2\left[𝐯^{*}-𝐅_x\left(𝐒^{*}\right)\right]dt/ϵ}{1+dt/ϵ}}`$ (21)
$`𝐰^{**}={\displaystyle \frac{𝐰^{(1)}+𝐅_y\left(𝐒^{**}\right)dt/ϵ-2\left[𝐰^{*}-𝐅_y\left(𝐒^{*}\right)\right]dt/ϵ}{1+dt/ϵ}}`$ (22)
$`𝐒^{(2)}=𝐒^{**}-D_x(𝐯^{**},𝐒^{**})dt-D_y(𝐰^{**},𝐒^{**})dt`$ (23)
$`𝐯^{(2)}=𝐯^{**}-a^2D_x(𝐯^{**},𝐒^{**})dt`$ (24)
$`𝐰^{(2)}=𝐰^{**}-a^2D_y(𝐰^{**},𝐒^{**})dt`$ (25)
$`𝐒^{n+1}=\left(𝐒^n+𝐒^{(2)}\right)/2`$ (26)
$`𝐯^{n+1}=\left(𝐯^n+𝐯^{(2)}\right)/2`$ (27)
$`𝐰^{n+1}=\left(𝐰^n+𝐰^{(2)}\right)/2.`$ (28)
The time step has to satisfy $`dt>>ϵ`$ to make the $`𝐯^{*}`$ and $`𝐰^{*}`$ equations stable; we use $`dt=10^{-2}`$ and $`ϵ=10^{-6}`$ for the standard and gravitating slab cases, and $`dt=10^{-3}`$ and $`ϵ=10^{-9}`$ for the experiment with low magnetic diffusion. (Reasons for these timesteps are given in sections 2.4.5 and 4.)
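In the stiff limit $`dt>>ϵ`$ the implicit stages drive v and w toward the fluxes, and the scheme reduces, at first order (dropping the $`\sigma `$ corrections), to a central flux difference stabilized by numerical dissipation proportional to $`a`$. A minimal 1D sketch of this relaxed limit for a scalar law with flux $`cS`$ and periodic boundaries (grid size, pulse shape, and the CFL number are illustrative choices, not the paper's parameters):

```python
import numpy as np

def relaxed_step(S, c, a, dt, dx):
    """One first-order relaxed step for S_t + (c S)_x = 0.
    In the stiff limit v relaxes to the flux F(S) = c S, and the
    derivative of eqs. (4) and (7) without the sigma terms becomes a
    central flux difference plus dissipation proportional to a."""
    F = c * S                            # relaxed value of v
    Fp, Fm = np.roll(F, -1), np.roll(F, 1)
    Sp, Sm = np.roll(S, -1), np.roll(S, 1)
    D = (Fp - Fm) / (2 * dx) - a * (Sp - 2 * S + Sm) / (2 * dx)
    return S - dt * D

n = 200
dx = 1.0 / n
x = np.arange(n) * dx
S = np.exp(-0.5 * ((x - 0.3) / 0.05) ** 2)   # initial pulse
c, a = 1.0, 2.0                               # a exceeds the maximum speed |c|
dt = 0.2 * dx / a                             # CFL-limited step
total0 = S.sum()
for _ in range(200):
    S = relaxed_step(S, c, a, dt, dx)

assert np.isclose(S.sum(), total0)            # update is exactly conservative
assert S.max() <= 1.0 + 1e-12                 # no new extrema for a > |c|
```

Because $`a>|c|`$ and $`adt/dx<1`$, each update is a convex combination of neighboring values, so the scheme is monotone; the second-order $`\sigma `$ terms of the full method sharpen this overly diffusive limit.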
### 2.4 Physical Equations in Conservative Form
#### 2.4.1 Mass Conservation
The variables $`𝐒`$, and the force vectors $`𝐅_x`$ and $`𝐅_y`$, come from the physical equations written in conservative form, which is $`𝐒/t+𝐅=0`$. The mass conservation equation,
$$\frac{\partial \rho }{\partial t}+\nabla \cdot (\rho 𝐯)=0$$
(29)
is already in this form for mass density $`\rho `$ and velocity vector v. We rewrite this and other equations in terms of indices $`j`$ to designate the spatial vector components $`x`$ and $`y`$,
$$\frac{\partial \rho }{\partial t}+\frac{\partial (\rho v_j)}{\partial x_j}=0.$$
(30)
Then the first terms in the $`𝐒`$, $`𝐅_x`$ and $`𝐅_y`$ vectors are
$$S_1=\rho ;F_1^j=\rho v_j$$
(31)
where $`j=1`$ for the $`x`$ component and $`j=2`$ for the $`y`$ component.
#### 2.4.2 Equation of Motion
The equation of motion can be written in the same way, but gravity, viscosity, and perhaps other forces add additional terms to the right hand side:
$$\frac{\partial 𝐒}{\partial t}+\nabla \cdot 𝐅=𝐃.$$
(32)
The viscous terms and gravity are inside D. We do not include any viscosity in the equations because molecular viscosity is negligible in interstellar clouds; all of the energy dissipation occurs in the cooling term, $`\mathrm{\Lambda }`$, which appears in the energy equation. We also ignore self-gravity for simplicity, but some runs have fixed gravity to investigate clumpy structure in regions with smooth density gradients.
To include magnetic diffusion, we write separate equations of motion for the neutrals and ions. For the ions (distinguished by the symbol ”+”), the equation of motion is
$$\rho _+\left(\frac{\partial 𝐯_+}{\partial t}+𝐯_+\cdot \nabla 𝐯_+\right)=-\nabla P_++\rho _+𝐠+\frac{1}{4\pi }\left(𝐁\cdot \nabla \right)𝐁-\frac{1}{8\pi }\nabla \left(𝐁\cdot 𝐁\right)-n_+<\sigma a_t>\mu (𝐯_+-𝐯)n$$
(33)
where $`<\sigma a_t>`$ is the ion-neutral thermal collision rate for thermal speed $`a_t`$, $`\mu `$ is the reduced ion-neutral mass, $`n_+`$ is the charged particle density, $`n`$ is the total particle density, B is the magnetic field strength, and g is the gravitational acceleration. The total mass density of the ions is negligible in interstellar clouds, so the inertial terms can be dropped. Then we get
$$\frac{1}{4\pi }\left(𝐁\cdot \nabla \right)𝐁-\frac{1}{8\pi }\nabla \left(𝐁\cdot 𝐁\right)=n_+<\sigma a_t>\mu n(𝐯_+-𝐯)\equiv \omega _+\rho (𝐯_+-𝐯),$$
(34)
where $`\omega _+`$ is defined as a collision rate per unit volume, corrected for reduced mass.
For the neutral particles, the equation of motion is
$$\rho \left(\frac{\partial 𝐯}{\partial t}+𝐯\cdot \nabla 𝐯\right)=-\nabla P+\rho 𝐠-\omega _+\rho (𝐯-𝐯_+).$$
(35)
For this equation, $`\omega _+\rho (𝐯-𝐯_+)`$ may be substituted from above to give
$$\rho \left(\frac{\partial 𝐯}{\partial t}+𝐯\cdot \nabla 𝐯\right)=-\nabla P+\rho 𝐠+\frac{1}{4\pi }\left(𝐁\cdot \nabla \right)𝐁-\frac{1}{8\pi }\nabla \left(𝐁\cdot 𝐁\right).$$
(36)
The ion equation will be used again to write the time evolution of the magnetic field in terms of convective and diffusion terms.
This neutral equation of motion can be converted to the conservative form:
$$\frac{\partial (\rho v_i)}{\partial t}+\frac{\partial (\rho v_iv_j)}{\partial x_j}+\frac{\partial }{\partial x_j}\left[\left(p+\frac{B^2}{8\pi }\right)\delta _{ij}-\frac{B_iB_j}{4\pi }\right]=\rho g_i$$
(37)
where $`\delta _{ij}=1`$ for $`i=j`$ and $`\delta _{ij}=0`$ otherwise. As a result, the second and third components of $`S`$, $`F`$, and $`D`$, designated by subscripts $`i+1=2,3`$, are, for spatial coordinate indices $`i=1,2`$ and $`j=1,2`$:
$$S_{i+1}=\rho v_i$$
(38)
$$F_{i+1}^j=\rho v_iv_j+\left(p+\frac{B^2}{8\pi }\right)\delta _{ij}-\frac{B_iB_j}{4\pi }$$
(39)
$$D_{i+1}=\rho g_i.$$
(40)
With this notation, the equation of motion is
$$\frac{\partial S_{i+1}}{\partial t}+\frac{\partial F_{i+1}^j}{\partial x_j}=D_{i+1},$$
(41)
where $`j=1`$ and 2 for the $`x`$ and $`y`$ components of the force matrix, and $`i=1`$ and 2 for the $`x`$ and $`y`$ components of the momentum density.
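For concreteness, the momentum flux tensor (39) can be assembled directly from the primitive fields. The following sketch (our own array layout and function names, not the paper's code) also checks the hydrodynamic limit $`B=0`$ and the anisotropy introduced by magnetic tension:

```python
import numpy as np

def momentum_flux(rho, v, B, p):
    """F_{i+1}^j of eq. (39): rho v_i v_j + (p + B^2/8pi) delta_ij
    - B_i B_j / 4pi, returned as a 2x2 array for the 2D problem."""
    F = np.empty((2, 2))
    B2 = np.dot(B, B)
    for i in range(2):
        for j in range(2):
            F[i, j] = rho * v[i] * v[j] - B[i] * B[j] / (4 * np.pi)
            if i == j:
                F[i, j] += p + B2 / (8 * np.pi)
    return F

rho, v, p = 2.0, np.array([1.0, -0.5]), 0.3
F_hydro = momentum_flux(rho, v, np.zeros(2), p)
# With B = 0 the flux reduces to rho v_i v_j + p delta_ij:
assert np.allclose(F_hydro, rho * np.outer(v, v) + p * np.eye(2))
# With a field along x, tension lowers the xx stress while magnetic
# pressure raises the yy stress:
F_mag = momentum_flux(rho, v, np.array([1.0, 0.0]), p)
assert F_mag[0, 0] < F_hydro[0, 0] and F_mag[1, 1] > F_hydro[1, 1]
```

The tension/pressure asymmetry seen in the last check is what lets the field channel compressions along field lines in the simulations.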
#### 2.4.3 Equation of Magnetic Field Evolution
Magnetic field evolution is given by the convection of ions in the usual way:
$$\frac{\partial 𝐁}{\partial t}=\nabla \times (𝐯_+\times 𝐁).$$
(42)
This may be written in terms of the neutral velocity v:
$$\frac{\partial 𝐁}{\partial t}=\nabla \times \left(𝐯\times 𝐁\right)-\nabla \times \left(\left[𝐯-𝐯_+\right]\times 𝐁\right)$$
(43)
where $`𝐯-𝐯_+`$ comes from equation (34).
To write this equation in conservative form, we have to expand $`\nabla \times \left(𝐯\times 𝐁\right)`$ and rearrange terms. The result is
$$\frac{\partial B_i}{\partial t}+\frac{\partial }{\partial x_j}\left[v_jB_i-v_iB_j\right]+\frac{\partial }{\partial x_j}\left[(v_{j+}-v_j)B_i-(v_{i+}-v_i)B_j\right]=0.$$
(44)
The last two terms come from equation (34), which may be re-written as
$$v_{i+}-v_i=\frac{1}{\omega _+\rho }\left[\frac{1}{4\pi }\frac{\partial \left(B_kB_i-0.5\delta _{ik}B^2\right)}{\partial x_k}\right].$$
(45)
This form of the magnetic field evolution equation indicates that the computational variables are, for $`i=1,2`$,
$$S_{i+4}=B_i,$$
(46)
$$F_{i+4}^j=v_jB_i-v_iB_j$$
(47)
and
$$D_{i+4}=-\frac{\partial }{\partial x_j}\left[(v_{j+}-v_j)B_i-(v_{i+}-v_i)B_j\right]$$
(48)
where equation (45) has to be used as well. Thus the equation of magnetic field evolution becomes similar to equation (41) for $`i=1,2`$,
$$\frac{\partial S_{i+4}}{\partial t}+\frac{\partial F_{i+4}^j}{\partial x_j}=D_{i+4}.$$
(49)
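The drift velocity that feeds the source term can be evaluated directly from the divergence of the magnetic stress, as in equation (45). A minimal sketch, with illustrative 2D arrays, unit grid spacing, and a scalar value standing in for $`\omega _+\rho `$:

```python
import numpy as np

# Ion-neutral drift of eq. (45) from the magnetic stress divergence.
# Arrays, unit cell size, and the scalar omega_+*rho are illustrative.
def drift_velocity(Bx, By, omega_rho):
    B = [Bx, By]
    B2 = Bx**2 + By**2
    drift = []
    for i in range(2):
        div = np.zeros_like(Bx)
        for k in range(2):
            # Maxwell stress B_k B_i - 0.5 delta_ik B^2
            stress = B[k] * B[i] - (0.5 * B2 if i == k else 0.0)
            div += np.gradient(stress, axis=k)
        drift.append(div / (4 * np.pi * omega_rho))
    return drift
```

For a uniform field the stress is constant and the drift vanishes everywhere, as it should.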
#### 2.4.4 Energy Equation
The energy equation in conventional form is
$$\frac{\partial U}{\partial t}+𝐯\cdot \nabla U+(U+p)\nabla \cdot 𝐯=\mathrm{\Gamma }-\mathrm{\Lambda },$$
(50)
where
$$U=\frac{p}{\gamma 1}$$
(51)
is the energy density, $`p`$ is the thermal pressure, $`\mathrm{\Gamma }`$ and $`\mathrm{\Lambda }`$ are the thermal heating and cooling rates, and $`\gamma =5/3`$ is the ratio of specific heats for a monatomic or cold molecular gas. Note that these are the fundamental physical variables, and not effective variables, such as turbulent pressure or effective $`\gamma `$, which is sometimes used for shock fronts to automatically include heating and cooling behind the shock. Here the computer code generates by itself what we have come to think of as “turbulent pressure,” and the heating and cooling terms determine the relation between temperature and pressure in a self-consistent way.
The energy equation is converted into conservative form by adding it to the dot product of the velocity with the equation of motion, and then substituting from other equations. The result has the same general form as before, now written with subscript 6 because for a two dimensional problem, the energy equation is the 6th component of the general vector equation:
$$\frac{\partial S_6}{\partial t}+\frac{\partial F_6^j}{\partial x_j}=D_6,$$
(52)
where
$$S_6=U+0.5\rho v^2+B^2/(8\pi )$$
(53)
is the total energy,
$$F_6^j=v_j\left[U+p+0.5\rho v^2+B^2/(4\pi )\right]-B_jB_iv_i/(4\pi )$$
(54)
is the energy flux, and
$$D_6=-\frac{B_i}{4\pi }\frac{\partial }{\partial x_j}\left[X_jB_i-X_iB_j\right]+\mathrm{\Gamma }-\mathrm{\Lambda }+\rho v_ig_i,$$
(55)
where $`X_j=v_{j+}-v_j`$ is the drift velocity of equation (45):
$$X_j=\frac{1}{\omega _+\rho }\left[\frac{1}{4\pi }\frac{\partial \left(B_kB_j-0.5\delta _{jk}B^2\right)}{\partial x_k}\right].$$
(56)
Throughout these derivations, we have used the convention that repeated indices are summed.
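The total energy and energy flux of equations (51), (53), and (54) can be transcribed directly; the sketch below accepts scalars or arrays, and the function names are ours rather than the code's actual variables.

```python
import numpy as np

# Total-energy bookkeeping of eqs. (51), (53) and (54).  Scalars or
# arrays may be passed; the names are illustrative.
def total_energy(p, rho, vx, vy, Bx, By, gamma=5.0 / 3.0):
    U = p / (gamma - 1.0)                               # eq. (51)
    return U + 0.5 * rho * (vx**2 + vy**2) + (Bx**2 + By**2) / (8 * np.pi)

def energy_flux(p, rho, vx, vy, Bx, By, gamma=5.0 / 3.0):
    U = p / (gamma - 1.0)
    common = U + p + 0.5 * rho * (vx**2 + vy**2) + (Bx**2 + By**2) / (4 * np.pi)
    vdotB = vx * Bx + vy * By
    Fx = vx * common - Bx * vdotB / (4 * np.pi)         # eq. (54), j = x
    Fy = vy * common - By * vdotB / (4 * np.pi)
    return Fx, Fy
```

At rest (v = 0) the flux vanishes and the total energy reduces to U plus the magnetic energy density, a useful consistency check on the signs.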
The heating and cooling terms are designed to give two stable temperature states and an unstable state between them. This is done by taking
$$\mathrm{\Gamma }-\mathrm{\Lambda }=-\mathrm{\Gamma }_0\rho pC\left(C-0.5\right)\left(C-1\right)/0.04811$$
(57)
where $`C=\mathrm{log}_{10}\left(p/\rho \right)`$. For large temperatures, $`C\approx 1`$, the quantity $`\mathrm{\Gamma }-\mathrm{\Lambda }`$ is zero and decreasing with increasing $`C`$, so the region cools if $`C>1`$ and heats up if $`C<1`$; this is stable behavior with equilibrium at $`C=1`$. For low temperatures, $`C\approx 0`$, $`\mathrm{\Gamma }-\mathrm{\Lambda }\approx 0`$ and decreasing again, so the region is stable there too. At intermediate temperatures, where $`C\approx 0.5`$, $`\mathrm{\Gamma }-\mathrm{\Lambda }`$ increases with $`C`$, so $`C>0.5`$ leads to more heating and a further increase in $`C`$, while $`C<0.5`$ leads to more cooling and a further decrease in $`C`$; this is unstable behavior. The equilibrium temperatures correspond to $`p/\rho =`$ 1 and 10. For comparison, the initial temperature in all of our simulations corresponds to the stable solution, $`p/\rho =1`$, and the initial Alfvén speed, $`B/\left(4\pi \rho \right)^{1/2}`$, is 10. The coefficient 0.04811 in $`\mathrm{\Gamma }-\mathrm{\Lambda }`$ makes the peak values of the cubic part, $`-C\left(C-0.5\right)\left(C-1\right)`$, equal to $`-1`$ at $`\mathrm{log}p/\rho =0.211`$ and $`+1`$ at $`\mathrm{log}p/\rho =0.789`$.
The constant $`\mathrm{\Gamma }_0=10^{-3}`$ determines the cooling time. If we write the magnetic field-free equation of energy as $`Dp/Dt=\left(\gamma p/\rho \right)D\rho /Dt+\left(\gamma -1\right)\left(\mathrm{\Gamma }-\mathrm{\Lambda }\right)`$, then the cooling time is $`t_{cool}=\left(\left[\gamma -1\right]\mathrm{\Gamma }_0\rho \right)^{-1}`$. Initially, this is $`3/\left(2\mathrm{\Gamma }_0\right)`$ for $`\gamma =5/3`$ and $`\rho =1`$, which is 1500 for $`\mathrm{\Gamma }_0=10^{-3}`$. This is about the duration of the simulations, so the local cooling time is approximately the simulation run time divided by the density.
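The stated properties of the heating/cooling law are easy to verify numerically. The sketch below transcribes equation (57) (with the base-10 logarithm implied by the equilibria at p/ρ = 1 and 10) and the cooling-time estimate; the function names are ours.

```python
import numpy as np

# Transcription of the heating/cooling law, eq. (57), with the base-10
# logarithm implied by the equilibria at p/rho = 1 and 10, plus the
# cooling-time estimate.  Gamma0 = 1e-3 as in the text.
def net_heating(p, rho, Gamma0=1e-3):
    C = np.log10(p / rho)
    return -Gamma0 * rho * p * C * (C - 0.5) * (C - 1.0) / 0.04811

def cooling_time(rho, Gamma0=1e-3, gamma=5.0 / 3.0):
    return 1.0 / ((gamma - 1.0) * Gamma0 * rho)
```

This reproduces the two equilibria (net heating zero at p/ρ = 1 and 10), cooling above the hot equilibrium, runaway heating in the unstable range above C = 0.5, the cubic peak of unit amplitude at log p/ρ = 0.211 set by the 0.04811 normalization, and the initial cooling time of 1500.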
#### 2.4.5 Discussion
This completes the derivation of the physical equations in conservative form. The variables actually used by the computer are the $`S`$ variables, so all these equations for $`F`$ and $`D`$ have to be written in terms of these $`S`$ variables. This means, for example, that $`v_1`$ in an equation has to be written as $`S_2/S_1`$, and so on. The $`S`$ variables are initialized after consideration of the physical problem, v is initialized to $`𝐅_x`$, and w is initialized to $`𝐅_y`$. Then the $`S`$ variables are incremented in time, as discussed in Section 2.3.
The unit of time in the simulation is equal to the crossing time of a coordinate cell ($`dx=dy=1`$) at a velocity given by the initial ratio of $`P/\rho =1`$. The time step size in the simulation is much smaller than this time unit, because many types of motions are faster than 1 velocity unit. Initial Alfvén waves move at a speed of 10 velocity units, sound waves in the warm phase move at a speed of $`10^{1/2}\gamma =5.3`$, and Alfvén waves in the warm phase, where the density is low ($`\rho \approx 0.1`$), move at $`10/\rho ^{1/2}\approx 30`$. Thus we use a time step of 0.01, which is short enough to follow all of these motions.
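The time-step argument can be checked against the quoted speeds; the fastest signal is the warm-phase Alfvén wave, whose cell-crossing time bounds the usable step. This is a rough check with the text's numbers, not a full stability analysis.

```python
import math

# Rough check of the time-step argument, using the speeds quoted in the
# text with unit cell size: the fastest signal is the warm-phase Alfven
# wave, and the chosen step of 0.01 resolves its cell crossing.
v_alfven = 10.0                          # initial Alfven speed
v_alfven_warm = 10.0 / math.sqrt(0.1)    # warm phase, rho ~ 0.1
fastest = max(v_alfven, v_alfven_warm)
dt_max = 1.0 / fastest                   # cell-crossing time, dx = 1
dt_used = 0.01
```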
The equations were solved on an IBM SP parallel computer, with computational space divided up between processors, and communication between processors done with message passing (MPI). A typical run of 1536 time units, which is 153600 time steps, on a grid of $`800\times 640`$, with magnetic diffusion, heating and cooling, took about 150 node-days on an IBM SP with the Power2sc processor. We report three of these runs here, one without gravity and two with a fixed gravitational potential, plus two other runs on a $`400\times 320`$ grid to test the importance of magnetic diffusion.
### 2.5 Tests
#### 2.5.1 Linear Waves
Several tests were run to check the accuracy of the MHD solutions. In all tests, magnetic diffusion was turned off and $`\mathrm{\Gamma }\mathrm{\Lambda }`$ was set equal to zero. All simulations used 64-bit floating point accuracy.
In a simulation with 800 cells and a constant value of the magnetic field strength and density, a perturbation was given to the velocity in a direction perpendicular to the field at a grid position halfway between the two boundaries. The perturbation was very low in amplitude ($`10^{-2}`$ times the sound speed), and it had a sinusoidal time dependence to generate a smooth Alfvén wave. The average wavelength of the wave was measured to be within the fraction $`9.6\times 10^{-5}`$ of the theoretical prediction (200 cells), indicating that the linearized Alfvén speed has this accuracy.
#### 2.5.2 Shock jump conditions
In another simulation with the same initial conditions, a larger perpendicular velocity was applied to the same centralized grid point for one sinusoidal cycle, and the resulting two diverging waves quickly steepened into shock fronts as they moved away from this point (cf. Fig. 1). The shocks continued along the field lines gathering mass and decreasing in amplitude. The accuracy of the jump conditions was determined from the rms dispersion of the various measures of the shock speed:
$$\xi _1=[m_y]/[\rho ],$$
(58)
$$\xi _2=\frac{[m_y^2/\rho ]+[p]+[B_x^2]/8\pi }{[m_y]},$$
(59)
$$\xi _3=\frac{[m_xm_y/\rho ]-B_y[B_x]/4\pi }{[m_x]},$$
(60)
and
$$\xi _4=\frac{[B_xm_y/\rho ]-B_y[m_x/\rho ]}{[B_x]},$$
(61)
where $`m_y=\rho v_y`$ and $`m_x=\rho v_x`$ are the momentum densities parallel and perpendicular to the initial field (equal to $`S_3`$ and $`S_2`$ in the notation of Eq. 38). The time evolution of the perpendicular momentum, $`m_x`$, is in figure 1.
The amplitude of the perturbation was such that the velocity of the resulting motion was intermediate between the sound speed and the Alfvén speed (1 and 10 in these units, respectively). This is the regime of the cloud simulations discussed in the rest of this paper, based on observations of velocities and Alfvén speeds in the interstellar medium.
The jump condition expressions given above are the shock speeds from the mass flux parallel to the field and to the direction of propagation of the wave, the parallel momentum flux, perpendicular momentum flux, and perpendicular magnetic field flux, respectively. These quantities are evaluated at the grid point with the maximum density behind the shock. The shock jump differences denoted by square brackets $`[\cdots ]`$ are determined from the differences between the values of various quantities at the density peak and values at points ahead of the shocks by 20 grid spaces (this number 20 does not matter, as long as it places the preshock condition in the unperturbed area). The relative rms deviation between all four determinations of the shock velocity is a measure of the accuracy of the jump conditions. This relative rms value is shown in Figure 1 as a plus symbol, using the right hand axis. It is typically 1% or less, indicating that the shock jump conditions are satisfied to within this accuracy. Note that the shock fronts are not sharp at late times in this test because the velocity is less than the Alfvén speed (compare to Fig. 4 below).
Analogous simulations with perturbations perpendicular to the initial uniform field gave the same order of magnitude for the accuracy of the jump conditions.
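The four shock-speed estimators and their relative rms dispersion can be written compactly. In the sketch below the pre- and post-shock state dictionaries are placeholders for values read off the grid; jumps [q] are post-minus-pre, and B_y is the parallel field component, continuous across the front.

```python
import numpy as np

# The four shock-speed estimators of eqs. (58)-(61) and their relative
# rms dispersion.  State dictionaries are placeholders for grid values;
# By is the (continuous) parallel field component.
def shock_speeds(pre, post, By):
    def jump(q):                       # [q] = post-shock minus pre-shock
        return post[q] - pre[q]
    xi1 = jump('my') / jump('rho')
    xi2 = (jump('my2_rho') + jump('p') + jump('Bx2') / (8 * np.pi)) / jump('my')
    xi3 = (jump('mxmy_rho') - By * jump('Bx') / (4 * np.pi)) / jump('mx')
    xi4 = (jump('Bxmy_rho') - By * jump('mx_rho')) / jump('Bx')
    return np.array([xi1, xi2, xi3, xi4])

def relative_rms(xi):
    return np.std(xi) / abs(np.mean(xi))
```

For a consistent solution all four estimates agree and the relative rms vanishes; in practice it measures the residual error of the jump conditions, as plotted in Figure 1.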
#### 2.5.3 Measurement of the $`\nabla \cdot 𝐁=0`$ error
Another important test is for the error in $`\nabla \cdot 𝐁=0`$ to be small. Evans & Hawley (1988), Stone & Norman (1992), and Dai & Woodward (1998) developed codes that explicitly force $`\nabla \cdot 𝐁=0`$, but our code only gives $`\nabla \cdot 𝐁=0`$ to within the numerical accuracy determined by the grid and time stepping. Figure 2 shows the average and rms values of $`\nabla \cdot 𝐁/\left(𝐁\cdot 𝐁\right)^{1/2}`$ inside the computational grid of the 2D-MHD simulation discussed in section 3. For this evaluation, we considered both the total grid, measuring $`800\times 640`$ with the 800 cell direction parallel to the initial field, and the central portion of this grid, from positions 120 to 680 parallel to the field (outside the sources of excitation for the waves) and 0 to 640 perpendicular to $`𝐁`$. The values of $`\nabla \cdot 𝐁/\left(𝐁\cdot 𝐁\right)^{1/2}`$ are for the latter, shown at intervals of one time unit, which is 100 time steps, for 1536 time units overall.
Both the average and the rms values of $`\nabla \cdot 𝐁/\left(𝐁\cdot 𝐁\right)^{1/2}`$ are steady throughout the calculation. The average value is $`\pm 5\times 10^{-8}`$ and the rms is $`7\times 10^{-5}`$. The average is smaller than the rms because this quantity fluctuates over positive and negative values. There is no systematic drift for either the average or the rms.
The lack of any systematic drift in $`\nabla \cdot 𝐁/\left(𝐁\cdot 𝐁\right)^{1/2}`$ over time, and the smallness of its value, imply that stochastic monopoles, which are most likely the result of round-off errors in calculating $`𝐁`$ and other variables, are transient and so few in number that they do not affect the code. Considering that the $`560\times 640`$ grid in which $`\nabla \cdot 𝐁/\left(𝐁\cdot 𝐁\right)^{1/2}`$ was measured has $`3.6\times 10^5`$ cells, there are on average $`\left(7\times 10^{-5}\right)\times \left(3.6\times 10^5\right)\approx 25`$ monopoles at any one time, with positive and negative signs canceling each other to give a net monopole number of less than $`\left(5\times 10^{-8}\right)\times \left(3.6\times 10^5\right)\approx 0.02`$. Even though the code is not designed to force the monopole number to be zero at all times, this number is still so small that any effects from non-zero $`\nabla \cdot 𝐁`$ are in the noise.
We conclude from this that the magnetic field is sufficiently divergence free in our simulations to represent the magnetic forces and diffusion rates with the same accuracy as the other forces. We do not expect the code to follow all the magnetic field lines to high precision, however, because of the occasional magnetic monopole.
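A centered-difference version of the monopole diagnostic might look as follows; the normalization by $`\left(𝐁\cdot 𝐁\right)^{1/2}`$ follows the text, a unit cell spacing is assumed, and the field arrays are illustrative.

```python
import numpy as np

# Centered-difference sketch of the monopole diagnostic: average and rms
# of div(B)/(B.B)^{1/2} over the grid, with unit cell spacing assumed.
def divB_error(Bx, By):
    div = np.gradient(Bx, axis=0) + np.gradient(By, axis=1)
    norm = np.sqrt(Bx**2 + By**2)
    err = div / norm
    return err.mean(), np.sqrt((err**2).mean())
```

For an exactly divergence-free field both the average and the rms vanish to round-off, which is the baseline against which the quoted $`10^{-8}`$ and $`10^{-5}`$ levels are judged.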
#### 2.5.4 Advection test
Other tests for MHD codes were recommended by Stone et al. (1992). Figure 3 shows the results of an advection test, in which the initial conditions are: $`B_x=0`$, $`B_y=1`$ between grid points 100 and 150, inclusive, and $`v_x=1`$, $`v_y=0`$, $`\rho =1`$ and $`P/\rho =10^{-10}`$ everywhere. In this test, a perpendicular magnetic field bundle is advected through the grid on a current moving at supersonic speed $`v_x=1`$. The test is to see how well the initially square pulse reproduces itself after moving for five times its initial width. A measure of squareness is the height of the curl $`\nabla \times 𝐁`$, which is $`\left(B_y[i+1]-B_y[i-1]\right)/2`$ for grid point $`i`$. Larger values of $`\nabla \times 𝐁`$ at the ends of the pulse correspond to better advection properties.
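The squareness measure can be reproduced for an idealized initial pulse: for a unit-height square pulse the discrete curl peaks at 0.5, and advection errors smear the edges and lower this peak toward the values quoted below.

```python
import numpy as np

# The "squareness" diagnostic of the advection test: the discrete curl
# (B_y[i+1] - B_y[i-1])/2 along the direction of motion.  For an ideal
# unit-height square pulse the peak value is 0.5.
def curl_measure(By):
    return 0.5 * (np.roll(By, -1) - np.roll(By, 1))

pulse = np.zeros(256)
pulse[100:151] = 1.0            # B_y = 1 between cells 100 and 150
peak = np.abs(curl_measure(pulse)).max()
```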
In Evans & Hawley (1988), four different numerical schemes were tested, and they gave maximum values of $`\nabla \times 𝐁`$ equal to 0.15, 0.18, 0.07, and 0.03. In Stone et al. (1992), three codes were tested, and they gave values of 9, 37, and 60 in their figure 2. Stone et al. used a grid spacing of 0.004 times that used by Evans & Hawley and that used here, so we multiply their $`\nabla \times 𝐁`$ values by this factor to get the same scale. The result is then 0.036, 0.15, and 0.24 for the Stone et al. trials. Here we get a value of 0.088 for the peak in $`\nabla \times 𝐁`$, which is intermediate between the other tests. The best results were for the piecewise parabolic algorithm considered by Stone et al. In the present code, the spatial derivatives are accurate only to linear order (cf. Sect. 2.2), so this accounts for the lower $`\nabla \times 𝐁`$.
#### 2.5.5 MHD Riemann test
Another test recommended by Stone et al. (1992) is an MHD Riemann problem with initial conditions: $`P=1`$, $`\rho =1`$, and $`B_y=1`$ for grid position less than or equal to 400, and $`P=0.1`$, $`\rho =0.125`$, and $`B_y=-1`$ for positions 401 or larger, with $`B_x=0.75`$, $`v_x=v_y=0`$ everywhere. Stone et al. actually had a cell size of 0.125 and a total number of cells equal to 800, with the divider at the 400th cell, which is position 50 in their figure 4. Our cell size is 1, so we take 800 total cells with the divider at 400 in the figure. Stone et al. plotted the physical quantities at the time $`t=10`$, which, with our grid, corresponds to $`t=80`$ (since speeds like $`\left(P/\rho \right)^{1/2}`$ are the same, but our grid is larger by a factor of 8). Figure 4 shows the results. Each variable is in excellent agreement with the results in figure 4 of Stone et al. The numbers in the plot of density correspond to features in the solution (cf. Stone et al. 1992): (1) a fast rarefaction wave, (2) a slow compound wave, (3) a contact discontinuity, (4) a slow shock, and (5) another fast rarefaction wave.
Other tests of the basic relaxation method were shown by Jin & Xin (1995), without magnetic forces, and without the heating and cooling functions appropriate for astrophysical problems.
## 3 Hierarchical structure in a boundary-free turbulent region
### 3.1 Excitation of waves
To experiment with the origin of hierarchical structure in interstellar clouds, we ran many simulations with moderately strong Alfvén waves generated at the top and bottom of a large grid ($`N_y=800`$ cells in the vertical direction plotted here, parallel to the initially uniform B, and $`N_x=640`$ cells in the horizontal direction). These waves traveled towards the center of the grid and interacted there, making an enhanced, irregular density structure. The boundary conditions were periodic in both vertical and horizontal directions. Thus the outgoing waves generated near the top and bottom grid edges meet and mix quickly after they cross these edges, and the inward moving waves meet after a longer time in the center of the grid.
The initial conditions of the grid are a uniform density ($`\rho =1`$) and a uniform magnetic field strength ($`B_y=\left(4\pi \right)^{1/2}\times 10`$; $`B_x=0`$), giving an Alfvén speed of $`v_A=10`$, a uniform pressure $`P=1`$, giving an initial sound speed squared of $`\gamma P/\rho =\gamma =5/3`$, and zero velocity in both directions. The gas is also thermally stable initially ($`\mathrm{\Gamma }=\mathrm{\Lambda }`$) and the initial cooling time is 1500 time units ($`\mathrm{\Gamma }_0=10^{-3}`$). The magnetic diffusion rate was taken to be negligibly small, using $`\omega _+=N_yv_A/dy`$, so the diffusion time would be $`L/v_A`$ for a very sharp field perturbation on the scale of the grid spacing, $`dy`$, where $`L=N_ydy`$ is the total grid distance along the field. Simulations with more rapid diffusion are considered in section 4.
To generate waves, the velocities perpendicular to the field, $`v_x`$, at certain grid points are changed with a pattern of accelerations $`\partial v_x/\partial t=Ae^{-\omega t}`$ for random amplitudes $`A`$ and fixed decay rates $`\omega `$. The spatial positions of these accelerations are limited to within 5% and 15% of the grid size in from the ends of the grid in the direction parallel to the initial field. For a grid with $`N_y=800`$ cells parallel to B, the transverse accelerations occur between cells 40 and 120, and 680 and 760 in the $`y`$ direction, and throughout the full width ($`N_x=640`$ cells) in the perpendicular direction. The amplitudes $`A`$ are taken to equal random numbers in an interval from 0 to 1 times some fixed value, chosen to give sufficiently strong perturbations to make the desired density structures, but not so strong that the code diverges by forcing a velocity to exceed the previously assigned maximum velocity $`a`$ (cf. Sect. 2.1). The decay time is taken to be $`\omega ^{-1}=N_y/\left(4v_A\right)=20`$ time units, which is the time it takes an Alfvén wave to move over $`1/4`$ of the cells parallel to B.
New accelerations are applied continuously, and the old ones terminated at the same time, so that there is always one acceleration at the bottom of the grid and another at the top of the grid. The interval between accelerations is given by a random number between 0 and 1 multiplied by the time $`\left(8\omega \right)^{-1}`$. The accelerations at any one time are confined to a single grid spacing in $`y`$, parallel to the field, although different accelerations can occur at different times within the intervals $`(0.05-0.15)N_y`$ and $`(0.85-0.95)N_y`$, discussed above. The accelerations are also confined to a range of grid points, $`\mathrm{\Delta }x`$, perpendicular to the field (although each one may extend beyond the horizontal edges and wrap around to the other side with the periodic boundary conditions discussed above). The range $`\mathrm{\Delta }x`$ has a distribution of sizes comparable to the distribution of interstellar cloud radii, which is a power law $`n(R)dR\propto R^{-3.3}dR`$ (Elmegreen & Falgarone 1996). For the simulations, $`\mathrm{\Delta }x`$ varies randomly with this power law between $`0.05N_x`$ and $`0.25N_x`$.
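Perturbation widths with this cloud-like size distribution can be drawn by inverse-transform sampling of the power law; the generator seed and the sample count below are arbitrary illustrative choices.

```python
import numpy as np

# Inverse-transform sampling of perturbation widths from the cloud-size
# distribution n(R) dR ~ R^-3.3 dR between 0.05*Nx and 0.25*Nx.  The
# seed and the sample count are arbitrary illustrative choices.
def sample_widths(n, Nx=640, slope=3.3, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    rmin, rmax = 0.05 * Nx, 0.25 * Nx
    a = 1.0 - slope                        # exponent after integrating n(R)
    u = rng.random(n)
    return (rmin**a + u * (rmax**a - rmin**a)) ** (1.0 / a)
```

Because the power law is steep, most sampled widths lie near the lower cutoff of 32 cells, with only a few approaching 160 cells.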
The perturbations are designed to simulate the movements of distant clouds or other perturbations perpendicular to the magnetic field lines. These clouds presumably force whole magnetic flux tubes to move sideways in the manner simulated, since the parallel motions of clouds do not perturb the field. The cloud motions are also likely to be supersonic although generally sub-Alfvénic in space, in which case they generate strong enough waves to push matter around and influence the density, as in the simulation. As long as the waves are sub-Alfvénic, they travel relatively far from their sources (Zweibel & Josafatsson 1983) and interact to generate density structures wherever they meet. Some diffuse clouds, and much of the structure inside both diffuse clouds and the weakly self-gravitating parts of molecular clouds, can be made in this way, with remote sources of turbulent energy entering the region and moving around on magnetic field lines.
### 3.2 Results
Nonlinear Alfvén waves interact to form an intricate density structure. In the simulations, this structure has many scales because the initial waves have a power-law width distribution, and because, in general, non-linear terms in the MHD equations add the spatial frequencies of two mixing waves, giving an ever-increasing range of spatial frequencies.
#### 3.2.1 Density and temperature maps
Figure 5(top) shows the density distribution of the simulation described above at a time of $`t=1024`$ time units. The display measures $`640\times 480`$, with the 640-cell direction perpendicular to the initial field and representing the full grid size, and the 480-cell direction parallel to the initial field and showing only the central half of the full grid. The density increases monotonically as the color cycles from blue to yellow to red with the full rainbow, and then jumps back to blue again, followed by another cycle to red. This cyclical color display is used to emphasize hierarchical structure. There is also a density threshold ($`\rho <1.21`$) below which the figure shows black. The first blue is at this $`\rho =1.21`$ threshold, the second is at $`\rho =1.6`$, and the highest density is red with a tiny black dot at $`\rho =1.826`$, slightly to the right of the middle. Outside the plotted region, further to the top and bottom, the density gets moderately low in the “intercloud” medium (cells 120–160, and 641–680) and then very low ($`\rho =0.19`$) in the excitation region, (cells 40–120 and 680–760). The grid range for the figure is from 161 to 640 cells in the vertical direction.
The density structure in figure 5(top) is hierarchical, with clumps inside larger clumps. The “cloud” on the left has three levels in the hierarchy, making four total levels if the cloud itself is counted. The density contrast inside the cloud is not large because of numerical limitations and because of the limited range between the sonic and Alfvén speeds, and between the two equilibrium sonic speeds (both only factors of 10). The total density contrast in the whole grid is often a factor of 20 or more, but much of this occurs at the edge of the cloud where the warm intercloud medium is generated. Higher cloud and total contrasts ($`\sim 50`$) were achieved in our 1D simulations (Paper I), because the grid contained 8000 cells and the waves could be driven harder. Other 2D runs with stronger wave driving made greater density contrasts, but eventually crashed when a wave velocity in the low density intercloud medium exceeded the maximum allowed by the parameter $`a`$ (cf. Sect. 2.1). Larger values of $`a`$ degraded the accuracy of the simulation given the grid size. Future simulations with larger grids should be able to get around these limitations by allowing larger ratios between the thermal speeds in the two phases and between the Alfvén speed and the cool thermal speed.
The density structures in figure 5 are more ellipsoidal than filamentary, and there are no obvious sharp fronts at the leading edges of the clumps. This lack of shock-like structures occurs because, even though the clumps are all moving in bulk, their speeds are less than the Alfvén speed, and also because Alfvén waves are moving through the clumps in both directions, smoothing out the sharp edges. This gives the model clumps some resemblance to real interstellar cloud clumps, but this resemblance may only be superficial because the real structures of interstellar clouds on these near-thermal scales are not generally resolved.
The complete time evolution of the density structure for this model is in Figure 6, which is an mpeg file available from the electronic version of this paper in the Astrophysical Journal. The mpeg file has 192 frames of size $`640\times 480`$. It is a color representation of the density evolution over a total time of 1529.2 units. Since the initial Alfvén crossing time over the entire grid (800 cells) is $`800/10=80`$, the simulation represents $`\sim 19`$ Alfvén crossing times. The thermal crossing time in the cool phase, where the sound speed is $`\gamma `$, is $`800/\gamma \approx 480`$, so the total time is 3.2 thermal crossing times (twice this if we consider only the central cloudy part, which is where the cool gas is). The color code in the mpeg file is cyclical as in figure 5, but there are three color cycles in density, with blue at $`\rho =0.3,`$ 1.7, and 2.0. The maximum density is 2.3, which is red.
The temperature structure at the same time step as in the top of figure 5 is shown in the bottom of figure 5. The highest temperatures at the top and bottom of the grid are black (threshold $`P/\rho >1.7`$), and then the temperature decreases as the colors cycle from red to blue and then again from red to blue. The first red is this $`P/\rho =1.7`$ level, and the second red is at $`P/\rho =1.25`$. The minimum temperature, which is blue, corresponds to $`P/\rho =0.9`$.
The intercloud medium at the top and the bottom of the grid is at a much higher temperature than the cloud in the middle of the grid because the density is low in the intercloud region, and then the $`\mathrm{\Gamma }\mathrm{\Lambda }`$ function finds the high-temperature equilibrium solution. The clumps inside the cloud are slightly warmer than the interclump medium because the clumps are moving compression fronts, not stagnant clouds. This means the clumps have a continuous source of energy from their compression. The lower density and temperature in the interclump medium indicates that the clumps are not confined by an interclump thermal pressure, as is often proposed for interstellar clouds. Instead, the clumps are confined on their leading edges by the ram pressure from supersonic motions through the interclump medium, and they are confined on their trailing edges by the gradient of the magnetic wave energy density that is pushing them along.
The mpeg file shows that the time evolution of the density structure is similar to what we found for 1D wave compression in Paper I: the waves push material along with them as they converge in the center of the grid, and this material builds up to make a “cloud.” The cloud has a low temperature because of its high density, given the heating and cooling functions, which have two stable thermal states. At the same time, the waves clear out the matter from the region around the cloud, leaving an intercloud medium with a low density and a high temperature equilibrium. The pressure in the cloud is higher than the initial pressure of the simulation because the incident waves push on the material they collect in the center. Nevertheless, there is total pressure equilibrium between the cloud and the intercloud medium, with the balance between kinetic, magnetic and thermal pressures changing across the cloud boundary (cf. Paper I). The 1D simulation also showed how the cloud is broken up into many smaller clouds and clumps, which have a hierarchical structure. The resolution in the 1D run of Paper I was 8000 cells, 10 times better than what we have here, so the hierarchy in density could be seen better there.
The present simulations in 2D show the same cloud formation properties and hierarchical structure as the 1D runs in Paper I. This structure changes with time as new waves enter the cloud and the existing waves continue to interact, but it always has the same hierarchical character. Three dimensional models will be necessary to fully simulate interstellar clouds, and it may be that the compression is less for a given wave amplitude in 3D than in 2D, because of the additional degree of freedom for magnetic motions in 3D. Larger compressions can always be applied to get the same level of density enhancements. The formation of hierarchical structure should not depend on the dimensionality of the simulation, however. It seems to be characteristic of non-linear wave interactions in any number of dimensions.
The mpeg file indicates that the small clumps exist for a shorter time than the large clumps. The lifetimes of clumps of various sizes are estimated to be about the sound crossing time inside the clump, regardless of scale. This lifetime is definitely larger than the internal Alfvén crossing time. This result is consistent with the view presented below in section 4 that most of the clumps are sonic or mildly supersonic features created by thermal pressure gradients parallel to the field.
#### 3.2.2 Power Spectra
To quantify this hierarchical structure, we measured the Fourier transform of the density in directions parallel and perpendicular to the initial magnetic field over lengths of 240 cells and 640 cells, respectively (the FFT double-precision subroutine from the IBM ESSL Fortran library accommodates vectors with 640 or 240 elements). The vertical length of 240 cells was chosen to represent the inner part of the cloud; this is the central half of the vertical extent of the grid in figure 5.
For the FFT in the horizontal direction, perpendicular to the initial field, separate FFTs of the horizontal density distributions were made for each vertical grid position between cells 280 and 520, and averaged together. For the FFT in the vertical direction, separate FFTs of the vertical density distributions were made for each horizontal grid position between 0 and 640.
The results for the timestep shown in figure 5 are in figure 7, which plots the Fourier transform power, $`\left(Re^2+Im^2\right)^{1/2}`$ for real and imaginary parts, Re and Im, versus the spatial frequency. The FFTs for the directions parallel and perpendicular to the field are shown on the bottom and top, respectively. The spatial frequency on the bottom figure equals 240 cells divided by the wavelength of the Fourier component, measured in units of the grid spacing. The spatial frequency on the top equals 640 divided by the wavelength. Each plot goes from a spatial frequency of 1, which is for a wave spanning the whole extent of the corresponding direction, to 120 or 320, respectively, which are both for a wavelength of 2 cells.
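The averaged one-dimensional power described here can be sketched as follows; the synthetic density stands in for the simulation field, and NumPy's FFT replaces the ESSL library routine.

```python
import numpy as np

# Row-averaged 1-D Fourier power, (Re^2 + Im^2)^{1/2}, as used for the
# spectra of Fig. 7.  The synthetic single-mode density stands in for
# the simulation field; NumPy's FFT replaces the ESSL routine.
def mean_power(density):
    spec = np.fft.rfft(density, axis=1)    # transform each row
    return np.abs(spec).mean(axis=0)       # average moduli over rows

N = 64
row = np.cos(2 * np.pi * 8 * np.arange(N) / N)   # single mode at bin 8
density = np.tile(row, (4, 1))
power = mean_power(density)
```

A single-mode test field concentrates all power in the corresponding spatial-frequency bin, confirming the bin-to-wavelength mapping used in the figure.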
The sources of excitation contribute somewhat to the FFT in the perpendicular direction. These sources have a power law size distribution between lengths of 32 and 160, which correspond to spatial frequencies of 20 and 4 in figure 7. The top diagram in that figure has a slightly shallower slope in this range than at higher frequencies. Below a spatial frequency of 20 in the perpendicular direction, the slope is $`-2.5`$; above 20 it is $`-3.6`$, and in the parallel direction, between 10 and 100, it is $`-3.2`$.
For comparison with these model results, Green (1993), Stutzki et al. (1998), and Stanimirovic et al. (1999) found a slope of $`-2.8`$ for the power spectrum of the line-of-sight integrated intensity structure in HI and CO maps. The 2D model results are not expected to be the same as this, but the fact that both real clouds and the model results give fairly smooth power-law power spectra indicates that both have self-similar structure on a range of scales spanning at least a factor of 10.
#### 3.2.3 Velocity Correlations
The rms velocity in a region of our simulation increases with the size of the region, as for Kolmogorov turbulence. Figure 8 shows the average rms velocities parallel and perpendicular to the initial magnetic field in squares of various sizes, as indicated by the abscissa. The time step is the same as in figure 5. The rms velocity scales as a power law with the size, $`S`$: $`v_{rms}\propto S^\alpha `$, with $`\alpha \approx 1`$ on scales smaller than $`\sim 30`$ cells, and $`\alpha \approx 0.3`$ on scales from 30 to 240 cells. The largest box size considered is 240, which is the vertical extent of the cloudy part of the simulation; this is much smaller than the size of the grid ($`800\times 640`$), so edge effects are not likely to influence this velocity correlation. Evidently, the largest scales have a velocity-size correlation similar to Kolmogorov turbulence, with a slope of about $`1/3`$.
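The box-averaged rms velocity measurement can be illustrated with a field whose exponent is known in advance: a velocity varying linearly with position has α = 1 exactly, which a log-log fit recovers. The grid size and box sizes below are illustrative.

```python
import numpy as np

# Sketch of the velocity-size relation measured for Fig. 8: mean rms
# velocity inside square boxes of side s, fit to v_rms ~ S^alpha.
# The linear test field v = x has alpha = 1, which the fit recovers.
def rms_vs_size(v, sizes):
    out = []
    for s in sizes:
        vals = []
        for x0 in range(0, v.shape[0] - s + 1, s):
            for y0 in range(0, v.shape[1] - s + 1, s):
                vals.append(v[x0:x0 + s, y0:y0 + s].std())
        out.append(np.mean(vals))
    return np.array(out)

x = np.arange(128, dtype=float)
v = np.repeat(x[:, None], 128, axis=1)     # v depends linearly on position
sizes = [4, 8, 16, 32]
rms = rms_vs_size(v, sizes)
alpha = np.polyfit(np.log(sizes), np.log(rms), 1)[0]
```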
Smaller scales have a steeper correlation slope because the velocity differences tend to be too small on small scales. This means that the material tends to move too coherently compared to Kolmogorov turbulence on scales less than about 30 cells. The origin of this steepening could be numerical: although 30 cells should be sufficiently large to be free of resolution errors at the cellular scale, the resolution of shocks and rarefaction fronts is $`5-20`$ cells in figure 4, depending on shock strength. Also, the range of scales for driving the waves is 32 to 160, which is in the flat part of the velocity correlation.
A physical origin for the steepening at small scales is likely too, because small regions are forced to co-move by the magnetic field (Parker 1992). Indeed, the Alfvén speed is $`10`$ in the turbulence simulation, and this is much larger ($`\sim \times 100`$) than the rms speed where the velocity-size correlation becomes steep in figure 8. This implies that magnetic field tension may be sufficiently strong to overpower the inertial forces from turbulence on small scales. If this is also the case in interstellar turbulence (and the 2D model results still apply in 3D), then the velocity-size correlation in molecular clouds should become steeper than the usual $`0.4`$ power law slope at linewidths less than some small fraction ($`\sim 1\%`$ in these simulations) of the Alfvén speed. To observe this, the total range in clump rms turbulent speeds must exceed a factor equal to the inverse of this fraction, considering that the broadest linewidth is usually the Alfvén speed.
Figure 8 also indicates that the rms velocities parallel to the magnetic field are smaller than they are perpendicular to the field. This is because the motion is forced in the perpendicular direction, and the parallel motion responds as a higher order (non-linear) effect. It may be that these two speeds are more similar in real interstellar clouds because all of the motions there are expected to be more non-linear. There could be some decoupling between the compressive and shear waves too (Ghosh et al. 1998).
#### 3.2.4 Summary
Interacting non-linear magnetic waves can make hierarchical, scale-free density structure with an approximately Kolmogorov velocity-size relation out of an initially uniform medium. The clouds and clumps that are produced by this mechanism are similar to what is observed in diffuse and translucent interstellar clouds, and in the parts of molecular clouds that are not strongly self-gravitating. Interstellar cloud structure is therefore likely to be partly the result of non-linear magnetic wave interactions, which is sub-Alfvénic MHD turbulence.
## 4 Experiments with Magnetic Damping
There have been several suggestions that stars begin to form when the minimum length for magnetic waves in the presence of ion-neutral diffusion becomes larger than a Jeans length (e.g., Mouschovias 1991). This idea is based on a model in which stars form in a more-or-less uniform cloud that is supported against self-gravity on large scales by MHD turbulence. When MHD turbulence is no longer possible, which indeed happens on sufficiently small scales in a uniform cloud, gravity wins and the local region collapses.
Our model of star formation is very different, because it proposes that clouds are never uniform. Turbulence from either inside or outside the cloud always gives them high-contrast density structure over a wide range of scales. Star formation begins when this structure melts away on scales larger than the thermal Jeans length. When there is such pervasive structure, particularly with the molecular cloud correlations between density, velocity dispersion, and size, magnetic diffusion does not become relatively more important on small scales. In fact the ratio of the diffusion time to the wave time is about constant on all scales for such a model. This is because smaller regions are denser, and so the neutral gas is more tightly bound to the ions during field line motions in just the right amount to compensate for the heightened magnetic tension (Elmegreen & Fiebig 1993). A cutoff at small scales finally arises when the density is so high that the small grains stop gyrating around the field.
The present code cannot check these ideas directly because there is no self-gravity. Instead, we assessed the importance of magnetic diffusion on cloud structure in a different way, with a series of experiments having different amounts of magnetic diffusion, adjusted through the parameter $`\omega _+`$; this is the ion-neutral collision rate introduced in equation (34). Two runs are compared here. Both models had exactly the same parameters and random numbers in a $`400\times 320`$ grid, and they had the same solutions up to the time 528.2 time units. This time equals 13.2 Alfvén wave crossing times through the vertical grid, and 1.32 sound crossing times for the cool phase, which is long enough to get a cloud in the center. Both runs also had $`\omega _+=10^{-2}N_yv_A/dy=40`$ up to this time, but then one of them continued after this time with the same $`\omega _+`$ and the other continued with $`\omega _+=10^{-3}N_yv_A/dy=4`$. These diffusion rate constants correspond to diffusion times of $`\omega _+/\left(kv_A\right)^2`$ for field gradient $`kB`$, and this time equals $`\omega _0\left(L/v_A\right)\left(kdy\right)^{-2}`$, where $`L=N_ydy`$ is the full length of the grid along the field, $`dy=1`$ is the grid spacing, and $`\omega _0=10^{-2}`$ and $`10^{-3}`$ in the two cases, respectively. This means that the diffusion time is $`\omega _0`$ times the product of initial Alfvén wave crossing time and the square of the scale length for magnetic field gradients, measured in grid spacings. The simulation discussed in section 3 had a large $`\omega _+=N_yv_A/dy`$, which gave a diffusion time equal to the Alfvén crossing time times the square of the scale length.
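A quick numerical check of the diffusion-rate bookkeeping in this paragraph. This is a sketch assuming $`N_y=400`$, $`v_A=10`$, and $`dy=1`$, which is consistent with the quoted rates of 40 and 4:

```python
# Ion-neutral collision rates omega_+ = omega_0 * N_y * v_A / dy:
N_y, v_A, dy = 400, 10.0, 1.0

for omega_0 in (1e-2, 1e-3):
    omega_plus = omega_0 * N_y * v_A / dy
    print(omega_0, omega_plus)   # 40 and 4 (up to float rounding), as quoted

def t_diff(omega_0, k, L=N_y * dy, vA=v_A):
    """Diffusion time omega_+ / (k v_A)^2 = omega_0 (L / v_A) (k dy)^-2."""
    return omega_0 * (L / vA) / (k * dy) ** 2

# The two expressions for the diffusion time agree (up to rounding):
k = 2.0 * 3.141592653589793 / 32.0   # e.g. the smallest driving scale
print(t_diff(1e-2, k), (1e-2 * N_y * v_A / dy) / (k * v_A) ** 2)
```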
The case with rapid diffusion, $`\omega _0=10^{-3}`$, noticeably lost the perpendicular component of the field inside the cloud, which means that the wave amplitude dropped even though an external excitation with the same amplitude was still applied. The other run, with $`\omega _0=10^{-2}`$, continued with a high internal wave amplitude, following the same excitation from outside the cloud. To follow the rapid diffusion in the first case, we had to decrease the time step by a factor of 10 following this transition at $`t=528.2`$; to be consistent in the second run, the timestep was decreased there too. After the transition, there were 202000 more time steps for each run, or an additional simulation time of 202 time units, which is 5 initial Alfvén wave crossing times through the whole grid.
The resulting density maps (not shown) showed surprisingly little difference between the two cases – they were virtually indistinguishable, even though the wave amplitude in the case with rapid diffusion became 5 times smaller than in the other case.
We demonstrate this result in two ways. Figure 9 shows the time development of the rms density and the rms of the perpendicular component of the field, with the rapid diffusion case represented by dashed lines. These rms values are taken along the horizontal rows in the grid, perpendicular to the field. This avoids the overall gradient in the vertical direction from the general cloud structure. Thus, the rms values of the density and perpendicular field component were determined for each of the 121 horizontal rows in the middle of the grid, with each rms calculated from all 320 grid values in the horizontal rows. All these rms values were then averaged over the 121 rows to get a single rms value at each interval of 1.6 time units (every 1600 time steps). The results are plotted in figure 9. The figure shows one full cycle of the cloud’s overall density oscillation (which is still a response from the initial pulse of high pressure during cloud formation). The rms of the density is a measure of the strength of the clumpy structure; it is nearly the same in the high diffusion case as it is in the low diffusion case. The rms of the perpendicular field is a measure of the wave amplitude inside the cloud. It starts the same in the two cases, but gradually dies away in the high diffusion case, ending up about a factor of 5 weaker than in the low diffusion case. Thus the internal magnetic waves die out, but the clumpy structure is unchanged.
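The row-by-row rms procedure described here can be sketched as below. The synthetic density field is a stand-in (only the $`400\times 320`$ grid shape and the 121 middle rows follow the text), and subtracting each row's mean is an assumption about how the fluctuation rms was defined.

```python
import numpy as np

def row_rms(field, rows):
    """rms of fluctuations along each horizontal row (perpendicular to the
    field), then averaged over the selected rows; working row by row
    removes the overall vertical gradient of the cloud."""
    sub = field[rows, :]
    fluct = sub - sub.mean(axis=1, keepdims=True)  # subtract each row's mean
    rms_per_row = np.sqrt((fluct ** 2).mean(axis=1))
    return rms_per_row.mean()

# 400 x 320 grid as in the diffusion runs; the middle 121 rows:
rng = np.random.default_rng(2)
density = 1.0 + 0.3 * rng.standard_normal((400, 320))
middle = slice(140, 261)          # 121 rows around the grid center
print(row_rms(density, middle))   # ~0.3 for this synthetic field
```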
The same result is shown again in figure 10, now using Fourier transform power spectra to measure the strengths of the density and wave perturbations. These diagrams were made like figure 7, but now for a grid that is smaller in each direction by a factor of 2. The two left-hand diagrams show the average power spectra of the density in the region of the cloud (120 cells parallel to the field, out of 400 total, and the full 320 cells perpendicular to the field) in the parallel-to-field direction on the bottom and perpendicular direction on the top. The two right-hand diagrams show the power spectra of the perpendicular component of the magnetic field. There are three lines in each diagram: the solid line is at the beginning of the experiment, at the time $`t=529.2`$ time units, the long-dashed line is at the end of the experiment with little magnetic diffusion ($`\omega _0=10^{-2}`$), i.e., at $`t=730.2`$ time units, and the short-dashed line is at the end of the experiment with a high rate of magnetic diffusion ($`\omega _0=10^{-3}`$). Both of the top diagrams also show linear fits to the power spectra from spatial frequencies of 20 to 160, shifted upwards by factors of 100 for clarity. These linear fits reveal the similarities and differences between the power spectra without the noise.
Evidently, the density power spectra are virtually indistinguishable at the ends of the two experiments. The power spectrum of the perpendicular component of the field measured along the field (lower right diagram) is also nearly the same in the two cases; this means that the wave structure along the field is about the same with high and low diffusion. This result is not surprising because both experiments were driven with exactly the same incident waves, and both had the same wave propagation speeds through the grid. However, the power spectrum of the perpendicular component of the field measured across the field is much less at high spatial frequency in the high diffusion case than in the low diffusion case (see the top right diagram where the short-dashed line is below the others). This means that the wave amplitude is lower after some time when the diffusion rate is high, as expected for magnetic waves in general.
What is perhaps surprising about this experiment is that even though the magnetic wave amplitude gets low after some time in the high diffusion case, the density structure is virtually unchanged. This means that enhanced magnetic diffusion does not lead to a loss of cloud structure, even when this structure is directly the result of magnetic wave interactions. How can this be?
The reason for this result is that the density structure in all of the cases considered in this paper comes from motion along the magnetic field that is driven by pressure gradients in this direction. Inside the cloud, this compression is sonic in nature, because of the dominance of the $`\nabla P`$ term at high density in the parallel momentum equation. This is true even when the material is pushed at supersonic (but sub-Alfvénic) speeds. The origin of the motion is the noise at the edge of the cloud, which is subject to strong magnetic waves from outside. The waves themselves weaken and, in the high diffusion case, damp out, as they travel through the high density part of the cloud, but their damage has already been done long before this. The primary influence of the external waves is at the cloud edge, where the magnetic energy and the momentum of interclump motions get converted into cloud density pulses, like puffs of wind, that travel through and interact with each other in the interior of the cloud. Even when the internal magnetic wave energy is damped, these sonic pulses still make essentially the same density structures inside the cloud.
If the density structure inside real interstellar clouds is the result of interacting, non-linear magnetic waves, as in the models discussed here, then this structure would seem to be relatively unaffected by an enhancement in magnetic diffusion that might result from an increase in density or an excess shielding of external radiation. This result suggests that enhanced magnetic diffusion is not the key to the onset of star formation in weakly self-gravitating clouds. Of course, magnetic diffusion can still play a very important role later, during the accretion phase inside a strongly self-gravitating cloud piece, but this process is not simulated here.
## 5 Experiments with Gravitational Density Gradients
In a model where clouds and clumps form by interacting non-linear magnetic waves, the only way the tiny structure can disappear as a necessary precursor to star formation is if both the magnetic waves and the sonic pulses they create damp out before they reach the cloud center. The previous section showed that even when the waves damped out, the sonic pulses that were generated in the intercloud medium and at the cloud edge still remained.
Here we consider a different way to damp the sources of internal cloud structure. This occurs when both the external waves and the sonic pulses have to climb up a steep density gradient before getting to the center. The increased density removes the wave and pulse kinetic energy by momentum conservation, and this leaves the center with relatively little turbulence to drive structure formation. Such a cloud density gradient occurs naturally when the cloud becomes significantly self-gravitating. This means that the gradual contraction of a cloud under the influence of self-gravity should be enough to exclude externally generated turbulence and initiate the decay of tiny cloud structure in the core.
This effect is demonstrated in two ways. First, WKB solutions to the wave equation are given for waves traveling through a region with a centralized density enhancement. These analytical solutions show the expected decrease in wave amplitude in the cloud center. Second, two numerical experiments are run on $`800\times 640`$ grids that have a fixed, plane-parallel gravitational force in the direction along the field, which gives them a $`\rho \propto \mathrm{sech}^2\left[\left(y-y_0\right)/H\right]`$ general density structure underneath the wave structure. These two experiments have different scale heights, $`H`$, and when combined with the simulation discussed in Section 3, show a gradual loss of density structure as the scale height decreases and the central density increases.
### 5.1 WKB Solutions to waves in density gradients
We consider here a simple wave of any kind that satisfies the wave equation
$$\frac{\partial ^2W}{\partial t^2}=a^2\frac{\partial ^2W}{\partial y^2}$$
(62)
for wave amplitude $`W`$ and wave speed $`a(y)`$ that is a function of position, $`y`$. Using the WKB approximation for weak waves, we write
$$W(y,t)=e^{i\omega t-i\int ^yk𝑑y}$$
(63)
for frequency $`\omega `$, wavenumber $`k=2\pi /\lambda `$ and wavelength $`\lambda `$. Substituting this waveform into the wave equation gives a differential equation for $`k(y)`$:
$$\frac{\omega ^2}{a^2}=ik^{\prime }+k^2$$
(64)
for derivative $`k^{\prime }=dk/dy`$. Substituting the real and imaginary components for complex $`k=k_r+ik_i`$ then gives two equations, one real and the other imaginary. We look for pure wave solutions with real $`\omega `$, and this allows us to eliminate one equation, giving as a result a single equation for the real component of $`k`$:
$$k_r^{\prime \prime }-\frac{3\left(k_r^{\prime }\right)^2}{2k_r}+2k_r^3-\left(\frac{\omega ^2}{a^2}\right)2k_r=0.$$
(65)
The imaginary component of $`k`$ was eliminated from the above equation, but is given by $`k_i=-k_r^{\prime }/\left(2k_r\right)`$.
Equation 65 was solved numerically for $`k_r(y)`$. The wave speed is taken to be
$$a(y)=\frac{e^{y/H}+e^{-y/H}}{e^{y_e/H}+e^{-y_e/H}},$$
(66)
so it equals unity at the edge of the numerical grid, where $`y=\pm y_e=\pm 5`$, and there is a gradual slow down of the wave to a minimum wave speed of $`2/\left(e^{y_e/H}+e^{-y_e/H}\right)`$ at $`y=0`$. In the cloud model, this slow down is the result of an increased density. The desired result is the ratio of the wave amplitude at the center to the incident wave amplitude at the edge. The boundary condition for the integration is $`k_r=\omega /a`$ at $`y=y_e`$.
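A numerical sketch of this WKB setup (not the paper's actual integration of equation 65): using the leading-order wavenumber $`k_r\simeq \omega /a`$ together with the damping relation for $`k_i`$ given above, the amplitude $`|W|`$ can be computed and the square-root scaling checked directly. The frequency $`\omega =50`$ and the grid resolution are arbitrary choices.

```python
import numpy as np

def a_profile(y, H, ye=5.0):
    """Wave speed of eq. (66): unity at y = +/- ye, minimum at y = 0."""
    return (np.exp(y / H) + np.exp(-y / H)) / (np.exp(ye / H) + np.exp(-ye / H))

def mean_central_amplitude(H, ye=5.0, omega=50.0, n=20001):
    """<|W|> for |y| <= H, from the leading-order WKB wavenumber
    k_r ~ omega / a and the damping term k_i = -k_r' / (2 k_r)."""
    y = np.linspace(-ye, ye, n)
    a = a_profile(y, H, ye)
    kr = omega / a
    ki = -np.gradient(kr, y) / (2.0 * kr)
    # |W(y)| = exp( integral of k_i from -ye to y ), by the trapezoid rule:
    growth = np.concatenate(([0.0], np.cumsum(0.5 * (ki[1:] + ki[:-1]) * np.diff(y))))
    amp = np.exp(growth)              # normalized to 1 at the incident edge
    return amp[np.abs(y) <= H].mean()

for H in (0.5, 1.0, 2.0):
    ratio = mean_central_amplitude(H) / np.sqrt(a_profile(0.0, H))
    print(H, ratio)   # nearly the same ratio for each H
```

The ratio is nearly independent of the central slow down, i.e., the mean central amplitude scales as the square root of the central wave speed.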
Figure 11 shows the result. The average wave amplitude inside the central scale height of the grid, between $`y=\pm H`$, is shown as a function of the central wave speed. The wave amplitude decreases as the central wave speed decreases, almost exactly as the square root for this model; i.e., $`<W>\propto a(0)^{1/2}`$.
This decrease in amplitude with increasing density is essentially the result of gradual wave reflection at the cloud edge. The net flux toward the cloud on each side is the difference between the incident and reflected wave fluxes, and by conservation, must equal the respective fluxes in the same directions inside the cloud. We checked the WKB result by considering a cloud/intercloud boundary with a sharp immovable edge and a hard barrier inside the cloud to simulate reflection symmetry through the cloud (this is analogous to our MHD solution, which has waves incident from both sides of the cloud). We calculated the reflection and transmission amplitudes of the waves, and then averaged the internal and external wave energy densities over a factor of 100 in wavenumber (to smooth out resonances). We found again that the average wave amplitude inside and outside the cloud is always proportional to the square root of the local Alfvén speed.
This result differs from the proposal by Xie (1995) that Alfvén waves maintain an amplitude inversely proportional to the square root of density. In our case, the total field strength is about constant and the wave amplitude varies as the inverse fourth root of density. This implies that the wave pressure and energy density are lower inside the cloud than outside, demonstrating the effect of shielding. There is also a net compression of the cloud from this shielding, rather than a wave-pressure equilibrium inside and outside the cloud, as there would be in the case considered by Xie.
### 5.2 MHD solutions to waves in density gradients
The same problem was studied with the MHD code. We ran two more 2D simulations as in section 3 but with an additional acceleration from constant gravity, directed toward the center of the grid in the vertical direction, along the field. The gravitational acceleration, $`g`$, was written as part of the equation of motion in equation 35. Here it is given by
$$g(y)=-\left(\frac{2a_0^2}{H}\right)\left(\frac{e^{\left(y-y_0\right)/H}-e^{-\left(y-y_0\right)/H}}{e^{\left(y-y_0\right)/H}+e^{-\left(y-y_0\right)/H}}\right),$$
(67)
for initial isothermal speed $`a_0`$ given by $`P/\rho =a_0^2=1`$. The corresponding initial condition for density was taken to be the equilibrium value
$$\rho (y)=\left(\frac{e^{y_0/H}+e^{-y_0/H}}{e^{\left(y-y_0\right)/H}+e^{-\left(y-y_0\right)/H}}\right)^2.$$
(68)
The midpoint of the grid in the vertical direction is $`y_0=400`$. This density is normalized to equal 1 at the top and bottom edge of the grid ($`y=2y_0`$ and 0, respectively), as in section 3, but now the density is higher in the center by a factor that depends on the scale height. We chose one run with $`H=300`$ cells, giving a central density enhancement of 4.1, and another with $`H=235`$, giving a central density of 8.0.
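The quoted central enhancements follow directly from the equilibrium profile of equation (68): the ratio of central to edge density is $`\mathrm{cosh}^2\left(y_0/H\right)`$. A one-line check, with the values taken from the text:

```python
import numpy as np

# Central-to-edge density ratio of the equilibrium profile, cosh(y0/H)^2:
y0 = 400.0
for H in (300.0, 235.0):
    print(H, round(np.cosh(y0 / H) ** 2, 1))   # 4.1 and 8.0, as quoted
```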
The simulations were run for as long as that in section 3, with the same random numbers and external wave stimulations. This allowed time for internal adjustments and large-scale cloud oscillations. The results show that the transverse wave velocity inside the cloud, and the level of density fluctuations, both decrease as the central density from gravity increases. Figure 12 shows the power spectra of the density at the bottom and the transverse velocity at the top for the three runs indicated by different line types. These power spectra were taken from Fourier transforms in the transverse direction, as on the top of figure 7. The density power spectra are normalized to the power at zero spatial frequency to remove the differences between the absolute densities in the clouds in these cases (recall that the central density is $`2.5`$ in the gravity-free case, while it is $`4`$ and $`8`$ in the two gravity cases). The sloping lines in the figures are least squares fits to the power spectra between frequencies 20 and 320, shifted upwards by factors of 100 for clarity. The fits indicate more clearly than the noisy power spectra that the gravity-free case, indicated by the solid line, has more power in both transverse wave velocity and density structure than the two centrally condensed cases.
The downward shift in the velocity power spectrum when gravity is added is a factor of 2.1 for the case with a central density of 4 and 3.1 for the case with a central density of 8, measured at a spatial frequency of 20. These numbers compare well with the results in figure 11, considering that the minimum wave speed is proportional to the inverse square root of the density. The downward shifts in the density power spectra at a frequency of 20 are factors of 3.0 and 3.7, respectively. The degradation in density structure is greater than in velocity structure, as expected for density caused by compression. If the wave energy density completely dominated the thermal pressure, then the density variations would scale as the square of the velocity variations.
The power spectra of velocity and density in the direction parallel to the field do not change much when the cloud becomes centrally condensed. This is analogous to what was found for the magnetic diffusion tests, where only the perpendicular direction had any change in magnetic wave amplitude. This is the result of a similar wave structure parallel to the field in all the cases of central condensation, following from the same wave stimuli. The primary differences between the runs are in the wave amplitudes.
These results indicate that the internal velocity and clumpy structure begins to disappear as the overall cloud gets more and more centrally condensed from gravity.
## 6 Conclusions
The Jin & Xin (1995) relaxation algorithm has been adapted to do MHD simulations with an energy equation and magnetic diffusion term relevant to astrophysical problems. The code tested well in comparison with other astrophysical MHD codes, and was judged to be adequate for the problems considered here.
MHD simulations in two dimensions were run for several cases to address the question of how hierarchical and scale-free clumpy structure inside interstellar clouds and in the general interstellar medium might be created, and how this structure might be induced to go away on very small scales as a precursor to star formation.
The simulations in section 3 showed that interacting non-linear Alfvén waves can make a whole 2D cloud and self-similar clumpy structure inside of it. This structure was demonstrated by its power-law power spectrum and by its overall hierarchical appearance. The clumps were rounded rather than filamentary, with no obvious shock structures, and each one existed for about one internal sound crossing time, regardless of scale. The thermal pressure of the interclump medium does not confine the clumps. They owe their existence entirely to kinematic pressures that are associated with their motions and with the relative motions of the surrounding gas. The overall structure had a realistic Kolmogorov velocity-separation relation at large scales, but it had a steeper relation at small scales for unknown reasons.
The simulations in section 4 showed that enhanced magnetic diffusion caused the Alfvén wave amplitude to decrease inside the cloud, but it did not change either the amplitude or the pattern of the density structure. This experiment suggests that, on interstellar scales which span the range between subthermal and superthermal motions, the density structure comes primarily from strong sonic pulses that move along the field and interact in complex ways. Changes in the magnetic diffusion rate do not change these density structures as long as the sonic pulses that drive them continue to have a source.
The simulations in section 5 showed that both the internal waves and the small-scale density structures they help create go away when the overall cloud has a density gradient, as might result from bulk self-gravity. This is because externally generated waves have trouble penetrating a cloud and making small scale structure by non-linear interactions when they have to climb a strong density gradient first. A WKB solution to the wave equation obtained the same result.
We view the processes acting in these simulations to be at the bottom end of the range of scales of interstellar cloud structures, near the thermal pressure limit where turbulent pressures are only slightly larger than thermal. We believe that a larger computational grid would show a more extended hierarchical structure on larger scales, but the same wave interaction processes on small scales. Most observations of real interstellar clouds cannot yet resolve the small scales that are simulated here, but only the larger parts of the hierarchy of cloud structure. This implies that if the models are a guide to reality, then the supersonic turbulence that is observed in interstellar clouds is not really supersonic at the atomic level, but nearly thermal locally, i.e., with relatively small local velocity gradients and few strong shocks. In that case, the appearance of supersonic motions is the result of a superposition of locally near-thermal motions on a wide range of unresolved scales, with a Kolmogorov-type velocity-size correlation generating the largest speeds.
The simulations also suggest a mechanism for star formation that is relevant to the modern model of interstellar clouds, in which many clouds and the clumpy structures inside them come from magnetic turbulence and the associated non-linear wave interactions, and in which these structures initially extend down to very small scales, far past the scale of the minimum stellar mass. Star formation in such a cloud model does not require any fragmentation mechanism, nor any other mechanism that separates out stellar mass units from the background, because these clouds are always highly fragmented anyway, on all scales, including the range covered by stars. That is, stellar mass fragments are in all clouds all the time, as a result of turbulence. Star formation with such a cloud model requires a smoothing mechanism instead, one in which sub-stellar fragments coalesce and meld together to build up smooth pools with stellar masses, without the destructive and dividing influence of turbulence inside and around this pool.
The simulations suggest that enhanced magnetic diffusion is probably not important for this first step in star formation, but the formation of a gradual density gradient from bulk self-gravity is. This result leads to the following scenario for star formation:
Star formation begins in a region of interstellar gas when various processes render it so massive that gravitationally driven motions become comparable to the externally and internally driven turbulent motions. At this time, the cloud begins to contract under its own weight and builds up a density gradient. Such a density gradient will shield the cloud from turbulence in the outside world, and lead to a reduction in the internal turbulent energy, as well as a loss of internal small-scale structure. Because the smallest scales evolve the quickest, this loss of structure will begin on the smallest scales and quickly increase the mass of the smallest smooth, thermally-dominated clumps. When these smallest, thermally-dominated masses increase to the point where gravitationally driven motions inside of them begin to dominate thermal motions, they collapse catastrophically to make one or a few stars each in dense cores. Neighboring clumps do the same, all rather quickly on the scale of the overall cloud evolution, forming a hierarchical arrangement of stars and star clusters on time scales comparable to the turbulent crossing times for those scales. If the self-gravity of the overall cloud is very strong, then this resulting cluster will be very dense, if not, then only a sparse cluster will form. This is in agreement with the densities, structures and formation times of real star clusters (see review in Elmegreen et al. 1999).
At the present time, these initial star-formation processes are beyond the limit of angular resolution in general cloud surveys. They should also occur quickly on the first substellar scales, making the initial smoothing process unlikely to be seen. This implies that interstellar clouds with and without star formation may look very similar on today’s observable scales, and their differences only show up when the angular resolution is great enough to see structures at $`\sim 500`$ pc with far less than a stellar mass. Our prediction is that star-forming clouds will have much less structure on sub-stellar scales than pre- or non-star forming clouds, and that extremely young star-forming regions will have a relative number of substellar clumps that is midway between those of the non- and the active star-forming clouds. That is, the clump mass spectrum will change from a power law extending to very small scales in non-star-forming clouds to one with a flattened or turned-over distribution function at low mass in pre- or active star-forming clouds, and the mass at this flattening or turnover will increase up to the minimum stellar mass as the cloud becomes more and more active with star formation. Another signature of this process is the hierarchical structure of young stellar positions, whatever the scale and density of star formation, and the appearance of clusters rather quickly, in about a local crossing time.
Acknowledgements: Helpful comments on the manuscript by Dr. A. Lazarian and S. Shore are appreciated. |
# The non-extensive version of the Kolmogorov-Sinai entropy at work
## Abstract
We address the problem of applying the Kolmogorov-Sinai method of entropic analysis, expressed in a generalized non-extensive form, to the dynamics of the logistic map at the chaotic threshold, which is known to be characterized by a power law rather than exponential sensitivity to initial conditions. The computer treatment is made difficult, if not impossible, by the multifractal nature of the natural invariant distribution: Thus the statistical average is carried out on the power index $`\beta `$. The resulting entropy time evolution becomes a smooth and linear function of time with the non-extensive index $`Q<1`$ prescribed by the heuristic arguments of earlier work, thereby showing how to make the correct entropic prediction in the spirit of the single-trajectory approach of Kolmogorov.
05.45.-a,05.45.Df,05.20.Sq
The Kolmogorov-Sinai (KS) entropy is attracting increasing interest in the field of chaos since it affords a criterion to establish the “thermodynamical” nature of a single trajectory in a way independent of the observation. This is so because the generating partition results in a value of the KS entropy independent of the size of the partition cells: a fact of fundamental importance to ensure the objective nature of the KS criterion. There have been in the recent past some attempts at generalizing the KS entropy by replacing the Shannon entropy, on which the KS entropy is based, with a non-extensive form of entropy given by:
$$H_q=\frac{1-\underset{i=1}{\overset{W}{\sum }}p_i^q}{q-1}.$$
(1)
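As a minimal numerical illustration of equation (1), not taken from the papers under discussion, the entropy $`H_q`$ and its $`q\to 1`$ Shannon-Gibbs limit can be written as:

```python
import numpy as np

def tsallis_entropy(p, q):
    """Non-extensive entropy of eq. (1); reduces to the Shannon/Gibbs
    entropy -sum p ln p in the limit q -> 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if q == 1.0:
        return -np.sum(p * np.log(p))
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

p = np.array([0.5, 0.25, 0.25])
print(tsallis_entropy(p, 1.0))      # Shannon limit: (3/2) ln 2 = 1.0397...
print(tsallis_entropy(p, 1.0001))   # close to the q -> 1 limit
```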
The attempts at realizing this purpose are stimulated by the exponentially increasing interest in this kind of non-extensive entropy. The parameter $`q`$, which shall be referred to as the *entropic index*, has an interesting physical meaning. Its departure from the standard value $`q=1`$, at which the entropy of Eq. (1) recovers the traditional Gibbs structure, signals either the existence of long-range interactions or of long-time memory. These papers, however, do not explicitly evaluate the non-extensive version of the KS entropy. Rather these authors limit themselves to assessing numerically the time evolution of the dynamical property
$$\xi (t)\equiv \lim_{\mathrm{\Delta }x(0)\to 0}\frac{\mathrm{\Delta }x(t)}{\mathrm{\Delta }x(0)},$$
(2)
where $`\mathrm{\Delta }x(t)`$ denotes the distance, at time $`t`$, between the trajectory of interest and a very close auxiliary trajectory. The initial distance between the trajectory of interest and the auxiliary trajectory, $`\mathrm{\Delta }x(0)`$, is made smaller and smaller so as to let emerge the kind of sensitivity of the dynamics under examination.
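As a rough sketch of the two regimes that $`\xi (t)`$ can display (a hypothetical illustration: the tangent-map form of the sensitivity, the initial conditions, the iteration count, and the extended-precision value of the threshold parameter are our choices), one can compare the fully chaotic logistic map, where $`\mathrm{ln}\xi `$ grows linearly in $`t`$, with the same map at the chaos threshold studied later in this letter, where the growth is only logarithmic, in accord with the power law of Eq. (3):

```python
import math

MU_INF = 1.4011551890920506  # period-doubling accumulation point (assumed value)

def log_xi(mu, x0, t):
    """ln xi(t) as the summed log-derivative along the orbit of
    f(x) = 1 - mu*x^2 (a tangent-map version of Eq. (2))."""
    x, s = x0, 0.0
    for _ in range(t):
        s += math.log(max(abs(2.0 * mu * x), 1e-300))  # guard against x ~ 0
        x = 1.0 - mu * x * x
    return s

strong = log_xi(2.0, 0.3, 300)   # fully chaotic case: grows roughly like t*ln2
weak = log_xi(MU_INF, 1.0, 300)  # chaos threshold: stays of order beta*ln(t)
```

The tangent-map form avoids the saturation that a two-trajectory estimate of $`\mathrm{\Delta }x(t)/\mathrm{\Delta }x(0)`$ suffers once the separation reaches the size of the attractor.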
The authors of Refs. related the function $`\xi (t)`$ to the non-extensive version of the KS entropy with heuristic arguments. With these arguments they established that the analytical form to assign to $`\xi (t)`$ for the non-extensive form of the KS entropy to increase linearly is
$$\xi (t)=[1+(1-Q)\lambda _Qt]^{1/(1-Q)},$$
(3)
where $`\lambda _Q`$ is a sort of Lyapunov coefficient. Throughout this letter we shall be referring to $`Q`$, predicted with entropic arguments, as the *true entropic index*. In the specific case $`Q<1`$, of interest here, Eq. (3) means that the distance between the trajectory of interest and the auxiliary trajectory increases as an algebraic power of time. However, the numerical calculations made in Refs. show that the function $`\xi (t)`$ exhibits wild fluctuations, although the intensity of these fluctuations fulfills the prediction of Eq. (3). In Ref. a theoretical prediction was made for the power index $`\beta `$, and consequently for the true entropic index $`Q=(\beta -1)/\beta `$. This prediction reads:
$$\frac{1}{1-Q}=\frac{1}{\alpha _{\mathrm{min}}}-\frac{1}{\alpha _{\mathrm{max}}},$$
(4)
where $`\alpha _{\mathrm{min}}`$ and $`\alpha _{\mathrm{max}}`$ denote the crowding indices corresponding to the minimum and maximum concentration, respectively.
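Eq. (4) can be checked numerically (the value of the Feigenbaum scaling constant and the standard relations $`\alpha _{\mathrm{max}}=\mathrm{ln}2/\mathrm{ln}\alpha _F`$ and $`\alpha _{\mathrm{min}}=\alpha _{\mathrm{max}}/2`$ for an attractor generated by a map with a quadratic maximum are assumptions taken from the multifractal literature, not from this letter):

```python
import math

ALPHA_F = 2.502907875095893  # Feigenbaum scaling constant (assumed input)

# Extremal crowding indices of the Feigenbaum attractor: the most rarefied
# region contracts as alpha_F^(-1) per period doubling, the densest as
# alpha_F^(-2), giving alpha_min = alpha_max / 2.
alpha_max = math.log(2.0) / math.log(ALPHA_F)
alpha_min = alpha_max / 2.0

Q = 1.0 - 1.0 / (1.0 / alpha_min - 1.0 / alpha_max)   # Eq. (4)
print(Q)  # ~0.2445
```

The resulting $`Q\approx 0.2445`$ is consistent with the value $`q=0.24`$ quoted later in the letter and close to the numerically detected $`q\approx 0.25`$.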
More recently, the same problem of evaluating the true entropic index $`Q`$, resulting in the linear increase of entropy as a function of time, was dealt with by the authors of Ref., by means of the numerical calculation of the distribution entropy: The authors of this paper made, in fact, the delicate assumption that if a true $`Q`$ exists, making the *trajectory entropy* increase linearly in time, then the same $`Q`$ makes the *distribution entropy* increase linearly in time as well. In a sense this letter aims at checking this important assumption. This is a challenging problem, as the ascertainment of the equivalence of these two distinct entropy forms is the object of discussion also in the case of strong chaos.
The problem here under study is that of the calculation of
$$H_q(N)\equiv \frac{1-\sum_{\omega _0\ldots \omega _{N-1}}p(\omega _0\ldots \omega _{N-1})^q}{q-1},$$
(5)
where $`p(\omega _0\ldots \omega _{N-1})`$ is the probability of finding the cylinder corresponding to the sequence of symbols $`\omega _0\ldots \omega _{N-1}`$. This entropy expression affords a rigorous way of defining the earlier introduced concept of *true entropic index*. If it exists, $`Q`$ is the value of the entropic index $`q`$ making the entropy of Eq.(5) increase linearly in time. In the case $`q=Q=1`$ the $`\lim_{N\to \infty }H_q(N)/N`$ becomes the ordinary Kolmogorov-Sinai (KS) entropy. In an earlier work a numerical calculation was made to establish $`Q`$ in the case of a text of only two symbols, with strong correlations. The case of many more symbols would be beyond the range of the current generation of computers. However, when the sequence of symbols is generated by dynamics, as in the case here under study, and the function $`\xi (t,x)`$ of Eq.(2) is available (for convenience, we now make explicit the dependence on the initial condition $`x`$), it is possible to adopt the prescription of Ref., which writes $`H_q(t)`$ of Eq.(5) as
$$H_q(t)\equiv \frac{1-\delta ^{q-1}\int dx\,p(x)^q\xi (t,x)^{1-q}}{q-1},$$
(6)
where the symbol $`t`$ denotes time regarded as a continuous variable. In fact, when the condition $`N>>1`$ applies, it is legitimate to identify $`N`$ with $`t`$. The function $`p(x)`$ denotes the equilibrium distribution density and $`\delta `$ the size of the partition cells: According to Ref. the phase space, a one-dimensional interval, has been divided into $`W=1/\delta `$ cells of equal size.
There is now an important remark to make: The non-extensive form of KS entropy should read as follows,
$$h_Q=\delta ^{Q-1}\lim_{t\to \infty }\frac{1}{t}\frac{\int dx\,p(x)^Q\xi (t,x)^{1-Q}}{1-Q},\qquad Q\ne 1.$$
(7)
This apparently means that leaving the ordinary condition $`Q=1`$ has the unwanted effect of making the generalized form of KS entropy dependent on $`\delta `$, thereby losing what we consider to be the most attractive aspect of the KS entropy. We are inclined to believe that this apparent weakness is, on the contrary, an element of strength. We shall see that in the case under discussion in this letter, due to the multifractal character of the natural invariant distribution $`p(x)`$, the prescription of Eq.(7) results in a rate of entropy increase independent of the cell size. In the case of the Manneville map, $`x_{n+1}=x_n+x_n^z,\mathrm{mod}\mathrm{\hspace{0.33em}1}`$, it is shown that in the range $`3/2<z<2`$ the stationary correlation function of the variable $`x`$ exists but is not integrable, thereby suggesting a possible breakdown of the ordinary KS entropy. However, in this case the natural invariant distribution is smooth, rather than multifractal, and as a consequence the requirement of independence of the cell size and the adoption of the prescription of Eq.(6) yield the condition $`Q=1`$, in agreement with the conclusions of the work of Ref.. In the case $`Q=1`$ it is straightforward to prove that Eq.(6) results in
$$H_1(t)=\int dx\,p(x)\mathrm{ln}\xi (t,x).$$
(8)
On the other hand, in the ordinary case Eq.(3) becomes
$$\xi (t,x)=\mathrm{exp}(\lambda (x)t),$$
(9)
thereby making Eq.(8) result in the well known Pesin relation
$$h_{KS}\equiv \lim_{t\to \infty }\frac{H_1(t)}{t}=\int dx\,p(x)\lambda (x).$$
(10)
In conclusion, in the case of a smooth invariant distribution, either strongly or weakly chaotic, the prescription of Eq.(6) coincides with the Pesin theorem, which allows us to replace the direct calculation of the KS entropy with the numerically easier problem of evaluating Lyapunov coefficients. The case of fractal dynamics implies the existence of a true $`Q\ne 1`$, which has to be properly detected by looking for the value of $`q`$ making $`H_q(t)`$ linearly dependent on $`t`$. This letter is devoted to providing the guidelines for this search.
We shall focus our attention on the calculation of
$$\mathrm{\Xi }_q(t,\delta )\equiv \delta ^{q-1}\int dx\,p(x)^q\xi (t,x)^{1-q}.$$
(11)
Note that if $`\xi (t)`$ of Eq.(3) depended on $`x`$, the value $`\mathrm{\Xi }_Q^{1/(1-Q)}`$ resulting from the joint use of Eqs.(11) and (3) would afford a simple recipe to determine the statistical average of $`\lambda _Q(x)`$ at $`q=Q`$. In principle, $`\mathrm{\Xi }_q(t,\delta )`$ depends on the cell size $`\delta `$. However, we plan to prove that if it is properly evaluated, this quantity turns out to be independent of $`\delta `$. As done in the earlier work of Refs., we study the logistic map:
$$x_{n+1}=1-\mu |x_n|^2,\qquad x\in [-1,1]$$
(12)
with the control parameter $`\mu =1.4011551\mathrm{}`$, namely, at the threshold of the transition to chaos. In this case the invariant distribution is multifractal, and consequently, expressed as a function of $`x`$, looks like a set of sharp peaks, which, in turn, through repeated zooming, turn out to consist of infinitely many other, sharper, peaks. This means that the direct evaluation of Eq.(6) is hard, since it is difficult to ensure numerically that these fractal properties are reproduced at any arbitrarily small spatial scale. We have to look for a different approach.
Let us replace the average over $`x`$ with the average over the crowding power index $`\alpha `$. In the long-time limit we obtain
$$\mathrm{\Xi }_q(t)\approx \delta ^{q-1}\int d\alpha \,\delta ^{q\alpha -f(\alpha )}t^{\beta (t,\alpha )(1-q)}.$$
(13)
This equation rests on assuming dependence on the initial condition only through the power-law index $`\beta (t,\alpha )`$ itself, which, in fact, according to Anania and Politi Ref., reads
$$\beta (t,\alpha )=\frac{1}{\alpha (t)}-\frac{1}{\alpha }.$$
(14)
We shall show with theoretical and numerical arguments that this relation yields Eq.(4) for the exact value of $`Q`$. The symbol $`\alpha `$ denotes the crowding index corresponding to a given initial condition $`x,`$ namely the position of the trajectory at $`t=0`$, and the symbol $`\alpha (t)`$ denotes the crowding index corresponding to the position of the same trajectory at a later time $`t>0`$. According to Anania and Politi
$$\alpha (t)=\frac{\mathrm{ln}(1/t)}{\mathrm{ln}|x(t+2^k)-x(t)|},$$
(15)
where $`k`$ indicates the $`k`$-th generation of the Feigenbaum attractor.
Before proceeding, let us make an assumption which has the effect of accomplishing, within the non-extensive perspective, the Kolmogorov program of an entropy independent of the size of the partition cells. First of all, let us rewrite Eq. (13) in the following equivalent form:
$$\mathrm{\Xi }_q(t)=\int d\alpha \,e^{W(f(\alpha )-q\alpha +q-1)}e^{V(q-1)\beta (\alpha ,V)},$$
(16)
where:
$$W\equiv \mathrm{ln}\delta $$
(17)
and
$$V\equiv \mathrm{ln}(1/t).$$
(18)
Let us now assume:
$$W\ll V.$$
(19)
Under the plausible condition that the functions $`f(\alpha )-q\alpha +q-1`$ and $`\beta (\alpha ,V)`$ are not divergent, this assumption has the nice effect of producing
$$\mathrm{\Xi }_q(t)=\int d\alpha \,e^{V(q-1)\beta (\alpha ,V)}.$$
(20)
At this stage we make another crucial step. This is suggested by the work of Hata, Horita and Mori. The idea is that of using $`\beta (\alpha ,V)`$ as an independent variable so as to write Eq.(20) either as:
$$\mathrm{\Xi }_q(V)=\int d\beta \,P(\beta ,V)e^{V(q-1)\beta }.$$
(21)
or under the equivalent form:
$$\mathrm{\Xi }_q(t)=\int d\beta \,P(\beta ,t)t^{\beta (1-q)}.$$
(22)
We follow Ref. again and we adopt the asymptotic property:
$$P(\beta ,t)=t^{\psi (\beta )}P(\beta ,0).$$
(23)
The numerical calculation of the function $`\psi (\beta )`$ is done with a criterion different from that adopted in Ref.. The authors of Ref. fix a window of a given size $`t`$ and move it along the sequence for the purpose of evaluating the frequency of presence within this window of a given algebraic index $`\beta `$. This means that they make an average over many different initial conditions. We, on the contrary, fix a given initial condition, and we increase the size of the window, the left border of which coincides with the initial time condition $`t=0`$, whereas the right border, at the distance $`t`$ from the former, runs over the whole range of observation times. This different criterion is dictated by the specific purpose of evaluating the quantity of Eq.(22) in a way compatible with using only one single trajectory, while apparently the authors of Ref. did not need to satisfy this constraint.
The initial condition chosen is $`x=1`$. The reason for this choice is widely discussed in Refs.. This is so because $`\alpha (x=1)=\alpha _{min}`$, therefore ensuring the condition of maximum expansion. In Fig. 1 we show that the choice of $`\alpha _{min}`$ rather than $`\alpha _{max}`$ shifts the distribution $`P(\beta ,n)`$ from the right to the left, namely from a condition close to that of Ref., to a condition favorable for the emergence of $`Q`$. We note that the $`\beta `$-distribution does not drop to zero beyond the value $`\beta \approx 1.3`$, which, according to Eq.(14), is the maximum possible value of the power index $`\beta `$. This is a consequence of the fact that the theoretical prescription of Eq.(15) refers to the time asymptotic limit $`t\to \infty `$, whereas the numerical calculation is carried out with an upper bound on time: The maximum value of time explored is in fact $`t_{max}=2^{18}`$. It is expected that with the increase of the time upper bound the distribution tends to drop to zero for values of $`\beta `$ larger than the maximum possible value.
The result of the corresponding numerical calculation is shown in Fig. 2. To make it more evident that the central curve, with $`q\approx 0.25`$, is the one corresponding to the true $`Q`$, we adopt the same procedure as that used by the authors of Ref.. We have fitted the curves $`H_q(t)`$ of Fig. 3 in the interval $`[t_1,t_2]`$ with the polynomial $`H(t)=a+bt+ct^2`$. We define $`R=|c|/b`$ as a measure of the deviation from a straight line. We expect that $`q=Q`$ results in $`R=0`$. We choose $`t_1=100`$ and $`t_2=1000`$ for all $`q`$’s. In the inset of Fig.2 we show that $`R`$ becomes virtually equal to zero for $`q=0.25`$, which is very close to the value $`q=0.24`$ found by the authors of Ref..
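The linearity diagnostic just described can be sketched on synthetic data (a hypothetical reconstruction: we generate $`H_q(t)`$ from the pure power-law form of $`\mathrm{\Xi }_q`$ implied by Eq.(25) with $`\beta ^{*}=1.35`$, rather than from the map itself):

```python
import numpy as np

BETA_STAR = 1.35                    # beta* from the psi(beta) fit
Q_TRUE = 1.0 - 1.0 / BETA_STAR      # = (beta* - 1)/beta* ~ 0.259

def deviation_R(q, t1=100, t2=1000):
    """Fit H_q(t) = a + b t + c t^2 on [t1, t2] and return R = |c|/b,
    which vanishes for a straight line."""
    t = np.linspace(t1, t2, 200)
    H = (t ** (BETA_STAR * (1.0 - q)) - 1.0) / (1.0 - q)  # synthetic entropy curve
    c, b, a = np.polyfit(t, H, 2)
    return abs(c) / b

# R is minimal (essentially zero) at q = Q and nonzero on either side:
print(deviation_R(Q_TRUE), deviation_R(Q_TRUE - 0.2), deviation_R(Q_TRUE + 0.2))
```

By construction $`R`$ vanishes at $`q=Q=(\beta ^{*}-1)/\beta ^{*}\approx 0.259`$ and grows on either side, mimicking the behavior of the inset of Fig. 2.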
The shift from the left to the right distribution shown in Fig.1 is of fundamental importance to find the correct $`Q`$. Further evidence of this fact is obtained by using analytical arguments to evaluate the integral of Eq.(22). We proceed as follows. First, we fit the numerical data on $`\psi (\beta )`$ by means of the function
$$\psi _{fit}(\beta )=-const(\beta -\beta ^{*})^2,$$
(24)
with $`\beta ^{*}=1.35\pm 0.05`$. For $`t\to \infty `$ the resulting $`P(\beta )`$ becomes equivalent to a Dirac $`\delta `$ function centered at $`\beta ^{*}=1.35\pm 0.05`$, thereby making it straightforward to find
$$\mathrm{\Xi }_q(t)\propto \frac{t^{\beta ^{*}(1-q)}}{\mathrm{ln}t}.$$
(25)
If we neglect the logarithm term, the entropy growth becomes linear in time at $`Q=0.259\pm 0.02`$. A more accurate fitting procedure, taking the distribution asymmetry into account, was proved to produce virtually the same result. This provides strong support to the conclusion of the numerical results of Fig. 2. As already mentioned earlier, we expect that with an increasing time upper bound the $`\beta `$-distribution will reach a maximum at $`\beta =1/\alpha _{min}-1/\alpha _{max}`$ and will drop abruptly to zero for larger $`\beta `$. Consequently, the resulting true $`Q`$ is expected to coincide with that of Eq.(4). In conclusion, we are convinced that the slight discrepancy between our numerical result and the theoretical prediction of Refs. is essentially due to the fact that the numerical observation is carried out at finite times.
The importance of this paper goes much beyond checking the prediction of Ref.. The result obtained is equivalent to observing only one trajectory moving from the initial condition $`x=1`$. This central result is made possible by the use of the average over $`\beta `$ suggested by the important work of Hata et al. as well as by the result of Ref.. The method of Ref. proves to be an efficient way of expressing the dependence of the trajectory entropy on the sensitivity to the initial conditions. We are convinced that Eq.(6) can be regarded as the proper non-extensive generalization of the Pesin theorem. Consequently, under the assumption that the dependence on initial conditions is realized only through $`\beta (t,\alpha )`$, the slope of the curve of Fig. 2, corresponding to $`q=Q=0.25`$, is the genuine extensive KS entropy of this archetypical condition of weak chaos.
We thank C. Tsallis for reading the draft of this paper and for illuminating suggestions.
# Extragalactic background light absorption signal in the 0.26-10 TeV spectra of blazars
## Introduction
It has long been thought GS67b that the detection of an attenuation effect in the TeV spectra of extragalactic sources, caused by pair production $`\gamma +\gamma \to e^++e^{-}`$ with the EBL, would be of great value for the understanding of cosmology and many aspects of the astrophysics of the Universe. Finding the cutoff feature in the high energy end of the Markarian 501 (Mrk 501) spectrum ($`>10`$ TeV), reported during this workshop Ah99 , may well be a long awaited signature of such extinction of the highest energy photons. In the presentation Ko99 during this workshop, the claim has been made that the found cutoff is well explained by a semi-empirically derived EBL prediction MS98 and by a simple power-law spectrum intrinsic to the source, with an exponent close to two, spanning from $`0.2`$ to $`20`$ TeV. The latter implies that the EBL attenuation mechanism below $`10`$ TeV should rather be weak, which seems to be confirmed intuitively by the absence of a distinguishable feature in this part of the spectra for both Mrk 501 ($`z=0.03`$) and Mrk 421 ($`z=0.03`$) blazars (Fig. 2; Krennrich98 ). We might now ask whether the proposed SED of EBL is uniquely consistent with experiment, in order to begin its interpretation in terms of astrophysical constraints, or even whether the lack of a feature in the low energy part of the TeV AGN spectra proves an absence of EBL absorption. Here I argue that we are not yet ready to make such statements on either a theoretical or an experimental basis, and due to an ironic coincidence there is still a substantial degree of freedom in the definition of the SED of EBL. In this talk I consider a peculiar degeneracy which allows a certain type of SED of EBL to avoid “apparent” detection in currently available experimental data below $`10`$ TeV due to the unknown properties of intrinsic spectra of the sources.
The conclusions which I draw at the end of this talk will show how we can narrow down the existing possibilities using only the spectral data of these two AGN that may become available in the near future with the introduction of new $`\gamma `$-ray instruments, such as VERITAS or STACEE.
## The EBL
Any contemporary theoretical consideration of the SED of EBL predicts two well pronounced peaks (Fig. 2). One at $`1`$ $`\mu `$m due to the starlight emitted and redshifted through the history of the Universe, the other at $`100`$ $`\mu `$m generated by re-processing of the starlight by dust, its extinction and reemission. Theoretical modeling of the spectral evolution of the EBL field involves complex astrophysics with many unknown input parameters which specify cosmology, number density and evolution of dark matter halos, distribution of galaxies in them, mechanisms of converting cold gas into stars, the star formation rate (SFR), the stellar initial mass function (IMF), supernovae feedback, and the mechanisms by which light is absorbed by dust and reemitted at longer wavelengths SP98 . Semi-analytical modeling of these processes shows that the dominant factors shaping the SED of EBL in the 1–10 $`\mu `$m region are the IMF, which provides a source of the UV light, generated mostly by the high mass stars and therefore dependent on their fraction, and dust extinction, which functions as a sink of UV light Bu98 . The region from $`1`$ to $`10`$ $`\mu `$m is primarily determined by the type of cosmology and the SFR. Allowing partial degeneracy between these two factors leads to an ambiguity in the interpretation of the actual SED of EBL MP96 . It is also possible that a non-negligible contribution from a number of pregalactic and protogalactic sources may exist in this interval BCH86 . This EBL fraction is usually considered neither in the EBL evolution models nor in semi-empirical EBL estimates. The long wavelength region, from approximately a few $`\mu `$m to $`100`$ $`\mu `$m, is currently predicted with the largest uncertainty, due to poorly defined dust extinction and re-radiation mechanisms, which are crucial ingredients for modeling the EBL in this band.
In addition, it has been suggested recently Fa99 that a substantial amount of energy in this wavelength interval may come from quasars. Most of their radiation should be absorbed in the dust and gas of the accretion disk and re-emitted later in the far-infrared. Up to now this contribution has been considered negligible, but the failure to explain the existing X-ray background suggests the presence of a large population of faint quasars generating this diffuse radiation field WF99 . These sources, visible only in X-ray and far-infrared, are expected to be probed by the Chandra mission. Energy ejected into the surrounding media by supernovae and re-radiated later may also be concealed in this wavelength interval PB99 .
At present, the degree of uncertainty of various theoretical considerations of the SED of EBL is about the same as the distance between current upper and lower experimental limits. Fig. 2 shows a compilation VV99 of various EBL detections and limits as well as several theoretical PB99 and semi-empirical MS98 estimates of the SED of EBL. In the current situation a preferential choice of a particular prediction based solely on theoretical background seems unjustified. We do expect, however, that the SED of EBL is likely to be a function with complex behavior. If we take into account that attenuation of extragalactic TeV $`\gamma `$-rays is an exponential effect, one would intuitively expect the appearance of structures in the observable spectra of AGNs, such as a cutoff, for example. Non-existence of any peculiar features in the 0.25–10 TeV spectra of AGNs should then indicate a very weak absorption effect. The problem, however, is more subtle than it first appears. There is a whole class of non-trivial solutions for the SED of EBL (see VV99 ), shown in Fig. 4, which may well describe the starlight peak expected in the 0.1–10 $`\mu `$m region. The important property of such SEDs is that they do not produce any peculiar change in the observable AGN spectrum. The only effects to be seen are a change of the overall attenuation factor, a change of the power-law spectral index, and a slight change of the spectral curvature. All three of these potentially detectable EBL indicators are perfectly masked by the unknown intrinsic spectrum of the source. The existence of such SED solutions, which has been hinted at in DS94 , is due to the slow, power-law-like change of the attenuation coefficient when the SED is proportional to the energy of the infrared photon with logarithmic accuracy. Such a case seems to take place in the 1–10 $`\mu `$m region, to which observations in the 0.25–10 TeV interval are most sensitive.
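The masking can be made concrete with a toy calculation (all numbers below are illustrative assumptions, not fitted values): if the optical depth varies linearly in $`\mathrm{ln}E`$ across the observed band, absorption of an intrinsic power law $`E^{\mathrm{\Gamma }}`$ yields another pure power law, with only the index shifted by $`d\tau /d\mathrm{ln}E`$.

```python
import math

GAMMA = 2.0              # assumed intrinsic photon index
TAU0, DTAU = 1.0, 0.5    # tau(E) = TAU0 + DTAU * ln(E / 1 TeV): "invisible" form

def observed_flux(E_TeV):
    tau = TAU0 + DTAU * math.log(E_TeV)
    return E_TeV ** (-GAMMA) * math.exp(-tau)

# Local spectral index -dlnF/dlnE is constant: the absorbed spectrum is again
# a pure power law, with index GAMMA + DTAU, so the EBL signal is masked.
def local_index(E_TeV, h=1e-4):
    return -(math.log(observed_flux(E_TeV * (1 + h))) -
             math.log(observed_flux(E_TeV))) / math.log(1 + h)

print(local_index(0.3), local_index(3.0))  # both ~ GAMMA + DTAU = 2.5
```

Since the intrinsic index $`\mathrm{\Gamma }`$ is unknown, the observer cannot tell $`\mathrm{\Gamma }=2.5`$ with no absorption from $`\mathrm{\Gamma }=2.0`$ with this logarithmic optical depth.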
It is an ironic coincidence that the current observational window of TeV $`\gamma `$-ray astronomy coincided with the region of this special behavior of the SED, which makes the EBL attenuation effect “invisible.” If we were to move the $`\gamma `$-ray observational window to lower or higher energies we would be sensitive to the bands in the EBL spectrum where the SED is rapidly falling. This would produce an exponential effect on the observable spectra of AGNs, such as the one detected by the HEGRA collaboration in the 10–25 TeV region Ah99 .
Since there are a number of upper and lower limits for the SED of EBL in the 0.1–10 $`\mu `$m interval, established by various experiments, it becomes possible to constrain the EBL attenuation effect in the spectra of Mrk 421 and Mrk 501 using an explicit form of the “invisible” SEDs. Such considerations VV99 lead to the conclusions that the optical depth for $`10`$ TeV photons is bounded within the interval 0.85–4.43, the EBL contribution to the power-law spectral index at photon energy $`1`$ TeV can be in the range 0.19–0.94, and the spectral curvature does not exceed $`0.22`$ (the Hubble constant used is 65 km s<sup>-1</sup> Mpc<sup>-1</sup>). Analogous constraints derived from the EGRET measurements of the spectral indices of these sources Hart99 ; Kt99 provide a similar upper limit for the spectral index change due to the EBL attenuation effect ($`<1.0`$). Source luminosity arguments Ah99 suggest that if the optical depth at $`1`$ TeV exceeds $`3`$ then the intrinsic $`\gamma `$-ray luminosity of Mrk 501 is an order of magnitude larger than the luminosity in all other wavelengths, which is difficult to explain with realistic parameters of the jet. This places approximately the same upper bound on the absolute value of the $`\gamma `$-ray absorption. Finally, a lack of curvature in the spectrum of Mrk 421 provides an upper limit of $`0.3`$ which is no stronger at the moment than the direct EBL constraints.
The large degree of uncertainty which still exists in the experimental detection of the EBL signal via observations of $`\gamma `$-ray attenuation is due to the, as yet, undetermined initial conditions in the region approximately from 100 to 300 GeV. Although the EBL induced attenuation of $`\gamma `$-rays in this interval is rather small for z=0.03, the absorption effect must produce a jump in the spectral index and curvature here. In Fig. 4 I show the derivative of the optical depth with respect to the logarithm of energy, which characterizes the change of the spectral index. The two examples shown correspond to the predictions made in PB99 for Salpeter and Scalo IMFs. Note that the region from 100 GeV to 500 GeV is characterized by a rapid change in the spectral index. This result is, of course, hardly surprising since it is produced by the rapid fall of the SED of EBL in the UV band. The subsequent dip above $`1`$ TeV is due to a very low prediction of the models for the EBL field in the region around $`10`$ $`\mu `$m. Such a low estimate is not currently supported by the lower bound on EBL from ISO measurements at $`15`$ $`\mu `$m Ol97 . The absence of an EBL signature in the spectra of Mrk 421 and Mrk 501 in the 0.3–10 TeV interval suggests that the derivative of the optical depth should likely be a linear function of the logarithm of photon energy (curve marked “Example” in Fig. 4). Measurements of the AGN spectra in the 100–300 GeV region are then of crucial importance to experimentally determine the two parameters of this curve and therefore unfold a unique SED ”invisible” solution VV99 . The expected change of spectral index, 0.19–0.94, of Mrk 501 and Mrk 421 between $`100`$ GeV and $`1`$ TeV due to the EBL absorption is a measurable effect. If the opacity of the Universe to TeV $`\gamma `$-rays is large, most of this change should occur below $`300`$ GeV since such an effect has not been detected in the AGN spectra above this energy Krennrich98 .
This would produce a “knee-like” feature in the spectra of these blazars in the 100–300 GeV interval. If the attenuation effect is small though, it is possible that the derivative of the optical depth remains the same linear function in this energy band, generating only a logarithmically small EBL footprint in the AGN spectra. In the latter case, a small curvature, $`1/2\,d^2\tau /d\mathrm{ln}(E)^2\lesssim 0.1`$, in the 0.1–1 TeV energy range would be the only indicator of EBL presence.
## Conclusions
1. The featureless spectra of the two closest AGNs in the 0.25–10 TeV energy band do not guarantee a low attenuation of TeV $`\gamma `$-rays via pair-production with the EBL. A large class of “invisible” SED solutions exists VV99 which change only the overall attenuation factor, the spectral index, and the spectral curvature of the observable AGN spectrum. The behavior of these SEDs is consistent with the theoretically expected starlight EBL peak at $`1`$ $`\mu `$m, but due to the unknown intrinsic properties of the sources such an attenuation effect cannot be unambiguously isolated based only on the data from this energy interval. The EBL spectral density suggested in Ko99 for the explanation of the spectral properties of Mrk 501 is, therefore, only one of many possibilities.
2. Even by observing only the two known extragalactic TeV sources, Mrk 501 and Mrk 421, we can hope to constrain or possibly detect the SED of EBL if spectral measurements are extended into the 100–300 GeV region, where a change of the spectral index would indicate a turn-on of the absorption effect. Detection of this feature by future $`\gamma `$-ray observatories, for example VERITAS or STACEE, will provide a missing piece of information for the proper unfolding of the SED of EBL in the wavelength interval above $`0.1`$ $`\mu `$m. Of course, a certain ambiguity of spectra interpretation due to the unknown properties of the sources will remain; however, the presence of a similar feature, static in time, in both spectra may allow one to disentangle or severely limit the intergalactic $`\gamma \gamma `$ extinction effect.
3. It turns out that the most important constraints for limiting or unfolding the SED of EBL may be provided by accurate measurements of the curvature (better than $`0.1`$ per decade in energy) in the spectrum of Mrk 421 through the 0.25–10 TeV interval. At the moment the spectrum of this source during the high flaring state has been found consistent with a pure power-law Krennrich98 . If the source cooperates, improved statistics may give EBL upper limits in the region around a few micrometers lower than the currently available DIRBE results.
## Acknowledgments
I thank the Whipple collaboration, J. Bullock and J. Primack for providing data, and T. Weekes and S. Fegan for valuable discussions and invaluable help. This work was supported by grants from the U.S. Department of Energy.
# On the Snow Line in Dusty Protoplanetary Disks
## 1 Introduction
Planetary systems are formed from rotating protoplanetary disks, which are the evolved phase of circumstellar disks produced during the collapse of a protostellar cloud with some angular momentum.
A standard model of such a protoplanetary disk is that of a steady-state disk in vertical hydrostatic equilibrium, with gas and dust fully mixed and thermally coupled (Kenyon & Hartmann 1987). Such a disk is flared, not flat, but still geometrically thin in the sense defined by Pringle (1981). The disk intercepts a significant amount of radiation from the central star, but other heating sources (e.g. viscous dissipation) can be more important. If dissipation due to mass accretion is high, it becomes the main source of heating. Such are the protoplanetary disks envisioned by Boss (1996, 1998), which have relatively hot (midplane temperature $`T_\mathrm{m}>`$ 1200 K) inner regions due to mass accretion rates of $`10^{-6}`$ to $`10^{-5}M_{\odot }\mathrm{yr}^{-1}`$. However, typical T Tauri disks of age $`{\sim}`$1 Myr seem to have much lower mass accretion rates ($`10^{-8}M_{\odot }\mathrm{yr}^{-1}`$) with all other characteristics of protoplanetary disks (Hartmann et al. 1998, D’Alessio et al. 1998). For disks of such low accretion rates stellar irradiation becomes increasingly the dominant source of heating, to the limit of a passive disk modeled by Chiang & Goldreich (1997). In our paper we will confine our attention to the latter case, without entering the discussion about mass accretion rates.
The optical depth in the midplane of the disk is very high in the radial direction, hence the temperature structure there is governed by the reprocessed irradiation of the disk surface. This is the case of a passive disk (no accretion). At some point along the radial direction the temperature in the midplane would drop below the ice sublimation level – Hayashi (1981) called it the “snow line”.
In this paper we revisit the calculation of the “snow line” for a protosolar protoplanetary disk, given its special role in the process of planet formation. We place particular emphasis on the issues involved in treating the radiative transfer and the dust properties.
## 2 The Model
Our model is that of a star surrounded by a flared disk. In this paper, we have chosen two examples – a passive disk and a disk with a $`10^{-8}M_{\odot }\mathrm{yr}^{-1}`$ accretion rate. Both have the same central star of effective temperature $`T_{*}`$= 4000 K, mass $`M_{*}`$= 0.5 $`M_{\odot }`$, and radius $`R_{*}`$= 2.5 $`R_{\odot }`$. Thus they correspond to the examples used by Chiang & Goldreich (1997) and D’Alessio et al. (1998), respectively. Our disk has a surface gas mass density $`\mathrm{\Sigma }=r^{-3/2}\mathrm{\Sigma }_0`$, with $`r`$ in AU and $`\mathrm{\Sigma }_0`$= 10<sup>3</sup> g cm<sup>-2</sup> for our standard minimum-mass solar nebula model; we varied $`\mathrm{\Sigma }_0`$ between 10<sup>2</sup> and 10<sup>4</sup> g cm<sup>-2</sup> to explore the effect of disk mass on the results.
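As a consistency sketch for this normalization (the inner and outer radii of 0.1 and 40 AU are our illustrative assumptions, not values from the text), the total gas mass implied by $`\mathrm{\Sigma }=10^3(r/\mathrm{AU})^{-3/2}`$ g cm<sup>-2</sup> is indeed of minimum-mass solar nebula order:

```python
import math

AU = 1.496e13          # cm
M_SUN = 1.989e33       # g
SIGMA0 = 1.0e3         # g cm^-2 at 1 AU

def disk_mass(r_in_au, r_out_au):
    """M = integral of 2 pi r Sigma dr with Sigma = SIGMA0 (r/AU)^(-3/2),
    which evaluates to 4 pi SIGMA0 AU^2 (sqrt(r_out) - sqrt(r_in)), r in AU."""
    return 4.0 * math.pi * SIGMA0 * AU**2 * (
        math.sqrt(r_out_au) - math.sqrt(r_in_au))

print(disk_mass(0.1, 40.0) / M_SUN)  # ~0.0085 solar masses
```

A result of order 0.01 $`M_{\odot }`$ is consistent with the "minimum-mass solar nebula" label used for the standard model.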
The emergent spectrum of the star is calculated with a stellar model atmosphere code with Kurucz (1992) line lists and opacities. The disk intercepts the stellar radiation $`F_{irr}(r)`$ at a small grazing angle $`\varphi (r)`$ (defined in §5). The emergent stellar spectrum is input into a code which solves the continuum radiative transfer problem for a dusty envelope. The solution is a general spherical-geometry one, with a modification of the equations to a section corresponding to a flared disk (see Menshchikov & Henning (1997) for a similar approach). In that sense, the radiative transfer is solved essentially in 1D (vertically), as opposed to a full-scale consistent 2D case. The appeal of our approach is in the detailed radiative transfer allowed by the 1D scheme.
The continuum radiative transfer problem for a dusty envelope is solved with the method developed by Ivezić & Elitzur (1997). The scale invariance applied in the method is of practical use when the absorption coefficient is independent of intensity, which is the case for dust continuum radiation. The energy density is computed for all radial grid points through matrix inversion, i.e. a direct solution of the full scattering problem. This is both very fast and accurate at high optical depths. Note that in our calculations (at $`r\gtrsim 0.1`$ AU) the temperatures in the disk never exceed 1500–1800 K and we do not consider dust sublimation; the dust is present at all times and is the dominant opacity source. As in the detailed work by Calvet et al. (1991) and more recently by D’Alessio et al. (1998), the frequency ranges of scattering and emission can be treated separately.
For the disk with mass accretion, the energy rate per unit volume generated locally by viscous stress is given by $`2.25\alpha P(z)\mathrm{\Omega }(r)`$, where the turbulent viscosity coefficient is $`\nu =\alpha c_\mathrm{s}^2\mathrm{\Omega }^{-1}`$, $`\mathrm{\Omega }`$ is the Keplerian angular velocity, $`c_\mathrm{s}^2=P\rho ^{-1}`$ is the squared sound speed, and a standard value of $`\alpha =0.01`$ is used. The net flux produced by viscous dissipation, $`F_{vis}`$, is the only term to balance $`F_{rad}`$ – unlike D’Alessio et al. (1998) we have ignored the flux produced by energetic-particle ionization. Then we have the standard relation (see Bell et al. 1997), which holds true for the interior of the disk where accretion heating occurs:
$$\sigma T_{vis}^4=\frac{3\dot{M}GM_{\ast }}{8\pi r^3}\left[1-\left(\frac{R_{\ast }}{r}\right)^{1/2}\right],$$
where $`\dot{M}`$ is the mass accretion rate, and $`M_{\ast }`$ and $`R_{\ast }`$ are the stellar mass and radius.
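For orientation, this relation can be evaluated directly; a minimal numerical sketch in cgs units (the function name and its default parameters are ours) gives roughly 70 K at 1 AU for the adopted star and accretion rate. Note that this is a surface effective temperature; the optically thick midplane is warmer.

```python
import math

G = 6.674e-8          # cm^3 g^-1 s^-2
SIGMA_SB = 5.670e-5   # erg cm^-2 s^-1 K^-4
M_SUN = 1.989e33      # g
R_SUN = 6.957e10      # cm
AU = 1.496e13         # cm
YEAR = 3.156e7        # s

def t_vis(r_au, mdot=1e-8, m_star=0.5 * M_SUN, r_star=2.5 * R_SUN):
    """Viscous effective temperature from
    sigma*T^4 = 3*Mdot*G*M/(8*pi*r^3) * [1 - (R*/r)^(1/2)].

    mdot is in solar masses per year; r_au is in AU.
    """
    r = r_au * AU
    mdot_cgs = mdot * M_SUN / YEAR
    flux = (3.0 * mdot_cgs * G * m_star / (8.0 * math.pi * r**3)
            * (1.0 - math.sqrt(r_star / r)))
    return (flux / SIGMA_SB) ** 0.25
```

As expected for viscous heating, the temperature rises steeply inwards, roughly as $`r^{-3/4}`$ away from the inner boundary term.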
## 3 The Dust
The properties of the dust affect the wavelength dependence of scattering and absorption efficiencies. The temperature in the midplane is sensitive to the dust scattering albedo (ratio of scattering to total opacity) – a higher albedo would reduce the absorbed stellar flux. As with our choice of mass accretion rates, we will use dust grains with properties which best describe the disks of T Tauri stars.
The modelling of circumstellar disks has always applied dust grain properties derived from the interstellar medium. Most commonly used have been the grain parameters of the Mathis et al. (1977) distribution with optical constants from Draine & Lee (1984). However, recent work on spectral distributions (Whitney et al. 1997) and high-resolution images (Wood et al. 1998) of T Tauri stars has favored a grain mixture which Kim, Martin, & Hendry (1994) derived from detailed fits to the interstellar extinction law (hereafter KMH). Important grain properties are the opacity, $`\kappa `$, the scattering albedo, $`\omega `$, and the scattering asymmetry parameter, $`g`$. The latter defines the forward throwing properties of the dust and ranges from 0 (isotropic scattering) to 1. What sets the KMH grains apart is that they are more forward throwing ($`g`$= 0.40($`R`$), 0.25($`K`$)), and have higher albedo ($`\omega `$= 0.50($`R`$), 0.36($`K`$)) at each wavelength (optical to near-IR). They are also less polarized, but that is a property we do not use here. The grain size distribution has the lower cutoff of KMH (0.005$`\mu `$m) and a smooth exponential falloff, instead of an upper cutoff, at 0.25$`\mu `$m. Since the dust settling time is proportional to (size)<sup>-1</sup>, we performed calculations with upper cutoffs of 0.05$`\mu `$m and 0.1$`\mu `$m. None of these had any significant effect on the temperatures.
## 4 The Temperature Structure and the Snow Line
We are interested in planet formation and therefore want to find the ice condensation line (”snow line”) in the midplane of the disk. Temperature inversions in the disk’s vertical structure (see D’Alessio et al. 1998) may lead to lower temperatures above the midplane, but ice condensation there is quickly destroyed upon crossing the warmer disk plane. We define the snow line simply in terms of the local gas-dust temperature in the midplane, and at a value of 170 K.
In our passive disk, under hydrostatic and radiative equilibrium, the vertical and radial temperature profiles are similar to those of Chiang & Goldreich (1997) and $`T(r)\propto r^{-3/7}`$. Here is why. The disk has a concave upper surface (see Hartmann & Kenyon 1987) with pressure scale height of the gas at the midplane temperature, $`h`$:
$$\frac{h}{r}=\left[\frac{rkT}{GM_{}\mu m_H}\right]^{1/2},$$
where $`G`$ is the gravitational constant, $`\mu `$ and $`m_H`$ are the molecular weight and hydrogen mass, $`r`$ is radius in the disk, and $`T`$ is the midplane temperature at that radius. For the inner region (but $`r\gg R_{\ast }`$) of a disk with such concave shape the stellar incident flux $`F_{irr}(r)\propto \varphi (r)\sigma T_{\ast }^4r^{-2}`$, where $`\varphi (r)\propto r^{2/7}`$ (see next section). Here $`T_{\ast }`$ is the effective temperature of the central star. Then our calculation makes use of the balance between heating by irradiation and radiative cooling: $`\sigma T^4(r)=F_{irr}(r)`$. Therefore our midplane temperature will scale as $`T(r)=T_0r^{-3/7}`$ K. This is not surprising, given our standard treatment of the vertical hydrostatic structure of the disk irradiated at angles $`\varphi (r)`$. Only the scaling coefficient, $`T_0=140`$, will be different. The difference with the Chiang & Goldreich model is our treatment of the dust grains – less energy is redistributed inwards in our calculation and the midplane temperature is lower (Figure 1). The model with accretion heating is much warmer inwards of 2.5 AU, where it joins the no-accretion (passive) model – stellar irradiation dominates.
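Inverting this power law at the adopted condensation temperature of 170 K gives a quick estimate of the snow-line radius. The two-line sketch below is ours; the pure power law lands slightly inside the detailed model value of 0.7 AU:

```python
T0 = 140.0      # K, scaling coefficient of T(r) = T0 * r**(-3/7), r in AU
T_SNOW = 170.0  # K, adopted ice-condensation temperature

# Invert T0 * r**(-3/7) = T_SNOW for r (in AU)
r_snow = (T0 / T_SNOW) ** (7.0 / 3.0)
```

This yields about 0.64 AU, illustrating how a modest change in $`T_0`$ moves the snow line appreciably because of the shallow $`r^{-3/7}`$ slope.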
The result above is for our model with $`M_{\ast }=0.5M_{\mathrm{\odot }}`$ and $`T_{\ast }=4000`$ K, which is standard for T Tauri stars. It is interesting to see how the snow line changes for other realistic initial parameters. By retaining the same dust properties, this can be achieved using scaling relations rather than complete individual models, as shown by Bell et al. (1997) for the pre-main-sequence mass range of 0.5 to 2 $`M_{\mathrm{\odot }}`$. An important assumption at this point is that we have still retained the same (minimum-mass solar nebula) disk. With our set of equations, the midplane temperature coefficient, $`T_0`$, will scale with the stellar mass as $`T_0\propto M_{\ast }^{3/(10-k)}`$, where $`k`$ is a function of the total opacity in the disk and $`0\le k<2`$.
On the other hand, different disk masses for a fixed central star ($`M_{\ast }=0.5M_{\mathrm{\odot }}`$ and $`T_{\ast }=4000`$ K) can be modeled for zero accretion rate, by changing $`\mathrm{\Sigma }_0`$ by a factor of 10 in each direction (defined in §2). Here a remaining assumption is the radial dependence $`\mathrm{\Sigma }\propto r^{-3/2}`$; the latter could certainly be $`r^{-1}`$ (Cameron 1995), or a more complex function of $`r`$, but it is beyond the intent of our paper to deal with this. Moreover, we find minimal change in the midplane temperature in the $`r`$ range of interest to us (0.1–5 AU). The reason is a near cancellation between the amount of heating and the increased optical depth to the midplane. One could visualize the vertical structure of a passive disk for $`r=0.1-5.0`$ AU as consisting of three zones: (1) an optically thin heating and cooling region (dust heated by direct starlight), (2) an optically thin cooling, but optically thick heating layer, and (3) the midplane zone, where both heating and cooling occur in optically thick conditions. The rate of stellar heating of the disk per unit volume is directly proportional to the density, and affects the location and temperature of the irradiation layer. That is nearly cancelled (except for second-order terms) in the mean intensity which reaches the midplane. Therefore we find that $`T(r)`$ changes within $`\pm 10`$ K for a change in disk density ($`\mathrm{\Sigma }_0`$) of a factor of 10. Note that $`T(r)`$ is only approximately $`\propto r^{-3/7}`$ even for $`r=0.1-5.0`$ AU; the small effect of density on $`T(r)`$ has an $`r`$ dependence. However, for the purposes of this paper, i.e. our chosen volume of parameter space, the effect of disk mass on $`T(r)`$ and the “snow line” is insignificant, and we do not pursue the issue in more detail.
Note that for a disk with a heat source in the midplane, $`i.e.`$ with an accretion rate different from zero, the midplane $`T(r)`$ will be strongly coupled to the density, roughly $`\rho ^{1/4}`$, and will increase at every $`r`$ for higher disk masses (e.g. Lin & Papaloizou 1985).
## 5 The Shape of the Upper Surface of the Disk
The “snow line” calculation in the previous section was made under the assumption that the upper surface of the disk is perfectly concave and smooth at all radii, $`r`$. This is a very good description of such unperturbed disks, because thermal and gravitational instabilities are damped very efficiently (D’Alessio et al. 1999). Obviously this is not going to be the case when an already formed planet core distorts the disk. But even a small distortion of the disk’s surface may affect the thermal balance. The distortion need only be large compared to the grazing angle at which the starlight strikes the disk, $`\varphi (r)`$:
$$\varphi (r)=\frac{0.4R_{\ast }}{r}+r\frac{d}{dr}\left(\frac{h}{r}\right),$$
where $`h`$ is the local scale height. This small angle has a minimum at 0.4 AU and increases significantly only at very large distances: $`\varphi (r)\approx 0.005r^{-1}+0.05r^{2/7}`$ (e.g. see Chiang & Goldreich 1997).
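The quoted minimum can be verified by a brute-force scan of this expression; a small sketch (the grid resolution is arbitrary):

```python
def grazing_angle(r_au):
    """phi(r) ~ 0.005/r + 0.05 * r**(2/7), in radians, with r in AU."""
    return 0.005 / r_au + 0.05 * r_au ** (2.0 / 7.0)

radii = [0.01 * i for i in range(10, 501)]   # scan 0.10 ... 5.00 AU
r_min = min(radii, key=grazing_angle)        # minimum near 0.4 AU
```

Setting the derivative to zero gives the same location analytically, $`r_{min}=(0.005/(0.05\cdot 2/7))^{7/9}\approx 0.44`$ AU, and $`\varphi `$ there is only a few degrees, which is why small surface distortions matter.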
The amount of compression due to the additional mass of the planet, $`M_p`$, will depend on the Hill radius, $`R_H=r(\frac{M_p}{M_{\ast }})^{1/3}`$, and how it compares to the local scale height, $`h`$. The depth of the depression will be proportional to $`(R_H/h)^3`$. The resulting depressions (one on each side) will be in shadow from the central star, with a shaded area dependent on the grazing angle, $`\varphi (r)`$. The solid angle subtended by this shaded area from the midplane determines the amount of cooling and the new temperature in the sphere of influence of the planet core. The question then arises whether, during the timescale preceding the opening of the gap, the midplane temperature in the vicinity of the accreting planet core could drop below the ice condensation limit even for orbits with $`r`$ much shorter than the “snow line” radius in the undisturbed disk. The answer appears to be affirmative and a runaway develops whereby local ice condensation leads to rapid growth of the initial rocky core, which in turn deepens the depression in the disk and facilitates more ice condensation inside the planet’s sphere of influence. Details about the instability which develops in this case will be given in a separate paper.
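To illustrate when such a depression becomes significant, one can compare $`R_H`$ with $`h`$; the sketch below is ours, and the 10 Earth-mass core, mean molecular weight $`\mu =2.3`$, and 140 K midplane are illustrative assumptions rather than values from the text. It gives $`R_H/h`$ of order unity at 1 AU:

```python
import math

G = 6.674e-8       # cm^3 g^-1 s^-2
K_B = 1.381e-16    # erg K^-1
M_H = 1.673e-24    # g
M_SUN = 1.989e33   # g
M_EARTH = 5.972e27 # g
AU = 1.496e13      # cm

def scale_height(r_au, temp, m_star=0.5 * M_SUN, mu=2.3):
    """h = c_s / Omega, equivalent to the h/r relation given in the text."""
    r = r_au * AU
    c_s = math.sqrt(K_B * temp / (mu * M_H))
    omega = math.sqrt(G * m_star / r**3)
    return c_s / omega

def hill_radius(r_au, m_planet, m_star=0.5 * M_SUN):
    """R_H = r * (M_p / M_*)**(1/3)."""
    return r_au * AU * (m_planet / m_star) ** (1.0 / 3.0)

ratio = hill_radius(1.0, 10.0 * M_EARTH) / scale_height(1.0, 140.0)
```

Since the depression depth scales as $`(R_H/h)^3`$, a core of a few Earth masses perturbs the surface only weakly, while a ten Earth-mass core already carves a depression comparable to the scale height.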
## 6 Conclusion
When the large fraction of close-in extrasolar giant planets became apparent, we thought of questioning the standard notion of a distant “snow line” beyond 3 AU in a protoplanetary disk. Hence this paper. We revisited the issue by paying attention to the stellar irradiation and its radiative effects on the disk, thus limiting ourselves to passive or low accretion rate disks.
We find a snow line as close as 0.7 AU in a passive disk, and not much further away than 1.3 AU in a disk with a $`10^{-8}M_{\mathrm{\odot }}\mathrm{yr}^{-1}`$ accretion rate for $`M_{\ast }=0.5M_{\mathrm{\odot }}`$. The result is robust with respect to different reasonable model assumptions – similar values could in principle be inferred from existing disk models (Chiang & Goldreich 1997; D’Alessio et al. 1998). For more massive (and luminous) central stars, the snow line shifts outwards: to 1.0 AU (1 $`M_{\mathrm{\odot }}`$) and 1.6 AU (2 $`M_{\mathrm{\odot }}`$). The effect of different disk mass is much smaller for passive disks – the snow line shifts inwards by 0.08 AU for $`\mathrm{\Sigma }_0`$= 10<sup>4</sup>g cm<sup>-2</sup>. Our results differ from existing calculations (in that they bring the snow line even closer in) because the dust grains we used have higher albedo and are more forward throwing. The dust grains and the disk models we used are typical of T Tauri stars of age $`\sim `$1 Myr. So our conclusion is that, if such T Tauri disks are typical of protoplanetary disks, then the snow line in them could be as close in as 0.7 AU.
Our estimate of the snow line is accurate to within 10%, once the model assumptions are made. These assumptions are by no means obvious, and changing them can alter the numbers considerably. For a passive disk model, the assumptions that need to be justified are: the equilibrium of the disk, the lack of dust settling (i.e. gas and dust are well mixed), the adopted KMH properties of the dust grains, and the choice of molecular opacities. For a low-accretion disk model, one has to add to the above list the choice of viscous dissipation model (and $`\alpha `$=0.01).
Finally, we note that these estimates reflect a steady-state disk in hydrostatic equilibrium. The disk will be disturbed as planet formation commences, which may affect the thermal balance locally, given the small value of the grazing angle, $`\varphi `$. For a certain planet core mass, an instability can develop at orbits smaller than 1 AU which can lead to the formation of giant planets in situ. What then determines the division between terrestrial and giant planets in our Solar System remains unexplained (as it did even with a snow line at 2.7 AU).
We thank N. Calvet, B. Noyes, S. Seager, and K. Wood for reading the manuscript and helpful discussions, and the referee for very thoughtful questions.
# November 1999 Preprint MPI-Pth/99-55 Neutrino Flight Times in Cosmology
## Abstract
If neutrinos have a small but non-zero mass, time-of-flight effects for neutrino bursts from distant sources can yield information on the large-scale geometry of the universe, the effects being proportional to $`\int a\,dt`$, where $`a`$ is the cosmological expansion parameter. In principle, absolute physical determinations of the Hubble constant and the acceleration parameter are possible. Practical realization of these possibilities would depend on neutrino masses being in a favorable range and the future development of very large detectors.
The existence of a small but non-zero mass for the neutrino, combined with very long flight times from astronomical sources, offers the possibility for some fundamental investigations, such as tests of the assumptions of relativity. There is increasing evidence for the existence of a small neutrino mass, and here we would like to note the further possibility of using such effects to study the parameters of cosmology. We require the detection of neutrino bursts from events at cosmological distances. This appears very difficult and may never be feasible. On the other hand we would have, along with such classic measures as the dependence on distance of the apparent brightness of “standard candles” or the angular size of “standard measuring rods”, a totally different method for investigating the large scale geometry of the universe. The method might be called “physical” as opposed to “geometric” and would not be subject to questions concerning evolutionary effects in the way that the “standard” candles and rods are.
As opposed to an exactly massless particle, which always travels on the light cone, a neutrino with a small mass has a velocity deviating from the speed of light in a way which depends on the cosmological epoch. Hence the delay in arrival time of the neutrino compared with a photon emitted in the same event, or the relative arrival times among neutrinos of different masses emitted in the same event, contains information on the cosmological epochs through which the neutrino has passed.
This is seen in the standard FRW metric $`ds^2=dt^2-a^2(t)(d𝐱)^2`$, where we define $`a(t)`$ to be the expansion factor of the universe normalized to its present value: $`a(t)=R(t)/R(now)`$, so that $`a(now)=1`$. We proceed by finding an equation for the coordinate velocity $`dx^i/dt`$, where $`x^i`$ is along the particle’s flight direction. First we express $`dx^i/dt`$ in terms of $`P^i(t)`$, the spatial part of the contravariant four-momentum $`mdx^\mu /ds`$. From the definition of the metric we have $`a(t)dx^i/dt=[a(t)P^i(t)]/\sqrt{m^2+[a(t)P^i(t)]^2}`$.
Expanding for the relativistic case $`P\gg m`$ we obtain $`a(t)dx^i/dt\approx 1-\frac{1}{2}m^2/[a(t)P^i(t)]^2`$. To find $`P^i(t)`$, we now make use of the fact that the covariant or “canonical momentum” $`P_i`$ is constant (since nothing depends on the $`𝐱`$-coordinate). Furthermore, since the different kinds of momenta are related through the metric tensor, they all become equal at $`t(now)`$, where $`a=1`$. Hence we can identify the constant covariant momentum as $`P(now)`$, the momentum at the detector. Thus from $`P^i=g^{ij}P_j=(1/a^2)P_i`$, we obtain $`P^i=(1/a^2)P(now)`$. Thus we finally have
$$\frac{dx}{dt}\approx \frac{1}{a(t)}-a(t)\frac{1}{2}\frac{m^2}{P^2(now)}.$$
(1)
Or introducing $`\mathrm{\Delta }x`$ for the difference in the $`x`$ coordinate of two different particles of mass $`m_2`$ and $`m_1`$ emitted in the same event
$$\frac{d(\mathrm{\Delta }x)}{dt}\approx a(t)\frac{1}{2}\left[\frac{m_1^2}{P_1^2(now)}-\frac{m_2^2}{P_2^2(now)}\right]$$
(2)
At the present epoch with $`a=1`$, $`\mathrm{\Delta }x`$ is just the spatial separation of the two particles. Integrating, we have for this separation, or in view of $`v\approx c=1`$ for the time difference in arrival at a detector
$$\mathrm{\Delta }t\approx \mathrm{\Delta }x\approx \int a(t)\,dt\,\frac{1}{2}\left[\frac{m_1^2}{P_1^2(now)}-\frac{m_2^2}{P_2^2(now)}\right].$$
(3)
A somewhat different effect follows if we consider a single mass state, or equivalently if the mass differences in question are undetectably small. Given a non-zero mass there is still a time-of-flight effect due to the variation of velocity with energy. This was the original observation of Zatsepin at the beginning of this subject, namely that in a burst there is a range of energies in general and therefore a non-zero mass for the neutrino leads to a dispersion of velocities, and so there is a spreading as the burst travels. For two momenta $`P(now)`$ and $`P^{\prime }(now)`$ we have
$$\mathrm{\Delta }t\approx \int a(t)\,dt\,\frac{1}{2}m^2\left[\left(\frac{1}{P(now)}\right)^2-\left(\frac{1}{P^{\prime }(now)}\right)^2\right].$$
(4)
Thus the spreading effect is governed by the same cosmological factor $`\int a\,dt`$ as the time delay. The integral over $`t`$ runs from the time of emission to the present (“$`t(now)`$”) and contains information on the cosmology. With the expansion of $`a(t)`$ for recent epochs, $`a(t)=1+H[t-t(now)]-\frac{1}{2}qH^2[t-t(now)]^2+\mathrm{\cdots }`$, and the redshift parameter $`z=1/a-1=-H[t-t(now)]+(1+q/2)H^2[t-t(now)]^2+\mathrm{\cdots }`$, we can give the results of the integration in terms of the directly observable $`z`$, in a low-$`z`$ approximation to order $`z^2`$, in terms of the present Hubble constant $`H`$ and acceleration parameter $`q`$:
$$\mathrm{\Delta }t\approx \frac{z}{H}\left[1-\frac{3+q}{2}z+\mathrm{\cdots }\right]\frac{1}{2}\left[\frac{m_1^2}{P_1^2(now)}-\frac{m_2^2}{P_2^2(now)}\right]$$
(5)
while for the burst spreading we have
$$\mathrm{\Delta }t\approx \frac{z}{H}\left[1-\frac{3+q}{2}z+\mathrm{\cdots }\right]\frac{1}{2}m^2\left[\left(\frac{1}{P(now)}\right)^2-\left(\frac{1}{P^{\prime }(now)}\right)^2\right].$$
(6)
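The quadratic expansion above can be checked numerically by integrating $`a\,dt`$ for a toy $`a(t)`$ with given $`H`$ and $`q`$ and comparing with $`(z/H)[1-\frac{3+q}{2}z]`$. The sketch below is ours; it works in units of $`H=1`$, and the chosen $`q`$ and $`z`$ are arbitrary test values:

```python
import math

def cosmo_factor_numeric(q, z, h0=1.0):
    """Integral of a dt from emission to now for a(u) = 1 + u - q*u**2/2,
    with u = H*(t - t_now); the emission epoch is fixed by 1/a - 1 = z."""
    a_target = 1.0 / (1.0 + z)
    lo, hi = -1.0, 0.0                  # bisect for u_e with a(u_e) = a_target
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 1.0 + mid - 0.5 * q * mid**2 > a_target:
            hi = mid
        else:
            lo = mid
    u_e = 0.5 * (lo + hi)
    # antiderivative of a(u) is u + u^2/2 - q*u^3/6, evaluated from u_e to 0
    return -(u_e + u_e**2 / 2.0 - q * u_e**3 / 6.0) / h0

def cosmo_factor_series(q, z, h0=1.0):
    """Low-z closed form: (z/H) * [1 - (3+q)/2 * z]."""
    return z / h0 * (1.0 - 0.5 * (3.0 + q) * z)

num = cosmo_factor_numeric(0.5, 0.05)
ser = cosmo_factor_series(0.5, 0.05)
```

For $`z=0.05`$ the two agree to better than a percent, with the residual of the expected order $`z^2`$.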
The first thing to notice about these formulas is that they provide a direct physical determination of $`H`$, the Hubble constant. Knowing the neutrino mass or masses, the neutrino energy and the redshift of the source, measurement of a $`\mathrm{\Delta }t`$ determines $`H`$. It would certainly be interesting to compare a determination of $`H`$ in this “absolute” way with the classical astronomical results.
Observe that it is not necessary to know the distance to the source – or rather, a “distance” is determined through the particle velocities and their $`\mathrm{\Delta }x`$ at the detector. Thus in addition to other classical distance measures in cosmology, like the “luminosity distance”, we may say that there is another, the “neutrino” or “flight time” distance, $`d_\nu `$, defined through $`\mathrm{\Delta }x=\mathrm{\Delta }vd_\nu `$, where $`\mathrm{\Delta }v`$ is $`\frac{1}{2}\left[\frac{m_1^2}{P_1^2(now)}-\frac{m_2^2}{P_2^2(now)}\right]`$, or $`\frac{1}{2}m^2\left[(\frac{1}{P(now)})^2-(\frac{1}{P^{\prime }(now)})^2\right]`$.
The cosmic acceleration $`q`$ appears in the quadratic correction. Here again, knowing the neutrino masses, their energy, the redshift $`z`$ of the source, and the Hubble constant, we could—in principle—find the acceleration parameter from just one event. Recent reports of a non-zero value of $`q`$ from a “standard candle” method using distant supernovas have attracted considerable attention. It would be of the greatest interest if a totally different method could confirm these results.
Although we have carried out the above arguments for the spatially flat ($`k=0`$) FRW metric, it may be verified that due to the existence of a similar constant “canonical momentum” in the other, spatially curved ($`k=\pm 1`$) FRW metrics, the same arguments go through with the same results.
The most straightforward application of these ideas would be to situations where bursts of photons ($`m=0`$) or different neutrino mass states, arriving distinctly separated in time, are detected. Assuming the neutrino masses have been well established by the time such detections become feasible, two well-measured events with determined redshifts, together with Eq. (5), would determine the Hubble constant and the acceleration parameter. Unfortunately, the observation of events with the different mass components well separated seems unlikely. Given that we expect neutrino masses in the eV range or less, we can express our basic flight-time factor as
$$\frac{(m/\mathrm{eV})^2}{2(P/\mathrm{GeV})^2}\simeq 50\,\mu \mathrm{sec}/\mathrm{Mpc}$$
(7)
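As a worked example of this estimate, the flat-space delay for a 1 eV neutrino at 1 GeV over 1 Mpc is about 50 μs; the sketch below is ours, and for truly cosmological distances the factor $`D/c`$ would be replaced by the $`\int a\,dt`$ factor of Eq. (3):

```python
MPC_CM = 3.086e24   # cm
C_CM_S = 2.998e10   # cm s^-1

def flight_delay(m_ev, p_gev, d_mpc):
    """Delay relative to a massless particle: (D/c) * m^2 / (2 P^2),
    with m in eV and P in GeV (1 eV / 1 GeV = 1e-9)."""
    velocity_deficit = 0.5 * (m_ev * 1.0e-9 / p_gev) ** 2
    return d_mpc * MPC_CM / C_CM_S * velocity_deficit   # seconds

dt = flight_delay(1.0, 1.0, 1.0)   # ~5e-5 s, i.e. roughly 50 microseconds
```

For a hypothetical 30 eV mass at 1 GeV over 1000 Mpc the same formula gives delays approaching a minute, consistent with the "delays of minutes" mentioned below.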
It appears that even at a thousand Mpc, a substantial part of the way across the visible universe, we may only expect delays of some msec. There is of course the chance, not yet completely excluded, that the heavier neutrino masses are in the range of some tens of eV, which could lead to delays of minutes for the most distant sources at GeV energies. Although $`\mu `$sec timing or better is well within the capabilities of photon and neutrino detectors, the duration of the production event and its fluctuations are likely to involve much longer time scales. For example, gamma-ray bursts or supernova neutrino bursts last on the order of several seconds. So, unless heavier neutrinos and/or lower energies are accessible, this will make the relative delay of different neutrino mass states, or the delay relative to a photon signal, difficult to extract. A detailed analysis involving high statistics, with sources at various distances and theoretical modelling of the production event, would probably be necessary.
In view of these probable difficulties, it is important to note that the pulse spreading and related effects represented by Eq. (6) offer potentially observable time-of-flight effects involving a cosmological factor, even if distinct masses cannot be resolved. However, unless the event is intrinsically very short, the modelling of the event and the assumption that a certain type of event is the same at different epochs become necessary. We should stress that Eq. (6) only represents travel-time effects on the width of the burst; these will in general have to be folded with the redshift of the intrinsic time scale of the event itself. Note, however, that the latter involve $`z`$ only linearly and do not involve a quadratic term.
One may even carry this kind of speculation a step further and consider the cosmology of neutrino oscillations. That is, we can consider neutrino mass differences so extremely small that coherent oscillation phenomena are possible over cosmological distances. Although not very likely of practical realization, the question involves some interesting conceptual issues and will be presented separately.
It is of course intriguing and of conceptual interest that there is a different, “non-geometric” way of investigating cosmology. As a practical matter, because of the small rate of neutrino interactions and the reduction of the flux with distance, the only hope that these speculations might become reality would seem to lie with the further development of the extremely big, km-size detectors. Since time-of-flight effects decrease with increasing particle energy, for the time delay and burst spreading effects lower neutrino energies, and instrumentation of detection systems for lower neutrino energies, are favored. Unfortunately, detection cross sections are smaller at lower energy, so even larger detectors would seem to be called for. On the other hand, for the hypothetical oscillation effects alluded to above, higher energies seem to be indicated; there, cross sections are larger and may tend to compensate for the general decrease of flux with energy.
A fascinating aspect of studying cosmology by these neutrino-based techniques, should it ever be possible, is that the “look back time” is potentially much deeper than with optical methods. A neutrino burst—highly redshifted—can reach us essentially undisturbed after neutrino decoupling, some minutes into the Big Bang. If such bursts exist (say from highly overdense regions collapsing to black holes) the present methods could allow us to determine the cosmology back to these early epochs. At such large values of $`z`$, of course, the above small-$`z`$ approximations would have to be replaced by expressions involving the full cosmological model, as would also be necessary for other high-$`z`$ observations. If even more weakly interacting and non-zero mass but highly relativistic particles than the usual neutrinos exist (say right-handed neutrinos) the “look back time” goes deeper but the detection difficulties get worse.
The only presently known candidates for our procedures are the photon and the neutrinos. Naturally, if heavier but neutral and sufficiently long-lived objects should exist and be emitted in cosmic events (say the WIMP of dark matter searches), there would be a substantial enhancement of time-of-flight effects. I am thankful to V. Zakharov for this last remark and to G. Raffelt for helpful comments on the manuscript.
# SU(3) realization of the rigid asymmetric rotor within the IBM
## Abstract
It is shown that the spectrum of the asymmetric rotor can be realized quantum mechanically in terms of a system of interacting bosons. This is achieved in the SU(3) limit of the interacting boson model by considering higher-order interactions between the bosons. The spectrum corresponds to that of a rigid asymmetric rotor in the limit of infinite boson number.
It is well known that the dynamical symmetry limits of the simplest version of the interacting boson model (IBM) , IBM-1, correspond to particular types of collective nuclear spectra. A Hamiltonian with U(5) dynamical symmetry has the spectrum of an anharmonic vibrator, the SU(3) Hamiltonian has the rotation-vibration spectrum of vibrations around an axially symmetric shape and the SO(6) Hamiltonian yields the spectrum of a $`\gamma `$-unstable nucleus . There exists another interesting type of spectrum frequently used to interpret nuclear collective excitations which corresponds to the rotation of a rigid asymmetric top and which, up to now, has found no realization in the context of the IBM-1. The purpose of this letter is to extend the IBM-1 towards higher-order terms such that a realization of the rigid non-axial rotor of Davydov and Filippov becomes possible. A pure group-theoretical approach is used that allows to establish the connection between algebraic and geometric Hamiltonians not only from the comparison of their spectra but also from the underlying group properties.
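For reference, the two $`2^+`$ levels of the Davydov–Filippov rotor can be obtained by diagonalizing the triaxial-rotor Hamiltonian in a basis of states with good $`L`$ and $`K`$. The sketch below is a minimal illustration under common assumptions that are ours (irrotational-flow moments of inertia $`I_k=4B\beta ^2\mathrm{sin}^2(\gamma -2\pi k/3)`$ and units $`\mathrm{}=B\beta ^2=1`$); at $`\gamma =30^{\mathrm{o}}`$ it reproduces the well-known ratio $`E(2_2^+)/E(2_1^+)=2`$:

```python
import math

def rotor_2plus_levels(gamma_deg, b_beta2=1.0):
    """Energies of the two 2+ states of a rigid triaxial rotor.

    H = sum_k A_k * L_k^2 with A_k = 1/(2 I_k) and irrotational moments
    I_k = 4*b_beta2*sin(gamma - 2*pi*k/3)**2.  For L = 2 the even-K basis
    {|2,0>, (|2,2>+|2,-2>)/sqrt(2)} reduces H to a 2x2 matrix.
    """
    g = math.radians(gamma_deg)
    a = [1.0 / (8.0 * b_beta2 * math.sin(g - 2.0 * math.pi * k / 3.0) ** 2)
         for k in (1, 2, 3)]
    a12 = 0.5 * (a[0] + a[1])
    h00 = 6.0 * a12                        # K = 0 diagonal element, L(L+1) = 6
    h22 = 2.0 * a12 + 4.0 * a[2]           # K = 2 diagonal element
    h02 = math.sqrt(3.0) * (a[0] - a[1])   # symmetrized K-mixing element
    mean = 0.5 * (h00 + h22)
    half = math.hypot(0.5 * (h00 - h22), h02)
    return mean - half, mean + half

e_low, e_high = rotor_2plus_levels(30.0)   # e_high/e_low -> 2 at gamma = 30 deg
```

Away from $`\gamma =30^{\mathrm{o}}`$ the second $`2^+`$ state moves up relative to the first, which is the characteristic signature distinguishing the rigid rotor from a $`\gamma `$-soft spectrum.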
Let us first recall some of the aspects that have enabled a geometric understanding of the IBM. The relation between the Bohr-Mottelson collective model and the IBM has been established on the basis of an intrinsic (or coherent) state for the IBM. Via this coherent-state formalism, a potential energy surface $`E(\beta ,\gamma )`$ in the quadrupole deformation variables $`\beta `$ and $`\gamma `$ can be derived for any IBM Hamiltonian, and the equilibrium deformation parameters $`\beta _0`$ and $`\gamma _0`$ are then found by minimizing $`E(\beta ,\gamma )`$. It is by now well established that a one- and two-body IBM-1 Hamiltonian can give rise only to axially symmetric equilibrium shapes ($`\gamma _0=0^{\mathrm{o}}`$ or $`60^{\mathrm{o}}`$) and that a triaxial minimum in the potential energy surface requires at least three-body interactions .
Since the relationship between the $`\gamma `$-unstable model and the rigid triaxial rotor has always been an open question, Otsuka et al. investigated in detail the SO(6) solutions of the one- and two-body IBM-1 Hamiltonian. They found that the triaxial intrinsic state with $`\gamma _0=30^{\mathrm{o}}`$ produces, after angular momentum projection, the exact SO(6) eigenfunctions for small numbers of bosons $`N`$. Thus they conclude that for finite boson systems triaxiality reduces to $`\gamma `$-unstability.
If three-body terms are included in the IBM-1 Hamiltonian a triaxial minimum of the nuclear potential energy surface can be found and numerous studies of the corresponding spectra have been performed . However, the existence of a minimum in the potential energy surface at $`0^\mathrm{o}<\gamma <60^\mathrm{o}`$ is not a sufficient condition for a rigid triaxial shape since this minimum can be shallow indicating a $`\gamma `$-soft nucleus. The interest in higher-order terms in IBM-1 has been renewed recently by the challenging problem of anharmonicities of double-phonon excitations in well-deformed nuclei . Although their microscopic origin is not clear at the moment, the occurrence of higher-order interactions can be understood qualitatively as a result of the projection of two-body interactions in the proton-neutron IBM onto the symmetric IBM-1 subspace. For example, it is well known that triaxial deformation arises within the SU(3) dynamical symmetry limit of the proton-neutron IBM without any recourse to interactions of order higher than two.
Nuclear collective states are treated in IBM-1 in terms of $`N`$ bosons of two types: monopole ($`l^\pi =0^+`$) $`s`$ bosons and quadrupole ($`l^\pi =2^+`$) $`d`$ bosons . For a given nucleus, $`N`$ is half the number of valence nucleons (or holes) and is thus fixed. Analytical solutions can be constructed for particular forms of the Hamiltonian which correspond to one of the three possible reduction chains of the dynamical group of the model, U(6):
$$\begin{array}{ccc}& & \text{U(5)}\supset \text{SO(5)}\supset \text{SO(3)}\hfill \\ & \nearrow & \\ \text{U(6)}& \rightarrow & \text{SU(3)}\supset \text{SO(3)}\hfill \\ & \searrow & \\ & & \text{SO(6)}\supset \text{SO(5)}\supset \text{SO(3)}\hfill \end{array}.$$
(1)
The SU(3) dynamical symmetry Hamiltonian corresponds to a rotation-vibration spectrum of vibrations around an axially symmetric shape, which in the limit $`N\rightarrow \mathrm{\infty }`$ goes over into the spectrum of a rigid axial rotor . The two other reduction chains in (1) contain the SO(5) group, whose Casimir invariant exactly corresponds to the $`\gamma `$-independent potential of Wilets and Jean and is responsible for the $`\gamma `$-soft character of the spectrum .
To obtain a rigid (at least for $`N\rightarrow \mathrm{\infty }`$) triaxial rotor the starting point is the SU(3) limit of the IBM-1 with higher-order terms in the Hamiltonian. This approach is inspired by Elliott’s SU(3) model where the rotor dynamics is well established for SU(3) irreducible representations (irreps) with large dimensions.
Following Ref. , we consider the most general SU(3) dynamical symmetry Hamiltonian constructed from the second-, third- and fourth-order invariant operators of the SU(3) $`\supset `$ SO(3) integrity basis :
$$H_{\mathrm{IBM}}=H_0+aC_2+bC_2^2+cC_3+d\mathrm{\Omega }+e\mathrm{\Lambda }+fL^2+gL^4+hC_2L^2.$$
(2)
Here the following notation is used:
$`\begin{array}{cc}\hfill C_2& \equiv C_2[\text{SU(3)}]=2Q^2+\frac{3}{4}L^2,\hfill \end{array}`$ (10)
$`\begin{array}{cc}\hfill C_3& \equiv C_3[\text{SU(3)}]=\frac{4}{9}\sqrt{35}[Q\times Q\times Q]_0^{(0)}-\frac{\sqrt{15}}{2}[L\times Q\times L]_0^{(0)},\hfill \end{array}`$
$`\begin{array}{cc}\hfill \mathrm{\Omega }& =3\sqrt{\frac{5}{2}}[L\times Q\times L]_0^{(0)},\hfill \end{array}`$
$`\begin{array}{cc}\hfill \mathrm{\Lambda }& =\left[L\times Q\times Q\times L\right]_0^{(0)},\hfill \end{array}`$
where
$`L_q=\sqrt{10}[d^+\times \stackrel{~}{d}]_q^{(1)},`$ (11)
$`Q_q=[d^+\times s+s^+\times \stackrel{~}{d}]_q^{(2)}-{\displaystyle \frac{\sqrt{7}}{2}}[d^+\times \stackrel{~}{d}]_q^{(2)},`$ (12)
are SU(3) generators, satisfying the standard commutation relations,
$$\begin{array}{c}[L_q,L_q^{}]=\sqrt{2}(1q1q^{}|1q+q^{})L_{q+q^{}},\hfill \\ [L_q,Q_q^{}]=\sqrt{6}(1q2q^{}|2q+q^{})Q_{q+q^{}},\hfill \\ [Q_q,Q_q^{}]=\frac{3}{4}\sqrt{\frac{5}{2}}(2q2q^{}|1q+q^{})L_{q+q^{}}.\hfill \end{array}$$
(13)
In the context of the shell model, an SU(3) Hamiltonian of the type (2) has been considered by a number of authors . Specifically, it was established that the rotor Hamiltonian can be constructed from $`L^2`$ and the SU(3) invariants $`\mathrm{\Omega }`$ and $`\mathrm{\Lambda }`$. This follows from the asymptotic properties of these SU(3) invariants whose spectra do correspond to a rigid triaxial rotor for SU(3) irrep labels $`\lambda ,\mu \to \infty `$. In addition, a relation between ($`\lambda ,\mu `$) and the collective variables ($`\beta ,\gamma `$) characterizing the shape of the rotor can be derived . In contrast, all attempts so far to construct a rigid rotor in the SU(3) dynamical symmetry limit of the IBM-1, even when including higher-order terms, have been restricted to axial shapes .
A noteworthy difference between the SU(3) realizations in the shell model and the IBM concerns the irreps that occur lowest in energy. In the shell model the ground-state irrep is dictated by the leading shell-model configuration. For example, it is (8,4) for <sup>24</sup>Mg and (30,8) for <sup>168</sup>Er . In the IBM the lowest representation is determined by the Hamiltonian. One can show that for an SU(3) Hamiltonian with two- and three-body interactions it is either $`(2N,0)`$ or $`(0,N)`$, which corresponds to an axially symmetric nucleus. The essential point exploited here is that the choice of the lowest SU(3) irrep becomes more general for an IBM-1 Hamiltonian with up to four-body terms.
One can show that with the linear combination
$$F=aC_2+bC_2^2,$$
(14)
any given irrep $`(\lambda _0,\mu _0)`$ that occurs for a system of $`s`$ and $`d`$ bosons can, in principle, be brought lowest in energy. The proof is as follows. The eigenvalue of the second-order SU(3) Casimir invariant is
$$g_2(\lambda ,\mu )=\lambda ^2+\mu ^2+\lambda \mu +3\lambda +3\mu .$$
(15)
Within a given U(6) irrep $`[N]`$ the following SU(3) $`(\lambda ,\mu )`$ values are admissible :
$$\begin{array}{cc}(\lambda ,\mu )=\hfill & (2N,0),(2N-4,2),\mathrm{\dots }(2,N-1)\mathrm{or}(0,N),\hfill \\ & (2N-6,0),(2N-10,2),\mathrm{\dots }(2,N-4)\mathrm{or}(0,N-3),\hfill \\ & \mathrm{\dots }\hfill \\ & (4,0),(0,2)\mathrm{for}N(\mathrm{mod3})=2,\hfill \\ & (2,0)\mathrm{for}N(\mathrm{mod3})=1,\hfill \\ & (0,0)\mathrm{for}N(\mathrm{mod3})=0.\hfill \end{array}$$
(16)
For the first row in this equation (which corresponds to the largest values of Casimir invariants) one has the relation
$$2N=\lambda +2\mu ,$$
(17)
and consequently $`g_2(\lambda ,\mu )\equiv g_N(\lambda )`$. The eigenvalues of (14) are thus given by
$$F_N(\lambda )=ag_N(\lambda )+bg_N^2(\lambda ).$$
(18)
Minimization of this function with respect to $`\lambda `$ gives the following condition for the irrep $`(\lambda _0,\mu _0)`$ to be lowest in energy:
$$\frac{a}{b}=-2g_2(\lambda _0,\mu _0).$$
(19)
It is worth mentioning that one fails to obtain a similar minimization condition with just the second- and third-order SU(3) Casimir invariants.
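As a quick numerical cross-check of Eqs. (15)–(19) (a sketch; the helper functions and the sample values $`N=15`$, $`b=0.1`$ keV are our own choices, echoing the numerical example given later in the text), minimizing $`F`$ over the first row of Eq. (16) selects the irrep whose $`g_2`$ eigenvalue equals $`-a/2b`$:

```python
# Check of Eqs. (15)-(19): within the first row of Eq. (16), the quadratic
# form F = a*C2 + b*C2^2 is minimized by the irrep whose g2 eigenvalue is
# closest to -a/(2b).

def g2(lam, mu):
    # Eigenvalue of the second-order SU(3) Casimir invariant, Eq. (15)
    return lam**2 + mu**2 + lam*mu + 3*lam + 3*mu

def first_row_irreps(N):
    # First row of Eq. (16); every entry obeys lam + 2*mu = 2N (Eq. 17)
    return [(2*N - 4*k, 2*k) for k in range(N // 2 + 1) if 2*N - 4*k >= 0]

N = 15                     # sample boson number
irreps = first_row_irreps(N)
assert all(lam + 2*mu == 2*N for lam, mu in irreps)   # Eq. (17)

target = (10, 10)
b = 0.1                    # keV (illustrative)
a = -2 * b * g2(*target)   # minimization condition; numerically -72 keV here

F = lambda lm: a * g2(*lm) + b * g2(*lm)**2           # eigenvalue of Eq. (14)
assert min(irreps, key=F) == target
print(a)  # -72.0
```

With these values the minimum indeed falls on (10,10), the irrep discussed in the text's numerical example.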
Once $`(\lambda _0,\mu _0)`$ is fixed, the constant part of the Hamiltonian (2) takes the form
$$\stackrel{~}{H}_0=H_0-bg_2^2(\lambda _0,\mu _0)+cg_3(\lambda _0,\mu _0),$$
(20)
where $`g_3(\lambda ,\mu )`$ is the expectation value of $`C_3`$\[SU(3)\]
$$g_3(\lambda ,\mu )=\frac{1}{9}(\lambda -\mu )(2\lambda +\mu +3)(\lambda +2\mu +3),$$
(21)
and the Hamiltonian (2) can be rewritten as
$$H_{\mathrm{IBM}}=\stackrel{~}{H}_0+d\mathrm{\Omega }+e\mathrm{\Lambda }+\stackrel{~}{f}L^2+gL^4,$$
(22)
where $`\stackrel{~}{f}=f+hg_2(\lambda _0,\mu _0)`$.
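A small sanity check on the cubic invariant (a sketch; the function name is ours): the expression for $`g_3`$ is antisymmetric under $`\lambda \leftrightarrow \mu `$ and therefore vanishes for self-conjugate irreps $`\lambda =\mu `$:

```python
from fractions import Fraction

def g3(lam, mu):
    # Expectation value of C3[SU(3)], Eq. (21); exact rational arithmetic
    return Fraction((lam - mu) * (2*lam + mu + 3) * (lam + 2*mu + 3), 9)

# Antisymmetry under interchange of the labels: g3(mu, lam) = -g3(lam, mu)
assert g3(8, 4) == -g3(4, 8)

# Self-conjugate irreps (lam = mu) carry a vanishing cubic invariant;
# this includes the (10,10) irrep used in the text's numerical example.
assert g3(10, 10) == 0
print(g3(8, 4))  # 1748/9
```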
To see the relation between this Hamiltonian and that of the rotor, we rewrite (22) further as
$$H_{\mathrm{IBM}}=\stackrel{~}{H}_0+d(L_iQ_{ij}L_j)+e(L_iQ_{ik}Q_{kj}L_j)+\stackrel{~}{f}L^2+gL^4,$$
(23)
where $`Q_{ij}`$ are cartesian components of the quadrupole tensor (12). This operator has the same functional form as the rigid rotor Hamiltonian
$$H_{\mathrm{rotor}}=\sum _iA_i\mathrm{L}_i^2=(L_iM_{ij}L_j),$$
(24)
where $`\mathrm{L}_i`$ ($`L_i`$) are the components of the angular momentum $`L`$ in the intrinsic (laboratory) frame and $`M_{ij}`$ is the moment-of-inertia tensor with constant components. The inertia parameters $`A_i`$ are related to the principal moments of inertia $`\mathcal{I}_i`$ as
$$A_i=\frac{\mathrm{}^2}{2\mathcal{I}_i}.$$
(25)
The rotor moment-of-inertia tensor is connected with the SU(3) quadrupole tensor components through
$$M_{ij}=[\stackrel{~}{f}+gL(L+1)]\delta _{ij}+dQ_{ij}+eQ_{ik}Q_{kj}.$$
(26)
Note that, in general, the components of the quadrupole tensor $`Q_{ij}`$ fluctuate around their average values. When these fluctuations are negligible, the spectrum of the IBM-1 Hamiltonian (23) is close to the spectrum of a rigid asymmetric rotor with the corresponding moments of inertia.
The analogy between the rigid rotor and SU(3) dynamical symmetry Hamiltonians can be studied from a group-theoretical point of view. The dynamical group of the quantum rotor is the semidirect product T<sub>5</sub> $`\wedge `$ SO(3), where T<sub>5</sub> is generated by the five components of the collective quadrupole operator $`\overline{Q}`$. The operators $`\overline{Q}`$ and $`L`$ satisfy the commutation relations
$$\begin{array}{c}[L_q,L_q^{}]=\sqrt{2}(1q1q^{}|1q+q^{})L_{q+q^{}},\hfill \\ [L_q,\overline{Q}_q^{}]=\sqrt{6}(1q2q^{}|2q+q^{})\overline{Q}_{q+q^{}},\hfill \\ [\overline{Q}_q,\overline{Q}_q^{}]=0,\hfill \end{array}$$
(27)
which define the rotor Lie algebra t<sub>5</sub> $`\wedge `$ so(3). The only difference between (13) and (27) is in the last commutator.
The replacement $`Q\to Q/\sqrt{C_2}`$ in the su(3) algebra leads to $`[Q_q,Q_q^{}]\to 0`$ for $`\lambda ,\mu \gg L`$. Thus for large $`\lambda `$ and $`\mu `$ the su(3) algebra contracts to the rigid rotor algebra t<sub>5</sub> $`\wedge `$ so(3). The irreps of t<sub>5</sub> $`\wedge `$ so(3) are characterized by the $`\beta `$ and $`\gamma `$ shape variables which can be related to the SU(3) irrep labels $`\lambda `$ and $`\mu `$ as in Ref. :
$$\begin{array}{c}\kappa \beta \mathrm{cos}\gamma =\frac{1}{3}(2\lambda +\mu +3),\hfill \\ \kappa \beta \mathrm{sin}\gamma =\frac{1}{\sqrt{3}}(\mu +1),\hfill \end{array}$$
(28)
where $`\kappa `$ has to be determined from the parameterization
$$\overline{Q}_q=\kappa \beta [\delta _{q0}\mathrm{cos}\gamma +\frac{1}{\sqrt{2}}(\delta _{q2}+\delta _{q,-2})\mathrm{sin}\gamma ],$$
(29)
with $`\beta \ge 0`$ and $`0^\mathrm{o}\le \gamma \le 60^\mathrm{o}`$.
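Equation (28) can be checked numerically (a sketch; the helper name is ours). Dividing the two relations gives $`\mathrm{tan}\gamma =\sqrt{3}(\mu +1)/(2\lambda +\mu +3)`$, so a self-conjugate irrep such as (10,10) maps onto maximal triaxiality, $`\gamma =30^\mathrm{o}`$, while $`(2N,0)`$ is nearly axial:

```python
import math

def gamma_deg(lam, mu):
    # tan(gamma) = sqrt(3)*(mu+1)/(2*lam+mu+3), from the ratio of Eqs. (28)
    return math.degrees(math.atan2(math.sqrt(3) * (mu + 1), 2*lam + mu + 3))

assert abs(gamma_deg(10, 10) - 30.0) < 1e-9   # maximal asymmetry
assert gamma_deg(30, 0) < 2.0                  # (2N,0) with N=15: near-axial
print(round(gamma_deg(10, 10), 6))  # 30.0
```

This is consistent with the comparison made later between the (10,10) spectrum and an asymmetric rotor of highest asymmetry.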
The difference between the shell model SU(3) realization and the SU(3) dynamical symmetry limit of the IBM can be visualized on a $`(\beta ,\gamma )`$ plot which gives the relation between $`(\beta ,\gamma )`$ and the SU(3) labels $`(\lambda ,\mu )`$ (see Figure 1). The SU(3) irreps valid for the IBM (marked by circles) are only a subset of those which are allowed in the shell model in accordance with the Pauli principle (e.g., Fig. 2 in Ref. ).
From a group-theoretical point of view, the difference is seen from the following considerations. The invariant symmetry group of the asymmetric rotor is the point symmetry group D<sub>2</sub>, whose irreps can be classified as $`A_1`$, $`B_1`$, $`B_2`$ and $`B_3`$. In the contraction limit, the $`(\lambda ,\mu )`$ irrep of SU(3) reduces to one of the D<sub>2</sub> irreps according to the even or odd values of $`\lambda `$ or $`\mu `$. Since the IBM only allows even $`\lambda `$ and $`\mu `$, only totally symmetric $`A_1`$ levels of the asymmetric rotor can be represented. These are also the asymmetric rotor levels, which should be considered in connection with nuclear spectra.
As an example, in Figure 2 the spectrum of the Hamiltonian (2) is shown with the parameters $`a=-72`$ keV, $`b=0.1`$ keV, $`d=0.1`$ keV, $`f=25`$ keV, $`c=e=g=h=0`$ and $`H_0=12960`$ keV for the two lowest SU(3) irreps $`(10,10)`$ and $`(14,8)`$ of an $`N=15`$ boson system and compared with the spectrum of an asymmetric rotor of highest asymmetry ($`\gamma =30^\mathrm{o}`$). The matrices of the operators $`\mathrm{\Omega }`$ and $`\mathrm{\Lambda }`$ in Elliott’s basis can be found in Refs. . The SU(3) spectrum consists of $`(\lambda ,\mu )`$-multiplets within which the levels are arranged in bands characterized by Elliott’s quantum number $`K`$, where for even $`\lambda `$ and $`\mu `$
$$\begin{array}{c}K=\mathrm{min}\{\lambda ,\mu \},\mathrm{min}\{\lambda ,\mu \}-2,\mathrm{\dots },2\mathrm{or}\mathrm{\hspace{0.33em}0},\hfill \\ L=K,K+1,\mathrm{\dots },\mathrm{max}\{\lambda ,\mu \}\mathrm{for}K\ne 0,\hfill \\ L=0,2,\mathrm{\dots },\mathrm{max}\{\lambda ,\mu \}\mathrm{for}K=0.\hfill \end{array}$$
(30)
The spectrum of a triaxial rotor consists of a ground-state band with $`L=0,2,4,\mathrm{\dots }`$ and an infinite number of so-called abnormal bands: $`L=2,3,4,5,\mathrm{\dots }`$ (1st abnormal band), $`L=4,5,6,7,\mathrm{\dots }`$ (2nd abnormal band), and so on. Contrary to the axially symmetric case, the projection of the angular momentum $`L`$ on the intrinsic $`z`$-axis is no longer a good quantum number. Also, the spectrum contains $`L/2+1`$ states for $`L`$ even and $`(L-1)/2`$ states for $`L`$ odd. It is seen that the low-energy spectrum corresponding to the $`(10,10)`$ irrep is remarkably close to the spectrum of the asymmetric rotor. The resemblance includes the prominent even-odd staggering in the first abnormal band, which is a clear signature for distinguishing between axial, rigid-triaxial, and soft-triaxial rotors (see, e.g., Refs. ). Remaining differences between the triaxial rotor and the SU(3) calculation are caused by the finite number of allowed $`L`$-values for each $`K`$ in a given $`(\lambda ,\mu )`$ irrep.
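The band structure implied by Eq. (30) is easy to enumerate (a sketch; the function name is ours, and the rule is implemented exactly as quoted). For (10,10) one can also check the asymmetric-rotor counting: an even $`L\le \mathrm{min}\{\lambda ,\mu \}`$ appears in $`L/2+1`$ bands:

```python
def su3_bands(lam, mu):
    # Eq. (30): K = min{lam,mu}, min{lam,mu}-2, ..., 2 or 0 (lam, mu even);
    # L = K, K+1, ..., max{lam,mu} for K != 0 and L = 0, 2, ..., max for K = 0.
    bands = {}
    for K in range(min(lam, mu), -1, -2):
        if K == 0:
            bands[K] = list(range(0, max(lam, mu) + 1, 2))
        else:
            bands[K] = list(range(K, max(lam, mu) + 1))
    return bands

b = su3_bands(10, 10)
assert sorted(b) == [0, 2, 4, 6, 8, 10]
assert b[0] == [0, 2, 4, 6, 8, 10]            # ground-state band
assert b[2] == [2, 3, 4, 5, 6, 7, 8, 9, 10]    # first "abnormal"-band pattern

# Asymmetric-rotor counting: an even L <= min{lam,mu} occurs in L/2 + 1 bands
for L in (2, 4, 6, 8, 10):
    assert sum(L in band for band in b.values()) == L // 2 + 1
```

The finite upper cutoff $`L\le \mathrm{max}\{\lambda ,\mu \}`$ in each band is precisely the source of the residual differences from the true rotor noted above.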
Although the above considerations were limited to the SU(3) dynamical symmetry, we would like to stress that this is not the only possible realization of rigid triaxiality in the IBM-1. As has been demonstrated recently , a rotational spectrum can be generated by only quadratic $`[Q^0\times Q^0]^{(0)}`$ and cubic $`[Q^0\times Q^0\times Q^0]^{(0)}`$ SO(6) invariants where
$$Q_q^0=[d^+\times s+s^+\times \stackrel{~}{d}]_q^{(2)}$$
(31)
is an SO(6) generator. This can be understood from a group-theoretical point of view. In the limit $`[Q_q^0,Q_q^{}^0]0`$ the so(6) algebra contracts to the rigid rotor algebra t<sub>5</sub> $``$ so(3) and a rigid rotor realization with SO(6) dynamical symmetry is obtained in IBM-1.
Inspired by this result, it would be of interest to inspect the Hamiltonian of the type (2)
$$H_{\mathrm{IBM}}=H_0+\alpha (L_iQ_{ij}^\chi L_j)+\eta (L_iQ_{ik}^\chi Q_{kj}^\chi L_j)+\zeta L^2+\xi L^4,$$
(32)
where $`Q^\chi `$ is a general quadrupole operator
$$Q_q^\chi =[d^+\times s+s^+\times \stackrel{~}{d}]_q^{(2)}+\chi [d^+\times \stackrel{~}{d}]_q^{(2)}.$$
(33)
One expects that, provided the commutation relations for $`L`$ and $`Q^\chi `$ are close to those for the rotor (27), the spectrum of the Hamiltonian (32) should resemble that of the rigid rotor.
In summary, an IBM-1 realization of the rigid rotor has been suggested. Although the example has been restricted to the SU(3) dynamical symmetry, any Hamiltonian of a similar type constructed from a general quadrupole operator and the angular momentum operator produces a rigid rotor spectrum under the condition of appropriate commutation relations. The necessary and sufficient condition to obtain the rotor dynamics is the contraction of the dynamical algebra of the Hamiltonian to the rigid rotor algebra t<sub>5</sub> $`\wedge `$ so(3).
We gratefully acknowledge discussions with P. von Brentano, F. Iachello, D. F. Kusnezov, S. Kuyucak, A. Leviatan, N. Pietralla, A. A. Seregin and A. M. Shirokov. N.A.S. and P.V.I. thank the Institute for Nuclear Theory at the University of Washington for its hospitality and support. Yu.F.S. thanks GANIL for its hospitality. The work is partly supported by the Russian Foundation for Basic Research (grant 99-01-0163).
# Electron transport through a mesoscopic metal-CDW-metal junction
## I Introduction
The problem of charge transport through an inhomogeneous system (e.g., a dielectric junction between two metallic leads) continues to attract considerable attention from both purely theoretical and practical points of view. It is especially interesting in the case of a one-dimensional (1D) junction where the interaction (electron-electron or electron-phonon) drastically changes the system properties. The most studied example is a 1D quantum metallic wire connected to 2D electron reservoirs. It is known that the low energy properties of electrons in the wire can be described by the Luttinger liquid (LL) model. Since electrons do not propagate freely in a LL, a nontrivial problem arises: How are conduction electrons converted into propagating charged excitations at the metal-LL boundary?
A few years ago, it was shown that, for adiabatic contacts (i.e., when all the inhomogeneities are smooth on a scale of the electron Fermi length), the dc conductance of a pure quantum wire is perfect, $`G_0=2e^2/h`$, and temperature independent (the factor of two accounts for spin degeneracy). This result is not surprising since electrons coming from the leads are not backscattered by the effective barrier represented by the LL piece of the wire. The linear conductance, according to the Landauer-Büttiker approach, should then be equal to the conductance quantum $`G_0`$. Therefore, the dc conductance of a pure quantum wire in adiabatic contact with source and drain leads is not renormalized by the interaction.
The electron-electron interaction does affect the current-voltage characteristics of a LL junction if either impurities are present or electrons are backscattered by Umklapp (U)-processes. U-processes lead, as a rule, to the formation of a gap in the spectrum of charged excitations and in this case one may consider the junction as a dielectric wire.
Recently, Ponomarenko and Nagaosa studied electron transport through a finite Mott-Hubbard dielectric. By mapping the problem at low temperatures onto the quantum impurity problem in an infinite LL (the so-called Kane-Fisher problem ) they obtained a surprising result: The conductance of a finite-length correlated dielectric, adiabatically connected to metallic leads, equals, at $`T=0`$, the conductance of a perfect metal junction $`G_0`$. However, an exponentially small conductivity, typical of a tunnel junction, is restored already at very small temperatures $`T_i\sim \mathrm{\Delta }_L\mathrm{exp}(-2\mathrm{\Delta }/\mathrm{\Delta }_L)`$, where $`\mathrm{\Delta }`$ is the Mott-Hubbard gap and $`\mathrm{\Delta }_L=\mathrm{}v_\rho /L`$ ($`L`$ is the length of the dielectric and $`v_\rho `$ is the velocity of the charged excitations). This result was later generalized to the transport of fractional charge through a finite-length correlated dielectric.
The purpose of the present paper is to consider the electron transport through a junction formed by a Peierls-Fröhlich dielectric where a charge density wave (CDW) is present. The problem of the conversion of electronic current into CDW current at the metal-Peierls-dielectric boundary was studied for the first time over a decade ago. In Ref. (see also Ref. ) the Peierls dielectric was treated in the so-called adiabatic approximation, when the dynamics of the electrons is considered for a fixed phonon field (the order parameter), which is found self-consistently (for a review, see Refs. and ). For an order parameter with a time-independent phase, the transport problem can be reduced to solving the Bogoliubov-de Gennes equation for the electron wave function in the piecewise constant effective field of the order parameter. The calculations are very close to those for a metal-superconductor junction. There is only one important difference: While in a superconductor the condensate wave function is formed by two particles with opposite momenta, in a Peierls state the corresponding spinor consists of a particle and a hole with momenta differing by $`2k_F`$. As a result, the scattering of particles on the off-diagonal barrier produced by the order parameter is not of the Andreev type. For electrons in the metallic leads with energies inside the gap there is always a strong backscattering. It is then evident that the quasiparticle contribution to the conduction will be exponentially small, $`G\sim \mathrm{exp}(-2\mathrm{\Delta }_0/\mathrm{\Delta }_L)`$, for temperatures and voltages less than $`\mathrm{\Delta }_0`$ ($`\mathrm{\Delta }_0`$ is the Peierls gap at $`T=0`$).
One should not forget, however, that there exists another channel for charge transport in Peierls-Fröhlich systems. Namely, the electronic current coming from the metallic leads can be converted into a CDW current which freely propagates through the junction. The process of conversion is perfect when it involves an incommensurate CDW and the contacts between the Peierls dielectric and the metallic leads are adiabatic. In other words, when the phase degree of freedom of the order parameter in the Peierls dielectric is correctly treated, one finds that the conductance of an impurity-free adiabatic CDW junction is perfect at low temperatures. This result was proved in Refs. and using different approaches.
In Ref. we related the property of perfect conductance of an incommensurate CDW junction to the existence of an anomalous chiral symmetry in the system. In this paper we focus our attention instead on the commensurate case, when the electron filling factor is a rational number and no chiral symmetry exists. We show that quantum fluctuations make the conductance of a commensurate, finite Peierls dielectric perfect at $`T=0`$, much in analogy to the case of a Mott-Hubbard dielectric. As temperature is increased, however, the conductance rapidly decreases (we call this the instanton regime of quantum transport). The perfect conductance is lost at temperatures $`T\sim \mathrm{\Delta }_i\sim \sqrt{\mathrm{\Delta }_L^{\text{CDW}}E_s^{*}}\mathrm{exp}(-E_s^{*}/\mathrm{\Delta }_L)`$, where $`\mathrm{\Delta }_L^{\text{CDW}}=\mathrm{}v_c/L`$, $`v_c`$ is the velocity of the CDW, and $`E_s^{*}`$ is the rest energy of quantum CDW solitons. For higher temperatures, quantum phase fluctuations are suppressed and the conductance scales approximately as $`T^{-2}`$, reaching a global minimum value $`G_i\sim G_0\mathrm{exp}(-2E_s^{*}/\mathrm{\Delta }_L^{\text{CDW}})`$ at temperatures $`T\sim \sqrt{\mathrm{\Delta }_L^{\text{CDW}}E_s^{*}}\gg \mathrm{\Delta }_L^{\text{CDW}}`$. A further increase of temperature induces the appearance of fractionally charged solitons which dominate the conduction in the system (we call this the soliton regime of quantum transport). Thus, at temperatures $`T\sim E_s^{*}`$ the conductance sharply increases, saturating at the value $`G_M=G_0/M^2`$ ($`M`$ is the commensurability index), which also corresponds to a perfect transport, but of fractionally charged solitons ($`q_s=e/M`$) instead of bare electrons. This step-like behavior in $`G(T)`$ terminates at $`T\sim \mathrm{\Delta }_0`$, where the conductance rapidly reaches the pure metallic value $`G_0`$.
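The hierarchy of temperature scales quoted above can be illustrated numerically (a rough sketch; the units are arbitrary and the parameter values are our own illustrative choices, not taken from the paper). With $`E_s^{*}=10\mathrm{\Delta }_L^{\text{CDW}}`$ one gets $`\mathrm{\Delta }_i\ll \mathrm{\Delta }_L^{\text{CDW}}\ll \sqrt{\mathrm{\Delta }_L^{\text{CDW}}E_s^{*}}\ll E_s^{*}`$ and an exponentially deep conductance minimum:

```python
import math

def crossover_scales(E_s, D_L):
    """Crossover scales of the text, in the same (arbitrary) units as D_L.

    E_s : rest energy of the quantum CDW soliton (E_s* in the text)
    D_L : finite-size energy scale Delta_L^CDW = hbar*v_c/L
    """
    D_i = math.sqrt(D_L * E_s) * math.exp(-E_s / D_L)  # end of perfect conductance
    T_min = math.sqrt(D_L * E_s)                       # location of the G minimum
    G_min = math.exp(-2 * E_s / D_L)                   # G_i / G_0 at the minimum
    return D_i, T_min, G_min

D_i, T_min, G_min = crossover_scales(E_s=10.0, D_L=1.0)
assert D_i < 1.0 < T_min < 10.0   # Delta_i << Delta_L << sqrt(...) << E_s*
assert G_min < 1e-8               # exponentially small conductance floor
```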
It is important to remark that the Mott-Hubbard and the Peierls-Fröhlich systems are not identical. There are two energy scales in the latter which are not present in the former. These scales are the phonon energy $`\mathrm{}\overline{\omega }`$ (here $`\overline{\omega }\equiv \omega (2k_F)`$ denotes the frequency of the phonons with momentum $`\pm 2k_F`$) and the rest energy $`E_s^{*}`$ of quantum solitons of the commensurate CDW. In principle, the existence of these additional scales could lead to a more complicated temperature dependence of the conductance than that found in Refs. and . However, we will argue that the phonon energy scale, in particular, does not lead to any new features.
This paper is organized as follows. In Sec. II we introduce the effective phase Lagrangian that describes the commensurate CDW junction. The conductance and its temperature dependence in the instanton regime are discussed in Sec. III; the soliton regime is explored in Sec. IV. Final remarks and conclusions are presented in Sec. V.
## II The Peierls-Fröhlich system
The low-energy dynamics of a one-dimensional electron-phonon system is normally governed by the Lagrangian
$``$ $`=`$ $`-{\displaystyle \frac{\mathrm{\Delta }^2}{g^2}}+{\displaystyle \frac{\dot{\mathrm{\Delta }}^2+\mathrm{\Delta }^2\dot{\phi }^2}{g^2\overline{\omega }^2}}+\overline{\psi }\left(i\mathrm{}\gamma _\mu \partial ^\mu -\mathrm{\Delta }e^{i\gamma _5\phi }\right)\psi `$ (2)
$`+_{\text{com}}.`$
Here, $`u(x,t)=(\mathrm{\Delta }/2)\mathrm{cos}(2k_Fx+\phi )`$ is the magnitude of the phonon field with momentum $`Q=\pm 2k_F`$ and $`g`$ is the bare constant of the electron-phonon interaction. We have defined $`\overline{\psi }\equiv \psi ^{}\gamma _0`$, with $`\psi ^{}=(\psi _L^{},\psi _R^{})`$ comprising left- and right-moving electron fields. The Dirac matrices $`\gamma _\mu `$ ($`\mu =0,1`$) and $`\gamma _5`$ in 1D are represented by the Pauli matrices, whereas the partial derivatives follow the convention $`\partial _\mu \equiv (\partial _t,v_F\partial _x)`$. The last term in Eq. (2) represents the commensurability energy that exists for systems with rational electron filling $`\nu =N/M`$ with respect to the underlying lattice model ($`N<M`$). The commensurability index is defined as $`M=\pi N/k_Fa>2`$, where $`a`$ is the lattice constant. The appearance of the commensurability energy is due to $`M`$-fold Umklapp scattering. This term will be specified later.
The interacting 1D electron-phonon system model presented in Eq. (2) has already been studied by many authors (e.g., see reviews in Refs. and ). It was shown that a gap $`\mathrm{\Delta }_0`$ develops in the electron spectrum at low temperatures. For spin-1/2 fermions and for a vanishingly small phonon frequency $`\overline{\omega }\to 0`$, even a weak electron-phonon interaction produces a gap. Let us, for simplicity, consider only spinless electrons. In this case, for a finite $`\overline{\omega }`$, the electron-phonon interaction should be sufficiently strong to produce a gap, while, at the same time, the junction can be sufficiently long to make the influence of finite-size effects on the gap negligible. In the classical Peierls theory, where all fluctuations are neglected, the gap for a weak electron-phonon coupling ($`\lambda =2g^2/\pi \mathrm{}v_F\ll 1`$) takes the familiar BCS-like form $`\mathrm{\Delta }_0\sim \epsilon _F\mathrm{exp}(-1/\lambda )`$, where $`\epsilon _F`$ is the electron Fermi energy. For quasi-1D systems, this gap is the modulus of the order parameter $`\mathrm{\Delta }_0e^{i\phi }`$ of the Peierls state which develops at temperatures $`T<T_P`$ (the Peierls transition temperature $`T_P`$ is determined by the strength of the inter-chain interaction and, usually, $`T_P\ll \mathrm{\Delta }_0`$). The fluctuations of the phase field $`\phi `$ in a pure 1D system destroy the long-range order. Nevertheless, the fact that $`\mathrm{\Delta }_0\ne 0`$ minimizes the energy of the system shows that at $`T\ll \mathrm{\Delta }_0`$ the fluctuations of the modulus of the phonon field are strongly suppressed.
In what follows we will assume that the ground state of a sufficiently long ($`L\gg \mathrm{}v_F/\mathrm{\Delta }_0`$) CDW junction is characterized by a gap $`\mathrm{\Delta }_0`$ in the single-particle spectrum. This gap will be the largest relevant energy in the problem ($`\mathrm{}\overline{\omega }\ll \mathrm{\Delta }_0`$).
To proceed further, it is convenient to bosonize the fermion degrees of freedom in Eq. (2): according to standard rules,
$$i\overline{\psi }\gamma _\mu \partial ^\mu \psi \to \frac{1}{8\pi v_F}\partial _\mu \eta \partial ^\mu \eta ,$$
(3)
$$\overline{\psi }\psi \to \frac{1}{\pi \alpha }\mathrm{cos}\eta ,$$
(4)
and
$$\overline{\psi }\gamma _5\psi \to \frac{i}{\pi \alpha }\mathrm{sin}\eta ,$$
(5)
where $`\eta `$ represents the boson field and $`\alpha `$ is a short-range cutoff of the order of the lattice spacing $`a`$. The corresponding model is known in the literature as the phase Hamiltonian approach to interacting electron-phonon systems. When the modulus of the phonon field is frozen ($`T\ll \mathrm{\Delta }_0`$), we are left with two bosonic (phase) variables, $`\eta `$ and $`\phi `$, whose dynamics is governed by the Lagrangian
$``$ $`=`$ $`{\displaystyle \frac{\mathrm{}}{8\pi v_F}}\left[(\partial _t\eta )^2-v_F^2(\partial _x\eta )^2\right]+{\displaystyle \frac{\mathrm{\Delta }_0^2}{g^2\overline{\omega }^2}}(\partial _t\phi )^2`$ (7)
$`+{\displaystyle \frac{\epsilon _F\mathrm{\Delta }_0}{\pi \mathrm{}v_F}}\mathrm{cos}(\eta -\phi )+{\displaystyle \frac{2\mathrm{\Delta }_0^2\omega _0^2}{g^2\overline{\omega }^2M^2}}\mathrm{cos}(\eta -\phi +M\phi ).`$
The last term in Eq. (7) is the commensurability energy written in bosonized form, where $`\mathrm{}\omega _0\sim \mathrm{}\overline{\omega }M(\mathrm{\Delta }_0/\epsilon _F)^{M/2-1}`$ ($`\omega _0\ll \overline{\omega }`$) is the so-called energy of the commensurability “pinning” of the CDW.
The Lagrangian in Eq. (7) is readily generalized to the case of interacting electrons. For a forward scattering electron-electron interaction one can replace the first two terms in Eq. (7), describing the free electron motion, by the analogous Lagrangian for a LL,
$$_{LL}=\frac{\mathrm{}}{8\pi K_\rho v_\rho }\left[(\partial _t\eta )^2-v_\rho ^2(\partial _x\eta )^2\right],$$
(8)
where $`v_\rho `$ is the charge velocity and $`K_\rho `$ is the bare correlation parameter. These quantities can be expressed in terms of the bare interaction constants (e.g., see Ref. ).
We will use the Lagrangian density of Eq. (7) to analyze the electron transport through a finite CDW junction of length $`L`$ (see Fig. 1). Let us assume that the electron-phonon interaction exists only in the region $`-L/2<x<L/2`$ and is adiabatically switched off outside the junction. To model the metal-CDW-metal junction one can multiply the last three terms in Eq. (7) by the step function $`\mathrm{\Theta }(x+L/2)\mathrm{\Theta }(L/2-x)`$, confining the electron-phonon interaction to the finite region of length $`L`$.
For a homogeneous system, the model represented by Eq. (7) was studied in Ref. for two different cases: (i) at a fixed phase of the phonon field, $`\phi =\text{const}`$; (ii) for an “adiabatic” CDW motion, when $`\phi (x,t)=\eta (x,t)+\text{const}`$. As we will see in what follows the last ansatz leads to the correct quantum description of the CDW. Thus it can be called the quantum regime of the CDW dynamics.
The physical motivation for studying the regimes mentioned above is the following. Both cosine terms in Eq. (7) describe the backward scattering of electrons by the lattice distortions. Notice that the term arising from the multiple Umklapp processes enters into Eq. (7) with a coefficient much smaller than that for the direct electron-phonon interaction. Therefore, in the case of a fixed phase \[ansatz (i) above\], one can neglect the effects of commensurability and Eq. (7) is reduced to the bosonic form of a Lagrangian describing free massive Dirac fermions (with the gap equal to $`\mathrm{\Delta }_0`$). It gives us the Bogoliubov-de Gennes equations for the electron wave function in the piecewise constant field of the order parameter. So the ansatz (i) corresponds to the situation when the dynamics of CDW is totally ignored. It describes only the quasiparticle contribution to the conductivity which at temperatures $`T\mathrm{\Delta }_0`$ is exponentially small and thus can be neglected.
At low temperatures one cannot neglect the quantum fluctuations of the phonon field $`\phi (x,t)`$ and a quantum description of the CDW dynamics has to be adopted. The correct formulation assumes that the electrons are in “adiabatic motion” with the lattice distortion \[ansatz (ii)\]. In the Appendix we justify this assumption by considering a simplified field-theoretical model for the CDW. For the ansatz (ii) the most troublesome term ($`\propto \epsilon _F`$) in Eq. (7), which arises from the direct backscattering of electrons by phonons, disappears already at the classical level. The resulting model is equivalent to the theory of a commensurate CDW alone. For interacting electrons in Eq. (8), the corresponding Lagrangian takes the form
$$_{CDW}=\frac{\mathrm{}}{8\pi K_cv_c}\left[(\partial _t\phi )^2-v_c^2(\partial _x\phi )^2+\frac{2\mathrm{\Omega }^2}{M^2}\mathrm{cos}M\phi \right],$$
(9)
where
$$K_c=\frac{v_c}{v_\rho }K_\rho ,$$
(10)
$$v_c=\frac{v_\rho }{\sqrt{1+8\pi K_\rho v_\rho \mathrm{\Delta }_0^2/g^2\overline{\omega }^2\mathrm{}}},$$
(11)
and
$$\mathrm{\Omega }^2=\frac{8\pi K_cv_c}{\mathrm{}}\left(\frac{\mathrm{\Delta }_0\omega _0}{g\overline{\omega }}\right)^2.$$
(12)
For noninteracting electrons ($`K_\rho =1`$ and $`v_\rho =v_F`$), Eq. (9), taken in the limit $`v_F\mathrm{\Delta }_0^2\gg g^2\overline{\omega }^2\mathrm{}`$, coincides with the well-known Lagrangian for a commensurate CDW. The corresponding parameters (CDW velocity and pinning energy) are exactly those found in the standard approach by integrating out the fermionic degrees of freedom in Eq. (2), namely,
$$v_c\simeq v_F\sqrt{\frac{g^2}{8\pi \mathrm{}v_F}}\left(\frac{\mathrm{}\overline{\omega }}{\mathrm{\Delta }_0}\right)\ll v_F\text{with}\mathrm{\Omega }\simeq \omega _0.$$
(13)
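The limit (13) can be verified directly from Eqs. (10)–(12) (a sketch; units with $`\mathrm{}=v_F=1`$ and illustrative parameter values of our own choosing): when $`8\pi K_\rho v_\rho \mathrm{\Delta }_0^2/g^2\overline{\omega }^2\mathrm{}\gg 1`$, the general expressions reduce to the weak-coupling forms quoted above:

```python
import math

# Units with hbar = v_F = 1; sample parameters (ours) chosen so that
# 8*pi*K_rho*v_rho*Delta0**2/(g**2*wbar**2) >> 1 and hbar*wbar << Delta0.
hbar, vF = 1.0, 1.0
K_rho, v_rho = 1.0, vF
g, wbar, Delta0, omega0 = 0.3, 0.05, 0.1, 0.01

# Eq. (11): CDW velocity
v_c = v_rho / math.sqrt(1.0 + 8*math.pi*K_rho*v_rho*Delta0**2 / (g**2 * wbar**2 * hbar))
# Eq. (10): renormalized correlation parameter
K_c = (v_c / v_rho) * K_rho
# Eq. (12): pinning frequency
Omega = math.sqrt(8*math.pi*K_c*v_c/hbar) * Delta0 * omega0 / (g * wbar)

# Limit (13): v_c ~ v_F*sqrt(g^2/(8*pi*hbar*v_F))*(hbar*wbar/Delta0) << v_F
# and Omega ~ omega0
v_c_limit = vF * math.sqrt(g**2 / (8*math.pi*hbar*vF)) * (hbar*wbar/Delta0)
assert abs(v_c - v_c_limit) / v_c_limit < 0.01
assert abs(Omega - omega0) / omega0 < 0.01
assert v_c < 0.05 * vF     # the CDW is indeed much slower than the electrons
```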
From a mathematical point of view, Eq. (9) is a sine-Gordon theory with well-known quantum properties. Notice that in the regime of quantum transport our model \[Eq. (9)\] coincides (up to the bare parameters $`K_c`$, $`v_c`$, and $`\mathrm{\Omega }`$) with the Lagrangian derived in Ref. . Therefore, the low temperature properties of a CDW junction should be similar to those of a correlated dielectric. Our analysis of the conductance of a CDW junction can now be performed in analogy to that in Ref. . Thus we will be brief in the mathematical details and will concentrate on the physical interpretation of our main results.
## III The instanton contribution to the conductance
In an infinite system, the sine-Gordon model of Eq. (9) is characterized by an infinite set of equivalent vacua $`|0_n\rangle =|2\pi n/M\rangle `$ ($`n`$ is an integer). In a finite 1D system the ground state is nondegenerate and, if the system is sufficiently long, a unique vacuum can be found in the dilute instanton gas approximation . The instanton approach to the thermodynamics of a finite CDW system was put forward for the first time in Refs. and , where the partition function for a CDW ring threaded by a magnetic field was calculated. Vacuum-to-vacuum tunneling introduces a new energy scale
$$\mathrm{\Delta }_i\sim \frac{\mathrm{}v_c}{L}\sqrt{\frac{S_t^0}{2\pi \mathrm{}}}\mathrm{exp}\left(-\frac{S_t^0}{\mathrm{}}\right)$$
(14)
($`S_t^0`$ is the one-instanton tunnel action), which is nothing but the width of the instanton band. In terms of the bare parameters shown in Eqs. (9), (10), (11), and (12) the one-instanton action reads
$$\frac{S_t^0}{\mathrm{}}=\frac{E_sL}{\mathrm{}v_c}=\frac{1}{K_cM^2}\frac{2}{\pi }\frac{\mathrm{}\mathrm{\Omega }}{\mathrm{\Delta }_L^{\text{CDW}}},$$
(15)
where $`E_s`$ is the rest energy of the classical kink.
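Equations (14) and (15) combine into a one-line estimate (a sketch; units with $`\mathrm{}=1`$ and sample numbers of our own choosing). For a long junction the dimensionless tunnel action $`E_sL/\mathrm{}v_c`$ is large, so the instanton bandwidth $`\mathrm{\Delta }_i`$ is exponentially narrower than $`\mathrm{\Delta }_L^{\text{CDW}}=\mathrm{}v_c/L`$:

```python
import math

hbar = 1.0

def instanton_bandwidth(E_s, L, v_c):
    # Eq. (15): dimensionless one-instanton action; Eq. (14): bandwidth Delta_i
    S0 = E_s * L / (hbar * v_c)       # S_t^0 / hbar
    D_L = hbar * v_c / L              # Delta_L^CDW
    return D_L * math.sqrt(S0 / (2*math.pi)) * math.exp(-S0), D_L, S0

D_i, D_L, S0 = instanton_bandwidth(E_s=1.0, L=20.0, v_c=1.0)
assert S0 == 20.0
assert 0.0 < D_i < 1e-6 * D_L   # exponentially narrow instanton band
```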
In the full quantum description the bare parameters of the sine-Gordon model are renormalized by the interaction. For the gapped phase the renormalized parameters are (see also the review Ref. )
$$K_c^{\prime }M^2=1\text{ with }E_s\to E_s^{\prime },$$
(16)
where $`E_s^{\prime }`$ is the energy of the quantum soliton. As noted in Ref. , this renormalization is equivalent to the summation of all multi-loop contributions to the renormalized one-instanton tunnel action $`S_t^0\to S_t=E_s^{\prime }/\mathrm{\Delta }_L`$. While the second prescription in Eq. (16) is physically evident, the first one needs some comment. The simplest way to understand this result (the so-called Luther-Emery free-fermion point ) is to consider the situation when the chemical potential exceeds the gap. Let us first rescale the field $`\phi \to \phi /M`$ in Eq. (9). Then the same Lagrangian with renormalized parameters ($`K_c\to K_c^{\prime }`$ and $`\mathrm{\Omega }\to \mathrm{\Omega }^{\prime }`$) has to describe the quantum phase of the sine-Gordon model. When the chemical potential exceeds the gap $`E_s^{\prime }`$, the system can be treated as a gas of weakly interacting solitons at density $`\rho _s`$. It is then described by the first two terms of the rescaled Lagrangian
$$\mathcal{L}=\frac{1}{8\pi v_cK_c^{\prime }(\rho _s)M^2}\left[(\partial _t\phi )^2-v_c^2(\partial _x\phi )^2\right].$$
(17)
In the vicinity of a commensurate-incommensurate phase transition ($`\rho _s\to 0`$), the quantum solitons are noninteracting particles. In the bosonic language, this means that $`K_c^{\prime }(0)M^2=1`$ (see also Ref. ). At this point, the model in its gapped phase is equivalent to spinless massive Dirac fermions.
The crucial difference between the closed CDW system studied in Refs. and and a finite Peierls-Fröhlich conductor connected to metallic leads lies in the boundary conditions imposed by the electron reservoirs on the CDW dynamics. These boundary conditions can be derived by integrating out the electrons outside the CDW piece of the junction. As shown in Ref. , this procedure results in a Caldeira-Leggett (CL) action for the boundary CDW field $`\phi (x=\pm L/2,t)`$. Physically, this means that electrons in the leads induce friction (which appears as a logarithmic interaction between instantons) in the vacuum-to-vacuum tunneling. Although, numerically, the logarithmic interaction only changes the exponential prefactor in the tunnel energy shift \[Eq. (14)\], it drastically affects the instanton trajectories with a nonzero total topological charge. The action evaluated on these paths diverges, and only trajectories with zero net charge contribute to the partition function . This very property allows one to use a dual representation to study the transport properties of a CDW junction at low temperatures. In terms of the Josephson-like field $`\stackrel{~}{\mathrm{\Theta }}(x,t)`$, dual to the CDW field $`\phi (x,t)`$, the Lagrangian which yields the same partition function as that calculated in the dilute instanton gas approximation takes the form (see Ref. )
$`\stackrel{~}{\mathcal{L}}`$ $`=`$ $`{\displaystyle \frac{\hbar }{8\pi v_c}}\left[(\partial _t\stackrel{~}{\mathrm{\Theta }})^2-v_c^2(\partial _x\stackrel{~}{\mathrm{\Theta }})^2\right]`$ (19)
$`-\mathrm{\Delta }_i^{(R)}\delta (x)\mathrm{cos}\left(\stackrel{~}{\mathrm{\Theta }}(x,t)/M\right).`$
Here $`\mathrm{\Delta }_i^{(R)}`$ is the instanton shift of the vacuum energy, Eq. (14), renormalized by the friction. The $`1/M`$ factor in the last term is needed to take into account the fractional topological charge $`q_t=2\pi /M`$ of individual instantons. The induced CL action does not introduce any new dimensional parameter into the problem and affects only the prefactor in Eq. (14) (corrected by the multi-loop contributions, Eq. (16)). Thus, up to an irrelevant numerical factor, we have $`\mathrm{\Delta }_i^{(R)}\sim \mathrm{\Delta }_i`$ (friction, of course, can only diminish the one-instanton vacuum energy shift).
After rescaling the dual field $`\stackrel{~}{\mathrm{\Theta }}/M\to \mathrm{\Theta }`$, Eq. (19) is transformed into the Lagrangian of a quantum impurity problem in an infinite LL (the Kane-Fisher problem ) with correlation parameter $`K_c=1/M^2`$. The desired conductance of the original problem (the CDW junction) is related to the dimensionless conductance $`g_{KF}`$ of the dual problem by the simple expression
$$G_i(T)=\frac{e^2}{h}\left[1-M^2g_{KF}(T)\right].$$
(20)
Notice the factor $`M^2`$ in Eq. (20). In order to map the dual model, Eq. (19), onto the known problem, we rescaled the dual field $`\stackrel{~}{\mathrm{\Theta }}(x,t)`$. Since, in general, the conductance is proportional to the square of the dynamical field, the dimensionless conductance of the dual problem is $`M^2g_{KF}`$. Physically, the additional factor of $`M^2`$ originates from the correct definition of the Josephson-like current in the model of Eq. (19). Namely, it comes from the $`M`$-fold backscattering current induced by the potential difference $`MV`$, where $`V`$ is the voltage across the junction.
The quantity $`g_{KF}(T)`$ is known exactly (e.g., see Ref. , where the current-voltage dependence for the Kane-Fisher problem was obtained by using the dual transformation method). For a CDW system the commensurability index $`M`$ is an integer larger than 2, and the corresponding correlation parameter is small, $`K_c\ll 1`$. In this case, the low- and high-temperature asymptotics of the conductance take a simple form (up to irrelevant numerical constants)
$$M^2g_{KF}(T,K_c\ll 1)\simeq \{\begin{array}{cc}\left(\frac{T}{\sqrt{K_c}\mathrm{\Delta }_i}\right)^{2/K_c},\hfill & T\ll \sqrt{K_c}\mathrm{\Delta }_i\hfill \\ 1-\left(\frac{\sqrt{K_c}\mathrm{\Delta }_i}{T}\right)^2,\hfill & T\gg \sqrt{K_c}\mathrm{\Delta }_i.\hfill \end{array}$$
(21)
According to Eqs. (20) and (21), the conductance of a metal-CDW-metal junction is perfect at $`T\to 0`$. Loosely speaking, very slow (low-energy) electrons from the metal leads, when arriving at the CDW junction, see a strongly fluctuating, translationally invariant electron-phonon system and, consequently, are not backscattered by the lattice distortions. Significant backscattering appears when the Heisenberg time $`t_H\sim \hbar /\epsilon `$ for quasiparticles coming from the leads becomes comparable to or smaller than the characteristic lifetime of the perturbative vacuum ($`\hbar /\mathrm{\Delta }_i`$). It is clear that at $`T>\mathrm{\Delta }_i`$ the instanton mechanism of charge transport predicts a strongly suppressed conductance.
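A minimal numerical sketch of the instanton conductance (Python; the order-one prefactors dropped in Eq. (21) are set to unity here, so the two branches are only asymptotic and do not match smoothly at the crossover $`T\sim \sqrt{K_c}\mathrm{\Delta }_i`$):

```python
import math

def g_KF_times_M2(T, K_c, Delta_i):
    """Dimensionless conductance of the dual Kane-Fisher problem, Eq. (21) asymptotics."""
    T_x = math.sqrt(K_c) * Delta_i          # crossover scale sqrt(K_c)*Delta_i
    if T < T_x:
        return (T / T_x) ** (2.0 / K_c)     # low-T branch
    return 1.0 - (T_x / T) ** 2             # high-T branch

def G_instanton(T, M, Delta_i):
    """Instanton conductance of the CDW junction, Eq. (20), in units of e^2/h."""
    K_c = 1.0 / M**2
    return 1.0 - g_KF_times_M2(T, K_c, Delta_i)

# perfect transmission at low T, strong suppression above Delta_i (here M = 3)
print(G_instanton(0.1, 3, 1.0), G_instanton(10.0, 3, 1.0))
```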
At finite temperatures, however, there is a mechanism competing with the instanton contributions to charge transport in CDW systems. It is associated with the thermally excited fractionally charged solitons of the commensurate CDW. The soliton contribution to the persistent current in a CDW ring was considered in Refs. and . Thus, we now proceed to calculate the soliton part, $`G_s`$, of the conductance of a mesoscopic metal-CDW-metal junction.
## IV The soliton contribution to the conductance
It is physically evident that at sufficiently “high” temperatures, $`T\gg \mathrm{\Delta }_L`$, the transport coefficients under consideration are determined by the bulk properties of a Peierls-Fröhlich system. In the quantum regime of transport a commensurate CDW is described by the quantum sine-Gordon model \[the Lagrangian in Eq. (9) at the point $`K_c^{\prime }M^2=1`$\]. It is known that this point corresponds to noninteracting Dirac fermions. Thus, at temperatures $`T\gg \mathrm{\Delta }_L`$ the problem of electron transport through a CDW junction can be mapped onto the well-known problem of transport of Dirac fermions. The latter, in turn, is mathematically equivalent to that of quasiclassical transport through a CDW junction. The only important distinction is that in our case the electric charge of the Dirac fermions is fractional, $`q_s=e/M`$ (they are solitons of the quantum sine-Gordon model).
The soliton contribution to the conductance can be calculated using the Landauer formula,
$$G_s(T)=\frac{q_s^2}{h}\int _{-\infty }^{\infty }d\epsilon \left(-\frac{\partial f_{FD}}{\partial \epsilon }\right)T_t(\epsilon ),$$
(22)
where $`f_{FD}`$ is the Fermi-Dirac distribution function and $`T_t(\epsilon )`$ is the transmission probability of massless fermions through a “gapped” region. The transmission coefficient can be found by matching the wave functions at the boundary points $`x=\pm L/2`$. The result is (see also Ref. )
$$T_t(\epsilon )=\frac{\epsilon ^2-E_s^{\prime 2}}{\epsilon ^2-E_s^{\prime 2}+E_s^{\prime 2}\mathrm{sin}^2\left(\sqrt{\epsilon ^2-E_s^{\prime 2}}/\mathrm{\Delta }_L\right)}.$$
(23)
By carrying out the integral in Eq. (22), it is easy to find the following low-$`T`$ ($`T\ll E_s^{\prime }`$) and high-$`T`$ asymptotics
$$G_s(T)\simeq \{\begin{array}{cc}\frac{4e^2}{hM^2}\mathrm{exp}\left(-\frac{2E_s^{\prime }}{\mathrm{\Delta }_L}\right),\hfill & \mathrm{\Delta }_L\ll T\ll E_s^{\prime }\hfill \\ \frac{e^2}{hM^2}\left(1-2\pi \frac{E_s^{\prime 2}}{\mathrm{\Delta }_LT}\right),\hfill & T\gg E_s^{\prime 2}/\mathrm{\Delta }_L\hfill \end{array}.$$
(24)
As one can see from Eq. (24), the soliton conductance at low temperatures is exponentially small and temperature independent. It corresponds to the tunneling of fractionally charged particles through a dielectric region. At high temperatures ($`T>E_s^{\prime }`$), the thermally excited solitons and antisolitons yield a perfect (in terms of the fractional charge $`q_s`$) conductance $`G_M=q_s^2/h`$.
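The full crossover between the two limits of Eq. (24) can be traced by evaluating the Landauer integral, Eq. (22), numerically. In the sketch below (Python/NumPy) the energy scales are illustrative; below the gap the transmission of Eq. (23) is continued via $`\mathrm{sin}\to i\mathrm{sinh}`$, which complex arithmetic handles automatically:

```python
import numpy as np

def T_t(eps, E_s, Delta_L):
    """Transmission probability, Eq. (23); valid below the gap via sin -> i*sinh."""
    z = np.sqrt((eps + 0j)**2 - E_s**2) / Delta_L
    num = eps**2 - E_s**2
    return (num / (num + E_s**2 * np.sin(z)**2)).real

def G_s(T, E_s, Delta_L, M=3):
    """Soliton conductance, Eq. (22), in units of e^2/h (soliton charge q_s = e/M)."""
    # small offset avoids the removable 0/0 point of Eq. (23) at |eps| = E_s
    eps = np.linspace(-(E_s + 40.0*T), E_s + 40.0*T, 400001) + 1e-9
    minus_df = 1.0 / (4.0 * T * np.cosh(eps / (2.0 * T))**2)   # -d f_FD / d eps
    return (1.0 / M**2) * np.sum(minus_df * T_t(eps, E_s, Delta_L)) * (eps[1] - eps[0])

# deep suppression for T < E_s', near-perfect fractional conductance q_s^2/h at high T
print(G_s(0.3, 4.0, 1.0), G_s(300.0, 4.0, 1.0))
```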
The total conductance can be represented by the interpolation formula $`G(T)\simeq G_i(T)+G_s(T)`$. This is shown schematically in Fig. 2. From Eqs. (20) and (21) one readily finds that the “instanton” part of the conductance matches the soliton contribution, Eq. (24), at temperatures $`T_m\sim \sqrt{\mathrm{\Delta }_LE_s^{\prime }}\gg \mathrm{\Delta }_L`$. As a result, in a wide temperature interval $`T_m<T<E_s^{\prime }`$ the conductance is exponentially small and practically temperature independent, namely, $`G_m\sim G_0\mathrm{exp}(-2E_s^{\prime }/\mathrm{\Delta }_L)`$. This trough-like shape of the $`G(T)`$ curve changes at $`T\simeq E_s^{\prime }`$ to a step-like form, with $`G\simeq G_M=G_0/M^2`$, which characterizes the transport of fractional charge along the CDW junction.
## V Final remarks
It is evident from the above considerations that in the case of an incommensurate CDW, where the last term in Eq. (7) is absent, the temperature dependence of the dc conductance is much simpler than for a commensurate CDW. The quantum theory of an incommensurate CDW is equivalent in many aspects to a theory of a Luttinger liquid. So, for adiabatic contacts the conductance of an incommensurate CDW junction equals $`G_0`$ and is temperature independent. For a commensurate CDW the step in the conductance which corresponds to the soliton mechanism of conductivity will last (for a purely 1D system) up to temperatures of order $`\mathrm{\Delta }_0`$. At this point, the conductance begins to increase and eventually reaches the pure metallic value, $`e^2/h`$, due to the contribution of thermally excited quasiparticles (electrons and holes). Moreover, the temperature suppression of the gap $`\mathrm{\Delta }(T)`$ in the quasiparticle spectrum also becomes important. For quasi-1D systems, the restoration of metallic conductivity will happen, of course, at much lower temperatures, in the vicinity of the Peierls phase transition.
The above consideration holds when the electron-phonon interaction in a 1D system is strong enough to produce a Peierls gap. Otherwise, the interacting electron system falls into the Luttinger-liquid universality class, with parameters determined by both the electron-electron and the electron-phonon interactions (see Refs. and ).
###### Acknowledgements.
This work was partially supported by the NRC Grant for the Twinning Program between the USA and Ukraine and by the Brazilian agency CNPq. I.V.K. thanks E. Bogachek, L. Gorelik, U. Landman, A. Nersesyan, and R. Shekhter for fruitful discussions. I.V.K. is also grateful to the School of Physics at the Georgia Institute of Technology for hospitality. E.R.M. thanks P. Lee for critical comments.
## A
Using a simpler version of the Lagrangian of Eq. (7), we will argue in favor of the ansatz (ii) of Sec. II. Let us thus begin with the following Lagrangian, describing a relativistic, bosonized electron-phonon system of the incommensurate type,
$`\mathcal{L}`$ $`=`$ $`{\displaystyle \frac{1}{2}}\partial _\mu \eta \partial ^\mu \eta +{\displaystyle \frac{\rho _0^2}{2}}\partial _\mu \varphi \partial ^\mu \varphi +A\rho _0\mathrm{cos}\beta (\eta -\varphi )`$ (A2)
$`-{\displaystyle \frac{\beta }{2\pi }}\partial _\mu \eta ϵ^{\mu \nu }a_\nu ,`$
where $`\eta `$ is the bosonic field related to the electronic degrees of freedom, while $`\varphi `$ and $`\rho _0`$ represent the phase and amplitude of the phonon field (we only allow for phase fluctuations). The gauge field $`a_\nu `$ is the external source of the electric field. Already at the classical level we can see that there exist two distinct modes in the system. Setting the electric field to zero, we find the equations of motion
$$\partial _\mu \partial ^\mu (\eta +\rho _0^2\varphi )=0$$
(A3)
and
$$\partial _\mu \partial ^\mu (\eta -\varphi )+A\beta \left(\rho _0+\frac{1}{\rho _0}\right)\mathrm{sin}\beta (\eta -\varphi )=0.$$
(A4)
Equation (A3) describes a massless mode, whereas (A4), a sine-Gordon equation, gives rise to a massive mode. We expect the low-energy physics to be controlled by the former. Indeed, the electrical conductance of this system can be obtained from the expectation value of the current
$$J^\nu =-i\frac{\delta \mathrm{ln}Z}{\delta a_\nu }|_{a=0},$$
(A5)
where the partition function is given by
$$Z=\int D\eta \,D\varphi \,e^{i\int d^2x\,\mathcal{L}}.$$
(A6)
Introducing the “relative” and “center-of-mass” fields
$$\chi \equiv \eta -\varphi $$
(A7)
and
$$\xi \equiv \frac{\eta +\rho _0^2\varphi }{1+\rho _0^2},$$
(A8)
we can write this partition function as $`Z[a]=Z_{\text{rel}}[a]Z_{\text{CM}}[a]`$, where
$$Z_{\text{rel}}=\int D\chi \,e^{i\int d^2x\,\mathcal{L}_{\text{rel}}}$$
(A9)
with
$`\mathcal{L}_{\text{rel}}`$ $`=`$ $`{\displaystyle \frac{\rho _0^2}{2(1+\rho _0^2)}}\partial _\mu \chi \partial ^\mu \chi +A\rho _0\mathrm{cos}\beta \chi `$ (A11)
$`-{\displaystyle \frac{\beta \rho _0^2}{2\pi (1+\rho _0^2)}}\partial _\mu \chi ϵ^{\mu \nu }a_\nu `$
and
$$Z_{\text{CM}}=\int D\xi \,e^{i\int d^2x\,\mathcal{L}_{\text{CM}}}$$
(A12)
with
$$\mathcal{L}_{\text{CM}}=\frac{1+\rho _0^2}{2}\partial _\mu \xi \partial ^\mu \xi -\frac{\beta }{2\pi }\partial _\mu \xi ϵ^{\mu \nu }a_\nu .$$
(A13)
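The factorization can be verified directly (a consistency check added here for completeness). Substituting the inverse transformation $`\eta =\xi +\frac{\rho _0^2}{1+\rho _0^2}\chi `$, $`\varphi =\xi -\frac{1}{1+\rho _0^2}\chi `$ into the Lagrangian (A2), the kinetic cross terms cancel and the quadratic forms reproduce the coefficients of Eqs. (A11) and (A13):

```latex
% kinetic cross term: cancels identically
\partial_\mu\xi\,\partial^\mu\chi
  \left(\frac{\rho_0^2}{1+\rho_0^2}-\rho_0^2\,\frac{1}{1+\rho_0^2}\right)=0,
% coefficients of (\partial\chi)^2 and (\partial\xi)^2
\frac{1}{2}\left(\frac{\rho_0^2}{1+\rho_0^2}\right)^{2}
  +\frac{\rho_0^2}{2}\left(\frac{1}{1+\rho_0^2}\right)^{2}
  =\frac{\rho_0^2}{2(1+\rho_0^2)},
\qquad
\frac{1}{2}+\frac{\rho_0^2}{2}=\frac{1+\rho_0^2}{2},
% source term splits between the two sectors
-\frac{\beta}{2\pi}\,\partial_\mu\eta\,\epsilon^{\mu\nu}a_\nu
  =-\frac{\beta}{2\pi}\,\partial_\mu\xi\,\epsilon^{\mu\nu}a_\nu
   -\frac{\beta\rho_0^2}{2\pi(1+\rho_0^2)}\,\partial_\mu\chi\,\epsilon^{\mu\nu}a_\nu .
```

Together with $`\mathrm{cos}\beta (\eta -\varphi )=\mathrm{cos}\beta \chi `$, which depends on the relative field alone, this shows there is no mixing between the two sectors.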
The path integration in Eq. (A12) can be carried out exactly, since the action is quadratic. The modes described by the Lagrangian $`_{\text{CM}}`$ are massless (gapless). On the other hand, the modes in $`_{\text{rel}}`$ are massive due to the presence of the cosine term. The contribution of these modes to the current is strongly suppressed. At low temperatures, the massive modes can be neglected all together in comparison to the massless ones. This, in turn, is equivalent to setting $`\chi =\text{const.}`$ in the theory, in accordance to the ansatz (ii) of Sec. II. As a result, the current is solely determined by the center-of-mass motion,
$$J^\nu =-i\frac{\delta \mathrm{ln}Z_{\text{CM}}}{\delta a_\nu }|_{a=0}.$$
(A14)
# Hollow Core?
## 1. Introduction
The mean profiles are believed to be cross-cuts of the emission windows of pulsars. In the early days, the conal double profiles helped to establish the hollow-cone beam model. More components in accumulated profiles led Backer (1976) to suggest that pulsar emission beams are composed of a hollow cone and a central beam. Rankin (1983) further pointed out that there exist two distinct types of emission components: a central (filled) core component and two conical components. To separate emission components in mean pulse profiles, Wu et al. (1992) proposed to use Gaussian fitting to the mean profile. This approach was developed by many authors, e.g. Kramer et al. (1994), Kuzmin & Izvekova (1996). Here we discuss the Gaussian components of PSR B1237+25 and conclude that the core is not filled but hollow.
## 2. PSR B1237+25
PSR B1237+25 was the prototype of five-component pulsars (e.g. Rankin 1983). Yet at some frequencies observations apparently show six components. Gaussian fitting to the average profile gives quantitatively the parameters of six individual emission components of PSR B1237+25. In our work, some profiles were taken from Phillips & Wolszczan (1992) and Bartel et al. (1982) (at frequencies 130 MHz, 320 MHz, 430 MHz, 610 MHz, 1418 MHz, and 2380 MHz) and some from Kuzmin et al. (1998) (at 100 MHz, 200 MHz, 400 MHz, 600 MHz, 1400 MHz, and 4700 MHz). We found that all the decomposed components fall into a well-organized pattern, rather than being randomly located. This implies that the six decomposed components are physically the same at all frequencies. Most interesting are the two components at the central part, which demonstrate that the core actually comes from a hollow beam.
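The decomposition itself is conceptually simple. The sketch below (Python/NumPy) illustrates the idea on a synthetic six-component profile; all centers, widths, and amplitudes here are invented for illustration and are not the fitted values for PSR B1237+25. Real fits also adjust the centers and widths nonlinearly; here they are held fixed so the amplitudes follow from linear least squares:

```python
import numpy as np

def gaussian(phi, c, w):
    return np.exp(-0.5 * ((phi - c) / w)**2)

# synthetic six-component profile; all parameters are illustrative, not fitted values
phi = np.linspace(-15.0, 15.0, 601)                 # pulse longitude (deg)
centers = [-11.0, -5.0, -1.2, 1.2, 5.0, 11.0]       # outer cone, inner cone, two core parts
widths  = [1.5, 1.3, 1.0, 1.0, 1.3, 1.5]
amps    = [0.6, 0.8, 1.0, 0.9, 0.7, 0.5]
profile = sum(a * gaussian(phi, c, w) for a, c, w in zip(amps, centers, widths))

# with centers and widths held fixed, the amplitudes follow from linear least squares
basis = np.column_stack([gaussian(phi, c, w) for c, w in zip(centers, widths)])
fitted, *_ = np.linalg.lstsq(basis, profile, rcond=None)
print(fitted)   # recovers the input amplitudes
```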
In fact, this is not a unique case. Wu et al. (1998) decomposed PSR B2045+16 at several frequencies and also found six components. Kramer et al. (1994) fitted six components to the 1.42 GHz profile of PSR B1929+10. Kuzmin & Izvekova (1996) found that PSR B0329+54 is best fitted by six components over a large frequency range.
Some theoretical models have been proposed to explain the core components, for example the Inverse Compton Scattering (ICS) model. This model (see Qiao & Lin 1998 for details) naturally produces the most complex beam of a core plus two cones, and it also explains the frequency behavior of pulse profiles. When an observer’s line of sight intersects the outer cone, the inner cone, and the hollow core, all of which are naturally given by the model, six components can be observed, almost exactly what we obtain for PSR B1237+25. Using the ICS model, we found that an impact angle greater than $`0.4^{}`$ cannot produce the phase separation $`\mathrm{\Delta }\varphi `$ of the two core components at the frequencies 130 MHz and 320 MHz.
## 3. Conclusions and Discussions
Gaussian decomposition of PSR B1237+25 results in six individual components for all the profiles at six frequencies, and we found that the core beam is actually hollow. This is exactly the case of a small impact angle in the Inverse Compton Scattering model, $`\beta <0.5^{}`$. We note that Lyne & Manchester (1988) also obtained a similar impact angle in an independent way. According to the ICS model, the emission region of the core is close to the surface of the neutron star, where the magnetic field should be dipolar. A multipole field need not be invoked to explain the hollow core emission.
### Acknowledgments.
We are very grateful to Prof. Kuzmin, Drs. Zhang B., Xu R.X., Hong B.H., and Mr. Gao X.Y., Pan J. and Wang H.G. for helpful discussions. This work is partly supported by the NSF of China, the Climbing Project (the National Key Project for Fundamental Research of China), and by the Doctoral Program Foundation of Institutions of Higher Education in China.
## References
Backer D., 1976, ApJ 209, 895
Bartel N., Morris D., Sieber W., & Hankins T., 1982, ApJ 258, 776
Kuzmin A.D., Izvekova V.A., 1996, Astronomy Letters 22, 394
Kuzmin A.D., et al., 1998, A&AS 127, 355
Lyne A.G. & Manchester R.N., 1988, MNRAS 234, 477
Phillips A. & Wolszczan A., 1992, ApJ 385, 273
Qiao G.J. & Lin W.P., 1998, A&A 333, 172
Rankin J.M. 1983, ApJ 274, 333
Wu X.J., & Manchester R.N., 1992a, in IAU Colloq. 128, p.362
Wu X.J., et al. 1998, AJ 116, 1984 |
# Statistics of Wave Functions in Coupled Chaotic Systems
## Abstract
Using the supersymmetry technique, we calculate the joint distribution of local densities of electron wavefunctions in two coupled disordered or chaotic quantum billiards. We find novel spatial correlations that are absent in a single chaotic system. Our exact result can be interpreted for small coupling in terms of the hybridization of eigenstates of the isolated billiards. We show that the presented picture is universal, independent of microscopic details of the coupling.
In disordered or chaotic systems the dynamics is in general too complex to find exact eigenstates for a given disorder or boundary configuration. Therefore, one is interested in statistical properties of physical quantities like wave functions, energy levels, etc. The complete information about these quantities can be extracted from distribution functions that can be calculated either numerically or analytically. Various methods of studying the statistics have been developed recently and this gave the possibility to achieve a good understanding of many disordered and ballistic chaotic systems (for a recent review see, e.g. ).
Now it is well established that under very general conditions the statistics of chaotic systems is universal and well described by random matrix theory (RMT) or the zero-dimensional version of the supermatrix $`\sigma `$-model. For the distribution function of the amplitudes of wave functions one obtains within this approximation the famous Porter-Thomas distribution , which is simply a Gaussian distribution of the amplitudes. Then, fluctuations of the wave functions at different points are statistically independent.
In this Letter, we present results from studying statistical properties of wave functions of two coupled chaotic systems. To be more specific, we consider two quantum chaotic billiards coupled in such a way that electrons (or electromagnetic waves) can penetrate from one billiard into the other. Studying such types of models is definitely interesting, because they can be relevant for double quantum dot systems recently fabricated on the basis of a 2D electron gas and investigated experimentally . One can also imagine that in the nearest future microwave experiments on double billiards will be performed. However, the importance of studying coupled chaotic systems is much more general and results obtained can be relevant e.g. for systems of weakly coupled complex atoms and molecules. To the best of our knowledge, neither numerical nor analytical results from a study of wave functions of coupled chaotic systems exist in the literature.
Wave functions are the most interesting objects in coupled systems, because even at very small coupling they can drastically change if any two levels of the corresponding isolated systems are close to each other. Statistics of such hybridized wave functions can be quite unusual, but, at the same time, possess interesting universal features.
The probability for the local density $`V|\varphi _\alpha (x)|^2`$ of a state with wave function $`\varphi _\alpha (x)`$ and energy $`ϵ_\alpha `$ as well as its spatial correlations can be described by the joint distribution function (JDF) $`f_2(p_1,p_2)`$, defined as:
$$f_2(p_1,p_2)=\mathrm{\Delta }\left\langle \underset{\alpha }{\sum }\delta \left(ϵ-ϵ_\alpha \right)\underset{i=1,2}{\prod }\delta \left(p_i-V|\varphi _\alpha (x_i)|^2\right)\right\rangle ,$$
(1)
where the brackets $`\langle \mathrm{}\rangle `$ denote disorder averaging, $`\mathrm{\Delta }=\left(\nu V\right)^{-1}`$ is the mean level spacing, $`V`$ is the volume of the total system and $`\nu `$ is the average density of states. The function $`f_2(p_1,p_2)`$ is the probability that the local densities at energy $`ϵ`$ are equal to $`p_1`$ and $`p_2`$ at points $`x_1`$ and $`x_2`$, respectively.
If $`p_1`$ and $`p_2`$ are not very large the statistics of the wave functions is universal and does not depend on whether the chaotic motion is due to disorder or a non-integrable shape of the billiard. The direct calculation using the supersymmetry technique or RMT gives for single billiards in the unitary case:
$$f_2(p_1,p_2)=\mathrm{exp}(-p_1)\mathrm{exp}(-p_2),$$
(2)
which is just the Porter-Thomas distribution. Proper formulae can be written for the other ensembles but we concentrate here on studying a chaotic system with broken time reversal symmetry. This case is most simple for calculations but the interesting effects obtained below are general. The unitary ensemble is easily realized in quantum dots by applying a magnetic field. As concerns microwave cavities, time reversal invariance can be broken by a special preparation of the cavity walls .
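For the unitary class, Eq. (2) simply reflects that the real and imaginary parts of the wave function are independent Gaussian variables, so the normalized local density $`p=V|\varphi |^2`$ is exponentially distributed. A minimal Monte Carlo check (Python/NumPy; sample size and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
# complex Gaussian amplitudes (broken time-reversal symmetry, unitary class)
psi = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2.0)
p = np.abs(psi)**2 / np.mean(np.abs(psi)**2)   # normalized local density V|phi|^2

# Porter-Thomas: P(p) = exp(-p), so the weight of the tail p > p0 is exp(-p0)
tail = np.mean(p > 2.0)
print(tail, np.exp(-2.0))
```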
The function $`f_2(p_1,p_2)`$, Eq. (1), characterizes correlations of the amplitudes of the electromagnetic waves in microwave experiments and can be measured directly . At the same time, this function determines the statistics of the conductance peak heights $`G_{\mathrm{max}}`$ in quantum dots in the regime of Coulomb blockade. The conductance $`G_{\mathrm{max}}`$ can be expressed through the tunnel rates of the contacts to the external leads, $`\mathrm{\Gamma }_i\propto \left|\varphi \left(x_i\right)\right|^2`$ :
$$G_{\mathrm{max}}\sim \frac{e^2}{T}\frac{\mathrm{\Gamma }_1\mathrm{\Gamma }_2}{\mathrm{\Gamma }_1+\mathrm{\Gamma }_2}.$$
(3)
Eq. (3) is valid for $`\mathrm{\Gamma }_i\ll T\ll \mathrm{\Delta }`$, where $`T`$ is the temperature and $`\mathrm{\Delta }`$ is the mean level spacing in a single dot. Then we have for the distribution function $`P\left(g\right)`$ of the dimensionless conductance $`g=T/(\mathrm{\Gamma }e^2)G_{\mathrm{max}}`$:
$$P(g)=\int dp_1dp_2\,\delta \left(g-\frac{p_1p_2}{p_1+p_2}\right)f_2(p_1,p_2).$$
(4)
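Eqs. (2) and (4) can be combined in a quick sampling estimate for a single dot (Python/NumPy). Writing $`g=u\,x(1-x)`$ with $`u=p_1+p_2`$ and $`x=p_1/u`$, which are independent for Porter-Thomas distributed $`p_i`$, gives $`\langle g\rangle =\langle u\rangle \langle x(1-x)\rangle =2\cdot \frac{1}{6}=\frac{1}{3}`$ for the mean peak height, and the sampling reproduces this:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 400_000
# Porter-Thomas distributed local densities at the two contacts, Eq. (2)
p1, p2 = rng.exponential(size=(2, N))
g = p1 * p2 / (p1 + p2)          # dimensionless peak conductance, cf. Eq. (3)

print(g.mean())   # close to 1/3
```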
The relevance of Eq. (2) for conductance fluctuations in single dots has been confirmed in experiments . Unfortunately, conductance fluctuations were not studied in the experiment on double quantum dots . In a recent theoretical work the averaged resonance conductance and its fluctuations have been considered for two coupled dots in the limit of weak coupling, based on the assumption that the wavefunctions of the corresponding isolated dots are Porter-Thomas distributed even near the point contact. In contrast, we present a general microscopic derivation of the function $`f_2(p_1,p_2)`$, Eq. (1), for two coupled quantum chaotic billiards at arbitrary coupling. We find novel universal correlations that do not exist in a single billiard.
The system we consider is represented in Fig. (1). The Hamiltonian $`H`$ can be written as:
$$H=H_1+H_2+H_T,$$
(5)
where $`H_1`$ and $`H_2`$ correspond to the isolated dots:
$$H_i=\underset{\alpha ;i=1,2}{\sum }ϵ_\alpha ^ic_\alpha ^{i+}c_\alpha ^i.$$
(6)
In Eq. (6), $`ϵ_\alpha ^i`$ are the single particle energies of dot $`i`$, and $`c^+`$ and $`c`$ are creation and annihilation operators. We want to emphasize that the billiards are not identical and can have arbitrary shapes and configurations of disorder. The coupling between the billiards is described by a tunneling Hamiltonian $`H_T`$ that can be written as:
$$H_T=\underset{\alpha ,\beta }{\sum }t_{\alpha \beta }c_\alpha ^{1+}c_\beta ^2+\mathrm{c}.\mathrm{c}.$$
(7)
Assuming that the particles tunnel through a surface $`S`$ between the two dots, the tunneling matrix element $`t_{\alpha \beta }`$ takes the form:
$$t_{\alpha \beta }=t_0\int _S\mathrm{d}^{d-1}x\sqrt{V_1V_2}\,\varphi _\alpha ^{1*}(x)\varphi _\beta ^2(x).$$
(8)
In Eq. (8) we integrate over the surface $`S`$, $`V_i`$ is the volume of dot $`i`$, $`t_0`$ is the tunneling rate per unit surface and $`\varphi _\alpha ^i(x)`$ is the single particle wave function of state $`\alpha `$ at a point $`x`$ of the tunnel contact in dot $`i`$.
Starting with the Hamiltonian, Eqs. (5-8), we perform calculations using the supersymmetry technique . Following this method the JDF can be written in a form given by an integral over supermatrices $`Q_1`$, $`Q_2`$ :
$`f_2(p_1,p_2)`$ $`=`$ $`\underset{\genfrac{}{}{0pt}{}{\delta \to 1}{\gamma \to 0}}{lim}{\displaystyle \frac{\mathrm{d}}{\mathrm{d}\delta }}{\displaystyle \frac{\delta }{4}}{\displaystyle \int _0^1}dt\int dQ_1dQ_2\,\delta \left(p_1+{\displaystyle \frac{\gamma t\delta }{2\mathrm{\Delta }}}\mathrm{Str}(\mathrm{\Pi }_{ab}Q_1)\right)`$ (10)
$`\times \delta \left(p_2-{\displaystyle \frac{\gamma (1-t)\delta }{2\mathrm{\Delta }}}\mathrm{Str}(\mathrm{\Pi }_{rb}Q_2)\right)𝒞[Q_i]\mathrm{exp}\left(-F\left[Q\right]\right).`$
The free energy $`F=F_0+F_T`$ contains the free part $`F_0`$ for the single dots:
$$F_0=\frac{\gamma }{4}\underset{i=1,2}{\sum }\mathrm{\Delta }_i^{-1}\mathrm{Str}(\mathrm{\Lambda }Q_i),$$
(11)
and the term $`F_T`$ describing the coupling between them:
$$F_T=-\frac{\alpha }{4}\mathrm{Str}(Q_1Q_2)$$
(12)
The function $`𝒞[Q_i]`$ in Eq. (10) is:
$`𝒞[Q_i]=\mathrm{Str}\left(\left(\mathrm{\Pi }_{af}-\mathrm{\Pi }_{rf}\right)\left(D_1Q_1+D_2Q_2\right)\right),`$
$`\mathrm{\Delta }_i=\left(\nu V_i\right)^{-1}`$ is the mean level spacing in dot $`i`$, $`D_i=\mathrm{\Delta }/\mathrm{\Delta }_i`$ and $`\mathrm{\Delta }^{-1}=\nu \left(V_1+V_2\right)`$. The projectors $`\mathrm{\Pi }_{ij}`$ select certain elements of the supermatrices $`Q_i`$, $`\mathrm{\Lambda }=\mathrm{diag}\{1,1,1,1,-1,-1,-1,-1\}`$, $`\mathrm{Str}`$ is the supertrace (for more details of the notations see ).
The parameter $`\alpha \sim \lambda _F^{d-1}S|t_0|^2\left(\mathrm{\Delta }_1\mathrm{\Delta }_2\right)^{-1}`$ is the dimensionless coupling constant, with $`\lambda _F`$ being the Fermi wavelength. In $`F_T`$ we have kept only the term of lowest order in $`\alpha `$. Other terms of order $`\alpha ^n`$ are smaller by a factor $`\left(\lambda _F^{d-1}/S\right)^{n-1}`$, and this corresponds to many-channel tunneling between the dots. Therefore, as long as $`\lambda _F^{d-1}/S\ll 1`$, we can consider not only small couplings but also $`\alpha \gtrsim 1`$. On the other hand, neglecting the higher order terms for point contacts is correct only for $`\alpha \ll 1`$. Note that in the limit $`\alpha \to \mathrm{}`$ fluctuations of $`\left(Q_1-Q_2\right)`$ are suppressed and one comes to the model for a single dot with volume $`V=V_1+V_2`$. The opposite limit, $`\alpha \to 0`$, can only be performed after taking $`\gamma \to 0`$.
To perform the integration we use the standard parametrization . The manipulations are not very difficult for the unitary ensemble. After a rotation of $`Q_2`$ only one set of Grassmann variables remains in the exponent, which is convenient for integrating them out. Due to the presence of the delta functions, the integration over the non-compact bosonic sector simplifies in the limit $`\gamma \to 0`$ and we get the final result, valid for arbitrary values of the coupling $`\alpha `$:
$`f_2(p_1,p_2)=\sqrt{\alpha }\mathrm{exp}\left(-D_1p_1-D_2p_2-\beta \right)`$ (14)
$`\times {\displaystyle \underset{i\ne k}{\sum }}D_i\left(q_k+\alpha \right)^{-1/2}\left[A_i+B_i\left(q_i+\alpha \right)+C_i\left(q_i+\alpha \right)^2\right].`$
In all the expressions $`i\ne k`$ and both indices can take the values $`1`$ and $`2`$. For compact notation we have defined $`q_i=2D_kp_i`$, $`\beta =\sqrt{(q_1+\alpha )(q_2+\alpha )}`$ and:
$`A_i`$ $`=`$ $`3D_1D_2m+\left(3D_k^2+D_i^2\right)n,`$
$`B_i`$ $`=`$ $`\left(D_i^2m+3D_1D_2n\right)\left(1+\beta ^{-1}\right)\beta ^{-1},`$
$`C_i`$ $`=`$ $`D_i^2n\left(1+3\beta ^{-1}+3\beta ^{-2}\right)\beta ^{-2},`$
$`m`$ $`=`$ $`\mathrm{cosh}\alpha -{\displaystyle \frac{\mathrm{sinh}\alpha }{2\alpha }},n={\displaystyle \frac{1}{2}}\mathrm{sinh}\alpha .`$
Eq. (14), although exact, is rather complicated, and we now discuss its asymptotics. In the limit $`\alpha \to 0`$, corresponding to almost isolated quantum dots, the JDF vanishes for all $`p_1`$, $`p_2`$ except when either $`p_1`$ or $`p_2`$ is zero, where it diverges. The divergences are contained in the terms proportional to $`\beta ^{-4}`$ in $`C_i`$ and the terms proportional to $`\beta ^{-2}`$ in $`B_i`$. Then we find:
$$f_2(p_1,p_2)=D_2^2\delta (p_1)e^{-D_2p_2}+D_1^2\delta (p_2)e^{-D_1p_1}.$$
(15)
The particle is localized either in dot $`1`$, in which case $`p_2=0`$ and $`p_1`$ follows the Porter-Thomas distribution, or vice versa. The factors $`D_i`$ appear because the dimensionless densities $`p_i`$ have been normalized with respect to the volume of the total system and not to that of the corresponding single dot.
In the opposite limit $`\alpha \to \mathrm{}`$ of strong coupling between the dots we easily obtain from Eq. (14) the Porter-Thomas distribution, Eq. (2), for the total system of volume $`V`$.
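These limits provide a direct numerical check of Eq. (14). The sketch below (Python/NumPy) implements the formula for symmetric dots, $`D_1=D_2=1/2`$, assuming the sign conventions $`\mathrm{exp}(-D_1p_1-D_2p_2-\beta )`$ for the exponent and $`m=\mathrm{cosh}\alpha -\mathrm{sinh}\alpha /(2\alpha )`$, and recovers the factorized Porter-Thomas form at strong coupling:

```python
import numpy as np

def f2(p1, p2, alpha, D1=0.5, D2=0.5):
    """Joint distribution of local densities, Eq. (14)."""
    D = (D1, D2)
    q = (2.0 * D2 * p1, 2.0 * D1 * p2)          # q_i = 2 D_k p_i,  k != i
    beta = np.sqrt((q[0] + alpha) * (q[1] + alpha))
    m = np.cosh(alpha) - np.sinh(alpha) / (2.0 * alpha)
    n = 0.5 * np.sinh(alpha)
    total = 0.0
    for i, k in ((0, 1), (1, 0)):               # sum over i != k
        A = 3.0 * D1 * D2 * m + (3.0 * D[k]**2 + D[i]**2) * n
        B = (D[i]**2 * m + 3.0 * D1 * D2 * n) * (1.0 + 1.0/beta) / beta
        C = D[i]**2 * n * (1.0 + 3.0/beta + 3.0/beta**2) / beta**2
        total += D[i] * (q[k] + alpha)**(-0.5) * (A + B*(q[i] + alpha) + C*(q[i] + alpha)**2)
    return np.sqrt(alpha) * np.exp(-D1*p1 - D2*p2 - beta) * total

# strong coupling: the two billiards act as one, f2 -> exp(-p1)*exp(-p2)
print(f2(0.7, 0.7, alpha=100.0), np.exp(-1.4))
```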
So, in the limits of strongly coupled and of isolated billiards we obtain the Porter-Thomas distribution. In both cases fluctuations of the amplitudes of the wave functions at different points are Gaussian and uncorrelated. But what happens if the coupling constant $`\alpha `$ is finite although small? Can one have for such $`\alpha `$ correlations between the wave functions at different points within one dot or, maybe, correlations between different dots?
The answer to this question can be found by taking in Eq. (14) the limit $`\alpha \ll 1`$ and $`p_1,p_2\gg \alpha `$, which gives:
$`f_2(p_1,p_2)`$ $`\simeq `$ $`\sqrt{{\displaystyle \frac{\alpha }{8}}}\left(D_1D_2\right)^{3/2}\mathrm{exp}\left[-\left(\sqrt{u_1}+\sqrt{u_2}\right)^2\right]`$ (17)
$`\times \left[3\left(u_1^{-1/2}+u_2^{-1/2}\right)+\left({\displaystyle \frac{1}{2}}+\sqrt{u_1u_2}\right)\left(u_1^{-3/2}+u_2^{-3/2}\right)\right],`$
where $`u_i=D_ip_i`$. We see that the exponential does not factorize into a product of functions of $`p_1`$ and $`p_2`$, as it would for the ordinary Porter-Thomas distribution; this demonstrates novel correlations that are absent in a single dot. This quite remarkable result means that inserting a weakly penetrable wall into a quantum billiard induces correlations of the wave functions across the wall, although without the wall they would be absent. Of course, making the wall less and less penetrable suppresses the correlations through the prefactor $`\sqrt{\alpha }`$ in Eq. (17), but this decay is slow.
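The non-factorization claim can be checked numerically. The Python sketch below evaluates the asymptotic form of Eq. (17) for $`D_1=D_2=1`$ and computes the cross ratio $`f(a,a)f(b,b)/[f(a,b)f(b,a)]`$, which equals 1 identically for any product density $`g(p_1)h(p_2)`$. The negative exponents in the square bracket follow our reading of the (garbled) source and are thus an assumption.

```python
import numpy as np

# Check that the asymptotic JDF of Eq. (17) is not a product g(p1)*h(p2).
# For a factorized density the cross ratio f(a,a)f(b,b)/[f(a,b)f(b,a)]
# equals 1 identically; here it does not.

def f2_asymptotic(p1, p2, alpha=0.01, D1=1.0, D2=1.0):
    """Asymptotic JDF of Eq. (17); exponent signs are our reconstruction."""
    u1, u2 = D1 * p1, D2 * p2
    bracket = (3.0 * (u1**-0.5 + u2**-0.5)
               + (0.5 + np.sqrt(u1 * u2)) * (u1**-1.5 + u2**-1.5))
    return (np.sqrt(alpha / 8.0) * (D1 * D2)**1.5
            * np.exp(-(np.sqrt(u1) + np.sqrt(u2))**2) * bracket)

a, b = 1.0, 2.0
ratio = (f2_asymptotic(a, a) * f2_asymptotic(b, b)
         / (f2_asymptotic(a, b) * f2_asymptotic(b, a)))
# ratio differs measurably from 1, so the two amplitudes are correlated
```

Since the deviation of the cross ratio from unity survives for any small $`\alpha `$ (the prefactor $`\sqrt{\alpha }`$ cancels in the ratio), the correlation is a property of the functional form itself, not of the overall normalization.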
The form of the function $`f_2(p_1,p_2)`$, Eq. (17), is surprisingly universal, with the tunneling described by the single sample-specific parameter $`\alpha `$. It is natural to assume that such universality follows from some general physical principle. Indeed, the correlations between the wave functions of the chaotic billiards are a consequence of the quantum mechanical hybridization of states. This hybridization is most effective when the coupling between the billiards is weak and two levels of the different billiards happen to lie close to each other.
Let us show that Eq. (17) can be obtained in the limit $`\alpha \ll 1`$, $`p_{1,2}\gg 1`$ from this picture. The derivation is less general, and we are able to reproduce Eq. (17) for point contacts only. Instead of Eq. (8) we consider a tunneling point contact at $`x_t`$:
$$t_{\alpha \beta }=t_0\sqrt{V_1V_2}\varphi _\alpha ^{1*}(x_t)\varphi _\beta ^2(x_t).$$
(18)
The dimensionless coupling constant is now $`\alpha \propto |t_0|^2/(\mathrm{\Delta }_1\mathrm{\Delta }_2)`$. We assume that the main contribution comes from configurations where two levels of the dots $`1`$ and $`2`$ are close to each other, and so we consider only two eigenstates, with energy $`\xi ^1`$ in dot $`1`$ and energy $`\xi ^2`$ in dot $`2`$. Including the tunneling results in a hybridization of the states, and first-order degenerate perturbation theory gives for the energies of the hybridized states:
$$ϵ_\pm =\xi _+\pm \sqrt{\xi _{-}^2+|t_{12}|^2},$$
(19)
where $`\xi _+=\frac{1}{2}(\xi ^1+\xi ^2)`$ and $`\xi _{-}=\frac{1}{2}(\xi ^1-\xi ^2)`$. The local densities $`|\varphi _\pm (x_{1,2})|^2`$ in dots $`1,2`$ at the points $`x_1`$ and $`x_2`$ near the contact can now be expressed in terms of the eigenfunctions $`\varphi ^1\left(x_1\right)`$ and $`\varphi ^2\left(x_2\right)`$ of the isolated dots:
$`|\varphi _\pm (x_1)|^2`$ $`=`$ $`\left[1+𝒜_\pm \right]^{-1}|\varphi ^1(x_1)|^2,`$ (20)
$`|\varphi _\pm (x_2)|^2`$ $`=`$ $`\left[1+𝒜_\pm ^{-1}\right]^{-1}|\varphi ^2(x_2)|^2,`$ (21)
where $`𝒜_\pm =|t_{12}|^{-2}\left(\xi _{-}\mp \sqrt{\xi _{-}^2+|t_{12}|^2}\right)^2`$.
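These formulas can be verified against a direct diagonalization of the two-level Hamiltonian. The Python sketch below checks, for a real tunneling amplitude, that the eigenvalues of $`H=\begin{pmatrix}\xi ^1&t\\ t&\xi ^2\end{pmatrix}`$ reproduce Eq. (19) and that the eigenvector weights on the two dots reproduce Eqs. (20)-(21); the sign conventions of $`𝒜_\pm `$ are the ones reconstructed here, so they should be read as our interpretation of the source.

```python
import numpy as np

# Numerical check of the two-level hybridization formulas, Eqs. (19)-(21),
# for a real 2x2 Hamiltonian H = [[xi1, t], [t, xi2]].

xi1, xi2, t = 1.0, 0.0, 0.5
xi_p, xi_m = 0.5 * (xi1 + xi2), 0.5 * (xi1 - xi2)
R = np.sqrt(xi_m**2 + t**2)

# Eq. (19): hybridized energies
eps_plus, eps_minus = xi_p + R, xi_p - R

# Eqs. (20)-(21): weights of the "+" state on dots 1 and 2
A_plus = (xi_m - R)**2 / t**2
w1 = 1.0 / (1.0 + A_plus)        # |phi_+(x1)|^2 / |phi^1(x1)|^2
w2 = 1.0 / (1.0 + 1.0 / A_plus)  # |phi_+(x2)|^2 / |phi^2(x2)|^2

# direct diagonalization for comparison
evals, evecs = np.linalg.eigh(np.array([[xi1, t], [t, xi2]]))
v_plus = evecs[:, np.argmax(evals)]   # eigenvector belonging to eps_plus
```

Note that $`𝒜_+𝒜_-=1`$, so the two weights on a given dot automatically sum to unity, as they must for normalized states.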
In order to calculate the function $`f_2(p_1,p_2)`$, the local densities $`|\varphi _\pm (x_{1,2})|^2`$, Eqs. (20-21), should be substituted into Eq. (1) and the latter averaged over disorder. Instead of doing so, we replace the average over disorder by an average over the level mismatch $`\xi _{-}`$ and over the densities $`p_i=V_i|\varphi ^i(x_i)|^2`$ at all points. Assuming that the densities in the isolated dots are uncorrelated and Porter-Thomas distributed, and that the level mismatch is distributed according to a function $`s`$:
$$\mathrm{\Delta }^{-1}s(\xi _{-}/\mathrm{\Delta })\mathrm{d}\xi _{-},$$
(22)
we can perform the averaging for the function $`f_2(p_1,p_2)`$. Now Eq. (1) becomes:
$`f_2(p_1,p_2)`$ $`=`$ $`{\displaystyle \frac{1}{\mathrm{\Delta }}}{\displaystyle \underset{\pm }{\sum }}{\displaystyle \int _0^{\mathrm{\infty }}}\left({\displaystyle \underset{i=1}{\overset{4}{\prod }}}\mathrm{d}q_ie^{-q_i}\right){\displaystyle \int _{-\mathrm{\infty }}^{\mathrm{\infty }}}\mathrm{d}\xi _{-}s\left(\xi _{-}/\mathrm{\Delta }\right)`$ (24)
$`\times \delta \left(p_1-D_1^{-1}[1+𝒜_\pm ]^{-1}q_3\right)\delta \left(p_2-D_2^{-1}\left[1+𝒜_\pm ^{-1}\right]^{-1}q_4\right),`$
where $`𝒜_\pm =\left(t_0^2q_1q_2\right)^{-1}\left(\xi _{-}\mp \sqrt{\xi _{-}^2+t_0^2q_1q_2}\right)^2`$.
Evaluation of the integrals is straightforward. First we integrate over $`q_3,q_4`$ and then over $`\xi _{-}`$. In the limit $`p_1,p_2\gg \alpha `$, $`\alpha \ll 1`$ the main contribution to the integral comes from $`\xi _{-}\sim t_0`$, and we can replace the function $`s\left(\xi _{-}/\mathrm{\Delta }\right)`$ by its value $`s\left(0\right)`$. This shows that the result is independent of the distribution of the mismatch $`\xi _{-}`$. The remaining integrals are easily performed, and we reproduce Eq. (17) up to a constant factor of order unity which cannot be determined from such a consideration.
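The averaging prescription of Eq. (24) also lends itself to a simple Monte Carlo sketch: draw the Porter-Thomas distributed contact densities $`q_1,q_2`$, the observation-point densities $`q_3,q_4`$ and the mismatch $`\xi _{-}`$, pick one of the $`\pm `$ branches with equal weight, and record the resulting $`(p_1,p_2)`$ pairs; their two-dimensional histogram then approximates $`f_2`$. In the Python snippet below, $`D_1=D_2=1`$, $`t_0=0.1`$ and a box distribution for $`s`$ are purely illustrative choices.

```python
import numpy as np

# Monte Carlo sketch of the averaging in Eq. (24): the disorder average is
# replaced by sampling the Porter-Thomas contact densities (q1, q2), the
# observation-point densities (q3, q4) and the level mismatch xi, with the
# +/- branch chosen with equal probability.  D1 = D2 = 1 here.

rng = np.random.default_rng(0)
N, t0, Delta = 100_000, 0.1, 1.0

q = rng.exponential(1.0, size=(4, N))        # Porter-Thomas densities
xi = rng.uniform(-Delta / 2, Delta / 2, N)   # mismatch; s = box of width Delta
branch = rng.choice([1.0, -1.0], N)          # the +/- branch in the sum

T2 = t0**2 * q[0] * q[1]                     # |t_12|^2 for each sample
A = (xi - branch * np.sqrt(xi**2 + T2))**2 / T2
p1 = q[2] / (1.0 + A)                        # samples of the local densities
p2 = q[3] / (1.0 + 1.0 / A)
```

Because $`𝒜_+𝒜_-=1`$ and the branches are equally weighted, the sample means of $`p_1`$ and $`p_2`$ equal $`1/2`$ exactly in the limit of many samples, which provides a quick sanity check of the implementation.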
Nevertheless, this simple evaluation cannot replace the explicit derivation of the general Eq. (14) with the supersymmetry technique, because Eq. (14) is valid for arbitrary $`\alpha `$ and $`p_{1,2}`$. Even in the limit $`\alpha \ll 1`$, $`p_{1,2}\gg 1`$ the evaluation based on the two-level approximation was carried out for point contacts only. If we used instead of Eq. (18) the more general Eq. (8), the expression for $`|t_{12}|^2`$ would contain combinations of the type $`_i\varphi ^i(x)\varphi ^i(x^{})`$ with $`|x-x^{}|\sim \lambda _F`$. However, the Porter-Thomas distribution is not applicable to wave functions at points separated by atomic distances.
In contrast, Eq. (14) does not depend on the structure of the tunnel contact and contains $`\alpha `$ as its only parameter. We have therefore identified a universal statistics of hybridized levels that is independent of microscopic details and follows only from the existence of a coupling between two otherwise statistically independent systems.
As a natural application of the general formula, Eq. (14), let us calculate the distribution function of the conductance peaks in a double dot structure in the regime of Coulomb blockade. Substituting Eq. (14) into Eq. (4) we can immediately evaluate the distribution function $`P\left(g\right)`$, where $`g`$ is the dimensionless conductance. The result of the computation is presented in Fig. (2). It should be mentioned that Eq. (4) and, hence, the plot in Fig. (2) are valid for a symmetric setup with both dots of the same volume and equal couplings to the leads (the generalization to an asymmetric situation is straightforward). For small couplings $`\alpha \ll 1`$ the statistical properties of the peak conductance are correctly described by the two-level approximation. In this region small values of the conductance peaks are most probable. For large $`\alpha `$ the distribution is almost flat.
In conclusion, we have derived the joint distribution function of the wave function amplitudes for coupled chaotic systems in the unitary ensemble. We have discovered novel universal correlations that are absent in a single chaotic system and presented simple physical arguments explaining their origin. The results should be applicable to a large class of coupled chaotic systems.
AT wants to thank A. Altland, I. L. Aleiner and F. W. J. Hekking for useful discussions. This research was supported by the SFB 237 of the DFG.
# On the Globular Cluster IMF below 1 M☉

*Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS5-26555.*
## 1 Introduction
The IMF is a critical ingredient in our understanding of a large number of basic astronomical phenomena such as the formation of the first stars, galaxy formation and evolution, and the determination of the absolute star formation rate. It also plays a dominant role in any star formation theory as the end result of molecular cloud contraction and fragmentation. Moreover, the IMF is one of the important factors determining the rate of cluster disruption via internal and external evolution (relaxation and tidal shocking) and, in consequence, the possible dark matter content of galaxy halos. In this latter context, a single power-law IMF increasing toward low masses as $`dN/dm\propto m^{-\alpha }`$ with $`\alpha \gtrsim 2`$ all the way to very low substellar masses is required to substantially affect the baryonic mass budget of the halo (Chabrier & Méra 1997; Graff & Freese 1996).
The actual measurement of a MF is a complex process whose ultimate precision and reliability rest heavily on a very careful quantitative analysis of all sources of possible random and systematic error. The basic uncertainties presently stem mainly from sample contamination, incompleteness, errors in the mass-luminosity and color-magnitude relations, the age, distance, and extinction of the stars, their evolution, mass segregation, and unresolved binaries. The IMF itself depends crucially, in most cases, on knowledge of the age and of any subsequent effects such as the external dynamical evolution of a cluster in a galactic tidal field. For all these reasons, it has proven very difficult to pin down the shape of the IMF observationally with the required reliability and accuracy in a wide variety of stellar environments (Scalo 1998, 1999). Critical open issues at present are the slope of the MF at the lowest-mass end of the stellar main sequence (MS), in particular whether or not there is a turn-over at the lowest masses before the H-burning limit, and whether the IMF is universal or depends on the initial physical conditions in the natal environment.
Globular clusters represent, in principle, the ideal sample from which to deduce the stellar IMF and properly answer the above questions. They offer a large, statistically significant sample of relatively bright, coeval, equidistant stars with, in most cases, relatively small variations of chemical composition and extinction within each cluster. They were all formed very early in the history of the Galaxy and there is no evidence of subsequent star formation episodes. The binary fraction outside the core is less than 10–15% and has an insignificant effect on the measured LF (Rubenstein & Bailyn 1999). Mass segregation is a relatively straightforward and well understood phenomenon, quantifiable by simple Michie–King models such as those used by Meylan (1987, 1988). The only potentially serious obstacle is the possible modification of the IMF by tidal interactions with the Galactic potential. This interaction, integrated over the orbit and over time, is expected to slowly decrease the slope of the global mass function of the cluster (Vesperini 1998), thereby effectively masking the original IMF from present-day observations, no matter how precise and detailed they are.
Since deep LF of a dozen globular clusters (GC) in our Galaxy have now been accurately measured, we are in a good position to address observationally whether and how the interaction history of these clusters, whose Galactic orbits are reasonably well known, affects their LF in the mass range where the signature is expected to be most significant. In this paper, we show that LF obtained at or just beyond the half-light radius of the clusters surveyed are completely insensitive to this history and can therefore be used to deduce an uncontaminated stellar IMF below $`1`$ M☉ for these stars.
## 2 Observational Data
The main characteristics of the data used for this study are summarized in Table 1 and the relevant, presently available physical parameters are listed in Table 2. All the listed clusters have well measured LF in the critical range $`6.5<M_I<10`$, and some even well beyond these limits. We have restricted this study to absolute $`I`$ magnitudes fainter than $`4.5`$, corresponding to a mass of $`0.75`$ M☉, to avoid the mass range near the turn-off where cluster age and instrument saturation might significantly affect the determination of the LF (Silvestri et al. 1998; Baraffe et al. 1997; De Marchi et al. 1999). The LF of these clusters, in number per $`0.5`$ magnitude bin as determined by analysis of their color–magnitude diagrams (CMD), are plotted with the relevant $`1\sigma `$ error bars in Figure 1 as a function of the absolute $`I`$-band magnitude obtained using the distance moduli given in Table 2.
The statistical errors shown in Figure 1 have been determined by combining in quadrature the uncertainty resulting from the Poisson statistics of the counting process with that accompanying the measurement of the photometric incompleteness. The family of curves shown in Figure 1 represents a very homogeneous sample of objects, all observed and analyzed with the same basic techniques well outside the core, in a region where the low-mass MS is well populated. The most obvious feature of the observed LF is the peak located at $`M_I\simeq 8.5`$–$`9`$, with a rising and a descending part on either side. Only the clusters NGC 6341, NGC 6397, NGC 6656, NGC 6752, and NGC 6809 extend significantly beyond $`M_I=10`$ in this sample, owing to the difficulty of obtaining reliable LF at such faint luminosities for the more distant objects.
## 3 Conversion to a Mass Function
The observed local LF (i.e. $`dN/dM_I`$) shown in Figure 1 can be converted into the corresponding MF (i.e. $`dN/d\mathrm{log}m`$) by the application of a mass-luminosity relation (ML) as follows:
$$dN/dM_I=dN/d\mathrm{log}m\times d\mathrm{log}m/dM_I$$
(1)
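As a concrete illustration of Eq. (1), the Python sketch below maps an assumed log-normal MF into the observational plane by multiplying it with the numerical derivative of a mass-luminosity relation. The ML relation used here is a purely illustrative monotonic toy function, not the Baraffe et al. (1997) or Cassisi (1999) models, and the MF parameters are the sample averages quoted later in the text.

```python
import numpy as np

# Sketch of Eq. (1): an assumed MF (dN/dlog m) is mapped to the
# observational plane (dN/dM_I) by multiplying with the derivative
# d log m / dM_I of the mass-luminosity (ML) relation.
# NOTE: toy_ml_relation is an illustrative stand-in, not a real model.

def toy_ml_relation(m):
    """Illustrative I-band magnitude for mass m (solar units):
    fainter (larger M_I) at lower mass, steepening toward 0.1 Msun."""
    return 4.5 + 6.0 * (0.75 - m) + 1.5 / (m + 0.05)

def lf_from_mf(mf_logm, m_grid):
    """Convert dN/dlog m sampled on m_grid into dN/dM_I via Eq. (1)."""
    logm = np.log10(m_grid)
    MI = toy_ml_relation(m_grid)
    dlogm_dMI = np.gradient(logm, MI)        # derivative of the ML relation
    return MI, mf_logm * np.abs(dlogm_dMI)   # dN/dM_I is non-negative

m = np.linspace(0.09, 0.75, 200)
# log-normal MF of Eq. (2) with the sample-average parameters
mf = np.exp(-(np.log10(m / 0.33))**2 / (2 * 1.81**2))
MI, lf = lf_from_mf(mf, m)
```

Any structure in the derivative of the ML relation (e.g. a steepening near the H-burning limit) imprints itself directly on the predicted LF, which is why subtle differences between ML models are amplified in the LF-to-MF conversion.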
Thus, the observed LF is simply the product of the MF with the derivative of the ML relation. The critical step here, therefore, is intimately connected to the proper realization of the appropriate ML relation for the age and metallicity of the cluster. A number of possibilities exists presently but the most reliable are the theoretical ML relations explicitly computed for the appropriate observational bandpasses by Alexander et al. (1997), Silvestri et al. (1998), Baraffe et al. (1997, 1998) and Cassisi (1999). Subtle differences between the calculations can be considerably amplified by the derivative process that is required to transform a LF into a MF and vice versa as shown in Equation 1. The main reason for these differences lies in the use of the gray atmosphere approximation in the first two models while the Baraffe et al. (1997) and Cassisi (1999) approach relies on a self-consistent non-gray model atmosphere to provide the correct boundary conditions for the interior integration (Chabrier, Baraffe, & Plez 1996). Another reason is probably connected to the differing equations of state used by the different authors. In any case, the very fact that there are significant differences in the various approaches that could definitely affect the final transformation strongly argues that we should use the models that adopt the fewest approximations to the physical processes underlying the emitted spectrum and that adequately fit the widest possible range of available data on low mass, low metallicity stars with the minimum of adjustable parameters. Presently, this advantage lies with the Baraffe et al. (1997; see Chabrier et al. 1999 for the most recent review) and Cassisi (1999) models that we, therefore, will use exclusively in the following discussion. The two models are, fortunately, practically indistinguishable from one another in the I band and our mass range. 
This consistency between independent determinations increases our confidence in the basic reliability of the M-L relation used here.
Since it is common practice to derive a MF from a given LF by dividing the latter by the derivative of the ML relation, in principle we could apply Equation 1 to the data in Figure 1 and derive the MF directly through the inversion of the LF. In this way, however, the contribution of the experimental errors and of the uncertainties inherent in the models would become difficult to disentangle in the final result. Instead, we prefer to assess the validity of a model MF by converting it into the observational plane and comparing directly the resulting LF with the observed one, precisely as indicated by the formalism of Equation 1. Particular care should be used, however, when assuming a functional form for the MF. Since the MF is often defined as the differential probability distribution of stellar mass $`m`$ per unit logarithmic mass range, i.e. $`F(\mathrm{log}m)`$, it is convenient to use the logarithmic index $`x=d\mathrm{log}F(\mathrm{log}m)/d\mathrm{log}m`$ to characterize the MF slope locally, over a narrow mass interval. As Scalo (1998) points out, however, this characterization might seem to imply that the MF is a power law with fixed index $`x`$, and thus it might seem to justify extending over a wide mass range an index derived over a much narrower mass interval. This assumption is probably responsible for much of the confusion still affecting the shape of the MF of GC stars near the H-burning limit.
In fact, as recently shown by De Marchi, Paresce, & Pulone (2000), the deepest LF available for NGC 6397 rules out the possibility that its MF is represented by a power-law distribution with a single value of the exponent $`x`$. We refer the reader to that paper for a complete discussion of the details, but report here the main points of that derivation for sake of clarity:
1. The expected V and I magnitudes and colors from Baraffe et al. (1997, 1998) appropriate to the distance, age, and metallicity of NGC 6397 fit very well the observed optical and near-IR CMD with no adjustable parameters;
2. The LF of this cluster has now been observed by three different techniques from the optical to the near IR by two different groups all of which give the same result throughout the wide luminosity range between $`6.0M_I12.5`$ within observational errors;
3. The resulting MF cannot be reproduced by a single power-law function over this range as shown in Figure 2, because after multiplication with the appropriate ML relation such a function cannot simultaneously fit both the rising and descending portions of the LF;
4. The mass function that best fits the combined data on NGC 6397 is a log-normal distribution (see Equation 2) with mass scale $`m_c\simeq 0.3`$ and standard deviation $`\sigma \simeq 1.7`$. The LF to which this MF gives rise is shown as a dashed curve in the bottom panel of Figure 2.
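The incompatibility with a single power law noted in point 3 can be made quantitative. The Python sketch below computes the local logarithmic index $`x=d\mathrm{log}F/d\mathrm{log}m`$ of the best-fit log-normal of point 4 ($`m_c0.3`$, $`\sigma 1.7`$): for a pure power law $`x`$ is constant, whereas here it varies linearly with $`\mathrm{log}m`$ and changes sign at $`m=m_c`$. Reading the logarithm in Equation 2 as the natural logarithm is an assumption; a base-10 reading rescales the index by a constant factor but leaves the qualitative variation unchanged.

```python
import numpy as np

# Local logarithmic slope x = d log F / d log m of the log-normal MF that
# best fits NGC 6397 (m_c ~ 0.3, sigma ~ 1.7; "log" in Eq. (2) is read
# here as the natural logarithm, an assumption).  A single power law has
# constant x; the log-normal index varies linearly with log m instead.

m_c, sigma = 0.3, 1.7
m = np.linspace(0.09, 0.7, 400)
ln_f = -(np.log(m / m_c))**2 / (2.0 * sigma**2)   # Eq. (2) up to the constant A
x = np.gradient(ln_f, np.log(m))                  # local index d log F / d log m

i_c = np.argmin(np.abs(m - m_c))                  # index nearest the peak mass
```

Because $`x`$ is positive below $`m_c`$ and negative above it, no single-exponent power law can reproduce both the rising and the descending portions of the LF simultaneously.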
We cannot claim, of course, that all the other clusters in our study, and especially those whose observed LF do not extend as far beyond $`M_I10`$ in Figure 1 as that of NGC 6397, must necessarily have an MF shaped like that of NGC 6397, since we do not yet have data in the fainter regime. They might have MF that continue to rise down to the H-burning limit, as espoused by Chabrier & Méra (1997), for example, but the ascending parts are essentially indistinguishable from one another, and the implication at least is that they would behave similarly below the peak. This is certainly true for NGC 6656, as discussed in De Marchi & Paresce (1997), as well as for NGC 6341, NGC 6752, and NGC 6809, since their LF extend to $`M_I=11`$, well beyond the peak, and are not compatible with an underlying power-law MF, regardless of the value of its index $`x`$.
In Figure 3, we show the log-normal distributions that accurately reproduce the LF plotted in Figure 1 over the whole magnitude range spanned by the observations. Solid lines mark the portions of the MF that have been fitted to the data, while the dashed lines represent the extrapolation of the same MF to cover the range 0.09–0.7 M☉. The dashed lines in Figure 1 represent the theoretical LF obtained with the MF shown in Figure 3 that best fit the observed LF. The log-normal MF is characterized by only two parameters, namely the characteristic mass $`m_\mathrm{c}`$ and the standard deviation $`\sigma `$, and takes the form:
$$\mathrm{ln}f\left(\mathrm{log}m\right)=A-\frac{\left[\mathrm{log}(m/m_c)\right]^2}{2\sigma ^2}$$
(2)
where $`A`$ is a normalization constant. The average values of the parameters for this sample of clusters are $`<m_c>=0.33\pm 0.03`$ and $`<\sigma >=1.81\pm 0.19`$. The uncertainties accompanying $`<m_c>`$ and $`<\sigma >`$ represent the scatter of the individual values of $`m_c`$ and $`\sigma `$, which are given for each cluster in Table 2. It should be noted that the relatively small values of $`\sigma `$ in Table 2 imply that for $`m<m_c`$ the MF drops not only in the logarithmic plane but also in linear units, i.e. the number of stars per unit mass decreases with decreasing mass below the peak. A simple, unbiased measure of the steepness of the rise to the maximum of the MF shown in Figure 3, one that does not depend on any preconceived notion of the shape of the MF, is $`\mathrm{\Delta }\mathrm{log}N`$, defined as the logarithmic ratio of the number of lower to higher mass stars taken from the MF in the mass range between $`m=m_c`$ and $`m=0.7`$ M☉. This is probably the most convenient parameter to describe the region of the mass distribution most likely to be affected by external and internal dynamics, and it is listed in Table 2 for each cluster. Another advantage of $`\mathrm{\Delta }\mathrm{log}N`$ is that it is defined in a mass range where the stellar surface structure is best understood and all presently available models of the ML relation agree well with one another (Silvestri et al. 1998); it is, in consequence, least likely to be affected by uncertainties in the LF to MF conversion.
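The quantities above can be sketched numerically. The Python snippet below evaluates the log-normal MF of Eq. (2) with the sample-average parameters and computes $`\mathrm{\Delta }\mathrm{log}N`$ under one simple reading of its definition, the base-10 logarithmic ratio of the MF value at the low-mass end ($`m=m_c`$) to that at the high-mass end ($`m=0.7`$ M☉) of the interval. Both the natural-logarithm reading of Eq. (2) and this pointwise definition are assumptions, not necessarily the exact prescription behind Table 2.

```python
import numpy as np

# Log-normal MF of Eq. (2) with the sample-average parameters
# <m_c> = 0.33 Msun and <sigma> = 1.81.  The pointwise definition of
# Delta log N used here is one plausible reading, flagged as an assumption.

M_C, SIGMA = 0.33, 1.81

def mf(m, A=0.0):
    """dN/dlog m of Eq. (2); A is the normalization constant."""
    return np.exp(A - (np.log(m / M_C))**2 / (2.0 * SIGMA**2))

# ratio of lower- to higher-mass star counts over [m_c, 0.7] Msun
delta_log_n = np.log10(mf(M_C) / mf(0.7))

m_grid = np.linspace(0.09, 0.75, 300)
m_peak = m_grid[np.argmax(mf(m_grid))]   # the maximum sits at m = m_c
```

Since $`m_c`$ is the maximum of the distribution, $`\mathrm{\Delta }\mathrm{log}N`$ is positive by construction for any cluster whose MF still rises toward the peak; a flattened or inverted MF would drive it toward zero or below.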
## 4 Correction for Mass Segregation
The MF shown in Figure 3 need to be corrected for the effects of mass segregation due to energy equipartition, as has been extensively discussed in Pulone et al. (1999), for example, in the case of NGC 6121 (M 4). To assess the magnitude of the expected effect on the shape of the local MF, we have run standard multi-mass Michie–King models for the clusters in our sample. As a typical example, in Figure 4 (left panel) the expected local MF is plotted as a function of radial position and compared to the input global MF (dashed lines) for the case of NGC 6397. As can be seen in this figure, the largest departure of the local MF from the global MF occurs in the innermost regions of the cluster ($`r\lesssim 0.1r_h`$), but significant deviations also occur beyond the half-light radius. Near the half-light radius the deviations are insignificant. This is shown graphically in the right panel of Figure 4, where we plot the standard deviation of the local from the global MF as a function of radial position for this cluster. These results are quite consistent with the expectations of previous work along these lines by Richer et al. (1991) and Pryor, Smith, & McClure (1986).
It is clear from this result that, provided the measurements are carried out close to the half-light radius where the effects of mass segregation are minimal, the deviation between the local MF and the global one is basically unmeasurable with present techniques. This result was confirmed for NGC 6397 where the global MF was found to be essentially indistinguishable from the MF determined at the half light radius (De Marchi et al. 2000). While the majority of the LF in our sample fulfill this requirement and can, therefore, be left safely unaltered, those of NGC 6341, NGC 7078, and NGC 7099 have been obtained farther out in the cluster, at about $`4r_h`$ (see Table 1) in a region where the deviations can be significant as indicated in Figure 4. The effects of mass segregation must be accounted for in these cases as they might otherwise lead to global MF that appear steeper than they really are. The corrected $`\mathrm{\Delta }\mathrm{log}N`$ for these clusters obtained by using the appropriate Michie-King models are listed in Table 2. The effect of this correction is a decrease, as expected, of the value of $`\mathrm{\Delta }\mathrm{log}N`$ for all three clusters.
## 5 Physical Data and Tidal Disruption
In Table 2, we list the main physical parameters of the clusters surveyed so far. Since our main objective is to search for a signature of the cluster’s dynamical history on its low mass MF, we have included in this table whatever is known about its orbit in the Galactic tidal field. The space motion data were obtained from the work of Dauphole et al. (1996), Odenkirchen et al. (1997), and Dinescu, Girard, & Van Altena (1999). These data can be used in a theoretical model to determine the change with time of the cluster’s main characteristics such as total mass, mass and luminosity functions, tidal radii, central concentrations, relaxation times, etc. Both N-body and Fokker-Planck models of increasing sophistication have been used recently to compute such evolution (Takahashi & Portegies Zwart 1999; Gnedin, Hernquist, & Ostriker 1999; Vesperini 1998, 1997; Vesperini & Heggie 1997; Gnedin & Ostriker 1997; Capriotti & Hawley 1996; Murali & Weinberg 1997). Although different authors use different initial conditions and approximations to the complex tidal interaction mechanisms, the generally physically plausible final result is a flattening of an assumed power-law low mass MF with time due to the preferential evaporation of lower mass stars forced by two-body relaxation out to the cluster periphery where the evaporation process is accelerated by tidal shocks.
A direct calculation of this phenomenon for a specific cluster orbit has not been carried out yet but an indirect indication at least of the magnitude of the effect can be gleaned from the recent calculations of the time to disruption $`T_d`$ of specific clusters carried out by Gnedin & Ostriker (1997) and by Dinescu et al. (1999). These times are given in Gyr in Table 2 (assuming a value of 10 Gyr for a Hubble time) where the two values of the total destruction rate given by Gnedin & Ostriker (1997) for the two galactic models used in their calculations have been averaged in column (10). The observed clusters cover quite a large range of $`T_d`$ from a minimum of 4 Gyr for NGC 6397 to 213 Gyr for NGC 5272 (using Gnedin & Ostriker’s values), or from 2 Gyr for NGC 6121 to 275 Gyr for NGC 5272 (following Dinescu et al. 1999). These values should in principle be regarded as upper limits to the true $`T_d`$, as both Gnedin & Ostriker and Dinescu et al. treat the internal dynamical evolution of the clusters by using single-mass Michie–King models and, thus, tend to underestimate the effects of mass segregation. Although differences exist between the values of $`T_d`$ as given by Gnedin & Ostriker and Dinescu et al. (with the latter being usually larger), an inspection of Table 2 shows that no one particular orbital parameter or the cluster mass by itself is sufficient to foretell what the fate of the cluster will be. Even, for example, the cluster’s perigalactic distance or its height above the plane are not well correlated with $`T_d`$. This means that the overall impact of the repeated bulge and disk shocks on the cluster over its lifetime is not easily predictable from a simple glance at the orbital parameters but only from the use of calculations over the entire orbit such as those referred to above.
All things being equal, then, we would expect the clusters with the largest times to destruction $`T_d`$, i.e. those that have suffered the least tidal disruption, to have the largest low to high mass number ratio $`\mathrm{\Delta }\mathrm{log}N`$. The actual situation is shown in Figure 5, where we have plotted the time to disruption of Gnedin & Ostriker (1997) as a function of $`\mathrm{\Delta }\mathrm{log}N`$. The best linear fit to this distribution is a straight line with zero abscissa at $`T_d=113\pm 3`$ and a slope of $`2.9`$ with a formal error of $`\pm 0.2`$. A horizontal line drawn at $`\mathrm{log}T_d\simeq 1.5`$, however, would still give an acceptable fit. Within the errors, then, there is no discernible trend in this direction, and the conclusion at this point is therefore quite clear: the global MF of the clusters in our sample show no evidence of evolution with time within the quoted errors.
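The comparison between the sloped and the flat fit is a standard least-squares exercise, sketched below in Python. The $`(\mathrm{\Delta }\mathrm{log}N,T_d)`$ pairs are hypothetical placeholders, not the Table 2 values; the point is only the mechanics: a straight line is fitted to $`\mathrm{log}T_d`$ versus $`\mathrm{\Delta }\mathrm{log}N`$ and its residuals are compared with those of a horizontal line.

```python
import numpy as np

# Sketch of the consistency test in the text: fit log T_d against
# Delta log N and compare the sloped fit with a flat (no-trend) line.
# The data pairs below are HYPOTHETICAL placeholders, not Table 2 values.

dlogN = np.array([0.15, 0.22, 0.28, 0.31, 0.35, 0.40])       # hypothetical
logTd = np.log10(np.array([4., 20., 35., 60., 25., 120.]))   # hypothetical, Gyr

# straight-line fit and its residual sum of squares
slope, intercept = np.polyfit(dlogN, logTd, 1)
rss_line = np.sum((logTd - (slope * dlogN + intercept))**2)

# flat (horizontal) alternative: log T_d = const
rss_flat = np.sum((logTd - logTd.mean())**2)

# If rss_line is not much smaller than rss_flat, the data do not demand
# any trend of T_d with Delta log N, which is the situation described
# in the text.
```

Since the flat model is nested in the linear one, `rss_line` can never exceed `rss_flat`; the relevant question is whether the improvement is statistically significant given the measurement errors, which for the real sample it is not.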
The MF logarithmic ratios $`\mathrm{\Delta }\mathrm{log}N`$ plotted in Figure 5 do seem to show a statistically significant scatter about the mean, beyond that expected from measurement errors alone. The origin of this scatter is not at all clear at the moment, since we cannot simply ascribe it to tidal effects. Intrinsic variations of the observed magnitude in the IMF slope in this mass range arise naturally in the hypothesis of Adams & Fatuzzo (1996) from variations in the conditions under which accretion is choked off by the appearance of winds from the proto-star. The same effect is predicted by Elmegreen's (1999a, 1999b) hierarchical cloud sampling model, owing both to the inherently random sampling process and to the variation of the thermal Jeans mass with initial conditions. If confirmed with more precise data, this effect may be a sensitive indicator of natal cloud conditions such as temperature and pressure.
In light of the very small range spanned by $`m_c`$ and $`\sigma `$ deduced for all our clusters, the conclusion is, then, that a single form of the MF can easily reproduce all the 12 deep LF obtained so far and, since there is no obvious dependence on dynamical history over an extremely wide range of conditions, that this MF is most likely to represent the initial distribution of stellar masses in the cluster, namely the IMF.
Finally, we should note that, although we have argued on the basis of the data presented above that there is, as yet, no measurable effect of tidal interactions with the Galaxy showing up at or just beyond the half-light radius, implying a massive ongoing disruption event, in any of the 12 clusters in our sample, there is evidence of this effect in the LF of NGC 6712 as recently measured with the VLT (De Marchi et al. 1999). This cluster's MF, if extrapolated to the mass range relevant to the other clusters, would give $`\mathrm{\Delta }\mathrm{log}N\simeq 0.1`$. This result implies that some clusters are much more capable than others of shielding their interiors very effectively from tidal disruption, while others, like NGC 6712, are very vulnerable to this effect.
How this may work in practice is starting to be understood by recent theoretical and numerical simulation studies (Takahashi & Portegies Zwart 1999; Gnedin, Hernquist & Ostriker 1999). For example, such calculations do predict that most of the clusters in our present sample are quite stable being located well inside the survival boundaries of the vital diagrams plotted by Gnedin & Ostriker (1997). They may have survived so far relatively undisturbed in the interior at least due to special initial conditions (high mass and concentration) and a relatively benign shock history even though their outer parts may well show indications of tidal heating (Drukier et al. 1998; Leon, Meylan & Combes 1999). In any case, they are unlikely to have lost more than $`1\%`$ of their mass due to tidal shocking (Combes, Leon & Meylan 1999). NGC 6712, on the other hand, may be one of the few caught in the brief period before complete disruption. Takahashi & Portegies Zwart (1999) predict that, initially, NGC 6712 had more than 1000 times its present mass.
## 6 Discussion
### 6.1 Comparison with Previous Work
A preliminary comparison of the properties of the first deep cluster LF measured with the HST was carried out by De Marchi & Paresce (1997, and references therein). In those papers, we showed that the shape of the LF, all measured near the half-light radius, seemed to bear little or no relation to the past dynamical history of the clusters or to their position in the Galaxy. In spite of the widely different dynamical properties of the low-metallicity clusters NGC 6397, NGC 6656, and NGC 7078, near their half-light radius these three objects feature what is in practice the same LF below $`0.6`$ M☉ (see Figure 1). Our earlier conclusions, based on a much more limited data set and less certain models, were thus fortuitously quite similar to those reached in this paper.
From their comparison of the LF of nine clusters (all of which are also part of the present study), Chabrier & Méra (1997) concluded that the MF of GC is “well described by a slowly rising power-law $`dN/dm\propto m^{-\alpha }`$ with $`0.5\lesssim \alpha \lesssim 1.5`$ down to $`0.1`$ M☉,” at variance with what we show in Figure 3. Several factors might be at the origin of this discrepancy. First, in order to compare LF measured in different wavelength bands, Chabrier & Méra converted them all into bolometric luminosities. Although the claim is that the LF of the same cluster observed through different filters should yield the same bolometric LF, mixing data with theory makes the true uncertainty much more difficult to estimate. Second, two of the LF that they used are now known to be incorrect at the low-mass end, namely that of NGC 6397 of Cool et al. (1996), later amended in King et al. (1998), and that of NGC 5139 measured by Elson et al. (1995) and recently corrected by De Marchi (1999). Both LF overestimated the number of objects below $`0.3`$ M☉, thus mimicking an increase in the MF where a flattening should have occurred instead.
A third effect, partly ensuing from the second, is that, having noticed the discrepancy then existing between the LF of NGC 6397 of Cool et al. and that of Paresce et al. (1995) at the low mass end, Chabrier & Méra were forced to exclude NGC 6397 from their analysis. But since the LF of NGC 6397 reaches the faintest luminosities observed so far, ignoring it prevents a reliable determination of the MF at the lowest mass end. Finally, they also ignore the flattening of the MF of NGC 6656 and NGC 6341 below $`0.2`$ M . All of this, combined with the considerable structure that one sees in the MF above $`0.3`$ M , makes any claim of a single-exponent power-law mass distribution extending all the way to $`0.1`$ M for GC presently unsustainable. The only way to modify this conclusion would be to assume an error in the Baraffe et al. (1997) and Cassisi (1999) ML relations at the lowest masses. The only known reason for such a discrepancy would be the possible formation of dust in the atmosphere, but this effect should be minimal in stars of such low metallicity.
Piotto et al. (1997) have also noticed the unsuitability of a single power-law distribution to represent the MF of GC. In fact, the MF that they obtain by applying the ML relation of D’Antona & Mazzitelli (1995) or that of Alexander et al. (1997) to their LF deviate from a single exponent power-law, even when they restrict their investigation to the small mass range below $`0.4`$ M . As Figure 5 and Table 2 clearly show, this is not unexpected, because the peak of the MF is located at $`m_c\approx 0.33`$ M . A drop-off below the peak at $`0.3`$ M is also found by King et al. (1998), whose LF, if anything, is slightly steeper than ours (see the comparison shown in Figure 5 of De Marchi et al. 2000).
Piotto & Zoccali (1999) again use a power-law fit to the MF of a larger sample of clusters — even if strong departures from a pure power-law distribution are clearly evident at both the higher and lower mass ends — to claim the existence of a correlation between the rate of destruction of the clusters in their sample and the slope of the best fit power law to their MF. On the other hand, restricting, for example, Figure 5 only to the clusters studied by Piotto & Zoccali (1999) would not show any correlation between the time to disruption of a cluster and $`\mathrm{\Delta }\mathrm{log}N`$. The reason for this discrepancy is not clear, but it may have to do with a combination of a smaller sample, a power law slope that bears little relation to the mass distribution of the stars in the clusters, and rough estimates of the effects of mass segregation. This last point is absolutely essential for proper inter-cluster comparisons, especially in view of the fact that most HST–WFPC 2 LF tend to be taken at constant sky offsets (4′.5) from the core and that, therefore, the farther away the cluster, the farther out physically is the LF sampled and the larger the required segregation correction.
Another problem commonly encountered in this type of work is that what seem like significant differences in the LF are almost completely washed out in the MF that derive from them, especially once the effects of mass segregation are accounted for. Thus, no meaningful comparison between clusters can be made on the basis of the LF alone and, since the possible effects of evaporation are impressed on the MF, it is exclusively in this plane that they must be sought.
Finally, Silvestri et al. (1998) argue that the MF of the entire MS of NGC 6397 is consistent with a shallow-slope power-law if their most recent models for the ML relation are used to convert the cluster LF shown in Figure 1. As these authors point out, this is due to their ML relation being steeper than that of Baraffe et al. (1997) at the low end of the MS. As we already mentioned above, however, these models rely on a grey atmosphere approximation that is not self-consistent and, therefore, cannot be preferred to the models of Baraffe et al. (1997), which rely on a completely self-consistent approach and provide excellent matches to a wide variety of experimental data.
Silvestri et al. (1999) also address the issue of how a change in the distance scale of globular clusters will affect their MF, showing that longer distances result in shallower MF, or, better, in a more pronounced flattening at the low-mass end. In view of the still debated revision of the distances to globular clusters based on the new Hipparcos data (see e.g. Gratton et al. 1997; Pont et al. 1997), we have adopted the pre-Hipparcos values in our analysis. Nevertheless, since the distance moduli of the 9 globular clusters studied by Gratton et al. (1997) are in excess of those of Djorgovski (1993) by only $`0.22\pm 0.10`$ mag, using the new scale would not change our results significantly.
### 6.2 Comparison with Other Clusters
How does the globular cluster IMF derived here compare with MF derived in other stellar clusters? The present situation in this regard is summarized in Figure 6 where we have drawn the most recently deduced MF in several Galactic cluster populations. The data are for the Orion Nebula Cluster from Hillenbrand (1997), IC348 from Luhman et al. (1998) and Herbig (1998), the Pleiades as reported by Bouvier et al. (1998), and the Hyades from Reid & Hawley (1999) and Gizis, Reid & Monet (1999). For comparison, the MF deduced from the NGC 6397 LF shown in Figure 3 is also reproduced here in the same format. Since the Galactic halo may well be populated mainly by the disruption of globular clusters, we also show in this figure the MF of the halo as obtained from the LF of Dahn et al. (1995) confirmed by Fuchs & Jahreiss (1998) and Gizis & Reid (1999), converted to a MF by Chabrier & Méra (1997), and corrected for binaries by Graff & Freese (1996).
The measurement peculiarities and sources of uncertainty in these measurements are exhaustively discussed and quantitatively evaluated by these authors, so that they should be regarded as the most precise and up to date determinations of the MF in the $`0.1`$–$`3`$ M mass range. We have distilled their measurements into the best fitting power law in the appropriate mass range. The data are shown in logarithmic mass units on the abscissa, and the MF is represented as $`\mathrm{log}dN/d\mathrm{log}m`$ on the ordinate. The slope of the power law is then $`x=\alpha -1`$, where $`\alpha `$ is the slope in linear mass units ($`\alpha =2.35`$ for the Salpeter IMF). The vertical position of the lines is arbitrary, of course, so we have shifted them up or down for better visibility.
Errors of several tenths of a dex in both axes should be considered typical for the measured slopes and masses. It should also be noticed that, in many cases, the number of objects in the faintest bins is very small ($`1`$–$`2`$). The effect of unresolved binaries on the MF is estimated by most authors, but it does remain a caveat to keep in mind until much higher spatial resolution observations become available (Kroupa 1995). Mass segregation may also play a role in some cases, like the Hyades, which may explain the higher than average $`m_c`$ for this cluster.
A striking aspect of the results shown in Figure 6 is the similarity between the various MF despite the substantial differences in environment and physical characteristics such as metallicity and age. To be sure, there are possible variations and inconsistencies in the details, but overall the trend is quite clear, namely a Salpeter-like increase in numbers with decreasing mass from $`3`$ M to $`1`$ M , always followed by a definite break and a flattening extending down to $`0.1`$–$`0.2`$ M with slopes in the range $`0<x<0.5`$. Moreover, all the measurements that reach close to the H-burning limit with reasonable completeness and statistical significance indicate a turnover below $`0.2`$ M . Certainly no single power-law can possibly explain the data shown in this figure, thus ruling out a scale-free IMF in any of these cases.
It is quite conceivable, then, that, at least to the level of accuracy of the present data, all the MF schematically represented in this figure come from basically the same underlying type of distribution function, one that increases with mass from the substellar limit to a peak somewhere between $`0.2`$ and $`0.5`$ solar masses and then drops steeply beyond $`1`$ M . More specifically, convolving a log-normal distribution function with the limited mass resolution and counting statistics presently available could easily generate the segmented power law MF shown in Figure 6. In other words, there seems to be no reason to think that the shape of the IMF from which the various samples are taken is much different from the log-normal implied by the globular cluster data. It is possible that all the MF of the clusters shown in Figure 6 could be the result of a single log-normal, since the evident cluster-to-cluster scatter in peak mass could be entirely due to measurement error or to systematic effects like mass segregation. In this case, the log-normal implied by the globular cluster data would be a truly universal function, essentially independent of age or metallicity.
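For concreteness, the log-normal form discussed here can be written out explicitly. The sketch below is only a statement of the functional form (with an arbitrary normalization $`A`$); the characteristic mass quoted is the peak value $`m_c\approx 0.33`$ M found for the globular cluster sample earlier in the text, not a new fit:

```latex
% Log-normal mass function in logarithmic mass units:
% dN/dlog m peaks at the characteristic mass m_c and has width sigma.
\frac{dN}{d\log m} \;=\; A\,
  \exp\!\left[-\,\frac{(\log m-\log m_c)^2}{2\sigma^2}\right],
\qquad m_c \simeq 0.33\,M_\odot .
```

Convolved with finite mass resolution and binning, a function of this shape naturally mimics the segmented power laws seen in Figure 6.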
Another possibility is that this scatter is real and related to some fundamental characteristic of the cluster. The small number of clusters for which reliable MF have been obtained so far precludes, for the moment, precise conclusions such as whether or not there is a trend for $`m_c`$ in clusters to increase with age, as one might be tempted to deduce from the data in Figure 6. Moreover, there is a group of embedded clusters like $`\rho `$ Oph and NGC 2024 that do show a steadily rising MF all the way from $`1`$ M to well below the H-burning limit at $`0.08`$ M (Williams et al. 1995; Comeron et al. 1996). These MF, taken at face value, cannot be reconciled with the log-normal form discussed so far unless their $`m_c`$ is located deep in the lower reaches of the brown dwarf regime. The Pleiades themselves seem to have a rising MF with decreasing mass below the H-burning limit, even if the stellar part is well represented by a log-normal distribution (Bouvier et al. 1998).
These cases would argue for a non universal IMF which could be sensitive to peculiar physical conditions affecting the formation of very low mass stars and brown dwarfs (Evans 1999; Liebert 1999). It is possible to entertain the idea, for example, that physical conditions in dense, massive clusters like the ONC or the globular clusters discussed here are not conducive to their formation. Because there are many open issues surrounding the accurate determination of MF in embedded clusters by means of LF modelling (Luhman et al. 1998; Meyer et al. 1999), and because it is still difficult to pin down the mass peak in young clusters with great enough accuracy, it is probably still too early to tell if this is a serious hypothesis or not but it does raise at least the very exciting possibility of using the bottom of the MS as a sensitive diagnostic of initial conditions in the original star forming regions.
### 6.3 Comparison with Theory
How plausible is a log-normal distribution of the kind advocated here from a purely theoretical perspective? As first pointed out by Larson (1973) and Zinnecker (1984), when the star formation process depends on a large number of independent physical variables, the resulting IMF can be approximately described by a log-normal distribution function of the form discussed in the previous section. As developed in greater detail recently by Adams & Fatuzzo (1996), the observed values of the mass scale $`m_c`$ and the standard deviation $`\sigma `$ can even be used to set rough limits on the actual physical variables entering into the theory if, as they claim, the mass of a star is self-determined by the action of an outflow. The values of these two parameters obtained for the globular cluster sample discussed in the previous section are quite consistent with our present, admittedly limited, knowledge of the conditions in the star-forming environment. In general, in this particular formulation of the theory, very low mass stars and brown dwarfs are relatively rare since they require natal clouds having unrealistically low effective sound speeds.
On the other hand, it is well known that the IMF cannot be completely described by a log-normal form, since it is very unlikely that so many variables are involved in the formation process, and the greatest deviations will be in the tails at the extremes of the function. Thus, it is still quite plausible theoretically to have cases where the lowest mass end of the IMF deviates significantly from the log-normal form, as in the case of the embedded clusters NGC 2024 and $`\rho `$ Oph discussed above. Hierarchical fragmentation may be quite relevant in setting the form of the IMF in the low mass range discussed in this paper (Larson 1995), and this process would also be expected to yield, in principle, a log-normal IMF under the proper circumstances. Recent numerical simulations of the formation of proto-stellar cores from the collapse of dense, unstable gas clumps, and of their subsequent evolution through competitive accretion and interactions such as those expected in a dense cluster, predict a mass spectrum described by a log-normal function quite similar to the ones derived in Figure 6 (Klessen & Burkert 1999), lending even more support to the idea that these represent the original mass function of these clusters.
A completely different, purely mathematical approach taken recently by Elmegreen (1997, 1999a, 1999b) arrives at very similar conclusions as to the form of the underlying stellar IMF. In this formalism, proto-stellar gas is randomly sampled from clouds with self-similar hierarchical structure, giving rise to an IMF that looks remarkably similar to the one outlined in the previous section. This includes a power law section at intermediate masses and a flattening and turn-over at low masses due to the inability of gas to form stars below the thermal Jeans mass. Of particular interest in our context is the natural occurrence of IMF fluctuations of several tenths in slope, due to random variations around a universal IMF, quite similar to those observed for the $`\mathrm{\Delta }\mathrm{log}N`$ of our cluster sample. This theory would then quite naturally explain the scatter in this parameter shown in Figure 5. If this finding is confirmed as more data are gathered, it could be used as a strong constraint on theory. On the other hand, such a scatter acts to obscure or even obliterate any sign of possible tidal effects on the MF, and again explains why these effects are not at all evident in the data shown in Figure 5. Since the thermal Jeans mass does depend somewhat on environmental conditions, this theory might also be able to explain the possible inter-cluster variation of $`m_c`$ seen in Figures 3 and 6.
## 7 Summary and Conclusions
We have analyzed in detail the implications for the IMF of our present, reasonably good knowledge of the MS LF of a dozen Galactic globular clusters covering a wide range of physical and orbital characteristics. We have shown, first, that they can be converted to a MF by the application of a ML relation that incorporates all the relevant internal and atmospheric physics of the low-mass, low-metallicity stars appropriate to the cluster sample under investigation. We have then calculated the possible effect of mass segregation due to energy equipartition on the locally derived MF, and find that for nine of the twelve clusters no correction is required, as their LF were obtained very close to the half-light radius where the deviation is negligible. For the other three clusters, corrections are applied that reduce the observed number gradient between $`0.7`$ M and the mass peak.
The MF obtained in this way could all descend from the same log-normal form of the global MF, within a small range of mass scales and standard deviations. The MF of the four clusters of the sample whose LF extend significantly beyond the mass peak at $`0.33`$ M cannot be reproduced by a single power law throughout the MS mass range explored in this paper, but would require at the very least an unphysical double power law. We then explored the possible modification of these global MF by the orbital history of the individual clusters, comparing the number gradient of their MF with theoretical estimates of their survivability in the Galactic potential. No statistically significant effect is found, no matter what particular model is used. We conclude that the effect, if present at all in this type of cluster, is completely obscured by the present observational uncertainties and that, therefore, the global MF we measure today must be, within those uncertainties, identical to the original MF, namely the IMF.
We explored, finally, the plausibility of this conclusion by examining the measured structure of the MF of much younger clusters that could shed light on the shape of the original globular cluster MF. For many of the best measured clusters, we find convincing evidence that they also exhibit, in the stellar mass range, a log-normal-shaped IMF of the same type as that deduced from the globulars, albeit, possibly, with a wider range of mass scales and standard deviations. Both the shape and the scatter are roughly consistent with presently available theoretical models. A few deeply embedded clusters do show evidence of possible deviations from this result, although there are still some questions as to the validity of the measurement techniques in these difficult environments.
Thus, the conclusion seems robust at this point that most cluster stars originate from a quasi-universal IMF below $`1`$ M having the shape of a log-normal whose precise mass scale and standard deviation may fluctuate from one particular environment to another, due to the effects of random sampling or to differing physical conditions, depending on which model is appropriate. It is also clear that much remains to be done to clarify and establish the range of validity of this conclusion and to understand the origin of the possible deviations, such as those found for some embedded clusters. This investigation should yield a bountiful harvest of information on the stellar IMF in the near future. Of particular importance in this endeavour will be securing a sufficiently large, clean sample of stars of the same physical and kinematical type in a wide variety of environments and ages, and developing the most accurate models of their energy output as a function of mass.
###### Acknowledgements.
We would like to thank France Allard, Isabelle Baraffe, Santi Cassisi, Gilles Chabrier, Dana Dinescu, Bruce Elmegreen, Oleg Gnedin, Pavel Kroupa, and Simon Portegies Zwart for useful discussions and an anonymous referee for comments and suggestions that significantly strengthened the paper. We are particularly grateful to Luigi Pulone for having computed the effects of mass segregation on the MF of the clusters in our sample.
Metallic nonsuperconducting phase and d–wave superconductivity in Zn–substituted La1.85Sr0.15CuO4
## Abstract

Measurements of the resistivity, magnetoresistance and penetration depth were made on films of La<sub>1.85</sub>Sr<sub>0.15</sub>CuO<sub>4</sub>, with up to 12 at.% of Zn substituted for the Cu. The results show that the quadratic temperature dependence of the inverse square of the penetration depth, indicative of d–wave superconductivity, is not affected by doping. The suppression of superconductivity leads to a metallic nonsuperconducting phase, as expected for a pairing mechanism related to spin fluctuations. The metal–insulator transition occurs in the vicinity of $`k_F\ell \sim 1`$, and appears to be disorder-driven, with the carrier concentration unaffected by doping.
Although there is strong evidence for d–wave symmetry of the order parameter in high–T<sub>c</sub> superconductors , earlier experiments do not distinguish between mechanisms that lead to pure $`d_{x^2y^2}`$ symmetry , and others that allow an admixture of s–wave pairing .
In this letter we describe the suppression of superconductivity by disorder, with the conclusion that pure d–wave symmetry continues until the superconductivity disappears. The experiment is based on the fact that disorder strongly suppresses d–wave pairing, and may therefore lead to a transition from the superconducting state to a normal–metal state. Any s–wave pairing would be less strongly affected, so that in its presence superconductivity would be expected to persist until, with greater disorder, it is destroyed at the metal–insulator transition.
Studies of the $`T_c`$–suppression in electron–irradiated YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub> (YBCO) , or in Zn–doped YBCO and La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> (LSCO) did not address the question of the nature of the nonsuperconducting phase. Our previous study showed the existence of a metallic nonsuperconducting phase in LSCO with various impurities , but was subject to criticism because it was done on polycrystalline, ceramic specimens.
We studied a series of single–crystalline La<sub>1.85</sub>Sr<sub>0.15</sub>Cu<sub>1-y</sub>Zn<sub>y</sub>O<sub>4</sub> films, with zinc content, $`y`$, from 0 to 0.12, and complete suppression of superconductivity for $`y>`$ 0.055. We find that with increasing $`y`$ the transition from the superconducting state is to a metallic state, with the carrier concentration unaffected by the addition of the zinc. This is in contrast with the carrier–driven transition that is observed with a change in the strontium content. We also measured the superconducting penetration depth ($`\lambda `$), and find that it remains proportional to $`T^2`$ when $`y`$ is increased, suggesting that there is no s–wave component. As $`y`$ increases to 0.12, the metal–insulator transition is approached in the vicinity of $`k_F\ell =1`$, where $`k_F`$ is the Fermi wave vector and $`\ell `$ is the electronic mean free path, suggesting that the transition is disorder–driven.
The $`c`$–axis oriented films, with thicknesses between 5000 and 9000 Å , were grown by pulsed laser deposition on LaSrAlO<sub>4</sub> substrates. The films were patterned by photolithography and wires were attached with indium to evaporated silver pads. Standard six–probe geometry was used to measure the Hall voltage and the magnetoresistance. The specimens were mounted in a dilution refrigerator, and cooled to 20 mK without a magnetic field, and to 45 mK in the presence of a field. The magnetoresistance was measured with low–frequency ac, in magnetic fields up to 8.5 T, in the longitudinal (field parallel to the $`ab`$–plane) and transverse (field perpendicular to the $`ab`$–plane and to the current) configurations. A second set of specimens was prepared for penetration depth measurements. $`\lambda (T)`$ was obtained from the mutual inductance of two coaxial coils fixed on opposite sides of the superconducting film .
The zinc content in the films was checked to confirm that it is the same as that of the targets . The films have some substrate–induced strain, and varying amounts of oxygen vacancies so that they had a range of resistivities at each value of y . For each y a group of 6 to 10 films was made, and it was possible to select films with residual resistivities within 30% of those for bulk single crystals . In these selected films superconductivity persists to $`y_c`$ = 0.055, while for larger resistivities $`T_c`$ vanishes earlier. In ceramic specimens $`y_c`$ = 0.03 .
The increase in the residual resistivity, $`\mathrm{\Delta }\rho _0`$, as a function of $`y`$ is shown in the inset in Fig. 1. The residual resistivity is determined by extrapolation to zero temperature of the linear high–temperature resistivity, and $`\mathrm{\Delta }\rho _0`$ is calculated with respect to a zinc–free film with $`\rho _0`$=36.5$`\mu \mathrm{\Omega }cm`$ and $`T_{c0}`$=35.2K. It is seen that $`\mathrm{\Delta }\rho _0`$ increases linearly with $`y`$ at a rate of 3.3 $`\mu \mathrm{\Omega }cm`$ per at.% of Zn until $`y`$ reaches 0.1. For larger concentrations the rate increases rapidly, signaling the approach to the metal–insulator transition. The residual resistivity produced by s–wave impurity scattering in a two–dimensional system follows the formula $`\mathrm{\Delta }\rho _0=4(\hbar /e^2)(y/n)d\mathrm{sin}^2\delta _0`$, where $`n`$ is the carrier concentration, $`d`$ is the distance between the CuO<sub>2</sub> planes (6.5 Å in LSCO), and $`\delta _0`$ is the phase shift . The dashed line shows the unitarity (maximal) limit corresponding to $`\delta _0=\pi /2`$. We use $`n=0.15`$, as the carrier concentration is shown by Hall–effect measurements to be almost independent of $`y`$ . It is seen that the scattering is about half as effective as in the unitarity limit. This result is close to that for Zn–doped YBCO single crystals , but differs from that reported for single crystals of Zn–doped LSCO, where the scattering was claimed to exceed the unitarity limit . In fact, the discrepancy seems to be primarily the result of a difference in the method of calculating the residual resistivities .
The scattering rate for nonmagnetic impurities, $`1/\tau _{IMP}`$, is related to the residual resistivity, $`\mathrm{\Delta }\rho _0`$, by the formula $`1/\tau _{IMP}=2\pi \lambda _{TR}k_B\mathrm{\Delta }\rho _0/(\hbar \frac{d\rho }{dT})`$, where $`\lambda _{TR}`$ is an electron–boson transport coupling constant, and the value of $`\frac{d\rho }{dT}`$ is taken from the temperature range 200K to 300K, where the resistivity is boson–mediated. We display the $`T_c`$–suppression by plotting the ratio $`\mathrm{\Delta }\rho _0/\frac{d\rho }{dT}`$, since errors related to uncertainties in the size and homogeneity of the specimens cancel in this ratio . This is shown in Fig. 1, where $`T_c`$ is normalized to $`T_{c0}`$ = 35.2 K. The solid line shows the best fit to the Abrikosov–Gorkov (AG) formula , $`\mathrm{ln}\frac{T_{c0}}{T_c}=\mathrm{\Psi }\left(\frac{1}{2}+\frac{\hbar }{4\pi \tau k_BT_c}\right)-\mathrm{\Psi }\left(\frac{1}{2}\right)`$ (where $`\mathrm{\Psi }`$ is the digamma function), with the pair–breaking scattering rate $`1/\tau `$ equal to $`1/\tau _{IMP}`$ and the fitting parameter $`\lambda _{TR}`$ equal to 0.18. There is a slight deviation of the theoretical curve from the experimental points in Fig. 1 for $`T_c/T_{c0}<0.25`$. The value of $`\lambda _{TR}`$ of 0.18 (indicating the weak–coupling limit) is a factor of two smaller than the value we estimate from high–temperature resistivity data, with $`\frac{d\rho }{dT}`$ equal to the average of the experimental values, 2.5$`\mu \mathrm{\Omega }`$ cm/K, and $`\hbar \omega _P=0.8`$ eV .
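As a rough consistency check on the factor-of-two discrepancy quoted above, one can invert the standard T-linear Drude resistivity of the boson-exchange picture. The sketch below uses only the inputs quoted in the text ($`d\rho /dT=2.5\mu \mathrm{\Omega }`$ cm/K and $`\hbar \omega _P=0.8`$ eV, Gaussian units); it is a back-of-envelope estimate, not part of the fit:

```latex
% rho(T) = (4*pi/omega_P^2)(1/tau), with 1/tau = 2*pi*lambda_TR*k_B*T/hbar,
% gives a T-linear slope that can be inverted for lambda_TR:
\frac{d\rho}{dT} \;=\; \frac{8\pi^2 k_B\,\lambda_{TR}}{\hbar\,\omega_P^2}
\quad\Longrightarrow\quad
\lambda_{TR} \;=\; \frac{\hbar\,\omega_P^2}{8\pi^2 k_B}\,\frac{d\rho}{dT}
\;\approx\; 0.4 .
```

This is indeed roughly twice the value $`\lambda _{TR}=0.18`$ obtained from the AG fit, consistent with the statement in the text.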
Both the deviation of the curve for small $`T_c`$ and the reduced value of $`\lambda _{TR}`$ may be related to effects which are not included in the AG formula, such as the anisotropy of the impurity scattering (which leads to a difference between $`1/\tau `$ and $`1/\tau _{IMP}`$ ) and the spatial variation of the order parameter . The dependence of $`T_c/T_{c0}`$ on $`\mathrm{\Delta }\rho _0`$ by itself does not allow us to distinguish between these effects. The deviations could also be caused by the presence of an s–wave component of the superconducting order parameter, and this possibility is examined below.
Fig. 2 shows the temperature dependence of the penetration depth for four films with values of $`y`$ between 0 and 0.03, and values of $`T_c`$ from 28 K to 6.2 K. The specimen with the lowest $`T_c`$ has $`T_c/T_{c0}`$ equal to 0.18, which is in the regime where the AG formula begins to deviate from the experimental data (see Fig. 1). For d–wave symmetry of the order parameter, disorder leads to a quadratic temperature dependence, $`\lambda ^{2}(0)-\lambda ^{2}(T)\propto T^2`$ for $`T\ll T_c`$, while exponential behavior, $`\lambda ^{2}(0)-\lambda ^{2}(T)\propto \mathrm{exp}(-\mathrm{\Delta }_{\mathrm{min}}/kT)`$, is expected for $`T\ll \mathrm{\Delta }_{\mathrm{min}}/k`$ in an s–wave superconductor . Here the magnitude of the minimum of the energy gap, $`\mathrm{\Delta }_{\mathrm{min}}`$, is expected to increase with disorder for anisotropic s–wave pairing . The quadratic temperature dependence is well documented for zinc–free LSCO films with different amounts of intrinsic disorder . It is evident from Fig. 2 that the quadratic dependence gives a much better description of the data for all values of $`y`$. An attempt to fit a straight line to the low–temperature portion of the curves, as shown in the inset, leads to energy gaps decreasing with increasing $`y`$, inconsistent with the expectations for anisotropic s–wave symmetry. A similarly negligible effect of zinc doping on the $`T`$–dependence of $`\lambda `$ has also been reported for YBCO . We note that nonmagnetic disorder should produce a rapid decrease of the zero–temperature value of the superfluid density, $`n_S(0)\propto \lambda (0)^{2}`$, together with a steep decrease of $`T_c`$ . Our results show that $`n_S(0)`$ decreases by a factor of 1.3 per percent of impurity, which is slightly slower than the rate reported for YBCO but close to theoretical predictions for a d–wave superconductor .
Fig. 3 shows the temperature dependence of the resistivity down to 20 mK for several films with large amounts of zinc. The film with $`y`$ = 0.055 is still superconducting, with a transition whose midpoint is at 3.5 K. The films with larger values of $`y`$ (0.08, 0.10, and 0.12), are nonsuperconducting, and their resistivity increases approximately as $`\rho _{ab}\propto \mathrm{ln}(1/T)`$ as $`T`$ is lowered, but below about 300 mK the increase slows down and the resistivity is clearly finite in the $`T=0`$ limit. This behavior is markedly different from the evolution of the $`ab`$–plane resistivity with strontium content, where nonsuperconducting specimens exhibit hopping conductivity indicative of insulating behavior in the $`T=0`$ limit.
The inset to Fig. 3 shows the temperature dependence of the Hall coefficient, $`R_H`$, for three nonsuperconducting films. A slow increase of $`R_H`$ followed by saturation below 300 mK is seen in the low–temperature region. The magnitude of $`R_H`$ is close to that observed for $`y=0`$ , indicating that the specimens remain metallic up to the highest doping level, $`y=0.12`$, without any change in the carrier concentration.
The saturation of the resistivity shown in Fig. 3 could signal some remanence of the superconducting phase. We examined this possibility by magnetoresistance measurements. Fig. 4 shows the resistivity as a function of magnetic field, applied in the longitudinal and transverse configurations, for two films, with $`y`$ equal to 0.055 and 0.08, which are superconducting and nonsuperconducting, respectively, in the absence of the field. The curves are for constant temperatures from 2 K to 45 mK. The magnetoresistance is positive for the film with $`y`$ = 0.055 for both field configurations. This is similar to the magnetoresistance in LSCO without zinc and is clearly caused by the suppression of superconductivity by the magnetic field. On the other hand the magnetoresistance of the film with $`y`$ = 0.08 is negative for both configurations, with the magnitude of the transverse magnetoresistance about twice as large as the longitudinal. If we attribute the longitudinal magnetoresistance entirely to spin–related isotropic scattering, the difference between the transverse and longitudinal magnetoresistance gives the orbital part, which is then also negative. This result demonstrates that superconducting fluctuations are absent in the specimen with $`y`$=0.08 and that the metallic nonsuperconducting phase is uniform without any macroscopic superconducting inclusions.
The values of the conductivity at $`T`$ = 20 mK, $`\sigma _0`$, for the three metallic films with $`y`$ = 0.08, 0.10, and 0.12 are equal to 1430, 1098, and 436 $`(\mathrm{\Omega }cm)^{-1}`$, respectively. Using a linear extrapolation to $`\sigma _0`$ = 0, we estimate that the metal–insulator transition occurs at $`y_{MI}\approx 0.14`$. We also estimate the magnitudes of $`k_F\ell `$ for these specimens to be 2.4, 1.8, and 0.7, respectively, from the relation for a 2D free–electron system, $`k_F\ell =hd\sigma _0/e^2`$. We see that the metal–insulator transition occurs in the vicinity of $`k_F\ell =1`$, as expected for disorder–induced localization , in contrast to suggestions that the superconductor–insulator transition in this system occurs at $`h/4e^2`$ = 6.5 k$`\mathrm{\Omega }`$ . The value of $`y_{MI}`$ is remarkably small compared to the fraction of the nonmetallic constituent required for the metal–insulator transition in amorphous systems , showing that Zn creates extremely effective localization centers in the CuO<sub>2</sub> plane (see, e. g., Ref. ) and that, in addition to the effect of disorder, the scattering by Zn–impurities is enhanced by electron–electron interactions in this strongly correlated system.
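For reference, the quoted $`k_F\ell `$ values follow directly from the measured conductivities via the 2D free-electron relation stated in the text. This is a simple numerical check (using $`h/e^2\approx 25.8`$ k$`\mathrm{\Omega }`$ and $`d=6.5`$ Å), not an independent determination:

```latex
% k_F * ell = (h/e^2) * d * sigma_0
k_F\ell \;=\; \frac{h\,d\,\sigma_0}{e^2}
\;\approx\; \left(25\,813\,\Omega\right)
  \times \left(6.5\times 10^{-8}\,\mathrm{cm}\right) \times \sigma_0 ,
% sigma_0 = 1430, 1098, 436 (Ohm cm)^{-1}  -->  k_F*ell = 2.4, 1.8, 0.7
```

which reproduces $`k_F\ell `$ = 2.4, 1.8, and 0.7 for $`\sigma _0`$ = 1430, 1098, and 436 $`(\mathrm{\Omega }cm)^{-1}`$, respectively.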
In related work we have shown that with increasing $`y`$ the high–temperature susceptibility of the ceramic targets evolves toward the Curie–Weiss relation, $`\chi =C/(T+\mathrm{\Theta })`$, with $`\mathrm{\Theta }`$ reaching about 40 K at $`y`$ = 0.14 . This result indicates that large Zn–doping, while removing Cu–spins, restores some antiferromagnetic ordering in LSCO. It has been suggested before, on the basis of neutron scattering experiments, that substitution of a small amount of zinc in LSCO ($`y`$ = 0.012) may stabilize a short–range–order spin–density–wave state , and our result appears to be consistent with this suggestion. Related magnetic effects are presumably responsible for the large negative magnetoresistance and for the saturation of the resistivity at low temperatures.
The experiments described here indicate that the suppression of superconductivity by Zn–doping in LSCO proceeds without a change of the symmetry of the order parameter, up to the point where superconductivity disappears and the normal metallic phase is reached. This result favors models that predict pure d–wave symmetry, such as the spin–fluctuation exchange model. With further doping the metal–insulator transition occurs in the vicinity of $`k_F\ell =1`$, showing that it is disorder–driven.
We would like to thank J. Annett and A. Millis for helpful discussions, and Elisabeth Ditchek for help with the specimen preparation. This work was supported by the Polish Committee for Scientific Research, KBN, under grants 2P03B 09414 and 2P03B 10714, and by the Naval Research Laboratory. Work at OSU was supported by DOE grant No. DE-FG02-90ER45427 through the Midwest Superconductivity Consortium.
# Far-infrared magnetotransmission of YBa2(ZnxCu1-x)3O7-δ thin films
## Abstract
Measurements of the far infrared magnetotransmission of a YBa<sub>2</sub>(Zn<sub>x</sub>Cu<sub>1-x</sub>)<sub>3</sub>O<sub>7-δ</sub> thin film ($`x\approx 0.025`$) deposited on a wedged MgO substrate are reported. The application of a magnetic field perpendicular to the $`ab`$ plane produces at low temperature a linear increase of the transmission for frequencies below $`\sim 30`$ cm<sup>-1</sup>. We present a model of high frequency vortex dynamics which qualitatively explains these results.
There is widespread agreement that the high frequency conductivity is influenced by vortex dynamics. It is estimated that the main characteristic frequencies of the vortex system are in the far infrared (FIR) region. This makes FIR magneto-optics a suitable tool to study vortex dynamics. In spite of its importance, only a few experimental data can be found in the literature (see e.g. ). Here, the results of FIR transmission measurements made on a Zn doped YBaCuO thin film in magnetic fields up to 5.4 T are reported and explained by a new vortex dynamics model.
The measurements were performed at several frequencies from 13.5 to 142.9 $`\mathrm{cm}^{-1}`$ on a high quality thin film of YBa<sub>2</sub>(Zn<sub>x</sub>Cu<sub>1-x</sub>)<sub>3</sub>O<sub>7-δ</sub> prepared by laser ablation deposition on a wedged MgO substrate. From the critical temperature $`T_\mathrm{c}=64.3\mathrm{K}`$ the parameter $`x`$ was estimated as $`x\approx 0.025`$. The dependence of the far-infrared transmission on the magnetic field applied perpendicularly to the $`ab`$ plane was acquired on our laser based spectrophotometer FIRM, described in detail earlier . The radiation transmitted through the sample, kept at a temperature of 3 K, is measured by a silicon bolometer; the stability of the laser line is monitored by a pyroelectric detector. Measurement of a wire mesh proved that the bolometer is not influenced by the stray magnetic field. The field was swept slowly, at a rate of 1 T/min. Equilibrium conditions were preserved, as evidenced by the coincidence of the relative transmittance monitored with increasing and decreasing magnetic field (see Fig. 1).
The frequency dependence of the transmission slope $`\frac{Tr(B)-Tr(0)}{B\,Tr(0)}`$ is displayed in Fig. 2. For frequencies below $`30\mathrm{cm}^{-1}`$ it is positive, at a frequency of $`85\mathrm{cm}^{-1}`$ it is negative, whereas at $`143\mathrm{cm}^{-1}`$ hardly any field dependence was observed.
This behaviour can be qualitatively explained by our model of vortex dynamics . At low temperatures the influence of the normal-state fluid may be neglected, so that the electrodynamic response is determined by the vortices and the superconducting fluid. The equations of motion of these two mutually interacting subsystems may be written as
$$m\dot{𝐯}_\mathrm{s}=e𝐄-\omega _\mathrm{c}(𝐯_\mathrm{s}-𝐯_\mathrm{L})\times 𝐳,$$
(1)
$$m\dot{𝐯}_\mathrm{L}=-\alpha ^2𝐫_\mathrm{L}-\frac{1}{\tau _\mathrm{v}}𝐯_\mathrm{L}+\mathrm{\Omega }(𝐯_\mathrm{s}-𝐯_\mathrm{L})\times 𝐳,$$
(2)
where $`𝐫_\mathrm{L}`$ and $`𝐯_\mathrm{L}`$ are the position and velocity of the vortex lattice, while $`𝐯_\mathrm{s}`$ is the velocity of the superconducting fluid. The interaction of the superconducting fluid and the vortex lattice is mediated by the Magnus force, with $`\mathrm{\Omega }`$ and $`\omega _\mathrm{c}`$ being the angular frequency of the circular vortex motion and the cyclotron frequency of the superconducting fluid, respectively. The pinning force and the vortex damping are characterised by the pinning frequency $`\alpha `$ and the vortex relaxation time $`\tau _\mathrm{v}`$.
Solving these two equations for the right (+) and left (-) hand polarized mode and using the expression for the current $`𝐣=n_\mathrm{s}e𝐯_\mathrm{s}=\sigma 𝐄`$ the conductivity $`\sigma ^\pm `$ may be expressed as
$$\sigma ^\pm =ϵ_0\omega _p^2\frac{\pm \omega /\tau _\mathrm{v}-\mathrm{i}(\alpha ^2-\omega ^2\pm \mathrm{\Omega }\omega )}{(\pm \omega -\omega _\mathrm{c})(\alpha ^2-\omega ^2\pm \mathrm{i}\omega /\tau _\mathrm{v})+\mathrm{\Omega }\omega ^2}$$
(3)
where $`\omega _\mathrm{p}`$ is the plasma frequency. For the linearly polarized light used in the experiment the transmission coefficient may be expressed as $`Tr=|\frac{1}{2}(t^++t^-)|^2`$, where $`t^\pm `$ are the transmission amplitudes given by $`t^\pm =\frac{n+1}{Z_0d\sigma ^\pm +n+1}`$. Here $`d=150`$ nm is the film thickness, $`n=4`$ is the refractive index of the substrate and $`Z_0=377\mathrm{\Omega }`$ is the impedance of free space. Using the parameters displayed in the inset of Fig. 2, the experimental and calculated data are in good agreement.
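A minimal numerical sketch of this transmission model is given below. It evaluates the conductivity of Eq. (3), with the resonance factor read as $`\alpha ^2-\omega ^2`$ (our reconstruction of the signs), and the thin-film transmission; the parameter values in `pars` are illustrative placeholders, not the fitted values of Fig. 2.

```python
import numpy as np

# Two circular polarization modes of the vortex-dynamics conductivity, Eq. (3),
# and the thin-film transmission Tr = |(t+ + t-)/2|^2.
eps0 = 8.854e-12     # vacuum permittivity, F/m
Z0 = 377.0           # impedance of free space, Ohm
d = 150e-9           # film thickness, m
n = 4.0              # refractive index of the MgO substrate

def sigma(omega, s, wp, wc, Omega, alpha, tau_v):
    """Conductivity for the right (s=+1) / left (s=-1) polarized mode."""
    num = s * omega / tau_v - 1j * (alpha**2 - omega**2 + s * Omega * omega)
    den = (s * omega - wc) * (alpha**2 - omega**2 + s * 1j * omega / tau_v) \
          + Omega * omega**2
    return eps0 * wp**2 * num / den

def transmission(omega, **p):
    t = [(n + 1) / (Z0 * d * sigma(omega, s, **p) + n + 1) for s in (+1, -1)]
    return abs(0.5 * (t[0] + t[1]))**2

# Illustrative numbers only (angular frequencies in rad/s), NOT the Fig. 2 fit.
pars = dict(wp=2e15, wc=1e11, Omega=5e11, alpha=3e12, tau_v=1e-12)
for nu_cm in (13.5, 85.0, 142.9):          # measurement wavenumbers in cm^-1
    omega = 2 * np.pi * 3e10 * nu_cm       # convert cm^-1 to rad/s
    print(nu_cm, transmission(omega, **pars))
```

Field dependence enters through the parameters (e.g. the cyclotron and vortex frequencies grow with $`B`$), so sweeping them reproduces the sign change of the transmission slope between the low- and high-frequency regimes.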
###### Acknowledgements.
The authors thank M. Jelinek for the sample preparation. This work was supported by MŠMT program ”KONTAKT ME 160 (1998)”.
# Quark Confinement and Surface Critical Phenomena

Presented by J. Kuti at Lattice 99, Pisa, June 29 - July 3, 1999. Supported by DOE Grant DE-FG03-97ER40546. UCSD/PTH/99-19, FERMILAB-Conf-99/279-T
## 1 The spectrum of the confining flux
A rather comprehensive determination of the rich energy spectrum of the confined chromoelectric flux between static sources in the fundamental representation of $`\mathrm{SU}(3)_\mathrm{c}`$ was reported earlier for separations r ranging from 0.1 fm to 4 fm. The full spectrum is summarized in Fig. 1 with different characteristic behavior on two scales separated approximately at r=2 fm.
### No string spectrum for r $`\le 2`$ fm
The $`\mathrm{\Sigma }_\mathrm{g}^+`$ ground state is the familiar static quark-antiquark potential which is dominated by the rather dramatic linearly-rising behavior once $`\mathrm{r}`$ exceeds about $`0.5\mathrm{fm}`$. Although the empirical function $`\mathrm{E}_{\mathrm{\Sigma }_\mathrm{g}^+}(\mathrm{r})=-\mathrm{c}/\mathrm{r}+\sigma \mathrm{r}`$ approximates the ground state energy very well for $`\mathrm{r}\ge 0.1\mathrm{fm}`$, the fitted constant $`\mathrm{c}=0.3`$ has no relation to the running Coulomb law, whose loop expansion breaks down before $`\mathrm{r}=0.1\mathrm{fm}`$ separation is reached . Early indoctrination on the popular string interpretation of the confined flux for $`\mathrm{r}\le 2\mathrm{fm}`$ was mostly based on the observed shape of the $`\mathrm{\Sigma }_\mathrm{g}^+`$ ground state energy: the linear shape of the ground state potential for $`\mathrm{r}\ge 0.5\mathrm{fm}`$ and the approximate agreement of the curvature shape for $`\mathrm{r}\le 0.5\mathrm{fm}`$ with the ground state Casimir energy $`-\pi /(12\mathrm{r})`$ of a long confined flux . The excitation spectrum clearly contradicts earlier claims on the simple string interpretation of the linearly rising confining potential. The gluon excitation energies lie well below the string predictions and the level orderings and degeneracies are in violent disagreement with expectations from a fluctuating string.
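The numerical closeness behind that approximate agreement is easy to check: the string Casimir coefficient $`\pi /12\approx 0.26`$ nearly coincides with the fitted $`\mathrm{c}=0.3`$. The sketch below (our illustration, in arbitrary units with the string tension set to one) compares the two forms:

```python
import numpy as np

# Fitted ground-state form -c/r + sigma*r (c = 0.3) versus the string form
# sigma*r - pi/(12 r) with the Luscher coefficient pi/12 ~ 0.26.  Their
# near-coincidence is why the ground state alone cannot discriminate the
# string picture; the excitation spectrum can.
sigma = 1.0                          # string tension, arbitrary units
r = np.linspace(0.5, 4.0, 8)         # separations, arbitrary units
E_fit = -0.3 / r + sigma * r
E_string = -np.pi / 12 / r + sigma * r
print(np.max(np.abs(E_fit - E_string)))   # small over the whole range
```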
### Goldstone modes for r $`>2`$ fm
A feature of any low-energy description of a fluctuating flux sheet in euclidean space is the presence of Goldstone excitations associated with the spontaneously-broken transverse translational symmetry. These transverse modes have energy separations above the ground state given by multiples of $`\pi /r`$ (for fixed ends). The level orderings and approximate degeneracies of the gluon energies at large r match, without exception, those expected of the Goldstone modes. However, the precise $`\pi /\mathrm{r}`$ gap behaviour is not observed. The spectrum is consistent with massive capillary waves on the surface of the flux sheet, with a cutoff dependent mass gap. The most likely explanation for this gap is Peierls-Nabarro lattice pinning of the confining flux sheet at small correlation lengths. Our new results on the same spectrum in SU(2) for D=3,4, and a detailed test of the strong coupling spectrum in SU(2) for D=3 lend further support to the above summary of the earlier findings.
## 2 Z(2) model at D=3
The purpose of the Z(2) project is to understand flux formation and the string excitation spectrum from high statistics simulations and the loop expansion on the analytic side. The WKB approximation of flux formation in the sine-Gordon field representation of the monopole plasma was discussed earlier . The Z(2) model maps into the Ising model by a duality transformation, which was exploited before in the study of large Wilson loops . By invoking universality, the critical region of the Z(2) model is mapped into D=3 $`\mathrm{\Phi }^4`$ scalar field theory in the study of flux formation. The confining flux sheet of the Wilson loop corresponds to a twisted surface in the Ising representation which is described by a classical soliton solution of the $`\mathrm{\Phi }^4`$ field equations. Excitations of the flux are given by the spectrum of the fluctuation operator $`\mathcal{M}=-\nabla ^2+\mathrm{U}^{\prime \prime }(\mathrm{\Phi }_{\mathrm{soliton}})`$, where $`\mathrm{U}(\mathrm{\Phi })`$ is the field potential energy of the $`\mathrm{\Phi }^4`$ field. The spectrum of the fluctuation operator $`\mathcal{M}`$ of the finite surface is determined from a two-dimensional Schrödinger equation with a potential of finite extent . In the limit of asymptotically large surfaces, the equation becomes separable in the longitudinal and transverse coordinates. The transverse part of the spectrum is in close analogy with the quantization of the one-dimensional classical $`\mathrm{\Phi }^4`$ soliton. There is always a discrete zero mode in the spectrum which is enforced by translational invariance in the transverse direction. Figs. 2, 3, 4 and 5 illustrate some of the results, which are consistent with our findings in QCD.
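The one-dimensional quantization mentioned above can be reproduced numerically. The sketch below is our illustration (not the collaboration's code; couplings and grid are arbitrary choices): it diagonalizes a finite-difference version of $`-d^2/dx^2+\mathrm{U}^{\prime \prime }(\mathrm{\Phi }_{\mathrm{kink}})`$ for the $`\mathrm{\Phi }^4`$ kink and recovers the discrete translational zero mode, the bound mode at $`\frac{3}{4}m^2`$, and the continuum edge at $`m^2`$.

```python
import numpy as np

# Fluctuation spectrum around the 1D phi^4 kink.  With U = (lam/4)(phi^2-v^2)^2
# the meson mass obeys m^2 = 2*lam*v^2; choose lam = 1/2, v = 1 so m^2 = 1.
lam, v = 0.5, 1.0
m2 = 2 * lam * v**2                               # = 1 here

L, N = 30.0, 1500                                 # half-width of box, grid size
x = np.linspace(-L, L, N)
dx = x[1] - x[0]

phi_k = v * np.tanh(np.sqrt(lam / 2) * v * x)     # classical kink profile
Upp = lam * (3 * phi_k**2 - v**2)                 # U''(phi_kink)

# Finite-difference Hamiltonian  -d^2/dx^2 + U''(phi_kink)
H = (np.diag(np.full(N, 2.0)) - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / dx**2 + np.diag(Upp)

ev = np.linalg.eigvalsh(H)
print(ev[:3])   # ~ [0, 0.75, 1.0]: zero mode, bound mode, continuum edge
```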
## Acknowledgements
One of us (J. K.) would like to acknowledge valuable discussions with S. Renn, P. Hasenfratz, and F. Niedermayer on surface Goldstone modes.
# On singular fibres of complex Lagrangian fibrations
## 1. Introduction
First we define complex Lagrangian fibration.
###### Definition 1.
Let $`(X,\omega )`$ be a Kähler manifold with a holomorphic symplectic two-form $`\omega `$ and $`S`$ a smooth manifold. A proper flat surjective morphism $`f:X\to S`$ is said to be a complex Lagrangian fibration if a general fibre $`F`$ of $`f`$ is a Lagrangian submanifold with respect to $`\omega `$, that is, the restriction $`\omega |_F`$ of the 2-form is identically zero and $`\mathrm{dim}F=(1/2)\mathrm{dim}X`$.
Remark. A general fibre $`F`$ of a complex Lagrangian fibration is a complex torus by Liouville’s theorem.
The plainest example of a complex Lagrangian fibration is an elliptic fibration of a $`K3`$ surface over $`\mathbb{P}^1`$. In higher dimensions, such fibre spaces occur naturally as fibre spaces of irreducible symplectic manifolds (\[7, Theorem 2\] and \[8, Theorem 1\]). When the dimension of the fibre is one, a complex Lagrangian fibration is a minimal elliptic fibration, whose singular fibres are completely classified by Kodaira \[4, Theorem 6.2\]. In this note, we investigate the singular fibres of projective complex Lagrangian fibrations whose fibres are 2-dimensional.
###### Theorem 1.
Let $`f:X\to S`$ be a complex Lagrangian fibration on a 4-dimensional symplectic manifold and $`D`$ the discriminant locus of $`f`$. Assume that $`f`$ is projective. For a general point $`x`$ of $`D`$, $`f^{-1}(x)`$ is one of the following:
1. There exists an étale finite covering $`f^{-1}(x)^{\prime }\to f^{-1}(x)`$ and $`f^{-1}(x)^{\prime }`$ is isomorphic to the product of an elliptic curve and a Kodaira singular fibre of type $`I_0`$, $`I_0^{*}`$, $`II`$, $`II^{*}`$, $`III`$, $`III^{*}`$, $`IV`$ or $`IV^{*}`$.
2. $`f^{-1}(x)`$ is isomorphic to a normal crossing variety consisting of minimal elliptic ruled surfaces. The dual graph of $`f^{-1}(x)`$ is the Dynkin diagram of type $`A_n`$, $`\stackrel{~}{A_n}`$, $`D_n`$ or $`\stackrel{~}{D_n}`$. If the dual graph is of type $`\stackrel{~}{A_n}`$ or $`\stackrel{~}{D_n}`$, each double curve is a section of the ruling. In the other cases, the double curve on each edge component is a bisection or a section, and the other double curves are sections. (See Figures 1 and 2.)
Combining Theorem 1 with \[7, Theorem 2\] and \[8, Theorem 1\], we obtain the following corollary.
###### Corollary 1.
Let $`f:X\to B`$ be a fibre space of a projective irreducible symplectic manifold. Assume that $`\mathrm{dim}X=4`$. Then, for a general point $`x`$ of the discriminant locus of $`f`$, $`f^{-1}(x)`$ satisfies the properties of Theorem 1 (1) or (2).
Remark. Let $`S`$ be a $`K3`$ surface and $`\pi :S\to \mathbb{P}^1`$ an elliptic fibration. The induced morphism $`f:\mathrm{Hilb}^2S\to \mathbb{P}^2`$ gives examples of the singular fibres above except those whose dual graphs are $`A_n`$ or $`D_n`$. The author does not know whether a normal crossing variety whose dual graph is $`A_n`$ or $`D_n`$ occurs as a singular fibre of a fibre space of an irreducible symplectic manifold.
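The induced morphism of the remark can be made explicit. The following display is our reconstruction of the construction (the composition is not spelled out in the original):

```latex
% The elliptic fibration \pi : S -> P^1 induces a Lagrangian fibration on
% Hilb^2(S) through the Hilbert-Chow morphism and the symmetric square of \pi:
\[
  f\colon \operatorname{Hilb}^2 S \longrightarrow \operatorname{Sym}^2 S
    \xrightarrow{\ \operatorname{Sym}^2\pi\ } \operatorname{Sym}^2\mathbb{P}^1
    \cong \mathbb{P}^2 .
\]
% Over a general point \{t_1,t_2\} with t_1 \neq t_2 the fibre is the abelian
% surface \pi^{-1}(t_1)\times\pi^{-1}(t_2); singular fibres of f appear over
% the diagonal conic t_1 = t_2 and over the lines where some t_i lies in the
% discriminant of \pi.
```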
This paper is organized as follows. In Section 2, we set up the proof of Theorem 1. The key proposition is stated and proved in Section 3. Sections 4 and 5 are devoted to the proof of Theorem 1.
Acknowledgment. The author expresses his thanks to Professors A. Fujiki, Y. Miyaoka, S. Mori and N. Nakayama for their advice and encouragement.
## 2. Preliminary
(2.1) In this section, we collect definitions and some fundamental materials which are necessary for the proof of Theorem 1.
###### Definition 2.
Let $`f:X\to \mathrm{\Delta }^1`$ be a proper surjective morphism from an algebraic variety to a unit disk. $`f`$ is said to be a semistable degeneration if $`f`$ satisfies the following two properties:
1. $`f`$ is smooth over $`\mathrm{\Delta }^1\setminus \{0\}`$.
2. $`f^{*}(0)`$ is a reduced normal crossing divisor.
###### Definition 3.
Let $`f:X\to \mathrm{\Delta }^1`$ and $`f^{\prime }:X^{\prime }\to \mathrm{\Delta }^1`$ be proper surjective morphisms from algebraic varieties to unit disks. We say that $`f`$ is isomorphic (resp. birational) to $`f^{\prime }`$ if there exists an isomorphism (resp. a birational map) $`g:X\to X^{\prime }`$ such that $`f^{\prime }\circ g=f`$.
(2.2) We refer the fundamental properties of an abelian fibration.
###### Lemma 1.
Let $`f:(X,\omega )\to S`$ be a projective complex Lagrangian fibration.
1. The discriminant locus of $`f`$ has pure codimension one.
2. Let $`F`$ be an irreducible component of a fibre of $`f`$ and $`j:\stackrel{~}{F}\to F`$ a resolution of $`F`$. Then $`j^{*}(\omega |_F)=0`$.
Proof.
1. Let $`h:𝒜\to \mathrm{\Delta }^2`$ be a projective flat morphism over a two dimensional polydisk. Assume that a general fibre of $`h`$ is an abelian variety and that $`h`$ is smooth over $`\mathrm{\Delta }^2\setminus \{0\}`$. For the proof of Lemma 1 (1), it is enough to prove that $`h`$ is a smooth morphism. Let $`𝒜^{\prime }:=𝒜\setminus h^{-1}(0)`$. Since $`h`$ is smooth over $`\mathrm{\Delta }^2\setminus \{0\}`$ and projective, there exist an étale finite cover $`\pi :𝒜^{\prime }\to \overline{𝒜}^{\prime }`$ and a smooth abelian fibration $`\overline{h}^{\prime }:\overline{𝒜}^{\prime }\to \mathrm{\Delta }^2\setminus \{0\}`$ with a section. Since $`\mathrm{\Delta }^2\setminus \{0\}`$ is simply connected, $`\overline{𝒜}^{\prime }`$ has a level $`n`$ structure. We denote by $`\mathcal{M}_g[n]`$ the moduli space of $`g`$-dimensional abelian varieties with level $`n`$-structure. There exists a morphism $`t:\mathrm{\Delta }^2\setminus \{0\}\to \mathcal{M}_g[n]`$, and we extend $`t`$ over $`\mathrm{\Delta }^2`$ by Hartogs' theorem. The moduli space $`\mathcal{M}_g[n]`$ carries a universal family $`𝒜_g[n]\to \mathcal{M}_g[n]`$. Considering the pull back of $`𝒜_g[n]`$ by $`t`$, we obtain a smooth abelian fibration $`\overline{h}:\overline{𝒜}\to \mathrm{\Delta }^2`$ which is the extension of $`\overline{h}^{\prime }:\overline{𝒜}^{\prime }\to \mathrm{\Delta }^2\setminus \{0\}`$. Since $`𝒜^{\prime }\to \overline{𝒜}^{\prime }`$ is a finite morphism, we extend it to a finite morphism $`\nu :𝒜^{\prime }\to \overline{𝒜}`$. The codimension of $`\overline{𝒜}\setminus \overline{𝒜}^{\prime }`$ is two. By purity of the branch locus, $`\nu `$ is étale. Hence $`h^{\prime }:𝒜^{\prime }\to \mathrm{\Delta }^2`$ is a smooth abelian fibration. By construction, $`𝒜^{\prime }`$ is isomorphic to $`𝒜`$ in codimension one. Let $`A`$ be an $`h`$-ample divisor on $`𝒜`$ and $`A^{\prime }`$ its proper transform on $`𝒜^{\prime }`$. Since every fibre of $`h^{\prime }`$ is an abelian variety, $`A^{\prime }`$ is $`h^{\prime }`$-ample. Thus we obtain that $`𝒜`$ is isomorphic to $`𝒜^{\prime }`$ and $`h`$ is smooth.
2. Let $`A`$ be an $`f`$-ample divisor. We consider the following function:
$$\lambda (s):=\int _{X_s}\omega \wedge \overline{\omega }\wedge A^{\mathrm{dim}S-2},$$
where $`X_s:=f^{-1}(s)`$ $`(s\in S)`$. Since $`f`$ is flat, $`\lambda (s)`$ is a continuous function on $`S`$ by \[2, Corollary 3.2\]. Since a general fibre is Lagrangian, $`\lambda `$ vanishes on a dense open subset of $`S`$; by continuity, $`\lambda (s)\equiv 0`$ on $`S`$ and
$$\int _F\omega \wedge \overline{\omega }\wedge A^{\mathrm{dim}S-2}=0.$$
Since $`F`$ and $`\stackrel{~}{F}`$ are birational, $`j^{*}\omega =0`$ on $`\stackrel{~}{F}`$.
$`\mathrm{}`$
(2.3) We review basic properties of the mixed hodge structure on a simple normal crossing variety.
###### Lemma 2.
Let $`X:=\bigcup X_i`$ be a simple normal crossing variety. Then
$$F^1H^1(X,\mathbb{C})=\{(\alpha _i)\in \bigoplus _iH^0(X_i,\mathrm{\Omega }_{X_i}^1)\mid \alpha _i|_{X_i\cap X_j}=\alpha _j|_{X_i\cap X_j}\}.$$
Proof. Let
$$X^{[k]}:=\coprod _{i_0<\cdots <i_k}X_{i_0}\cap \cdots \cap X_{i_k}\text{(disjoint union)}.$$
For an index set $`I=\{i_0,\dots ,i_k\}`$, we define an inclusion $`\delta _j^I`$
$$\delta _j^I:X_{i_0}\cap \cdots \cap X_{i_k}\to X_{i_0}\cap \cdots \cap X_{i_{j-1}}\cap X_{i_{j+1}}\cap \cdots \cap X_{i_k}.$$
We consider the following spectral sequence \[3, Chapter 4\]:
$$E_1^{p,q}=H^q(X^{[p]},\mathbb{C})\Rightarrow E^{p+q}=H^{p+q}(X,\mathbb{C}),$$
where $`D_2:E_1^{p,q}\to E_1^{p+1,q}`$ is defined by
$$\sum _{|I|=p}\sum _{j=0}^{p}(-1)^j(\delta _j^I)^{*}.$$
Since this spectral sequence degenerates at the $`E_2`$ level (\[3, 4.8\]), we deduce
$$\mathrm{Gr}_1^W(H^1(X,\mathbb{C}))=\mathrm{Ker}(\bigoplus _iH^1(X_i,\mathbb{C})\stackrel{D_2}{\to }\bigoplus _{i<j}H^1(X_i\cap X_j,\mathbb{C})).$$
Moreover, since $`F^1W_0=0`$, we have $`F^1H^1(X,\mathbb{C})=F^1\mathrm{Gr}_1^W(H^1(X,\mathbb{C}))`$. Thus we obtain the assertion of Lemma 2 from the definition of $`D_2`$. $`\square `$
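As an illustration of Lemma 2 (our example, not part of the original), consider the central fibre of a type II Kulikov model, which appears again in Section 5; we assume the base elliptic curves of the components are identified by the gluing.

```latex
% Let X = V_1 \cup \dots \cup V_n be a cycle of minimal elliptic ruled surfaces
% V_i -> C glued along sections.  By Lemma 2,
\[
  F^1H^1(X,\mathbb{C})
    = \Bigl\{(\alpha_i)\in\bigoplus_i H^0(V_i,\Omega^1_{V_i}) \Bigm|
        \alpha_i|_{V_i\cap V_{i+1}} = \alpha_{i+1}|_{V_i\cap V_{i+1}}\Bigr\}.
\]
% Each H^0(V_i,\Omega^1_{V_i}) \cong H^0(C,\Omega^1_C) \cong \mathbb{C} is spanned
% by the pullback of a 1-form on C, and restriction to a double curve identifies
% these lines, so F^1H^1(X,\mathbb{C}) \cong \mathbb{C}.  This matches the fact
% \dim F^1H^1(\mathcal{K}_0,\mathbb{C}) = 1 used in the proof of Lemma 7.
```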
###### Lemma 3.
Let $`f:X^{\prime }\to X`$ be a birational morphism between smooth algebraic varieties. Assume the following two conditions:
1. There exists a simple normal crossing divisor $`Y`$ on $`X`$ such that $`f`$ is an isomorphism over $`X\setminus Y`$.
2. $`Y^{\prime }:=(f^{*}Y)_{\mathrm{red}}`$ is a simple normal crossing divisor.
Then $`F^1H^1(Y^{\prime },\mathbb{C})\cong F^1H^1(Y,\mathbb{C})`$.
Proof. We consider the following exact sequence of morphisms of mixed Hodge structures:
$$H^0(Y^{\prime },\mathbb{C})\stackrel{\alpha }{\to }H^1(X,\mathbb{C})\to H^1(X^{\prime },\mathbb{C})\oplus H^1(Y,\mathbb{C})\to H^1(Y^{\prime },\mathbb{C})\stackrel{\beta }{\to }H^2(X,\mathbb{C}).$$
Note that each morphism has weight $`(0,0)`$. Since $`H^i(X,\mathbb{C})`$ carries a pure Hodge structure of weight $`i`$, $`\alpha `$ and $`\beta `$ are zero maps. Moreover $`F^1H^1(X,\mathbb{C})\cong F^1H^1(X^{\prime },\mathbb{C})`$. Thus we deduce that $`F^1H^1(Y^{\prime },\mathbb{C})\cong F^1H^1(Y,\mathbb{C})`$. $`\square `$
## 3. Kulikov model
(3.1) In this section, we prove the key proposition of the proof of Theorem 1. First we recall the following theorem due to Kulikov, Morrison \[9, Classification Theorem I\] and Persson \[10, Proposition 3.3.1\].
###### Theorem 2.
Let $`g^{\prime }:T^{\prime }\to \mathrm{\Delta }`$ be a semistable degeneration whose general fibre is an abelian surface. Then there exists a semistable degeneration $`k:𝒦\to \mathrm{\Delta }`$ such that $`k`$ and $`g^{\prime }`$ are birational and $`K_𝒦\sim _k0`$. Moreover, exactly one of the following cases occurs:
1. $`𝒦_0`$ is an abelian surface.
2. $`𝒦_0`$ consists of a cycle of minimal elliptic ruled surfaces, meeting along disjoint sections. Every self-intersection number of a double curve is $`0`$.
3. $`𝒦_0`$ consists of a collection of rational surfaces, such that the double curves on each component form a cycle of rational curves; the dual graph $`\mathrm{\Gamma }`$ of $`𝒦_0`$ is a triangulation of $`S^1\times S^1`$.
We call $`𝒦`$ a Kulikov model of type I, II or III according as case (1), (2) or (3) occurs.
(3.2) We state the key propositon.
###### Proposition 1.
Let $`f:(X,\omega )\to S`$ be a projective complex Lagrangian fibration on a 4-dimensional symplectic manifold and $`D`$ the discriminant locus of $`f`$. If we take a general point $`x`$ of $`D`$ and a unit disk $`\mathrm{\Delta }^1`$ in $`S`$ such that $`\mathrm{\Delta }^1`$ and $`D`$ intersect transversally at $`x`$ and $`T:=X\times _S\mathrm{\Delta }^1`$ is smooth, then
1. $`t:T\to \mathrm{\Delta }^1`$ is birational to the quotient of a Kulikov model $`𝒦`$ of type I or type II by a cyclic group $`G`$.
2. There exists a nonzero $`G`$-equivariant element of $`F^1H^1(𝒦_0,\mathbb{C})`$, where $`𝒦_0`$ is the central fibre of $`𝒦`$.
(3.3) For the proof of Proposition 1, we need the following lemmas.
###### Lemma 4.
Let $`\nu :Y\to X`$ be a birational morphism such that $`(f\nu )^{*}D`$ is a simple normal crossing divisor. Then for a general point $`x`$ of the discriminant locus $`D`$ of $`f`$,
$$F^1H^1(Y_x,\mathbb{C})\ne 0,$$
where $`Y_x:=(f\nu )^{-1}(x)`$.
Proof. Let $`E:=((f\nu )^{*}D)_{\mathrm{red}}`$ and $`E=\bigcup _iE_i`$. We take an open set $`U`$ of $`S`$ which satisfies the following two conditions:
1. $`D|_U`$ is a smooth curve.
2. $`f\nu |_U:(E|_{(f\nu )^{-1}(U)})^{[k]}\to D|_U`$ is a smooth morphism for every $`k`$.
We consider the following exact sequences:
$$0\to (f\nu )^{*}\mathrm{\Omega }_D^1\wedge \mathrm{\Omega }_{E_i}^1\to \mathrm{\Omega }_{E_i}^2\to \mathrm{\Omega }_{E_i/D}^2\to 0,$$
$$0\to (f\nu )^{*}\mathrm{\Omega }_D^2\to (f\nu )^{*}\mathrm{\Omega }_D^1\wedge \mathrm{\Omega }_{E_i}^1\stackrel{\alpha }{\to }(f\nu )^{*}\mathrm{\Omega }_D^1\otimes \mathrm{\Omega }_{E_i/D}^1\to 0.$$
Since $`\omega `$ is nondegenerate, $`\nu ^{*}\omega \ne 0`$ on every non-$`\nu `$-exceptional divisor $`E_i`$. By Lemma 1 (2), the restriction of $`\omega `$ to every irreducible component of a fibre of $`f`$ is zero. Thus $`\nu ^{*}\omega =0`$ in $`\mathrm{\Omega }_{E_i/D}^2`$. On the other hand, since $`(f\nu )^{*}\mathrm{\Omega }_D^2=0`$, we deduce
$$\alpha (\nu ^{*}\omega )\ne 0$$
for every non-$`\nu `$-exceptional divisor $`E_i`$. Therefore, for a general point $`x`$ of $`D`$, $`H^0(E_{i,x},\mathrm{\Omega }_{E_{i,x}}^1)\ne 0`$, where $`E_{i,x}`$ is the fibre of $`E_i\to D`$ over $`x`$. We denote by $`\alpha _i`$ the restriction of $`\alpha (\nu ^{*}\omega )`$ to $`E_{i,x}`$. If $`E_{i,x}\cap E_{j,x}\ne \emptyset `$, then $`\alpha _i=\alpha _j`$ on $`E_{i,x}\cap E_{j,x}`$ by the construction. By Lemma 2, we deduce that $`F^1H^1(E_x,\mathbb{C})\ne 0`$. $`\square `$
###### Lemma 5.
Let $`k:𝒦\to \mathrm{\Delta }^1`$ be a Kulikov model of type I or type II. Assume that $`k`$ is birational to a projective abelian fibration $`t^{\prime }:T^{\prime }\to \mathrm{\Delta }^1`$. Then
1. $`k`$ is a projective morphism.
2. Every birational map $`𝒦\dashrightarrow 𝒦`$ is a birational morphism.
Proof. We may assume that $`T^{\prime }`$ is a relatively minimal model over $`\mathrm{\Delta }^1`$. Then $`T^{\prime }`$ and $`𝒦`$ are isomorphic in codimension one, since $`T^{\prime }`$ and $`𝒦`$ have only terminal singularities and $`K_{T^{\prime }}`$ is $`t^{\prime }`$-nef. Let $`A^{\prime }`$ be a $`t^{\prime }`$-ample divisor on $`T^{\prime }`$ and $`A`$ its proper transform on $`𝒦`$. Since $`T^{\prime }`$ and $`𝒦`$ are isomorphic in codimension one, $`A`$ is $`k`$-big. If $`A`$ is $`k`$-nef, we conclude that $`T^{\prime }`$ is isomorphic to $`𝒦`$ by the relative base point free theorem \[5, Theorem 3-1-2\]. Then $`k`$ is projective, $`𝒦`$ is the unique relative minimal model and every birational map $`𝒦\dashrightarrow 𝒦`$ is a birational morphism. Thus we will prove that $`A`$ is $`k`$-nef. Since every big divisor on an abelian surface is ample, $`A`$ is $`k`$-nef if $`𝒦`$ is of type I. In the case that $`𝒦`$ is of type II, we investigate the nef cone of each component of the central fibre of $`𝒦`$. Let $`V`$ be a component of the central fibre. Then $`K_V\sim -2e`$, where $`e`$ is a double curve. Since $`e`$ is a section and $`e^2=0`$, the nef cone of $`V`$ is spanned by $`e`$ and a fibre $`l`$ of the ruling of $`V`$. We deduce that every effective divisor on $`V`$ is nef. Therefore $`A`$ is $`k`$-nef in the case that $`𝒦`$ is of type II. $`\square `$
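The nef-cone computation at the end of the proof can be written out explicitly. The following display is our expansion of the argument, under the normalizations $`e^2=0`$, $`e\cdot l=1`$, $`l^2=0`$:

```latex
% On a component V, write an effective curve class as D = a e + b l.
% Intersecting with the nef classes l and e gives a = D.l >= 0 and
% b = D.e >= 0.  Then for any other effective class D' = a'e + b'l,
\[
  D\cdot D' = (ae+bl)\cdot(a'e+b'l) = ab' + a'b \ \ge\ 0,
\]
% so every effective divisor on V is nef.  In particular the k-big proper
% transform A restricts to a nef divisor on every component of the central
% fibre, which is what the relative base point free theorem requires.
```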
(3.4) Proof of Proposition 1. Let $`\nu :Y\to X`$ be a birational morphism such that $`(f\nu )^{*}D`$ is a simple normal crossing divisor. We define $`T^{\prime \prime }:=\nu ^{*}T`$. If we choose $`x`$ generally, $`F^1H^1(T_0^{\prime \prime },\mathbb{C})\ne 0`$ by Lemma 4. By the semistable reduction theorem \[6, Theorem $`11^{*}`$\], there exists a generically finite surjective morphism $`\eta :T^{\prime }\to T^{\prime \prime }`$ such that $`t^{\prime }:T^{\prime }\to \mathrm{\Delta }^1`$ is a semistable degeneration. By Theorem 2, there exists a Kulikov model $`k:𝒦\to \mathrm{\Delta }^1`$ which is birational to $`t^{\prime }`$. We denote by $`𝒦_0`$ the central fibre of $`𝒦`$. Then $`F^1H^1(𝒦_0,\mathbb{C})\ne 0`$, since $`F^1H^1(𝒦_0,\mathbb{C})\cong F^1H^1(T_0^{\prime },\mathbb{C})`$ due to Lemma 3 and $`F^1H^1(T_0^{\prime \prime },\mathbb{C})\ne 0`$. Thus $`𝒦`$ is of type I or type II. Let $`G`$ be the Galois group of the cyclic extension $`K(T^{\prime })/K(T)`$ and $`g`$ a generator of $`G`$. Since $`k`$ is birational to $`t^{\prime }`$, there exists a birational map $`\mathrm{\Phi }_g:𝒦\dashrightarrow 𝒦`$ corresponding to $`g`$. By Lemma 5, $`\mathrm{\Phi }_g`$ is a birational morphism and $`G`$ acts on $`𝒦`$ holomorphically. Therefore $`T`$ is birational to the quotient $`𝒦/G`$. We claim that $`F^1H^1(𝒦_0,\mathbb{C})`$ carries a nonzero $`G`$-equivariant element. Let $`Z`$ be a $`G`$-equivariant resolution of the indeterminacy of $`T^{\prime }\dashrightarrow 𝒦`$. Then $`F^1H^1(T_0^{\prime },\mathbb{C})\cong F^1H^1(Z_0,\mathbb{C})\cong F^1H^1(𝒦_0,\mathbb{C})`$ by Lemma 3. Let $`\alpha `$ be a nonzero element of $`F^1H^1(T_0^{\prime \prime },\mathbb{C})`$. The pull back of $`\alpha `$ in $`F^1H^1(Z_0,\mathbb{C})`$ is a nonzero $`G`$-equivariant element. Thus there exists a nonzero $`G`$-equivariant element in $`F^1H^1(𝒦_0,\mathbb{C})`$. $`\square `$
## 4. Classification of Type I degeneration
(4.1) In this section, we prove the following proposition.
###### Proposition 2.
Let $`t:T\to \mathrm{\Delta }^1`$ be an abelian fibration which is birational to the quotient of a Kulikov model $`𝒦`$ of type I by a cyclic group $`G`$. Assume that
1. $`T`$ is smooth.
2. $`K_T\sim _t0`$.
3. There exists a nonzero $`G`$-equivariant element of $`F^1H^1(𝒦_0,\mathbb{C})`$.
Then the representation $`\rho :G\to \mathrm{Aut}H^1(𝒦_0,\mathbb{C})`$ is faithful and the central fibre $`T_0`$ of $`T`$ satisfies the properties of Theorem 1 (1).
(4.2) We need the following lemma to prove Proposition 2.
###### Lemma 6.
The central fibre $`𝒦_0`$ admits a $`G`$-equivariant elliptic fibration over an elliptic curve.
Proof. Since $`𝒦_0`$ is an abelian surface, it is enough to prove that $`𝒦_0`$ admits a $`G`$-equivariant fibration. Let $`g`$ be a generator of $`G`$. We consider the following morphism:
$$H^1(𝒦_0,\mathbb{C})\stackrel{\mathrm{id}-g^{*}}{\to }H^1(𝒦_0,\mathbb{C}).$$
Since $`G`$ is a cyclic group, the kernel of $`\mathrm{id}-g^{*}`$ is $`G`$-invariant. Moreover this kernel is nonzero by Proposition 1. Therefore $`H^1(𝒦_0,\mathbb{C})`$ has a $`G`$-equivariant sub Hodge structure and we conclude that $`𝒦_0`$ admits a $`G`$-equivariant fibration. $`\square `$
(4.3) Proof of Proposition 2. We will construct a suitable resolution $`Z`$ of $`𝒦/G`$ and the unique relative minimal model $`W`$ of $`Z`$ over $`\mathrm{\Delta }^1`$. By Lemma 6, there exists a $`G`$-equivariant elliptic fibration on the central fibre $`𝒦_0`$ of $`𝒦`$. We denote this fibration by $`\pi :𝒦_0\to C`$. By construction, the action of $`G`$ on $`C`$ is a translation. Let $`H`$ be the kernel of the representation $`\rho :G\to \mathrm{Aut}H^1(𝒦_0,\mathbb{C})`$. Since the action of $`H`$ on $`𝒦_0`$ is a translation, $`𝒦/H`$ is smooth. It is enough to consider the action of $`G/H`$ on $`𝒦/H`$ for the investigation of the singularities of $`𝒦/G`$. If the action of $`G/H`$ on $`C/H`$ is a translation, then $`𝒦/G`$ is smooth. Moreover $`𝒦/G`$ is the unique relative minimal model over $`\mathrm{\Delta }^1`$ since it contains no rational curves. Since $`T`$ is also a relative minimal model over $`\mathrm{\Delta }^1`$, we obtain $`T\cong 𝒦/G`$. By construction, the central fibre of the quotient $`𝒦/G`$ is a hyperelliptic surface. Since every hyperelliptic surface is an étale quotient of a product of elliptic curves, $`T_0\cong (𝒦/G)_0`$ satisfies the properties of Theorem 1 (1). We claim that the representation $`\rho :G\to \mathrm{Aut}H^1(𝒦_0,\mathbb{C})`$ is faithful. If $`H`$ is not trivial, then $`K_𝒦`$ is not $`G`$-invariant and $`K_{𝒦/G}\nsim 0`$. Since $`𝒦/G\cong T`$ and $`K_T\sim 0`$, $`H`$ is trivial. In the following, we assume that the action of $`G/H`$ on $`C/H`$ is trivial. Since the induced morphism $`𝒦_0\to C/H`$ is $`G/H`$-equivariant, the singularities of $`𝒦/G`$ consist of several copies of the product of a surface quotient singularity and an elliptic curve. The list of the surface quotient singularities which occur here is found in \[1, Table 5 (p157)\]. We construct the relative minimal resolution $`Z`$ of $`𝒦/G`$ by taking the minimal resolutions of the surface quotient singularities.
If the singularities of $`𝒦/G`$ consist only of products of Du Val singularities and an elliptic curve, then $`Z`$ is a relative minimal model over $`\mathrm{\Delta }^1`$ and we put $`W=Z`$. In the other cases, we obtain a relative minimal model $`W`$ after birational contractions of $`Z`$ (cf. \[1, pp 156–158\]). In both cases, $`W`$ is the unique minimal model by the same argument as in Lemma 5. Since $`W`$ is birational to $`T`$ and $`T`$ is a relative minimal model over $`\mathrm{\Delta }^1`$, $`T\cong W`$. By construction, the central fibre $`W_0`$ admits a fibration over $`C/H`$. Note that the fibre of $`W_0\to C/H`$ is a Kodaira singular fibre of type $`I_0^{*}`$, $`II`$, $`II^{*}`$, $`III`$, $`III^{*}`$, $`IV`$ or $`IV^{*}`$. Since $`\mathrm{Sing}W_0`$ forms multisections of $`W_0\to C/H`$, there exists an étale finite cover $`\stackrel{~}{C}\to C/H`$ such that the base change $`W_0\times _{C/H}\stackrel{~}{C}`$ is isomorphic to the product of a Kodaira singular fibre and an elliptic curve. Finally, we prove that the representation $`\rho :G\to \mathrm{Aut}H^1(𝒦_0,\mathbb{C})`$ is faithful. We derive a contradiction assuming that $`H`$ is not trivial. If $`W=Z`$, then there exists a morphism $`\eta :W\to 𝒦/G`$ such that $`\eta ^{*}K_{𝒦/G}\sim K_W`$. Since the action of $`H`$ on $`𝒦_0`$ is a translation, $`K_𝒦`$ is not $`G`$-equivariant and $`K_{𝒦/G}\nsim 0`$. However, $`K_W\sim \eta ^{*}K_{𝒦/G}`$ and $`K_W\sim 0`$, which is a contradiction. If $`W\ncong Z`$, we consider the base change $`Z_1:=Z\times _{𝒦/G}𝒦/H`$. Since $`Z`$ is obtained by blowing up along the singular locus of $`𝒦/G`$ (cf. \[1, pp 158\]) and the singular locus of $`𝒦/G`$ consists of elliptic curves, $`Z_1`$ is smooth. Let $`\eta _1:Z_1\to W`$, $`\eta _2:Z_1\to 𝒦/H`$ and $`k/H:𝒦/H\to \mathrm{\Delta }^1`$. We denote by $`F`$ the central fibre of $`k/H`$ with reduced structure. Note that $`(k/H)^{*}(0)=mF`$, where $`m`$ is the order of $`H`$. Then
$$\eta _1^{*}K_W\sim \eta _2^{*}K_{𝒦/H}-\eta _2^{*}F.$$
By the adjunction formula, $`K_{𝒦/H}\sim (m-1)F`$. Thus $`K_W\nsim 0`$. However, this is a contradiction because $`K_W\sim 0`$. $`\square `$
## 5. Classification of Type II degeneration
(5.1) In this section, we prove the following proposition and Theorem 1.
###### Proposition 3.
Let $`t:T\mathrm{\Delta }^1`$ be an abelian fibration which is birational to the quotient of a Kulikov model $`𝒦`$ of type II by a cyclic group $`G`$. Assume that
1. $`T`$ is smooth.
2. $`K_T\sim _t0`$.
3. There exists a nonzero $`G`$-equivariant element of $`F^1H^1(𝒦_0,ℂ)`$.
Then the representation $`\rho :G\to \mathrm{Aut}H^1(𝒦_0,ℂ)`$ is faithful and the central fibre $`T_0`$ of $`T`$ satisfies the properties of Theorem 1 (2).
(5.2) For the proof of Proposition 3, we investigate the action of $`G`$ on the central fibre of $`𝒦`$.
###### Lemma 7.
Let $`g`$ be a generator of $`G`$ and $`m`$ the smallest positive integer such that every component of the central fibre is stable under the action of $`g^m`$. We denote by $`H`$ the subgroup of $`G`$ generated by $`g^m`$. Then
1. The representation $`\sigma :G\to \mathrm{Aut}F^1H^1(𝒦_0,ℂ)`$ is trivial.
2. The action of $`H`$ is free and the central fibre of the quotient $`𝒦/H`$ is a cycle of minimal elliptic ruled surfaces.
Proof.
1. By assumption (3) of Proposition 3, there exists a $`G`$-equivariant element in $`F^1H^1(𝒦_0,ℂ)`$. Since $`dimF^1H^1(𝒦_0,ℂ)=1`$, every element of $`F^1H^1(𝒦_0,ℂ)`$ is $`G`$-invariant.
2. From the definition of $`m`$, $`g^m`$ acts on each component of the central fibre of $`𝒦`$. Let $`V`$ be a component of the central fibre and $`\pi :V\to C`$ a ruling. Since every fibre of $`\pi `$ is $`ℙ^1`$ and $`C`$ is an elliptic curve, $`g^m`$ maps a fibre of $`\pi `$ to a fibre of $`\pi `$, that is, $`\pi `$ is $`g^m`$-equivariant. From Lemma 7 (1) and Lemma 2, holomorphic one-forms on $`V`$ are invariant under the action of $`g^m`$. Thus, the action of $`g^m`$ on $`C`$ is a translation. Therefore $`V/H`$ is a minimal elliptic ruled surface. Since each component is stable under the action of $`g^m`$, the central fibre of the quotient $`𝒦/H`$ is a cycle of minimal elliptic ruled surfaces. $`\mathrm{}`$
(5.3) Proof of Proposition 3. From Lemma 7, $`𝒦/H`$ is smooth and the central fibre of $`𝒦/H`$ is a cycle of minimal elliptic ruled surfaces. Let $`\mathrm{\Gamma }`$ be the dual graph of the central fibre of $`𝒦`$ and $`g`$ a generator of $`G`$. Considering $`𝒦/H`$ instead of $`𝒦`$, we may assume that the action of $`g^m`$ is trivial if its action on $`\mathrm{\Gamma }`$ is trivial.
(5.3.1) If the action of $`G`$ is free, $`𝒦/G`$ is smooth and it is a relative minimal model over $`\mathrm{\Delta }^1`$. Every component of the central fibre of $`𝒦/G`$ is a minimal elliptic ruled surface $`V`$ which has a section $`e`$ such that $`K_V\sim -2e`$ and $`e^2=0`$. Thus we show that $`T\cong 𝒦/G`$ by an argument similar to that in the proof of Lemma 5. We claim that the representation $`\rho :G\to \mathrm{Aut}H^1(𝒦_0,ℂ)`$ is faithful. If $`\mathrm{Ker}\rho `$ is not trivial, then $`K_𝒦`$ is not $`G`$-equivariant and $`K_{𝒦/G}\sim ̸0`$. However $`K_T\sim 0`$, so we obtain that $`\mathrm{Ker}\rho `$ is trivial. Since $`\mathrm{\Gamma }`$ is a Dynkin diagram of type $`\stackrel{~}{A_n}`$ and $`G`$ is a cyclic group, the action of $`G`$ on $`\mathrm{\Gamma }`$ is either a rotation or a reflection.
1. If the action of $`G`$ on $`\mathrm{\Gamma }`$ is a rotation, the central fibre of $`𝒦/G`$ is a cycle of minimal elliptic ruled surfaces. Each double curve is a section of a minimal elliptic ruled surface.
2. If the action of $`G`$ on $`\mathrm{\Gamma }`$ is a reflection, the central fibre $`(𝒦/G)_0`$ of $`𝒦/G`$ is a chain of minimal elliptic ruled surfaces. Let $`V`$ be an edge component of the central fibre and $`V^{\prime }`$ the component such that $`V\cap V^{\prime }\ne \varnothing `$. Note that $`(𝒦/G)_0=2V+2V^{\prime }+\text{(other components)}`$. By the adjunction formula,
$$K_V\sim -V^{\prime }|_V.$$
Therefore, the double curve $`V\cap V^{\prime }`$ is a bisection of the ruling of $`V`$. Every other double curve is a section.
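The degree count behind the bisection claim can be written out as a short sketch (it uses $`l^2=0`$ for a ruling fibre $`l`$ and $`\mathrm{deg}K_{ℙ^1}=-2`$):

```latex
% restrict the adjunction relation for the edge component V to a ruling fibre l:
\deg K_l = (K_V + l)\cdot l = K_V\cdot l = -\,V'\cdot l .
% Since l \cong \mathbb{P}^1 forces \deg K_l = -2, this gives
V'\cdot l = 2 ,
% i.e. the double curve V \cap V' meets each ruling fibre twice: a bisection.
```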
(5.3.2) If the action of $`g`$ is not free, we need the following lemma.
###### Lemma 8.
If the action of $`G`$ has fixed points, then the action of $`G`$ on $`\mathrm{\Gamma }`$ is a reflection and it preserves two vertices. The fixed locus in the central fibre consists of sections or bisections of the rulings of the components corresponding to the fixed vertices.
Assuming this lemma, the central fibre of the quotient $`𝒦/G`$ is a chain of minimal elliptic ruled surfaces. The singular locus of $`𝒦/G`$ consists of several copies of the product of an $`A_1`$ singularity and an elliptic curve. Thus the unique relative minimal model $`W`$ over $`\mathrm{\Delta }^1`$ is obtained by blowing up along the singular locus. Since $`T`$ is a relative minimal model over $`\mathrm{\Delta }^1`$, $`W\cong T`$. The dual graph of the central fibre of $`W`$ is $`A_n`$, $`D_n`$ or $`\stackrel{~}{D_n}`$. The double curve on an edge component is a bisection or a section. Every other double curve is a section. We claim that the representation $`\rho :G\to \mathrm{Aut}H^1(𝒦_0,ℂ)`$ is faithful. If $`H`$ is not trivial, $`K_𝒦`$ is not $`G`$-equivariant. Since the blow-up is crepant, $`K_W`$ is the pull-back of $`K_{𝒦/G}`$, so $`K_W\sim ̸0`$ if $`H`$ is not trivial. However $`K_T\sim 0`$, which is a contradiction.
(5.4) Proof of Lemma 8. If the action of $`G`$ on $`\mathrm{\Gamma }`$ is a rotation, there are no fixed points. Thus the action of $`G`$ on $`\mathrm{\Gamma }`$ is a reflection. We derive a contradiction assuming that $`G`$ fixes one of the edges of $`\mathrm{\Gamma }`$. Let $`C`$ be the elliptic curve corresponding to the edge which is fixed by $`G`$. From Lemma 2 and Lemma 7, the action of $`G`$ on $`C`$ preserves the holomorphic one-form on $`C`$. Therefore $`C`$ is contained in the fixed locus of the action of $`G`$. The singular locus of the quotient $`𝒦/G`$ consists of several copies of the product of an $`A_1`$ singularity and an elliptic curve. Let $`w:W\to 𝒦/G`$ be the blow-up along $`C`$. We denote by $`V_i`$ the components of $`W_0`$. Let $`V_0`$ be the exceptional divisor coming from the blow-up along $`C`$. Since the central fibre $`W_0`$ of $`W`$ is a chain of minimal elliptic ruled surfaces, there exist components $`V_1`$ and $`V_2`$ such that $`V_0\cap V_1\ne \varnothing `$, $`V_2\cap V_1\ne \varnothing `$ and $`V_i\cap V_j=\varnothing `$ $`(i=0,1,2,j\ne 0,1,2)`$. Then $`W_0=V_0+2V_1+2V_2+\text{(other components)}`$. Since $`W`$ is smooth along $`V_1`$,
$$K_{V_1}\sim (K_W+V_1)|_{V_1}\sim (-\frac{1}{2}V_0-V_2)|_{V_1}.$$
by the adjunction formula. Let $`l`$ be a fibre of the ruling of $`V_1`$. Then
$$K_l\sim (K_{V_1}+l)|_l\sim (-\frac{1}{2}V_0-V_2)\cdot l.$$
Since every double curve of $`W_0`$ is a section, $`\mathrm{deg}K_l=-3/2`$. However, this is a contradiction because $`l\cong ℙ^1`$. Therefore $`G`$ fixes two vertices. Let $`V`$ be one of the components corresponding to the fixed vertices and $`\pi :V\to C`$ the ruling of $`V`$. Since $`C`$ is an elliptic curve and every fibre of $`\pi `$ is $`ℙ^1`$, $`\pi `$ is $`G`$-equivariant. By Lemma 7 (1), the action of $`G`$ on $`V`$ preserves the one-form on $`V`$. Since the action of $`G`$ is not free, $`G`$ acts on $`C`$ trivially. There exist two fixed points on each fibre of the ruling of $`V`$. Thus we obtain the rest of the assertion of Lemma 8. $`\mathrm{}`$
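The arithmetic of the contradiction can be spelled out (each double curve is a section, so it meets the ruling fibre $`l`$ exactly once):

```latex
\deg K_l
  = \Bigl(-\tfrac{1}{2}V_0 - V_2\Bigr)\cdot l
  = -\tfrac{1}{2}\,(V_0\cdot l) - (V_2\cdot l)
  = -\tfrac{1}{2} - 1
  = -\tfrac{3}{2},
% which is impossible: l \cong \mathbb{P}^1 requires \deg K_l = -2.
```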
The proof of Proposition 3 is completed. $`\mathrm{}`$
(5.5) Proof of Theorem 1. We take a general point $`x`$ of the discriminant locus of $`f`$ and a unit disk $`\mathrm{\Delta }^1`$ containing $`x`$ such that $`T:=X\times _S\mathrm{\Delta }^1`$ is smooth. By Proposition 1, the abelian fibration $`T\to \mathrm{\Delta }^1`$ satisfies the assumptions of Proposition 2 or 3. Then $`T_0`$ satisfies the assertions of Theorem 1 by Propositions 2 and 3. ∎
no-problem/9911/astro-ph9911290.html | ar5iv | text | # Detection of a 5-Hz QPO from X-ray Nova GRS 1739-278
## 1 INTRODUCTION
X-ray novae are outbursts of X-ray emission, often from black hole candidates, that exceed the quiescent flux levels by many orders of magnitude. About 50 X-ray novae have been detected in the last 35 years, each typically lasting several hundred days before returning to quiescence (Chen, Shrader & Livio (1997)). Perhaps all of them are recurrent, but the recurrence period is unknown for most. Probably just a small percentage of such systems in the Galaxy has been discovered so far.
During outburst, X-ray novae are typically found in one of several distinguishable spectral states (e.g. Sunyaev et al. (1994); Tanaka & Lewin (1995); Tanaka & Shibazaki (1996)). In the high state, the spectrum is composed of a bright thermal component and an extended steep power-law with photon index around $`\sim 2.5`$. In the low state, the spectrum is a hard power-law with a photon index of $`\sim 1.5`$ and an exponential high-energy cutoff. A third state, the so-called very high state, has also been recognized. It has a two-component spectrum similar to the high state, but with a somewhat stronger power-law component and with much more prominent fast variability (Miyamoto et al. (1991); Takizawa et al. (1997)). Many X-ray novae demonstrate state transitions during the outburst that are associated with X-ray flux changes. It is widely believed that both the state transitions and the flux variations are regulated by variations in the accretion rate. Similar states and state transitions were also observed in the persistent black hole systems, namely Cyg X-1 and GX 339-4, but the dynamics of these systems is much slower, so they are more often observed in one of these states for extended periods of time, with occasional unpredictable transitions (Sunyaev & Truemper (1979); Makishima et al. (1986); Dove et al. (1998); Trudolyubov et al. (1998)).
A new hard X-ray source GRS 1739-278 was discovered near the Galactic Center on March 18, 1996 by the SIGMA/Granat gamma-ray telescope (Paul et al. 1996). Initial SIGMA localization was refined by the TTM/Kvant instrument (Borozdin et al. 1996). VLA radio observations revealed a radio source within the TTM error region (Durouchoux et al. 1996). Mirabel et al. (1996) measured the optical/infrared flux from this object.
In 1996, the source was observed in the X-ray band by ROSAT (Greiner et al. 1997), Granat (Vargas et al. 1997), RXTE (Takeshima et al. 1996), and the Kvant module of the Mir Space Station. Borozdin et al. (1998) presented the spectral analysis of Mir-Kvant and RXTE data, and classified GRS 1739-278 as a soft X-ray nova and a black-hole candidate. In this letter, we report on the discovery of a 5-Hz QPO in the power density spectrum, which supports the classification of this system as a black-hole binary with a low-mass companion.
## 2 OBSERVATIONS AND DATA REDUCTION
The RXTE satellite observed X-ray nova GRS 1739-278 on March 31, 1996 and nine more times from May 10 through May 29 of that year, each with an exposure of several kiloseconds. The total exposure was about 24 ksec.
The RXTE satellite has two co-aligned spectrometers with a $`\sim 1`$ degree field of view each: a set of five xenon proportional counters, PCA, with maximum sensitivity in the 4-20 keV energy range; and a scintillation spectrometer, HEXTE, which consists of eight NaI(Tl)/CsI detectors that are sensitive to 15-250 keV photons. HEXTE detectors are combined into two independent clusters of four detectors each, which alternate between measuring the source and the X-ray background.
We used PCA Binned and Single Binned mode data in our timing analysis. We generated power density spectra (PDS) in the 0.001–256 Hz frequency range (2–13 keV energy band). For lower frequencies (below 0.3 Hz) a single Fourier transform on the data binned in $`0.125`$ s time intervals was performed. For higher frequencies we summed together the results of Fourier transforms made over short stretches of data with $`0.002`$ s time bins. The resulting spectra were logarithmically rebinned when necessary to reduce scatter at high frequencies and normalized to the square root of the $`rms`$ fractional variability. The standard technique of subtracting the white-noise level due to Poissonian statistics, corrected for dead-time effects, was employed (Vikhlinin, Churazov & Gilfanov (1994); Zhang et al. (1995)). For spectral analysis we used FTOOLS v.4.2 and the PCA response matrix v.3.3 (see Jahoda et al. 1997 for computations of the matrix and Stark et al. 1997 for simulations of the background).
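The segment-averaging and normalization described above can be sketched as follows (an illustrative reimplementation, not the actual pipeline; the 0.002 s binning is taken from the text, while the segment length, the count rate, and the injected 5-Hz test signal are assumptions):

```python
import numpy as np

def power_density_spectrum(counts, dt, seg_len):
    """Average the periodograms of short segments of a binned light curve.

    Returns frequencies and powers in a fractional-rms-squared per Hz
    (Miyamoto-type) normalization; the Poisson white-noise level is not
    subtracted in this sketch.
    """
    nseg = len(counts) // seg_len
    mean = counts[: nseg * seg_len].mean()
    power = np.zeros(seg_len // 2)
    for i in range(nseg):
        seg = counts[i * seg_len : (i + 1) * seg_len]
        ft = np.fft.rfft(seg)
        power += np.abs(ft[1 : seg_len // 2 + 1]) ** 2  # drop the DC bin
    power /= nseg
    freqs = np.fft.rfftfreq(seg_len, dt)[1 : seg_len // 2 + 1]
    return freqs, 2.0 * dt * power / (seg_len * mean**2)

# mock light curve: Poisson counts modulated at 5 Hz, 0.002 s time bins
rng = np.random.default_rng(0)
dt, nbins = 0.002, 2**17
t = np.arange(nbins) * dt
counts = rng.poisson(20.0 * (1.0 + 0.5 * np.sin(2 * np.pi * 5.0 * t)))
freqs, pds = power_density_spectrum(counts, dt, seg_len=1024)
print("PDS peaks at %.2f Hz" % freqs[np.argmax(pds)])
```

The logarithmic rebinning and the dead-time-corrected noise subtraction used in the paper would be applied on top of this.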
## 3 LIGHT CURVE OF THE SOURCE
Fig. 1 shows the GRS 1739–278 light curve as measured with the RXTE all-sky monitor (ASM) during the 1996 outburst. The overall shape of the outburst may be characterized as fast-rise-exponential-decay (FRED; Chen, Shrader & Livio (1997)). However, the rise of this outburst was not particularly fast, and the decay was interrupted by multiple secondary maxima. Secondary maxima were observed in FRED-type light curves of many X-ray novae (Sunyaev et al. (1994); Tanaka & Shibazaki (1996)).
The RXTE pointed instruments observed GRS 1739–278 during the decay of the outburst. The first observation (March 31) was made during the initial decay after the main maximum. X-ray flux from the source was about 600 mCrab in PCA band (2-30 keV). A series of observations were also performed in May after the secondary maximum. By that time the source had faded to 250-300 mCrab. During all of the May observations GRS 1739–278 displayed very similar energy spectra and featureless PDS shapes.
## 4 5-HZ QPO IN POWER DENSITY SPECTRUM
Construction of the PDS for the first PCA/RXTE observation (March 31, 1996; Fig. 2) revealed the presence of a QPO feature with a central frequency near 5 Hz (see Table 1 for the QPO parameters). The QPO was seen clearly in the 2-13 keV band, but was not significant at higher energies, where the number of photons was small and hence the errors were larger. Also present in the PDS were a band-limited noise component and significant variability at low frequencies. The variability of the source during the March 31 observation is shown in Fig. 3. In contrast, much weaker fast variability was detected in the RXTE observations of the same source in May 1996 (Fig. 2).
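Extracting QPO parameters such as those in Table 1 is usually done by fitting a Lorentzian plus a continuum to the PDS; below is a minimal sketch on synthetic data (the centroid, width, and noise values are illustrative assumptions, not the measured ones):

```python
import numpy as np
from scipy.optimize import curve_fit

def qpo_model(f, a, f0, w, c):
    # Lorentzian of peak power a, centroid f0, FWHM w, on a flat floor c
    # (the band-limited noise component of the real PDS is ignored here)
    return a * (w / 2) ** 2 / ((f - f0) ** 2 + (w / 2) ** 2) + c

rng = np.random.default_rng(1)
f = np.linspace(0.5, 20.0, 200)
pds = qpo_model(f, 5.0, 5.0, 1.2, 1.0) * (1 + 0.05 * rng.standard_normal(f.size))

popt, pcov = curve_fit(qpo_model, f, pds, p0=[3.0, 4.0, 1.0, 1.0])
print("centroid %.2f Hz, FWHM %.2f Hz, Q = %.1f" % (popt[1], popt[2], popt[1] / popt[2]))
```

The quality factor Q = f0/FWHM and the fractional rms (from the integrated Lorentzian power) follow directly from the fitted parameters.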
The pointing direction for the observations of GRS 1739–278 was offset by about 11 arcmin in order to reduce the count rate from the pulsar-burster GRO J1744–28 (Takeshima, Canizzo & Corbet (1996)). However, we were still concerned about possible contamination of our power density spectra by this bright nearby source. So we analyzed the data from the observation of GRO J1744–28 on March 30, 1996, just one day before the first observation of GRS 1739–278 with RXTE took place. The flux from GRO J1744–28 in the PCA band (2-30 keV) was 925 mCrab, while for GRS 1739–278 (next day) it was only 604 mCrab. We built a PDS for GRO J1744–28 to compare it with the PDS of GRS 1739–278. The result is presented in Fig. 4. A prominent peak at $`\sim 2`$ Hz corresponding to the pulsar period dominates the PDS of GRO J1744–28. But there is no indication of a $`\sim 2`$ Hz peak in the PDS we derived for GRS 1739–278. So we conclude that contamination of the GRS 1739–278 observations by GRO J1744–28, if any, was not a significant factor, and that the detected $`\sim 5`$ Hz QPO does belong to GRS 1739–278.
The energy spectra for all RXTE observations of GRS 1739–278 (see examples in Fig. 2) have a shape which is typical of X-ray novae (Tanaka & Shibazaki (1996)). In general, such spectra are well fitted by a two-component model composed of a “multicolor” accretion disk component (Makishima et al. 1986) in the soft part of the spectrum and a power-law component at higher energies. A detailed spectral analysis of the GRS 1739–278 observations was presented by Borozdin et al. (1998, 1999). During the observation on March 31, 1996, when the QPO was detected, the power-law component in the energy spectrum was more prominent. During the subsequent observations in May 1996 the QPO was not detected, the power-law component waned, and no hard flux was detected by HEXTE.
Strong rapid variability in the X-ray band and an extended power-law component in the energy spectrum are both features of black hole binaries in the very high state (Miyamoto et al. (1991); van der Klis (1995)). We see that GRS 1739–278 was in this state on March 31, 1996, and made a transition down to a high state sometime before May 10.
## 5 DISCUSSION
QPO features in power density spectra have been identified for a variety of black hole candidates including GX 339–4 (Miyamoto et al. (1991)), Nova Muscae (Miyamoto et al. (1993); Ebisawa et al. (1994)), XTE J1748-288 (Revnivtsev, Trudolyubov & Borozdin (1999)), and 4U 1630–47 (Trudolyubov, Borozdin & Priedhorsky (2000)). Significant low-frequency variability has been observed in the Galactic microquasars GRS 1915+105 (Morgan et al. (1997)) and GRO J1655–40 (Remillard et al. (1999)), and also during the 1998 outburst of the recurrent X-ray Nova 4U 1630–47 (Trudolyubov, Borozdin & Priedhorsky (2000)). All of these objects were in their very high states during those observations. In many cases a correlation between the intensity of PDS noise components and the relative strength of the power-law spectral component was reported (e.g. Miyamoto et al. (1993); Cui et al. (1999); Revnivtsev, Trudolyubov & Borozdin (1999); Trudolyubov, Churazov & Gilfanov (1999)). GRS 1739–278 fits well into this picture as another example of a transient LMXB (low-mass X-ray binary) and a black hole candidate.
An interesting energy spectrum was observed with the TTM telescope on the Mir-Kvant module in March 1996 during the rise of the flux from GRS 1739–278 (Borozdin et al. (1998)). It is well described by a power law with absorption and does not require the introduction of an additional soft blackbody component. At the same time, the slope of the power-law component (2.3-2.7) is much steeper than the typical value for the low state of black-hole candidates (1.5-2.0). A similar spectrum was observed earlier by Ginga and the Granat satellite from X-ray Nova Muscae 1991 (Grebenev et al. (1991); Gilfanov et al. (1991); Ebisawa et al. (1994)) and by TTM/Kvant and SIGMA/Granat from KS 1730-312 (Borozdin et al. (1995); Trudolyubov et al. (1996)). Later this type of spectrum was observed with RXTE from XTE J1748-288 (Revnivtsev, Trudolyubov & Borozdin (1999)), 4U 1630-47 (Trudolyubov, Borozdin & Priedhorsky (2000)), and XTE J1550-564 (Sobczak et al. (1999)). These examples show that a power-law spectrum with a variable slope is typical of soft X-ray novae during their flux rise and near the primary maximum. There is a clear tendency for the spectrum to steepen as the outburst progresses. However, the total X-ray luminosity is sometimes even higher than in the very high state observed later during the same outburst (Revnivtsev, Trudolyubov & Borozdin (1999); Trudolyubov, Borozdin & Priedhorsky (2000)). The observation of a power-law spectrum with a variable slope at the early phase of X-ray nova outbursts is important because this is when an accretion disk around the black hole is formed. The generation of such a spectrum should be a key element of any sound dynamical model for an accreting black hole.
## 6 CONCLUSION
Using PCA/RXTE archival data we detected, for the first time, a QPO feature in the PDS of X-ray nova GRS 1739-278. QPO harmonics near 10 Hz and strong band-limited noise at low frequencies were also observed. Both the PDS and the two-component energy spectrum for this observation displayed the properties typical for the very high state in black hole candidates.
In the later stages of the 1996 outburst, GRS 1739-278 transitioned into a high state with much weaker rapid variability and soft energy spectra that were dominated by a thermal component with an effective temperature $`\sim 1`$ keV. Observation of the two canonical states strongly supports the identification of GRS 1739-278 as a black hole candidate, based on the similarity of its X-ray properties to those of other black hole X-ray binaries.
We note that at the beginning of the outburst GRS 1739-278 exhibited a power-law energy spectrum with a variable slope and a tendency to steepen with time (Borozdin et al. (1998)). Similar spectra have been measured from several other X-ray novae during the early stages of their outbursts. These black hole candidates therefore seem to display a clear pattern in their spectral evolution.
## 7 ACKNOWLEDGMENTS
The RXTE public data were extracted from the HEASARC electronic archive operated by the Goddard Space Flight Center (NASA). We also used quick-look results provided by the ASM/RXTE team. The authors are grateful to Dr. Thomas Vestrand for his valuable suggestions, which helped us to improve the paper.
no-problem/9911/cond-mat9911260.html | ar5iv | text | # Whispering Vortices
## Abstract
Experiments indicating the excitation of whispering gallery type electromagnetic modes by a vortex moving in an annular Josephson junction are reported. At relativistic velocities the Josephson vortex interacts with the modes of the superconducting stripline resonator giving rise to novel resonances on the current-voltage characteristic of the junction. The experimental data are in good agreement with analysis and numerical calculations based on the two-dimensional sine–Gordon model.
Whispering gallery modes are universal linear excitations of circular and annular resonators. They were first observed in the form of a sound wave traveling along the outer wall of a walkway in the circular dome of St. Paul’s Cathedral in London and were investigated by Lord Rayleigh and others . In the 2 meter wide walkway, which forms a circular gallery of 38 meter diameter about 40 meters above the ground of the cathedral, the whisper of a person can be transmitted along the wall to another person listening to the sound on the other side of the dome. The investigations by Rayleigh led to the conclusion that the whisper of a person excites acoustic eigenmodes of the circular dome which can be described using high-order Bessel functions. This acoustic phenomenon lends its name “whispering gallery mode” to a number of similar, mostly electromagnetic excitations in circular resonators. Whispering gallery modes are of strong interest in micro-resonators used for ultra-small lasers . Most recently, circular resonators with small deformations, in which chaotic whispering gallery modes were observed, have attracted a lot of attention . Here we describe the experimental observation of electromagnetic whispering gallery modes excited by a vortex moving in an annular Josephson junction of diameter less than $`100\mu \mathrm{m}`$.
A long Josephson junction is an intriguing nonlinear wave propagation medium for the experimental study of the interaction between linear waves and solitons . In this letter we report the excitation of whispering gallery type electromagnetic modes by a topological soliton (Josephson vortex) moving at relativistic velocities in a wide annular Josephson junction. We make use of the same Josephson vortex for both exciting and detecting the whispering gallery mode. These modes are manifested by their resonant interaction with the moving vortex, which results in a novel fine structure on the current-voltage characteristic of the junction. Our experiments are consistent with the recently published theory based on the two-dimensional sine-Gordon model. We present numerical calculations based on this model, which show good agreement with experiments.
Electromagnetic waves in an annular Josephson junction are described by the perturbed sine-Gordon equation (PSGE) for the superconducting phase difference $`\varphi `$ between the top and bottom superconducting electrodes of the junction . The Josephson vortex, often also called fluxon, corresponds to a twist over $`2\pi `$ in $`\varphi `$. It carries a magnetic flux equal to the magnetic flux quantum $`\mathrm{\Phi }_0=h/2e=\mathrm{2.07\hspace{0.17em}10}^{15}\mathrm{Vs}`$. Physically, this flux is induced by a vortex of the screening current flowing across the junction barrier. Linear excitations in this system are Josephson plasma waves that account for small amplitude oscillations in $`\varphi `$. The maximum phase velocity of electromagnetic waves in such a junction is the Swihart velocity given by $`c_0=\lambda _J\omega _p`$, where $`\lambda _J`$ is the Josephson length and $`\omega _p`$ the plasma frequency . In zero external magnetic field the PSGE for an annular Josephson junction of width $`w<\lambda _J`$ can be written as
$$\left(^2\frac{^2}{t^2}\right)\varphi \mathrm{sin}\varphi =\gamma +\alpha \frac{\varphi }{t}\beta ^2\frac{\varphi }{t},$$
(1)
where space and time are normalized by $`\lambda _J`$ and $`\omega _p^{-1}`$, respectively. In Eq. (1) $`\nabla ^2-\partial ^2/\partial t^2`$ is the D’Alembert wave operator, $`\mathrm{sin}\varphi `$ is the nonlinear term due to the phase-dependent Josephson current and $`\gamma `$ is the normalized bias current. The damping terms $`\alpha \partial \varphi /\partial t`$ and $`\beta \nabla ^2\partial \varphi /\partial t`$ are inversely proportional to the quasiparticle resistance across the junction barrier and to the quasiparticle impedance of the electrodes, respectively. For the junctions of width $`w<\lambda _J`$ considered in this paper, a homogeneously distributed bias current $`\gamma `$ as in Eq. (1) is justified. In contrast, for junctions with $`w>\lambda _J`$ the bias current may contribute to the boundary conditions of Eq. (1) .
A vortex steadily moving at a velocity $`u`$ driven by the Lorentz force due to the bias current $`\gamma `$ generates a voltage $`V\propto u`$ across the Josephson junction. This voltage can be monitored in experiment. The radiation associated with the time-dependent fields described by Eq. (1), i.e. the magnetic field $`H\propto |\nabla \varphi |`$ and the electric field $`E\propto \partial \varphi /\partial t`$, can be measured either directly (for certain junction geometries) or through its interaction with the moving vortex.
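A minimal numerical illustration of this vortex dynamics is a single fluxon on a one-dimensional ring (the radial dimension, the β term, and the passive region are neglected; the values of γ, α, and the grid are illustrative assumptions, and the sign convention is chosen so that positive γ drives the fluxon):

```python
import numpy as np

# Single sine-Gordon fluxon on a ring of circumference L (units of lambda_J):
# phi_tt = phi_xx - sin(phi) - alpha*phi_t + gamma.  The 2*pi winding of the
# fluxon is split off so that theta = phi - 2*pi*x/L is periodic on the grid.
L, N = 40.0, 800
dx = L / N
x = np.arange(N) * dx
alpha, gamma = 0.1, 0.3
dt, steps = 0.02, 15000

theta = 4.0 * np.arctan(np.exp(x - L / 2)) - 2 * np.pi * x / L  # fluxon at L/2
v = np.zeros(N)                                                 # theta_t
voltages = []
for n in range(steps):
    lap = (np.roll(theta, -1) - 2 * theta + np.roll(theta, 1)) / dx**2
    acc = lap - np.sin(theta + 2 * np.pi * x / L) - alpha * v + gamma
    v += dt * acc
    theta += dt * v
    if n > 2 * steps // 3:
        voltages.append(v.mean())          # spatially averaged phi_t

u = abs(np.mean(voltages)) * L / (2 * np.pi)   # steady fluxon velocity / c0
print("measured u = %.3f (power balance predicts ~0.92)" % u)
```

The mean voltage, i.e. the mean rotation rate of the phase, agrees with the standard perturbation-theory power balance $`u=[1+(4\alpha /\pi \gamma )^2]^{-1/2}`$, illustrating the proportionality $`V\propto u`$ used throughout the paper.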
In contrast to most of the previous experiments focusing on quasi-one-dimensional annular Josephson junctions, we investigate comparatively wide, effectively two-dimensional junctions. We have fabricated a set of 5 annular Josephson junctions (A to E) with the ratio $`\delta =r_i/r_e`$ between the inner radius $`r_i`$ and the fixed outer radius $`r_e=50\mu \mathrm{m}`$ being varied between $`\delta =0.94`$ and $`\delta =0.60`$ (see Table I). The junctions are made at Hypres Inc. using Nb-Al/AlO<sub>x</sub>-Nb trilayer technology and employ the standard biasing geometry shown in Fig. 1. Due to the fabrication technology, the junction area is surrounded by a small passive region about $`2\mu \mathrm{m}`$ wide, which is omitted from Fig. 1 for clarity. In the passive region the top and bottom electrodes are separated by a $`200\mathrm{nm}`$ thick SiO<sub>2</sub> layer, which acts as a small stripline in parallel to the junction itself, but with different electrical parameters.
All junctions show a homogeneous bias current distribution, inferred from the large value of the vortex-free critical current at zero field, which is close to the theoretical limit. Their critical current density is $`j_c\approx 160\mathrm{Acm}^{-2}`$ and the London penetration depth is $`\lambda _L\approx 90\mathrm{nm}`$ at $`4.2K`$. The thicknesses of the top and the bottom superconducting electrode are both well in excess of $`\lambda _L`$. Accordingly, the characteristic parameters are estimated as $`\lambda _J\approx 30\mu \mathrm{m}`$ and $`\nu _p=\omega _p/2\pi \approx 50\mathrm{GHz}`$. All presented measurements were done at $`T=4.2\mathrm{K}`$ using a well-shielded, low-noise measurement setup.
We could realize single and multiple vortex states repeatedly and reproducibly in any of the junctions. Vortices were trapped by applying a small bias current during cooling down from the normal to the superconducting state. Single-vortex states are identified as the lowest quantized voltage step observed on the current-voltage characteristics. Also, a characteristic change of the critical current modulation with magnetic field accompanied by a suppression of the critical current by a factor of more than 100 at zero field, as reported earlier , was observed when a vortex was trapped in the junction.
In Figure 2 the single-vortex characteristics of the junctions A to E are shown. The current scale is normalized by the flux-free zero-field critical current of each junction. The voltage of each characteristic is multiplied by a factor $`\xi =c_0/\overline{c}_0`$ (see Tab. I) calculated using the approach by Lee et al. , in order to remove the effect of the small passive region. $`c_0`$ ($`\overline{c}_0`$) is the wave velocity in the junction neglecting (including) the passive region.
A striking novel feature noticed in Fig. 2 is the fine structure on the vortex resonance which appears as the junction width is increased. The fine structure is most clearly visible for the widest junction E (see inset of Fig. 2). We argue that the observed fine structure can be well understood as due to the interaction of the moving vortex with the linear whispering gallery modes .
$$\left(\frac{1}{r}\frac{}{r}r\frac{}{r}+\frac{1}{r^2}\frac{^2}{\theta ^2}\frac{^2}{t^2}1\right)\varphi ^{(\mathrm{lin})}=0,$$
(2)
which is found from Eq. (1) neglecting all perturbations ($`\gamma `$, $`\alpha \partial \varphi /\partial t`$, $`\beta \nabla ^2\partial \varphi /\partial t`$) and approximating the nonlinearity as $`\mathrm{sin}\varphi \approx \varphi `$ to take into account the gap in the plasmon excitation spectrum. In zero external magnetic field the boundary conditions
$$\frac{\partial \varphi ^{(\mathrm{lin})}}{\partial r}(r=r_i,r_e)=0$$
(3)
have to be fulfilled. In terms of the electromagnetic waves in the junction, Eq. (3) corresponds to a total internal reflection condition. For large angular wave numbers $`k1`$ and wide junctions $`\delta 1`$ a solution to (2) is given by
$$\varphi _k^{(\mathrm{lin})}(r,\theta ,t)=AJ_k(\omega _kr)\mathrm{exp}(ik\theta )\mathrm{exp}(-i\omega _kt),$$
(4)
where A is an arbitrary amplitude factor, $`J_k`$ is the Bessel function of the first kind and $`\omega _k`$ is the angular frequency associated with the mode $`k`$ satisfying the boundary condition (3) at the external radius. By calculating the dependence of $`\omega _k`$ on $`k`$, one obtains the dispersion relation of whispering gallery modes in the annular junction.
The resonance condition between the angular frequency $`\mathrm{\Omega }`$ of the vortex rotation in the ring and the characteristic frequency $`\omega _k`$ of a whispering gallery mode can be expressed as
$$\mathrm{\Omega }=\omega _k/k.$$
(5)
In a dispersion diagram Eq. (5) determines the crossing points between the straight dispersion line of the vortex $`\omega ^{(\mathrm{v})}=\mathrm{\Omega }k`$ and the dispersion curve $`\omega ^{(\mathrm{lin})}=\omega _k`$ of the linear modes. At low enough damping, a vortex moving at the angular frequency $`\mathrm{\Omega }`$ comes into resonance with a whispering gallery mode of wavenumber $`k`$. If the spacing in $`\mathrm{\Omega }`$ between the resonances for different $`k`$ is large enough, this effect can be observed as fine structure resonances on the single-vortex current-voltage characteristic of a wide annular Josephson junction. The increase of the excited wavenumber $`k`$ with decreasing vortex velocity is a characteristic feature of the interaction between the Josephson vortex and the whispering gallery modes of the junction. This property is clearly manifested in our numerical calculations presented below.
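Concretely, with the boundary condition (3) imposed at the external radius, $`\omega _k`$ is set by the first zero of the Bessel-derivative $`J_k^{\prime }`$; a sketch (radii in units of $`\lambda _J`$, with $`\lambda _J\approx 30\mu \mathrm{m}`$ assumed; the plasma gap is neglected here, as in Eq. (4)):

```python
from scipy.special import jnp_zeros  # zeros of the derivative of J_k

def omega_k(k, r_e):
    """Lowest whispering-gallery frequency (units of omega_p) of mode k:
    first zero of J_k' scaled to the outer radius r_e (units of lambda_J)."""
    return jnp_zeros(k, 1)[0] / r_e

r_e = 50.0 / 30.0  # junction E outer radius, assuming lambda_J ~ 30 um
for k in range(7, 13):
    wk = omega_k(k, r_e)
    approx = (k + 0.808 * k ** (1.0 / 3.0)) / r_e  # large-k formula of the text
    print(k, round(wk, 3), round(approx, 3), round(wk / k, 4))  # last: Omega, Eq. (5)
```

Since $`\omega _k/k`$ decreases with $`k`$, successive whispering gallery resonances appear at successively lower rotation frequencies, i.e. lower voltages, as observed in Fig. 2.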
To confirm the above interpretation of our experimental results, we performed direct numerical simulations of Eq. (1) in polar coordinates with the boundary conditions (3) using our junction parameters (see Table I). We calculated current-voltage characteristics $`V(\gamma )=\mathrm{\Phi }_0\mathrm{\Omega }(\gamma )\omega _p/(2\pi )`$ and two-dimensional phase profiles $`\varphi (r,\theta ,t)`$ for various bias points using a plasma frequency of $`\omega _p/2\pi =52.4\mathrm{GHz}`$ determined from experimental data. The damping parameter $`\alpha =0.03`$ was chosen close to its estimated experimental value at $`T=4.2\mathrm{K}`$; $`\beta `$ was set to $`0`$ here. The calculated characteristics for junctions A to E are plotted in Fig. 3. Clearly, the fine structure on the current-voltage characteristics of the wide junctions D and E is very well reproduced in the simulation. Figure 4 shows the phase profiles at bias points on subsequent fine structure resonances of junction E. Evidently, a clear whispering gallery structure is found here. With decreasing bias, the angular wave number of the mode is observed to increase from resonance to resonance.
Using the resonance condition (5) and the proportionality between the angular frequency of the vortex $`\mathrm{\Omega }`$ and the voltage $`V`$ measured in experiment, the fine structure resonances can be fitted according to the formula
$$V=\mathrm{\Phi }_0\frac{\omega _p}{2\pi }\mathrm{\Omega }=\mathrm{\Phi }_0\frac{\omega _p}{2\pi }\frac{\omega _k}{k},$$
(6)
where $`k`$ is an integer. In the limit $`\delta \ll 1`$ and $`k\gg 1`$ the linear mode spectrum can be approximated analytically as $`\omega _k=r_e^{-1}(k+0.808k^{1/3})`$ . More accurately, we have calculated the values of $`\omega _k`$ by numerically solving Eq. (2) with the boundary conditions (3). The best fit to the experimental data is found for the plasma frequency $`\omega _p/2\pi =52.4\mathrm{GHz}`$ and the wavenumber $`k=7`$ for the highest voltage resonance (see Fig. 5a). This value of $`k`$ is exactly the one found for the highest resonance in numerical simulations (see Fig. 4a). The voltages of the resonances found in simulation (Fig. 5b) are close to the ones calculated from the dispersion relation (dotted lines). The current-voltage characteristic of junction E simulated for the damping parameters $`\alpha =0.02`$ and $`\beta =0.0012`$ is shown in Fig. 5b (open squares). Clearly, taking into account the quasiparticle surface losses ($`\beta `$) leads to a larger differential resistance of the resonance, which closely resembles the experimental data in Fig. 5a.
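Equation (6) together with the asymptotic dispersion can be evaluated directly. In the sketch below the normalized outer radius $`r_e`$ is an assumed value (chosen so that the emission frequencies fall in the few-hundred-GHz range quoted in the summary); only $`\mathrm{\Phi }_0`$, $`\omega _p/2\pi `$, and the fitted $`k=7`$ starting index are taken from the text.

```python
# Hedged evaluation of Eq. (6) with the asymptotic whispering-gallery
# dispersion omega_k ~ (k + 0.808*k**(1/3)) / r_e.
Phi0 = 2.0678e-15          # flux quantum (Wb)
f_p = 52.4e9               # omega_p / (2*pi) from the best fit (Hz)
r_e = 1.8                  # ASSUMED normalized outer radius (units of lambda_J)

for k in range(7, 13):     # k = 7 is the highest-voltage resonance in the fit
    omega_k = (k + 0.808 * k ** (1.0 / 3.0)) / r_e
    V = Phi0 * f_p * omega_k / k        # Eq. (6): V = Phi0*(omega_p/2pi)*omega_k/k
    f_rad = omega_k * f_p               # radiated (mode) frequency
    print(f"k = {k:2d}:  V = {V * 1e6:6.1f} uV,  f = {f_rad / 1e9:5.0f} GHz")
```

Note that $`V`$ decreases with $`k`$, since $`\omega _k/k`$ is a decreasing function of $`k`$: this is exactly the statement that the excited wavenumber grows as the vortex slows down.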
Considering the resonance condition Eq. (5) and the dispersion of the linear modes $`\omega _k`$, it can be shown that the density of resonances in voltage and the wave number of the lowest excited mode increase with decreasing junction width $`w=r_er_i`$. This fact was verified by numerical calculations for junction D, where the lowest mode number excited on the top of the resonance was found to be $`k=9`$ (see Fig. 4d). For very narrow rings no fine structure is observed in experiment and in simulation, due to the overlapping of neighboring resonances in the presence of damping.
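The width dependence of the resonance density can be sketched numerically if one assumes that Eq. (2) with the boundary conditions (3) is a Helmholtz-type radial problem with Neumann conditions at both edges, an assumption consistent with the quoted disk-limit asymptotics $`\omega _k=r_e^{-1}(k+0.808k^{1/3})=j_{k,1}^{}/r_e`$. The mode frequencies are then roots of a cross product of Bessel-function derivatives; the radii below are assumed, not the measured device values.

```python
import numpy as np
from scipy.special import jvp, yvp
from scipy.optimize import brentq

def omega_k(k, r_i, r_e):
    """Lowest mode frequency for angular number k, ASSUMING a Helmholtz-type
    Eq. (2) with Neumann edges; omega = kappa, the first root of the
    Bessel-derivative cross product for the annulus [r_i, r_e]."""
    f = lambda kap: (jvp(k, kap * r_i) * yvp(k, kap * r_e)
                     - jvp(k, kap * r_e) * yvp(k, kap * r_i))
    grid = np.linspace(0.05, 10.0, 2000)
    vals = f(grid)
    i = np.nonzero(np.sign(vals[:-1]) != np.sign(vals[1:]))[0][0]
    return brentq(f, grid[i], grid[i + 1])

r_e = 1.8                            # assumed normalized outer radius
for r_i in (1.0, 1.5):               # wider vs narrower ring (assumed radii)
    Om = [omega_k(k, r_i, r_e) / k for k in range(7, 11)]  # resonant Omega_k
    print(f"w = {r_e - r_i:.1f}:  Omega_k =",
          ", ".join(f"{v:.4f}" for v in Om))
```

For the narrower ring the resonant values $`\mathrm{\Omega }_k=\omega _k/k`$ cluster together, so in the presence of damping the neighboring resonances overlap, as described in the text.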
The origin of the observed fine structure has also been confirmed to be due to the interaction of the vortex with the whispering gallery modes of the junction by investigating its dependence on temperature, number of trapped vortices, and external magnetic field. At high temperatures, no whispering gallery resonances are excited due to the large intrinsic damping, i.e. no fine structure is observed. Decreasing the temperature below $`4.2\mathrm{K}`$, fine structure is observed in all samples A to E; the differential resistance of the resonances also decreases with temperature. Moreover, we found that the voltages of the fine structure resonances scale with the number $`n`$ of moving vortices (or vortex/anti-vortex pairs). Therefore, for $`n>1`$ the fine structure becomes clearly resolved in voltage and also more pronounced, because several vortices coherently pump the whispering gallery mode. No dependence of the fine structure step voltage positions on small external magnetic fields was observed. We have also investigated narrower annular junctions with a wide idle region both experimentally and theoretically . In this case, the spectrum of the whispering gallery modes (and, thus, of the fine structure) is strongly influenced by the geometry and the electrical properties of the passive region. The fine structure recently reported in Ref. appears to be consistent with our observations.
In summary, we have presented experimental and numerical evidence for the excitation of whispering gallery modes by vortices moving in wide annular junctions. This novel effect has been observed at sufficiently low damping for annular junctions in a wide range of electrical and geometrical parameters. It is very robust with respect to small external perturbations such as variations in bias current density, boundary conditions, or junction inhomogeneities. The resonance frequencies have been calculated, and agreement with both experimental data and numerical simulations to better than one percent has been reached. Thus, the vortices appear to whisper (generate radiation) at frequencies between $`250`$ and $`450`$ GHz in the annular whispering gallery of $`100\mu \mathrm{m}`$ diameter.
We thank A. Franz and D. Kruse for fruitful discussions. V. V. K. is grateful for the support by the Russian Foundation for Basic Research (Grant no. 9702-16928) and for partial support by the German Ministry of Science and Technology (BMBF Grant no. 13N6945/3). |