# Topology in QCD with 4 flavours of dynamical fermions

Partially supported by MURST and by the EC TMR program ERBFMRX-CT97-0122.

## 1 INTRODUCTION

Topology plays an essential role in determining the low energy features of strong interaction physics. In this paper we present a lattice study of the topological properties of QCD in the presence of four flavours of degenerate dynamical staggered fermions. A relevant quantity is the topological susceptibility, defined in the continuum as

$$\chi \equiv \int d^4x\,\partial_\mu \langle 0|\mathrm{T}\left\{K_\mu (x)Q(0)\right\}|0\rangle , \qquad (1)$$

where $`K_\mu (x)`$ is the Chern current and $`Q(x)=\partial_\mu K_\mu (x)`$ is the topological charge density,

$$Q(x)=\frac{g^2}{64\pi ^2}\epsilon ^{\mu \nu \rho \sigma }F_{\mu \nu }^a(x)F_{\rho \sigma }^a(x). \qquad (2)$$

Eq. (1) defines the prescription for the singularity of the time ordered product when $`x\to 0`$ . The value of $`\chi `$ in the quenched theory is related to the $`\eta ^{\prime }`$ mass by the Witten-Veneziano formula. It has been successfully measured on the lattice , confirming the Witten-Veneziano prediction. In full QCD with spontaneous chiral symmetry breaking, $`\chi `$ is related to the quark condensate

$$\chi =\frac{m_q}{N_f}\langle \overline{\psi }\psi \rangle _{m_q=0}+o(m_q), \qquad (3)$$

where $`m_q`$ is the quark mass and $`N_f`$ the number of flavours. We want to test Eq. (3) for $`N_f=4`$ and different values of $`m_q`$.

Another relevant quantity is the slope at $`q^2=0`$ of the topological susceptibility, $`\chi ^{\prime }`$, defined as

$$\chi ^{\prime }\equiv \frac{d\chi (q^2)}{dq^2}\Big|_{q^2=0}=\frac{1}{8}\int d^4x\,\langle Q(x)Q(0)\rangle \,x^2, \qquad (4)$$

where $`\chi (q^2)=\int d^4x\,e^{iqx}\langle Q(x)Q(0)\rangle `$. In the quenched theory the consistency of the Witten-Veneziano mechanism requires $`\chi ^{\prime }`$ to be small. In full QCD, instead, it is expected to be larger and related to the singlet axial charge (and so to the proton spin crisis) . Its determination is therefore of particular importance. Unfortunately, techniques which work well for the lattice determination of $`\chi `$ are not straightforwardly applicable to the measurement of $`\chi ^{\prime }`$. We will present a preliminary estimate of $`\chi ^{\prime }`$ and discuss perspectives for future refinements.

## 2 TOPOLOGICAL SUSCEPTIBILITY

We have simulated the full theory using the HMC algorithm with 4 flavours of staggered fermions at $`\beta =5.35`$ and four different values of the bare quark mass, $`am_q=0.010, 0.015, 0.020, 0.050`$, performing for each mass value respectively 6000, 1500, 1000 and 3000 units of molecular dynamics time. We have used a $`16^3\times 24`$ lattice and the Wilson action for the pure gauge sector.

The field theoretical method has been used to determine $`\chi `$. Given a discretization $`Q_L(x)`$ of the topological charge density, we define the lattice susceptibility:

$$\chi _L=\sum_x \langle Q_L(x)Q_L(0)\rangle =\frac{\langle Q_L^2\rangle }{V}. \qquad (5)$$

$`Q_L`$ is related by a finite multiplicative renormalization $`Z`$ to the continuum topological charge (in the full theory mixings of $`Q_L`$ with fermionic operators appear, which can anyway be shown to be negligible ). Moreover $`\chi _L`$ in general does not meet the continuum prescription used in Eq. (1) and, due to contact terms in the product $`Q(x)Q(0)`$ at small $`x`$, a further additive renormalization appears:

$$\chi _L=a^4Z^2(\beta )\chi +M(\beta ). \qquad (6)$$

The heating method has been used to evaluate the renormalizations. $`Z`$ is determined by measuring $`Q_L`$ on a sample of thermalized configurations obtained by heating a semiclassical one-instanton configuration.
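To make the role of the two renormalizations concrete, here is a minimal numerical sketch of the subtraction in Eq. (6); the values of $`Z`$, $`M`$ and $`\chi _L`$ below are illustrative placeholders, not the ones measured in this work.

```python
def chi_physical(chi_L, Z, M):
    """a^4 * chi from Eq. (6): chi_L = a^4 Z^2 chi + M,
    with Z and M obtained from the heating method."""
    return (chi_L - M) / Z**2

# Illustrative numbers only (not the values measured in this work):
a4_chi = chi_physical(chi_L=2.5e-6, Z=0.25, M=1.5e-6)
print(f"a^4 chi = {a4_chi:.1e}")   # gives a^4 chi = 1.6e-05
```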
Similarly $`M`$ is obtained by measuring $`\chi _L`$ on a sample with zero topological background, using the fact that Eq. (1) leads to $`\chi =0`$ in the trivial topological sector. Smearing techniques have been used to improve the operator $`Q_L`$ , in order to reduce the renormalizations and obtain a better accuracy on $`\chi `$ after the subtraction. We have performed 2 smearing steps, starting from a standard discretization of $`Q`$, with a smearing coefficient $`c=0.9`$, as reported in .

In Fig. 1 we report the results obtained for $`a^4\chi `$ versus the quark mass. The clear dependence of $`a^4\chi `$ on $`m_q`$ disappears when going to physical units. We have fixed the scale by measuring the string tension, obtaining $`a^2\sigma =0.073(7)`$ at $`am_q=0.05`$ and $`a^2\sigma =0.033(4)`$ at $`am_q=0.01`$. From these values, assuming $`\sqrt{\sigma }=440\mathrm{MeV}`$, we obtain $`a(am_q=0.01)=0.081(5)\mathrm{fm}`$ and $`a(am_q=0.05)=0.121(6)\mathrm{fm}`$, from which $`(\chi )^{1/4}(am_q=0.01)=(153\pm 16)\mathrm{MeV}`$ and $`(\chi )^{1/4}(am_q=0.05)=(153\pm 11)\mathrm{MeV}`$. We cannot rule out possible systematic errors coming both from the poor sampling of topological modes at the lowest quark masses and from the determination of the physical scale. Indeed, at $`am_q=0.01`$ we have also determined the lattice spacing by measuring $`m_\rho `$ and $`m_\pi `$, obtaining a different result, $`a=0.101(5)\mathrm{fm}`$. Using this value we obtain $`(\chi )^{1/4}(am_q=0.01)=(123\pm 10)\mathrm{MeV}`$, which is closer to the theoretical expectation coming from Eq. (3), $`(\chi )^{1/4}\simeq 110\mathrm{MeV}`$.

## 3 ESTIMATE OF $`\chi ^{\prime }`$

On the lattice we can define

$$\chi _L^{\prime }=\frac{1}{8}\sum_x \langle Q_L(x)Q_L(0)\rangle \,x^2. \qquad (7)$$

A relation similar to Eq. (6) holds in this case:

$$\chi _L^{\prime }=a^4Z^2(\beta )\chi ^{\prime }+M^{\prime }(\beta ). \qquad (8)$$

While $`Z`$ in the previous equation is the same as in Eq. (6), since it is purely related to the renormalization of the topological charge, $`M^{\prime }`$ is a new additive renormalization, containing mixings of $`\chi _L^{\prime }`$ with operators of equal or lower dimension. More generally, defining the two-point correlation function of the topological charge density on the lattice, $`\langle Q_L(x)Q_L(0)\rangle `$, one can write

$$\langle Q_L(x)Q_L(0)\rangle =a^8Z^2\langle Q(x)Q(0)\rangle +m(x); \qquad (9)$$

$`m(x)`$ indicates mixings with terms of equal or lower dimension in the Wilson OPE of $`Q_L(x)Q_L(0)`$, and is related to $`M`$ and $`M^{\prime }`$ in the following way:

$$M=\sum_x m(x);\qquad M^{\prime }=\frac{1}{8}\sum_x m(x)\,x^2. \qquad (10)$$

$`M^{\prime }`$ cannot be determined by the heating method, since in this case the continuum $`\chi ^{\prime }`$ is not constrained to be zero in the trivial topological sector. Therefore the techniques used for $`\chi `$ cannot be straightforwardly applied to the determination of $`\chi ^{\prime }`$. In principle $`M^{\prime }`$ can also be computed in lattice perturbation theory, but in practice this approach is not particularly successful, especially when dealing with smeared operators, for which the convergence properties of the perturbative series worsen. We stress that cooling techniques, which can usually be used to determine $`\chi `$, also fail in the determination of $`\chi ^{\prime }`$: only the zeroth moment of the two-point function $`\langle Q(x)Q(0)\rangle `$, i.e. $`\chi `$, is topologically protected, since it can be expressed in terms of the global topological charge, $`\chi =\langle Q^2\rangle /V`$, and $`Q`$ is quasi-stable under cooling. This is not the case for higher moments, and in particular for $`\chi ^{\prime }`$.
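The scale-setting arithmetic quoted above is easy to reproduce. The sketch below converts the measured $`a^2\sigma `$ into a lattice spacing and a generic $`a^4\chi `$ into physical units, assuming $`\sqrt{\sigma }=440\,\mathrm{MeV}`$ and $`\hbar c=197.327\,\mathrm{MeV\,fm}`$; the input $`a^4\chi `$ in the last line is an illustrative value chosen to reproduce the quoted 153 MeV, not a number from this work.

```python
import math

HBARC = 197.327       # MeV * fm
SQRT_SIGMA = 440.0    # MeV, assumed string-tension scale

def lattice_spacing_fm(a2_sigma):
    """Lattice spacing in fm from the dimensionless a^2 * sigma."""
    return math.sqrt(a2_sigma) * HBARC / SQRT_SIGMA

def chi_fourth_root_mev(a4_chi, a_fm):
    """(chi)^{1/4} in MeV from the dimensionless a^4 * chi and a in fm."""
    return a4_chi**0.25 / a_fm * HBARC

for am_q, a2_sigma in [(0.01, 0.033), (0.05, 0.073)]:
    print(f"am_q = {am_q}: a = {lattice_spacing_fm(a2_sigma):.3f} fm")
# reproduces a ~ 0.081 fm and a ~ 0.121 fm, as quoted in the text

# illustrative a^4 chi = 1.6e-5 at a = 0.0815 fm gives ~153 MeV:
print(f"(chi)^1/4 = {chi_fourth_root_mev(1.6e-5, 0.0815):.0f} MeV")
```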
One possibility is to use an improved topological charge operator for which $`M^{\prime }`$, even if not computable, is known to be small and thus negligible in Eq. (8). If we follow this ansatz for the 2-smeared operator used previously we obtain, from our simulation at $`am_q=0.01`$, $`\sqrt{|\chi ^{\prime }|}\simeq 20\mathrm{MeV}`$, in good agreement with the value expected from sum rules , $`\sqrt{|\chi ^{\prime }|}=(25\pm 3)\mathrm{MeV}`$. However, the systematic error deriving from neglecting $`M^{\prime }`$, $`M^{\prime }/\chi ^{\prime }`$, can be estimated to be of the same order of magnitude as $`M/\chi \simeq 40\%`$ at $`am_q=0.01`$, which is still quite large.

A better estimate of $`\chi ^{\prime }`$ requires deeper knowledge of the two-point correlation function. In the continuum $`\langle Q(x)Q(0)\rangle `$ is known to be negative, by reflection positivity, for $`|x|>0`$, and positive and singular at $`x=0`$. Similarly, when using a lattice action which preserves reflection positivity, we expect $`\langle Q_L(x)Q_L(0)\rangle <0`$ whenever the two operators $`Q_L(0)`$ and $`Q_L(x)`$ do not overlap. This is clear in Fig. 2, where a determination at $`am_q=0.01`$ of $`\langle Q_L(x)Q_L(0)\rangle `$, averaged over spherical shells of width $`\delta x=0.6a`$, is reported for 1 and 2 smearings. The information on $`\langle Q_L(x)Q_L(0)\rangle `$ is not enough to extract $`\langle Q(x)Q(0)\rangle `$: according to Eq. (9), $`m(x)`$ is also needed. However, we know that $`m(x)`$ comes from contact terms, so it must vanish at large $`|x|`$. Therefore, for a given operator $`Q_L(x)`$, there must be an $`x_0`$ such that, for $`|x|>x_0`$, $`m(x)`$ can be ignored in Eq. (9) and $`\langle Q(x)Q(0)\rangle `$ can be easily extracted, the value of $`Z`$ being known from the determination of $`\chi `$. A practical way to determine $`x_0`$ is the following: $`\langle Q_L(x)Q_L(0)\rangle `$ is measured for two operators $`Q_{L1}`$ and $`Q_{L2}`$ and one looks for a plateau at large $`|x|`$ in the ratio of the two functions, corresponding to the squared ratio of the multiplicative renormalizations, $`(Z_1/Z_2)^2`$. In this way we have determined $`\langle Q(x)Q(0)\rangle `$ for 1 and 2 smearing steps at large $`|x|`$ and $`am_q=0.01`$, as shown in Fig. 3. There is good agreement between the two determinations, as expected. Work is in progress to extrapolate the information at large $`|x|`$ to smaller distances, thus allowing a more careful determination of $`\chi ^{\prime }`$.
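A sketch of the plateau analysis just described, run on synthetic data: two correlators share the same long-distance tail, rescaled by $`Z_1^2`$ and $`Z_2^2`$, but carry different short-distance contact terms. The functional forms and all numbers below are invented for illustration only; they are not the lattice data of Figs. 2 and 3.

```python
import numpy as np

def z_ratio_from_plateau(corr1, corr2, x, x_min):
    """Estimate (Z1/Z2)^2 from the large-|x| plateau of the ratio of two
    two-point functions averaged over spherical shells of radius x."""
    mask = x > x_min
    ratio = corr1[mask] / corr2[mask]
    return ratio.mean(), ratio.std() / np.sqrt(mask.sum())

# Synthetic illustration: common tail (stand-in for <Q(x)Q(0)>) rescaled
# by Z^2, plus rapidly decaying contact terms m(x).
x = np.arange(0.6, 8.0, 0.6)
tail = -np.exp(-x) / x**2
c1 = 0.04 * tail + 0.5 * np.exp(-3 * x)   # Z1^2 = 0.04 plus contact term
c2 = 0.09 * tail + 0.8 * np.exp(-3 * x)   # Z2^2 = 0.09 plus contact term
mean, err = z_ratio_from_plateau(c1, c2, x, x_min=3.0)
print(f"(Z1/Z2)^2 ~ {mean:.3f} +/- {err:.3f}")  # close to 0.04/0.09 ~ 0.44
```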
# A Quantum Mechanical Derivation of Gamow's Relation For the Time and Temperature of the Expanding Universe

Subodha Mishra and D. N. Tripathy

Department of Physics and Meteorology & Centre for Theoretical Studies, Indian Institute of Technology, Kharagpur-721302, India

Institute of Physics, Sachivalaya Marg, Bhubaneswar-751005, India

Abstract: The quantum mechanical approach developed by us recently for the evolution of the universe is used to give an alternative derivation of the relation connecting the temperature of the cosmic background radiation and the age of the universe, which is found to be similar to the one obtained by Gamow long ago. By assuming the age of the universe to be $``$ 20 billion years, we reproduce a value of $``$ 2.91 K for the cosmic background radiation, agreeing well with the recently measured experimental value of 2.728 K. Besides, this theory enables us to calculate the photon density and entropy associated with the background radiation and the ratio of the number of photons to the number of nucleons, which quantitatively agree with the results obtained by others.

It is by now accepted that the most important theory for the origin of the universe is the Big Bang theory , according to which the present universe is considered to have started with a huge explosion from a superhot and superdense stage. Theoretically one may visualize its starting from a mathematical singularity with infinite density. This also comes from the solutions of the type I and type II forms of Einstein's field equations . What follows from all these solutions is that the universe has originated from a point where the scale factor $`R`$ (to be identified as the radius of the universe) is zero at time $`t=0`$, and its derivative with respect to time is taken to be infinite at this time. That is, the initial explosion is thought to have happened with infinite velocity, although it is impossible for us to picture the initial moment of the creation of the universe. An indication in support of the Big Bang theory is the expansion of the universe, which has been established by means of Hubble's law,

$$v=H_0d,$$ $`(1)`$

where $`v`$ is the radial recession velocity of the galaxy, $`d`$ is the distance of the galaxy from us and $`H_0`$ is known as the Hubble constant. The quantity $`(1/H_0)`$, which is known as the Hubble time, is a measure of the maximum age of the universe. Since it is very difficult to determine the distance $`d`$ to the galaxies correctly, there is a great uncertainty in the estimated value of $`H_0`$. The correct value for the age of the universe seems to lie between 10 and 20 billion years. The most important evidence for the Big Bang theory is the microwave background radiation, which was discovered by Penzias and Wilson with an effective temperature of $`3.5K`$. However, the most recent measurement of the cosmic background radiation using the Far Infrared Absolute Spectrometer (FIRAS) has yielded a value of 2.728 K. The characteristic of this radiation is that it is almost absolutely isotropic, that is, it comes to us from all directions with the same intensity. This means that the radiation is not due to stars or galaxies, which are the measure of the inhomogeneities of the universe.
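For orientation, the Hubble time quoted above is straightforward to evaluate. The sketch below uses an assumed $`H_0=50`$ km/s/Mpc (an illustrative value, not one taken from this paper), which indeed lands in the 10-20 billion year range.

```python
# Hubble time 1/H0 as an upper bound on the age, for an assumed H0.
MPC_CM = 3.0857e24   # cm per megaparsec
YR_S = 3.156e7       # seconds per year

H0 = 50.0 * 1.0e5 / MPC_CM          # 50 km/s/Mpc expressed in 1/s
hubble_time_gyr = 1.0 / H0 / YR_S / 1e9
print(f"1/H0 = {hubble_time_gyr:.1f} Gyr")   # ~19.6 Gyr
```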
The only plausible explanation for the origin of the cosmic background radiation is that the universe has, perhaps, passed through a state of very high density and high temperature in its early stage, and the present temperature of $`2.728K`$ is nothing but the remnant of the intense heat of the Big Bang, which has been redshifted into the microwave region. Since, within a time period very close to the Big Bang explosion $`(t=0)`$, the universe was in its 'radiation dominated era', there was no possibility for the formation of elementary particles with finite mass at that stage. The actual creation of material particles must have taken place a few seconds after the Big Bang. In the mid 1940s, Gamow suggested that the high density and high temperature required for the synthesis of elements existed in the few moments after the Big Bang. In his simplified picture, Gamow assumed the universe to be initially made of neutrons and photons. As one knows, the neutrons are charge-free particles found in the nuclei of atoms, while photons are the quanta of the electromagnetic field that constitute light. Gamow arrived at a relation connecting the temperature $`T`$ of the universe with the time $`t`$ after the Big Bang, which is given as

$$T=\left(\frac{3c^2}{32\pi Ga}\right)^{1/4}t^{-1/2}\,K,$$ $`(2a)`$

where $`a`$ is the radiation constant and all other constants in the above equation have their usual meaning. A numerical estimate of the factor within the brackets in the above equation gives

$$T=1.5\times 10^{10}\,t^{-1/2}\,K.$$ $`(2b)`$

Later on, the above relation was modified by Hayashi by taking into account the effects of thermal equilibrium among the particles, like neutrons, protons, electron-positron pairs and neutrino-antineutrino pairs, present in the universe at the very high temperatures that existed right at the beginning of the universe. Hayashi obtained a relation, which is given as

$$T=10^{10}\,t^{-1/2}\,K.$$ $`(3)`$

As one can see from Eq. (3), it differs from Gamow's derivation [Eq. (2b)] with respect to the extra factor of (3/2). Based on the relation (2b), Gamow made the prediction that a very faint background of radiation, known as the relic of the Big Bang, should exist at the present epoch of the universe. This was subsequently verified by Penzias and Wilson , who reported an isotropic radiation background with a temperature of $`3.5K`$ in the microwave region. If one takes Eq. (3) to be correct, then, to reproduce a temperature of $`2.728K`$, the present age of the universe would be $`425`$ billion years instead of 20 billion years, where the latter has been known to be very close to the accepted value. Gamow's relation would give a value of $`956`$ billion years for the age, which is even more absurd.

Recently, we have developed a quantum mechanical theory for a system of self-gravitating particles like the stars that have exhausted all the nuclear fuel at their respective cores. In these systems, the particles interact with each other gravitationally. Using a singular form of the single particle density to account for the distribution of particles within the system, we have been able to obtain a compact expression for the radius of a neutron star. Comparing this with the Schwarzschild radius, we arrive at a critical value for the mass of the neutron star beyond which it should go over to the stage of a black hole. Our value for the critical mass seems to agree with those of other theoretical calculations.
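The numerical prefactor in Eq. (2b) can be checked directly in cgs units; the minimal sketch below, using standard values of $`c`$, $`G`$ and the radiation constant $`a`$, reproduces Gamow's $`1.5\times 10^{10}`$.

```python
import math

# Check of the prefactor in Eq. (2a)-(2b): T = (3c^2/(32 pi G a))^{1/4} t^{-1/2}
c = 2.998e10        # speed of light, cm/s
G = 6.674e-8        # gravitational constant, dyn cm^2 g^-2
a_rad = 7.566e-15   # radiation constant, erg cm^-3 K^-4

prefactor = (3 * c**2 / (32 * math.pi * G * a_rad)) ** 0.25
print(f"T = {prefactor:.2e} * t^(-1/2) K")   # ~1.5e10, as in Eq. (2b)
```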
Applying such a theory to a white dwarf, we have succeeded in reproducing the so-called Chandrasekhar limit for its critical mass. In a subsequent work, we have used the above theory to make a study of the evolution of the present universe by visualizing it as a system constituted of a large number of self-gravitating fictitious particles, fermionic in nature, interacting with each other through gravitational potentials. As far as neutron stars are concerned, they are obviously constituted of fermions. For the universe, it is said that the major constituent of the total mass of the present universe is the Dark Matter (DM). Since neutrinos are considered to be the most probable candidates for the particles of the DM, we are justified in saying that the universe is constituted of particles that are fermions. Proceeding in a manner similar to the one used by us for the study of stars, we arrive at an expression for the radius of the universe which, after invoking the Mach principle , assumes a form involving only the fundamental constants like $`G`$, $`\hbar `$, $`c`$ and the mass $`m`$ of the constituent particles. Following this expression, we have made an estimate of the total mass of the universe, which is found to agree with the results of other theoretical calculations . Our calculated value for $`(\dot{G}/G)`$ is also in good agreement with those of many earlier workers. There are many other interesting results that follow from this theory, which call for a deeper study of the subject. In the present paper, we want to apply this very theory to make an estimate of the temperature of the cosmic background radiation, whose most recent value has been reported to be $`2.728K`$ . Using our expression, we also try to discuss the production of the various elementary particles that took place in the early universe. All these have been dealt with in the next section.

The Hamiltonian used by us recently for the study of a system of self-gravitating particles is written as

$$H=\sum_{i=1}^{N}\left(-\frac{\hbar ^2}{2m}\right)\nabla _i^2+\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1,j\neq i}^{N}v(\vec{X}_i-\vec{X}_j),$$ $`(4)`$

where $`v(\vec{X}_i-\vec{X}_j)=-g^2/|\vec{X}_i-\vec{X}_j|`$, with $`g^2=Gm^2`$, $`G`$ being the universal gravitational constant and $`m`$ the mass of the constituent particles, whose number is $`N`$. Since the measured value for the temperature of the cosmic microwave background radiation is $`2.728K`$, it lies in the neighbourhood of almost zero temperature. We, therefore, use the zero temperature formalism for the study of the present problem. When $`N`$ is extremely large, the total kinetic energy of the system is obtained as

$$\langle KE\rangle =\left(\frac{3\hbar ^2}{10m}\right)(3\pi ^2)^{2/3}\int d\vec{X}\,[\rho (\vec{X})]^{5/3},$$ $`(5a)`$

where $`\rho (\vec{X})`$ denotes the single particle density that accounts for the distribution of particles (fermions) within the system, which is considered to be a finite one. Eq. (5a) has been written in the Thomas-Fermi approximation. The total potential energy of the system, in the Hartree approximation, is now given as

$$\langle PE\rangle =-\left(\frac{g^2}{2}\right)\int d\vec{X}\,d\vec{X}^{\prime }\,\frac{1}{|\vec{X}-\vec{X}^{\prime }|}\rho (\vec{X})\rho (\vec{X}^{\prime }).$$ $`(5b)`$
In order to evaluate the integrals in Eq. (5a) and Eq. (5b), we had chosen a trial single-particle density $`\rho (\vec{X})`$ of the form:

$$\rho (\vec{X})=\frac{Ae^{-x}}{x^3},$$ $`(6)`$

where $`x=(r/\lambda )^{1/2}`$, $`\lambda `$ being the variational parameter. As one can see from Eq. (6), $`\rho (\vec{X})`$ is singular at the origin. Its interpretation has already been given in our earlier papers . After evaluating the integrals in Eq. (5) to find the total energy $`E(\lambda )`$ of the system, we minimize it with respect to $`\lambda `$ and thereby obtain the energy of the system in its lowest energy state (ground state). Following the expression for $`\langle KE\rangle `$ evaluated at $`\lambda =\lambda _{min}=\lambda _0`$, we write down the value of the equivalent temperature $`T`$ of the system, using the relation

$$T=\left(\frac{2}{3}\right)\left(\frac{1}{k_B}\right)\left[\frac{\langle KE\rangle }{N}\right]=\left(\frac{2}{3}\right)\left(\frac{1}{k_B}\right)(0.015442)\,N^{4/3}\left(\frac{mg^4}{\hbar ^2}\right).$$ $`(7)`$

The expression for the radius $`R_0`$ of the universe, as found by us earlier , is given as

$$R_0=4.047528\left(\frac{\hbar ^2}{mg^2}\right)N^{-1/3}.$$ $`(8)`$

After invoking Mach's principle , which is expressed through the relation $`(\frac{GM}{R_0c^2})\simeq 1`$, and using the fact that the total mass of the universe is $`M=Nm`$, we are able to obtain the total number of particles $`N`$ constituting the universe as

$$N=2.8535954\left(\frac{\hbar c}{Gm^2}\right)^{3/2}.$$ $`(9)`$

Now, substituting Eq. (9) in Eq. (8), we arrive at the expression for $`R_0`$:

$$R_0=2.8535954\left(\frac{\hbar }{mc}\right)\left(\frac{\hbar c}{Gm^2}\right)^{1/2}.$$ $`(10)`$

As one can see from above, $`R_0`$ is of a form which involves only the fundamental constants $`\hbar ,c,G`$ and $`m`$. Now, eliminating $`N`$ from Eq. (7) by virtue of Eq. (9), we have

$$T=\frac{2}{3}(0.0625019)\left(\frac{mc^2}{k_B}\right).$$ $`(11)`$

Let us now assume that the radius $`R_0`$ of the universe is approximately given by the relation

$$R_0\simeq ct,$$ $`(12)`$

where $`t`$ denotes the age of the universe at any instant of time. Hubble's law, as indicated in Eq. (1), also implies that the universe is expanding uniformly. Although this is so for the universe, the galaxies themselves are not uniformly expanding. Considering a photon of light with wavelength $`\lambda `$ travelling a distance of separation $`d`$ between two galaxies at rest with respect to each other, one has $`d=ct`$, where $`t`$ is the time it takes light to travel the space between the galaxies. Because of the expansion of the universe, the galaxies move away from each other at a velocity $`v`$, known as the radial velocity. During this time $`t`$, the galaxies acquire an extra separation $`\mathrm{\Delta }d`$ given by $`\mathrm{\Delta }d=vt`$. Thus one finds that $`\frac{\mathrm{\Delta }d}{d}=(\frac{v}{c})`$. From this it follows that the greater the relative velocities of the galaxies, the greater the separation attained in the time interval $`t`$. The importance of Hubble's law, as stated through Eq. (1), is that the galaxies were closer in the past than they are now. As we have stated earlier, the Hubble time $`(\frac{1}{H_0})`$ represents the maximum age of the universe, because the galaxies themselves slow down the expansion of the universe. Even though the galaxies are far apart, they still exert a gravitational force on each other. Their mutual gravity continuously acts to pull the galaxies together. This means that the universe was expanding faster in the past than it is now.
As indicated in Eq. (12), the velocity of expansion of the universe is approximated to be equal to the velocity of light $`c`$. Following Eq. (10) and Eq. (12), we write $`m`$ as

$$m=\left(\frac{\hbar ^3}{Gc^3}\right)^{1/4}(2.8535954)^{1/2}\frac{1}{\sqrt{t}}.$$ $`(13)`$

A substitution of $`m`$ from the above equation in Eq. (11) enables us to write

$$T=0.070388\left(\frac{1}{k_B}\right)\left(\frac{c^5\hbar ^3}{G}\right)^{1/4}t^{-1/2}$$ $`(14a)`$

$$=0.070388\left[\left(\frac{c^3}{G}\right)\frac{\pi ^2}{60\sigma }\right]^{1/4}t^{-1/2},$$ $`(14b)`$

where $`\sigma =(\frac{\pi ^2k_B^4}{60\hbar ^3c^2})`$ is the Stefan-Boltzmann constant. Substituting the numerical value of $`\sigma `$, which is equal to $`5.669\times 10^{-5}\,erg\,cm^{-2}\,deg^{-4}\,sec^{-1}`$, and the present value of the universal gravitational constant, $`G=6.67\times 10^{-8}\,dyn\,cm^2\,gm^{-2}`$, in Eq. (14b), we obtain

$$T=(0.23172\times 10^{10})\,t^{-1/2}\,K.$$ $`(15)`$

As one can very well see, Eq. (15) is of the same form as obtained by Gamow [Eq. (2b)], apart from the multiplying constant 0.23172. If we accept the age of the universe to be close to $`20\times 10^9yr`$, which we have used here, then with the help of Eq. (15) we arrive at a value for the cosmic background temperature equal to $`2.91K`$. This is very close to the measured value of 2.728 K as reported from the most recent Cosmic Background Explorer (COBE) satellite measurements . However, to reproduce the exact value of 2.728 K for the cosmic background temperature from our expression, Eq. (15), we would require an age of $`22.832\times 10^9yr`$ for the universe. By virtue of the expression given in Eq. (14b), we find

$$\sigma T^4\simeq 2.4547\times 10^{-5}\left(\frac{\pi ^2c^3}{60G}\right)\frac{1}{t^2}.$$ $`(16)`$

The very form of the above equation suggests that the factor on its right hand side (rhs) can be identified as the energy density of the electromagnetic radiation at time $`t`$. Radiation of this form is believed to follow the black-body law. The very agreement of our calculated result with the most accurate value for the temperature of the background radiation shows that the age of the universe is very close to $`20\times 10^9yr`$. This also creates a kind of confidence in us regarding the correctness of our theory compared to others, in spite of its basic difference from the conventional approaches relating to the evolution of the universe. Using Eq. (15), we have made an estimate of the temperature of the universe at various stages of its evolution in time. Comparing the energy associated with the temperature $`T`$ with $`mc^2`$, we calculate the masses of the elementary particles formed at various times. This is shown in Table I of this paper. From the table, we notice that when the age of the universe was less than 5 sec, the formation of electrons and positrons was possible, while when the age of the universe was less than $`1.2\times 10^{-4}`$ sec, the formation of muons and their antiparticles must have taken place. For the formation of mesons and their antiparticles, which needs a temperature of $`1.6\times 10^{12}K`$, the corresponding age of the universe would be less than $`7\times 10^{-5}sec`$. As far as the nucleons (neutron and proton) and their antiparticles are concerned, they must have been formed before an age of $`1.5\times 10^{-6}`$ sec. Thus, the period between $`t=7\times 10^{-5}`$ sec and 5 sec may be called the lepton era, while the period before $`7\times 10^{-5}`$ sec is called the hadron era.
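Eq. (15) is simple enough to verify numerically. The sketch below reproduces the quoted temperature at an age of 20 billion years and the age implied by the observed 2.728 K; the only assumption is the year-to-second conversion.

```python
YR_S = 3.156e7   # seconds per year

def T_of_t(t_sec):
    """Background temperature from Eq. (15): T = 0.23172e10 * t^{-1/2} K."""
    return 0.23172e10 * t_sec ** -0.5

t20 = 20e9 * YR_S
print(f"T(20 Gyr) = {T_of_t(t20):.2f} K")   # ~2.9 K (the text quotes 2.91 K)

# Inverting Eq. (15) for the observed T = 2.728 K:
t_obs = (0.23172e10 / 2.728) ** 2
print(f"t(2.728 K) = {t_obs / YR_S / 1e9:.2f} Gyr")
# ~22.86 Gyr, close to the quoted 22.832e9 yr (the small difference
# comes from the year-to-second conversion used here).
```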
The very early era, which is known as the Planck era, corresponds to the period $`t<10^{-43}`$ sec $`(temperature>10^{32}K)`$. During this period, gravity is considered to play a major role and is, possibly, to be quantized at that stage. Having evaluated the expression on the rhs of Eq. (16), the energy of the electromagnetic radiation radiated per unit area per unit time is given as

$$u=1.6345\times 10^{33}\left(\frac{1}{t^2}\right),$$ $`(17)`$

where $`t`$ is the age of the universe in sec at any instant of time. The entropy $`S`$ associated with the microwave background radiation is obtained as

$$S=\frac{16Vu}{3cT}=2.9058\left(\frac{V}{T}\right)\times 10^{23}\left(\frac{1}{t^2}\right).$$ $`(18)`$

Assuming the present universe to be spherical, its volume $`V`$ is given as $`V=(\frac{4\pi }{3})R_0^3`$, where $`R_0`$ denotes its radius. Taking $`R_0\simeq 2.16\times 10^{28}`$ cm, which corresponds to the age $`t=22.832\times 10^9yr`$ (since $`R_0\simeq ct`$), the photonic entropy of the present universe is calculated to be

$$S=2.369\times 10^{73}\left(\frac{1}{T}\right)\,erg/deg.$$ $`(19a)`$

For $`T=2.728K`$, this becomes

$$S=0.86\times 10^{73}\simeq 10^{73}\,erg/deg.$$ $`(19b)`$

The equilibrium number of photons associated with the microwave background radiation is given as

$$\overline{N}_\gamma =\frac{2\zeta (3)V}{\pi ^2\hbar ^3c^3}k_B^3T_0^3\simeq (410.0)\,V.$$ $`(20)`$

Following this, the photon density is found to be $`(\frac{\overline{N}_\gamma }{V})\simeq 410`$ per cm<sup>3</sup>, which is in very good agreement with the estimated value of 400 found by calculating the total energy density carried by the cosmic microwave background radiation. Using Eq. (20), we have calculated the total number of photons in the present universe, which becomes

$$\overline{N}_\gamma =1.74\times 10^{88}.$$ $`(21)`$

Considering the fact that the number of nucleons $`N_n`$ in the present universe is $`6.30\times 10^{78}`$ , we obtain

$$\left(\frac{\overline{N}_\gamma }{N_n}\right)\simeq 0.28\times 10^{10}.$$ $`(22)`$

This agrees with the value $`(0.14-0.33)\times 10^{10}`$ speculated by several earlier workers following calculations on baryogenesis.
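The photon budget of Eqs. (20)-(22) can likewise be checked in cgs units. The sketch below reproduces the quoted density of roughly 410 photons per cm<sup>3</sup>, the total photon number of Eq. (21) and the ratio of Eq. (22), using the radius and nucleon number given in the text.

```python
import math

hbar = 1.0546e-27   # erg s
c = 2.998e10        # cm/s
kB = 1.3807e-16     # erg/K
zeta3 = 1.2020569   # Riemann zeta(3)

T0 = 2.728          # K
n_gamma = 2 * zeta3 / math.pi**2 * (kB * T0 / (hbar * c)) ** 3
print(f"photon density ~ {n_gamma:.0f} cm^-3")    # ~410, as in Eq. (20)

R0 = 2.16e28        # cm, the radius used in the text
V = 4 * math.pi / 3 * R0**3
N_gamma = n_gamma * V
print(f"N_gamma ~ {N_gamma:.2e}")                 # ~1.74e88, Eq. (21)

N_n = 6.30e78       # nucleon number quoted in the text
print(f"N_gamma / N_n ~ {N_gamma / N_n:.2e}")     # ~2.8e9, i.e. 0.28e10
```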
To conclude, we find that the theory developed by us recently for the evolution of the universe proves to have further success in reproducing the temperature of the cosmic background radiation correctly. Besides, it also succeeds in reproducing the photon density associated with the background radiation and the value of the ratio $`(\overline{N}_\gamma /N_n)`$, which match nicely with the results predicted by others.

TABLE - I

| Age of the universe (t) in sec | Temperature (T) in K as calculated from Eq. (15) | Temperature (T) in K for the formation of elementary particles |
| --- | --- | --- |
| 5 | $`1\times 10^9`$ | $`6\times 10^9`$ $`(e^+,e^{-})`$ |
| $`1.2\times 10^{-4}`$ | $`2.1\times 10^{11}`$ | $`1.2\times 10^{12}`$ $`(\mu ^+,\mu ^{-})`$ and their antiparticles |
| $`7\times 10^{-5}`$ | $`2.8\times 10^{11}`$ | $`1.6\times 10^{12}`$ $`(\pi ^0,\pi ^+,\pi ^{-})`$ and their antiparticles |
| $`1.5\times 10^{-6}`$ | $`1.9\times 10^{12}`$ | $`10^{13}`$ (protons, neutrons and their antiparticles) |
| $`10^{-43}`$ | $`7.3\times 10^{30}`$ | $`10^{32}`$ (Planck mass) |

References

Alan H. Guth, in Bubbles, Voids and Bumps in Time: The New Cosmology, ed. James Cornell (Cambridge University Press, Cambridge, 1989). G. Contopoulos and D. Kotsakis, Cosmology (Springer-Verlag, Heidelberg, 1987). F. L. Zhi and L. S. Xian, Creation of the Universe (World Scientific, Singapore, 1989). A. A. Penzias and R. W. Wilson, Astrophys. J. 142, 419 (1965). D. J. Fixsen, E. S. Cheng, J. M. Gales, J. C. Mather, R. A. Shafer and E. L. Wright, Astrophys. J. 473, 576 (1996); A. R. Liddle, Contemporary Physics 39, no. 2, 95 (1998). G. Gamow, Phys. Rev. 70, 572 (1946). J. V. Narlikar, The Structure of the Universe (Oxford University Press, London, 1977). D. N. Tripathy and Subodha Mishra, Int. J. Mod. Phys. D 7, 3, 431 (1998). S. Chandrasekhar, Monthly Notices Roy. Astron. Soc. 91, 456 (1931). D. N. Tripathy and Subodha Mishra, Int. J. Mod. Phys. D 7, 6, 917 (1998). P. S. Wesson, Cosmology and Geophysics (Adam Hilger Ltd, Bristol, 1978). E. R. Harrison, Cosmology (Cambridge University Press, Cambridge, 1981). D. A. McQuarrie, Statistical Mechanics (Harper & Row, New York, 1976). R. K. Pathria, Statistical Mechanics (Pergamon Press, Oxford, 1972). A. M. Boesgaard and G. Steigman, Ann. Rev. Astron. Astrophys. 23, 319 (1985). I. Affleck and M. Dine, Nucl. Phys. B249, 361 (1985).
## 1 Introduction

Geometric attempts to generalize the Yang-Mills construction to $`p`$-form gauge fields with $`p>1`$ have led to no-go results that indicate that this goal cannot be achieved while maintaining spacetime locality . In fact, self-interactions of $`p`$-form gauge fields are so constrained that one can completely list them, even if one drops any a priori geometric interpretation of the $`p`$-forms as connections for extended objects. This task was explicitly performed in , where the following question was analyzed. Consider the free action,

$$I=\int d^nx\sum_a\left(\frac{1}{2(p_a+1)!}H_{\mu _1\ldots \mu _{p_a+1}}^aH^{a\mu _1\ldots \mu _{p_a+1}}\right), \qquad (1.1)$$

for a system of (non-chiral) exterior form gauge fields $`B_{\mu _1\ldots \mu _{p_a}}^a`$ of degree $`p_a\geq 2`$. Here, the $`H^a`$'s are the "field strengths" or "curvatures",

$$H^a=\frac{1}{(p_a+1)!}H_{\mu _1\ldots \mu _{p_a+1}}^a\,dx^{\mu _1}\wedge \ldots \wedge dx^{\mu _{p_a+1}}=dB^a, \qquad (1.2)$$

$$B^a=\frac{1}{p_a!}B_{\mu _1\ldots \mu _{p_a}}^a\,dx^{\mu _1}\wedge \ldots \wedge dx^{\mu _{p_a}}. \qquad (1.3)$$

We assume throughout that the spacetime dimension satisfies the condition $`n>p_a+1`$ for each $`a`$, so that all the $`p_a`$-forms have local degrees of freedom. The action (1.1) is invariant under the abelian gauge transformations,

$$B^a\to B^a+d\mathrm{\Lambda }^a, \qquad (1.4)$$

where the $`\mathrm{\Lambda }^a`$ are arbitrary $`(p_a-1)`$-forms. The equations of motion, obtained by varying the fields $`B_{\mu _1\ldots \mu _{p_a}}^a`$, are given by

$$\partial _\rho H^{a\rho \mu _1\ldots \mu _{p_a}}=0\iff d\overline{H}^a=0, \qquad (1.5)$$

where $`\overline{H}^a`$ is the dual of $`H^{a\rho \mu _1\ldots \mu _{p_a}}`$. The question addressed in was: what are the consistent (local) interactions that can be added to the free action (1.1)? Interaction terms are said to be consistent if they preserve the number (but not necessarily the form) of the independent gauge symmetries. Of course, one can always add to (1.1) gauge-invariant interaction terms constructed out of the curvature components and their derivatives,

$$\int f(H_{\mu _1\ldots \mu _{p_k+1}}^{(k)},\partial _\nu H_{\mu _1\ldots \mu _{p_k+1}}^{(k)},\ldots ,\partial _{\nu _1\ldots \nu _q}H_{\mu _1\ldots \mu _{p_k+1}}^{(k)})\,d^nx. \qquad (1.6)$$

Being strictly gauge-invariant, these terms actually do not deform the gauge symmetries. One may, however, also search for interaction terms that deform not only the action, but also the gauge transformations. These turn out to be extremely scarce, as the following theorem indicates:

###### Theorem 1.1

Besides the obvious gauge-invariant interactions, the only consistent interaction vertices that can be added to (1.1) have the Noether form,

$$V=\sum_{(A)}g_{(A)}V_{(A)}, \qquad (1.7)$$

where the $`g_{(A)}`$ are the coupling constants and the $`V_{(A)}`$ read

$$V_{(A)}=j^{(t)}\wedge B^{(t)}. \qquad (1.8)$$

Here, the $`j^{(t)}`$ are gauge-invariant conserved $`(n-p_t)`$-forms, $`dj^{(t)}\approx 0`$, and therefore are exhausted by the exterior polynomials in the curvature forms $`H^{(k)}`$ and their duals $`\overline{H}^{(k)}`$ . Because $`j^{(t)}`$ must have form-degree exactly $`n-p_t`$, so that the form degree of the integrand of (1.8) matches the spacetime dimension $`n`$, there may be no vertex of the type (1.7) for given spacetime dimension and form-degrees of the exterior form gauge fields.
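The degree-counting constraint in Theorem 1.1, $`\deg j^{(t)}=n-p_t`$ with $`j^{(t)}`$ built from the $`H^{(a)}`$ (degree $`p_a+1`$) and the $`\overline{H}^{(a)}`$ (degree $`n-p_a-1`$), is mechanical enough to automate. The sketch below enumerates the candidate monomials for a given dimension and field content; it only checks form degrees, not the finer conditions (gauge invariance, symmetry properties of the coefficients) discussed in the text, so every name and output here is purely illustrative.

```python
from itertools import combinations_with_replacement

def noether_vertices(n, degrees, max_factors=3):
    """List monomials j ^ B^{(t)} allowed by form-degree counting alone:
    j is a product of curvatures H_p (degree p+1) and duals Hbar_p
    (degree n-p-1), and the vertex needs deg(j) = n - p_t."""
    factors = [(f"H{p}", p + 1) for p in degrees]
    factors += [(f"Hbar{p}", n - p - 1) for p in degrees]
    found = []
    for p_t in degrees:
        for r in range(1, max_factors + 1):
            for combo in combinations_with_replacement(factors, r):
                if sum(d for _, d in combo) == n - p_t:
                    found.append(" ^ ".join(name for name, _ in combo)
                                 + f" ^ B{p_t}")
    return found

# Two-forms in n = 4: the degree count leaves only Hbar ^ Hbar ^ B,
# the Freedman-Townsend-type candidate discussed below.
print(noether_vertices(4, [2]))
```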
For example, a set of $`2`$-form gauge fields admits gauge symmetry-deforming non-trivial interactions only in $`n=4`$ dimensions, and these are of the Freedman-Townsend type . Other examples of vertices of the form (1.8) involving $`p`$-form gauge fields of different form degrees are provided by the Chapline-Manton interactions . The analysis of also enabled one to exhibit new symmetry-deforming interactions, but again only in special dimensions (see also ; these interactions have been further analysed in ). In (1.8), the $`j^{(t)}`$ are exterior polynomials in $`H^{(k)}`$ and $`\overline{H}^{(k)}`$ with coefficients that can involve $`dx^\mu `$. If one imposes Lorentz invariance, bare $`dx^\mu `$'s cannot appear. Note also that if $`(n-1)`$-forms are included, an infinite number of couplings (1.8) may in general be constructed, since arbitrary powers of the duals (which are zero-forms) can appear.

The vertices (1.7) have a number of remarkable properties:

1. First, while the strictly gauge-invariant vertices may involve derivatives of the individual components $`H_{\mu _1\ldots \mu _{p_k+1}}^{(k)}`$ of the curvatures, the vertices (1.8) are very special: they can be expressed as polynomials in the exterior product ("exterior polynomials") in the (undifferentiated) forms $`B^{(k)}`$, $`H^{(k)}`$ and $`\overline{H}^{(k)}`$. This is not an extra requirement. Rather, this property follows directly from the demand that (1.7) defines a consistent interaction.

2. If the vertices (1.7) do not involve the duals $`\overline{H}^{(k)}`$, one recovers the familiar Chern-Simons terms . These are off-shell gauge-invariant up to a total derivative and so do not deform the gauge transformations. Vertices (1.7) involving the duals are only on-shell gauge-invariant up to a total derivative. These vertices do deform the gauge transformations.

3. Although the vertices (1.7) deform the gauge symmetries when they involve the duals $`\overline{H}^{(k)}`$, they do not modify the algebra of the gauge transformations (to the first order in the coupling constants considered here) because they are linear in the $`p`$-form potentials. This is in sharp contrast with the Yang-Mills construction, which yields a vertex of the form $`\overline{H}^aB^bB^c`$. There is thus no room for an analog of the Yang-Mills vertex for exterior forms of degree $`\geq 2`$. How the result is amended in the presence of $`1`$-forms will be discussed at the end.

4. The fact that the gauge transformations remain abelian to first order in the coupling constant is not in contradiction with . Indeed, we focus here only on symmetries of the equations of motion that are also symmetries of the action. Furthermore, the non-abelian structure uncovered in concerns symmetries associated with non-trivial global features of the spacetime manifold, which are rigid symmetries .

The above theorem was stated and discussed in , but a complete demonstration of it was not given. The purpose of this paper is to fill this gap. As we shall see, the proof has an interest in itself, since it illustrates various cohomologies arising in local field theory. We conclude this introduction by observing that the interaction vertices are in general not duality-invariant, in the sense that an interaction vertex that is available in one version of the theory may not be so in the dual version, where some of the $`p`$-form potentials are traded for "dual" $`(n-p-2)`$-form potentials.
## 2 Consistent interactions and local BRST cohomology

Our approach to the problem of constructing consistent interaction vertices for a gauge theory is based on the BRST symmetry. As shown in , the question boils down to computing the local BRST cohomological group at ghost number zero in the algebra of local $`n`$-forms depending on the fields, the ghosts, the antifields and their derivatives. These groups are denoted by $`H^0(s|d)`$. The cocycle condition reads

$$sa+db=0, \qquad (2.1)$$

where $`a`$ (respectively $`b`$) is a local $`n`$-form (respectively $`(n-1)`$-form) of ghost number zero (respectively one). Trivial solutions of (2.1) are of the form

$$a=sm+dn, \qquad (2.2)$$

where $`m`$ (respectively $`n`$) is a local $`n`$-form (respectively $`(n-1)`$-form) of ghost number $`-1`$ (respectively $`0`$). One often refers to (2.1) as the "Wess-Zumino consistency condition" . If $`a`$ is a solution of (2.1), its antifield-independent part defines a consistent interaction; and conversely, given a consistent interaction, one can complete it by antifield-dependent terms to get a BRST cocycle (2.1). As explained in , it is necessary to include the antifields in the analysis of the cohomology in order to cover symmetry-deforming interactions. In the case at hand, the gauge symmetries are reducible and the following set of antifields is required ,

$$B^{*a\mu _1\ldots \mu _{p_a}},\;B^{*a\mu _1\ldots \mu _{p_a-1}},\;\ldots ,\;B^{*a\mu _1},\;B^{*a}. \qquad (2.3)$$

The Grassmann parity and the antighost number of the antifields $`B^{*a\mu _1\ldots \mu _{p_a}}`$ associated with the fields $`B_{\mu _1\ldots \mu _{p_a}}^a`$ are equal to $`1`$. The Grassmann parity and the antighost number of the other antifields are determined according to the following rule: as one moves from one term to the next one to its right in (2.3), the Grassmann parity changes and the antighost number increases by one unit. Therefore the parity and the antighost number of a given antifield $`B^{*a\mu _1\ldots \mu _{p_a-j}}`$ are respectively $`j+1`$ modulo $`2`$ and $`j+1`$. Reducibility also imposes the following set of ghosts,

$$C_{\mu _1\ldots \mu _{p_a-1}}^a,\;\ldots ,\;C_{\mu _1\ldots \mu _{p_a-j}}^a,\;\ldots ,\;C^a. \qquad (2.4)$$

These ghosts carry a degree called the pure ghost number. The pure ghost number of $`C_{\mu _1\ldots \mu _{p_a-1}}^a`$ and its Grassmann parity are equal to 1. As one moves from one term to the next one to its right in (2.4), the Grassmann parity changes and the pure ghost number increases by one unit, up to $`p_a`$. We denote by $`𝒫`$ the algebra of spacetime forms with coefficients that are polynomials in the fields, antifields, ghosts and their derivatives.
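The grading rules stated below Eqs. (2.3)-(2.4) can be tabulated mechanically. The following sketch lists the antifield and ghost towers of a single $`p`$-form with their antighost (respectively pure ghost) numbers and Grassmann parities, exactly as given by those rules; the labels are ad hoc bookkeeping names, not notation from the paper.

```python
def antifield_tower(p):
    """Antifields and ghosts of a p-form gauge field, following the rules
    below Eqs. (2.3)-(2.4): B* with p-j indices has antighost number j+1
    and parity (j+1) mod 2; the ghost C with p-j indices has pure ghost
    number j and parity j mod 2 (the field itself being even)."""
    antifields = [(f"B*[{p - j} indices]", j + 1, (j + 1) % 2)
                  for j in range(p + 1)]
    ghosts = [(f"C[{p - j} indices]", j, j % 2) for j in range(1, p + 1)]
    return antifields, ghosts

afs, ghs = antifield_tower(2)   # the tower of a 2-form gauge field
for name, agh, par in afs:
    print(f"{name}: antighost {agh}, parity {par}")
for name, pgh, par in ghs:
    print(f"{name}: pureghost {pgh}, parity {par}")
```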
The action of $`s`$ in $`𝒫`$ is the sum of two parts, namely the "Koszul-Tate differential" $`\delta `$ and the "longitudinal exterior derivative" $`\gamma `$:

$$s=\delta +\gamma , \qquad (2.5)$$

where we have

$$\delta B_{\mu _1\ldots \mu _{p_a}}^a=0, \qquad (2.6)$$

$$\delta C_{\mu _1\ldots \mu _{p_a-j}}^a=0, \qquad (2.7)$$

$$\delta \overline{B}_1^a+d\overline{H}^a=0,\quad \delta \overline{B}_2^a+d\overline{B}_1^a=0,\quad \ldots ,\quad \delta \overline{B}_{p_a+1}^a+d\overline{B}_{p_a}^a=0, \qquad (2.8)$$

and

$$\gamma B^{*a\mu _1\ldots \mu _{p_a+1-j}}=0, \qquad (2.9)$$

$$\gamma B^a+dC_1^a=0, \qquad (2.10)$$

$$\gamma C_1^a+dC_2^a=0,\quad \ldots ,\quad \gamma C_{p_a-1}^a+dC_{p_a}^a=0, \qquad (2.11)-(2.12)$$

$$\gamma C_{p_a}^a=0. \qquad (2.13)$$

In the above equations, $`C_j^a`$ is the $`(p_a-j)`$-form whose components are $`C_{\mu _1\ldots \mu _{p_a-j}}^a`$, and $`\overline{B}_j^a`$ denotes the dual of the antifield of antighost number $`j`$. Furthermore, we have systematically denoted (as above) the duals by an overline to avoid confusion with the $`*`$-notation of the antifields. The actions of $`\delta `$ and $`\gamma `$ on the individual components of the antifields (2.3), ghosts (2.4) and their derivatives are easily read off from the above formulas (recalling that $`\delta (dx^\mu )=\gamma (dx^\mu )=0`$, $`[\partial _\mu ,\delta ]=0`$ and $`[\partial _\mu ,\gamma ]=0`$).

## 3 General procedure for working out the BRST cohomology

In order to prove the theorem, we shall solve the BRST cocycle condition by proceeding as in the Yang-Mills case . To that end, one expands the cocycles and the cocycle condition according to the antighost number. Thus, if $`a`$ is a BRST cocycle (modulo $`d`$), then the various components in its expansion,

$$a=a_0+a_1+a_2+\ldots +a_k,\qquad antigh(a_i)=i, \qquad (3.1)$$

must fulfill the chain of equations

$$\gamma a_0+\delta a_1+db_0=0,\quad \ldots ,\quad \gamma a_{k-1}+\delta a_k+db_{k-1}=0, \qquad (3.2)-(3.3)$$

$$\gamma a_k+db_k=0. \qquad (3.4)$$

The last equation in this chain no longer involves the differential $`\delta `$ and can be easily solved. The idea, then, is to start the resolution of the cocycle condition from $`a_k`$ and to work one's way up until one reaches $`a_0`$, which is the quantity of physical interest. [Recall that $`a_0`$ defines a consistent deformation of the Lagrangian. And conversely, if $`a_0`$ is a consistent deformation of the Lagrangian, then one may complete it by terms of positive antighost number, as in (3.1), so as to construct a BRST cocycle $`a`$. Furthermore, trivial BRST cocycles (in the cohomological sense) correspond to trivial deformations, i.e., deformations that can be absorbed through redefinitions of the field variables .] The reconstruction of the cocycle $`a`$ from $`a_0`$ stops at some finite antighost number $`k`$ because $`a_0`$ is polynomial in the derivatives (see the argument in section 3). Before doing this, we shall introduce some useful notations and give a few solutions. In the analysis of the BRST cohomology, it turns out that two combinations of the fields and antifields play a central rôle.
The first one combines the field strengths and the duals of the antifields and is denoted $`\stackrel{~}{H}^a`$:

$$\stackrel{~}{H}^a=\overline{H}^a+\sum_{j=1}^{p_a+1}\overline{B}_j^a. \qquad (3.5)$$

The second one combines the $`p_a`$-forms and their associated ghosts and is denoted $`\stackrel{~}{B}^a`$:

$$\stackrel{~}{B}^a=B^a+C_1^a+\ldots +C_{p_a}^a. \qquad (3.6)$$

It is easy to see that both $`\stackrel{~}{H}^a`$ and $`\stackrel{~}{B}^a`$ have a definite Grassmann parity, respectively given by $`n-p_a+1`$ and $`p_a`$ modulo $`2`$. On the other hand, exterior products of $`\stackrel{~}{H}^a`$ or $`\stackrel{~}{B}^a`$ (including the $`\stackrel{~}{H}^a`$ and $`\stackrel{~}{B}^a`$ themselves) are not homogeneous in form degree and ghost number. To isolate a component of given form degree $`k`$ and ghost number $`g`$, we enclose the product in brackets $`[\ldots ]^{k,g}`$. The component of $`[A]^{k,g}`$ which has definite antighost number $`l`$ is denoted $`[A]_l^{k,g}`$. Since products of $`\stackrel{~}{B}^a`$ appear very frequently in the rest of the analysis, we introduce the following notations,

$$𝒬^{a_1\ldots a_m}=\stackrel{~}{B}^{a_1}\ldots \stackrel{~}{B}^{a_m}\qquad \text{and}\qquad 𝒬_{k,g}^{a_1\ldots a_m}=[\stackrel{~}{B}^{a_1}\ldots \stackrel{~}{B}^{a_m}]^{k,g}. \qquad (3.7)$$

We shall not write the wedge product explicitly from now on ($`dx^0dx^1`$ can clearly only mean $`dx^0\wedge dx^1`$). We also define the three "mixed operators" $`\mathrm{\Delta }=\delta +d`$, $`\stackrel{~}{\gamma }=\gamma +d`$ and $`\stackrel{~}{s}=s+d`$. Using those definitions we have the following relations:

$$\mathrm{\Delta }\stackrel{~}{H}^a=0,\quad \mathrm{\Delta }\stackrel{~}{B}^a=0,\quad \mathrm{\Delta }H^a=0, \qquad (3.8)$$

$$\stackrel{~}{\gamma }\stackrel{~}{H}^a=0,\quad \stackrel{~}{\gamma }\stackrel{~}{B}^a=H^a,\quad \stackrel{~}{\gamma }H^a=0, \qquad (3.9)$$

$$\stackrel{~}{s}\stackrel{~}{H}^a=0,\quad \stackrel{~}{s}\stackrel{~}{B}^a=H^a,\quad \stackrel{~}{s}H^a=0. \qquad (3.10)$$

The equation $`\stackrel{~}{\gamma }\stackrel{~}{B}^a=H^a`$ is known in the literature as the "horizontality condition" . It is easy to construct solutions of the Wess-Zumino consistency condition out of the variables $`H^a,\stackrel{~}{H}^a,\stackrel{~}{B}^a`$. For example, in ghost number zero,

$$a^{n,0}=[P_b(H^a,\stackrel{~}{H}^a)\stackrel{~}{B}^b]^{n,0} \qquad (3.11)$$

is a solution of (2.1). This can be seen by applying $`\stackrel{~}{s}`$ to $`P_b(H^a,\stackrel{~}{H}^a)\stackrel{~}{B}^b`$. One gets $`\stackrel{~}{s}(P_b\stackrel{~}{B}^b)=(-)^{ϵ_P}P_b(\stackrel{~}{s}\stackrel{~}{B}^b)=(-)^{ϵ_P}P_bH^b`$ and thus $`s[P_b\stackrel{~}{B}^b]^{n,0}+d[P_b\stackrel{~}{B}^b]^{n-1,1}=[\stackrel{~}{s}(P_b\stackrel{~}{B}^b)]^{n,1}=[P_bH^b]^{n,1}=0`$ (no ghost occurs in $`P_bH^b`$). We shall prove in this article the remarkable property that all antifield-dependent solutions of the Wess-Zumino consistency condition in ghost number $`0`$ are in fact of the form (3.11) (modulo antifield-independent terms). According to the discussion at the beginning of Section 2, this is equivalent to proving Theorem 1.1, since $`a_0^{n,0}=[P_b(H^a,\stackrel{~}{H}^a)\stackrel{~}{B}^b]_0^{n,0}=P_b(H^a,\overline{H}^a)B^b`$ is of the required form.
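The bracket notation $`[\ldots ]^{k,g}`$ and the objects $`𝒬_{k,g}^{a_1\ldots a_m}`$ amount to simple bookkeeping of bidegrees. The sketch below implements that bookkeeping for the product of two $`\stackrel{~}{B}`$'s; signs and the wedge structure are deliberately ignored, as it only tracks (form degree, ghost number), and the component labels are made up for illustration.

```python
from itertools import product
from collections import defaultdict

def tilde_B(p):
    """~B for a p-form: components of bidegree (form degree k, ghost g)
    with k + g = p, i.e. B (k=p, g=0), C_1 (k=p-1, g=1), ..., C_p (k=0, g=p).
    Represented as a dict {(k, g): label}."""
    return {(p - g, g): (f"B{p}" if g == 0 else f"C{g}^({p})")
            for g in range(p + 1)}

def wedge(f1, f2):
    """Product of two inhomogeneous forms: form degrees and ghost
    numbers simply add.  Returns {(k, g): [factor pairs]}."""
    out = defaultdict(list)
    for (k1, g1), (k2, g2) in product(f1, f2):
        out[(k1 + k2, g1 + g2)].append((f1[(k1, g1)], f2[(k2, g2)]))
    return out

# Components Q^{ab}_{k,g} of ~B^a ~B^b for two 2-forms: e.g. the (4,0)
# component is B B, the (3,1) component collects B C_1 and C_1 B, etc.
Q = wedge(tilde_B(2), tilde_B(2))
for (k, g) in sorted(Q):
    print(f"[~B ~B]^({k},{g}) = {Q[(k, g)]}")
```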
## 4 Some useful lemmas

In order to construct the general solution of the (mod $`d`$) BRST cocycle condition along the lines indicated in the previous section, we shall need a few lemmas.

###### Lemma 4.1

Let $`a_k`$ be a solution of $`\gamma a_k+db_k=0`$, with non-vanishing antighost number $`k`$. Then one has $`a_k=a_k^{\prime }+\gamma m_k+dn_k`$, where $`a_k^{\prime }`$ is annihilated by $`\gamma `$: $`\gamma a_k^{\prime }=0`$.

Proof: The proof proceeds as in the Yang-Mills case : one analyses the descent equation associated with $`\gamma a_k+db_k=0`$. In we have listed all the non-trivial descents without taking into account the antifields. However, the results are unchanged even if one includes the antifields, since their contributions to non-trivial descents can always be absorbed by trivial terms (the proof of this statement is identical to the one in the Yang-Mills case ). Therefore, if $`a_k`$ involves the antifields, the descent associated with it is necessarily trivial, so that one can find a different representative $`a_k^{\prime }`$ in the same class of $`H(\gamma |d)`$ as $`a_k`$ which is annihilated by $`\gamma `$. $`\square `$

###### Lemma 4.2

The general solution of $`\gamma a_k=0`$ is given by

$$a_k=\sum_IP_k^I\omega ^I+\gamma c_k, \qquad (4.1)$$

where the $`\omega ^I`$ are polynomials in the undifferentiated "last" ghosts of ghosts $`C_{p_a}^a`$ and the $`P_k^I`$ are spacetime $`n`$-forms with coefficients that are polynomials in the field strengths, their derivatives, the antifields and their derivatives (these variables will be denoted $`\chi `$ in the sequel).

Proof: The proof of this lemma is quite standard. One redefines the variables into three sets obeying respectively $`\gamma x^i=0`$, $`\gamma y^\alpha =z^\alpha `$, $`\gamma z^\alpha =0`$. The variables $`y^\alpha `$ and $`z^\alpha `$ form "contractible pairs" and the cohomology is then generated by the (independent) variables $`x^i`$. In our case, the $`x^i`$ are given by $`dx^\mu `$, the field strength components, the antifields and their derivatives, as well as the last (undifferentiated) ghosts of ghosts. A complete proof of the lemma in the absence of antifields can be found in . Here we simply note that the antifields are automatically part of the $`x^i`$ variables, since they are all $`\gamma `$-closed and do not appear in the $`\gamma `$-variations. $`\square `$

Using the conventions (3.7) and dropping the trivial term, we can write the cocycle (4.1) as $`a_k=\sum_mP_k^{a_1\ldots a_m}[\stackrel{~}{B}^{a_1}\ldots \stackrel{~}{B}^{a_m}]^{0,l}=\sum_mP_k^{a_1\ldots a_m}𝒬_{0,l}^{a_1\ldots a_m}`$, with $`l=\sum_mp_{a_m}`$.

###### Lemma 4.3

Let $`\alpha `$ be an antifield-independent $`\gamma `$-cocycle that takes the form

$$\alpha =R_1(H^{a_r},C_{p_{a_r}}^{a_r})R_2(H^{b_s},C_{p_{b_s}}^{b_s}),\qquad p_{b_s}>p_{a_r}, \qquad (4.2)$$

where $`R_1`$ (respectively $`R_2`$) is an exterior polynomial in the curvature forms $`H^{a_r}`$ (respectively $`H^{b_s}`$) and the last ghosts of ghosts $`C_{p_{a_r}}^{a_r}`$ (respectively $`C_{p_{b_s}}^{b_s}`$), such that $`p_{b_s}>p_{a_r}`$. Assume that $`R_1`$ contains no constant term and is trivial in $`H(\gamma |d)`$,

$$R_1=\gamma U_1+dV_1. \qquad (4.3)$$

Then $`\alpha `$ is also trivial in $`H(\gamma |d)`$.

Proof: This result was proved in . Since $`R_1`$ is trivial, it is the obstruction to the lift of a $`\gamma `$-cocycle $`\beta _1`$ through the descent equations of $`H(\gamma |d)`$. Because of the condition $`p_{b_s}>p_{a_r}`$, $`\alpha `$ then also appears as the obstruction to the lift of the $`\gamma `$-cocycle $`\beta _1R_2`$, indicating that $`\alpha `$ is trivial in $`H(\gamma |d)`$. $`\square `$

The lemma applies in particular when $`R_1`$ is an arbitrary polynomial of degree $`>0`$ in the curvatures $`H^{a_r}`$.
###### Lemma 4.4

Let $`a`$ be a cochain with form-degree $`p`$ and ghost number $`g`$, $`a\equiv [a]^{p,g}`$, and let $`a=a_0+\ldots +a_k`$ be its expansion according to the antighost number, $`a_i=[a]_i^{p,g}`$. Assume that the last term $`a_k`$ takes the form $`a_k=[P]_k^{q,-k}\chi `$, where $`P`$ is an exterior polynomial in $`\stackrel{~}{H}`$ and $`H`$ and where $`\chi \equiv \chi ^{p-q,k+g}`$ is an exterior polynomial in $`H`$ and $`C_{p_a}^a`$ which is trivial in $`H(\gamma |d)`$, $`\chi (H,C)=\gamma m+dn`$. Then one can redefine $`a_k`$ away by adding $`s`$-exact terms modulo $`d`$ to $`a`$,

$$a=su+dv+\text{ terms of antighost number }<k. \qquad (4.4)$$

Proof: One has $`P(\stackrel{~}{H},H)=[P]_0^{q-k,0}+\ldots +[P]_k^{q,-k}+\ldots +[P]_{n-q+k}^{n,q-k-n}`$ and $`\stackrel{~}{s}P=0`$. One also has, by assumption, $`\chi \equiv \chi ^{p-q,k+g}=\gamma m^{p-q,k+g-1}+dm^{p-q-1,k+g}`$ with $`m^{p-q,k+g-1}\equiv m`$ and $`m^{p-q-1,k+g}\equiv n`$. If we define the $`m^{i,j}`$ $`(i<p-q-1)`$ through the descent equations $`\gamma m^{p-q-1,k+g}+dm^{p-q-2,k+g+1}=0,\ldots `$ and set $`\stackrel{~}{m}=m^{p-q,k+g-1}+m^{p-q-1,k+g}+m^{p-q-2,k+g+1}+\ldots +m^{0,k+g+p-q-1}`$, one gets $`\chi ^{p-q,k+g}=\stackrel{~}{\gamma }\stackrel{~}{m}-dm^{p-q,k+g-1}=\stackrel{~}{s}\stackrel{~}{m}-dm^{p-q,k+g-1}`$. Thus, $`\stackrel{~}{s}((-1)^{ϵ_P}P\stackrel{~}{m})=a_k-Pdm^{p-q,k+g-1}`$. If we project this equation on the form degree $`p`$ of $`a_k`$, one finds the equation

$$su^{p,g-1}+du^{p-1,g}=a_k-[P]_{k-1}^{q-1,-k+1}dm^{p-q,k+g-1}, \qquad (4.5)$$

where we have set $`u^{p,g-1}\equiv [(-1)^{ϵ_P}P\stackrel{~}{m}]^{p,g-1}`$ and $`u^{p-1,g}\equiv [(-1)^{ϵ_P}P\stackrel{~}{m}]^{p-1,g}`$. Thus,

$$a_k=su^{p,g-1}+du^{p-1,g}+\text{ terms of antighost number }<k, \qquad (4.6)$$

which is the desired result. $`\square `$

## 5 Proof of the theorem

We now have all the necessary tools required to solve the Wess-Zumino consistency condition (2.1). Consider first the case where the expansion of $`a`$ (which has total ghost number $`0`$) reduces to $`a_0`$ (no antifields). Then $`a\equiv a_0`$ fulfills $`\gamma a_0+db_0=0`$. This equation was investigated in detail in , where it was shown that it has only two types of solutions: those for which one can assume that $`b_0=0`$, which are the strictly gauge-invariant terms; and those for which no redefinition yields $`b_0=0`$ ("semi-invariant terms"), which are exhausted by the Chern-Simons terms. Both types of solutions preserve the form of the gauge symmetries and are in agreement with the theorem; we can thus turn to the case where $`a`$ involves the antifields, $`k\neq 0`$.

By Lemma 4.1, one can assume that the last term $`a_k`$ in the expansion of $`a`$ is annihilated by $`\gamma `$. Indeed, the (allowed) redefinition $`a\to a-sm_k-dn_k`$ (see Lemma 4.1) enables one to do so. Then the next-to-last equation in the chain (3.2)-(3.3) implies $`d\gamma b_{k-1}=0`$, i.e., by the algebraic Poincaré lemma, $`\gamma b_{k-1}+dc_{k-1}=0`$ for some $`c_{k-1}`$ (the cohomology of $`d`$ is trivial in form-degree $`n-1`$). Now two cases must be considered. Either $`k>1`$, in which case Lemma 4.1 again implies that one can assume $`\gamma b_{k-1}=0`$ through redefinitions. Or $`k=1`$, in which case $`b_{k-1}\equiv b_0`$ does not involve the antifields and may lead to a non-trivial descent. This second possibility arises only if $`H(\gamma )`$ does not vanish in pure ghost number one, since $`a_k\equiv a_1`$ must be a non-trivial element of $`H^k(\gamma )`$ or else can be eliminated through a redefinition. In the absence of $`1`$-forms, $`H^1(\gamma )`$ vanishes (Lemma 4.2), so we can assume $`k>1`$.
The case $`k=1`$ will be discussed in section 6, where we allow for the presence of $`1`$-forms. If $`k>1`$, one can expand the elements $`a_k`$ and $`b_{k-1}`$ according to Lemma 4.2,

$$a_k=P_k^I\omega ^I,\qquad b_{k-1}=Q_{k-1}^I\omega ^I \qquad (5.1)$$

($`\gamma `$-trivial terms can be eliminated). The next-to-last equation in the chain (3.2)-(3.3) then implies

$$\delta P_k^I+dQ_{k-1}^I=0, \qquad (5.2)$$

which indicates that $`P_k^I`$ is a cocycle of the cohomology $`H(\delta |d)`$. This cohomology, which is related to the so-called invariant characteristic cohomology, was completely worked out in . It was shown there that all its representatives can be written as the $`[\ldots ]^{n,-k}`$ component of an exterior polynomial in $`H^a`$ and $`\stackrel{~}{H}^a`$,

$$P_k^I=[P^I(H^a,\stackrel{~}{H}^a)]^{n,-k},\qquad (k>1). \qquad (5.3)$$

It is because of this property that antifield-dependent solutions of the Wess-Zumino consistency condition, which belong a priori to the algebra generated by all the variables and their individual, successive derivatives, turn out to be expressible in terms of the forms $`H^a`$, $`\stackrel{~}{H}^a`$ and $`B^a`$ only. Relation (5.3) implies that the term $`a_k`$ of highest antighost number in the expansion of $`a`$ is, up to trivial terms, of the form

$$a_k=[P^I(H^a,\stackrel{~}{H}^a)]^{n,-k}\omega ^I, \qquad (5.4)$$

where the pure ghost number of the $`\omega ^I`$ must be equal to $`k`$ in order to obtain a BRST cocycle in ghost number $`0`$. The question is now: can we construct from the known highest-order component $`a_k`$ the components $`a_j`$ of lower antighost numbers, in order to obtain a solution of the Wess-Zumino consistency condition? As we have seen in Section 3, this is always possible when the $`\omega ^I`$ are linear in the ghosts of ghosts, and the resulting BRST cocycle is then given by (3.11). We are now going to show that when the $`\omega ^I`$ in $`a_k`$ are at least quadratic in the ghosts of ghosts, one encounters an obstruction in the construction of the corresponding solution of the Wess-Zumino consistency condition.

To proceed, we exhibit explicitly in $`a_k`$ the $`\stackrel{~}{B}^a`$ which correspond to the forms of lowest degree occurring in $`a_k`$ and denote them by $`\stackrel{~}{B}_1^{a_i}`$. The form degree in question is called $`p`$. The other $`\stackrel{~}{B}^a`$ are denoted $`\stackrel{~}{B}_2^{b_j}`$. Thus we write $`a_k`$ as

$$a_k=[P_{a_1\ldots a_rb_1\ldots b_s}]^{n,-k}[\stackrel{~}{B}_1^{a_1}\ldots \stackrel{~}{B}_1^{a_r}\stackrel{~}{B}_2^{b_1}\ldots \stackrel{~}{B}_2^{b_s}]^{0,k}. \qquad (5.5)$$

Of course, $`k>p`$ ($`a_k`$ is at least quadratic in the $`\stackrel{~}{B}`$). In fact, $`k>p+1`$, since there is no $`1`$-form in the problem. A direct calculation then shows that the equations $`\gamma a_j+\delta a_{j+1}+db_j=0`$ determining $`a_{k-1},a_{k-2},\ldots `$ have a solution up to $`a_{k-p}`$. These solutions are

$$a_{k-j}=[P_{a_1\ldots a_rb_1\ldots b_s}]^{n-j,-k+j}[\stackrel{~}{B}_1^{a_1}\ldots \stackrel{~}{B}_1^{a_r}\stackrel{~}{B}_2^{b_1}\ldots \stackrel{~}{B}_2^{b_s}]^{j,k-j},\qquad \text{for }0\leq j\leq p. \qquad (5.6)$$

Unless $`a_k`$ is trivial (i.e., can be removed by the addition of exact terms to $`a`$), there is however an obstruction in the construction of $`a_{k-p-1}`$. To discuss this obstruction, one needs to know the ambiguity in the $`a_{k-j}`$ ($`0\leq j\leq p`$).
One easily verifies that it is given by $`a_{k-j}\to a_{k-j}+m_0+m_1+\mathrm{}+m_{j-1}`$, where $`m_0`$ satisfies $`\gamma m_0=0`$; $`m_1`$ satisfies $`\gamma m_1+\delta n_1+db_1=0,\gamma n_1=0`$; $`m_2`$ satisfies $`\gamma m_2+\delta n_2+db_2=0,\gamma n_2+\delta l_2+dc_2=0,\gamma l_2=0`$; etc. However, none of these ambiguities except $`m_0`$ in $`a_{k-p}`$ can play a role in the construction of a non-trivial solution. To see this, we note that $`\delta `$, $`\gamma `$ and $`d`$ conserve the polynomial degree of the variables of any given sector (by “sector” we mean the variables corresponding to a given $`p`$-form and its associated antifields and ghosts). We can therefore work at fixed polynomial degree in the variables of all the different $`p`$-forms. Since $`n_1`$, $`l_2`$, etc. are $`\gamma `$-closed terms which can be lifted at least once, they have the generic form $`R[H,\stackrel{~}{H}]𝒬`$, where $`𝒬`$ has to contain a ghost of ghost of degree $`p_A<p`$. Because we work at fixed polynomial degree, the presence of such terms implies that $`P_{a_1\mathrm{}a_rb_1\mathrm{}b_s}`$ has to depend on $`H^A`$ (a dependence on $`\stackrel{~}{H}^A`$ is not possible since by assumption $`k>p`$). However, $`a_k`$ is then of the form described in Lemma 4.4 and can be eliminated from $`a`$ by the addition of trivial terms and the redefinition of the terms of antighost numbers $`<k`$. Therefore we may now assume that $`a_k`$ does not contain $`H^A`$ and that the only ambiguity in the definitions of the $`a_{k-j}`$ is $`m_0`$ in $`a_{k-p}`$. Since $`k>p`$, we have to substitute $`a_{k-p}`$ in the equation $`\gamma a_{k-p-1}+\delta a_{k-p}+db_{k-p-1}=0`$. We then get $$\gamma a_{k-p-1}+\delta [P_{a_1\mathrm{}a_rb_1\mathrm{}b_s}]^{n-p,-k+p}[\stackrel{~}{B}_1^{a_1}\mathrm{}\stackrel{~}{B}_1^{a_r}\stackrel{~}{B}_2^{b_1}\mathrm{}\stackrel{~}{B}_2^{b_s}]^{p,k-p}+\delta m_0+db_{k-p-1}=0,$$ (5.7,5.8) which can be written as $$\gamma a_{k-p-1}^{}+db_{k-p-1}^{}+\delta m_0+(-1)^{ϵ_P}r[P_{a_1\mathrm{}a_rb_1\mathrm{}b_s}]^{n-p-1,-k+p+1}H_1^{a_1}𝒬_{0,k-p}^{a_2\mathrm{}a_rb_1\mathrm{}b_s}=0.$$ (5.9) By acting with $`\gamma `$ on the above equation, we obtain $`d\gamma b_{k-p-1}^{}=0`$ and hence $`\gamma b_{k-p-1}^{}+db_{k-p-1}^{\prime \prime }=0`$, which means that $`b_{k-p-1}^{}`$ is a $`\gamma `$ mod $`d`$ cocycle. Because we have excluded $`1`$-forms from the discussion, $`k-p-1>0`$, so that we may assume that $`b_{k-p-1}^{}`$ is strictly annihilated by $`\gamma `$. Accordingly, $`db_{k-p-1}^{}=[d\beta _{a_2\mathrm{}a_rb_1\mathrm{}b_s}(\chi )]𝒬_{0,g+q-p}^{a_2\mathrm{}a_rb_1\mathrm{}b_s}+\gamma l_{0,k-p-1}^n`$. Equation (5.9) then reads $$(-1)^{ϵ_P}r[P_{a_1\mathrm{}a_rb_1\mathrm{}b_s}]^{n-p-1,-k+p+1}H_1^{a_1}+\delta \alpha _{a_2\mathrm{}a_rb_1\mathrm{}b_s}(\chi )+d\beta _{a_2\mathrm{}a_rb_1\mathrm{}b_s}(\chi )=0,$$ (5.10) where we have set $`m_0=\alpha _{a_2\mathrm{}a_rb_1\mathrm{}b_s}(\chi )𝒬_{0,k-p}^{a_2\mathrm{}a_rb_1\mathrm{}b_s}`$. Eq. (5.10) implies $$[P_{a_1\mathrm{}a_rb_1\mathrm{}b_s}]^{n-p-1,-k+p+1}H_1^{a_1}=0,$$ (5.11) since $`\delta `$ and $`d`$ both increase the number of derivatives of the $`\chi `$. Let us first note that $`P_{a_1\mathrm{}a_rb_1\mathrm{}b_s}`$ cannot depend on $`\stackrel{~}{H}_1^c`$, because in that case we would have $`k-p-1\le 0`$, which contradicts our assumption that there is no $`1`$-form (indeed, the component of form-degree $`n`$ of a polynomial in $`H^a`$ and $`\stackrel{~}{H}^a`$ which depends on $`\stackrel{~}{H}_1^c`$ has maximum antighost number $`p+1`$).
Therefore, $`P_{a_1\mathrm{}a_rb_1\mathrm{}b_s}`$ will satisfy (5.11) only if it is of the form $`P_{a_1\mathrm{}a_rb_1\mathrm{}b_s}=R_{ca_1\mathrm{}a_rb_1\mathrm{}b_s}H_1^c`$, with $`R_{ca_1\mathrm{}a_rb_1\mathrm{}b_s}`$ symmetric in $`c`$ and $`a_1`$ (resp. antisymmetric) if $`H_1`$ is anticommuting (resp. commuting). However, using Lemma 4.4 we conclude once more that in that case $`a_k`$ can be absorbed by the addition of trivial terms and a redefinition of the components of lower antighost number of $`a`$. This ends our proof of the statement that for a system of $`p`$-forms with $`p\ge 2`$ all the antifield-dependent solutions of the Wess-Zumino consistency condition in ghost number $`0`$ are of the form (3.11). ## 6 Presence of $`1`$-forms If $`1`$-forms are present in the system of $`p`$-forms considered, the solutions in Theorem 1.1 are still valid. However, new solutions of the Wess-Zumino consistency condition appear, so the list is no longer exhaustive. The first set of new solutions, related to the Noether conserved currents of the theory, arises because $`H^1(\gamma )`$ no longer vanishes. Although the term $`b_{k-1}\equiv b_0`$ which appears in (5.1) may lead to a non-trivial descent, one can show that (5.2) still holds, so that $`P^I\equiv P^a`$ has to be an element of $`H_1^n(\delta |d)`$. This cohomology is isomorphic to the set $`\{a^\mathrm{\Delta }\}`$ of non-trivial global symmetries of the theory. The corresponding solutions of the Wess-Zumino consistency condition can then be written as $$a=k_\mathrm{\Delta }^a(j^\mathrm{\Delta }B_1^a+a^\mathrm{\Delta }C_1^a),$$ (6.1) where the $`j^\mathrm{\Delta }`$ are the Noether currents corresponding to the $`a^\mathrm{\Delta }`$ and satisfy $`\delta a^\mathrm{\Delta }+dj^\mathrm{\Delta }=0`$. The dimension of this set of solutions is infinite, since one can construct infinitely many conserved currents $`j^\mathrm{\Delta }`$ . This feature is characteristic of free lagrangians. Although these solutions define consistent interactions to first order in the deformation parameter, it is expected that most of them are obstructed at second order. Furthermore, they are severely constrained by Lorentz invariance. The second set of new solutions of the Wess-Zumino consistency condition arises because the condition $`k-p-1>0`$ under (5.9) may no longer hold. Indeed, if $`p=1`$ and $`k=2`$, then we have $`k-p-1=0`$. As above, the term $`b_{k-p-1}^{}\equiv b_0^{}`$ appearing in (5.9) may now lead to a non-trivial descent in $`H(\gamma |d)`$. According to the analysis of , equation (5.11) is then replaced by $$(-1)^{ϵ_P}r[P_{a_1\mathrm{}a_rb_1\mathrm{}b_s}(H^a,\stackrel{~}{H}^a)]_0^{n-2}H_1^{a_1}+V_{a_2\mathrm{}a_rb_1\mathrm{}b_s}(H^a)=0.$$ (6.2) The only solution of the above equation for $`P^I`$ is $`P^I\propto k_{abc}\stackrel{~}{H}_1^a`$, with $`k_{abc}`$ completely antisymmetric . The corresponding BRST cocycles are given by $$a=k_{abc}[\stackrel{~}{H}_1^a\stackrel{~}{B}_1^b\stackrel{~}{B}_1^c]_0^n.$$ (6.3) They give rise to the famous Yang-Mills vertex, since $`a_0=k_{abc}\overline{H}_1^aB_1^bB_1^c`$. In particular, the above discussion confirms that it is not possible to construct a Lagrangian with coloured $`p`$-forms ($`p>1`$), since vertices of the form $`a_0\sim \overline{H}BA`$ (where $`A`$ is a $`1`$-form potential) do not exist. This fact is well appreciated in the literature. ## 7 Comments and conclusions In this paper we have provided the complete proof of the Theorem given in on the consistent deformations of non-chiral free $`p`$-forms.
The same techniques can be used to study solutions of the Wess-Zumino consistency condition at other ghost numbers (e.g., candidate anomalies) . For instance, one can show that if all the exterior gauge fields have form degree $`3`$, Theorem 1.1 is also valid for candidate anomalies (the gauge potential being replaced by the corresponding ghosts of pure ghost number $`1`$). The same methods have also been extended recently to cover chiral $`p`$-forms . ## 8 Acknowledgements This work is supported in part by the “Actions de Recherche Concertées” of the “Direction de la Recherche Scientifique - Communauté Française de Belgique”, by IISN - Belgium (convention 4.4505.86) and by Proyectos FONDECYT 1970151 and 7960001 (Chile). Bernard Knaepen is supported by a post-doc grant from the “Wiener-Anspach” foundation.
# The case for cosmic acceleration: Inhomogeneity versus cosmological constant

Proceedings of the Spanish Relativity Meeting, Bilbao, 1999

## 1 Introduction Assuming the Cosmological Principle, and hence the Friedmann (FLRW) models, to be valid as global models of the Universe, astronomers have for several decades attempted to use the Hubble diagram of some predefined standard candle to place constraints on the free parameters of the FLRW models, by comparing the observed redshift-luminosity distance relation at very low redshift (or alternatively the redshift-magnitude one) with that predicted in FLRW models with different values of the deceleration parameter $`q_0`$. Last year, two independent groups , by using type Ia Supernovae as standard candles without evolution effects (but see recently ), were able to extend the Hubble diagram of luminosity distance versus redshift out to a redshift of $`z\lesssim 1`$, implementing a generalized K-correction. The main conclusion of these works is that the deceleration parameter at the present cosmic time, $`q_0`$, is negative, i.e., that the cosmic expansion is accelerating. As is customary, they interpret this conclusion in the framework of the Friedmann (FLRW) models with cosmological constant, $`\mathrm{\Lambda }`$, in which a necessary and sufficient condition for cosmic acceleration, if the weak energy condition holds, is that $`\mathrm{\Lambda }`$ is positive. The cosmological constant $`\mathrm{\Lambda }`$ had earlier been reinterpreted as a vacuum energy and used in inflationary models. An estimate of this “quantum-mechanical” vacuum energy is at least 120 orders of magnitude higher than the vacuum energy associated with $`\mathrm{\Lambda }`$ as determined by the interpretation of the Supernova (SNe Ia) data in the background of FLRW models with $`\mathrm{\Lambda }`$. It is not known what the suppression mechanism is, if one exists. In this work, I develop an alternative explanation for the measured cosmic acceleration, first proposed in . In this new explanation, $`\mathrm{\Lambda }`$, and hence the vacuum energy, is set to zero from the beginning. Our starting point will be the relaxation of the essential assumption of the FLRW models, the Cosmological Principle, followed by the consideration of barotropic locally rotationally symmetric (LRS) inhomogeneous models without $`\mathrm{\Lambda }`$, in which the acceleration of the congruence of cosmic matter, which is related to the inhomogeneity of matter-energy, is a sufficient condition for the cosmic acceleration measured with SNe Ia. Our main “a priori” argument for considering inhomogeneous models is observational. Observationally, we can only assert that there is almost isotropy about our worldline, and this has been tested in different ways, the most important being the measured high degree of isotropy of the cosmic background radiation, CBR, in particular through the results of COBE and later experiments. Exact isotropy about our worldline, when combined with the Copernican Principle, leads to exact isotropy about all worldlines (at late times, of different clusters of galaxies and, at early times, of the average motion of a mixture of gas and radiation), thus to the exact homogeneity of the 3-dim spacelike hypersurfaces of constant cosmic time, and finally to the FLRW models. However, as Ellis et al.
pointed out, if we suspend the Copernican assumption in favour of a direct observational approach, then it turns out that the measured almost isotropy of the CBR about our worldline is insufficient to force exact isotropy onto the spacetime geometry, and hence exact spatial homogeneity of the 3-dim cosmic hypersurfaces, i.e., to force the verification of the Cosmological Principle. Exact homogeneity of the 3-dim spacelike hypersurfaces has poor observational support. At the cosmological level, we only have data from our past light cone, and testing homogeneity of the 3-dim hypersurfaces at constant global cosmic time requires us to know about conditions at great distances at the present global cosmic time, whereas what we can observe at great distances is only what happened a long time ago. Exact homogeneity cannot be proven without either a fully determinate theory of source evolution or the availability of distance measures that are fully independent of the evolution of the sources. So to test the exact homogeneity of spacelike cosmic hypersurfaces, we first have to understand the evolution of both the spacetime geometry and its matter-energy content. ## 2 Our model: Barotropic inhomogeneous spherically symmetric LRS The evidence for almost isotropy comes from the CBR and galaxy counts. There is one family of spacetimes in which the Cosmological Principle is relaxed but which assures the observed almost isotropy: the locally rotationally symmetric (LRS) and spherically symmetric (SS), but spatially inhomogeneous, models. In the family of inhomogeneous LRS spacetimes that we will consider, class IIc, the isometry group is 3-dimensional, just half the isometry group of the FLRW models. In our model, I will assume that the matter part of the Einstein equations has a perfect fluid form. However, we will not consider the dust case, i.e., the Lemaitre-Tolman-Bondi (LTB) models, because then the congruence of matter worldlines would necessarily be geodesic, or in free fall. Instead, I will consider a barotropic equation of state, $`p=p(\varrho )`$ with $`\varrho +p>0`$ (NEC condition), which allows for an accelerating congruence. If exact isotropy of the CBR is assumed, then the EGS theorem, assuming a perfect fluid and expanding geodesic motion, uniquely selects a FLRW spacetime. However, as Ferrando, Morales and Portilla showed , for a non-geodesic congruence or (and) an imperfect matter fluid, shear- and vorticity-free conformally stationary inhomogeneous spacetimes exist for which the CBR is exactly isotropic. In particular, this occurs in imperfect fluid LTB models or in the conformally flat SS Stephani models (which do not admit a barotropic perfect fluid but allow a thermodynamical scheme), which Dabrowski has recently considered to explain the SNe Ia data . But note that almost isotropy allows the presence of shear, as in the model considered here. LRS perfect fluid spacetimes have been studied before, for instance in using the tetrad description and in , using, as we do, the 1+3 threading formalism.
Geometrically, in these LRS perfect fluid models (in particular class IIc in the Stewart-Ellis classification), if one assumes spherical symmetry (SS), the coefficients of the spacetime metric depend on two independent variables, a cosmic time and a radial coordinate, and if one chooses a comoving coordinate system, the metric depends on three non-negative coefficients and reads $$ds^2=-N^2(r,t)dt^2+B^2(r,t)dr^2+R^2(r,t)d\mathrm{\Omega }^2.$$ (1) The congruence of the matter fluid is initially irrotational and, by the assumed barotropic equation of state (where, by SS, $`\varrho =\varrho (r,t)`$ and $`p=p(r,t)`$), the vorticity is zero at any time. However, the other kinematical quantities of the congruence of matter worldlines, i.e., acceleration, shear and expansion, are non-zero in this spacetime. Note that in the FLRW models all are zero except the expansion. As the vorticity of the matter flow and the spatial rotation (twist) of $`e^1`$ are zero, the fluid matter flow is always hypersurface orthogonal and $`e^1`$ is surface orthogonal, respectively, and there exist in our model: 1) a cosmic time function $`t`$, 2) a 3-metric of the spacelike hypersurfaces and 3) a spherical metric for the 2-dim surfaces. As far as I know, this spacetime was used by Mashhoon and Partovi to describe the gravitational collapse of a charged fluid sphere , and to obtain large-scale observational relations . From the Einstein equations without $`\mathrm{\Lambda }`$, one obtains the conservation of the energy-momentum $`T^{ab}`$: $$\nabla _aT^{ab}=0.$$ (2) From (2), one obtains (see ) for a perfect fluid the energy conservation equation $$\frac{\partial \varrho }{\partial t}+(\varrho +p)\mathrm{\Theta }=0,$$ (3) $`\mathrm{\Theta }`$ being the expansion of the matter fluid $`\theta `$ multiplied by the lapse $`N`$, and the Euler equation $$\frac{\partial p}{\partial r}+(\varrho +p)a=0,$$ (4) $`a`$ being the acceleration of the fluid congruence. It should be emphasized here that this kinematic acceleration is due to pressure gradients or, equivalently, when a barotropic equation of state is assumed, as in this work, to mass-energy gradients. This kinematic acceleration is hence not originated by gravitation or inertia, which are, on the other hand, covariantly entangled in General Relativity. This error (the gravitational origin of the acceleration) has been propagated through almost all the literature on the subject. Note that in FLRW models equation (4) is a tautology, because both terms on the LHS are independently zero. However, the consequences of the Euler equation (4) are very important in our model. As the fluid is barotropic and the NEC holds, the acceleration is always directed away from a high-pressure region towards a neighbouring low-pressure one. In other words, the radial gradient of pressure is negative and gives rise to an acceleration of the matter flow which opposes the gravitational attraction. This can also be important in order to evade the classical singularity theorems, due to the fact that $`\ddot{S}(t)>0`$ in our model, but in this work I will only prove that this fluid acceleration can explain the SNe Ia data on the negativeness of the $`q_0`$ parameter. ## 3 Luminosity distance-redshift relation and deceleration parameter To relate our model to the SNe Ia data, we need to know how the luminosity distance-redshift relation and the deceleration parameter are modified by the inhomogeneity.
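Before doing so, it is worth making the sign argument contained in the Euler equation (4) concrete. A minimal numerical sketch, assuming a toy barotropic equation of state and a radially decreasing density profile (both chosen purely for illustration, not taken from the model):

```python
import numpy as np

# Assumed barotropic equation of state p = w*rho and an assumed radially
# decreasing density profile; illustration only, not ingredients of the model.
w_eos = 0.1
r = np.linspace(0.1, 10.0, 200)
rho = 1.0 / (1.0 + r**2)
p = w_eos * rho

# Euler equation (4): dp/dr + (rho + p) a = 0  =>  a = -(dp/dr) / (rho + p)
dpdr = np.gradient(p, r)
a = -dpdr / (rho + p)

print(bool(np.all(a > 0)))  # True: pressure falling outward gives outward acceleration
```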
By using conservation of the light flux (see ), it follows from the metric (1) that $$D_L=(1+z)^2R(t_s,r_s),$$ (5) $`D_L`$ being the luminosity distance and $`t_s,r_s`$ the cosmic time and radial coordinate at emission. At the present time, $`t_0`$, this relation reads $$D_L(t_0,z)=(1+z)^2R[t_s(t_0,z),r_s(t_0,z)].$$ (6) If one expands $`D_L`$ to second order in $`z`$, after expanding $`t_s(t_0,z)`$ and $`r_s(t_0,z)`$ to first order in $`z`$, one finds : $$D_L(t_0,z)\simeq \frac{1}{H_0}\left[z+\frac{1}{2}(1-Q_0)z^2\right],$$ (7) where $`Q_0`$ is a generalized deceleration parameter at the present cosmic time. On the other hand, if one develops the metric coefficients of (1) and the mass-energy and pressure in power series of the radial coordinate, and then imposes the Einstein equations, one obtains, after a scale change in the radial coordinate : $$ds^2\simeq -\left(1+\frac{1}{2}\alpha (t)r^2\right)^2dt^2+S^2(t)\left[\left(1+\frac{1}{2}\beta (t)r^2\right)^2dr^2+r^2\left(1+\frac{1}{2}\gamma (t)r^2\right)^2d\mathrm{\Omega }^2\right],$$ (8) where $`S(t)`$ is the usual scale factor, $`\alpha (t)`$ is a non-negative function related to the acceleration of the cosmic fluid, and a combination of $`\beta `$ and $`\gamma `$ gives the intrinsic spatial curvature of the 3-dim spacelike cosmic spaces. On the basis of equations (7,8), one finds that $$Q_0=q_0-II_0,$$ (9) and thus the luminosity distance-redshift relation at the present time reads $$D_L(t_0,z)\simeq \frac{1}{H_0}\left[z+\frac{1}{2}(1-q_0+II_0)z^2\right],$$ (10) where $`H_0`$ and $`q_0`$ are the usual Hubble and deceleration parameters, $$H_0:=\frac{\dot{S_0}}{S_0},$$ $$q_0:=-\frac{S_0\ddot{S_0}}{\dot{S_0}^2},$$ and $`II_0`$ is a new inhomogeneity parameter, which reads $$II_0=\frac{\alpha (t_0)}{(S_0H_0)^2}.$$ (11) Note that $`II_0`$ is related to the congruence acceleration $`A`$ through the metric coefficient $`\alpha (t)`$. In our model the deceleration parameter at the present time is $$q_0=\frac{1}{2}\mathrm{\Omega }_0\left(1+\frac{3p_0}{\varrho _0}\right)-II_0,$$ (12) and since at the present time $`{\displaystyle \frac{3p_0}{\varrho _0}}\ll 1`$, one finally obtains $$q_0\simeq \frac{1}{2}\mathrm{\Omega }_0-II_0,$$ (13) where $`\mathrm{\Omega }_0`$ is the present matter density in units of the critical density. ## 4 Conclusions From formulae (10) and (13), we see that one can obtain a negative deceleration parameter, i.e., cosmic acceleration, in agreement with recent SNe Ia data, through the presence of a positive inhomogeneity parameter related to the kinematic acceleration or, equivalently, to a negative pressure gradient or negative mass-energy gradient of the cosmic barotropic fluid. In this way, it is not necessary to explain the Supernova data by the presence of $`\mathrm{\Lambda }`$, a vacuum energy or some other exotic form of matter. Although in our model without $`\mathrm{\Lambda }`$ the Cosmological Principle is relaxed, the model maintains perfect agreement with the almost isotropy about our worldline measured by the CBR observations. ## Acknowledgments I am grateful to M.P. Dabrowski for drawing my attention to Stephani spacetimes, and to A. San Miguel and F. Vicente for many discussions on this and (un)related subjects and for TeX help. This work is partially supported by the Spanish research projects VA61/98, VA34/99 of Junta de Castilla y León and C.I.C.Y.T. PB97-0487. ## References

Perlmutter, S., et al., Astrophys. J., 517 (1999) 565.

Riess, A.G., et al., Astron. J., 116 (1999) 1009.

Riess, A.G., preprint astro-ph/9907038 (1999).
Pascual-Sánchez, J.-F., Mod. Phys. Lett. A, 14 (1999) 1539.

Krasiński, A., ’Inhomogeneous cosmological models’, C.U.P. (1997).

Ellis, G.F.R., et al., Phys. Rep., 124 (1985) 315.

Ferrando, J.J., Morales, J.A., Portilla, M., Phys. Rev. D, 46 (1992) 578.

Dabrowski, M.P., preprint gr-qc/9905083 (1999).

Stewart, J.M., Ellis, G.F.R., J. Math. Phys., 9 (1968) 1072.

Mashhoon, B., Partovi, M.H., Phys. Rev. D, 20 (1979) 2455.

Partovi, M.H., Mashhoon, B., Astrophys. J., 276 (1984) 4.

Van Elst, H., Ellis, G.F.R., Class. Quantum Grav., 13 (1996) 1099.

Ellis, G.F.R., in ’General Relativity and Cosmology’, ed. R.K. Sachs (N.Y.: Academic Press) (1971).

Kristian, J., Sachs, R.K., Astrophys. J., 143 (1966) 379.
# Two-scale analysis of the $`SU(N)`$ Kondo Model ## Abstract We show how to resolve coherent low-energy features embedded in a broad high-energy background by use of a fully self-consistent calculation for composite particle operators. The method generalizes the formulation of Roth, which linearizes the dynamics of composite operators at any energy scale. Self-consistent equations are derived and analyzed in the case of the single-impurity $`SU(N)`$ Kondo model. The development of effective methods for describing correlated electron systems has been the subject of intensive activity over the last decade, spurred by the experimental discoveries of heavy fermion systems and high-temperature superconductivity and, generally, by a revival of interest in transition-metal-oxide physics . The Roth method for the correlation problem, in the context of the Hubbard model, is based on an ansatz which reduces the dynamics of field operators to a linearized one. The essential idea is to select a basis of fermionic operators $`\psi _i`$, write their equations of motion, which involve operators $`J_i`$, and then close these equations by projecting the $`J_i`$ onto the basis using the Roth projector $`𝒫`$ defined by $$𝒫\left(J_l\right)=\sum _{rs}\langle \{J_l,\psi _r^{\dagger }\}\rangle I_{rs}^{-1}\psi _s$$ (1) where $`I_{rs}=\langle \{\psi _r,\psi _s^{\dagger }\}\rangle `$, with $`\{\mathrm{\_},\mathrm{\_}\}`$ denoting the anticommutator. In this approach the determination of the Green’s functions is then reduced to the evaluation of certain static thermal averages: $`\langle \{\psi _r,\psi _s^{\dagger }\}\rangle `$ and $`\langle \{J_l,\psi _r^{\dagger }\}\rangle `$. When these parameters are connected to matrix elements of the Green’s functions associated with the basis, one has a self-consistent scheme for their calculation. However, this is often not the case, and further approximations are introduced for their evaluation. The application of this method, as well as of similar methods , has recently been reviewed by Mancini and collaborators . Through careful comparison with existing numerical data, they concluded that good results for many physical quantities are obtained by requiring that the Green’s functions fulfil exact equal-time identities accompanying the fermionic character of the operators. In spite of its intuitive appeal, there are several serious difficulties with Roth’s method. Recent advances in the study of correlated electron systems converge upon a picture of the one-particle Green’s function made up of incoherent broad spectral features in addition to more dispersive quasi-particle bands which exist at lower energies . The Roth approach describes the Green’s functions in terms of a finite number of sharp poles, which are a poor description of the incoherent structure of the high-energy spectra. Also, the presence of low-energy features embedded in a broad high-energy background precludes the straightforward extension of this approach to low-energy scales. Indeed, low-energy features cannot be resolved by increasing the size of the basis. Increasing the size of the basis only amounts to calculating self-consistently a larger number of spectral moments, which are dominated by high-energy contributions. A clear example of this dramatic failure is provided by the Kondo impurity model, where it has proved impossible to derive the existence of a Kondo resonance in the spectra within a projection scheme.
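In practice, the projection (1) is plain linear algebra once the anticommutator averages are known. A schematic numpy sketch (the numbers below are placeholders, not derived from any specific Hamiltonian):

```python
import numpy as np

# Placeholder overlaps for a two-operator basis (psi_1, psi_2); in the method
# these would be static thermal averages computed self-consistently.
I = np.array([[1.0, 0.0],
              [0.0, 1.5]])        # I_rs = <{psi_r, psi_s^dagger}>
b = np.array([0.3, 0.8])          # b_r  = <{J,     psi_r^dagger}>

# Eq. (1): P(J) = sum_{rs} b_r (I^{-1})_{rs} psi_s
coeffs = np.linalg.solve(I.T, b)  # coefficients of P(J) on (psi_1, psi_2)
print(coeffs)
```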
In this Letter, we present a generalization of Roth’s projection technique which overcomes the limitations discussed above and, as an illustration of the technique, we investigate the single-impurity $`SU(N)`$ Kondo model . Our goal is to introduce a general technique in a simple context which is well understood but has so far not been successfully treated by techniques based on the equations of motion. We will demonstrate that our method reproduces all the well-known spectral features of the impurity model. The method carries out the following steps. (i) In the first step, we write the equations of motion for the operators of physical interest in terms of higher-order ones (or *composite operators*). Similarly, we express the Green’s functions of interest in terms of the Green’s functions of the composite operators. The composite operators should not have components on the physical fields at high energies. (ii) Then, we evaluate the Green’s functions of the composite operators by a technique which is valid at high energies, such as the mode-coupling approximation . Use of the mode-coupling approximation is motivated by the fact that it more clearly reflects that the high-energy part of the spectra is quite incoherent. At this stage, we divide the composite operators into (a) a high-energy part, described by the mode-coupling approximation, and (b) a low-energy part, to be determined by a non-perturbative closure of the equations of motion. The necessity for this step is checked by writing the expressions for the moments and noting that the mode-coupling approximation fails to give the spectral weights expected from an independent evaluation of the moments. (iii) The low-energy closure of the equations of motion is dictated by the physics of the problem and is inspired by the successes of the slave boson techniques . It is a simple quasi-particle theory which involves unknown parameters such as the low-energy spectral weights. The self-consistent determination of the low-energy parameters completes the full determination of the physical Green’s functions. The $`SU(N)`$ Kondo model is described by the following Hamiltonian: $$H=\sum _{𝐤,𝐤^{}}c^{\dagger }(𝐤)\left[\delta _{\mathrm{𝐤𝐤}^{}}\epsilon _c(𝐤)+2J_\mathrm{K}\frac{1}{NN_s}\vec{\tau }_N\cdot \vec{n}^d\right]c(𝐤^{})$$ (2) where $`c(𝐤)`$ denotes the conduction electron operator and $`\vec{n}^d`$ represents the spin operator at the impurity site ($`\vec{n}^d\cdot \vec{n}^d=N(N+1)/2`$). $`\epsilon _c(𝐤)`$ and $`J_\mathrm{K}`$ are the conduction electron energy and the Kondo coupling, respectively. $`N_s`$ is the number of atomic sites of the host metal responsible for the orbitals which form the conduction band. The $`\tau _N^a`$ are the $`N^2-1`$ traceless generators of $`SU(N)`$ ($`\vec{\tau }_N\cdot \vec{\tau }_N=2(N^2-1)/N`$). From (2), we have $$\mathrm{i}\frac{\partial }{\partial t}c(𝐤)=\epsilon _c(𝐤)c(𝐤)+2J_\mathrm{K}\frac{1}{NN_s}\sum _𝐪\vec{\tau }_N\cdot \vec{n}^dc(𝐪)$$ (3) Next, we introduce the Composite Heisenberg Field Operator: $$\psi ^{\dagger }=(\psi _1^{\dagger },\psi _2^{\dagger })=(c_0^{\dagger },2\frac{1}{N}c_0^{\dagger }\vec{\tau }_N\cdot \vec{n}^d)$$ (4) where $`c_0=\frac{1}{\sqrt{N_s}}\sum _𝐪c(𝐪)`$ is the electron operator at the impurity site. The field $`\psi _2`$ in (4) abides by the criterion required by the method in (i).
In fact, when $`\{\psi _2,\psi _1^{\dagger }\}`$ is regarded as the scalar product of the field $`\psi _2`$ with the field $`c_0`$, there is no component of $`\psi _2`$ on $`c_0`$ at any energy scale, since $`\{\psi _2,\psi _1^{\dagger }\}=0`$. Then, using equation (3), we can express the Green’s function of the first field in terms of the Green’s function of the composite operator $`\psi _2`$. We have $$G_{11}(\omega )=\mathrm{\Gamma }_0(\omega )+J_\mathrm{K}^2\mathrm{\Gamma }_0(\omega )G_{22}(\omega )\mathrm{\Gamma }_0(\omega )$$ (5) $$G_{12}(\omega )=J_\mathrm{K}\mathrm{\Gamma }_0(\omega )G_{22}(\omega )$$ (6) where $`G_{\alpha \beta }(\omega )`$ is the thermal Green’s function associated with the basis in (4) and $`\mathrm{\Gamma }_0(\omega )=\frac{1}{N_s}\sum _𝐪\frac{1}{\omega -\epsilon _c(𝐪)}`$ is the free propagator of the field $`c_0`$. The total spectral weight attached to the second composite field $`\psi _2`$ is $$I_{22}=4\frac{N+1}{N^2}+4K_D$$ (7) where the Kondo amplitude $`K_D=\langle \psi _1\psi _2^{\dagger }\rangle =2\frac{1}{N^2}\langle \psi _1^{\dagger }\vec{\tau }_N\psi _1\cdot \vec{n}^d\rangle `$ describes the binding between the localized spin and the spin excitations of the field $`c_0`$. In order to resolve low-energy features embedded in a high-energy background, we write $$G_{22}(\omega )=G_{22}^H(\omega )+G_{22}^L(\omega )$$ (8) where $`G_{22}^H(\omega )`$ keeps the information about the band structure and is not sensitive to features which are small with respect to the bandwidth $`2D`$. In contrast, $`G_{22}^L(\omega )`$ mostly receives coherent contributions from low energies and depends only weakly on the high-energy part of the spectrum. Such a decomposition corresponds to a decomposition of the composite field $`\psi _2`$ as $`\psi _2=\psi _2^H+\psi _2^L`$, with $`\psi _2^H`$ giving rise to incoherent broad features, whereas $`\psi _2^L`$ emerges as an observable quasi-particle at low energies. In the high-energy regime, time-dependent correlation functions can be treated within the mode-coupling approximation in terms of electron-hole and charge-spin fluctuations. By use of mode coupling in the paramagnetic case, we have for the time-ordered Green’s function $`S_{22}^H(\omega )`$: $$S_{22}^H(\omega )=8\frac{N^2-1}{N^3}\frac{\mathrm{i}}{2\pi }\int d\mathrm{\Omega }\,S_{11r}(\omega -\mathrm{\Omega })S_{11}(\mathrm{\Omega })$$ (9) where $$S_{11r}(t_i,t_j)=\langle 𝒯\left[n_r^d(t_i)n_r^d(t_j)\right]\rangle ,r=1,\mathrm{},N^2-1$$ (10) The spectral weight absorbed by the propagator $`G_{22}(\omega )`$ in the mode-coupling form (9) is $`4(N+1)/N^2`$. For simplicity, we take the atomic limit for the Bose propagator, so that $`G_{22}^H(\omega )=4\frac{N+1}{N^2}G_{11}(\omega )`$. Other treatments of the Bose propagators would not substantially affect our results for the fermionic spectral function. From the Hamiltonian (2) it is straightforward to derive $$\mathrm{i}\frac{\partial }{\partial t}\psi _2=2\frac{1}{N}\vec{\tau }_N\cdot \vec{n}^dc_\epsilon +2J_\mathrm{K}\frac{1}{N}\vec{\tau }_N\cdot \vec{n}^d\psi _2$$ (11) $$+8\mathrm{i}f_{abc}^NJ_\mathrm{K}\frac{1}{N^2}\tau _N^a\left[c_0^{\dagger }\tau _N^bc_0\right]n_c^dc_0$$ (12) where $`f_{abc}^N`$ are the structure constants of the $`SU(N)`$ Lie algebra ($`[\tau _N^a,\tau _N^b]=2\mathrm{i}f_{abc}^N\tau _N^c`$) and $`c_\epsilon =\frac{1}{\sqrt{N_s}}\sum _𝐪\epsilon _c(𝐪)c(𝐪)`$.
It is worth noting that the source (12) has a direct component on the field $`c_0`$ ($`2J_\mathrm{K}/N\,\vec{\tau }_N\cdot \vec{n}^d\psi _2=4J_\mathrm{K}(N+1)/N^2\,c_0+\mathrm{}`$), which disappears for $`N\mathrm{\infty }`$, the coefficient being $`4J_\mathrm{K}(N+1)/N^2`$ (for $`N=2`$, $`2J_\mathrm{K}/N\,\vec{\tau }_N\cdot \vec{n}^d\psi _2=3J_\mathrm{K}c_0-2J_\mathrm{K}\psi _2`$). In the low-energy regime, we assume the following dynamics for the field $`\psi _2`$: $$\mathrm{i}\frac{\partial }{\partial t}\psi _2^L=J_\mathrm{K}I_{22}^L\psi _1,$$ (13) the coefficient on the right-hand side being fixed to $`J_\mathrm{K}I_{22}^L`$ by projecting (13) on the field $`\psi _1`$. This corresponds to the physical assumption that at low energies (i.e., at energies much smaller than $`J_\mathrm{K}`$ and $`D`$) we have a quasi-particle theory. Indeed, the ansatz in (13) can be described as an application of Roth’s projection idea to a field $`\psi _2^L`$ which has most of its spectral weight at low energies. The high-energy spectral weight is thus already accounted for by the mode-coupling approximation. This is the second main departure from the original Roth approach, where an equation of motion for the field $`\psi _2`$ would have been projected onto $`\psi _1`$ and $`\psi _2`$ itself. This field would have spectral weight at all frequencies (e.g., $`4(N+1)/N^2`$), and from it a Kondo scale cannot be estimated. Once again, it is noteworthy that the basis defined in (4) alone is inadequate to capture both the low- and the high-energy physics of the Kondo model once a Roth truncation is performed. At this level of approximation, there is only one energy scale in the scattering matrix (i.e., $`J_\mathrm{K}^2G_{22}(\omega )`$), which cannot mimic a crossover between the two regimes. This scale is set by $`I_{22}`$, where the high-energy spectral weight (i.e., $`4(N+1)/N^2`$) prevents the low-energy scale from emerging. By combining (3) and (13) it is straightforward to show that $$G_{22}^L(\omega )=\frac{I_{22}^L}{\omega -J_\mathrm{K}^2I_{22}^L\mathrm{\Gamma }_0(\omega )}$$ (14) We have defined $`I_{22}^L`$ as the spectral weight of $`G_{22}(\omega )`$ in the low-energy region. It is also implicitly assumed that $`\{\psi _2^L,\psi _2^{H\dagger }\}=0`$, because the two fields span different energy sectors of the Hilbert space. In conclusion, we have $$G_{11}(\omega )=\frac{\mathrm{\Gamma }_0(\omega )}{1-4\frac{N+1}{N^2}J_\mathrm{K}^2\mathrm{\Gamma }_0^2(\omega )}+\frac{J_\mathrm{K}^2\mathrm{\Gamma }_0^2(\omega )}{1-4\frac{N+1}{N^2}J_\mathrm{K}^2\mathrm{\Gamma }_0^2(\omega )}\frac{I_{22}^L}{\omega -J_\mathrm{K}^2I_{22}^L\mathrm{\Gamma }_0(\omega )}$$ (15,16) $$G_{12}(\omega )=\frac{4\frac{N+1}{N^2}J_\mathrm{K}\mathrm{\Gamma }_0^2(\omega )}{1-4\frac{N+1}{N^2}J_\mathrm{K}^2\mathrm{\Gamma }_0^2(\omega )}+\frac{J_\mathrm{K}\mathrm{\Gamma }_0(\omega )}{1-4\frac{N+1}{N^2}J_\mathrm{K}^2\mathrm{\Gamma }_0^2(\omega )}\frac{I_{22}^L}{\omega -J_\mathrm{K}^2I_{22}^L\mathrm{\Gamma }_0(\omega )}$$ (17,18) $$G_{22}(\omega )=\frac{4\frac{N+1}{N^2}\mathrm{\Gamma }_0(\omega )}{1-4\frac{N+1}{N^2}J_\mathrm{K}^2\mathrm{\Gamma }_0^2(\omega )}+\frac{1}{1-4\frac{N+1}{N^2}J_\mathrm{K}^2\mathrm{\Gamma }_0^2(\omega )}\frac{I_{22}^L}{\omega -J_\mathrm{K}^2I_{22}^L\mathrm{\Gamma }_0(\omega )}$$ (19,20) At this stage of the method, once $`I_{22}^L`$ is evaluated, the problem is solved, as anticipated in point (iii) above. From Eq.
(7) it is clear that $`I_{22}^L`$ is connected to $`K_D`$. However, it is crucial to note that it represents the low-energy spectral weight and is a distinct object with respect to the $`K_D`$ parameter, which also contains contributions from energies of the order of $`J_\mathrm{K}`$. For the determination of this low-energy scale, we need to call upon some self-consistent condition. Evaluating the Kondo amplitude $`K_D`$ using $`G_{12}(\omega )`$, $$K_D=T\sum _nG_{12}(i\omega _n),\omega _n=(2n+1)\pi T$$ (21) and inserting this into Eq. (7), we obtain a relation between $`I_{22}`$ and $`I_{22}^L`$ of the form $`I_{22}=F\left[I_{22}^L\right]`$. To get the self-consistent equation, we estimate $`I_{22}^H=F\left[I_{22}^L=0,T=J_\mathrm{K}\right]`$, which results in an equation for the low-energy spectral weight of the form $`I_{22}^L=I_{22}-I_{22}^H`$. In other words, we choose $`J_\mathrm{K}`$ as the energy above which the one-particle Green’s function is made up of incoherent high-energy contributions with no relevant temperature dependence. While the equations resemble the slave boson equations, they differ from them in a significant way: they do not introduce additional redundant phases and Lagrange multipliers, and they avoid all the difficulties associated with the treatment of the gauge fields. The slave boson method has been very successful in obtaining low-energy information. High-energy information can also be obtained by performing fluctuations around the mean-field solutions, but this becomes increasingly difficult, particularly in lattice models . We now present some results. For the numerical solution of the equations we used a constant density of states $`\rho =\frac{1}{2D}\theta \left(D-|\omega |\right)`$ for the field $`c_0`$, so that $`\mathrm{\Gamma }_0^R(\omega )=\frac{1}{2D}\mathrm{ln}\left|\frac{D+\omega }{D-\omega }\right|-\mathrm{i}\pi \frac{1}{2D}\theta \left(D-|\omega |\right)`$. In Figs. 1 and 2, the Kondo amplitude $`K_D`$ and $`I_{22}^L`$ are shown as functions of the temperature for $`J_\mathrm{K}=0.08`$ and $`N=2`$; $`D`$ has been set equal to $`1`$. In Fig. 2 we also show the solution for $`N=\mathrm{\infty }`$, as well as the one obtained after replacing $`I_{22}^H=F\left[I_{22}^L=0,T=J_\mathrm{K}\right]`$ with $`I_{22}^H=F\left[I_{22}^L=0\right]`$. Both of these solutions give a spurious transition at a characteristic temperature which, in the exact solution, is the signature of the Kondo crossover. As in the slave boson approximation, this is due to the absence of a small inhomogeneous term, which is present in our method and mimics mixing effects between high- and low-energy contributions. The quantities in Fig. 2 coincide only in the limit of large spin degeneracy (i.e., $`N\mathrm{\infty }`$), where the method recovers the exact results . Our numerical estimate for the Kondo temperature $`T_\mathrm{K}`$ agrees well with the exact solution . At this temperature ($`T_\mathrm{K}\simeq 0.002`$), $`I_{22}^L`$ has a change in the concavity of its slope when plotted as a function of the temperature. In Fig. 3, we present the spectral density $`\sigma _{22}(\omega )=-\frac{1}{\pi }\mathrm{Im}\left[G_{22}^R(\omega )\right]`$ for two different temperatures. Again, it is clear that at high temperatures (i.e., $`T\sim J_\mathrm{K}`$) only a high-energy incoherent background is left. A well-defined singlet excitation mode no longer exists, even if some residual spin-spin interaction persists, being energetically favoured by a finite bandwidth (i.e., $`K_D`$ is non-zero at any temperature, as in Fig. 1).
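As a minimal numerical sketch of how curves like those in Fig. 3 follow from Eqs. (19)-(20) with the flat-band $`\mathrm{\Gamma }_0^R(\omega )`$ quoted above; the value of $`I_{22}^L`$ here is a placeholder rather than the self-consistent solution:

```python
import numpy as np

D, N, JK = 1.0, 2, 0.08
I22L = 0.05                       # placeholder; the text fixes it self-consistently
w = np.linspace(-0.99 * D, 0.99 * D, 2001) + 1e-12j  # stay inside the band

# Flat density of states: Gamma_0^R(omega) for |omega| < D
G0 = (1.0 / (2 * D)) * np.log(np.abs((D + w) / (D - w))) - 1j * np.pi / (2 * D)

den = 1.0 - 4.0 * (N + 1) / N**2 * JK**2 * G0**2
G22 = (4.0 * (N + 1) / N**2 * G0) / den \
      + (I22L / (w - JK**2 * I22L * G0)) / den      # Eqs. (19)-(20)

sigma22 = -np.imag(G22) / np.pi   # spectral density sigma_22(omega)
print(sigma22[np.argmin(np.abs(w.real))])  # value near omega = 0
```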
In conclusion, we have shown how to resolve coherent low-energy features embedded in a broad high-energy background by use of a fully self-consistent calculation for composite particle operators. In a problem with more than one energy scale, which is typical of strongly correlated systems, we have succeeded in capturing the low-energy features. Our scheme extends and improves upon Roth’s method by combining the advantages of the methods based on the equations of motion with those of the slave boson techniques. We note that when there is an expansion parameter, such as the size of the group or the size of the representation, our approach can be formulated so as to reduce to the correct solution in the exactly soluble limit. We have illustrated this here with the Kondo model in the limit of a large spin symmetry group, which has often been shown to retain many crucial aspects of the low-energy physics . Finally, our approach is directly applicable to lattice models, and work in this direction is currently in progress. ###### Acknowledgements. This work was supported by the NSF under Grant DMR-95-29138. D.V. thanks Gina Valeri for her careful reading of the manuscript.
## 1 Introduction Among all astrophysical objects, neutron stars (NSs) attract the most attention from physicists. We now know more than 1000 NSs as radiopulsars and more than 100 NSs emitting X-rays, but the Galactic population of these objects is about $`10^8`$–$`10^9`$. So only a tiny fraction of one of the most fascinating classes of astrophysical objects is observed at present. NSs can appear as sources of different nature: as isolated objects and as binary companions, powered by wind or disk accretion from a secondary companion. X-ray pulsars are probably among the most prominent binary sources, because important parameters of NSs can be determined there. Now we know more than 40 X-ray pulsars (see e.g. Bildsten et al., Borkus). Observations of the optical counterparts of X-ray sources give an opportunity to determine distances to these objects and other parameters with relatively high precision, and with gyroline (cyclotron line) detections one can obtain the value of the magnetic field, $`B`$, of a NS. But such lines are not detected in all sources of that type, and the magnetic field can then be estimated from period measurements (see e.g. Lipunov). Precise distance measurements are usually not available immediately after an X-ray discovery (especially if localization error boxes are large and the X-ray sources have a transient nature). In that sense, methods of simultaneous determination of field and distance based only on X-ray observations can be useful, and several of them were suggested by different authors previously. Here we try to obtain estimates of the magnetic fields (and distances) of NSs in X-ray pulsars from their period (and flux) variations. ## 2 Estimates of the magnetic field Magnetic fields of accreting NSs can be estimated using period variations or using the hypothesis of the equilibrium period (see Lipunov). We use both of these methods. For estimating the magnetic moment of NSs using observed values of the maximum spin-down, we use the following main equation: $$\frac{dI\omega }{dt}=-k_t\frac{\mu ^2}{R_{co}^3},$$ where $`I`$ is the NS’s moment of inertia, $`\omega =\frac{2\pi }{p}`$ the spin frequency, $`\mu `$ the magnetic moment, and $`R_{co}=\left(\frac{GM}{\omega ^2}\right)^{1/3}`$ the corotation radius. We used $`k_t=1/3`$, $`I=10^{45}`$ g cm<sup>2</sup>, $`M=1.4M_{\odot }`$. We used the graphs from (Bildsten et al.) to derive spin-up and spin-down rates and flux change measurements. The data on these graphs are shown with one-day time resolution. The equilibrium period can be written in different forms for disk- and wind-fed systems. For the first case we used: $$p_{eq.disk}=2.7\mu _{30}^{6/7}L_{37}^{-3/7}\mathrm{s}.$$ (1) For wind-accreting systems we have: $$p_{eq.wind}=10.4L_{37}^{-1}T_{10}^{1/6}\mu _{30}\mathrm{s}.$$ (2) Here $`L_{37}`$ is the luminosity in units of $`10^{37}`$ erg s<sup>-1</sup>, $`T_{10}`$ the orbital period in units of 10 days, and $`\mu _{30}`$ the magnetic moment in units of $`10^{30}`$ G cm<sup>3</sup>. Estimates of the magnetic moment, $`\mu `$, obtained with different assumptions are shown in table 1. Three values are shown: an estimate from the spin-down obtained from the BATSE data (Bildsten et al.); an estimate from the equilibrium period for wind-fed systems (eq. (2)); and an estimate for disk-accreting systems (eq. (1)). Less probable values are marked with an asterisk. In table 1 we use the following notation: LMXRB - Low-Mass X-Ray Binary; HMSG - High-Mass SuperGiant; BeTR - Be-transient source.
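A minimal sketch of the estimators behind table 1, in cgs units and using the formulas as written above (the example numbers at the bottom are placeholders, not data from the paper):

```python
import math

G = 6.674e-8            # cgs
MSUN = 1.989e33

def mu_from_spindown(p, pdot, M=1.4 * MSUN, I=1e45, kt=1.0 / 3.0):
    """Magnetic moment (G cm^3) from d(I omega)/dt = -kt mu^2 / R_co^3,
    valid for spin-down episodes (pdot > 0)."""
    omega = 2.0 * math.pi / p
    omega_dot = -2.0 * math.pi * pdot / p**2        # < 0 during spin-down
    R_co = (G * M / omega**2) ** (1.0 / 3.0)
    return math.sqrt(-I * omega_dot * R_co**3 / kt)

def p_eq_disk(mu30, L37):
    """Eq. (1): equilibrium period (s), disk accretion."""
    return 2.7 * mu30 ** (6.0 / 7.0) * L37 ** (-3.0 / 7.0)

def p_eq_wind(mu30, L37, T10):
    """Eq. (2): equilibrium period (s), wind accretion."""
    return 10.4 * mu30 * L37 ** (-1.0) * T10 ** (1.0 / 6.0)

# Placeholder example: a 100 s pulsar spinning down at 1e-8 s/s
print(mu_from_spindown(100.0, 1e-8) / 1e30, "x 1e30 G cm^3")
```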
The values used for the estimates with the hypothesis of the equilibrium period (spin period, mean luminosity in units of $`10^{37}`$ erg s<sup>-1</sup>, and orbital period in units of 10 days) can be found on the Web: http://xray.sai.msu.ru/~polar. More precise estimates can be made by fitting all observed values of the spin-up and spin-down rate together with the flux measurements. When the distance to the source is known, only the value of the magnetic field needs to be fitted, and in figure 1 we show such estimates for Her X-1. We plot spin-up and spin-down rates as a function of a parameter which is a combination of the spin period and the source’s luminosity. Spin-up and spin-down values derived from the BATSE data (Bildsten et al.) are plotted as black dots, and theoretical curves for different values of the magnetic moment are also shown. Ideally, a single best curve, corresponding to the actual magnetic moment, should fit all observational points. In reality the points have errors, the distance to the source is also known only with some uncertainty, and the simple model of spin-up and spin-down can only be a first approximation. ## 3 Discussion and conclusions We have made estimates of the magnetic field of NSs in X-ray pulsars. Estimates made under the assumption $`p=p_{eq}`$ are rather rough. The obtained values depend (besides the uncertainties connected with the method itself) on unknown parameters of the NSs, such as masses, radii and moments of inertia. All of them were taken to have “standard” values, which of course is only a first approximation. For example, our estimate for the source GRO 1744-28 is $`\mu \simeq 10^{30}`$ G cm<sup>3</sup>, which is smaller than the estimate given in (Borkus), $`B\simeq (2-5)\times 10^{12}`$ G (we note that the estimate obtained by Joss & Rappaport is significantly lower than both the Borkus estimate and ours). But if one takes a “non-standard” value for $`R`$, these estimates of $`\mu `$ and $`B`$ can be brought into good correspondence. We show several examples in table 2. The NS radii are calculated from the following simple formula: $$R=\left(2\mu /B\right)^{1/3}.$$ Here the $`\mu `$ are taken from table 1, and the values of $`B`$ are taken from Nagase, Borkus and Wang. As one can see from the table, for several sources the measured $`B`$ are not in correspondence with our calculated $`\mu `$, and the radii of the NSs come out too big. Mostly these are long-period wind-fed pulsars like GX 301-2, where the formation of a temporary reverse disk is possible during episodes of fast spin-down, so that the maximum spin-down may not be the best field estimate there, and estimates from the equilibrium period for the wind-accretion case are in better correspondence with observations. In clearer cases (Her X-1, GRO 1744-28), where we are sure that the accretion is of the disk type, our estimates from the maximum spin-down are in good correspondence with observations. And we predict that, for Be-transients, where disk accretion is certainly at work, observations of cyclotron lines at energies $`\sim 100`$ keV may be possible in the future in 2S 1417-624, GRO 1948+32, GRO 1008-57, A 1118-616 and 4U 1145-61. Observations of period and flux variations can also be used for the simultaneous determination of the magnetic field of a NS and the distance to the X-ray source (Popov). The method is based on several measurements of the period derivative, $`\dot{p}`$, and of the X-ray pulsar’s flux, $`f`$.
By fitting the distance, $`d`$, and the magnetic moment, $`\mu `$, one can obtain good correspondence with the observed $`p,\dot{p}`$ and $`f`$, and in that way produce good estimates of the distance and the magnetic field (see also another way of estimating these parameters, based on the equilibrium period and spin-up measurements, applied to GRO 1744-28 in (Joss & Rappaport)). Let us consider only disk accretion, since we apply our method to a system in which the accretion is most probably of the disk type. In that case one can write (see Lipunov): $$\dot{p}=\frac{4\pi ^2\mu ^2}{3GIM}-\sqrt{0.45}\mathrm{\hspace{0.17em}2}^{1/14}\frac{\mu ^{2/7}}{I}\left(GM\right)^{-3/7}\left[p^{7/3}L\right]^{6/7}R^{6/7},$$ (3) where $`L=4\pi d^2f`$ is the luminosity and $`f`$ the observed flux. So, up to some small uncertainty, in the equation above we know all parameters ($`I`$, $`M`$, $`R`$, etc.) except $`\mu `$ and $`d`$. Fitting the observed points with them, we can obtain estimates of $`\mu `$ and $`d`$. The uncertainties mainly depend on the applicability of this simple model. To illustrate the method, we apply it to the X-ray pulsar GRO J1008-57, discovered by BATSE (Bildsten et al.). It is a $`93.5`$ s X-ray pulsar with a BATSE flux of about $`10^{-9}`$ erg cm<sup>-2</sup> s<sup>-1</sup>. The source was identified with a Be-system with a $`135^d`$ orbital period. In figure 2 we show the observations (as black dots) and the calculated curves for the disk model in the plane $`\dot{p}`$ vs. $`p^{7/3}f`$, where $`f`$ is the observed flux (logarithms of these quantities are shown). The curves were plotted for different values of the source distance, $`d`$, and the NS magnetic moment, $`\mu `$. Spin-up and spin-down rates were obtained from the graphs in Bildsten et al.. If one uses the maximum spin-up or the maximum spin-down values to evaluate the parameters of the pulsar, then one obtains values different from the best fit (they are also shown in the figure): $`d\simeq 8`$ kpc, $`\mu \simeq 37.6\times 10^{30}`$ G cm<sup>3</sup> for the maximum spin-up, and two values for the maximum spin-down: $`d\simeq 4`$ kpc, $`\mu \simeq 37.6\times 10^{30}`$ G cm<sup>3</sup> and one close to our best fit (two similar values of the maximum spin-down were observed at different fluxes, but we note that formally the maximum spin-down corresponds to the values close to our best fit). This can be used as an estimate of the errors of our method: the accuracy is about a factor of 2 in distance, and about the same in magnetic field, as can be seen from the figure. Determination of the magnetic field (and, probably, the distance) from X-ray observations alone can be very useful in uncertain situations, for example when only X-ray observations without precise localizations are available. Acknowledgments PSB thanks prof. Joss for discussions. The work was supported by the RFBR (98-02-16801) and the INTAS (96-0315) grants.
## 1 Introduction The propagation of a dispersive light pulse in a planar waveguide with a positive, instantaneous Kerr-type nonlinearity can be described by the (2+1)-dimensional nonlinear Schrödinger equation (NSE) : $$i\frac{\partial }{\partial \zeta }\mathrm{\Psi }+\frac{\sigma }{2}\frac{\partial ^2}{\partial \tau ^2}\mathrm{\Psi }+\frac{1}{2}\frac{\partial ^2}{\partial \xi ^2}\mathrm{\Psi }+|\mathrm{\Psi }|^2\mathrm{\Psi }=0,$$ (1) where the parameters $`\zeta ,\tau ,\xi `$ are as defined in appendix A. Equation (1) is valid only for pulses in the picosecond range; for shorter pulses, additional terms, due for example to higher-order dispersion, should be included. The last term in equation (1) describes the Kerr-type nonlinearity; the second and third terms are associated, respectively, with first-order group velocity dispersion, which leads to temporal broadening of the pulse, and diffraction, which causes spreading of the pulse in space. The parameter $`\sigma `$, which can be either positive (for anomalous dispersion) or negative (for normal dispersion), is the dispersion-to-diffraction ratio . The spatio-temporal dynamics of the pulse depends to a high degree on the sign of this parameter. It is known that some solutions of the (2+1)-dimensional NSE can develop into a singularity of the electric field at the self-focus point. This phenomenon, known as catastrophic self-focusing, occurs simultaneously in space and time for pulses propagating in planar waveguides with anomalous group velocity dispersion (equation (1) with $`\sigma >0`$) , and also for dispersionless beams propagating in self-focusing bulk media (equation (1) with the dispersive term replaced by a diffraction term) when the parameters of the system are above the threshold of catastrophic self-focusing , which is usually computed with the aid of the method of moments , the variational method , and also numerical simulations . The occurrence of catastrophic self-focusing is not only non-physical; it also prevents examination of the pulse behaviour behind the self-focus, for the singularity emerges merely as an artifact of the approximations made when deriving the NSE. In order to avoid this limitation, some nonlinear stabilization mechanism should be included in the description: saturation or non-locality of the nonlinearity, Raman scattering , plasma formation , multiphoton ionization , higher-order group velocity dispersion terms , an adequate composition of the above-mentioned effects , or a non-paraxial treatment of the process of self-focusing . However, the standard paraxial NSE can still serve as the model equation for self-focusing when the parameters of the system are below the threshold of catastrophic self-focusing or, in the reverse case, for studying the dynamics of a pulse/beam in the prefocal region. Another situation occurs when the pulse propagates in the normal dispersion regime. In this case the terms describing dispersion and diffraction have different signs, and two different effects, spatial self-focusing and temporal self-defocusing, simultaneously influence the propagation of the pulse. This leads to the situation where, in the solution of the NSE (equation (1) with $`\sigma <0`$), neither a singularity nor localized steady states occur . Moreover, the evolution is accompanied by a breaking of the spatio-temporal symmetry and of the uniform structure of the pulse, and can finally lead to the occurrence of several humps in the field distribution , to splitting of the pulse into two sub-pulses , or to splitting into several sub-pulses .
It has also been reported that the presence of even very small normal dispersion can lead to the destruction of soliton breathers propagating in nonlinear planar waveguides . In the case of the (3+1)-dimensional NSE, splitting of a pulse into two sub-pulses has also been observed , while splitting into several sub-pulses, predicted theoretically in , has been confirmed experimentally by the authors of . Thus, depending on the sign of the dispersion, a dispersive pulse propagating in a Kerr-type planar waveguide reveals different behaviour. Catastrophic self-focusing (in the framework of the NSE) takes place in the case of anomalous dispersion, while for normal dispersion the typical process is spatio-temporal splitting. It therefore seems interesting to study the interaction between two pulses co-propagating in such a medium, i.e. a Kerr-type planar waveguide, under the assumption that one of them propagates in the normal dispersion regime and the other in the anomalous regime. To the author’s knowledge this problem has not been studied in the literature, and the main purpose of this paper is to consider it. Note that the interaction of spatially separated light beams whose evolution is modeled by a set of $`n`$ ($`n\ge 2`$) nonlinearly coupled NSEs was studied by several authors . Moreover, the importance of the interaction between two pulses in a nonlinear medium was pointed out already by Agrawal in , where an intriguing effect of induced focusing of two beams co-propagating in a self-defocusing medium was reported. It is also known that neither for anomalous dispersion nor for normal dispersion do stable soliton-like solutions of the (2+1)-dimensional (and also the (3+1)-dimensional) NSE exist. This statement also concerns experimental results, since no soliton-like solution has been observed in pure Kerr-like nonlinear media with two or three transverse dimensions. From the point of view of applications, e.g. as elements of optical switching devices , the existence of stable soliton-like solutions is very important. Therefore, solutions to this problem have already been proposed by several authors: for example, it has been shown that soliton-like structures can be realized in media with saturation-type nonlinearity , in photorefractive media , in media with quadratic nonlinearity , in media with cascaded $`\chi ^{(2)}`$–$`\chi ^{(3)}`$ nonlinearity , and also in the limiting case of the discrete-continuous NSE, which can model the propagation of short optical pulses in an array of linearly coupled optical fibers .
In section 4, which is devoted to the problem of spatio-temporal splitting, we will investigate whether the influence of the pulse propagating in a normal dispersion regime can induce spatio-temporal splitting of the pulse with anomalous dispersion. In the last section, section 5, we will focus on the limiting case in which the dispersive term of the normal pulse can be neglected. In this case the problem of two coupled (2+1)-dimensional NSEs reduces to the system of a (1+1)-dimensional NSE coupled to a (2+1)-dimensional NSE. The main reason to study this configuration is to investigate the possibility of a stable, self-trapped solution.

The interaction between the pulses will be assumed to be limited to cross-phase modulation, a nonlinear effect through which the phase of an optical beam/pulse is affected by another propagating beam/pulse and which can cause a redistribution of energy within each beam/pulse. Another effect, four-wave mixing, will be neglected, so that no energy transfer between the pulses is taken into consideration. The analysis presented in this paper is based on the variational method and on numerical simulations using the split-step spectral method (a sketch of the scheme is given below). Throughout the paper the pulse propagating in an anomalous (normal) dispersion regime will be referred to as the anomalous (normal) pulse.

## 2 Basic equations

The co-propagation of two optical pulses in a nonlinear planar waveguide can be described by two coupled nonlinear Schrödinger equations:

$$i\frac{\partial \Psi_1}{\partial \zeta}+\frac{\sigma_1}{2}\frac{\partial^2 \Psi_1}{\partial \tau^2}+\frac{1}{2}\frac{\partial^2 \Psi_1}{\partial \xi^2}+\left(|\Psi_1|^2+2|\Psi_2|^2\right)\Psi_1=0,$$ (2a)

$$i\frac{\partial \Psi_2}{\partial \zeta}+\frac{\sigma_2}{2}\frac{\partial^2 \Psi_2}{\partial \tau^2}+\frac{\mu}{2}\frac{\partial^2 \Psi_2}{\partial \xi^2}+r\left(|\Psi_2|^2+2|\Psi_1|^2\right)\Psi_2=0,$$ (2b)

where the last terms represent cross-phase modulation, the nonlinear effect which couples the pulses, and the next-to-last terms describe self-phase modulation. It is assumed that the subscript $j=1$ ($j=2$) denotes the anomalous (normal) pulse, hence $\sigma_1>0$ and $\sigma_2<0$. The notation in equations (2a,b) is explained in appendix A. The initial conditions will be taken in the form of Gaussian pulses

$$\Psi_j(\zeta=0,\tau,\xi)=\sqrt{\kappa_j}\,\exp\left[-\frac{1}{2}\tau^2\left(1+iC_{\tau j}\right)\right]\exp\left[-\frac{1}{2}\xi^2\left(1+iC_{\xi j}\right)\right],$$ (3)

where $C_{\tau j}$ ($C_{\xi j}$) is the temporal (spatial) chirp of the $j$-th pulse, $j=1,2$. The parameter $\kappa_j$ will be called here the strength of nonlinearity of the $j$-th pulse (see the explanation in appendix A).
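As a concrete illustration of the numerical scheme mentioned above, the following is a minimal sketch of a symmetrized split-step Fourier step for equations (2a) and (2b). It is a sketch only: it assumes a periodic computational box in $(\tau,\xi)$, and the grid, step size and parameter values are illustrative choices of ours, not the ones used to produce the figures of this paper.

```python
import numpy as np

def split_step(psi1, psi2, tau, xi, sigma1, sigma2, mu, r, dz, nsteps):
    """Symmetrized split-step Fourier scheme for equations (2a,b):
    half a nonlinear step, a full linear step in Fourier space,
    then another half nonlinear step."""
    ktau = 2*np.pi*np.fft.fftfreq(tau.size, d=tau[1] - tau[0])
    kxi = 2*np.pi*np.fft.fftfreq(xi.size, d=xi[1] - xi[0])
    KT, KX = np.meshgrid(ktau, kxi, indexing='ij')
    # linear factors: the tau derivative carries sigma_j, the xi derivative 1 or mu
    L1 = np.exp(-0.5j*dz*(sigma1*KT**2 + KX**2))
    L2 = np.exp(-0.5j*dz*(sigma2*KT**2 + mu*KX**2))

    def half_nonlinear(p1, p2):
        # SPM + XPM phase factors from the last terms of (2a,b)
        f1 = np.exp(0.5j*dz*(np.abs(p1)**2 + 2*np.abs(p2)**2))
        f2 = np.exp(0.5j*dz*r*(np.abs(p2)**2 + 2*np.abs(p1)**2))
        return p1*f1, p2*f2

    for _ in range(nsteps):
        psi1, psi2 = half_nonlinear(psi1, psi2)
        psi1 = np.fft.ifft2(L1*np.fft.fft2(psi1))
        psi2 = np.fft.ifft2(L2*np.fft.fft2(psi2))
        psi1, psi2 = half_nonlinear(psi1, psi2)
    return psi1, psi2

# unchirped Gaussian initial conditions (3) on an illustrative grid
tau = np.linspace(-20, 20, 256)
xi = np.linspace(-20, 20, 256)
T, X = np.meshgrid(tau, xi, indexing='ij')
psi1 = np.sqrt(1.5)*np.exp(-0.5*(T**2 + X**2))   # kappa_1 = 1.5 (below threshold)
psi2 = np.sqrt(3.0)*np.exp(-0.5*(T**2 + X**2))   # kappa_2 = 3
psi1, psi2 = split_step(psi1, psi2, tau, xi, 1.0, -1.0, 1.0, 1.0, 1e-3, 2000)
```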
### 2.1 Variational method

It is known that the set of NSEs (equations (2a,b)) can be obtained from the Lagrangian density

$$L=\frac{i}{2}\left(\Psi_1^{*}\frac{\partial \Psi_1}{\partial \zeta}-\Psi_1\frac{\partial \Psi_1^{*}}{\partial \zeta}\right)+\frac{i}{2}\frac{1}{r}\left(\Psi_2^{*}\frac{\partial \Psi_2}{\partial \zeta}-\Psi_2\frac{\partial \Psi_2^{*}}{\partial \zeta}\right)$$
$$-\frac{1}{2}\left|\frac{\partial \Psi_1}{\partial \xi}\right|^2-\frac{\sigma_1}{2}\left|\frac{\partial \Psi_1}{\partial \tau}\right|^2-\frac{1}{2}\frac{\mu}{r}\left|\frac{\partial \Psi_2}{\partial \xi}\right|^2-\frac{1}{2}\frac{\sigma_2}{r}\left|\frac{\partial \Psi_2}{\partial \tau}\right|^2$$ (4)
$$+\frac{1}{2}|\Psi_1|^4+2|\Psi_1|^2|\Psi_2|^2+\frac{1}{2}|\Psi_2|^4.$$

Following the variational method, let us choose a suitable multi-parametric trial function for the solution of equations (2a,b). Since in this paper we consider the Gaussian initial condition (equation (3)), it is natural to take as the trial function the Gaussian

$$\Psi_j=A_j(\zeta)\exp\left[-\frac{1}{2}\frac{\tau^2}{w_{\tau j}(\zeta)^2}\right]\exp\left[-\frac{1}{2}\frac{\xi^2}{w_{\xi j}(\zeta)^2}\right]\exp\left[-\frac{i}{2}\tau^2 C_{\tau j}(\zeta)\right]\exp\left[-\frac{i}{2}\xi^2 C_{\xi j}(\zeta)\right],$$ (5)

with 12 parameters: the amplitudes and their complex conjugates, $A_j,A_j^{*}$, the temporal and spatial widths, $w_{\tau j},w_{\xi j}$, and the temporal and spatial chirps, $C_{\tau j},C_{\xi j}$, where $j=1,2$. From the initial condition (equation (3)) it follows that $A_j(\zeta=0)=\sqrt{\kappa_j}$ and $w_{\tau j}(\zeta=0)=w_{\xi j}(\zeta=0)=1$. The evolution equations for the parameters of the trial function are obtained by varying the reduced Lagrangian

$$\langle L\rangle:=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}L\,d\xi\,d\tau,$$

into which the trial function (equation (5)) is inserted, with respect to the parameters of the trial function, $A_j,A_j^{*}$, $w_{\tau j}$, $w_{\xi j}$, $C_{\tau j}$, $C_{\xi j}$.
We obtain the following 12 coupled ordinary differential equations:

$$\frac{d\mathcal{N}_1}{d\zeta}=0,$$ (6a)

$$\frac{d\mathcal{N}_2}{d\zeta}=0,$$ (6b)

$$\frac{d^2w_{\tau 1}}{d\zeta^2}=\frac{\sigma_1^2}{w_{\tau 1}^3}-\frac{\sigma_1}{2}\frac{\mathcal{N}_1}{w_{\tau 1}^2w_{\xi 1}}-\frac{4\mathcal{N}_2 w_{\tau 1}\sigma_1}{(w_{\tau 1}^2+w_{\tau 2}^2)^{3/2}(w_{\xi 1}^2+w_{\xi 2}^2)^{1/2}},$$ (7a)

$$\frac{d^2w_{\xi 1}}{d\zeta^2}=\frac{1}{w_{\xi 1}^3}-\frac{1}{2}\frac{\mathcal{N}_1}{w_{\tau 1}w_{\xi 1}^2}-\frac{4\mathcal{N}_2 w_{\xi 1}}{(w_{\tau 1}^2+w_{\tau 2}^2)^{1/2}(w_{\xi 1}^2+w_{\xi 2}^2)^{3/2}},$$ (7b)

$$\frac{d^2w_{\tau 2}}{d\zeta^2}=\frac{\sigma_2^2}{w_{\tau 2}^3}-\frac{\sigma_2}{2}\frac{\mathcal{N}_2 r}{w_{\tau 2}^2w_{\xi 2}}-\frac{4\mathcal{N}_1 w_{\tau 2}\sigma_2 r}{(w_{\tau 1}^2+w_{\tau 2}^2)^{3/2}(w_{\xi 1}^2+w_{\xi 2}^2)^{1/2}},$$ (7c)

$$\frac{d^2w_{\xi 2}}{d\zeta^2}=\frac{\mu^2}{w_{\xi 2}^3}-\frac{\mu}{2}\frac{\mathcal{N}_2 r}{w_{\tau 2}w_{\xi 2}^2}-\frac{4\mathcal{N}_1 w_{\xi 2}\mu r}{(w_{\tau 1}^2+w_{\tau 2}^2)^{1/2}(w_{\xi 1}^2+w_{\xi 2}^2)^{3/2}},$$ (7d)

$$C_{\tau 1}=-\frac{1}{\sigma_1}\frac{d\ln w_{\tau 1}}{d\zeta},$$ (8a)

$$C_{\xi 1}=-\frac{d\ln w_{\xi 1}}{d\zeta},$$ (8b)

$$C_{\tau 2}=-\frac{1}{\sigma_2}\frac{d\ln w_{\tau 2}}{d\zeta},$$ (8c)

$$C_{\xi 2}=-\frac{1}{\mu}\frac{d\ln w_{\xi 2}}{d\zeta},$$ (8d)

$$\frac{d\varphi_1}{d\zeta}=\frac{3}{4}|A_1|^2-\frac{1}{2}\left[\frac{\sigma_1}{w_{\tau 1}^2}+\frac{1}{w_{\xi 1}^2}\right]+\mathcal{N}_1\frac{2+w_{\tau 1}^2/(w_{\tau 1}^2+w_{\tau 2}^2)+w_{\xi 1}^2/(w_{\xi 1}^2+w_{\xi 2}^2)}{(w_{\tau 1}^2+w_{\tau 2}^2)^{1/2}(w_{\xi 1}^2+w_{\xi 2}^2)^{1/2}},$$ (9a)

$$\frac{d\varphi_2}{d\zeta}=\frac{3}{4}r|A_2|^2-\frac{1}{2}\left[\frac{\sigma_2}{w_{\tau 2}^2}+\frac{\mu}{w_{\xi 2}^2}\right]+\mathcal{N}_2 r\frac{2+w_{\tau 2}^2/(w_{\tau 1}^2+w_{\tau 2}^2)+w_{\xi 2}^2/(w_{\xi 1}^2+w_{\xi 2}^2)}{(w_{\tau 1}^2+w_{\tau 2}^2)^{1/2}(w_{\xi 1}^2+w_{\xi 2}^2)^{1/2}},$$ (9b)

where $\mathcal{N}_j:=w_{\tau j}(\zeta)w_{\xi j}(\zeta)|A_j(\zeta)|^2=\kappa_j=\mathrm{const}$. From equations (6a) and (6b), which are actually the energy conservation laws for the two pulses, $N_j:=\int\int|\Psi_j|^2\,d\tau\,d\xi=\pi\mathcal{N}_j$, it follows that there is no energy transfer between the pulses. The set of equations (6a)-(9b) is rather complicated, and only in the special case $\sigma_1=\sigma_2=1,\mathcal{N}_2=0$ is the analytical solution

$$w_{\xi j}(\zeta)=w_{\tau j}(\zeta)=\left[1+\zeta^2\left(1-\frac{\kappa_j}{2}\right)\right]^{1/2},\qquad j=1,2,$$

available. More general situations must be treated numerically, e.g. using the Runge-Kutta method (a sketch follows below). Still, equations (7a)-(7d) can be simplified to one evolution equation.
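Before turning to that simplification, here is a minimal sketch of a direct Runge-Kutta integration of the width equations (7a)-(7d); the chirps and phases then follow from equations (8a)-(9b) once the widths are known. The helper name and parameter values are ours and purely illustrative; scipy is assumed.

```python
import numpy as np
from scipy.integrate import solve_ivp

def width_odes(zeta, y, s1, s2, mu, r, N1, N2):
    """Right-hand side of equations (7a)-(7d); y packs the four widths
    and their first derivatives."""
    wt1, wx1, wt2, wx2, vt1, vx1, vt2, vx2 = y
    St = wt1**2 + wt2**2   # combined temporal widths
    Sx = wx1**2 + wx2**2   # combined spatial widths
    at1 = s1**2/wt1**3 - 0.5*s1*N1/(wt1**2*wx1) - 4*N2*wt1*s1/(St**1.5*Sx**0.5)
    ax1 = 1.0/wx1**3 - 0.5*N1/(wt1*wx1**2) - 4*N2*wx1/(St**0.5*Sx**1.5)
    at2 = s2**2/wt2**3 - 0.5*s2*r*N2/(wt2**2*wx2) - 4*N1*wt2*s2*r/(St**1.5*Sx**0.5)
    ax2 = mu**2/wx2**3 - 0.5*mu*r*N2/(wt2*wx2**2) - 4*N1*wx2*mu*r/(St**0.5*Sx**1.5)
    return [vt1, vx1, vt2, vx2, at1, ax1, at2, ax2]

# unchirped Gaussian initial conditions: unit widths, zero derivatives
y0 = [1, 1, 1, 1, 0, 0, 0, 0]
sol = solve_ivp(width_odes, (0.0, 2.0), y0,
                args=(1.0, -1.0, 1.0, 1.0, 0.5, 0.5),   # s1, s2, mu, r, N1, N2
                rtol=1e-10, dense_output=True)
```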
To proceed, let us first rewrite equations (7a)-(7d) in the form

$$\frac{d^2w_{\tau 1}}{d\zeta^2}=-\frac{\sigma_1}{2}\frac{\partial V_1}{\partial w_{\tau 1}},\qquad \frac{d^2w_{\xi 1}}{d\zeta^2}=-\frac{1}{2}\frac{\partial V_1}{\partial w_{\xi 1}},$$
$$\frac{d^2w_{\tau 2}}{d\zeta^2}=-\frac{\sigma_2}{2}\frac{\partial V_2}{\partial w_{\tau 2}},\qquad \frac{d^2w_{\xi 2}}{d\zeta^2}=-\frac{\mu}{2}\frac{\partial V_2}{\partial w_{\xi 2}},$$

where the potentials $V_1(w_{\tau 1},w_{\xi 1},w_{\tau 2},w_{\xi 2})$ and $V_2(w_{\tau 1},w_{\xi 1},w_{\tau 2},w_{\xi 2})$ read

$$V_1:=\frac{\sigma_1}{w_{\tau 1}^2}+\frac{1}{w_{\xi 1}^2}-\frac{\mathcal{N}_1}{w_{\tau 1}w_{\xi 1}}-\frac{4\mathcal{N}_2}{(w_{\tau 1}^2+w_{\tau 2}^2)^{1/2}(w_{\xi 1}^2+w_{\xi 2}^2)^{1/2}},$$

$$V_2:=\frac{\sigma_2}{w_{\tau 2}^2}+\frac{\mu}{w_{\xi 2}^2}-\frac{\mathcal{N}_2 r}{w_{\tau 2}w_{\xi 2}}-\frac{4\mathcal{N}_1 r}{(w_{\tau 1}^2+w_{\tau 2}^2)^{1/2}(w_{\xi 1}^2+w_{\xi 2}^2)^{1/2}}.$$

It can also be shown that the quantity $W:=r\mathcal{N}_1W_1+\mathcal{N}_2W_2$, where

$$W_1:=\frac{1}{\sigma_1}\left(\frac{dw_{\tau 1}}{d\zeta}\right)^2+\left(\frac{dw_{\xi 1}}{d\zeta}\right)^2+V_1,\qquad W_2:=\frac{1}{\sigma_2}\left(\frac{dw_{\tau 2}}{d\zeta}\right)^2+\frac{1}{\mu}\left(\frac{dw_{\xi 2}}{d\zeta}\right)^2+V_2,$$

is a constant of motion. Again using equations (7a)-(7d), it can be shown that

$$\frac{d^2\overline{w}}{d\zeta^2}=2W,$$ (10)

where $\overline{w}:=r\mathcal{N}_1\left(\frac{w_{\tau 1}^2}{\sigma_1}+w_{\xi 1}^2\right)+\mathcal{N}_2\left(\frac{w_{\tau 2}^2}{\sigma_2}+\frac{w_{\xi 2}^2}{\mu}\right)$ (here we assume $\sigma_1\neq 0,\sigma_2\neq 0,\mu\neq 0$). From equation (10), since $W$ is constant, one immediately obtains the evolution equation for $\overline{w}$:

$$\overline{w}(\zeta)=W\zeta^2+\zeta\,\frac{d\overline{w}}{d\zeta}\bigg|_{\zeta=0}+\overline{w}(\zeta=0).$$ (11)

## 3 Catastrophic self-focusing

This section is devoted to the problem of catastrophic self-focusing, which can occur in the solution of the set of equations (2a,b). Our analysis is based on the variational method and numerical simulations, and on a comparison of the results of both. Note that once we have specified the threshold of catastrophic self-focusing, we know for which parameters of the system the NSE is valid, and we can use this information in further research. From the point of view of analytical estimates, which can be made using the method of moments or the variational method, catastrophic self-focusing is identified with the development of a singularity in the solution at a finite propagation distance.

Let us briefly discuss the case of a single pulse, i.e. let us assume that $\mathcal{N}_2=0$ and write $\mathcal{N}:=\mathcal{N}_1,\sigma:=\sigma_1,w_\tau:=w_{\tau 1},w_\xi:=w_{\xi 1}$. Then

$$W:=\frac{1}{\sigma}\left(\frac{dw_\tau}{d\zeta}\right)^2+\left(\frac{dw_\xi}{d\zeta}\right)^2+V,$$

with the potential

$$V(w_\tau,w_\xi):=\frac{\sigma}{w_\tau^2}+\frac{1}{w_\xi^2}-\frac{\mathcal{N}}{w_\tau w_\xi}$$

and

$$\overline{w}=\frac{w_\tau^2}{\sigma}+w_\xi^2,$$

whereas equations (10) and (11) remain valid.
From equation (11) it follows that the quantity $\overline{w}$ goes to zero at a finite propagation distance when one of the following conditions is satisfied:

$$\begin{cases}W<0,\\ W=0\ \text{and}\ \frac{d\overline{w}}{d\zeta}\big|_{\zeta=0}<0,\\ W>0\ \text{and}\ \frac{d\overline{w}}{d\zeta}\big|_{\zeta=0}\le -2\sqrt{W\overline{w}(0)}.\end{cases}$$ (12)

A vanishing of $\overline{w}$ can be associated with a singularity of the solution of the NSE (equation (1)) only when the dispersion is anomalous, $\sigma>0$, since only in this case can the quantity $\overline{w}$ be interpreted as an average width of the pulse, with the condition $\overline{w}=0$ equivalent to a simultaneous vanishing of both widths of the pulse. Therefore, for the Gaussian initial condition (equation (3) with $\kappa:=\kappa_1$) without initial chirp, $C_\tau(0):=C_{\tau 1}(0)=0,C_\xi(0):=C_{\xi 1}(0)=0$, i.e. for $\frac{d\overline{w}}{d\zeta}|_{\zeta=0}=0$, catastrophic self-focusing of the pulse with anomalous dispersion arises when the condition $W<0$ is satisfied, i.e. when

$$\kappa>\kappa_{Vcat}=\sigma+1.$$ (13)

Note that the condition given by equation (13) agrees with the result obtained with the aid of the method of moments for an elliptic Gaussian beam. A different situation occurs in the case of normal dispersion: a vanishing of the quantity $\overline{w}$ means only that $w_\tau^2=-\sigma w_\xi^2$, and therefore nothing about catastrophic self-focusing can be concluded from equation (11). However, based on the method of moments, it has been demonstrated that catastrophic self-focusing does not occur in this case.

In our numerical simulations catastrophic self-focusing is identified with a discontinuity of the phase $\varphi(\tau,\xi,\zeta)$ of the amplitude $\Psi:=|\Psi|e^{i\varphi}$ at the central point of the coordinate system, $\tau=0,\xi=0$, and with non-monotonic behaviour of the intensity $|\Psi|^2$ at the central point after catastrophic self-focusing has been reached. The threshold of catastrophic self-focusing given by the numerical analysis,

$$\kappa_{Ncat}\approx\sigma+0.885,$$

is lower than the one given by the analytical estimates.

Let us now examine the two coupled NSEs given by equations (2a) and (2b). We can consider three different cases: (i) both pulses propagate in an anomalous dispersion regime, $\sigma_1>0,\sigma_2>0$; (ii) both pulses propagate in a normal dispersion regime, $\sigma_1<0,\sigma_2<0$; (iii) the pulses propagate in different dispersion regimes, anomalous and normal, $\sigma_1>0,\sigma_2<0$. In the first case, when both pulses propagate in anomalous dispersion regimes, the threshold of catastrophic self-focusing can be calculated in a similar way as for a single pulse and is given by equation (12). For the Gaussian initial condition (equation (3)) without initial chirp, $C_{\tau 1}(0)=C_{\xi 1}(0)=C_{\tau 2}(0)=C_{\xi 2}(0)=0$, i.e. when $\frac{d\overline{w}}{d\zeta}|_{\zeta=0}=0$, catastrophic self-focusing occurs when the condition $W<0$, which reads

$$r\mathcal{N}_1\left(\sigma_1+1-\kappa_1-2\kappa_2\right)+\mathcal{N}_2\left(\sigma_2+\mu-r\kappa_2-2r\kappa_1\right)<0,$$ (14)

is satisfied.
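Conditions (12)-(14) amount to simple algebraic checks for unchirped, unit-width Gaussian inputs. A small sketch (the function names are ours; as discussed above, the single-pulse check is meaningful only for $\sigma>0$):

```python
import math

def single_pulse_collapses(kappa, sigma, dwbar0=0.0):
    """Variational collapse criterion (12) for a single unchirped Gaussian
    pulse with w_tau(0) = w_xi(0) = 1; meaningful only for sigma > 0.
    Then W = sigma + 1 - kappa, reproducing the threshold (13)."""
    W = sigma + 1.0 - kappa
    wbar0 = 1.0/sigma + 1.0          # wbar(0) = w_tau^2/sigma + w_xi^2
    if W < 0:
        return True
    if W == 0:
        return dwbar0 < 0
    return dwbar0 <= -2.0*math.sqrt(W*wbar0)

def two_anomalous_pulses_collapse(k1, k2, s1, s2, mu, r):
    """Condition (14): W < 0 for two unchirped Gaussian pulses that both
    propagate with anomalous dispersion (s1, s2 > 0)."""
    return r*k1*(s1 + 1 - k1 - 2*k2) + k2*(s2 + mu - r*k2 - 2*r*k1) < 0

print(single_pulse_collapses(kappa=2.5, sigma=1.0))          # True: 2.5 > 2
print(two_anomalous_pulses_collapse(1.2, 1.2, 1, 1, 1, 1))   # True by (14)
```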
Since a vanishing of the quantity $\overline{w}$, which can be interpreted as an average width of the pulses, is associated with a simultaneous vanishing of both widths of both pulses, it can be concluded that when catastrophic self-focusing of one of the pulses occurs, it also occurs for the second one. This conclusion, and the condition (14) written for the symmetric case $\sigma_1=\sigma_2=\mu=r=1$, agree with the results obtained for two cylindrically symmetric, spatially separated beams whose separation vanishes. In the case of two pulses propagating in a normal dispersion regime the situation is simple: catastrophic self-focusing does not develop, even for very large strengths of nonlinearity of the pulses. The situation is more complicated when the pulses propagate in different dispersion regimes, anomalous and normal: the threshold of catastrophic self-focusing cannot be calculated from equation (11), and therefore equations (7a)-(7d) must be solved numerically in order to analyse this problem.

The first goal of our study is to examine the influence of the parameters of the normal pulse on the threshold of catastrophic self-focusing of the anomalous pulse. The parameters of the anomalous pulse have therefore been chosen in such a way that the relations $\kappa_1>\kappa_{Vcat}=1+\sigma_1$ (in the variational method) and $\kappa_1>\kappa_{Ncat}\approx 0.885+\sigma_1$ (in the numerical simulations) are satisfied, which means that catastrophic self-focusing of the anomalous pulse would develop if there were no coupling between the pulses. Then the parameters of the normal pulse, i.e. the strength of nonlinearity, $\kappa_2$, and the dispersion-to-diffraction ratio, $\sigma_2$, are varied.

We found that catastrophic self-focusing of the pulse propagating in an anomalous dispersion regime can be arrested by the influence of the pulse propagating in a normal dispersion regime. The results following from the variational method are shown in figure 1. The shaded area denotes the range of the parameters of the normal pulse, $\kappa_2$ and $\sigma_2$, for which catastrophic self-focusing of the anomalous pulse does not occur. It is evident that for small nonlinearity of the normal pulse, $\kappa_2$, the term describing cross-phase modulation of the anomalous pulse is negligible compared with self-phase modulation. Therefore the process of catastrophic self-focusing cannot be stopped, and it takes place for all values of $\sigma_2$. When the strength of nonlinearity $\kappa_2$ increases, the influence of the normal pulse on the anomalous pulse through the cross-phase modulation term increases, and it then becomes possible, for some values of the dispersion-to-diffraction ratio, $|\sigma_{Vlow}(\kappa_2)|<|\sigma_2|<|\sigma_{Vupp}(\kappa_2)|$, to stop catastrophic self-focusing. The lower threshold, $|\sigma_{Vlow}(\kappa_2)|$, initially decreases with increasing strength of nonlinearity of the normal pulse, $\kappa_2$. For sufficiently large nonlinearity, $\kappa_2>\kappa_{Vlow}$, the lower threshold becomes zero. The upper threshold, $|\sigma_{Vupp}(\kappa_2)|$, increases with increasing nonlinearity. The existence of the lower threshold can be explained as follows: when $|\sigma_2|<|\sigma_{Vlow}|$ is small, the dispersive term of the normal pulse is negligible compared with diffraction.
Therefore the most important role in the propagation of the normal pulse is played by self-focusing, which not only does not arrest catastrophic self-focusing, but even additionally enhances it. A similar situation is known, for example, in a configuration of two beams which co-propagate in a bulk medium and have the same amplitudes: the critical value of nonlinearity necessary for catastrophic self-focusing is then three times smaller than in the case when they propagate as single pulses, as can be calculated, e.g., from equation (14). On the other hand, for large $|\sigma_2|$ the normal pulse broadens, with a significant spreading of energy away from the centre of the coordinate system, $\xi=0,\tau=0$, while for the anomalous pulse there is a tendency of the energy to concentrate at the centre. The overlap of the two pulses then becomes negligible, so that the coupling between them through cross-phase modulation is very small and catastrophic self-focusing of the anomalous pulse cannot be stopped by the influence of the normal pulse.

The results obtained with the aid of the numerical calculations are shown in figure 2. They confirm the predictions of the variational method: catastrophic self-focusing of the anomalous pulse can be arrested by the pulse propagating in a normal dispersion regime when the strength of nonlinearity is sufficiently large, $\kappa_2>\kappa_{Nlow}$, and the dispersion-to-diffraction ratio satisfies the relation $|\sigma_{Nlow}(\kappa_2)|<|\sigma_2|<|\sigma_{Nupp}(\kappa_2)|$.

Another question is whether the nonlinear coupling between the pulses can cause catastrophic self-focusing of the pulse propagating in a normal dispersion regime. As has already been mentioned, in the case of two simultaneously propagating pulses with anomalous dispersion, catastrophic self-focusing of one of the pulses is associated with catastrophic self-focusing of the other one, and both widths of both pulses go to zero simultaneously when catastrophic self-focusing occurs. However, the variational method demonstrates that in the case discussed here catastrophic self-focusing of the anomalous pulse does not necessarily lead to catastrophic self-focusing of the normal pulse. Namely, when catastrophic self-focusing of the anomalous pulse occurs, the normal pulse can display, depending on the parameters of the system, two different characteristics: (i) both widths of the pulse initially decrease, reach a minimum at a certain propagation distance, and then start to increase; (ii) the spatial width of the pulse goes to zero at a finite propagation distance, whereas the temporal width initially decreases, reaches a minimum at a certain propagation distance, and then increases. In particular, case (i) is realized for the parameters $\kappa_1=3,\kappa_2=3,\sigma_1=1,\sigma_2=-7$, while case (ii) occurs, for example, when $\kappa_1=3,\kappa_2=3,\sigma_1=1,\sigma_2=-1$. An effect similar to (ii) has also been observed for a pulse that propagates in a bulk medium with normal dispersion and whose dynamics is modeled by the (3+1)-dimensional NSE: catastrophic self-focusing of this pulse occurs with only the spatial widths going to zero, while the temporal width never reaches zero.
The numerical simulations have not confirmed the result of the variational method concerning the possibility of catastrophic self-focusing of the normal pulse in which the spatial width goes to zero at a certain propagation distance, $\zeta$, while the temporal width remains larger than zero. However, no definite statement that this effect is prohibited can be made either; additional calculations should be performed to clarify this question. We can therefore conclude, based on the variational method and the numerical simulations, that catastrophic self-focusing of the anomalous pulse does not necessarily lead to catastrophic self-focusing of the normal pulse.

## 4 Spatio-temporal splitting

In this section the problem of spatio-temporal splitting is discussed in more detail. The origin of spatio-temporal splitting of a single pulse propagating in a normal dispersion regime in Kerr-type planar waveguides or bulk media is the fact that spatial self-focusing (in one or two dimensions) and temporal self-defocusing act simultaneously during the propagation. Therefore, in space, the energy tends to concentrate at the centre of the coordinate system, $\tau=0,\xi=0$, whereas in time a spreading of energy away from the centre takes place. When both effects combine, local focusing areas develop away from the centre and, as a result, spatio-temporal splitting of the pulse into several sub-pulses takes place. The number of sub-pulses emerging in this way, for a sufficiently large propagation distance, has been proposed to be proportional to the order of the temporal soliton, $N:=\sqrt{\kappa/|\sigma|}$, where the parameters of the (2+1)-dimensional NSE (equation (1)) $\sigma$ and $\kappa$ denote, respectively, the dispersion-to-diffraction ratio and the strength of nonlinearity. Specifically, splitting of the pulse into two sub-pulses has been observed in a system with parameters $\zeta=2,\kappa=4,\sigma=-0.1$, while splitting of the pulse into three sub-pulses has been obtained for $\zeta=0.15,\kappa=100,\sigma=-3$.

Although we have not verified the statement that the number of sub-pulses is proportional to the order of the temporal soliton, $N$, since this was not the purpose of our study, we have observed that in the case of a single pulse with normal dispersion the tendency of the pulse to split increases when the strength of nonlinearity, $\kappa$, increases and when the magnitude of the dispersion-to-diffraction ratio, $|\sigma|$, decreases. It remains an open question for us whether the humps in the field distribution of a pulse reported in the literature can be identified with the sub-pulses whose existence is demonstrated in this paper and elsewhere. If this is not the case, we may speculate that splitting of the pulse could not be observed by those authors because the strength of nonlinearity used by them was relatively weak, $\kappa\approx 1.76$ (while the dispersion-to-diffraction ratio was chosen to be $\sigma\approx -0.32$), so that only small local humps, instead of full pulse splitting, could be detected. Some agreement between numerical and variational solutions of the NSE with normal dispersion has been demonstrated in the literature: for example, it has been shown that both methods give the same number of oscillations of the peak amplitude of the pulse, and that there is a similarity in the evolution of the average square widths of the pulses.
Nevertheless, in all cases in which spatio-temporal splitting of pulses has been observed, numerical simulations have been used. The variational method is not appropriate for predicting the splitting of pulses, since it requires the solution to have a shape that does not change during propagation. When a Gaussian function is chosen as the initial condition, as we have done in this paper, it is difficult (if not impossible) to guess a trial function which would satisfy the initial condition and could also describe spatio-temporal splitting of the pulse. It is worth recalling here that the variational method also cannot be applied to predict, for example, the formation of higher-order solitons in planar waveguides or optical fibers. Since the variational method is not applicable to the study of spatio-temporal splitting of two pulses propagating simultaneously in a nonlinear planar waveguide, the results of this section come from numerical simulations.

Figures 3 and 4 show the spatio-temporal dependence of the intensities of both pulses, the anomalous one (figures 3(a) and 4(a)) and the normal one (figures 3(b) and 4(b)), for different values of the longitudinal variable $\zeta$. The parameters of the pulses were chosen in such a way that, when they propagate as single pulses, the following effects take place at large propagation distances: (i) symmetric spatio-temporal broadening of the anomalous pulse without the occurrence of catastrophic self-focusing (see figures 3(c) and 4(c)); and (ii) large asymmetric spatio-temporal broadening of the normal pulse without splitting into sub-pulses (see figures 3(d) and 4(d)). Case (i) occurs when the conditions $\sigma_1=1$ and $\kappa_1<\kappa_{Vcat}=1+\sigma_1$ are satisfied, while case (ii) occurs when the strength of nonlinearity, $\kappa_2$, is sufficiently small. When the pulses propagate simultaneously, i.e. when there is a nonlinear coupling between them, the situation becomes qualitatively different, as can be seen from figures 4(a) and 4(b). Namely, spatio-temporal splitting of both pulses can develop, so that at the propagation distance $\zeta=2$ the anomalous (normal) pulse becomes divided into $n=3$ ($n>10$) sub-pulses.

The effect of splitting of the anomalous pulse, which does not occur when it propagates as a single pulse, can be explained as follows. When the nonlinear coupling between the pulses through cross-phase modulation is present, one pulse can induce a redistribution of the energy of the other pulse. Therefore, if there are local focusing areas in the energy distribution of one pulse, the energy of the other pulse tends to concentrate there. Such a tendency has already been pointed out by Agrawal, who observed the occurrence of local focusing areas in the energy distribution of two beams co-propagating in a defocusing nonlinear medium.

## 5 The limiting case of vanishing dispersion of the normal pulse

In this section we consider the limiting case in which the dispersive term of the normal pulse can be neglected. We apply the variational method and numerical simulations and compare their results. We assume that the initial condition has the shape of the Gaussian function given by equation (3) and concentrate mainly on the question of whether there exists a stable self-trapped solution of the above-mentioned system of equations. First we briefly discuss the case when the pulses propagate in the planar waveguide separately, i.e. when there is no coupling between them.
Specifically, we consider (i) the propagation of a pulse with anomalous dispersion and (ii) the propagation of a dispersionless beam. Case (i) is described by the (2+1)-dimensional NSE, which does not have stable, self-trapped solutions. Thus, depending on the parameters of the system, either spatio-temporal spreading of the pulse or catastrophic self-focusing develops. Case (ii) is described by the (1+1)-dimensional NSE, which depends on only one transverse variable, $\xi$, and, being an integrable system, possesses the familiar soliton solution given by the $\mathrm{sech}$ function. Taking the Gaussian function (equation (3)), which depends on the two transverse variables $\tau$ and $\xi$, we obtain from the variational method that the temporal width of the pulse is constant while the spatial width oscillates. These oscillations are due to the fact that the shape of the Gaussian trial function differs from the exact soliton solution given by the $\mathrm{sech}$ function. However, the numerical simulations lead to a slightly different behaviour: the temporal width of the pulse appears to oscillate in synchronization with the spatial width. The amplitudes of both oscillations decrease with the longitudinal variable $\zeta$ and vanish at finite $\zeta$, when the spatial soliton is formed.

Now let us take into account the nonlinear coupling between the pulses and briefly discuss the aspect of catastrophic self-focusing. Based on the variational method, we have observed that when the parameters of the anomalous pulse are chosen in such a way that catastrophic self-focusing does not occur when it propagates as a single pulse, i.e. when the condition $\kappa_1<\kappa_{Vcat}=1+\sigma_1$ is satisfied, then also in the case of the anomalous pulse coupled to the normal pulse no catastrophic self-focusing of either of them occurs, even for a very large strength of nonlinearity of the normal pulse, e.g. $\kappa_2=20$. However, when the condition $\kappa_1<\kappa_{Vcat}=1+\sigma_1$ is not fulfilled, a similarity to the case discussed in section 3 can be found: three different behaviours of the pulses can be observed: (i) no catastrophic self-focusing of either pulse; (ii) catastrophic self-focusing of the anomalous pulse only; (iii) catastrophic self-focusing of both pulses, with the spatial width of the normal pulse going to zero while its temporal width remains larger than zero. The above results obtained with the variational method have been verified in the numerical simulations, except for case (iii): a development of catastrophic self-focusing of the normal pulse, in which its spatial width goes to zero at a certain propagation distance, $\zeta$, while its temporal width remains larger than zero, has not been observed, but again no definite statement that this effect is prohibited can be made.

Since we are interested here in the possibility of the formation of self-trapped solutions, we will restrict our analysis to parameters of the pulses which ensure that no catastrophic self-focusing occurs, i.e. such that the condition $\kappa_1<\kappa_{Ncat}\approx 0.885+\sigma_1$ is satisfied. From the variational method it follows that the evolution of the normal pulse coupled to the anomalous one is essentially similar to that of the single normal pulse.
Namely, the temporal width of the pulse does not depend on the longitudinal variable, $\zeta$, as is seen from equation (8c) with the dispersion of the normal pulse neglected, $\sigma_2=0$, while the spatial width of the pulse undergoes periodic oscillations (see figure 5(b)). The propagation of the anomalous pulse coupled to the normal one is, however, qualitatively different from the behaviour of a single anomalous pulse: both the temporal and the spatial widths of the pulse undergo periodic oscillations (see figure 5(a)). Therefore neither spatio-temporal spreading nor catastrophic self-focusing of the anomalous pulse can develop, and a self-trapped solution arises. Note that a similar self-trapped solution was found in the case of the (2+1)-dimensional NSE with saturation of the nonlinearity.

We also performed numerical simulations for the case of simultaneously propagating pulses. The results are displayed in figure 6, from which it is evident that the temporal and spatial widths of both pulses oscillate in synchronization, with the amplitude of the temporal oscillations smaller than the amplitude of the spatial ones. Unfortunately, the numerical calculations are rather laborious, and we have not yet been able to calculate the evolution for larger longitudinal variables, $\zeta>2$, so we do not know whether the amplitude of the oscillations decreases with $\zeta$, and whether spreading or catastrophic self-focusing of the anomalous pulse eventually develops. Nevertheless, the currently available numerical results suggest that a self-trapped solution can exist in the configuration under discussion. Further calculations should clarify this question. Note that a configuration of two simultaneously propagating pulses could also be used in optical compression techniques since, as is seen from figure 6(a), for some particular values of the longitudinal distance $\zeta$ the temporal width of the anomalous pulse decreases by a factor of about five relative to its initial width.

## 6 Conclusions

In this paper the properties of two pulses propagating simultaneously in different dispersion regimes, i.e. anomalous and normal, in a Kerr-type planar waveguide have been considered. The propagation is described by two coupled NSEs. The interaction between the pulses is assumed to be limited to cross-phase modulation. Four-wave mixing is neglected, i.e. no energy transfer between the pulses is taken into account. The accuracy of another assumption used in the analysis, the omission of the difference between the group velocities of the pulses, is discussed in appendix B. Our analysis is based on the variational method and numerical simulations.

First we studied the influence of the parameters of the pulse propagating in a normal dispersion regime on the threshold of catastrophic self-focusing of the pulse with anomalous dispersion. We observed that catastrophic self-focusing of the pulse propagating in an anomalous dispersion regime can be arrested by the pulse propagating in a normal dispersion regime when the strength of nonlinearity is sufficiently large, $\kappa_2>\kappa_{Xlow}$, and the dispersion-to-diffraction ratio satisfies the relation $|\sigma_{Xlow}(\kappa_2)|<|\sigma_2|<|\sigma_{Xupp}(\kappa_2)|$. In this notation $X\equiv V$ ($X\equiv N$) refers to the results obtained with the variational method (numerical simulations). We also investigated whether the nonlinear coupling between the pulses can cause catastrophic self-focusing of the pulse propagating in a normal dispersion regime.
The variational method indicates that when catastrophic self-focusing of the anomalous pulse occurs, the normal pulse can display, depending on the parameters of the system, two different characteristics: (i) both widths of the pulse initially decrease, reach a minimum at a certain propagation distance, and then start to increase; (ii) the spatial width of the pulse goes to zero at a finite propagation distance, whereas the temporal width initially decreases, reaches a minimum at a certain propagation distance, and then increases. The occurrence of catastrophic self-focusing of the normal pulse has not been observed in the numerical simulations. Therefore we can conclude, based on the variational method and the numerical simulations, that catastrophic self-focusing of the anomalous pulse does not necessarily lead to catastrophic self-focusing of the normal pulse.

Using the numerical simulations, we also found that the presence of the pulse propagating in a normal dispersion regime can lead to spatio-temporal splitting of the pulse propagating in an anomalous dispersion regime. Recall that splitting of the anomalous pulse into several pulses does not occur when it propagates as a single pulse.

Finally, we considered the limiting case of vanishing dispersion of the pulse propagating in a normal dispersion regime, with the parameters of the pulses chosen in such a way that catastrophic self-focusing does not occur, i.e. such that the conditions $\kappa_1<\kappa_{Vcat}=1+\sigma_1$ (in the variational method) and $\kappa_1<\kappa_{Ncat}\approx 0.885+\sigma_1$ (in the numerical simulations) are satisfied. The main motivation was to see whether such a configuration can lead to stable self-trapped propagation of a pulse with anomalous dispersion. A positive answer was obtained within the variational method, which shows that neither spatio-temporal spreading nor catastrophic self-focusing of the anomalous pulse can develop, thus giving rise to a self-trapped solution. Note that this kind of stabilization is similar to that found earlier in media with saturation-type nonlinearity. Although the existing data support the existence of a self-trapped solution, conclusive results require laborious simulations at large values of the longitudinal variable $\zeta$ and are not yet available (work in progress). Note, in conclusion, that the existence of a stable self-trapped solution could be useful, for example, in optical switching devices. The configuration of two simultaneously propagating pulses in a planar waveguide could also be of use in optical compression techniques.

## 7 Acknowledgements

The work was supported by the Polish Committee of Scientific Research (KBN, grant no 8T11F 007 14) and by the Deutscher Akademischer Austauschdienst (DAAD), to both of which I express my gratitude. I take the opportunity to express my thanks to Professor F. Lederer for his kind hospitality at the Institute of Solid State Physics and Theoretical Optics, Friedrich-Schiller-Universität Jena, Jena, Germany. The numerical calculations were partially done thanks to a fellowship at the Abdus Salam International Centre for Theoretical Physics, Trieste, Italy. I gratefully acknowledge the Director of the Centre, Professor M. Virasoro, and Professor G. Denardo for their kind hospitality and helpful support. I would also like to thank the referees for constructive suggestions.
### Appendix A

The notation in equations (1) and (2a,b) is as follows: $\zeta=z/z_{DF1}$ is the longitudinal coordinate normalized to the Fresnel diffraction length of the anomalous pulse, $\xi=x/w_1$ is the spatial transverse coordinate normalized to the initial spatial width of the anomalous pulse, and $\tau=(t-\beta_1^{(1)}z)/t_1$ is the local time normalized to the initial temporal width of the anomalous pulse.

The parameters $\sigma_j=z_{DF1}/z_{DSj}$, $\mu=z_{DF1}/z_{DF2}$ and $r=\lambda_1/\lambda_2=\omega_2/\omega_1$ denote, respectively, the dispersion-to-diffraction ratio, the ratio of the Fresnel diffraction length of the anomalous pulse to the Fresnel diffraction length of the normal pulse and, finally, the ratio of the carrier frequency of the normal pulse to the carrier frequency of the anomalous pulse. $\Psi_j:=\sqrt{\kappa_j}\,U_j(\zeta,\tau,\xi)/U_{j0}$ denotes the normalized amplitude of the $j$-th pulse, where $U_j(\zeta,\tau,\xi)$ is the amplitude of the slowly varying envelope of the electric field and $U_{j0}:=U_j(0,0,0)$ is the dimensionless initial peak amplitude. The parameter $\kappa_j:=\left(z_{DFj}/z_{NLj}\right)^2$, defined as the strength of nonlinearity of the $j$-th pulse, is proportional to the nonlinear part, $n_2$, of the refractive index of the medium, $n:=n_0+n_2|U_j|^2$, to the initial peak intensity, $|U_{j0}|^2$, and to the square of the spatial width of the pulse, $w_{\xi j}^2$. Note also that in the case of the (1+1)-dimensional NSE, i.e. when $\sigma_j=0$, the quantity $\sqrt{\kappa_j}$ can be interpreted as the order of a spatial soliton, so that a first-order soliton arises when $\kappa_j=1$.

The dispersive quantities are defined as follows: $\beta_j^{(0)}:=\beta^{(0)}(\omega_j)=\omega_j/c$ is the wavenumber, $\beta_j^{(1)}:=\partial\beta/\partial\omega|_{\omega=\omega_j}=1/v_{gj}$ is the inverse group velocity, and $\beta_j^{(2)}:=\partial^2\beta/\partial\omega^2|_{\omega=\omega_j}$ is the group velocity dispersion. The parameters $z_{DFj}:=\beta_j^{(0)}n_0(\omega_j)w_j^2$, $z_{DSj}:=t_j^2/\beta_j^{(2)}$, $z_{NLj}:=w_j\sqrt{n_0/(2n_2|U_{0j}|^2)}$, $w_j$ and $t_j$ denote, respectively, the Fresnel diffraction length, the dispersive length, the nonlinear length, the initial spatial width and the initial temporal width of the $j$-th pulse. In the above notation $j=1,2$, where the subscript $j=1$ ($j=2$) refers to the anomalous (normal) pulse. (A short numerical summary of these normalizations is given at the end of appendix B.)

### Appendix B

Since we have assumed that the pulses have different wavelengths and different group velocity dispersions, it is physically evident that they should also have different group velocities. Therefore the assumption that the difference between the group velocities of the pulses vanishes is a simplification accepted in this paper and should be viewed as a first step of the analysis. When this difference does not vanish, the pulses propagate with different velocities and the overlap between them decreases with the longitudinal variable; therefore the nonlinear coupling between them also decreases. In the limiting case of the difference of the group velocities approaching infinity, the coupling between the pulses becomes zero and the problem of the simultaneous propagation of two pulses reduces to the case when they propagate separately.
However, we believe that the inclusion of a small difference between the group velocities of the pulses, which should be studied numerically, will not cause qualitative changes in the results of this paper, such as the possibility of arresting catastrophic self-focusing of the pulse propagating in an anomalous dispersion regime by the influence of the pulse propagating in a normal dispersion regime. The only difference we expect is a change of the values of the parameters $\sigma_{Nlow},\sigma_{Nupp},\kappa_{Nlow}$, which describe the threshold of catastrophic self-focusing for fixed values of $\sigma_1$ and $\kappa_1$. These quantitative changes should be proportional to the difference of the group velocities of the pulses.
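For convenience, the normalizations of appendix A can be tabulated numerically. The sketch below is our own helper, not part of the original analysis: $n_0$ is taken frequency independent for simplicity, and $\beta_j^{(2)}$ must be supplied with the sign convention that makes $\sigma_1>0$ for the anomalous pulse.

```python
import numpy as np

def normalized_parameters(lam, beta2, w, t, n0, n2, U0):
    """Dimensionless parameters of equations (2a,b) built from the scales
    of appendix A.  Inputs are length-2 sequences (index 0 = anomalous
    pulse, index 1 = normal pulse): lam = vacuum wavelengths, beta2 =
    group velocity dispersions, w/t = initial spatial/temporal widths,
    U0 = dimensionless initial peak amplitudes; n0 and n2 are the linear
    and nonlinear refractive indices."""
    lam, beta2, w, t, U0 = map(np.asarray, (lam, beta2, w, t, U0))
    beta0 = 2*np.pi/lam                          # wavenumbers beta_j^(0)
    z_DF = beta0*n0*w**2                         # Fresnel diffraction lengths
    z_DS = t**2/beta2                            # dispersive lengths
    z_NL = w*np.sqrt(n0/(2*n2*np.abs(U0)**2))    # nonlinear lengths
    sigma = z_DF[0]/z_DS                         # sigma_j = z_DF1 / z_DSj
    mu = z_DF[0]/z_DF[1]                         # mu = z_DF1 / z_DF2
    r = lam[0]/lam[1]                            # r = lambda_1 / lambda_2
    kappa = (z_DF/z_NL)**2                       # strengths of nonlinearity
    return sigma, mu, r, kappa
```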
# A Heavy Fermion Can Create a Soliton: A 1+1 Dimensional Example

## I Introduction

Scalar field theories can contain spatially varying (but time independent) configurations that are local minima of the classical energy. These solitons are found as solutions to the non-linear classical equations of motion. Sometimes a topological conservation law can be used to show that the soliton is absolutely stable because it is the lowest energy configuration with a given value of a conserved topological charge. When quantum effects are taken into account, the classical description must be re-examined. Now the spatially varying soliton configuration should minimize the "effective energy", which takes into account classical and quantum effects. (By effective energy we mean the effective action per unit time; the term "effective potential" is reserved for spatially constant configurations.) Since the effective energy for general configurations is difficult to compute, quantum effects are typically computed as approximate corrections to the classical soliton. In a non-renormalizable theory, these corrections are cutoff dependent and the model must be redefined to include the cutoff prescription. The hope is that the energy of the soliton is slightly altered by quantum effects but its qualitative features remain.

In this Letter we give an example of a quantum soliton that is not present in the classical theory alone. We examine a renormalizable model in 1+1 dimensions where a scalar field is Yukawa coupled to a fermion. Fermion number is conserved. The classical energy is minimized when the scalar field has a constant value $v$. There are no classical solitons. The fermion gets a mass $m=Gv$ through the Yukawa coupling. We calculate exactly the fermion's properly renormalized one loop contribution to the scalar field effective energy. By "exactly" we mean to all orders in the derivative expansion, which is crucial since we consider configurations varying on the scale $1/m$. We then show that for certain choices of the model parameters, in particular with $G$ large, we can exhibit a field configuration that carries fermion number and has energy below $m$. It cannot decay by emitting a free fermion. We search for the lowest energy configuration carrying fermion number using a few-parameter variational ansatz. The soliton, which is the actual lowest energy configuration with fermion number one, is presumably not far from our variational minimum and has strictly lower energy. The soliton therefore has energy less than $m$ and is absolutely stable.

The idea that a heavy fermion can create a soliton is not new and has been explored previously. What we are adding to the discussion is the ability to calculate exactly the renormalized fermionic one loop effective energy for any spatially varying meson background, which is essential for demonstrating stability at the quantum level. Since we are working in a renormalizable theory, the counterterms in the Lagrangian cancel the cutoff dependent part of the sum over zero-point energies in the explicit evaluation of the effective energy, leaving a finite result. This result is unambiguous because we are able to fix the counterterms in the perturbative sector of the theory. Furthermore, we can choose model parameters to justify neglecting the one loop boson contributions and all higher loop contributions. Thus we conclude that in this 1+1 dimensional model, a heavy fermion can create a soliton.
## II The Model

The model we consider has a two-component meson field $\vec{\varphi}=(\varphi_1,\varphi_2)$ coupled equally to $N_F$ fermions. We suppress the fermion flavor label but will keep track of the factor $N_F$ as necessary. The Lagrangian is $\mathcal{L}=\mathcal{L}_B+\mathcal{L}_F$ with

$$\mathcal{L}_B=\frac{1}{2}\partial_\mu\vec{\varphi}\cdot\partial^\mu\vec{\varphi}-V(\vec{\varphi}),$$ (1)

where

$$V(\vec{\varphi})=\frac{\lambda}{8}\left[\vec{\varphi}\cdot\vec{\varphi}-v^2+\frac{2\alpha v^2}{\lambda}\right]^2-\frac{\lambda}{2}\left(\frac{\alpha v^2}{\lambda}\right)^2-\alpha v^3\left(\varphi_1-v\right)$$ (2)

and

$$\mathcal{L}_F=\frac{1}{2}\left(i[\overline{\Psi},\partial\!\!\!/\,\Psi]-G\left([\overline{\Psi},\Psi]\varphi_1+i[\overline{\Psi},\gamma_5\Psi]\varphi_2\right)\right).$$ (3)

(The reason for the commutators in eq. (3) will be explained later.) Note that with $\alpha$ set to zero, the theory has a global $U(1)$ invariance

$$\varphi_1+i\varphi_2\to e^{i\phi}\left(\varphi_1+i\varphi_2\right)\quad\mathrm{and}\quad\Psi\to e^{-i\phi\gamma_5/2}\Psi.$$ (4)

Naïvely, one would imagine that spontaneous symmetry breaking occurs with $\alpha=0$. Then we could pick a classical vacuum, say $\vec{\varphi}_{\mathrm{cl}}=(v,0)$, and expand the theory about this point. But in 1+1 dimensions, the massless mode that corresponds to motion along the chiral circle, $\vec{\varphi}\cdot\vec{\varphi}=v^2$, gives rise to infra-red singularities and there is no spontaneous symmetry breaking. By introducing $\alpha\neq 0$ we have tilted the potential to eliminate the massless mode. For $\alpha$ large enough it is legitimate to expand about $\vec{\varphi}_{\mathrm{cl}}$. There are two massive bosons, which we call $\sigma$ and $\pi$, with $m_\sigma^2=\left(\lambda+\alpha\right)v^2$ and $m_\pi^2=\alpha v^2$. The classical bosonic theory governed by $\mathcal{L}_B$ has no classical soliton.

The fermions get mass through their Yukawa coupling to $\vec{\varphi}$. In the perturbative vacuum, expanding about $\vec{\varphi}_{\mathrm{cl}}$, the fermion has mass $m=Gv$. One could imagine that various distortions of $\vec{\varphi}$ would affect the fermion spectrum. For example, one could keep $\varphi_2=0$ and let $\varphi_1\to\varphi_1(x)$ with $\lim_{x\to\pm\infty}\varphi_1(x)=v$, but $\varphi_1(x)<v$ over some region in $x$ of order $w$. Alternatively, one could keep $\vec{\varphi}\cdot\vec{\varphi}=v^2$, but let $\vec{\varphi}=v(\cos\theta(x),\sin\theta(x))$, where $\theta(x)\to 0$ as $x\to-\infty$ and $\theta(x)\to 2\pi$ as $x\to+\infty$. Again the deviation of $\vec{\varphi}$ from $\vec{\varphi}_{\mathrm{cl}}$ occurs in a region of order $w$. In both cases, if $w$ is of order $1/m$, then there are bound state solutions of the single-particle Dirac equation associated with eq. (3) that have binding energies of order $m$, so that a fermion bound to the $\vec{\varphi}$ field has an energy below $m$. (Because of its topological properties, the latter configuration is especially efficient at binding a fermion.) However, there is an energy cost from the gradient and potential terms.
Still, considering just the single bound fermion and the classical scalar energy, we might expect a total energy below $m$ for $G$ large enough. Of course, $\Psi$ describes a quantum field, and any distortion of the background $\vec{\varphi}(x)$ away from $\vec{\varphi}_{\mathrm{cl}}$ will cause shifts in the zero-point energies of the fermion fluctuations. To form a self-consistent approximation, we must compute the effect of these shifts as well, since they are of the same order in $\hbar$ as the bound state contribution. In general the sum over zero-point energies diverges. In order to proceed we must regularize and renormalize the calculation. We are working in a renormalizable field theory, so we know that the counterterms that are implicit in $\mathcal{L}_B$ will cancel these divergences. We want to compare the energy of non-trivial configurations with the perturbative spectrum of the model; therefore we fix the counterterms by standard renormalization conditions on the Green's functions. The Green's functions are evaluated perturbatively, so the counterterms have an expansion in Feynman diagrams.

Regularization and renormalization of the sum over zero-point energies has been problematic in the past. We work in the continuum, where the sum is replaced by an integral over scattering phase shifts. In Ref. we show that it is possible to analytically continue this integral to $d$ spatial dimensions, where it converges. Then we are able to identify potential divergences with low orders in the Born expansion for the phase shifts and, in turn, with specific Feynman diagrams. We subtract the low order Born terms from the integral, which then remains finite when analytically continued back to $d=1$. We then add back in the corresponding Feynman diagrams, which combine with the counterterms in the usual way to yield a finite and unambiguous result in $d=1$. In the next section we show how we evaluate this contribution to the energy of a static configuration.

## III The One Fermion Loop Effective Energy

We have written eq. (3) as a commutator to ensure that the Lagrangian is invariant under the charge conjugation operation $\Psi\to\Psi^{*}$ and $(\varphi_1,\varphi_2)\to(\varphi_1,-\varphi_2)$. As a result, the vacuum energy gets contributions from both the positive and negative energy eigenvalues of the single particle Dirac Hamiltonian

$$H[\vec{\varphi}]=-i\sigma_1\frac{d}{dx}+G\left(\sigma_2\varphi_1+\sigma_3\varphi_2\right).$$ (5)

Here we are using a Majorana representation of the Dirac matrices, $\gamma^0=\sigma_2$, $\gamma^1=-i\sigma_3$ and $\gamma_5=\sigma_1$, which implies that the charge conjugation matrix is the identity. For one flavor of fermions the energy of the lowest energy state is

$$E_{\mathrm{vac}}=-\frac{1}{2}\left\{\underset{\omega_n>0}{\sum}\omega_n-\underset{\omega_n<0}{\sum}\omega_n\right\}=-\frac{1}{2}\underset{n}{\sum}|\omega_n|,$$ (6)

where the $\omega$'s are the eigenvalues of $H[\vec{\varphi}]$. (For a charge conjugation invariant background, for each eigenvalue $\omega$ there is an eigenvalue $-\omega$, so the two sums in eq. (6) are the same, and $E_{\mathrm{vac}}$ reduces to the sum over the Dirac sea, $E_{\mathrm{vac}}=\sum_{\omega_n<0}\omega_n$.) We will restrict our attention to background fields that obey $\varphi_1(-x)=\varphi_1(x)$ and $\varphi_2(-x)=-\varphi_2(x)$.
In this case $H[\vec{\varphi}]$ commutes with the parity operator $P=\sigma_2\Pi$, where $\Pi$ is the coordinate reflection operator that transforms $x$ to $-x$. We can thus decompose the solutions of eq. (6) into separate parity channels. For a given background, we wish to evaluate eq. (6) and subtract from it what we get in the free case, $\vec{\varphi}=(v,0)$. Following Ref. , we use the relationship between the change in the density of states and the derivative of the phase shift,

$$\rho(k)-\rho_0(k)=\frac{1}{\pi}\frac{d\delta(k)}{dk},$$ (7)

to write the change in the vacuum energy as

$$\Delta E^F=-\frac{1}{2}\underset{l}{\sum}|\omega_l|-\int_0^{\infty}\frac{dk}{2\pi}\,\omega(k)\frac{d}{dk}\delta_F(k)+\frac{m}{2}+E_{\mathrm{ct}},$$ (8)

where

$$\delta_F(k)=\delta_+(\omega(k))+\delta_+(-\omega(k))+\delta_-(\omega(k))+\delta_-(-\omega(k)).$$ (9)

Here $\omega(k)=\sqrt{k^2+m^2}$, $\delta_\pm$ is the scattering phase shift for the positive (negative) parity channel, the $\{\omega_l\}$ are the discrete bound state energy levels, and $E_{\mathrm{ct}}$ is the counterterm contribution, which is fixed by the renormalization conditions discussed below. The extra $m/2$ reflects an important subtlety in one dimension: in the non-interacting case ($\delta_F(k)=0$) there are bound states exactly at the continuum thresholds, $\omega=\pm m$, which count as $1/2$ in the sum in eq. (6). Levinson's theorem,

$$\delta_\pm(m)+\delta_\pm(-m)=\pi\left(n_\pm-\frac{1}{2}\right),$$ (10)

relates the phase shifts at threshold to the number of bound states $n_\pm$, with threshold bound states again counting as $1/2$. It allows us to rewrite eq. (8) as

$$\Delta E^F=-\frac{1}{2}\underset{l}{\sum}\left(|\omega_l|-m\right)-\int_0^{\infty}\frac{dk}{2\pi}\left(\omega(k)-m\right)\frac{d}{dk}\delta_F(k)+E_{\mathrm{ct}},$$ (11)

which is convenient because it makes clear that no discontinuities appear in $\Delta E^F$ as the background is strengthened and a bound state appears.

Of course, $\Delta E^F$ given by eq. (11) is formally infinite. For large $k$ the phase shifts go to zero like $1/k$, so the integral is divergent. To regulate this divergence and allow us to identify it unambiguously with specific Feynman diagrams, we have extended the method of dimensional regularization to the density of states written in terms of phase shifts. The details are presented in Ref. . Once continued to $d$ dimensions, where all quantities are finite, we can identify the leading large-$k$ behavior of $\delta_F(k)$ with the contribution of the first Born approximation plus the piece of the second Born approximation related to it by chiral symmetry, which we call $\hat{\delta}(k)$. We also identify it unambiguously with the coefficient of the Lagrangian counterterm, $v^2-\vec{\varphi}\cdot\vec{\varphi}$, evaluated by standard Feynman perturbation theory. The renormalization conditions that fix the counterterm in perturbation theory here translate into the statement that in evaluating eq. (11) we should subtract $\hat{\delta}(k)$ from $\delta_F(k)$.
After this subtraction the integral can be analytically continued back to $d=1$ to give a result that is finite and unambiguous,

$$\Delta E^F=-\frac{1}{2}\underset{l}{\sum}\left(|\omega_l|-m\right)-\int_0^{\infty}\frac{dk}{2\pi}\left(\omega(k)-m\right)\frac{d}{dk}\left(\delta_F(k)-\hat{\delta}(k)\right),$$ (12)

where

$$\hat{\delta}(k)=\frac{2G^2}{k}\int_0^{\infty}dx\left(v^2-\vec{\varphi}(x)^2\right).$$ (13)

We can solve numerically for the phase shift $\delta_F(k)$ for any background $\vec{\varphi}(x)$. We make use of the fact that $P$ commutes with $H[\vec{\varphi}]$, so that the positive and negative parity channels decouple. The positive (negative) parity states obey

$$\psi_\pm(-x)=\pm\sigma_2\psi_\pm(x)$$ (14)

and therefore

$$\psi_\pm(0)\propto\left(\begin{array}{c}1\\ \pm i\end{array}\right).$$ (15)

Any state defined for $x\geq 0$ and obeying one of the boundary conditions in eq. (15) can be extended via eq. (14) to the whole line, so we need only consider the half line $x\geq 0$ with the boundary conditions eq. (15). Consider the free case, $\vec{\varphi}=(v,0)$. For each energy $\omega$, both positive and negative, the right moving free eigenstate of eq. (5) is

$$\phi_k^0(x)=\frac{1}{\omega}\left(\begin{array}{c}\omega\\ k+im\end{array}\right)e^{ikx}$$ (16)

and the left moving free eigenstate is

$$\phi_{-k}^0(x)=\frac{1}{\omega}\left(\begin{array}{c}\omega\\ -k+im\end{array}\right)e^{-ikx},$$ (17)

where $k=\sqrt{\omega^2-m^2}$ is positive. For backgrounds $\vec{\varphi}(x)$ that are not everywhere equal to $\vec{\varphi}_{\mathrm{cl}}$, we still impose the requirement that $\vec{\varphi}(x)$ approaches $\vec{\varphi}_{\mathrm{cl}}$ as $x$ gets large. For these non-trivial backgrounds we call $\phi_k(x)$ the eigenstate of eq. (5) that approaches $\phi_k^0(x)$ as $x\to\infty$ and $\phi_{-k}(x)$ the eigenstate that approaches $\phi_{-k}^0(x)$ as $x\to\infty$. Note that $\phi_k^0$, $\phi_{-k}^0$, $\phi_k$ and $\phi_{-k}$ are not eigenstates of $P$. For $x\geq 0$ let

$$\psi_\pm(x)=\phi_{-k}(x)\pm\frac{ik+m}{\omega}e^{2i\delta_\pm(\omega)}\phi_k(x)$$ (18)

be the parity eigenstates of $H[\vec{\varphi}]$ with energy $\omega$. This defines the phase shifts $\delta_\pm(\omega)$. The $2\pi$ ambiguity in this definition is resolved by requiring that the phase shifts be smooth and go to zero as $\omega\to\pm\infty$. Note that in the free case $\delta_\pm(\omega)=0$.

For any value of $\omega$ we can solve numerically for the eigenstates of eq. (5) in both parity channels, and using eq. (18) we can then extract the phase shifts. Our method for computing the phase shift actually allows us to resolve the $2\pi$ ambiguity for each $\omega$ individually and has other numerical advantages, which are elaborated in Ref. . For any value of $k$ we can obtain $\delta_F(k)$, so we can compute the integral in eq. (12). To find the bound state energies we solve the eigenvalue problem numerically; Levinson's theorem tells us how many bound states to search for. For a fixed background $\vec{\varphi}$, the numerical evaluation of eq. (12) can be done quickly and with high accuracy. This allows us to search over a class of $\vec{\varphi}$'s for the configuration with the lowest total energy.
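To illustrate the bookkeeping (not the authors' production method), here is a rough Python sketch: a naive lattice diagonalization of eq. (5) to estimate the bound state energies, and the Born-subtracted integral of eq. (12) evaluated from a precomputed table of phase shifts. The function names and grids are our own assumptions; a central-difference Dirac operator suffers from fermion doubling, so the diagonalization is only a coarse cross-check of the continuum phase-shift method described above.

```python
import numpy as np

def dirac_spectrum(phi1, phi2, x, G=1.0):
    """Eigenvalues of the Dirac Hamiltonian (5) on a finite grid with a
    central-difference derivative.  Coarse and affected by fermion
    doubling; used here only to estimate low-lying bound state energies."""
    dx = x[1] - x[0]
    n = x.size
    D = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))/(2*dx)
    s1 = np.array([[0, 1], [1, 0]], dtype=complex)
    s2 = np.array([[0, -1j], [1j, 0]])
    s3 = np.array([[1, 0], [0, -1]], dtype=complex)
    H = (-1j*np.kron(D, s1)
         + G*np.kron(np.diag(phi1), s2)
         + G*np.kron(np.diag(phi2), s3))
    return np.linalg.eigvalsh(H)

def delta_E_F(k, delta_F, omega_l, G, tail_integral, m=1.0):
    """Eq. (12): the renormalized one-loop energy shift, given the total
    phase shift delta_F sampled on the momentum grid k (with k[0] > 0),
    the bound state energies omega_l, and tail_integral = the integral
    of (v^2 - phi(x)^2) over x > 0, which fixes the Born term (13)."""
    omega = np.sqrt(k**2 + m**2)
    subtracted = delta_F - 2.0*G**2*tail_integral/k   # delta_F - hat(delta)
    integrand = (omega - m)*np.gradient(subtracted, k)
    return (-0.5*np.sum(np.abs(np.asarray(omega_l)) - m)
            - np.trapz(integrand, k)/(2.0*np.pi))
```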
## IV The Total Energy

We are interested in calculating the total one loop effective energy of a static configuration $`\vec{\varphi}(x)`$. We take $`\vec{\varphi}(x)`$ to be specified by a short list of parameters $`\{\zeta_i\}`$. We measure energy in units of the fermion mass $`m=Gv`$ and use a dimensionless distance $`\xi=mx`$. In $`1+1`$ dimensions $`\vec{\varphi}(x)`$ and $`v`$ are dimensionless. We rescale $`\vec{\varphi}(x)`$ by $`v`$ so that $`\vec{\varphi}(x)\to(1,0)`$ as $`|\xi|\to\infty`$, and define dimensionless couplings

$`\tilde{\alpha}=\frac{\alpha}{G^2}\quad\mathrm{and}\quad\tilde{\lambda}=\frac{\lambda}{G^2}.`$ (19)

By this rescaling, using eq. (1) and eq. (2), we have

$`\frac{E_{\mathrm{cl}}[\vec{\varphi}]}{m}=v^2\int_{-\infty}^{\infty}d\xi\left(\frac{1}{2}\vec{\varphi}^{\,\prime}\cdot\vec{\varphi}^{\,\prime}+\frac{\tilde{\lambda}}{8}\left[\vec{\varphi}\cdot\vec{\varphi}-1+\frac{2\tilde{\alpha}}{\tilde{\lambda}}\right]^2-\frac{\tilde{\lambda}}{2}\left(\frac{\tilde{\alpha}}{\tilde{\lambda}}\right)^2-\tilde{\alpha}(\varphi_1-1)\right)`$ (20)

$`=v^2\,\mathcal{E}_{\mathrm{cl}}(\tilde{\alpha},\tilde{\lambda},\{\zeta_i\}),`$ (21)

where prime denotes differentiation with respect to $`\xi`$. The fermion one loop contribution to the energy arises from eq. (5), which with $`\vec{\varphi}`$ measured in units of $`v`$ is

$`H[\vec{\varphi}]=m\left(-i\sigma_1\frac{d}{d\xi}+\sigma_2\varphi_1(\xi)+\sigma_3\varphi_2(\xi)\right).`$ (22)

We see that a single fermion makes a contribution proportional to $`m`$ and dependent on the variational parameters $`\{\zeta_i\}`$. This means that eq. (12) can be expressed as $`m\,\mathcal{E}^F(\{\zeta_i\})`$. For $`N_F`$ flavors the one loop contribution is therefore $`N_F\,m\,\mathcal{E}^F(\{\zeta_i\})`$. The boson one loop contribution comes from summing the square roots of the eigenvalues of the operator $`-\frac{d^2}{dx^2}+\frac{\partial^2V}{\partial\varphi_i\partial\varphi_j}`$. Rescaling as before we find that the boson one loop energy can be written as $`m\,\mathcal{E}^B(\{\zeta_i\})`$. Putting together the classical energy and the one loop energies we get

$`\frac{E_{\mathrm{tot}}[\vec{\varphi}]}{N_F\,m}=\frac{v^2}{N_F}\mathcal{E}_{\mathrm{cl}}(\tilde{\alpha},\tilde{\lambda},\{\zeta_i\})+\mathcal{E}^F(\{\zeta_i\})+\frac{1}{N_F}\mathcal{E}^B(\tilde{\alpha},\tilde{\lambda},\{\zeta_i\})+\text{higher loops}.`$ (23)

For $`N_F`$ large we can neglect the boson one loop contribution relative to the fermion one loop contribution. Furthermore it can be shown that $`1/v^2`$ counts boson loops. Taking both $`N_F`$ and $`v^2`$ large with the ratio fixed, we can neglect the higher loops entirely and all but the single fermion loop in eq. (23). Therefore we need only consider the contributions from $`\mathcal{E}_{\mathrm{cl}}`$ and $`\mathcal{E}^F`$.

## V The Fermion Number

A non–trivial background $`\vec{\varphi}(x)`$ distorts the energy levels of the Dirac Hamiltonian eq. (5), possibly introducing single particle bound states (with positive and negative energy). We identify the lowest energy state of the system, the one with all the negative energy levels filled, as the vacuum.
If a level crosses zero as we locally interpolate between $`\vec{\varphi}_{\mathrm{cl}}(x)`$ and $`\vec{\varphi}(x)`$, this state will have non–zero fermion number. In particular, if $`\vec{\varphi}(x)`$ circles $`\vec{\varphi}=(0,0)`$ as $`\vec{\varphi}`$ goes from $`(1,0)`$ at $`x=-\infty`$ to $`(1,0)`$ at $`x=\infty`$, then the vacuum state will carry non–zero fermion number provided that the scale over which $`\vec{\varphi}`$ varies, $`w`$, is much larger than the fermion Compton wavelength $`1/m`$ . In Ref. we derive a formula for the fermion number of the vacuum, $`𝒬_{\mathrm{vac}}`$, in terms of the positive energy phase shifts at $`k=0`$ and the number of positive energy bound states,

$`𝒬_{\mathrm{vac}}=N_F\left(\frac{1}{\pi}\left[\delta_+(m)+\delta_-(m)\right]+\frac{1}{2}n_{\omega>0}\right),`$ (24)

where $`n_{\omega>0}`$ is the number of bound states with positive energy. The configurations we look at loop at most once around $`\vec{\varphi}=0`$, so $`𝒬_{\mathrm{vac}}`$ is either $`0`$ or $`N_F`$. We are interested in states with fermion number $`N_F`$. If $`𝒬_{\mathrm{vac}}=N_F`$, then the state we want is the vacuum. If $`𝒬_{\mathrm{vac}}=0`$, then we build the lowest energy state with fermion number $`N_F`$ by filling the lowest positive energy level of eq. (5) with $`N_F`$ fermions. Therefore, if $`𝒬_{\mathrm{vac}}=0`$, $`\mathcal{E}^F`$ appearing in eq. (23) must be augmented by $`\omega_1`$, where $`m\omega_1`$ is the smallest positive eigenvalue of eq. (22).

## VI Results

We want to look for background configurations $`\vec{\varphi}`$ that can produce states with fermion number $`N_F`$, and whose total energy is below $`N_F\,m`$. From eq. (23) with $`\mathcal{E}^B`$ neglected, we define

$`\mathcal{E}=\frac{v^2}{N_F}\mathcal{E}_{\mathrm{cl}}+\mathcal{E}^F-1,`$ (25)

which is the energy of the fermionic configuration minus the energy of $`N_F`$ free fermions, in units of $`mN_F`$. For our numerical computations, we take the ansatz

$`\varphi_1+i\varphi_2=1-R+Re^{i\mathrm{\Theta}}\quad\mathrm{where}\quad\mathrm{\Theta}=\pi\left(1+\mathrm{tanh}(\xi/w)\right).`$ (26)

The two variational parameters are $`R`$ and $`w`$. As $`\xi`$ goes from $`-\infty`$ to $`\infty`$, $`\vec{\varphi}`$ moves in a circle of radius $`R`$ in the $`(\varphi_1,\varphi_2)`$ plane, starting and ending at $`(1,0)`$. The scale over which $`\vec{\varphi}`$ varies is $`w`$. For fixed $`\tilde{\alpha}`$ and $`\tilde{\lambda}`$ we vary $`R`$ and $`w`$ until we produce the configuration with the smallest $`\mathcal{E}`$. The results are shown in Fig. 1. We see that it is possible to find a configuration whose total energy is below $`N_F\,m`$. Since we are minimizing $`\mathcal{E}`$ subject to the constraint that $`\varphi`$ is of the form eq. (26), we know that the true minimum of $`\mathcal{E}`$ in the fermion number $`N_F`$ sector also has an energy below $`N_F\,m`$. This is the stable soliton. In Fig. 2 we show the width, $`w_{\mathrm{sol}}`$, and the radius, $`R_{\mathrm{sol}}`$, for the minimum energy configuration as a function of $`\tilde{\alpha}`$ for various values of $`\tilde{\lambda}`$. Note that the size of the soliton grows like $`1/\sqrt{\tilde{\alpha}}`$ as $`\tilde{\alpha}`$ goes to zero. In that region, $`R_{\mathrm{sol}}`$ approaches 1, so the $`\vec{\varphi}`$ configuration approaches the chiral circle.
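A minimal sketch of this variational search (Python; `eps_F`, standing for the fermion-loop energy $`\mathcal{E}^F(R,w)`$, is left as a user-supplied callable since it requires the phase-shift machinery above, and the grid and starting point are arbitrary choices):

```python
import numpy as np
from scipy.optimize import minimize

def phi_ansatz(xi, R, w):
    """Eq. (26): phi_1 + i*phi_2 = 1 - R + R*exp(i*Theta(xi))."""
    Theta = np.pi * (1.0 + np.tanh(xi / w))
    z = 1.0 - R + R * np.exp(1j * Theta)
    return z.real, z.imag

def E_cl(R, w, at, lt, xi=np.linspace(-40.0, 40.0, 4001)):
    """Dimensionless classical energy of eq. (20) (the factor v^2 is
    applied by the caller); 'at' and 'lt' are alpha-tilde, lambda-tilde."""
    p1, p2 = phi_ansatz(xi, R, w)
    dp1, dp2 = np.gradient(p1, xi), np.gradient(p2, xi)
    dens = (0.5 * (dp1**2 + dp2**2)
            + lt / 8.0 * (p1**2 + p2**2 - 1.0 + 2.0 * at / lt)**2
            - lt / 2.0 * (at / lt)**2
            - at * (p1 - 1.0))
    return np.sum(0.5 * (dens[1:] + dens[:-1]) * np.diff(xi))  # trapezoid

def find_soliton(at, lt, v2_over_NF, eps_F):
    """Minimize eq. (25) over the two variational parameters (R, w)."""
    target = lambda p: v2_over_NF * E_cl(p[0], p[1], at, lt) + eps_F(*p) - 1.0
    res = minimize(target, x0=[0.5, 2.0], method="Nelder-Mead")
    return res.x, res.fun   # (R_sol, w_sol) and the binding energy
```

A negative returned energy signals a configuration bound relative to $`N_F`$ free fermions, i.e. the stable soliton of Fig. 1.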
In fact, the energy of the fermion number $`N_F`$ soliton goes to zero as $`\tilde{\alpha}`$ goes to zero. However, for $`\tilde{\alpha}`$ very small the bosonic quantum fluctuations restore the classically broken symmetry. Thus we cannot trust our results for $`\tilde{\alpha}`$ very small, and we do not believe that this large and light soliton is a reliable consequence of this model. For moderate values of $`\tilde{\alpha}`$, where the width of the soliton is not controlled by $`1/\sqrt{\tilde{\alpha}}`$, we do trust our results. For the value of $`v/\sqrt{N_F}`$ shown in Fig. 2, the model becomes trustworthy for $`\tilde{\alpha}\gtrsim 0.3`$. For further discussion of this point, see Ref. . We have developed and applied a variational technique for renormalizable quantum field theories through one loop order. Because we have applied unambiguous, standard perturbative renormalization procedures, we have been able to hold the theory (i.e. the renormalized masses and coupling constants) fixed while searching over a variational ansatz. Here we have used these methods to demonstrate the existence of a stable fermionic soliton, stabilized by quantum effects, in a model with no soliton at the classical level. The result suggests that similar phenomena might persist in 3+1 dimensions, and no obstacles stand in the way of generalizing the method to that case.

## Acknowledgments

We would like to thank S. Bashinsky, Y. Frishman, J. Goldstone, D. Son, U. Weise, X. Wen, and F. Wilczek for helpful conversations, suggestions and references. This work is supported in part by funds provided by the U.S. Department of Energy (D.O.E.) under cooperative research agreement #DF-FC02-94ER40818 and the Deutsche Forschungsgemeinschaft (DFG) under contract We 1254/3-1.
## 1 Introduction

Anisotropies in the Cosmic Microwave Background (CMB) carry an enormous amount of information about the early universe. The anisotropy spectrum depends sensitively on close to a dozen cosmological parameters, some of which have never been measured before. Experiments over the next decade will help us extract these parameters, teaching us not only about the early universe, but also about physics at unprecedented energies. We are truly living in the Golden Age of Cosmology. One of the dangers of the age is that we are tempted to ignore the present data and rely too much on the future. This would be a shame, for hundreds of individuals have put in countless \[wo\]man-years building state-of-the-art instruments, making painstaking observations at remote places on and off the globe. It seems unfair to ignore all the data that has been taken to date simply because there will be more and better data in the future. In this spirit, I would like to make the following claims:

* We understand the theory of CMB anisotropies.
* Using this understanding, we will be able to extract from future observations extremely accurate measurements of about ten cosmological parameters.
* Taken at face value, present data determines one of these parameters, the curvature of the universe.
* The present data is good enough that we should believe these measurements.

The first three of these claims are well-known and difficult to argue with; the last claim is more controversial, but I will present evidence for it and hope to convince you that it is true. If you come away a believer, then you will have swallowed a mouthful, for the present data strongly suggest that the universe has zero curvature. If you believe this data, then you believe that (a) a fundamental prediction of inflation has been verified and (b) since astronomers do not see enough matter to make the universe flat, roughly two-thirds of the energy density in the universe is of some unknown form.

## 2 Anisotropies: The Past

When the universe was much younger, it was denser and hotter. When the temperature of the cosmic plasma was larger than about $`1/3`$ eV, there were very few neutral hydrogen atoms. Any time a free electron and proton came together to form hydrogen, a high energy ($`E>13.6`$ eV) photon was always close enough to immediately dissociate the neutral atom. After the temperature dropped beneath $`1/3`$ eV, there were no longer enough ionizing photons around, so virtually all electrons and protons combined into neutral hydrogen. This transition – called recombination – is crucial for the study of the CMB. Before recombination, photons interacted on short time scales with electrons via Compton scattering, so the combined electron-proton-photon plasma was tightly coupled, moving together as a single fluid. After recombination, photons ceased interacting with anything and traveled freely through the universe. Therefore, when we observe CMB photons today, we are observing the state of the cosmic fluid when the temperature of the universe was $`1/3`$ eV. Since the perturbations to the temperature field are very small, of order $`10^{-5}`$, solving for the spectrum of anisotropies is a linear problem. This means that different modes of the Fourier transformed temperature field do not couple with each other: each mode evolves independently.
Roughly, the large scale modes evolve very little because causal physics cannot affect modes with wavelengths larger than the horizon<sup>1</sup><sup>1</sup>1Recall that the horizon is the distance over which things are causally connected.. When we observe anisotropies on large angular scales, we are observing the long wavelength modes as they appeared at the time of recombination. Since these modes evolved little if at all before recombination, our observations at large angular scales are actually of the primordial perturbations, presumably set up during inflation. Inflation also set up perturbations on smaller scales, but these have been processed by the microphysics. The fluid before recombination was subject to two forces: gravity and pressure. These two competing forces set up oscillations in the temperature. A small scale mode begins its oscillations (in time) as soon as its wavelength becomes comparable to the horizon. Not surprisingly, each wavelength oscillates with a different period and phase. The wavelength which will exhibit the largest anisotropies is the one whose amplitude is largest at the time of recombination. Figure 1 illustrates four snapshots in the evolution of a particularly important mode, one whose amplitude peaks at recombination. Early on (top panel) at redshifts larger than $`10^5`$, the wavelength of this mode was larger than the horizon size. Therefore, little evolution took place: the perturbations look exactly as they did when they were first set down during inflation. At $`z\sim 10^4`$, evolution begins, and the amplitudes of both the hot and cold spots decrease, so that, as shown in the second panel, there is a time at which the perturbations vanish (for this mode). A bit later (third panel) they show up again; this time, the previous hot spots are now cold spots and vice versa (compare the first and third panels). The amplitude continues to grow until it peaks at recombination (bottom panel). Figure 1 shows but one mode in the universe. A mode with a slightly smaller wavelength will “peak too soon:” its amplitude will reach a maximum before recombination and will be much smaller at the crucial recombination time. Therefore, relative to the maximal mode shown in figure 1, anisotropies on smaller scales will be suppressed. Moving to even smaller scales, we will find a series of peaks and troughs corresponding to modes whose amplitudes are either large or small at recombination. An important question to be resolved is at what angular scale these inhomogeneities will show up. Consider figure 2, which again depicts the temperature field at decoupling from the mode corresponding to the first peak. All photons a given distance from us will reach us today. This distance defines a surface of last scattering (which is just a circle in the two dimensions depicted here, but a sphere in the real universe). This immediately sets the angular scale $`\theta`$ corresponding to the wavelength shown, $`\theta\sim`$ (wavelength/distance to last scattering surface). If the universe is flat, then photons travel in straight lines as depicted by the bottom paths in figure 2. In an open universe, photon trajectories diverge as illustrated by the top paths. Therefore, the distance to the last scattering surface is much larger than in a flat universe. The angular scale corresponding to this first peak is therefore smaller in an open universe than in a flat one.
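The geometry of this argument is easy to quantify. As a toy illustration (not the full Boltzmann calculation; the constant sound speed $`1/\sqrt{3}`$, the neglect of radiation, and the numbers are all simplifying assumptions), one can estimate the multipole of the first peak from the ratio of the comoving distance to last scattering and the sound horizon:

```python
import numpy as np
from scipy.integrate import quad

def ell_first_peak(Om, OL, h=0.5, z_rec=1100.0):
    """Toy estimate: ell ~ pi * (distance to last scattering)/(sound horizon)."""
    dH = 2997.9 / h                                  # Hubble distance [Mpc]
    Ok = 1.0 - Om - OL                               # curvature density
    E = lambda z: np.sqrt(Om * (1+z)**3 + Ok * (1+z)**2 + OL)
    chi = dH * quad(lambda z: 1.0 / E(z), 0.0, z_rec)[0]
    if Ok > 1e-8:                                    # open: geodesics diverge
        chi = dH / np.sqrt(Ok) * np.sinh(np.sqrt(Ok) * chi / dH)
    rs = dH * quad(lambda z: 1.0 / (np.sqrt(3.0) * E(z)), z_rec, np.inf)[0]
    return np.pi * chi / rs

print(ell_first_peak(1.0, 0.0))   # flat: first peak near ell of a few hundred
print(ell_first_peak(0.3, 0.0))   # open: peak pushed to higher ell
```

Even this crude estimate reproduces the qualitative effect: in the open model the peak moves to noticeably higher multipole, i.e. smaller angular scales.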
The spectrum of anisotropies will therefore have a series of peaks and troughs, with the first peak showing up at larger angular scales in a flat universe than in an open universe. Figure 3 shows the anisotropy spectrum expected in a universe in which perturbations are set down during inflation. The RMS anisotropy is plotted as a function of multipole moment, which is a more convenient representation than angle $`\theta`$. For example, the quadrupole moment corresponds to $`L=2`$, the octopole to $`L=3`$, and in general low $`L`$ corresponds to large scales. The COBE satellite therefore probed the largest scales, roughly from $`L=2`$ to $`L=30`$. The first peak shows up at $`L\sim 200`$ in a flat universe, and we do indeed see a trough at smaller scales and then a later peak at $`L\sim 550`$. This sequence continues to arbitrarily small scales (although past $`L\sim 1000`$ the amplitudes are modulated by damping). We also observe the feature of geodesics depicted in figure 2: the first peak in an open universe is shifted to much smaller scales. An important aspect of figure 3 is the accuracy of the predictions. Although I have given a qualitative description of the evolution of anisotropies, I and many other cosmologists spent years developing quantitative codes to compute the anisotropies accurately. This activity anticipated the accuracy with which CMB anisotropies will be measured, and therefore we strove for (i) accuracy and (ii) speed. The former was obtained through a series of informal discussions and workshops, until half a dozen independent codes converged to answers accurate to within a percent. Speed is important because ultimately we will want to churn out zillions of predictions to compare with observations in an effort to extract best fit parameters. Fortunately, Seljak and Zaldarriaga developed CMBFAST, a code which runs in about a minute on a workstation. None of these developments are particularly surprising: perturbations to the CMB are small, and therefore the problem is to solve a set of coupled linear evolution equations. The fact that there are many coupled equations makes the problem challenging, but the fact that these are linear more than compensates.

## 3 Anisotropies: The Future

Figure 4 shows why cosmologists are so excited about the future possibilities of the CMB. First, the top panel shows that people are voting with their feet. There are literally hundreds of experimentalists who have chosen to devote their energies to measuring anisotropies in the CMB. Over the coming decade, this will lead to observations by over a dozen experiments, culminating in the efforts of the two satellites, MAP and Planck. Some of these results are beginning to trickle in. In particular, Viper, MAT, MSAM, Boomerang NA, and Python have all reported results within the last year. The middle panel in figure 4 shows the expected errors after all this information has been gathered and analyzed. Take one multipole moment, at $`L=600`$ say. We see that the expected error is of order $`5\mu`$K, while the expected signal is about $`50\mu`$K. At $`L=600`$, therefore, we expect a signal to noise of roughly ten to one. Notice though that this estimate holds for all the multipoles shown in the figure. In fact, it holds for many not shown in the figure as well: it is quite possible that Planck will go out to $`L\sim 2000`$. So, we will have thousands of data points, each of which will have signal to noise of order ten to one, to compare with a theory in which it is possible to make linear predictions!
No wonder everyone is so excited. The final panel in figure 4 shows the ramifications of getting this much information about a theory in which it is easy to make predictions. The exact spectrum of anisotropies depends on about ten cosmological parameters: the baryon density, curvature, vacuum density, Hubble constant, neutrino mass, epoch of reionization, and several parameters which specify the primordial spectrum emerging from inflation. Figure 4 shows the expected errors in four of these parameters. In each case, all (roughly ten) other parameters have been marginalized over. That is, the uncertainty in the Hubble constant stated allows for all possible values of the other parameters. The uncertainty in the Hubble constant, of five to ten percent, comes down significantly if one assumes the universe is flat. In any event, this uncertainty is still smaller than the current estimates from distance ladder measurements. The very small uncertainty on the baryon density is smaller than the five percent number obtained by looking at deuterium lines in QSO absorption systems. More importantly, the systematics involved in the two sets of determinations are completely different. If the two determinations agree, we can be very confident that systematics are under control. The upper limit on the neutrino mass is particularly interesting given recent evidence for non-zero neutrino masses. The CMB alone will not go down to $`0.07`$ eV, the most likely number from atmospheric neutrino experiments, but it will certainly probe the LSND region ($`m_\nu\sim 2-3`$ eV). Further, it is possible that, in conjunction with large scale structure and weak lensing measurements, we will get to the range probed by atmospheric neutrinos. The final bar in the bottom panel shows the predicted uncertainty in the slope of the primordial spectrum. While one might reasonably ask, “What difference does it matter if we know the baryon density or the Hubble constant to five percent or two percent accuracy?”, the slope of the primordial spectrum and other inflationary parameters are different. For every inflationary model makes predictions about the primordial perturbation spectrum. The more accurately we determine the parameters governing the spectrum, the more models we can rule out. So it is extremely important to get the primordial slope and other inflationary parameters as accurately as possible. These may well be our only probe of physics at energies on the order of the GUT scale.
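Forecasts like those in figure 4 typically come from a Fisher-matrix calculation: the derivative of the spectrum with respect to each parameter is weighted by the per-multipole variance, and marginalized errors are read off from the inverse matrix. A schematic version follows (the derivative spectra would come from a Boltzmann code such as CMBFAST; the `fsky` and noise inputs are placeholders):

```python
import numpy as np

def forecast_errors(ells, Cl, dCl, fsky=0.65, Nl=0.0):
    """Marginalized 1-sigma errors from a Fisher matrix.
    dCl: array of shape (n_params, n_ell), the numerical derivatives
    of the power spectrum with respect to each parameter."""
    var = 2.0 / ((2.0 * ells + 1.0) * fsky) * (Cl + Nl)**2  # per-ell variance
    F = (dCl / var) @ dCl.T        # F_ij = sum_ell dC_i dC_j / var_ell
    return np.sqrt(np.diag(np.linalg.inv(F)))               # marginalized
```

Fixing a parameter (e.g. assuming flatness) corresponds to deleting its row and column from `F` before inverting, which is why the Hubble-constant error tightens under that assumption.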
Along these lines, I should mention several recent developments in the field of parameter determination. The first is an argument made by several groups for measuring polarization. They show that accurate measurement of polarization will decrease the uncertainty in the primordial slope by quite a bit. Even though currently planned experiments may well do a nice job measuring polarization, there will still be work to do even after Planck. So we can look forward to proposals for a next generation experiment which measures polarization, and I believe we should strongly support such efforts. Another development in the field of parameter determination is the realization that a large part of the uncertainty in some parameters (especially some of the inflationary ones) is contributed by treating the reionization epoch as a free parameter. In fact, it is a function of the cosmological parameters and some astrophysical parameters. Recently, Venkatesan has argued that we can use our very rough knowledge of the astrophysical parameters together with the reionization models to reduce the errors on the cosmological parameters.

## 4 Anisotropies: The Present

It is time to confront the data. Figure 5 shows all data as of November, 1999. There are two features of this compilation worthy of note. First, data reported within the last year are distinguished from earlier results, illustrating in a very graphic way the progress of the field. Second, figure 5 understates this progress because it was produced before the late November release of the Boomerang North America “test” flight. Indeed, the results which follow do not include this test flight. The papers describing the Boomerang release are fascinating if only because one can compare the results of all data pre-Boomerang with the test flight data. Both subsets of the data have enough power to constrain the curvature by themselves. They produce remarkably consistent results. The data in figure 5 show a clear peak at around the position expected in flat models. Indeed, a number of groups have analyzed subsets of this data and found it to be consistent with a flat universe and inconsistent with an open one. I will briefly describe my efforts with L. Knox. We accounted for a number of facts which make it difficult to do a simple “chi-by-eye” on the data. First, every experiment has associated with it a calibration uncertainty: all the points from a given experiment can move up or down together by a given amount. We account for this by including a calibration factor for each experiment and including a Gaussian prior on this factor with a width determined by the stated uncertainties. Second, the error bars in the plot are slightly misleading because the errors do not have a Gaussian distribution. In particular, the cosmic variance part of the error is proportional to the signal itself, so the error gets much larger than one would expect at high $`\delta T`$. In other words, the distribution is highly skewed, with very high values of $`\delta T`$ not impossible. The true distribution is close to a log-normal distribution, and we have accounted for this in our analysis. Finally, as alluded to above, there are many cosmological parameters in addition to the curvature. We do a best fit to a total of seven cosmological parameters (in addition to eighteen calibration factors). The top left panel of figure 6 shows our results. The likelihood peaks at total density $`\mathrm{\Omega }`$ very close to one (no curvature) and falls off sharply at low $`\mathrm{\Omega }`$. A universe with total density equal to $`40\%`$ of the critical density is less likely than the flat model by a factor of order $`10^7`$. This ratio is key because observations of the matter density in the universe have converged to a value in the range $`0.3-0.4`$ of the critical density. We can combine these two results to conclude that there must be something else besides the matter in the universe. This conclusion probably sounds familiar to you, as the recent discoveries of high redshift supernovae also strongly suggest that there is more to the universe than just the observed matter: there is dark energy in the universe. The exciting news is that we now have independent justification of these results using CMB + $`\mathrm{\Omega }_{matter}`$ determinations.
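Schematically, the likelihood just described combines a Gaussian prior on each experiment's calibration factor with band-power errors that are treated as Gaussian not in $`\delta T^2`$ itself but in $`Z=\mathrm{ln}(\delta T^2+x)`$, which captures the skewness noted above. A sketch for a single experiment (the variable names and the error propagation are illustrative, not the exact pipeline):

```python
import numpy as np

def chi2_experiment(theory, data, sigma, offset, cal, cal_err):
    """-2 ln(likelihood) contribution of one experiment's band powers,
    using a log-normal form with offset 'x' and a calibrated data vector."""
    Zd = np.log(cal * data + offset)       # transformed data band powers
    Zt = np.log(theory + offset)           # transformed theory band powers
    sZ = sigma / (cal * data + offset)     # propagated Gaussian error in Z
    chi2 = np.sum(((Zd - Zt) / sZ) ** 2)
    chi2 += ((cal - 1.0) / cal_err) ** 2   # Gaussian calibration prior
    return chi2
```

Summing such terms over all eighteen experiments and minimizing over the calibration factors at each point in parameter space gives likelihood curves like those in figure 6.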
One way to depict this information, which has been popularized by the supernovae teams, is to plot the constraints in a space with vacuum energy and matter density as the two parameters. As shown in figure 7, the strongest constraints on the matter density come from observations of baryons and dark matter in clusters of galaxies. We obtain contours in this plane from the CMB, shown in figure 7. Note that the flat line runs diagonally from top left to bottom right and is strongly favored by the CMB. The data are so powerful that some discrimination is appearing along this line. Very large values of $`\mathrm{\Omega }_\mathrm{\Lambda }`$ are disfavored, and, at a much smaller statistical level, so is $`(\mathrm{\Omega }_\mathrm{\Lambda }=0,\mathrm{\Omega }_{matter}=1)`$. The main result, though, is that the intersection of the regions allowed by clusters and the CMB is at $`\mathrm{\Omega }_\mathrm{\Lambda }\simeq 0.6`$, in remarkable agreement with the high redshift supernovae results. This concludes my arguments for the first three claims advanced in the introduction. Undoubtedly many of you have heard them in various forms over the past few years. Now let’s turn to the hardest claim to justify, the claim that we should indeed believe the powerful conclusions of the CMB results. I will focus on two arguments. First, one might be worried about the possibility that the weight of these conclusions rests on one experiment, and one experiment might be wrong. The remaining panels of figure 6 show that this is not a problem. We have tried removing any one data set to see how our conclusions about $`\mathrm{\Omega }`$ are affected; in all cases, the conclusion stands. We even tried removing pairs of data sets and again saw no change. One has to argue for a bewildering set of coincidences if one were to disbelieve the statistical conclusions. The second class of arguments hinges on something that was not possible until very recently. Ultimately, skeptics will be convinced if different experiments get the same signal when measuring the same piece of sky. Until now, this test has been difficult to carry out for two reasons. First, at least at small scales, only a very small fraction of the sky has been covered, so there has been little overlap. This has changed a bit over the last year and obviously will change dramatically in the coming years. Second, different experiments observe the sky differently: they smooth with different beam sizes and use different chopping strategies to subtract off the atmosphere. Recently we have developed techniques which “undo” the experimental processing, thereby allowing for easy comparisons between different experiments. To illustrate the map-making technique, let us model the data $`D`$ in a given experiment as

$$D=BT+N$$ (1)

where $`T`$ is the underlying temperature field; $`B`$ is the processing matrix which includes all smoothing and chopping; and $`N`$ is noise which is assumed to be Gaussian with mean zero and covariance matrix $`C_N`$. To obtain the underlying temperature field $`T`$, we need to invert the matrix $`B`$.
This inversion is carried out by constructing the estimator $`\widehat{T}`$ which minimizes the $`\chi^2`$:

$$\chi^2\equiv (D-B\widehat{T})^TC_N^{-1}(D-B\widehat{T}).$$ (2)

We find

$$\widehat{T}=\stackrel{~}{C}_NB^TC_N^{-1}D.$$ (3)

This estimator will be distributed around the true temperature due to noise, where the noise covariance matrix is

$$\stackrel{~}{C}_N\equiv <(\widehat{T}-T)(\widehat{T}-T)^T>=\left(B^TC_N^{-1}B\right)^{-1}.$$ (4)

Not surprisingly, maps made from modulated data are extremely noisy. By definition, modulations throw out information about particular modes. For example, a modulation which takes the difference between the temperature at two different points clearly cannot hope to say anything useful about the sum of the temperatures. So looking at a raw, demodulated map is a very unenlightening experience. There are two ways of getting around this noisiness and producing a reasonable-looking map. Before I discuss them, though, it is important to point out that even without any cleaning up, the maps in their raw noisy states are very useful. They can be analyzed in the same manner as the modulated data, with the huge advantage that the signal covariance matrix is very simple to compute. Previously, calculating the signal covariance matrix required doing a multi-dimensional integral for every covariance element. In the new “map basis,” the signal covariance matrix simplifies to

$$<T_iT_j>=\underset{L}{\sum}\frac{2L+1}{4\pi }P_L(\mathrm{cos}(\theta _{ij}))C_L.$$ (5)

Indeed, one way to think of a map is that it is the linear combination of the data for which the signal (and therefore its covariance) is independent of the experiment. The noise covariance (Eq. 4) accounts for all the experimental processing. Nonetheless, we would like to produce nice looking maps, if only to use to compare different experiments. One way to do this is to Wiener filter the raw map, multiplying the estimator in equation 3 by $`C_T(C_T+\stackrel{~}{C}_N)^{-1}`$, which is roughly the ratio of signal to (signal plus noise). Noisy modes are thereby eliminated from the map<sup>2</sup><sup>2</sup>2A simple way to derive this factor is to put in a Gaussian prior for the signal $`T`$, effectively adding to the $`\chi^2`$ in equation 2 the term $`T^TC_T^{-1}T`$. Minimizing this new $`\chi^2`$ leads to the Wiener factor.. An example of the Wiener filter is shown in figure 8. The two panels are two different years of data taken by the MSAM experiment. It is well established that the two data sets are consistent . I show these because it is important to get a sense of what constitutes good agreement. Most of the features are present in both experiments, but there are several – for example the hot spot at RA $`135`$ and the cold spot at RA $`120`$ in the 1992 data – which do not have matches. This is not surprising: the same regions in the 1994 experiment may have been noisy so that, in the process of throwing out the noise, the Wiener filter also eliminated the signal. Another feature of these maps which is readily apparent is that they only have information in one direction. There is very little information about declination. As a corollary, the exact shapes of the hot and cold spots in the two data sets do not agree, nor should they. Another way of saying this is to point out that there are some modes remaining in the maps which are noisier than others (e.g. the shapes of the spots are noisy modes). Is there a more systematic way to eliminate noise than the Wiener filter?
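Before turning to that question, note that in matrix form eqs. (3) and (4) plus the Wiener filter take only a few lines. The sketch below uses dense linear algebra purely for illustration (real pipelines exploit the structure of $`B`$ and $`C_N`$ rather than inverting them directly):

```python
import numpy as np

def make_map(D, B, C_N):
    """Minimum-chi^2 map (eq. 3) and its noise covariance (eq. 4)."""
    Ninv = np.linalg.inv(C_N)
    Ctil = np.linalg.inv(B.T @ Ninv @ B)   # noise covariance of the map
    That = Ctil @ B.T @ Ninv @ D           # demodulated map estimate
    return That, Ctil

def wiener_filter(That, Ctil, C_T):
    """Multiply by C_T (C_T + C_N~)^{-1}: keep modes with high S/N."""
    return C_T @ np.linalg.inv(C_T + Ctil) @ That
```

The signal covariance `C_T` entering the filter is exactly the experiment-independent matrix of eq. (5).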
A different technique is illustrated in figure 9 in a setting which is more challenging. Whereas the two years of MSAM data both had very high signal to noise and both were taken with the same instrument at the same frequencies, the two years of Python data shown were taken with completely different instruments (bolometers in the 1995 data and HEMTs in the 1997 data) at completely different frequencies ($`90`$ vs. $`40`$ GHz). They are therefore subject to a completely different set of systematics and foregrounds. Further, the 1997 data is part of a much larger region of sky covered; to get very large sky coverage, the team sacrificed on signal to noise per pixel. Therefore, the signal to noise ratios of the two years are very different. To make the maps in figure 9, I started with the raw maps and then decomposed the data into signal to noise eigenmodes . By ordering the data in terms of signal to noise, we can gradually and systematically eliminate the noisiest modes. This has already been done on the 1995 data in the bottom panel. The top panel contains all modes with S/N greater than about $`1.5`$. As indicated by the bars, there are very few such modes, on the order of ten. Nonetheless, many features are found in both maps. There is the triplet of cold spots extending diagonally from $`15^{\circ}`$ to $`10^{\circ}`$ azimuth. There is the cold spot at $`4^{\circ}`$ azimuth, and the hot spot at $`0^{\circ}`$, and then finally the cold spot at the far right. It appears to me that these two maps agree – after far too many hours staring at them – as well as the MSAM maps. In fact the $`\beta`$ test advocated by Bond, Jaffe, and Knox confirms this agreement.

## 5 Conclusion

The first acoustic peak in the CMB has been detected at an angular position corresponding to that expected in a flat universe. This confirms the fundamental prediction of inflation that the universe is flat. It also offers independent evidence for the existence of dark energy with negative pressure. This is but the first of many grand results we expect to come out of the CMB over the coming decade. I am grateful to my collaborators Lloyd Knox, Kim Coble, Grant Wilson, John Kovac, Mark Dragovan, and other members of the MSAM/Python teams. This work is supported by NASA Grant NAG 5-7092 and the DOE.

Discussion

Sherwood Parker (University of Hawaii): Inflation is motivated, in part, by the uniformity of the black body radiation coming from places that did not have time to communicate since the origin of the expanding universe. Is there any data that would exclude the following possibility: (1) the universe is much, and possibly infinitely, larger than the part we can see; (2) the universe is much, possibly infinitely, older than 15 billion years; and (3) there was a gravitationally driven infall of part of it that was reversed at a high energy by phenomena beyond the reach of present experiments?

Dodelson: It would be interesting to work out the predictions of theories other than inflation. At present, the best alternative is topological defects, which fare very poorly when confronted with the data. If you can work out some prediction of your model, it would be wonderful: we need alternatives to inflation if only to serve as strawmen. Regarding your specific model, I don’t know what you mean by larger than we can see: the standard cosmology has this built in. If the age were much older than 15 billion years, one would wonder why the oldest objects are roughly 10-15 billion years old.
Jon Thaler (University of Illinois): If $`\mathrm{\Omega }_\mathrm{\Lambda }`$ is 70% and $`\mathrm{\Omega }_M`$ is 30%, do we still need non-baryonic dark matter? Dodelson: Yes, due to limits from nucleosynthesis and structure formation.
# Analysis of Star Formation in Galaxy-like Objects

## 1 Introduction

Within the last few years, it has been possible to start drawing an observational picture of the formation and evolution of galaxies (e.g. Fukugita et al. (1996)) based upon an unprecedented amount of results on the astrophysical properties of galactic objects up to $`z\sim 5`$ at ultraviolet, optical and near IR wavelengths (e.g. Steidel & Hamilton (1992); Madau (1995); Lilly et al. (1995); Cowie et al. (1996); Ellis et al. (1996); Steidel et al. (1988)). These data provide observational constraints on the history of structure formation in the Universe. In particular, there has been significant progress in describing the cosmic history of star formation and metal enrichment (e.g. Madau et al. (1996); Lowenthal et al. (1997)). Hopefully these data, together with previous and future results, will contribute to the understanding of the process of star formation. Although there are several works on this subject in the scientific literature (e.g. Kennicutt (1996); Ferrini (1997) and references therein; Kennicutt (1998)), the detailed physical mechanism is still under study, since several aspects remain poorly understood. One open question concerns the factors that trigger and control the star formation activity in galaxies of different morphologies. In a hierarchical clustering model, where structure forms in a bottom-up way, a typical galactic halo is the result of the mergers of smaller substructures. The halo merger frequency and the typical mass involved in one event vary depending on the environment and the cosmological model assumed, among other factors. The astrophysical properties of the galactic objects are determined by all these processes and by the hydro-dynamical evolution of the baryons in a complex way. The conditions which may trigger and control the transformation of cold gas in dense regions into stars are not yet well understood and may involve different processes such as supernova energy injection, mergers, disk instabilities and tidal fields. Of particular interest is the possible triggering of star formation by the merging of substructures as a galactic object is assembled in a hierarchical clustering scenario. Such triggering would imply the existence of a correlation between mergers with satellites and enhancements of the star formation activity. Hereafter, we will define a merger as the complete process from the time two baryonic objects are first identified as sharing the same dark matter halo to their actual fusion. Whenever tidal fields are mentioned, they refer to those originated during the mergers. Interactions that do not result in the actual fusion of the structures are not considered in this paper. There has also been much work on the link between the high star formation rates in ultra-luminous IRAS galaxies (ULIRGs) and their environments, although a consensus on how the galaxy environment influences its star formation rate has yet to be reached. Lawrence et al. (1989) find that most ($`\sim 70\%`$) of the ULIRGs they studied appeared to have close companions or be morphologically disturbed (Lucas et al. (1997); see also Sanders & Mirabel (1996)), whilst a significant minority appear to be isolated. Other authors, such as Clements et al. (1996) and Sanders et al. (1988), estimate that $`99\%`$ of ULIRGs are in interacting systems. It is clear that much of the discrepancy is due to the difficulty in classifying quantitatively what is meant by ‘interacting’.
Despite this reliance on a rather subjective classification of the data, there seems to be a trend of increasing likelihood of interaction with increasing starburst luminosity in IRAS selected galaxies. All the above studies of the environments of star-bursting galaxies assume that both the environmental trigger and the starburst episode are contemporaneous. As Joseph and Wright (1985) point out, this might not be true if we use tidal tails as the evidence for an interaction since, depending on the stellar initial mass function (IMF), the tidal tail might last longer than the starburst it triggers. Therefore it is still unclear whether an interaction/merger is required for a starburst to take place, and whether the two happen simultaneously. On the other hand, recent observations (mainly from the Hubble Deep Field) show an increasing number of irregular/interacting or morphologically disturbed objects with look-back time, which also exhibit strong star formation activity (e.g. Glazebrook et al. (1995); Ellis et al. (1996); Lilly et al. (1997); Lowenthal et al. (1997); Guzmán et al. (1997); Driver et al. (1998)). It is not yet clear to what extent these data are fully consistent with predictions of models based on hierarchical clustering. Nevertheless, all of them agree in supporting a scenario where interactions/mergers and star formation activity increase with red-shift (e.g. Bouwens et al. (1997); Driver et al. (1998); Brinchmann et al. (1998)). From a theoretical point of view, semi-analytic models have been quite successful in formulating a picture of galaxy formation and evolution, although they have to resort to global recipes in order to take into account complex effects (Kauffmann et al. (1993); Baugh et al. (1996)). In this regard, Lacey & Silk (1991) have also used tidal interactions between galaxy-sized objects for controlling the star formation processes in galactic objects, although they did not consider mergers of substructure as a possible cause. Baugh et al. (1996) formulate a semi-analytic model where the morphology of a galaxy is determined by its history of major mergers, which trigger violent star formation. No predictions for the strength, duration and frequency of these stellar bursts could be made with this model. Self-consistent numerical simulations have proved a powerful tool for the study of the formation of galaxies in a cosmological framework (Cen et al. (1998); Hernquist et al. (1996); Theuns et al. (1998); Tissera et al. (1997); Pearce et al. (1999); Katz et al. (1999)). They have the advantage over semi-analytic models of being able to provide a consistent description of the evolution of the structure in the non-linear regime. As a consequence, physical processes related to the evolution of the dissipative component can be included and modeled on a more physical basis. Among these processes, star formation mechanisms are highly important because of their outstanding role in the formation of structure on galactic scales. However, the treatment of processes related to star formation is still in its beginnings, since numerical problems, added to the lack of a full theoretical understanding, make its implementation difficult. Several authors have analyzed numerical models of the formation of individual galaxies including simple schemes to transform the cold dense gas into stars (e.g. Katz (1992); Navarro & White , Gerritsen (1997)).
Of most relevance to the analysis carried out in this paper are the results of Barnes & Hernquist (1991, 1996) and Mihos & Hernquist (1994, 1996). These authors modeled mergers of two disk/halo galaxies using hydro-dynamical codes. They found that a merger with a satellite can induce the formation of a bar, along which the gas is compressed and shocked, losing angular momentum. This process would trigger a gas inflow which could fuel star formation activity at the center. These models illustrate one way in which tidal fields can produce gas inflows in strongly interacting and merging galaxies. Mihos and Hernquist (1996) also showed that the internal structure of a parent disk-like galaxy is relevant for regulating the rate of this gas inflow. Hernquist (1989a,b) showed that even the accretion of low-mass satellites by disks can result in an inward gas flow. In all cases, by applying the Schmidt law, an enhancement of the gas density directly implies an increase of the star formation activity. With the models described in this work, we intend to establish the relevant characteristics of the interplay between hierarchical aggregation and star formation, and to assess whether the outcomes are consistent with observations of galaxies undergoing star formation activity in a cosmological framework. Note, however, that we intend to analyze the star formation process in normal field galactic-like objects. We will refer to mergers that arise as a consequence of their formation and evolution in a hierarchical cosmological scenario. In hierarchical clustering scenarios, galactic objects build up through the aggregation of substructure and may suffer major and minor encounters throughout their life. Their star formation histories may be affected by these mergers and, as already pointed out by several authors (e.g., Mihos & Hernquist (1996); Barnes & Hernquist (1996); Baugh et al. (1996)), mergers may contribute to triggering star formation. As an attempt to assess the relation between this hierarchical build-up of the structure and the star formation process, we analyze the evolutionary history of each halo in our simulations, looking for possible correlations between star formation enhancements and mergers. The main difference between previous works and this one is that, in this case, every merger event arises naturally, in consistency with a cosmological model. In fully consistent cosmological simulations, the distribution of merger parameters, such as the orbit characteristics, the orbital energy and angular momentum, the masses of the virial halos and baryonic clumps involved in the merger event, and the spin, internal structure and relative orientation of the baryonic clumps that are about to merge, among others, arises naturally at each epoch as a consequence of the initial spectrum of the density fluctuation field, its normalization, and the cosmological model and its parameters. In controlled merger experiments, they are set by hand. Moreover, in fully-consistent cosmological simulations, the effects of diffuse gas accretion and of interactions with small satellites on the assembly of a galactic object are also accounted for. We also intend to determine to what extent a simple star formation (SF) model can follow the SF history of a galaxy-like object, whether those SF histories are consistent with observations, and how sensitive they are to the free parameters of the model.
We will also use the technique described by Tissera, Lambas & Abadi (1997) to assign luminosities at different wavelengths to the simulated galactic objects. This implementation allows us to follow the evolution of their colors as a function of $`z`$ and in relation to their evolutionary history. This paper is organized as follows. Section 2 describes the main characteristics and parameters of the simulations. Section 3 analyses the results and compares them with recent observational data. In Section 4 we investigate the effects of mergers on the color distributions of the galaxies formed, and Section 5 outlines the results.

## 2 Simulations

The simulations analyzed follow the evolution of a typical region of the Universe using a version of the AP3M+SPH code (Tissera et al. (1997)). We carried out three simulations (hereafter referred to as S.1, S.2 and S.3). S.1 and S.2 share the same initial conditions for the distributions of gas and dark matter particles, whilst S.3 has different ones. Simulations S.1 and S.2 are identical except for the star formation efficiency, so any difference between them is due to the SF process and the fact that stars behave as collision-less particles. The initial conditions are set up by using ACTION (for S.1 and S.2) and COSMICS (for S.3), and are consistent with a Cold Dark Matter (CDM) spectrum with $`\mathrm{\Omega }=1`$, $`\mathrm{\Lambda }=0`$, $`\mathrm{\Omega }_\mathrm{b}=0.1`$, and $`\sigma _8=0.4`$ for S.1, S.2 and $`\sigma _8=0.67`$ for S.3. We used $`N=262144`$ particles ($`\mathrm{N}_{\mathrm{dark}}=259520`$ and $`\mathrm{N}_{\mathrm{bar}}=26214`$) in a comoving box of length $`L=5h^{-1}`$ Mpc ($`H_0=100\,h\ \mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$, $`h=0.5`$). Note that dark matter and baryonic particles have the same mass ($`\mathrm{M}_{\mathrm{part}}=2.6\times 10^8\,\mathrm{M}_{\odot}`$). The gravitational softening used in these simulations is 3 kpc, and the smallest smoothing length allowed is 1.5 kpc. The time-steps of integration used are $`\mathrm{\Delta }t=1.4\times 10^7`$ yr for S.1 and S.2, and $`\mathrm{\Delta }t=1.2\times 10^7`$ yr for S.3. These simulations have proved to be adequate to study processes related to the formation of galaxies in a fully cosmological framework (e.g., Tissera et al. (1997); Tissera & Domínguez-Tenreiro (1998); Domínguez-Tenreiro et al. (1998); Tissera, Sáiz & Domínguez-Tenreiro (1999)), despite having lower hydro-dynamical resolution when compared to prepared ones (Navarro & Steinmetz (1997)). These simulations include star formation according to the algorithm described by Tissera et al. (1997). The gas cools through radiative cooling; we use the approximation for the cooling function given by Dalgarno and McCray (1972). Gas particles are transformed into stars if they are cold ($`T<T_{*}`$, with $`T_{*}=3\times 10^4`$ K for S.1 and S.2, and $`T_{*}=10^4`$ K for S.3), dense ($`\rho >\rho _{\mathrm{crit}}\simeq 7\times 10^{-26}\,\mathrm{g}\,\mathrm{cm}^{-3}`$) and satisfy the Jeans instability criterion. When a gas particle satisfies these conditions, it is transformed into a star particle after a time interval ($`\tau `$)<sup>1</sup><sup>1</sup>1 This time interval is the time estimated from equation (1) over which 99 $`\%`$ of the gas mass in a particle is expected to be transformed into stars (Navarro & White ). It is estimated as: $`\tau =-\mathrm{ln}(0.01)\,t_{*}/c`$. 
over which its gas mass is being converted into stars according to

$$\frac{d\rho _{\mathrm{star}}}{dt}=c\frac{\rho _{\mathrm{gas}}}{t_{*}}$$ (1)

where $`c`$ is the star formation efficiency ($`c=0.01,0.1,0.01`$ for S.1, S.2 and S.3, respectively) and $`t_{*}`$ is a characteristic time-scale assumed to be equal to the dynamical time of the particle ($`t_{*}=t_{\mathrm{dyn}}=(3\pi /(16G\rho _{\mathrm{gas}}))^{1/2}`$). The simulations have different star formation efficiency parameters, $`c`$. The total number of stars formed will depend on the values of $`c`$ and $`T_{*}`$, which can be adjusted in order to reproduce observations. Simulations S.1, S.2 and S.3 have transformed $`12\%`$, $`28\%`$ and $`7.5\%`$, respectively, of their total baryonic mass into stars at $`z=0`$. Because of the higher value of $`\sigma _8`$ used in S.3, halos collapse earlier and reach higher densities sooner. As a consequence, stars form from higher $`z`$, depleting more quickly the gas eligible for star formation. Hence, the SF history depends also on the normalization of the power spectrum (Baugh et al. (1996)). Different values of the critical temperature $`T_{*}`$ have been used. Simulation S.3 has a lower value, which helps to produce fewer stars than in S.1. The total stellar masses at $`z=0`$ imply a stellar density parameter $`\mathrm{\Omega }_\mathrm{s}`$ greater than the observed ones: $`0.005<\mathrm{\Omega }_\mathrm{s}h^2<0.009`$ (Madau (1998)), although the latter are subject to serious uncertainties, such as dust effects, which can lead to important underestimations. Recent observations in the mid and far infrared suggest higher values (Flores et al. (1998)). Note also that we have adopted $`\mathrm{\Omega }_\mathrm{b}=0.10`$, a value that could be considered somewhat high. However, recent measurements of the deuterium abundance in clouds of hydrogen at high red-shift (Burles & Tytler ; Burles & Tytler ), if correct, make it possible to constrain the baryon fraction to a precision of $`10\%`$, $`\mathrm{\Omega }_\mathrm{b}=0.08\pm 0.008`$ (for $`h=0.5`$, as we have assumed), close to the value we have used. Feedback effects and metallicity enrichment by supernova explosions have not been included in these simulations. Feedback processes are believed to play a key role in helping to set a self-regulated star formation regime (Silk (1997)), but their modeling in hydro-dynamical simulations is still quite controversial (Katz (1992); Navarro & White ; Metzler & Evrard (1995); Yepes et al. (1997)). Since the SF process is not completely understood, it is wise to analyze these two effects in separate steps. One shortcoming of numerical simulations is that numerical resolution decreases with look-back time: the higher the red-shift, the smaller the objects and so the smaller the number of particles used to resolve them. This is a ubiquitous problem in all numerical simulations, and a very large number of particles would be required to improve on it. Since this is impossible to accomplish at present, results should always be considered with caution. In order to minimize this problem, we will study the evolutionary history of the larger objects in our simulations (see Section 3). Note, however, that we use fully self-consistent cosmological simulations; hence, the hierarchical evolution of a galactic halo is very well represented, and so are mergers and the effects of tidal fields generated by the nearby structure.
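For concreteness, the conversion rule can be written in a few lines (Python, cgs units; the array names and the boolean Jeans flag are illustrative, and the thresholds are those quoted above):

```python
import numpy as np

G_CGS      = 6.67e-8    # gravitational constant [cm^3 g^-1 s^-2]
RHO_CRIT   = 7.0e-26    # density threshold for SF [g cm^-3]
SEC_PER_YR = 3.156e7

def can_form_stars(T, rho, jeans_unstable, T_star=3.0e4):
    """Eligibility: cold, dense and Jeans-unstable gas (Section 2)."""
    return (T < T_star) & (rho > RHO_CRIT) & jeans_unstable

def conversion_time_yr(rho_gas, c=0.01):
    """tau = -ln(0.01) t_dyn / c: time after which ~99% of a particle's
    gas would be consumed by eq. (1) at fixed density."""
    t_dyn = np.sqrt(3.0 * np.pi / (16.0 * G_CGS * rho_gas))   # seconds
    return -np.log(0.01) * t_dyn / c / SEC_PER_YR
```

Note how a lower efficiency $`c`$ lengthens the conversion time, which is the origin of the delayed, lower star formation rates quoted below for the runs with $`c=0.01`$.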
## 3 Analysis

### 3.1 Global Star Formation

Recent observations of objects at different $`z`$ have provided information about the SF history of the Universe (Somerville & Primack (1998) and references therein). Although many uncertainties, such as the initial mass function, reddening by internal and Ly$`\alpha `$ cloud absorptions, and the fact that UV luminosity traces mainly the formation of massive stars, among others, affect these results, it is possible to envisage a global SF trend. In this sense, this observational relation can be a useful tool to assess how the SF proceeds within a simulated box. It has to be stressed that we do not intend to explain what is observed, but to use observations as a global constraint to qualitatively compare the effects that the different SF parameters may have on the SF history of the simulated box. We estimate the global SFR by reckoning the stellar mass formed in the simulated box at each $`z`$, and smoothing it over time in order to diminish the noise introduced by the discreteness of the SF process. In order to compare these mean star formation rate $`<SFR>`$ histories with observational results, we calculate the cosmic star formation rate density, $`\rho _{\mathrm{SFR}}=<SFR>/\mathrm{V}`$, at each time-step of integration in each simulation. V is the comoving volume of the simulated boxes at each corresponding $`z`$. In Figure 1, we plot $`\rho _{SFR}`$ for simulations S.1, S.2 and S.3, and include recent observational results. As can be seen from this figure, the simulated $`\rho _{SFR}`$ are quite different. None of them has a peak at $`z\simeq 1.5`$ like the one claimed by Madau et al. (1996), but they are within the observed range. Simulations S.1 and S.2, because of the combined effects of the lower normalization parameter $`\sigma _8`$ and the SF parameters, start forming stars later on. The only difference between S.1 and S.2 is the value of $`c`$. Therefore, it follows directly that a change in the star formation efficiency introduces a delay in the process and decreases the overall star formation rates. To fill up the gap between $`z\simeq 2`$ and the last point measured by Madau et al. (1996) at $`z=5.5`$, it can be estimated that approximately 20 gas particles should have been transformed into stars at a constant rate of 45 $`\mathrm{M}_{\odot}/\mathrm{yr}`$. This number represents less than $`10\%`$ of the total stellar mass formed in S.1 and S.2. Whence we can conclude that the results for $`z<2`$ will not be strongly affected by the changes in the SF process that would be needed to obtain a complete SFR history at higher $`z`$. Simulation S.3 starts forming stars at larger $`z`$ due to the higher normalization parameter $`\sigma _8`$ adopted. But as a consequence of the low $`c`$ and $`T_{*}`$ values used, the SF rates are lower than those in the other two simulations. Globally, the $`<SFR>`$ in S.3 shows a trend similar to observations. In this simulation, the total amount of stars formed at $`z>2`$ is $`30\%`$ of the total stellar mass at $`z=0`$, while $`55\%`$ was formed at $`1<z<2`$. Note that the peak of SF is actually at earlier times, $`z\simeq 3.5`$, and that it decreases slowly towards smaller $`z`$, in accordance with recent results from Cowie, Songaila and Barger (1999). It has to be mentioned that numerical resolution affects SF more strongly at higher $`z`$ than at lower ones, and that the inclusion of feedback mechanisms could have a non-negligible impact on $`\rho _{\mathrm{SFR}}`$. However, current numerical models that include SN feedback are still limited by numerical and theoretical problems.
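The bookkeeping behind $`\rho _{\mathrm{SFR}}`$ is a simple histogram of star-particle formation times; a minimal sketch (units and names illustrative):

```python
import numpy as np

def sfr_density(t_form, m_star, t_edges, box_mpc):
    """Cosmic SFR density: stellar mass formed per time bin, divided by
    the bin width and the comoving volume of the box.
    t_form [yr], m_star [Msun], box_mpc: comoving box side [Mpc]."""
    mass, _ = np.histogram(t_form, bins=t_edges, weights=m_star)
    return mass / np.diff(t_edges) / box_mpc**3   # Msun / yr / Mpc^3
```

Since the volume is comoving, it is constant across outputs; only the time binning and the list of formation events change with $`z`$.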
However, current numerical models that include SN feedback are still limited by numerical and theoretical problems. To sum up, the normalization of the power spectrum, the star formation efficiency and the minimum $`T_{\ast }`$ adopted in the SF model affect the global SFR history of the simulated volumes and, consequently, the SFR of each individual galaxy-like object. The value of $`T_{\ast }`$ is determined in part by the available cooling functions and, indirectly, by the numerical resolution of the gas component. The normalization parameter $`\sigma _8`$ is now defined within a narrow range depending on the adopted cosmology. However, many questions remain to be answered about the bias and its dependence on scale and red-shift. The value of $`c`$ is the actual free parameter of the SF model. In the following sections, we use these experiments (S.1, S.2 and S.3) to analyze the SF process within each galaxy-like object in relation to their merger histories, and assess how these parameters affect the individual SF histories. ### 3.2 Star Formation History and Mergers In this paper, we restrict the analysis to typical field galactic-like objects (GLOs). At $`z=0`$, GLOs are identified at their virial radius, i.e., the radius for which the density contrast is estimated to be $`\delta \rho /\rho \approx 200`$ (White & Frenk). We reject those GLOs with a comparable companion within two virial radii at $`z=0`$. In this way, we avoid complications due to tidal fields originating from the presence of several companions and/or the underlying over-density, focusing on the effects produced by the assembly of each individual object through hierarchical growth. In particular, in S.3 some objects have been discarded since they belong to groups. Each GLO is composed of a dark matter halo and a baryonic component in the form of gas and/or stars. We will only analyze in detail objects resolved with more than 250 baryonic particles within their virial radius at $`z=0`$. Table 1 gives the total number of dark matter ($`\mathrm{N}_{\mathrm{dark}}`$) and baryonic ($`\mathrm{N}_{\mathrm{bar}}`$) particles within the virial radius for each GLO at $`z=0`$. The SF algorithm used is very effective at forming stars in the dense cores of the galactic-like objects, so it is easy to isolate star particle clumps. We have named GLOs in S.1 and S.2 using the same label code as the one chosen in Tissera & Domínguez-Tenreiro (1998) to identify halos. Simulations S.1 and S.2 correspond to their simulations I.2 and I.3. The main baryonic clumps in GLOs in S.1 that resemble a disk-like structure (DLO) have been studied from a dynamical point of view by Domínguez-Tenreiro et al. (1998). GLOs 3, 4, 5 and 6 in Table 1 host DLOs 1, 2, 4 and 3 in their Table 1, respectively. We then follow back the evolution of the matter inside their virial radius as a function of the look-back time for the available outputs of the simulations. We construct the merger trees of each GLO identified at $`z=0`$ in the three simulations by recursively tracing back in time the objects which contain particles that end up in the final GLO. In this way, we identify the progenitors and the satellites with which they merged at all outputs of the simulations (every 100 time-steps for S.1 and S.2, and 20 time-steps for S.3). All objects, progenitors and satellites, are identified at their virial radius at the corresponding $`z`$. The set of objects identified in this way gives a complete record of the merger history of each GLO. 
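A minimal sketch of how such a virial radius can be extracted from a simulation snapshot is given below; the function name and the use of the mean background density are illustrative choices of ours, not a description of the exact halo finder used.

```python
import numpy as np

def virial_radius(r_sorted, m_part, rho_back, contrast=200.0):
    """Radius where the mean enclosed density contrast falls to ~200.

    r_sorted : particle radii from the halo center, sorted ascending
    m_part   : corresponding particle masses
    rho_back : mean background density of the simulated volume
    """
    m_enc = np.cumsum(m_part)
    delta = m_enc / (4.0 / 3.0 * np.pi * r_sorted**3) / rho_back
    below = np.nonzero(delta <= contrast)[0]
    return r_sorted[below[0]] if below.size else r_sorted[-1]
```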
We will assume that the progenitor clump or parent galaxy of a GLO at $`z=0`$ is the most massive object identified at $`z>3`$ from its merger tree. A merger will be defined as the complete process from the time two baryonic objects are first identified as sharing the same dark matter halo to their actual fusion. We will not distinguish between major and minor mergers. Instead, a merger will be counted each time the progenitor fuses with a satellite of more than 10 % of its virial mass at the time of the merger. Otherwise (i.e., less than 10 %), it will be considered accretion or infall. We can, then, follow the evolution with look-back time of each dark matter halo and its baryonic main clump. Some small satellites may have been formed and accreted between outputs. In this case, they would be missed by our analysis. However, this situation would happen only for very small objects, which can, anyway, be counted as accretion. Note that we keep track of the evolution of all smaller substructures that merge with the progenitor, but we do not look at them in detail, as only their virial mass and gas content at the time of the merger are required for this analysis. We estimate the star formation history of each GLO by reckoning the stellar mass formed in its progenitor objects at each $`z`$ and then smoothing these distributions over time. The reason for adopting this procedure is that the SF model used in these simulations transforms a gas particle into a star one at once, after a time delay $`\delta t`$ over which the gas is supposed to be transformed into stars, as explained in Section 2. A typical value for this time delay is $`<\delta t>\approx 20\mathrm{\Delta }t`$, where $`\mathrm{\Delta }t`$ is the integration time-step of the simulations. The $`SFR`$ distributions have then been smoothed by binning them in time-bins of 20 points centered at the formation time of each star particle and averaging the stellar mass formed within each time-bin. In Figure 2 we show, as an example, the star formation history from $`z=1`$ of the galactic objects 1, 2, 3 and 4 in simulation S.1. We have plotted the $`<SFR>`$ ($`\mathrm{M}_{\odot }/\mathrm{yr}`$) in the progenitor object versus look-back time ($`\tau (z)=1-(1+z)^{-3/2}`$ for $`\mathrm{\Omega }=1`$). The time at which a satellite enters the virial radius of the progenitor has been indicated with an arrow pointing up, while the actual fusion of the baryonic cores has been indicated with an arrow pointing down. These are all the merger events in which the GLOs are involved in the range depicted in the figure. As can be seen in Figure 2, in all cases there is an increase of SF related to a merger with a satellite of more than 10 % of the progenitor mass. This situation is common to all GLOs in the other simulations. From this figure it can also be seen that, when a satellite enters the virial region of the progenitor, there is a delay in the fusion of the gaseous cores (Navarro, Frenk & White (1995)). During this interval, the objects orbit around each other and are under the effects of strong tidal fields. According to the analysis of some authors (e.g., Hernquist 1989a; Hernquist 1989b; Barnes (1988); Domínguez-Tenreiro et al. (1998)), the interactions and fusions with satellites may supply gas to the central region of the parent galaxy, fueling a burst of star formation. 
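The 10% rule used above maps onto a simple classifier. The sketch below is a schematic of this bookkeeping; the data layout (a list of outputs, each carrying the progenitor mass and its newly shared satellites) is an assumption made for illustration.

```python
def classify_event(m_sat, m_pro, threshold=0.10):
    """'merger' if the satellite exceeds 10% of the progenitor virial mass."""
    return "merger" if m_sat / m_pro > threshold else "accretion"

def record_events(outputs):
    """Collect (label, mass ratio) pairs along a GLO's merger tree.

    outputs : iterable of dicts like {"m_pro": 1e12, "new_satellites": [5e10]},
              one per simulation output, listing satellites that start sharing
              the progenitor's dark matter halo at that output.
    """
    return [(classify_event(m_sat, snap["m_pro"]), m_sat / snap["m_pro"])
            for snap in outputs for m_sat in snap["new_satellites"]]
```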
Recall that among the different processes that can affect the SF history of the GLOs in hierarchical clustering scenarios (such as the assembly of the main object at high $`z`$, the merger of the progenitor with other clumps, gas compression as it cools and collapses inside dark matter halos during the quiescent phases of the assembly of the GLOs, and the interactions with neighboring structures that do not end up in actual fusions), in this work, we refer only to mergers (as defined above). Consequently, we only study those well-defined peaks that are located within merger events (i.e., those within arrows in Figure 2). So we have not analyzed the first SF peaks that can be related to the assembly of the progenitor object (i.e., in Figure 2, the peaks at $`\tau (z)\approx 0.32`$ in GLOs 1 and 4). These peaks occur at a $`z`$ where the virial mass of the progenitor is always less than $`20\%`$ of the final GLO virial mass, and they could be strongly affected by numerical resolution. During the merger process (i.e., from the time the satellite enters the virial radius of the progenitor until one single baryonic clump forms) the progenitor and its satellite continue accreting gas. In our simulations, this fraction is important since, in some cases, it equals the amount of new stars formed. For example, GLO 2 in S.1 transformed $`30\%`$ of its original gas into stars during a merger with an object with a mass of $`40\%`$ of the progenitor mass. During that process, the amount of gas accreted was $`29\%`$ of the gas mass of the final object, and the ratio between the new stars and the old ones was $`0.82`$. The same object in S.2 transformed $`83\%`$ of its initial total gas into stars, accreted $`30\%`$ of the remnant gas, and the burst resulted in a $`25\%`$ increase of stellar mass. ### 3.3 Star Formation Peaks Because the star formation algorithm used in this work, in practice, transforms a gas particle into a star one at once (after satisfying all the requirements mentioned in Section 2), the overall star formation history of a galaxy-like object is discrete and quite noisy. Although it is clear when there is a peak in the star formation history, this noise makes it difficult to isolate the stars formed in a single burst and, consequently, to classify the strength of a peak of new stars. In order to do so more rigorously, we took the following steps. We estimate the overall minimum star formation rate in a GLO, $`\delta _{\mathrm{min}}`$, at any red-shift. We then subtract a factor $`f`$ of this minimum from the total star formation rate history, so the peaks are clearly identified as the values with a signal larger than a threshold, $`\sigma _{\mathrm{min}}=f\times \delta _{\mathrm{min}}`$. We tried different values of $`f`$, choosing $`f=3`$ since this is the minimum one that allows us to individualize peaks in all GLOs in all simulations. Values below $`\sigma _{\mathrm{min}}`$ are considered 'ambient star formation rate' (ASFR). This ASFR can be explained as being driven by the increase of cold dense gas as the result of the cooling and collapse of baryons onto the potential well of the halo. We estimate that the ASFRs take values of $`\lesssim 3\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$. The mean values for $`<\delta _{\mathrm{min}}>`$ are 0.90, 1.45, 0.89 $`\mathrm{M}_{\odot }/\mathrm{yr}`$ for S.1, S.2 and S.3, respectively. The larger mean value measured for S.2 reflects the fact that the rate of star formation is always higher in this simulation (since $`c`$ is higher) than in the other ones. 
In Figure 2, the horizontal solid lines represent $`\sigma _{\mathrm{min}}`$ for each GLO. From this figure we see that the total star formation rate histories are composed of this approximately constant ambient star formation rate over which stellar bursts are superposed. For each stellar burst, we estimate the value of a local maximum ($`\sigma _{\mathrm{star}}`$), its duration ($`\tau _{\mathrm{burst}}`$) and the total amount of stars ($`M_{\mathrm{burst}}`$) formed during this period of time <sup>2</sup><sup>2</sup>2 Peaks have to comprise more than three points higher than $`\sigma _{\mathrm{min}}`$ in order to be classified as a stellar burst; their durations, $`\tau _{\mathrm{burst}}`$, are estimated as the period of time between the first point to surpass this threshold and the last point which satisfies this condition. These last two parameters ($`M_{\mathrm{burst}}`$ and $`\tau _{\mathrm{burst}}`$) are sensitive to $`f`$. In order to carry out a consistent analysis, once $`f`$ is chosen, it is kept constant for all objects in all simulations. In general, peaks have $`\sigma _{\mathrm{star}}>3\sigma _{\mathrm{min}}`$. As already discussed, we have restricted this study to those peaks that can be directly related to merger events, after the main objects or progenitors are already formed and better resolved. In Table 1 we summarize the principal parameters that characterize those star formation peaks, including the ratio between the virial masses of the progenitor, $`M_{\mathrm{pro}}`$, and the satellite, $`M_{\mathrm{sat}}`$, at the time of the merger (i.e., the time when the satellite enters the virial radius of the parent GLO), and the ratio between the stellar mass content, $`M_{\mathrm{star}}`$, of the system and its total baryonic mass, $`M_{\mathrm{bar}}`$, at the same time. The ratio $`M_{\mathrm{star}}^z/M_{\mathrm{star}}^0`$ is the fraction of the total stellar mass of the progenitor GLO at $`z=0`$ that has actually been formed at the red-shift of each merger ($`M_{\mathrm{star}}^z`$). GLO 6 in S.2 has not been included since it is not possible to clearly isolate the star formation peaks associated with the mergers, due to the high level of noise in its SF history. During some merger events, two stellar peaks have been detected. They are denoted by a letter D in Table 1 and will be discussed in more detail in a separate paper (Tissera et al. 2000). Note that we study a total number of 25 different merger events. So, even if the GLOs whose evolutionary histories are analyzed are restricted to the more massive ones, we look at all mergers recorded in their merger trees. Hence, this is equivalent to having performed 25 different mergers using controlled toy-models, with the advantage that each one of these mergers has physical properties determined by the underlying cosmology and the astrophysical model, and occurs at a different stage of evolution. Unfortunately, this sample is not large enough to study the possible dependence of the stellar burst characteristics on the red-shift. Let us now investigate whether the parameters that characterize the stellar bursts are consistent with observations of galaxies undergoing strong stellar activity at different red-shifts, and how they change with the model parameters ($`\sigma _8`$, $`c`$, $`T_{\ast }`$). 
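The peak-isolation procedure above translates directly into code. The sketch below follows the stated rules (threshold $`\sigma _{\mathrm{min}}=f\times \delta _{\mathrm{min}}`$ with $`f=3`$, bursts of more than three consecutive points above threshold); the trapezoidal integration used for the burst mass is our own simplification.

```python
import numpy as np

def find_bursts(t, sfr, f=3.0):
    """Return (t_start, t_end, M_burst, sigma_star) for each stellar burst."""
    sigma_min = f * sfr.min()
    above = sfr > sigma_min
    bursts, i = [], 0
    while i < len(sfr):
        if above[i]:
            j = i
            while j + 1 < len(sfr) and above[j + 1]:
                j += 1
            if j - i + 1 > 3:  # more than three points above threshold
                m_burst = np.trapz(sfr[i:j + 1], t[i:j + 1])  # stellar mass formed
                bursts.append((t[i], t[j], m_burst, sfr[i:j + 1].max()))
            i = j + 1
        else:
            i += 1
    return bursts
```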
As can be seen from Table 1, the values of $`\tau _{\mathrm{burst}}`$, $`M_{\mathrm{burst}}`$ and $`\sigma _{\mathrm{star}}`$ vary among simulations and are different for the S.1 and S.2 versions of the same merger event. Table 2 shows the mean values $`<\sigma _{\mathrm{star}}>`$, $`<\tau _{\mathrm{burst}}>`$ and $`<M_{\mathrm{burst}}>`$ in units of $`\mathrm{M}_{\odot }/\mathrm{yr}`$, $`10^8\mathrm{yr}`$ and $`10^{10}\mathrm{M}_{\odot }`$, respectively. As expected, the higher values are measured for peaks in S.2 since, as discussed in Section 2, the gas is transformed into stars more efficiently in this simulation than in S.1 and S.3. Recall that the only difference between S.1 and S.2 is the $`c`$ parameter, and that S.1 and S.3 have different bias parameters and critical temperatures, $`T_{\ast }`$, but equal $`c`$ values. In S.3, at lower $`z`$, the stellar peaks are less important when compared to those in S.1, because a larger fraction of gas has been consumed at higher $`z`$ (so the gas density is lower at later times) and also because of the more restrictive temperature criterion $`T_{\ast }`$ used. At those low $`z`$, GLOs in S.1 are more gas-rich and can produce stronger stellar bursts. For simulation S.3, we see that the mean stellar mass, $`<M_{\mathrm{burst}}>=3.90\times 10^9\mathrm{M}_{\odot }`$, and the starburst time-scale, $`<\tau _{\mathrm{burst}}>=5.14\times 10^8\mathrm{yr}`$, are consistent with observed values inferred for starburst galaxies and high-$`z`$ objects undergoing important star formation activity (see Sawicki & Yee 1997; Kennicutt 1998 and references therein). Also note that these values depend on $`f`$. Had we chosen a higher threshold ($`\sigma _{\mathrm{min}}`$), $`\tau _{\mathrm{burst}}`$ and $`M_{\mathrm{burst}}`$ would have been smaller in all cases. Note also that the simulated bursts occur at different $`z`$, so a direct comparison with observed objects at high $`z`$ is complex since, for example, the effects of dust reddening are still uncertain and lead to underestimations of the SFRs. On the other hand, a comparison with only local starburst galaxies could be misleading, since the properties of the galactic objects change with $`z`$ and so could their SF histories (e.g., Guzmán et al. (1997); Driver et al. (1998); Flores et al. (1998)). The next question concerns the factors triggering these stellar bursts and whether cause-effect relations can be isolated. We estimate the ratio between the virial mass of the satellite ($`M_{\mathrm{sat}}`$) that is falling in and the virial mass of the progenitor ($`M_{\mathrm{pro}}`$) just before the satellite enters the virial radius of the parent galaxy. We plot $`\sigma _{\mathrm{star}}`$ vs. $`M_{\mathrm{sat}}/M_{\mathrm{pro}}`$, including the double peaks (see Section 3.3), in Figure 3a. The response is different depending on the simulation. In simulation S.2, $`\sigma _{\mathrm{star}}`$ values are higher than those in S.1, although the GLOs are numerically identical and have the same merger trees in both simulations. This difference originates in the different star formation efficiency, as already discussed: it is easier to transform a gas particle into a star one in S.2 than in S.1. The same argument is valid for S.3, which has the lowest values for roughly the same $`M_{\mathrm{sat}}/M_{\mathrm{pro}}`$ ratios. Note that events with approximately equal $`M_{\mathrm{sat}}/M_{\mathrm{pro}}`$ ratios produce different $`\sigma _{\mathrm{star}}`$ in the same simulation. 
An interesting fact observed from this figure is that even a merger with a low-mass satellite, $`M_{\mathrm{sat}}/M_{\mathrm{pro}}\approx 0.10`$, correlates with an increase of the star formation rate in all simulations, with $`\sigma _{\mathrm{star}}`$ values similar to those corresponding to higher $`M_{\mathrm{sat}}/M_{\mathrm{pro}}`$. This result is in agreement with Mihos & Hernquist (1996), who studied in detail the accretion of low-mass satellites and found that even a merger with an object with a mass of 10% of the parent galaxy mass can produce an inflow of gas to the center of the main object, fueling a starburst. This result may suggest that a comparable companion is not a necessary condition for triggering star formation, but that a smaller one may produce significant effects. The availability of gas capable of forming stars is one necessary condition for triggering a stellar burst. Hence, a correlation between the gas content of the system and the strength of the stellar peaks would be expected. In Figure 3b, we plot $`\sigma _{\mathrm{star}}`$ vs. $`M_{\mathrm{gas}}/M_{\mathrm{bar}}`$ for the objects in S.1, S.2 and S.3, where $`M_{\mathrm{gas}}`$ is the total gas mass of the satellite and its progenitor within $`r_{200}`$ before the merger, and $`M_{\mathrm{bar}}`$ is their total baryonic mass (gaseous and stellar masses of the satellite and the progenitor together at $`r_{200}`$) at the same time. It can be seen that there is no correlation. Note also that the more gas-rich objects do not always have the larger $`\sigma _{\mathrm{star}}`$, and that equally gas-rich objects can have different $`\sigma _{\mathrm{star}}`$, even in the same simulation, that is, with the same SF parameters. However, the absolute value of the bursts depends on $`c`$: S.2 has higher $`\sigma _{\mathrm{star}}`$ values even though the objects are more gas-poor than their counterparts in S.1. Note also that within the same simulation, GLOs have very similar gas abundances. Hence the fact that equally massive mergers produce different $`\sigma _{\mathrm{star}}`$ in the same simulation cannot be directly related to a difference in gas richness of the objects involved. Given the durations of the bursts, $`\tau _{\mathrm{burst}}`$, and the total stellar mass formed during that period, $`M_{\mathrm{burst}}`$, it is possible to estimate an overall star formation rate $`<SFR>_{\mathrm{burst}}=M_{\mathrm{burst}}/\tau _{\mathrm{burst}}`$ associated with the burst. We found values ranging from $`\approx 50\mathrm{M}_{\odot }/\mathrm{yr}`$ to $`\approx 2\mathrm{M}_{\odot }/\mathrm{yr}`$ depending on the simulation. For illustration purposes, we plot them against the ratio $`M_{\mathrm{sat}}/M_{\mathrm{pro}}`$ (Figure 3c). Again, we found no correlation signal. Since $`<SFR>_{\mathrm{burst}}`$ depends not only on the amount of stellar mass formed, but also on the duration of the burst, this lack of correlation is not surprising. The duration of the bursts, $`\tau _{\mathrm{burst}}`$, can be determined by different parameters, such as orbital orientation, internal structure and star formation efficiency, in a complex way. From this figure we can also observe that a merger with a satellite of $`0.10`$ or $`0.40`$ of the mass of the parent object may produce the same average $`<SFR>_{\mathrm{burst}}`$ within the same simulation. 
This fact supports the idea that it is not only the masses of each component in each halo that matter, but that other factors could also be relevant, such as the dynamical characteristics of the encounter and the structural properties of the baryonic clumps that merge. We have also searched for possible correlations between both $`\tau _{\mathrm{burst}}`$ and $`M_{\mathrm{burst}}`$, on one hand, and $`M_{\mathrm{sat}}/M_{\mathrm{pro}}`$ and $`M_{\mathrm{gas}}/M_{\mathrm{bar}}`$, on the other. No signal was detected, implying that the total stellar mass and the burst duration are not simple functions of only the relative masses of the merging objects or their gas richness. However, they do depend on $`c`$, as already pointed out (see Table 1). These results prompt us to differentiate between minor and major mergers in order to ascertain whether a hidden effect could be disentangled. In Figure 4a we plot again $`\sigma _{\mathrm{star}}`$ vs. $`M_{\mathrm{gas}}/M_{\mathrm{bar}}`$ but, in this case, filled symbols represent major mergers, while open ones represent minor events (circles, triangles and pentagons for S.1, S.2 and S.3, respectively). The limit between major and minor mergers has been set at $`M_{\mathrm{sat}}/M_{\mathrm{pro}}=0.35`$ (Baugh et al. (1996)). We include single bursts and the first components of double ones (secondary components are formed with the remnant gas after the first one, so they actually depend on the properties of the first component). We can see from this figure that a minor merger can trigger a burst as strong as a major one, even if the system is less gas-rich, and that a major merger, in some cases, does not trigger a strong burst even in gas-rich collisions. To look more deeply into the burst process, we compare the amount of stars formed in a given stellar burst, $`M_{\mathrm{burst}}`$, with the amount of gas available in the system to form stars, $`M_{\mathrm{gas}}`$. In Figure 4b we do not see any clear correlation; that is, the amount of gas available at the beginning of the merger that is actually transformed into stars does not depend mainly on $`M_{\mathrm{gas}}`$. In most cases, it is smaller than the amount of gas available in the system, so the gas mass is not actually completely exhausted during a burst. This can also be seen in the SFR histories, from which we observe that the SF continues after a burst, although at a lower rate. An estimate of the efficiency of the star formation process in each burst can be defined as the fraction of available gas in the system, at the time the satellite enters the virial radius of the progenitor, that was actually converted into stars, $`M_{\mathrm{burst}}/M_{\mathrm{gas}}`$. We plot this burst efficiency ratio versus $`M_{\mathrm{gas}}/M_{\mathrm{bar}}`$ (Figure 5a) and $`M_{\mathrm{sat}}/M_{\mathrm{pro}}`$ (Figure 5b) for the single bursts and for the first peaks of double bursts. As can be seen from these figures, there is no correlation between the burst efficiency and the gas abundance of the GLOs, implying that, in these GLOs, the burst efficiency is not determined by the gas abundance (neither is there a correlation of the efficiencies with $`M_{\mathrm{gas}}`$, which suggests that they are unlikely to be determined by numerical resolution). But a trend is present with $`M_{\mathrm{sat}}/M_{\mathrm{pro}}`$, suggesting that massive mergers can induce more efficient transformations of the gas into stars. 
The mean values of $`M_{\mathrm{burst}}/M_{\mathrm{gas}}`$ for stellar bursts associated with minor ($`M_{\mathrm{sat}}/M_{\mathrm{pro}}<0.35`$) and major ($`M_{\mathrm{sat}}/M_{\mathrm{pro}}\geq 0.35`$) mergers for S.1, S.2 and S.3 are (0.24, 0.35), (0.23, 0.72) and (0.10, 0.19), respectively. We have averaged all peaks regardless of the red-shift at which they occurred. This trend has been reported by Mihos & Hernquist (1994, 1996), who used high-resolution models of pairs of merging galaxies, and successfully implemented by Somerville, Primack & Faber (1998) in a semi-analytic model. We found the same relation for objects formed in a cosmological context and for a range of merger parameters that arises naturally, in accordance with the stage of evolution of the GLOs. However, our efficiencies are not as high as those claimed by Somerville et al. (1998) and depend on the SF parameters. All these results suggest that at least a third parameter is playing a role in the triggering of the bursts: it is not enough to have available gas in the system, independently of the relative mass of the colliding objects. The gas has to be violently compressed on short time-scales in order to induce a starburst, and in this case, mergers seem to be doing part of the work. Concerning numerical resolution, the lack of correlation found between the burst characteristics and the gas abundance ($`M_{\mathrm{gas}}/M_{\mathrm{bar}}`$) or the gas mass ($`M_{\mathrm{gas}}`$, which gives a rough idea of the gas resolution of the system at the time of the merger) strongly suggests that they are not determined by numerical resolution. A second fact that supports this point is that peaks in S.2 have higher $`\sigma _{\mathrm{star}}`$ values than those of the same GLOs in S.1, even though the objects are more gas-poor than those in S.1. Because the SF process depends directly on the gas density, its correct numerical description is crucial. As shown by Tissera & Domínguez-Tenreiro (1998) and Domínguez-Tenreiro et al. (1998), the gas density within the dark matter halos of massive systems in these simulations is rather well described. In this sense, an advantage of these simulations is that the masses of the gas and dark matter particles are equal, implying that the dark matter is resolved with a factor of about 10 more particles than the gas. This fact ensures that two-body effects are unimportant and that the dark matter profiles are well represented in the central regions. An adequate resolution of the dark matter profiles strongly helps the gas to cool and collapse inside the central regions following a correct density profile (Steinmetz & White (1997)). This implies that the SF process, which depends on the gas density, can also be adequately followed. ## 4 Stellar Population and Color Distributions As already discussed in Section 3, the star formation history of each individual galactic object can be followed with look-back time. This information can be combined with stellar population synthesis models to estimate the luminosities and colors of galactic objects throughout their evolutionary history (Tissera et al. (1997)). We use the models of Bruzual & Charlot (1993) to calculate the luminosity of a particle as a function of wavelength $`\lambda `$ and $`z`$. We assume a Miller-Scalo initial mass function with a lower mass cutoff of $`0.1M_{\odot }`$ and an upper mass cutoff of $`125M_{\odot }`$, and a burst of 20 time-steps duration ($`2.8\times 10^8`$ yr for S.1 and S.2, and $`2.4\times 10^8`$ yr for S.3) for each particle. 
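The luminosity summation described here can be sketched as follows; `ssp_mag` is a hypothetical interpolator standing in for the Bruzual & Charlot tables, and the single-band, dust-free bookkeeping mirrors the simplifications stated above.

```python
import numpy as np

def glo_magnitude(ages, masses, ssp_mag):
    """Total absolute magnitude of a GLO in one band.

    ages    : ages of its star particles at the epoch of interest (yr)
    masses  : particle masses (Msun)
    ssp_mag : hypothetical interpolator giving the magnitude per Msun of a
              single-age population (e.g., built from Bruzual & Charlot tables)
    """
    lum = masses * 10.0 ** (-0.4 * np.array([ssp_mag(a) for a in ages]))
    return -2.5 * np.log10(lum.sum())

# A color is the difference of two such totals, e.g.:
# B_minus_V = glo_magnitude(ages, m, mag_B) - glo_magnitude(ages, m, mag_V)
```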
Then, we sum up the luminosities of the particles belonging to an object and, from the total luminosities, we estimate its colors and magnitudes at different wavelengths. No supernova energy injection or metallicity enrichment has been included. Nor have we included reddening effects, so some comparisons with observed data may turn out to be rather crude and unfair to the simulated colors. In Figure 6 we plot $`U-B`$ vs $`B-V`$ at $`z=0`$ for the simulations analyzed: S.1, S.2 and S.3. We also include the observational data obtained from the RC3 catalogue (de Vaucouleurs et al. (1991)), which has UBV photometry, Hubble types and measured red-shifts for $`10000`$ galaxies. We plot the $`U-B`$, $`B-V`$ locus for both early and late-type galaxies with $`z\leq 0.05`$ in RC3. We see that most of the simulated GLOs are bluer than the galaxies, although the maximum departure from the observations is at most $`0.3`$ magnitudes. This is clearly due to the fact that the star formation rate histories of the simulations produce higher rates than observed at $`z<0.05`$. But in spite of this fact, a better agreement between simulated and observed colors is not impossible; it only requires a lower star formation rate at $`z\sim 0`$, which can be accomplished, for example, by using a lower $`c`$ value or including SN feedback. It is also possible that, because GLOs are not very well resolved at high $`z`$ in these simulations, the gas is inefficiently converted into stars, leading to more gas-rich GLOs at lower $`z`$. On the other hand, higher resolution experiments without a correct feedback implementation could lead to a very effective SF process at high $`z`$, producing GLOs with lower SFR and redder colors at $`z=0`$ (e.g., Steinmetz & Navarro 1999). In our simulations, low resolution at very high $`z`$ acts as a feedback effect. Note that although the SFR histories among the simulations are quite different (because of the different SF parameters used), the distributions of colors at $`z=0`$ on this color-color diagram are very similar. Hence these colors are very insensitive to the star formation history of the GLOs and are not a very safe way of adjusting SF parameters in semi-analytic or numerical models. In Figure 7 we plot $`B-V`$ vs $`z`$ for four galactic-like objects in simulations S.1 and S.2. Although the number of outputs available is small, it is still possible to follow the color evolution of the progenitor. It can be clearly seen that the color path is not smooth. An object can move from blue to red and back to blue colors depending on its history. The change in colors in these models can be as large as half a magnitude. In particular, each change in direction in the color-color diagrams correlates with a peak of star formation, as expected. The strength of the change depends on the relative number of new to old stars. In this figure we have plotted the tracks of objects in S.1 and S.2 with the aim of comparing the influence of the SF efficiency. As can be seen, although the general behavior is similar, the detailed evolution is different. The $`B-V`$ tracks of GLOs in S.2 are displaced in time with respect to those in S.1. Since the only difference between these simulations is their star formation efficiency parameters, the color evolution of the objects could be affected by this choice. An even clearer approach for analyzing the evolution of the stellar population and its relation with mergers is to look at $`B-I`$ vs $`z`$. 
In Figure 8 we plot it for the same four objects in S.1 shown in Figure 7 (solid lines for a Miller-Scalo IMF). This figure shows clearly when there is a burst of star formation and its correlation with mergers (the red-shift at which the satellite enters the virial radius of the progenitor has been indicated with an arrow pointing up, while the actual fusion of the baryonic cores has been indicated with an arrow pointing down). In some cases, because of the small number of outputs, we could have missed the peak and only see the remnants. Note that this problem is not present in the star formation history, since it is recorded completely down to $`z=0`$. Again, the changes in color depend on the particular characteristics of the mergers and the proportion of old to new stellar populations. We also estimate colors using a Salpeter IMF with the same lower and upper cutoffs (dashed lines). As can be seen, the differences are very small, though the colors calculated using the Salpeter IMF tend to be redder. Unfortunately, the smallest stellar mass allowed by Bruzual & Charlot’s models is $`0.1M_{\odot }`$, so we could not evaluate the effects of assuming a lower mass cutoff such as $`0.01M_{\odot }`$. In Figure 9, we plot $`B-I`$ vs $`z`$ for the 4 GLOs shown in Figures 7 and 8 (simulations S.1 and S.2) and observed values from the LDSS2 (Ellis et al. (1996)) and LRIS (Guzmán et al. (1997)). Although the $`\rho _{\mathrm{SFR}}`$ are very different between these simulations, their evolutionary color tracks are in general agreement with observations. Obviously not all star bursts are triggered by mergers but, since mergers are common events in a hierarchical scenario, their possible influence on the evolution of colors cannot be ignored. This picture deduced from the star formation histories and color evolutionary tracks resembles the 'Christmas Tree' model discussed by Lowenthal et al. (1997), in which individual star-forming blobs come and go. According to our models, the star formation in these blobs would be the result of two contributions: an approximately constant ambient star formation and a certain number of SF peaks. The aggregation of substructure according to a hierarchical scheme would be one possible mechanism of SF triggering. The parent galaxy will evolve by undergoing a number of mergers which may trigger starbursts, depending on the particular characteristics of the encounters and the physical properties of the objects involved. Hence, during violent phases colors would become bluer, changing again to redder ones as a quiescent period takes place. Moreover, because of the way the structure forms in a hierarchical scenario, the higher the $`z`$, the higher the probability that massive objects would be observed to be undergoing an important star formation activity period (Guzmán et al. (1997)), since the rate of mergers increases with $`z`$, GLOs are more gas-rich and the gas is denser. This technique, which combines hydro-dynamical simulations and evolutionary synthesis models, has proved to be potentially powerful for studying the formation of galaxies as a function of $`z`$. Nevertheless, more complex models will be needed in order to mitigate numerical resolution problems and to allow us to numerically follow the evolution of galactic structures of different masses with look-back time. ## 5 Summary We have analyzed the history of star formation in galactic objects simulated within the framework of a hierarchical clustering model. 
Our aim was to use a set of three cosmological simulations to study the possible interplay between hierarchical aggregation and star formation. If the structure in the Universe is well represented by a hierarchical clustering model, then our results suggest that the process of aggregation of substructure could be one of the mechanisms that triggers star formation in galactic systems. In this work, SN effects have not been included. It is expected that they will contribute to setting up a self-regulated SF. However, their actual impact on galaxy scales remains to be clearly established. Our conclusions can be summarized as follows: I. The star formation rates as a function of $`z`$ of our simulated GLOs have two components: one approximately constant ($`\mathrm{ASFR}<3`$ $`\mathrm{M}_{\odot }/\mathrm{yr}`$), and a series of stellar bursts superposed. We found that the aggregation of substructures by the progenitor objects correlates with the presence of stellar bursts. These bursts last $`10^8-10^9\mathrm{yr}`$ and produce stellar masses of $`10^9-10^{10}\mathrm{M}_{\odot }`$. For S.3, these parameters are consistent with observations of galaxies undergoing strong SF activity at different $`z`$. For S.1 and S.2 the values are higher, as could be predicted from the global $`\rho _{\mathrm{SFR}}`$. II. No correlation was found between the strength of the stellar peaks, their duration and the stellar mass formed, on the one hand, and the ratios $`M_{\mathrm{sat}}/M_{\mathrm{pro}}`$ and $`M_{\mathrm{gas}}/M_{\mathrm{bar}}`$, on the other. This fact implies that these ratios do not determine the characteristics of the burst on their own, and that the strength of a stellar peak cannot be predicted only from the gas richness or the size of the colliding satellite. Mergers with equally massive objects produce different effects in the same simulation. When major and minor mergers are distinguished, it can be seen that both of them can generate different stellar maxima regardless of their mass content. The strength of the star formation peaks, however, does depend on the SF efficiency parameter used in the models. III. We found a trend for massive mergers to be more efficient at inducing a transformation of the gas available in the system into stars. IV. In agreement with Hernquist (1989b), we find that a merger with a satellite of even $`\sim 10\%`$ of the progenitor mass can be correlated with a stellar burst, independently of the value of the SF efficiency used. This result would imply that, when searching for a companion as the triggering factor of strong star formation activity in a galaxy, not only similar galaxies should be checked but also smaller ones (Donzelli & Pastoriza (1997)). V. The color tracks of GLOs are not smooth, but go from bluer to redder in the quiescent phases of evolution, and vice-versa in the violent phases corresponding to mergers. The amount by which colors change in the latter depends mainly on the star formation history of each GLO. But color distributions at $`z=0`$ are quite insensitive to the particular star formation histories. The author is grateful to Prof. Diego G. Lambas, Rachel Somerville and the anonymous referee of this paper for stimulating discussions and comments. We thank Francois Hammer for providing useful information. P.B. Tissera thanks Imperial College, the University of Oxford and the Centro de Computación Científica (Universidad Autónoma de Madrid) for providing the computational support for this work and for their hospitality. 
This work was partially supported by DGES (Spain) through grant PB96-0029 and by the Consejo Nacional de Ciencia y Tecnología (Conicet, Argentina).
# Viability of competing field theories for the driven lattice gas. ## Abstract It has recently been suggested that the driven lattice gas should be described by a novel field theory in the limit of infinite drive. We review the original and the new field theory, invoking several well-documented key features of the microscopics. Since the new field theory fails to reproduce these characteristics, we argue that it cannot serve as a viable description of the driven lattice gas. Recent results, for the critical exponents associated with this theory, are re-analyzed and shown to be incorrect. The critical behavior of the driven lattice gas (DLG) has been the subject of some debate, ever since the first Monte Carlo simulations and field theoretic predictions were found to give differing values for the order parameter exponent $`\beta `$. This discrepancy has led to developments in different directions: some researchers have modified the simulation data analysis, invoking anisotropic finite size scaling, while others have suggested that the original field theory might be deficient in the limit of infinite drive, proposing and analyzing an alternate coarse-grained theory instead. In this communication, we review both the original and the alternate field theory, in the light of Monte Carlo simulation data. We first document that the alternate theory is not a coarse-grained description of the driven lattice gas, since it fails to exhibit several well-established properties of the microscopic model. In a second step, we re-analyze the proposed theory, assuming that it might describe some other, yet to be determined, microscopics. We show that the renormalization group analysis of Ref. is seriously flawed, resulting in incorrect exponents and a proliferation of uncontrolled infrared singularities. We begin with a brief summary of the background. Microscopically, the DLG is a simple ferromagnetic Ising lattice gas, half-filled and coupled to a heat bath at temperature $`T`$, in which particles jump to empty nearest-neighbor sites subject to the usual Ising energetics and a uniform driving force $`E`$ acting along a particular lattice direction. Thus, the effect of $`E`$ is identical to adding a locally linear potential. Clearly, $`E=0`$ corresponds to the equilibrium Ising model. On the other hand, even $`E=\infty `$ can be realized if Metropolis rates are used: simply accept/forbid all forward/backward jumps. Since large values of $`E`$ accentuate the nonequilibrium features of this system, most simulations have been performed at $`E\gtrsim 50`$, in units of the Ising coupling constant. The driven lattice gas and many of its variants have attracted considerable attention since they evolve into simple nonequilibrium steady states displaying a wealth of counterintuitive characteristics. Two of its most remarkable features are (i) the discontinuity singularity of the structure factor $`S(𝐤)`$, which is intimately connected to an $`r^{-d}`$ decay (in $`d`$ dimensions) of the two-point correlations, and (ii) the emergence of nontrivial three-point correlations in the disordered phase, corresponding to the violation of the Ising symmetry by $`E`$ (which drives particles and holes in opposite directions). Such dramatically “non-Ising” characteristics are easily observed in Monte Carlo simulations at intermediate and large driving fields. They are also confirmed in a high-temperature series expansion, derived directly from the microscopic dynamics. 
These observations from Monte Carlo simulations play a crucial role in identifying the correct field theory. A basic tenet in the study of critical phenomena is that a microscopic model and its coarse-grained field theory should possess the same symmetries, if they are to belong to the same universality class. For the driven lattice gas, the data on the structure factor indicate that the theory is highly anisotropic. Moreover, the detailed behavior of the discontinuity singularity, upon approaching the origin in wave vector space from different directions, informs us precisely how the familiar Ornstein-Zernike form is modified. Generically, we find that $$R\equiv \frac{lim_{|𝐤_{\perp }|\to 0}S(𝐤_{\perp },k_{\parallel }=0)}{lim_{k_{\parallel }\to 0}S(𝐤_{\perp }=0,k_{\parallel })}>1$$ (1) above criticality, and $`R\to \infty `$ upon approaching $`T_c`$. The subscripts distinguish the parallel ($`\parallel `$) and transverse ($`\perp `$) subspaces, with respect to the drive direction. Just as significantly, the non-vanishing three-point functions demonstrate that the usual “up-down” symmetry of the Ising model is broken. These key features of the microscopics must be reflected in any viable continuum theory for the driven lattice gas. We first consider the original field theory. It is based on a Langevin equation, in continuous space and time, which describes the stochastic evolution of the local particle density $`\rho (𝐱,t)`$. In terms of $`\varphi \equiv 2\rho -1`$, the equation reads: $$\partial _t\varphi =\lambda \left\{\left(\tau _{\perp }-\nabla _{\perp }^2\right)\nabla _{\perp }^2\varphi +\tau _{\parallel }\partial _{\parallel }^2\varphi +\mathcal{E}\partial _{\parallel }\varphi ^2+\frac{g}{3!}\nabla _{\perp }^2\varphi ^3\right\}+\xi .$$ (2) The Langevin noise term reflects the fast degrees of freedom: $`\langle \xi (𝐱,t)\rangle `$ $`=`$ $`0`$, $`\langle \xi (𝐱,t)\xi (𝐱^{\prime },t^{\prime })\rangle `$ $`=`$ $`2\left(n_{\perp }\nabla _{\perp }^2+n_{\parallel }\partial _{\parallel }^2\right)\delta (𝐱-𝐱^{\prime })\delta (t-t^{\prime }).`$ We emphasize that (i) all coefficients are strictly positive, except possibly $`\tau _{\parallel }`$ and/or $`\tau _{\perp }`$, which control criticality (see below), and (ii) the coefficients are independent of one another (i.e., not related by symmetry). The parameter $`\lambda `$ sets the time scale. This theory contains two closely linked key ingredients: First, there is a driving term, $`\mathcal{E}\partial _{\parallel }\varphi ^2`$, where $`\mathcal{E}`$ denotes the coarse-grained drive (a naive continuum limit gives $`\mathcal{E}\propto \mathrm{tanh}(E/T)`$). This term is required to break the Ising “up-down” ($`\varphi \to -\varphi `$) symmetry. Second, the theory is highly anisotropic, with two different diffusion coefficients $`\tau _{\perp }`$ and $`\tau _{\parallel }`$. In particular, it predicts an equal-time structure factor, $$S(𝐤)=\frac{n_{\perp }k_{\perp }^2+n_{\parallel }k_{\parallel }^2}{\tau _{\perp }k_{\perp }^2+\tau _{\parallel }k_{\parallel }^2+O(k^4)}\text{ }$$ (3) in the disordered phase. This $`S`$ generates a discontinuity singularity $`R=(n_{\perp }\tau _{\parallel })/(n_{\parallel }\tau _{\perp })`$. To ensure that the observed behavior is faithfully reproduced, we demand $`n_{\perp }\tau _{\parallel }>n_{\parallel }\tau _{\perp }`$ in the disordered phase. Moreover, criticality must be marked by $`\tau _{\perp }=0`$ at positive $`\tau _{\parallel }`$ if the divergence of $`S`$ is to be captured correctly. To summarize, the two key features of the original Langevin equation are unambiguously supported by the Monte Carlo data for the microscopic model. We comment briefly on the issue of finite versus infinite fields. In all Monte Carlo simulations, the current is observed to saturate as $`E`$ increases. This saturation is reflected by $`\mathrm{tanh}(E/T)\to 1`$ in the original field theory. Therefore, this theory holds equally well for any nonzero value of the microscopic drive. 
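For completeness, the discontinuity ratio quoted above follows directly from Eq. (3) by taking the two limits; this is a short check of the stated result rather than new material:

```latex
\lim_{k_\perp \to 0} S(k_\perp, k_\parallel = 0)
   = \frac{n_\perp k_\perp^2}{\tau_\perp k_\perp^2}\Big|_{k_\perp \to 0}
   = \frac{n_\perp}{\tau_\perp},
\qquad
\lim_{k_\parallel \to 0} S(k_\perp = 0, k_\parallel) = \frac{n_\parallel}{\tau_\parallel},
\qquad\Longrightarrow\qquad
R = \frac{n_\perp \tau_\parallel}{n_\parallel \tau_\perp}.
```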
Furthermore, simulations using Metropolis rates with $`E=50`$, $`100`$ and $`\infty `$ have been performed. The results are (statistically) indistinguishable! Such sensible behavior is entirely consistent with this theory. The discrepancies arise when critical exponents are measured, specifically the order parameter exponent $`\beta `$, and compared to field theoretic predictions. The original field theory, due to the vanishing of $`\tau _{\perp }`$ at positive $`\tau _{\parallel }`$, naturally leads to anisotropic scaling of wave vectors: $`k_{\parallel }\sim k_{\perp }^{1+\mathrm{\Delta }}`$ in the critical region, with a nontrivial anisotropy exponent $`\mathrm{\Delta }`$. Two important consequences are that, first, the upper critical dimension $`d_c`$ is shifted to $`5`$, and second, the theory predicts $`\mathrm{\Delta }=1+(5-d)/3`$ and $`\beta =1/2`$ exactly, i.e., to all orders in perturbation theory. The values obtained by simulations differ, depending on the method used to analyze the data. If a careful anisotropic finite size analysis is used, based on system sizes consistent with the field-theoretic scaling, i.e., $`L_{\parallel }/L_{\perp }^{1+\mathrm{\Delta }}=const`$, the field-theoretic exponents result in good data collapse for a number of different observables. However, data for isotropic systems, $`L_{\parallel }/L_{\perp }=const`$, appear to indicate an order parameter exponent around $`0.23`$. Since most of the data were taken at very large fields, some authors have suggested that the origin of the discrepancies does not reside in the data analysis. Instead, they argue that the standard field theory does not capture the $`E\to \infty `$ limit correctly and propose an alternate theory. It is based on the Langevin equation: $$\partial _t\varphi =\lambda \left\{\left(\tau _{\perp }-\nabla _{\perp }^2\right)\nabla _{\perp }^2\varphi -\nabla _{\parallel }^2\nabla _{\perp }^2\varphi +\frac{g}{3!}\nabla _{\perp }^2\varphi ^3\right\}+\xi .$$ (4) With minor renamings of parameters, this is Eq. (1) of Ref. . The vanishing of $`\tau _{\perp }`$ marks the critical point. The noise satisfies (Eq. (2) of Ref. ): $`\langle \xi (𝐱,t)\rangle `$ $`=`$ $`0`$, $`\langle \xi (𝐱,t)\xi (𝐱^{\prime },t^{\prime })\rangle `$ $`=`$ $`2\lambda \left(\nabla _{\perp }^2+{\displaystyle \frac{1}{2}}\nabla _{\parallel }^2\right)\delta (𝐱-𝐱^{\prime })\delta (t-t^{\prime }).`$ Two key terms appearing in the original field theory are absent in this one, namely, * the driving term $`\mathcal{E}\partial _{\parallel }\varphi ^2`$, and * a diffusion term $`\tau _{\parallel }\partial _{\parallel }^2\varphi `$ for the parallel direction. Since the driving term is absent, the alternate field theory obeys the Ising “up-down” ($`\varphi \to -\varphi `$) symmetry. Thus, three-point functions are identically zero in this theory, for all $`T\geq T_c`$. This prediction is in serious disagreement with existing Monte Carlo data! While one may argue that a field theory need not reproduce all of the microscopic detail of the underlying lattice model, one should be very cautious before endowing it with a higher symmetry: this is only justified if a high-symmetry fixed point exists and can be shown, via an explicit renormalization group calculation, to be stable against perturbations by symmetry-breaking operators. Neither is the case here. The absence of the parallel diffusion term also has serious consequences. Eq. (4) generates a steady-state structure factor: $$S(𝐤)=\frac{k_{\perp }^2+\frac{1}{2}k_{\parallel }^2}{k_{\perp }^2\left(k^2+\tau _{\perp }\right)}\text{ }$$ (5) which ought to be a good approximation at high temperatures. Yet, for $`k_{\perp }\to 0`$ it predicts a divergence along the whole $`k_{\perp }=0`$ line, at any $`T>T_c`$. 
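To illustrate the finite-size constraint mentioned above, the following sketch generates lattice dimensions obeying $`L_{\parallel }/L_{\perp }^{1+\mathrm{\Delta }}=const`$; the aspect constant and the sample sizes are arbitrary choices made for illustration.

```python
def anisotropic_sizes(l_perp_list, d=2, aspect=1.0 / 32.0):
    """Pairs (L_perp, L_par) with L_par = aspect * L_perp**(1 + Delta).

    The standard field theory gives Delta = 1 + (5 - d)/3, i.e. Delta = 2
    in d = 2, so doubling L_perp requires an eightfold increase of L_par.
    """
    delta = 1.0 + (5.0 - d) / 3.0
    return [(lp, int(round(aspect * lp ** (1.0 + delta)))) for lp in l_perp_list]

print(anisotropic_sizes([8, 16, 24]))  # -> [(8, 16), (16, 128), (24, 432)]
```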
This stands in glaring contrast to the Monte Carlo results for the disordered phase, where all measured structure factors are found to be finite everywhere in $`k`$-space. Since Eq. (4) fails to reproduce the most basic properties of the microscopic model, we conclude that it is not a viable field theory for the driven lattice gas. It may, however, describe some as yet unknown microscopics. Therefore, we now proceed to analyze the field theory, defined by Eq. (4), in its own right. Following Ref. , we recast Eq. (4) as a dynamic functional: $$𝒥[\stackrel{~}{\varphi },\varphi ]=\int d^dx𝑑t\left\{\stackrel{~}{\varphi }\left[\partial _t\varphi +\lambda \left(\nabla _{\perp }^2\nabla _{\parallel }^2+(\nabla _{\perp }^2)^2-\tau _{\perp }\nabla _{\perp }^2\right)\varphi -\lambda \frac{g}{3!}\nabla _{\perp }^2\varphi ^3\right]-\lambda \stackrel{~}{\varphi }\left(\nabla _{\perp }^2+\frac{1}{2}\nabla _{\parallel }^2\right)\stackrel{~}{\varphi }\right\}$$ (6) We first note that Eq. (6) describes a theory with a four-point coupling $`\stackrel{~}{\varphi }\nabla _{\perp }^2\varphi ^3`$ and anisotropic free propagators as given in Ref. . Therefore, the combinatorics of this theory is identical to that of Model B, which reduces to $`\varphi ^4`$-theory in the static limit. For such theories, it is well known that the one-loop result for the exponent $`\nu `$ (denoted $`\nu _{\perp }`$ in Ref. ) is determined by combinatorics alone, i.e., the explicit expressions for the Feynman integrals are not required. This is most easily seen by calculating in the critical theory, where $`\tau _{\perp }=0`$, with insertions of $`\lambda \stackrel{~}{\varphi }\nabla _{\perp }^2\varphi `$. We denote one-particle irreducible vertex functions with $`\stackrel{~}{n}`$ ($`n`$) external $`\stackrel{~}{\varphi }`$ ($`\varphi `$) legs and $`m`$ insertions by $`\mathrm{\Gamma }_{\stackrel{~}{n}n}^{(m)}`$. At one-loop order, there are two primitively divergent vertex functions, namely $`\mathrm{\Gamma }_{11}^{(1)}`$ and $`\mathrm{\Gamma }_{13}^{(0)}`$. Both of these consist of a zero-loop term and a one-loop contribution. Each one-loop contribution consists of a combinatoric factor, the appropriate powers of the coupling constant and the external momentum, and a loop integral. The key simplification here is that the loop integrals for $`\mathrm{\Gamma }_{11}^{(1)}`$ and $`\mathrm{\Gamma }_{13}^{(0)}`$ are identical, independent of the detailed forms of the free propagators. Thus, the two one-loop contributions differ only by a simple factor which is purely combinatoric in origin. As a result, one obtains to first order in $`ϵ\equiv d_c-d`$, for all of these theories: $$\nu =\frac{1}{2}+\frac{ϵ}{12}+O(ϵ^2)$$ (7) Since the authors of Ref. have chosen to calculate at finite $`\tau _{\perp }`$, let us illustrate how this result emerges in their case. No insertions are needed here, so the upper index of $`\mathrm{\Gamma }_{\stackrel{~}{n}n}^{(m)}`$ can be dropped. 
Keeping track of coupling constants and signs, and taking care of the $`T_c`$ shift, we can write the two bare vertex functions $`\mathrm{\Gamma }_{11}`$ and $`\mathrm{\Gamma }_{13}`$ in the form $`\mathrm{\Gamma }_{11}`$ $`=`$ $`i\omega +\lambda k_{\perp }^2k^2+\lambda k_{\perp }^2\tau _{\perp }\left[1+{\displaystyle \frac{1}{2}}gI_1\right]`$ (8) $`\mathrm{\Gamma }_{13}`$ $`=`$ $`\lambda gk_{\perp }^2\left[1-{\displaystyle \frac{3}{2}}gI_2\right]`$ (9) Here, the factors $`1/2`$ and $`3/2`$ arise from combinatorics, while the integrals $`I_1`$ and $`I_2`$ are easily computed in dimensional regularization, resulting in $$I_1=\frac{3}{(4\pi )^2ϵ}\left[1+O(ϵ)\right]\text{and }I_2=\frac{3}{(4\pi )^2ϵ}\left[1+O(ϵ)\right]$$ (10) We notice immediately that the simple $`ϵ`$-poles of $`I_1`$ and $`I_2`$ are identical. Thus, their numerical prefactor can be absorbed into the definition of the coupling constant, leaving us with one-loop corrections to $`\nu `$ that are purely combinatoric in origin. Completing the calculation at finite $`\tau _{\perp }`$, this provides the key to Eq. (7). Only at two-loop order do the detailed forms of the free propagators come into play. Then, of course, exponents are also no longer determined by combinatorics alone. In Ref. , the exponent $`\nu `$ is quoted as $`(1+ϵ/4)/2`$, indicating the presence of a computational error. More seriously, however, there are deeper flaws in this theory. Recall that the steady-state structure factor, Eq. (5), diverges along the whole $`k_{\perp }=0`$ line, even for $`\tau _{\perp }>0`$. As a result, the theory is plagued by infrared singularities, which are entirely unrelated to criticality, and by unrenormalizable divergences. We note, for completeness, that one can, of course, regularize such singularities by re-introducing the diffusion term $`\tau _{\parallel }\partial _{\parallel }^2\varphi `$ into Eq. (4). Then, however, one should also reconsider the two fourth-order derivative terms. At two-loop order, these will acquire different primitive divergences, so that an additional coupling constant $`\rho ^2`$ is required, appearing in Eq. (4) as $`\rho ^2(\nabla _{\perp }^2)^2\varphi +\nabla _{\parallel }^2\nabla _{\perp }^2\varphi `$. To summarize, we have shown that the field theory proposed by Garrido, de los Santos and Muñoz fails to reproduce the key features of the driven lattice gas. Predicting infinite structure factors and zero three-point correlations (for all temperatures above criticality), it cannot be a viable continuum model for the latter. Accepting it as a representation of some other, as yet undetermined, microscopic model, we carry out a standard analysis. First, we find that the one-loop calculation of Ref. is incorrect. Second, beyond one-loop order, uncontrolled infrared singularities proliferate, rendering the field theory unrenormalizable. In contrast, the original field theory is consistent with the fundamental symmetries of the driven lattice gas, for any value of the drive. Its predictions for 2- and 3-point functions in the disordered phase are in good agreement with simulation results. Based on the phenomenology of $`S(𝐤)`$ near criticality, it plumbs the consequences of a highly anisotropic scaling limit, $`k_{\parallel }\sim k_{\perp }^{1+\mathrm{\Delta }}\to 0`$. To test its predictions against Monte Carlo simulations, this limit should be respected in the choice of system sizes, i.e., $`L_{\parallel }\propto L_{\perp }^{1+\mathrm{\Delta }}`$. If, instead, simulations and finite-size analysis are performed with disregard for such strong anisotropies, complications from extraneous scaling variables or inconsistencies can be expected. 
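As a sketch of the bookkeeping, absorbing the common pole residue of $`I_1`$, $`I_2`$ into a dimensionless coupling $`u`$ (our notation, and a generic minimal-subtraction convention for $`\gamma _\tau `$) reproduces Eq. (7) from the combinatoric factors $`1/2`$ and $`3/2`$ alone:

```latex
\beta(u) = -\epsilon\, u + \tfrac{3}{2}u^2
   \;\Longrightarrow\; u^* = \tfrac{2}{3}\epsilon ,
\qquad
\gamma_\tau(u^*) = -\tfrac{1}{2}u^* = -\tfrac{\epsilon}{3},
\qquad
\nu = \frac{1}{2 + \gamma_\tau} = \frac{1}{2 - \epsilon/3}
    = \frac{1}{2} + \frac{\epsilon}{12} + O(\epsilon^2).
```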
In a more exotic scenario, such simulations may indicate a new type of low-temperature phase, quite distinct from the ordinary Ising-like one. This work is supported in part by the National Science Foundation through the Division of Materials Research.
# LASCO Measurements of the Energetics of Coronal Mass Ejections ## 1 Introduction Material ejections are a common phenomenon of the solar corona. Since the first observation on 14 December 1971 (Tousey, 1973), several thousand CMEs have been seen (Howard et al., 1985; Kahler, 1992; Webb, 1992; Hundhausen, 1997; Gosling, 1997). Nevertheless, the mechanisms that cause a CME and the forces acting on it during its subsequent propagation through the corona are largely unknown. Of these two issues, the issue of CME propagation through the corona is by far the more amenable. Past observations have provided insufficient coverage of the CME development for several reasons: the restricted field of view of the coronagraphs, frequent orbital nights and the low sensitivity of the instruments. Consequently, past studies were largely focused on either the phenomenological description and classification of CMEs or the measurement of average values for the physical properties of the events, such as speed, mass and kinetic energy (Jackson and Hildner, 1978; Howard et al., 1985). The study of the CME energetics, in particular, was necessarily restricted to a handful of well observed events (Rust et al., 1980; Webb et al., 1980). Their analysis revealed the importance of the (elusive) magnetic energy and established that the potential energy dominates the kinetic energy. It was also found that the energy residing in shocks, radio continua and other forms of radiation was insignificant in comparison to the mechanical energy of the ejected material. The lessons learned from the past resulted in a greatly improved set of instruments: the LASCO coronagraphs (Brueckner et al., 1995), aboard the SOHO spacecraft (Domingo et al., 1995). The location of the spacecraft at the L1 point permits the continuous monitoring of the Sun, while the combination of the three LASCO coronagraphs provides an unprecedented field of view from 1.1 $`R_{\odot }`$ to 30 $`R_{\odot }`$. The replacement of vidicons with CCD detectors and the very low stray light levels of the coronagraphs have led to a vast sensitivity improvement. It is now possible to routinely follow the dynamical evolution of a CME. Here, we compute basic quantities (mass, velocity and geometry) and derive quantities such as the potential, kinetic and magnetic energies of CMEs as they progress through the outer corona into the heliosphere. To our knowledge, this is the first time that detailed observations of the dynamical evolution of these quantities have been presented. These measurements are expected to provide concrete observationally-based constraints on the driving forces in CME models. For this study, we focus on a group of CMEs that share a common characteristic; namely, they resemble a helical flux rope in the C2 and C3 coronagraph images. We choose these events for three reasons: (i) the area of a CME that corresponds to the flux rope is usually easily identifiable in the coronagraph images, (ii) their appearance can be related to the flux rope structures measured in-situ from Earth-orbiting spacecraft, and (iii) there has been extensive theoretical and observational interest in this class of CMEs. Several CMEs observed with the LASCO instrument exhibit a helical structure like that of a flux rope (Chen et al., 1997; Dere et al., 1999; Wood et al., 1999). 
The theoretical basis for flux rope configurations in solar and interplanetary plasmas is well established (e.g., Gold, 1963; Goldstein, 1983; Chen and Garren, 1993; Low, 1996; Kumar and Rust, 1996; Guo et al., 1996; Wu, Guo & Dryer, 1997). These treatments envisage the helical flux rope as a magnetic structure that resides in the lower corona and erupts to form a CME. There is some debate about whether the flux rope is formed before the eruption, or whether it is formed as a consequence of reconnection processes that lead to the eruption. These arguments are related to those which consider whether the reconnection occurs above the sheared arcade which presumably forms the flux rope, or below it (Antiochos, Devore and Klimchuk, 1999). Neither the physical mechanisms of the initial driving impulse, nor the conditions in the corona which determine the subsequent propagation of the flux rope are very well known from observations. Theoretical models often rely on educated guesses to model both the initiation of the CME and its propagation through the corona. Statements about the energetics, or driving forces, behind CMEs are made on these bases; for instance, Chen (1996) and Wu et al. (1997) show plots of the variation of kinetic, potential and magnetic energies of CMEs as calculated from their models. The measurements we present in this paper are expected to yield some clues about the validity of the assumptions made in these models. It should be emphasized that our measurements are made only in the outer corona (2.5 $`R_{\odot }`$ to 30 $`R_{\odot }`$). They are therefore not expected to shed much light on the energetics of the flux rope CMEs immediately following initiation, or on the initiation process itself. Our estimates of the magnetic energy of flux-rope CMEs are made on the basis of in-situ measurements of magnetic clouds near the earth. This is because flux-rope CMEs ejected from the Sun are often expected to evolve into magnetic clouds (Rust and Kumar, 1994; Kumar and Rust, 1996; Chen et al., 1997; Gopalswamy et al., 1998). Conversely, in-situ measurements of magnetic clouds near the earth suggest that their magnetic field configuration resembles a flux rope (Burlaga, 1988; Lepping, Jones, & Burlaga, 1990; Farrugia et al., 1995; Marubashi, 1997). Radio observations of moving Type-IV bursts can also probe the magnetic field in CMEs (Stewart, 1985; Rust et al., 1980) but they are so rare that near-Earth measurements are the most reliable estimates of the magnetic flux. It should be borne in mind, however, that the precise relationship between CMEs and magnetic clouds and the manner in which CMEs evolve into magnetic clouds is not very well understood (Dryer, 1996; Gopalswamy et al., 1998). The main reason for this situation is the simple observational fact that while CMEs are best observed off the solar limb, magnetic clouds are measured near the Earth. This issue will hopefully be addressed in the near future by the next generation of space-borne instruments. The rest of the paper is organized as follows: We describe our methods of measuring the mass and position of a CME and of calculating the different forms of energy associated with it in § 2. § 3 presents the results of our measurements. We discuss caveats that accompany these results in § 4 and draw conclusions in § 5. ## 2 Data Analysis ### 2.1 Mass calculations White light coronagraphs detect the photospheric light scattered by the coronal electrons and therefore provide a means to measure coronal density. 
Transient phenomena, such as CMEs, appear as intensity (hence, density) enhancements in a sequence of coronagraph images. We compute the mass for a CME in a manner similar to that described by Poland et al. (1981). After the coronagraph images are calibrated in units of solar brightness, a suitable pre-event image is subtracted from the frames containing the CME. The excess number of electrons is simply the ratio of the excess observed brightness, $`B_{obs}`$, over the brightness, $`B_e(\theta )`$, of a single electron at some angle, $`\theta `$ (usually assumed to be 0), from the plane of the sky. $`B_e(\theta )`$ is computed from the Thomson scattering function (Billings, 1966). The mass, $`m`$, is then calculated assuming that the ejected material comprises a mix of completely ionized hydrogen and 10% helium. Namely, $$m=\frac{B_{obs}}{B_e(\theta )}1.97\times 10^{-24}\mathrm{gr}$$ (1) After the mass image is obtained, we delineate the flux rope by visual inspection, as shown in Figure 1. We attempt to circumscribe the cross section of the helical flux rope as seen in the plane of the sky. The cavity seen in the white light/mass images is taken to be the interior of the flux rope, bounded by the helical magnetic field (Figure 1). The mass contained in the flux rope is computed by summing the masses in the pixels encompassed by the flux rope. The accuracy of the mass calculations depends on three factors: the CME depth and density distribution along the line of sight and the angular distance of the CME from the plane of the sky. All three factors are unknown since the white light observations represent only the projection of the CME on the plane of the sky. Some additional information can be obtained from pB measurements, but these are only occasionally available. Therefore, to convert the observed brightness to a mass measurement we have to make an assumption. Namely, we assume that all the mass in the CME is concentrated in the plane of the sky. Since CMEs are three-dimensional structures, our calculations will tend to underestimate the actual mass. To quantify the errors arising from our assumption, we performed two brightness calculations shown in Figure 2. The solid line shows the angular dependence of the quantity $`B_e(\theta )`$ in equation (1) normalized to its value at $`0^{\circ }`$. We see that our assumption that the ejected mass is always in the sky plane ($`\theta =0^{\circ }`$) underestimates the mass by about a factor of 2 at angles $`50-60^{\circ }`$. We expect that the CMEs in our sample are relatively close to the plane of the sky ($`\theta <50^{\circ }`$) since their flux rope morphology is clearly visible. Next, we investigate the effect of the finite width of a CME. We simulate a CME with constant density per angular bin along the line of sight, centered in the plane of the sky at a heliocentric distance of 10 $`R_{\odot }`$. Using equation (1) we calculate the observed mass, $`m_{obs}`$, for various widths and compare it to the actual mass, $`m_{cme}`$, for the same widths. The dashed line in Figure 2 shows the dependence of this ratio, $`m_{obs}/m_{cme}`$, on the width of the CME. For angular widths similar to those of the CMEs in our sample ($`60^{\circ }`$) the mass would be underestimated by about $`15\%`$. Finally, we estimate the noise in the LASCO mass images from histograms of empty sky regions. The statistics in these areas show a gaussian distribution centered at zero, as expected. 
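As a concrete illustration of equation (1), the brightness-to-mass conversion can be sketched in a few lines. This is a toy version, not the LASCO calibration pipeline: $`B_e(\theta )`$ is treated as an external input (tabulated in practice from the Billings (1966) function), and all numbers and names below are placeholders.

```python
import numpy as np

M_PER_ELECTRON = 1.97e-24  # grams per electron for fully ionized H + 10% He (Eq. 1)

def cme_mass_image(excess_brightness, b_e_theta):
    """Convert a background-subtracted brightness image, in units of mean
    solar brightness, into a mass image following equation (1).

    excess_brightness : 2-D array of B_obs per pixel
    b_e_theta         : brightness of a single electron at angle theta from
                        the plane of the sky (same units), tabulated in
                        practice from the Billings (1966) Thomson function
    """
    n_electrons = excess_brightness / b_e_theta   # excess electrons per pixel
    return n_electrons * M_PER_ELECTRON           # grams per pixel

# Toy usage with placeholder numbers: sum the mass over a hand-drawn mask
# standing in for the visually delineated flux rope.
rng = np.random.default_rng(0)
image = np.abs(rng.normal(0.0, 1e-12, (64, 64)))   # fake excess brightness
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True
print("flux-rope mass: %.2e gr" % cme_mass_image(image, 1e-27)[mask].sum())
```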
We define the noise level as one standard deviation or about $`5\times 10^8`$ gr in the C2 frames and $`3\times 10^{10}`$ gr in the C3 frames. The average C2 pixel signal in the measured CMEs is 10 times the noise, and the C2 pixel signal-to-noise ratio in the mass measurements is between 10 and 100. The CMEs get fainter as they propagate farther from the sun. Therefore, the pixel signal-to-noise ratio in the C3 images drops to about 3-4. These figures refer to single pixel statistics and demonstrate the quality of the LASCO coronagraphs. Our measurements are based on statistics of hundreds or thousands of pixels for each image. Therefore, the “mass” noise in our images is insignificant compared to the systematic errors involved in the calculation of a CME mass as discussed previously. In summary, these calculations suggest that the LASCO measurements tend to underestimate the CME mass by about 50%, for realistic widths and propagation angles. A more detailed analysis of CME mass calculations will appear elsewhere. ### 2.2 CME Energy calculations In this analysis we consider only three forms of energy: potential, kinetic, and magnetic energy. These energies can be estimated from quantities measured directly in the LASCO images like CME area, mass and speed. Two of the many other forms of energy that can exist in the CME/corona system can be estimated based on some assumptions and educated guesses: the CME enthalpy $`U`$ and the thermal energy $`E_T`$. We will show in § 4 that the thermal energy $`E_T`$ is insignificant. There are several uncertainties involved in calculating the enthalpy of a CME. Firstly, the temperature structure of a CME is far from known. It is conceivable that it is composed of multithermal material. In situ measurements of magnetic clouds near the earth reveal a temperature range of $`10^4-10^5`$ K. Furthermore, it is not clear if the gas in the CMEs in the outer corona is in local thermodynamic equilibrium. Nonetheless, if we assume the CME to be a perfect gas in local thermodynamic equilibrium with equal electron and ion temperatures, the enthalpy $`U`$ can be as large as $`\frac{5}{3}E_T=\frac{5}{2}NkT`$. If we assume a temperature of a million degrees K and a mass of $`10^{15}`$ gr, this yields $`U\simeq 3\times 10^{29}`$ ergs. As will be seen later, even this upper limit for the enthalpy $`U`$ is lower than the kinetic and potential energies by at least one order of magnitude, except in the lower corona where it can be comparable to the kinetic energy. Furthermore, the enthalpy is directly proportional to the mass, which, as will be seen later, remains approximately constant as the CME propagates outwards. We therefore conclude that the enthalpy is a small, constant magnitude correction which can be safely neglected without affecting the overall conclusions regarding CME energetics. #### Potential Energy We define the potential energy of the flux rope as the amount of energy required to lift its mass from the solar surface. The gravitational potential energy is calculated using $$E_P=\underset{\mathrm{flux}\mathrm{rope}}{\sum }\int _{R_{\odot }}^{r_i}\frac{GM_{\odot }m_i}{r_i^2}dr_i,$$ (2) where $`m_i`$ and $`r_i`$ denote the mass and distance from sun-center respectively, of each pixel, $`M_{\odot }`$ is the mass of the sun, $`R_{\odot }`$ is the solar radius and $`G`$ is the gravitational constant. The summation is taken over the pixels comprising the flux rope (Figure 1). #### Kinetic Energy We use the center of mass of the flux rope to describe its movement. 
The location of the center of mass relative to the sun center is given by $$\vec{r}_{CM}=\frac{\sum _{\mathrm{flux}\mathrm{rope}}m_i\vec{r}_i}{\sum _{\mathrm{flux}\mathrm{rope}}m_i},$$ (3) where $`\vec{r}_{CM}`$ is the radius vector of the center of mass and $`\vec{r}_i`$ is the radius vector for each pixel. The summation, as before, is taken over the pixels comprising the flux rope. We calculate $`\vec{r}_{CM}`$ for each CME frame as it progresses through the LASCO field of view. In other words, we compile a table of center-of-mass locations versus time, ($`\vec{r}_{CM},t`$). By fitting a second degree polynomial to ($`\vec{r}_{CM},t`$) we obtain the center of mass velocity, $`\vec{v}_{CM}`$, and acceleration, $`\vec{a}_{CM}`$. The calculation of the speed and acceleration as described above has the advantage of involving only the measurement of the CME center of mass. Once the flux rope is delineated, its mass, speed and energetics follow. The kinetic energy is simply $$E_K=\frac{1}{2}\sum _{\mathrm{flux}\mathrm{rope}}m_iv_{CM}^2.$$ (4) Note that these measurements are based on the plane of the sky location of the center of mass. The speed used in the calculations is therefore a projected quantity and not the true radial speed. It follows that the derived kinetic energies are lower limits. The same applies for all of our observed and derived quantities, which facilitates the comparison among the different events. #### Magnetic Energy The calculations of the potential and kinetic energies of flux rope CMEs are made directly from the mass images. On the other hand, the values we use for the magnetic energy of these CMEs are only estimates because the magnetic field strength in a CME is unknown. In-situ measurements by spacecraft like WIND yield the magnetic field contained in magnetic clouds observed near the earth. As mentioned in § 1, helical flux-rope CMEs are thought to evolve into magnetic clouds similar to those observed at the earth. Therefore, measurements of the magnetic flux contained in such magnetic clouds are expected to be fairly representative of that carried by flux rope CMEs. The magnetic energy carried by a flux rope CME is defined by $$E_M=\frac{1}{8\pi }\int _{\mathrm{flux}\mathrm{rope}}B^2dV,$$ (5) where $`B`$ is the magnetic field carried by the flux rope, and the integration is carried out over the volume of the flux rope. For a highly conducting medium such as the heliosphere, the magnetic flux, $`\int BdA`$, is frozen into the CME as it evolves to form a magnetic cloud. The magnetic flux measured in-situ is therefore taken to be the same as that contained in the CME as it passes through the LASCO field of view. We use this frozen flux assumption since we feel that it is a simple, physically motivated one. Another assumption which gives very similar results is conservation of magnetic helicity (Kumar and Rust, 1996). The volume integral in equation (5) contains another unknown; the volume occupied by the flux rope. Assuming a cylindrical flux rope with constant magnetic field, equation (5) is approximated as $$E_M\simeq \frac{1}{8\pi }\frac{l}{A}(BA)^2,$$ (6) where $`A`$ is the area of the flux rope as measured in the LASCO images and $`l`$ is the length of the flux rope. The quantity $`BA`$ is the magnetic flux frozen into the flux rope and is conserved. For our purposes, we need, in equation (6), a representative value for the magnetic flux of a flux rope. 
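Before turning to that flux estimate, the geometric chain of § 2.2 can be summarized in a short sketch. The following is a minimal illustration of equations (2)-(4), assuming pixel masses and plane-of-sky positions in cgs units are already available; the function names and constants are ours, not the analysis code used for the paper.

```python
import numpy as np

G, M_SUN, R_SUN = 6.674e-8, 1.989e33, 6.957e10   # cgs units

def center_of_mass(m, r):
    """Eq. (3): mass-weighted mean plane-of-sky position of the pixels.
    m : (N,) pixel masses [g];  r : (N, 2) pixel positions [cm]."""
    return (m[:, None] * r).sum(axis=0) / m.sum()

def fit_kinematics(t, r_cm):
    """Second-degree polynomial fit to (r_CM, t), as in the text, giving
    the center-of-mass speed at each epoch and a constant acceleration."""
    dist = np.hypot(r_cm[:, 0], r_cm[:, 1])   # heliocentric distance [cm]
    c2, c1, _ = np.polyfit(t, dist, 2)
    return 2.0 * c2 * t + c1, 2.0 * c2        # v(t) [cm/s], a [cm/s^2]

def potential_energy(m, dist):
    """Eq. (2): energy needed to lift each pixel from the solar surface."""
    return (G * M_SUN * m * (1.0 / R_SUN - 1.0 / dist)).sum()

def kinetic_energy(m, v_cm):
    """Eq. (4): bulk kinetic energy from the center-of-mass speed."""
    return 0.5 * m.sum() * v_cm**2

def escape_speed(dist):
    """Solar escape speed as a function of heliocentric height."""
    return np.sqrt(2.0 * G * M_SUN / dist)
```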
We obtain such an estimate from model fits (Lepping, Jones, & Burlaga, 1990) to several magnetic clouds observed by WIND between 1995–1998, available at http://lepmfi.gsfc.nasa.gov/mfi/mag\_cloud\_pub1p.html. We only consider clouds that occurred in the same time interval as the LASCO CMEs (1997-98). From this sample we get the average magnetic flux, $`<BA>=(1.3\pm 1.1)\times 10^{21}`$ G cm<sup>2</sup>, which we put in equation (6). The resulting magnetic energy uncertainty is then $`(1.1/1.3)^2\simeq 70\%`$. To calculate the magnetic energy, we also need the length $`l`$ of the rope along the line of sight. Since the true length of the rope cannot be obtained observationally, we assume that the flux rope is expanding in a self-similar manner, with its length being proportional to its heliocentric height; namely, $`l\propto r_{CM}`$. Finally, we emphasize that the magnetic cloud data used here are only representative. They are not measurements from the same LASCO events we analyzed. Also the magnetic flux in individual events can differ from the average value we adopted. Furthermore, the magnetic field values we use refer to the total (toroidal + poloidal) magnetic field contained in the flux rope. The definition of $`BA`$, however, refers only to the toroidal component of the magnetic field which is normal to the cross-sectional area of the flux rope. For these reasons, it is difficult to ascribe errors to our magnetic energy calculations of individual events. Therefore, we decided to use the statistical uncertainty in the average flux to compute the error in the magnetic energy, which is about 70% as shown above. It is unfortunate that the magnetic energy measurements are so uncertain and they will continue to be so until direct observations of the coronal magnetic field become available. ## 3 Results For our analysis, we searched the LASCO database for CMEs with clear flux rope morphologies. We picked 11 events for which we compiled the evolution of the mass and velocity of the center of mass and the potential, kinetic and magnetic energies as the CME progressed through the LASCO C2 and C3 fields of view. For reference purposes we present a list of the events in Table 1. The information for the 1997 CMEs is taken from the LASCO CME list compiled by Chris St. Cyr (http://lasco-www.nrl.navy.mil/cmelist.html), except for the final speeds in the last column that refer to the center of mass of the flux ropes and were calculated by us. Further information on source regions and associated photospheric/low corona emissions for some of these events can be found in the references noted in the table. Our measurements are shown in Figures 3–6. The horizontal axis denotes heliocentric height in solar radii. Each row is a separate CME event, labeled by its date of observation by the LASCO/C2 coronagraph. The left panels show the evolution of the potential, kinetic, magnetic and total energy in the CME. The total energy is the sum of the potential, kinetic and magnetic energies. The right panels show the evolution of the flux rope mass and the center-of-mass speed. As discussed in § 2, a second degree fit to ($`\vec{r}_{CM},t`$) yields the acceleration of the center of mass, $`\vec{a}_{CM}`$. The radial component of $`\vec{a}_{CM}`$ is also shown in this panel. The dash-dot line, visible in some plots, marks the escape speed from the Sun as a function of height. 
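With the flux fixed, the magnetic-energy estimate of equation (6) reduces to a one-liner. A hedged sketch, taking the line-of-sight length $`l`$ equal to the center-of-mass height (the self-similar assumption above) and purely illustrative geometry:

```python
import numpy as np

R_SUN = 6.957e10  # cm

def magnetic_energy(area, r_cm, flux=1.3e21):
    """Eq. (6) under the frozen-flux assumption, with the line-of-sight
    length l set equal to the center-of-mass height r_CM.

    area : flux-rope cross section A from the LASCO images [cm^2]
    r_cm : heliocentric distance of the center of mass [cm]
    flux : representative <BA> from the WIND magnetic-cloud fits [G cm^2]
    """
    return (r_cm / area) * flux**2 / (8.0 * np.pi)

# Purely illustrative geometry: A ~ (5 R_sun)^2 at r_CM ~ 10 R_sun.
e_m = magnetic_energy((5 * R_SUN) ** 2, 10 * R_SUN)
frac_err = (1.1 / 1.3) ** 2   # ~70% uncertainty from the scatter in <BA>
print("E_M ~ %.1e erg (+/- %.0f%%)" % (e_m, 100 * frac_err))
```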
An inspection of the plots leads to the following overall conclusions that hold for most of the events: * The total energy (curves marked with +) is relatively constant, to within a factor of 2, for the majority of the events despite the substantial variation seen in the individual energies. This suggests that, for radii between approximately $`3R_{\odot }`$ and $`30R_{\odot }`$, the flux rope part of these CMEs can be considered as an isolated system; i.e., there is no additional “driving energy” other than the energies we have already taken into account (potential and kinetic energies of the flux rope, and magnetic energy associated with the magnetic field inside the flux rope). * We see that the kinetic and (to a lesser degree) potential energies increase at the expense of the magnetic energy, keeping the total energy fairly constant. The decrease in magnetic energy is a direct consequence of the expansion of the CME. It could imply that the untwisting of the flux rope might be providing the necessary energy for the outward propagation of the CME in a steady-state situation. * The center of mass accelerates for most of the events, and the CMEs achieve escape velocity at heights of around 8–10 $`R_{\odot }`$, well within the LASCO/C3 field of view. * The mass in the flux rope remains fairly constant for some events (e.g., 97/08/13 or 97/10/30) while other events (e.g., 97/11/01 or 98/02/04) exhibit a significant mass increase at lower heights and tend to a constant value in the outer corona, above about $`10-15`$ $`R_{\odot }`$. This observation raises the question: why is pile-up of preexisting material observed only in some flux rope CMEs? We plan to investigate this effect further in the future. It would also be interesting to examine how the mass increase close to the Sun relates to interplanetary “snowplowing” observations (Webb, Howard, & Jackson, 1996). The only notable exception is the event of 98/06/02, which is also the most massive; its total energy increases with distance from the center of the sun. This CME is associated with an exceptionally bright prominence which may affect the measurements. A detailed analysis of this event is presented in Plunkett, Vourlidas, & Simberova (2000). ## 4 Discussion The conclusions of the previous section are based on a set of broadband white light coronagraph observations. The accuracy of the measurement of any structure (i.e., CME) in such images is inherently restricted by three unknowns: the amount and distribution of the material and the extent of the structure along the line of sight. We addressed the first two problems in § 2 where we showed that for the case of a uniformly filled CME extending $`\pm 80`$ degrees out of the plane of the sky, we will measure about 65% of its mass. Since the potential and kinetic energies are directly proportional to the mass, the true values could be higher than our measurements in Figures 3–6 by as much as 35%. The spatial distribution of the material will also affect the visibility of the structures we are trying to measure. Because we delineate the area of the flux rope by visual inspection, we might not be following the same cross section as the structure evolves. This might account for some of the variability of the energy curves. However, we chose the CMEs based on their clear flux rope signatures. The measurements involve hundreds or even thousands of pixels per image and therefore we don’t expect that the trends seen in the data are affected by the slight changes in the visibility of the structure. 
The widths along the line of sight of the observed CMEs are difficult to quantify. There is no way to measure this quantity with any instrumentation in existence today. Only the magnetic energy depends on the width of the flux rope. In § 2.2, we assumed that the width of the flux rope is equal to the height of its center of mass, which implies that its pre-eruption length is about a solar radius. Prominences and loop arcades of this length are not uncommon features on the solar surface. As described in § 1, flux rope CMEs are expected to evolve into magnetic clouds near the earth. This is the basis on which we use in-situ data to estimate the magnetic energy carried by the flux rope CMEs (§ 2). In § 2, we also estimated that the overall normalization of the magnetic energy curve is uncertain by about 70%. In summary, none of the above errors can affect the trends of the curves for a given event. Only the magnitudes of the various energies could change. Finally, some of the variability of the measured quantities could be attributed to the intrinsic variability of the corona and/or of the CME structure itself and cannot be removed without affecting the photometry. For this reason, it is rather difficult to assign an error estimate to individual measurements. Therefore, we decided not to include any error bars in our figures. The analysis of the CME dynamics in Figures 3–6 reveals an interesting trend; namely, the total energy remains constant. It appears that the flux rope part of a CME propagates as a self-contained system where the magnetic energy decrease drives the dynamical evolution of the system. All the necessary energy for the propagation of the CME must be injected in the erupting structures during the initial stages of the event. The notions that these CMEs are indeed magnetically driven and that the thermal energy contribution can be ignored are further reinforced by the magnitude of the plasma $`\beta `$ parameter (Fig. 7). The calculations were performed with the assumption that the CME material is at a coronal temperature of $`10^6`$ K. We see that the CMEs have a very small $`\beta `$ (except the events on 98/02/04 and 98/06/02) which increases slightly outwards. It appears to tend towards a constant value. Such a behavior for the plasma $`\beta `$ parameter was predicted in the flux rope model of Kumar and Rust (1996). We also find that the potential energy is larger than the kinetic energy. These results confirm the conclusions from earlier Skylab measurements (see Rust et al. (1980) for details). The relation between the helical structures seen in the coronagraph images and eruptive prominences is still unclear. In our sample, only half of the CMEs have clear associations with eruptive prominences (e.g., 97/02/23). No helical structures are visible in pre-eruption EIT 195Å images, in agreement with past work (Dere et al., 1999). On the other hand, the flux rope of the event on 98/06/02 is very clearly located above the erupting prominence and there is strong evidence that it was formed before the eruption (Plunkett, Vourlidas, & Simberova, 2000). It seems, therefore, likely that the process of the formation of the flux rope is completed during the early stages of the eruption at heights below the C2 field of view ($`<2`$ $`R_{\odot }`$). Such an investigation, however, is beyond the scope of this paper. Finally, we turn our attention to the evolution of the flux rope shape as a function of height. We proceed by comparing the velocity of the CME front to its center of mass velocity. 
Because the visual identification of points along the front can be influenced by visibility changes as the CME evolves, it is susceptible to error. A better method is to use a statistical measure for the location of the front, such as the center of mass. Hence, the location of the front is defined as the center of mass of the pixels that lie within 0.1 $`R_{\odot }`$ of the front of the flux rope and within $`\pm 25^{\circ }`$ of the radial line that connects the sun center with the center of mass. The velocity of the front, $`v_f`$, is calculated in the same manner as $`v_{CM}`$ (§ 2.2). The comparison of the two velocity profiles for some representative events is shown in Figure 8. Six of the eleven CMEs have profiles similar to 97/08/13 (self-similar expansion) or 97/10/30 (no expansion), while five show a progressive flattening such as 97/04/13 or 97/11/01, similar to that found in Wood et al. (1999). Some theoretical flux rope models also predict flattening of the flux rope as it propagates outwards (Chen et al., 1997; Wood et al., 1999). ## 5 Conclusions We have examined, for the first time, the energetics of 11 flux rope CMEs as they progress through the outer corona into the heliosphere. The kinetic and potential energies are computed directly from calibrated LASCO C2 and C3 images, while the magnetic energy is based on estimates from near-Earth in-situ measurements of magnetic clouds. These results are expected to provide constraints on flux rope models of CMEs and shed light on the mechanisms that drive such CMEs. These measurements provide no information about the initial phases of the CME (at radii below $`2R_{\odot }`$). All the measurements and conclusions hold for heights in the C2 and C3 fields of view, between 3 and $`30R_{\odot }`$. The salient conclusions from an examination of 11 CMEs with a flux rope morphology are: * For relatively slow CMEs, which constitute the majority of events, + The potential energy is greater than the kinetic energy. + The magnetic energy advected by the flux rope is given up to the potential and kinetic energies, keeping the total energy roughly constant. In this sense, these events are magnetically driven. * For the relatively fast CMEs with velocities $`\gtrsim `$ 600 km/s (97/02/23, 98/06/02), + The kinetic energy exceeds the potential energy by the time they reach the outer corona (above $`15R_{\odot }`$). + The magnetic energy carried by the flux rope is significantly below the potential and kinetic energies. We thank D. Spicer for the initial discussions that led to this paper and the referee for his/her constructive comments. SOHO is an international collaboration between NASA and ESA and is part of the International Solar Terrestrial Physics Program. LASCO was constructed by a consortium of institutions: the Naval Research Laboratory (Washington, DC, USA), the University of Birmingham (Birmingham, UK), the Max-Planck-Institut für Aeronomie (Katlenburg-Lindau, Germany) and the Laboratoire d’Astronomie Spatiale (Marseille, France).
no-problem/9912/hep-ph9912477.html
ar5iv
text
# ON THE ENHANCEMENT OF Λ_𝑄 DECAY RATE ## Abstract The enhancement of the $`\mathrm{\Lambda }_b`$ and $`\mathrm{\Lambda }_c`$ decay rates due to four-quark operators is calculated. Hard gluon exchange modifies the wave function for the b$`\overline{u}`$ pair in $`\mathrm{\Lambda }_b`$, $`|\mathrm{\Psi }(0)|_{bu}^2`$, and the wave function for the c$`\overline{d}`$ pair in $`\mathrm{\Lambda }_c`$, $`|\mathrm{\Psi }(0)|_{cd}^2`$. The modified wave function is found to account for the discrepancy of 0.20 ps<sup>-1</sup> between the decay rates of $`\mathrm{\Lambda }_b`$ and B<sup>0</sup> and for the difference of 2.6 ps<sup>-1</sup> in the case of $`\mathrm{\Lambda }_c`$ and D<sup>0</sup>. Heavy flavour hadrons, H, contain a heavy quark, Q (b and c), and a light cloud comprising light quarks (anti-quarks) and gluons. When the mass of the heavy quark goes to infinity, the picture of heavy hadron decays is so simplified that the light cloud has no role to play. The theoretical study of heavy hadrons is facilitated by the expansion of the hadronic matrix element, based on the operator product expansion of QCD, in inverse powers of the heavy quark mass, m. The leading order of the expansion, corresponding to the asymptotic limit of the heavy quark mass, describes the decay rate of a heavy flavour hadron as if it were a free heavy quark decay. This implies the same lifetime for all heavy hadrons of a given heavy flavour quantum number. The next-to-leading order terms, appearing at $`1/m^2`$, describe the motion of the heavy quark inside the hadron and the chromomagnetic interaction, and distinguish the lifetimes of the mesons on the one hand from the baryons on the other (with the exception of $`\mathrm{\Omega }_Q`$). The term corresponding to the third order in 1/m of the expansion is the matrix element of four-quark operators, which contains effects such as W-exchange, weak annihilation and Pauli interference, coming from the spectator (light) quarks. At this order, the lifetimes of the various mesons differ among themselves, and so do those of the baryons. The spectator effects, which appear through four-quark operators, are expected, though still poorly understood, to explain the intricacies of the lifetime differences and to fix the lifetime hierarchy of the hadrons. According to the idea of the heavy quark expansion, all hadrons of a given heavy flavour are expected to have nearly the same lifetime. The theoretical prediction of the ratio of lifetimes of $`\mathrm{\Lambda }_b`$ and $`B^0`$, obtained to two orders in 1/m, is 0.9. But this is much higher than the observed value of 0.78, for $`\tau (\mathrm{\Lambda }_b)`$ = 1.20 ps and $`\tau (B^0)`$ = 1.58 ps. On the other hand, the corresponding charm sector shows an intrinsically different picture due to the mass of the charm quark, which is not so asymptotically large as the b quark mass. In the charm case, the dominant effects come from the four-quark operators rather than the kinetic and chromomagnetic operators. Experimentally, the ratio of the lifetimes $`\tau (\mathrm{\Lambda }_c)/\tau (D^0)`$ is 0.496, for $`\tau (\mathrm{\Lambda }_c)`$ = 0.206 ps and $`\tau (D^0)`$ = 0.415 ps. Therefore, in both cases, the explanation for the suspected small lifetime, and the equivalently enhanced decay rate, of $`\mathrm{\Lambda }_Q`$ should come from the third order term of the heavy quark expansion in 1/m, where the matrix element involves four-quark operators. 
Given the present experimental decay rates for beauty hadrons, $`\mathrm{\Gamma }(\mathrm{\Lambda }_b)`$ = 0.83 $`\pm `$ 0.02 ps<sup>-1</sup> and $`\mathrm{\Gamma }(B^0)`$ = 0.63 $`\pm `$ 0.05 ps<sup>-1</sup>, the needed enhancement is 0.2 ps<sup>-1</sup>, whereas 2.6 ps<sup>-1</sup> is needed for charmed hadrons with decay rates $`\mathrm{\Gamma }(\mathrm{\Lambda }_c)`$ = 5 ps<sup>-1</sup> and $`\mathrm{\Gamma }(D^0)`$ = 2.4 ps<sup>-1</sup>. The four-quark operators are estimated using phenomenological models. Hence their size is questionable. Nevertheless, it is at present the only available approach. The four-quark operators are related to the probability of finding the Q$`\overline{q}`$ pair at the origin simultaneously using quark models, denoted by $`|\mathrm{\Psi }(0)|_{Qq}^2`$. The wave function is evaluated by relating it to the mass splitting of hadrons arising from the heavy-light quark interaction. In Ref., Rosner evaluated the wave function, using the hyperfine splitting in a similarly heavy-flavoured baryon, utilising the DELPHI value for the $`\mathrm{\Sigma }_b^{*}`$–$`\mathrm{\Sigma }_b`$ splitting <sup>*</sup><sup>*</sup>*Though the DELPHI value has not hitherto been confirmed, its use does not alter the conclusions, since the central value of the mass splitting due to the hyperfine interaction is expected to be around 50 MeV. Neubert and Sachrajda introduced hadronic parameters accounting for hybrid renormalisation while parameterizing the four-quark matrix elements for b-flavoured hadrons. The hadronic parameters are as yet unknown. Their values are obtained from QCD sum rules. As an extension to c-flavoured hadrons, Voloshin studied the charmed baryons. In this approach, the authors of Ref. analysed the inclusive charmed-baryon decays to fix the hierarchy of charmed lifetimes. In the present study, we take into account the contribution coming from the exchange of hard gluons when two different scales, the heavy quark mass, m, and the QCD scale, $`\mu `$, are involved, while estimating the wave function in the quark model. We treat the coupling constants corresponding to a meson and a baryon differently. It is found that the wave function, and hence the enhanced decay rate, is large in both the cases of b and c and explains part of the lifetime differences between $`\mathrm{\Lambda }_b`$ and B<sup>0</sup> as well as between $`\mathrm{\Lambda }_c`$ and D<sup>0</sup>. The enhancement in the decay rate of $`\mathrm{\Lambda }_b`$ arises from the processes involving four quarks: (a) the weak scattering process $`bu\to cd`$ in the $`\mathrm{\Lambda }_b`$, involving matrix elements between hadronic states of $`(\overline{b}b)(\overline{u}u)`$ operators, and (b) the process contributing to the Pauli interference, involving matrix elements of $`(\overline{b}b)(\overline{d}d)`$ operators. Hence, the enhancement of the $`\mathrm{\Lambda }_b`$ decay rate is given by: $$\mathrm{\Delta }\mathrm{\Gamma }(\mathrm{\Lambda }_b)=\frac{G_F^2}{(2\pi )}|\mathrm{\Psi }(0)|_{bu}^2|V_{ud}|^2|V_{cb}|^2m_b^2(1-x)^2[C_{-}^2(1+x)-C_+(C_{-}-C_+/2)]$$ (1) where $`x=m_c^2/m_b^2`$; $`C_{-}`$ and $`C_+=C_{-}^{-1/2}`$ are the short distance QCD enhancement and suppression factors for quarks in a colour antitriplet and sextet respectively: $$C_{-}=[\alpha _s(m_b^2)/\alpha _s(m_W^2)]^{4/\beta },\beta =11-2n_f/3$$ (2) where $`n_f`$ is the number of active quark flavours between $`m_b`$ and $`m_W`$. 
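Equation (2) is easy to evaluate numerically. A minimal sketch, with illustrative $`\alpha _s`$ inputs that are our assumptions rather than values quoted in the paper:

```python
# Leading-log evaluation of equation (2).  The alpha_s inputs are
# illustrative assumptions; they are not values quoted in the paper.
alpha_mb, alpha_mw = 0.22, 0.12        # assumed alpha_s(m_b^2), alpha_s(m_W^2)
n_f = 5                                # active flavours between m_b and m_W
beta = 11.0 - 2.0 * n_f / 3.0          # = 23/3

c_minus = (alpha_mb / alpha_mw) ** (4.0 / beta)   # enhancement factor, > 1
c_plus = c_minus ** -0.5                          # suppression factor, < 1

print("C_- = %.3f, C_+ = %.3f" % (c_minus, c_plus))
# These inputs give C_- ~ 1.37 and C_+ ~ 0.85, the familiar pattern of
# short-distance enhancement and suppression.
```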
The $`C_{-}`$ term corresponds to the weak scattering process $`bu\to cd\to bu`$ and the other term represents the destructive interference between the two intermediate d quarks in the process $`bd\to c\overline{u}dd\to bd`$. The wave function for the bu pair (in the initial baryon), $`|\mathrm{\Psi }(0)|_{bu}^2`$, in Eq. (1) is of the form: $$|\mathrm{\Psi }(0)|_{bu}^2=\frac{4}{3}\frac{\mathrm{\Delta }M(B_{ijk})}{\mathrm{\Delta }M(M_{i\overline{j}})}\xi |\mathrm{\Psi }(0)|_{b\overline{u}}^2$$ (3) where $$\mathrm{\Delta }M(B_{ijk})=\frac{16\pi }{9}\alpha _s\underset{i>j}{\sum }\frac{<S_i\cdot S_j>}{m_im_j}|\mathrm{\Psi }(0)|_{ij}^2$$ (4) and $$\mathrm{\Delta }M(M_{i\overline{j}})=\frac{32\pi }{9}\alpha _s\frac{<S_i\cdot S_j>}{m_im_j}|\mathrm{\Psi }(0)|_{i\overline{j}}^2$$ (5) are the hyperfine mass splittings in a baryon and in a meson respectively. There is a colour factor of 1/2 in the baryonic case due to the quark composition: a heavy quark and two light quarks. Under isospin symmetry, the effective masses of the light quarks are equal. By the same token, the wave functions for the bu and bd pairs are equal. Equation (3) is obtained for the values of $`<S_i\cdot S_j>`$ = (1/4, -3/4) with spin (0, 1) for the meson and $`<S_i\cdot S_j>`$ = (1/4, -1/2) with spin (1/2, 3/2) for the baryon with $`S_{qq}`$ = 1. In Eq. (3), $`\xi `$ is the ratio of the coupling constants governing a baryon and a meson. The coupling governing a baryon is stronger than that of a meson. Though $`\xi `$ has been chosen as a free parameter varying from 0.25 to 1.5, it can be exactly calculated. In a rigorous sense, it should be greater than unity. The wave function on the right hand side of Eq. (3) corresponds to the matrix element for B-meson decay into vacuum, parameterised as, $$|<0|\overline{q}\gamma _\mu \gamma _5Q|B>|^2=f_B^2M_B^2$$ (6) and to the relation obtained in the non-relativistic limit $$|<0|\overline{q}\gamma _\mu \gamma _5Q|B>|^2=12M_B|\mathrm{\Psi }(0)|_{b\overline{u}}^2$$ (7) Both Eqs. (6) and (7) characterise the same process involving a heavy quark and a light quark (and gluons) but are normalised at two different scales: the normalisation point of Eq. (6) is $`\mu `$ ($`\sim `$R<sup>-1</sup>), which is of the order of the virtuality of the quarks inside the meson, and the scale of Eq. (7) is the mass of the heavy quark. Therefore, in order to account for the hard gluon exchange between the light and heavy quarks, Eqs. (6) and (7) have to be related. The relation turns out to be an evolution equation of current operators: $$<0|\overline{q}\gamma _\mu \gamma _5Q|B>(m_Q)=<0|\overline{q}\gamma _\mu \gamma _5Q|B>(\mu )[\alpha _s(\mu )/\alpha _s(m_Q)]^{\gamma /\beta },\mu \ll m_Q$$ (8) where $`\gamma `$ (= 2) is the hybrid anomalous dimension. The coupling constant defined through the leading order renormalisation group equation can be expressed as $$\alpha _s(m)=\frac{\alpha _s(\mu )}{[1-\frac{b}{2\pi }\alpha _s(\mu )\mathrm{ln}(\mu /m)]}$$ (9) Using eq. (8), eq. (3) takes the form $$|\mathrm{\Psi }(0)|_{bu}^2=\frac{4}{3}\frac{\mathrm{\Delta }M(B_{ijk})}{\mathrm{\Delta }M(M_{i\overline{j}})}\frac{f_B^2M_B}{12}\xi \left(1-\frac{b}{2\pi }\alpha _s(\mu )\mathrm{ln}\frac{\mu }{m}\right)^{\frac{2\gamma }{\beta }}$$ (10) The parameter $`\xi `$ describes the ratio of the coupling constants of the baryon and the meson. The factor in brackets represents the logarithmic effects due to the hard gluon exchange that would be expected when one goes down to a scale as small as the hadronic scale from the heavy quark mass. 
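A rough numerical sketch of equations (3) and (10) is given below; every input (decay constant, hyperfine splittings, scales, $`\xi `$) is an illustrative assumption, not a value from the paper's tables.

```python
import math

# All inputs are illustrative (GeV units); they are not the values fitted
# in the paper's tables.
f_B, M_B = 0.18, 5.279        # assumed B decay constant and mass
dM_baryon = 0.056             # assumed Sigma_b* - Sigma_b splitting
dM_meson = 0.046              # assumed B* - B hyperfine splitting
xi = 1.0                      # baryon-to-meson coupling ratio
alpha_mu = 0.5                # assumed alpha_s at the hadronic scale mu
b = 9.0                       # 11 - 2 n_f / 3 with n_f = 3, an assumption
gamma = 2.0                   # hybrid anomalous dimension, as in the text
mu_over_m = 0.2               # assumed ratio of hadronic scale to m_b

psi2_meson = f_B**2 * M_B / 12.0                      # Eqs. (6)-(7)
hybrid = (1.0 - b / (2.0 * math.pi) * alpha_mu
          * math.log(mu_over_m)) ** (2.0 * gamma / b)  # bracket of Eq. (10)
psi2_baryon = (4.0 / 3.0) * (dM_baryon / dM_meson) * xi * psi2_meson * hybrid

print("|Psi(0)|^2_bu ~ %.3e GeV^3" % psi2_baryon)   # ~3e-2 GeV^3 here
```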
This hard-gluon factor modifies the expectation value of the wave function density at the origin. The choice of values of the parameters is given in Table I. The $`\mathrm{\Delta }M(B_{ijk})`$ for $`\mathrm{\Lambda }_b`$ is given by M($`\mathrm{\Sigma }_b^{*}`$) $`-`$ M($`\mathrm{\Sigma }_b`$) from DELPHI. For c, the $`\mathrm{\Delta }M(B_{ijk})`$ is given by M($`\mathrm{\Sigma }_c^{*}`$) = 2517.5 $`\pm `$ 1.4 MeV and M($`\mathrm{\Sigma }_c`$) = 2452.2 $`\pm `$ 0.6 MeV. The enhanced decay rate is now given by Eq. (1) along with Eq. (10). The results for the enhanced decay rates of $`\mathrm{\Lambda }_b`$ and $`\mathrm{\Lambda }_c`$ are given in Tables II and III respectively. The wave function density is, roughly speaking, independent of the value of the hadronic scale. However, its dependence upon the parameter $`\xi `$ is important. For $`\xi `$ greater than one, the enhanced decay rate comes closer to the enhancement required to explain the smaller lifetimes of the $`\mathrm{\Lambda }_b`$ and $`\mathrm{\Lambda }_c`$ baryons. In conclusion, it is demonstrated that the four-quark operators appearing at third order in the 1/m expansion explain the needed enhancement in the decay rates of $`\mathrm{\Lambda }_Q`$. Though subtle, the difference between the meson and $`\mathrm{\Lambda }_b`$ baryon couplings matters considerably. On the other hand, if the couplings are considered equal, the four-quark operators still account for the difference in lifetimes of the $`\mathrm{\Lambda }_b`$ baryon and the B meson. ###### Acknowledgements. The author wishes to thank Prof. P. R. Subramanian for fruitful discussions. He thanks the UGC for its support through the Special Assistance Programme.
no-problem/9912/astro-ph9912180.html
ar5iv
text
# The Power Spectrum of the Sunyaev–Zel’dovich Effect ## I Introduction The hot gas in the IGM induces distortions in the spectrum of the Cosmic Microwave Background (CMB) through inverse Compton scattering. This effect, known as the thermal Sunyaev-Zel’dovich (SZ) effect, is a source of secondary anisotropies in the temperature of the CMB (see Refs. for reviews). Because the SZ effect is proportional to the integrated pressure of the gas, it is a direct probe of the large scale structure in the low redshift universe. Moreover, it must be carefully subtracted from the primary CMB anisotropies, to allow the high-precision determination of cosmological parameters with the new generation of CMB experiments (see references therein). Thanks to impressive recent observational progress, the SZ effect from clusters of galaxies is now well established. The statistics of SZ clusters were calculated by a number of authors using the Press-Schechter (PS) formalism (e.g., Refs.). Recently, Atrio-Barandela & Mücket used this formalism, along with assumptions about cluster profiles, to compute the angular power spectrum of the SZ anisotropies for the Einstein–de Sitter universe. A similar calculation was carried out by Komatsu & Kitayama (KK99, hereafter), who also studied the effect of the spatial correlation of clusters and of cosmological models. The statistics of SZ anisotropies have also been studied using hydrodynamical simulations. Scaramella et al., and more recently da Silva et al., have used this approach to construct SZ maps and study their statistical properties. Persi et al. instead used a semi-analytical method, consisting of computing the SZ angular power spectrum by projecting the 3-dimensional power spectrum of the gas pressure on the sky. In this paper, we follow the approach of Persi et al. using Moving Mesh Hydrodynamical (MMH) simulations. We focus on the angular power spectrum of the SZ effect and study its dependence on cosmology. We compare our results to the Press-Schechter predictions derived using the methods of KK99. We study the redshift dependence of the SZ power spectrum, and estimate the contribution of groups and filaments. We also study the effect of the finite resolution and finite box size of the simulations. Results from projected maps of the SZ effect using the same simulations are presented in Seljak et al. We study the implications of our results for future and upcoming CMB missions (see also Refs.). This paper is organized as follows. In §II, we briefly describe the SZ effect and derive expressions for the integrated comptonization parameter and the SZ power spectrum. In §III, we describe the different methods used to compute these quantities: hydrodynamical simulations, the PS formalism, and a simple model with constant bias. We present our results in §IV, and discuss the limitations imposed by the finite resolution and box size of the simulations. Our conclusions are summarized in §V. ## II Sunyaev–Zel’dovich Effect The SZ effect is produced by the inverse Compton scattering of CMB photons. The resulting change in the (thermodynamic) CMB temperature is $$\frac{\mathrm{\Delta }T}{T_0}=yj(x)$$ (1) where $`T_0`$ is the unperturbed CMB temperature, $`y`$ is the comptonization parameter, and $`j(x)`$ is a spectral function defined in terms of $`x\equiv h\nu /k_BT_0`$, where $`h`$ is the Planck constant and $`k_B`$ is the Boltzmann constant. 
In the nonrelativistic regime, the spectral function is given by $`j(x)=x(e^x+1)(e^x-1)^{-1}-4`$, which is negative (positive) for observation frequencies $`\nu `$ below (above) $`\nu _0\simeq 217`$ GHz, for $`T_0\simeq 2.725`$ K. In the Rayleigh-Jeans (RJ) limit ($`x\ll 1`$), $`j(x)\simeq -2`$. The comptonization parameter is given by $$y=\sigma _T\int dl\,n_e\frac{k_BT_e}{m_ec^2}=\frac{\sigma _T}{m_ec^2}\int dl\,p_e$$ (2) where $`\sigma _T`$ is the Thomson cross-section, $`n_e`$, $`T_e`$ and $`p_e`$ are the number density, temperature and thermal pressure of the electrons, respectively, and the integral is over the physical line-of-sight distance $`dl`$. We consider a general FRW background cosmology with a scale parameter defined as $`a\equiv R/R_0`$, where $`R`$ is the scale radius at time $`t`$ and $`R_0`$ is its present value. The Friedmann equation implies that $`da=H_0\left(1-\mathrm{\Omega }+\mathrm{\Omega }_ma^{-1}+\mathrm{\Omega }_\mathrm{\Lambda }a^2\right)^{1/2}dt`$ where $`\mathrm{\Omega }\equiv \mathrm{\Omega }_m+\mathrm{\Omega }_\mathrm{\Lambda }`$, $`\mathrm{\Omega }_m`$, and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ are the present total, matter, and vacuum density in units of the critical density $`\rho _c\equiv 3H_0^2/(8\pi G)`$. As usual, the Hubble constant today is parametrized by $`H_0\equiv 100h`$ km s<sup>-1</sup> Mpc<sup>-1</sup>. It is related to the present scale radius by $`R_0=c/(\kappa H_0)`$, where $`\kappa ^2\equiv 1-\mathrm{\Omega }`$, 1, and $`\mathrm{\Omega }-1`$ in an open, flat, and closed cosmology, respectively. The comoving distance $`\chi `$, the conformal time $`\tau `$, the light travel time $`t`$, and the physical distance $`l`$ are then related by $`dl=cdt=cad\tau =ad\chi `$. With these conventions, and assuming that the electrons and ions are in thermal equilibrium, equation (2) becomes $$y=\sigma _T\int ad\chi \frac{\rho }{\mu _em_p}\frac{k_BT}{m_ec^2},$$ (3) where $`\rho `$ is the gas mass density, $`T`$ is the gas temperature, and $`\mu _e^{-1}\equiv n_e/(\rho /m_p)`$ is the number of electrons per proton mass. Equation (3) can be written in the convenient form $$y=y_0\int d\chi T_\rho a^{-2},$$ (4) where $`T_\rho \equiv \rho T/\overline{\rho }`$ is the gas density-weighted temperature, and $`\overline{\rho }=\rho _c\mathrm{\Omega }_ba^{-3}`$. The overbar denotes a spatial average and $`\mathrm{\Omega }_b`$ is the present baryon density parameter. The constant $`y_0`$ is given by $`y_0`$ $`\equiv `$ $`{\displaystyle \frac{\sigma _T\rho _c\mathrm{\Omega }_bk_B}{\mu _em_pm_ec^2}}`$ (5) $`\simeq `$ $`1.710\times 10^{-16}\left({\displaystyle \frac{\mathrm{\Omega }_bh^2}{0.05}}\right)\left({\displaystyle \frac{1.136}{\mu _e}}\right)\mathrm{K}^{-1}\mathrm{Mpc}^{-1},`$ (6) where the central value for $`\mu _e`$ was chosen to correspond to a He fraction by mass of 0.24, and that for $`\mathrm{\Omega }_b`$ to agree with Big Bang Nucleosynthesis constraints. The mean comptonization parameter $`\overline{y}`$ can be directly measured from the distortion of the CMB spectrum (see Ref. for a review), and is given by $$\overline{y}=y_0\int d\chi \overline{T}_\rho a^{-2}.$$ (7) It can thus be computed directly from the history of the volume-averaged density-weighted temperature $`\overline{T}_\rho `$. The gas in groups and filaments is at a temperature of the order of $`10^7\mathrm{K}`$ (or $`1\mathrm{keV}`$), and thus induces a $`y`$-parameter of the order of $`10^{-6}`$ over a cosmological distance of $`cH_0^{-1}\simeq 3000h^{-1}\mathrm{Mpc}`$ (see Eq. 7). This is one order of magnitude below the current upper limit of $`\overline{y}<1.5\times 10^{-5}`$ (95% CL) from the COBE/FIRAS instrument. 
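This estimate is simple enough to reproduce directly; a minimal sketch using the fiducial $`y_0`$ of equation (6) and round numbers for the temperature and path length:

```python
# Order-of-magnitude check of the statement above, using the fiducial
# y_0 of equation (6); the temperature and path length are round numbers.
y0 = 1.710e-16      # K^-1 Mpc^-1 for Omega_b h^2 = 0.05, mu_e = 1.136
T_rho = 1.0e7       # K, typical of gas in groups and filaments
path = 3000.0       # Mpc, roughly a Hubble length c/H_0

y_bar = y0 * T_rho * path   # a ~ 1 at low redshift, so a^-2 ~ 1
print("y_bar ~ %.1e (COBE/FIRAS 95%% CL limit: 1.5e-5)" % y_bar)
```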
The CMB temperature fluctuations produced by the SZ effect are quantified by their spherical harmonic coefficients $`a_{lm}`$, which are defined by $`\mathrm{\Delta }T(𝐧)/T_0=\sum _{lm}a_{lm}Y_{lm}(𝐧)`$. The angular power spectrum of the SZ effect is then $`C_l\equiv \langle |a_{lm}|^2\rangle `$, where the brackets denote an ensemble average. Since most of the SZ fluctuations occur on small angular scales, we can use the small angle approximation and consider the Fourier coefficients $`\stackrel{~}{\mathrm{\Delta }T}(𝐥)=\int d^2𝐧\mathrm{\Delta }T(𝐧)e^{-i𝐥𝐧}`$. They are related to the power spectrum by $`\langle \stackrel{~}{\mathrm{\Delta }T}(𝐥)\stackrel{~}{\mathrm{\Delta }T}^{*}(𝐥^{})\rangle \equiv T_0^2(2\pi )^2\delta ^{(2)}(𝐥-𝐥^{})C_l`$, where $`\delta ^{(2)}`$ denotes the 2-dimensional Dirac-delta function. The SZ temperature variance is then $`\sigma _\mathrm{T}^2\equiv \langle \left(\mathrm{\Delta }T/T_0\right)^2\rangle =\sum _l(2l+1)C_l/(4\pi )\simeq \int dl\,lC_l/(2\pi )`$. Since, as we will see, $`\overline{T}_\rho a^{-2}`$ varies slowly on a cosmic time scale and since the pressure fluctuations occur on scales much smaller than the horizon scale, we can apply Limber’s equation in Fourier space (e.g., Ref.) to equation (4) and obtain, $$C_l\simeq j^2(x)y_0^2\int d\chi \overline{T}_\rho ^2P_p(\frac{l}{r},\chi )a^{-4}r^{-2},$$ (8) where $`r=R_0\mathrm{sinh}(\chi R_0^{-1})`$, $`\chi `$, and $`R_0\mathrm{sin}(\chi R_0^{-1})`$ are the comoving angular diameter distances in an open, flat, and closed cosmology, respectively, and $`P_p(k,\chi )`$ is the 3-dimensional power spectrum of the pressure fluctuations at a given comoving distance $`\chi `$. In general, we define the 3-dimensional power spectrum $`P_q(k)`$ of a quantity $`q`$ by $$\langle \stackrel{~}{\delta _q}(𝐤)\stackrel{~}{\delta _q}^{*}(𝐤^{})\rangle =(2\pi )^3\delta ^{(3)}(𝐤-𝐤^{})P_q(k),$$ (9) where $`\stackrel{~}{\delta _q}(𝐤)=\int d^3x\delta _q(𝐱)e^{-i𝐤𝐱}`$, and $`\delta _q\equiv (q-\overline{q})/\overline{q}`$. With these conventions, the variance is $`\sigma _q^2\equiv \langle \delta _q^2\rangle =\int d^3kP_q(k)/(2\pi )^3`$. For a flat universe, equation (8) agrees with the expression of Persi et al. The SZ power spectrum can thus be readily computed from the history of the mean density-weighted temperature $`\overline{T}_\rho (\chi )`$ and of the pressure power spectrum $`P_p(k,\chi )`$. ## III Methods ### A Simulations We used the MMH code written by Pen, which was developed by merging concepts from earlier hydrodynamic methods. Grid-based algorithms feature low computational cost and high resolution per grid element, but have difficulties providing the large dynamic range in length scales necessary for cosmological applications. On the other hand, particle-based schemes, such as Smoothed Particle Hydrodynamics (for a review see Ref.), fix their resolution in mass elements rather than in space and are able to resolve dense regions. However, due to the development of shear and vorticity, the nearest neighbors of particles change in time and must be determined dynamically at each time step at a large computational cost. To resolve these problems, several approaches have recently been suggested. The MMH code combines the advantages of both the particle and grid-based approaches by deforming a grid mesh along potential flow lines. It provides a twentyfold increase in resolution over previous Cartesian grid Eulerian schemes, while maintaining regular grid conditions everywhere. The grid is structured in a way that allows the use of high resolution shock capturing TVD schemes (see, for example, Ref. and references therein) at a low computational cost per grid cell. 
The code is optimized for parallel processing, which is straightforward due to the regular mesh structure. The moving mesh provides linear compression factors of about 10, which correspond to compression factors of about $`10^3`$ in density. Note that this code does not include the effects of cooling and feedback of the gas. We ran three simulations with $`128^3`$ curvilinear cells, corresponding to $`\sigma _8`$-normalized SCDM, $`\mathrm{\Lambda }`$CDM, and OCDM models. The simulation parameters are listed in table I. Note that in all cases, the shape parameter for the linear power spectrum was set to $`\mathrm{\Gamma }=\mathrm{\Omega }_mh`$. The simulation output was saved at $`z=0,0.5,1,2,4,8`$ and $`16`$, and was used to compute 3-dimensional statistics. To test the resolution of the simulation, we compared the power spectrum of the dark matter density fluctuations $`P_{\rho DM}(k)`$ (defined in Eq. 9 with $`q\equiv \rho _{DM}`$) from the simulations to that from the Peacock & Dodds fitting formula. The results for the $`\mathrm{\Lambda }`$CDM model are shown on figure 5, and are similar for the other two models. The simulation power spectrum agrees well with the fitting formula for $`0.2\lesssim k\lesssim 2h`$ Mpc<sup>-1</sup> at all redshifts. For $`k\lesssim 0.2`$ and $`k\gtrsim 2h`$ Mpc<sup>-1</sup>, the simulations are limited by the finite size of the box and the finite resolution, respectively. We will use these limits below to study the effect of these limitations on the SZ power spectrum. ### B Press–Schechter Formalism It is useful to compare the simulation results with analytic calculations based on the Press–Schechter (PS) formalism. We compute the angular power spectrum and the mean Comptonization parameter using the methods of KK99 and Barbosa et al., respectively. For definiteness, we adopt the spherical isothermal $`\beta `$ model with the gaussian-like filter for the gas density distribution in a cluster, $$\rho _{\mathrm{gas}}(r)=\rho _{\mathrm{gas0}}\left[1+\left(\frac{r}{r_c}\right)^2\right]^{-3\beta /2}e^{-r^2/\xi R^2},$$ (10) where $`R`$ and $`r_c`$ are the virial radius and the core radius of a cluster, respectively, and a fudge factor $`\xi =4/\pi `$ is taken to properly normalize the gas mass enclosed in a cluster. We employed a self-similar model for the cluster evolution. Note that other evolution models yield spectra that differ only at small angular scales ($`l>2000`$). The gas mass fraction of objects is taken to be the cosmological mean, i.e., $`\mathrm{\Omega }_b/\mathrm{\Omega }_m`$. The volume-averaged density-weighted temperature is given by $$\overline{T}_\rho (z)=\frac{1}{\overline{\rho }_0}\int _{M_{\mathrm{min}}}^{M_{\mathrm{max}}}dM\,M\frac{dn(M,z)}{dM}T(M,z),$$ (11) where $`\overline{\rho }_0=2.775\times 10^{11}\mathrm{\Omega }_mh^2M_{\odot }\mathrm{Mpc}^{-3}`$ is the present mean mass density of the universe, $`dn/dM`$ is the PS mass function which gives the comoving number density of collapsed objects of mass $`M`$ at $`z`$. $`T`$ is given by the virial temperature $$k_BT(M,z)=5.2\beta ^{-1}\left(\frac{\mathrm{\Delta }_c(z)}{18\pi ^2}\right)^{1/3}\left(\frac{M}{10^{15}h^{-1}M_{\odot }}\right)^{2/3}(1+z)\mathrm{\Omega }_m^{1/3}\mathrm{keV},$$ (13) where $`\mathrm{\Delta }_c(z)`$ is the mean mass density of a collapsed object at $`z`$ in units of $`\overline{\rho }_0\mathrm{\Omega }_m(1+z)^3`$. While Barbosa et al. used $`\beta \simeq 5/6`$, we adopt $`\beta =2/3`$ according to KK99. 
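For reference, equations (10) and (13) translate into a few lines of code; the $`\mathrm{\Omega }_m`$ used in the example is an assumed round value, not necessarily that of our simulations.

```python
import numpy as np

def virial_temperature(M, z, omega_m, beta=2.0/3.0, delta_c=18.0*np.pi**2):
    """Eq. (13): virial temperature [keV] of a halo of mass M [h^-1 M_sun]."""
    return (5.2 / beta) * (delta_c / (18.0 * np.pi**2)) ** (1.0 / 3.0) \
        * (M / 1.0e15) ** (2.0 / 3.0) * (1.0 + z) * omega_m ** (1.0 / 3.0)

def beta_profile(r, r_c, R, rho0, beta=2.0/3.0, xi=4.0/np.pi):
    """Eq. (10): isothermal beta-model with the gaussian-like cutoff."""
    return rho0 * (1.0 + (r / r_c) ** 2) ** (-1.5 * beta) \
        * np.exp(-r**2 / (xi * R**2))

# e.g., a 1e15 h^-1 M_sun cluster at z = 0 with an assumed Omega_m = 0.3:
print("kT = %.1f keV" % virial_temperature(1.0e15, 0.0, omega_m=0.3))
```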
The limits $`M_{\mathrm{min}}`$ and $`M_{\mathrm{max}}`$ should be taken to fit the resolved mass range in the simulation. The mass enclosed in the spherical top-hat filter with comoving wavenumber $`k`$ is $`M`$ $`=`$ $`{\displaystyle \frac{4\pi }{3}}\overline{\rho }_0\left({\displaystyle \frac{\pi }{k}}\right)^3`$ (14) $`=`$ $`3.6\times 10^{13}\left({\displaystyle \frac{k}{1h\mathrm{Mpc}^{-1}}}\right)^{-3}\mathrm{\Omega }_mh^{-1}M_{\odot }.`$ (15) Since the $`k`$-range of confidence in the simulation is approximately $`0.2\lesssim k\lesssim 2h\mathrm{Mpc}^{-1}`$ (see §III A), equation (14) gives $`M_{\mathrm{min}}\simeq 4.5\times 10^{12}\mathrm{\Omega }_mh^{-1}M_{\odot }`$ and $`M_{\mathrm{max}}\simeq 4.5\times 10^{15}\mathrm{\Omega }_mh^{-1}M_{\odot }`$. This mass range is used for calculating the angular power spectrum, the mean Comptonization parameter, and the density-weighted temperature. A more detailed inspection of Figure 5 reveals that the resolution of the simulations depends on redshift, and involves a power law cutoff in $`k`$ rather than a sharp cutoff. This must be kept in mind when comparing the two methods (see §IV D). ### C Constant Bias Model It is also useful to consider a simple model with constant bias. The bias $`b_p`$ of the pressure with respect to the DM density can be defined as $$b_p^2(k,z)\equiv \frac{P_p(k,z)}{P_{\rho _{DM}}(k,z)},$$ (16) and generally depends both on wave number $`k`$ and redshift $`z`$. In this simple model, we assume that $`b_p`$ is independent of both $`k`$ and $`z`$, and replace the pressure power spectrum $`P_p(k,z)`$ in Equation (8) by $`b_p^2P_{\rho DM}(k,z)`$, where $`P_{\rho DM}`$ is evaluated using the Peacock & Dodds fitting formula. This has the advantage of allowing us to extend the contribution to the SZ power spectrum to arbitrary ranges of $`k`$. This will be used in §IV D to test the effect of finite resolution and finite box size on the SZ power spectrum. ## IV Results ### A Projected Maps Figure 1 shows a map of the density-weighted temperature for the $`\mathrm{\Lambda }`$CDM model projected through one box at $`z=0`$. Clusters of galaxies are clearly apparent as regions with $`k_BT\gtrsim 3`$ keV. The gas in filaments and groups can be seen to stretch between clusters and has temperatures in the range $`0.1\lesssim k_BT\lesssim 3`$ keV. While these regions have smaller temperatures, they have a relatively large covering factor and can thus contribute considerably to the $`y`$-parameter and to the SZ fluctuations. This can be seen more clearly in Figure 2, which shows the corresponding map of the comptonization parameter. Clusters produce $`y`$-parameters greater than $`10^{-5}`$, while groups and filaments produce $`y`$-parameters in the range $`10^{-7}-10^{-5}`$. Note that the total SZ effect on the sky would include contributions from a number of simulation boxes along the line-of-sight. In such a map, the filamentary structure is less apparent as filaments are averaged out by projection. A quantitative analysis of the contribution of groups and filaments to the SZ effect is presented in the following sections. ### B Mean Comptonization Parameter The evolution of the density-weighted temperature for each of the simulations is shown on figure 3. The temperatures at present are listed in table II and are quite similar. This is expected since all models were chosen to have similar $`\sigma _8`$ normalizations. The evolution is steeper for the SCDM, flatter for the OCDM model, and intermediate for the $`\mathrm{\Lambda }`$CDM model. This is consistent with the different rate of growth of structure in each model. 
Also plotted on figure 3 is the density-weighted temperature derived from the PS formalism (Eq. 11). The agreement for $`z\lesssim 4`$ is good, both for the relative amplitudes and for the shapes of the temperature evolution. At $`z\gtrsim 4`$ the non-linear mass scale is not sufficiently large compared to the mass resolution of the simulation, so the temperatures are not meaningful in that regime. This is however not a serious limitation, as these redshifts do not contribute significantly to either the mean comptonization or the SZ fluctuations. The PS temperatures exceed those of the simulations at low redshift for all cosmological models, since they are dominated there by massive (high-temperature) clusters, which may be missed in the simulations owing to the finite box size. The PS temperatures at $`z=0`$ are listed in Table II. The parameters of our $`\mathrm{\Lambda }`$CDM model were chosen to coincide with those for the simulation of Cen & Ostriker. While the slope of our density-weighted temperature agrees approximately with theirs for $`z\lesssim 3`$, the amplitude is significantly different. They find a final temperature of about 0.9 keV, which is a factor of about 5 larger than ours. This discrepancy could be due to the fact that their simulation includes feedback from star formation, while ours includes only gravitational heating. It is however surprising that standard feedback could produce such a large difference. One can estimate the gravitational binding energy of virialized matter from the cosmic energy equation; the corresponding thermal energy of the gas is of order 1/4 keV, consistent with our simulations. We should note, however, that the high thermal temperatures from feedback may be required for consistency with the X-ray background constraints. The reason for this discrepancy remains unknown at present, and should be kept in mind for the interpretation of our results. The mean comptonization parameter for each simulation was derived using equation (7), and is listed in table II. In all cases, $`\overline{y}`$ is well below the upper limit $`\overline{y}<1.5\times 10^{-5}`$ (95% CL) set by the COBE/FIRAS instrument. The differential and cumulative redshift dependence of $`\overline{y}`$ are shown on figure 4. For the three models, most of the mean SZ effect is produced at $`z\lesssim 2`$. The contribution from high redshift is largest for the OCDM and smallest for the SCDM model, again in agreement with the relative growth of fluctuations in each model. The differential and cumulative redshift dependence of $`\overline{y}`$ derived from the PS formalism are shown on this figure as the thin lines. The values of $`\overline{y}`$ from PS are also listed in Table II. They are higher than those from the simulations by about 25% for the $`\mathrm{\Lambda }`$CDM and OCDM models, and are in close agreement for the SCDM model. The shapes of the differential curves approximately agree, although the PS formalism predicts more contributions from lower redshifts. This can be traced to the slightly steeper evolution of the PS temperatures in figure 3, and is due to massive nearby clusters. ### C Power Spectrum As noted in §II, the SZ power spectrum can be derived from the history of the temperature $`\overline{T}_\rho `$ and of the pressure power spectrum $`P_p(k)`$ (Eq. 8). The evolution of the pressure power spectrum is shown on figure 6, for the $`\mathrm{\Lambda }`$CDM simulation. The amplitude of $`P_p(k)`$ increases with decreasing redshift, while keeping an approximately similar shape.
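As an aside, the $`\overline{y}`$ values of §IV B can be recovered from a tabulated $`\overline{T}_\rho (z)`$. Eq. (7) of the text is not reproduced in this excerpt, so the sketch below (ours) uses the standard line-of-sight form $`dy/dz=\sigma _Tn_{e0}c(1+z)^2k_B\overline{T}_\rho /(m_ec^2H(z))`$; the temperature history, electron density and cosmological parameters in the demo are placeholders.

```python
import numpy as np

SIGMA_T = 6.6524e-29        # Thomson cross-section [m^2]
ME_C2_KEV = 511.0           # electron rest energy [keV]
C_LIGHT = 2.998e8           # speed of light [m/s]
MPC = 3.0857e22             # one Mpc [m]

def mean_y(z, T_rho_keV, ne0, h=0.7, Omega_m=0.3, Omega_L=0.7):
    """Mean Comptonization from a density-weighted temperature history
    T_rho(z) [keV] and the present mean electron density ne0 [m^-3]."""
    H = 100.0 * h * 1e3 / MPC * np.sqrt(Omega_m * (1 + z)**3 + Omega_L)
    dydz = SIGMA_T * ne0 * C_LIGHT * (1 + z)**2 * (T_rho_keV / ME_C2_KEV) / H
    return np.trapz(dydz, z)

z = np.linspace(0.0, 8.0, 400)
T_rho = 0.3 / (1.0 + z)**2        # invented temperature history [keV]
print(mean_y(z, T_rho, ne0=0.2))  # of order 1e-6 for these inputs
```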
More instructive than the raw pressure power spectrum is the evolution of the pressure bias $`b_p(k,z)`$ (Eq. 16), which is shown on figure 7. For $`z\lesssim 1`$, $`b_p`$ is approximately independent of scale, in the $`k`$-range of confidence ($`0.2\lesssim k\lesssim 2h`$ Mpc<sup>-1</sup>; see figure 5). The value of $`b_p`$ at $`z=0`$ and $`k=0.5h`$ Mpc<sup>-1</sup> is listed in table II. For $`z\gtrsim 1`$, $`b_p`$ remains approximately constant on large scales, but is larger on small scales. Indeed, at early times only a small number of regions have collapsed and become hot enough to contribute to the pressure. As a result, the pressure at high-$`z`$ is more strongly biased on small scales. The SZ angular power spectrum derived from integrating the pressure power spectrum along the line of sight (Eq. 8) is shown in Figure 8 for each simulation. For comparison, the spectrum of primary CMB anisotropies was computed using CMBFAST, and was also plotted on this figure as the solid line. The SZ power spectrum can be seen to be two orders of magnitude below the primordial power spectrum for $`l\lesssim 2000`$, but comparable to it beyond that. Because of finite resolution and box size, the SZ power spectra should be interpreted as lower limits outside of the $`l`$-range of confidence highlighted by thicker lines (see §IV D). The SCDM spectrum is lower than those for the $`\mathrm{\Lambda }`$CDM and OCDM models. This is a consequence of the lower value of $`\sigma _8`$ for this model. Indeed, KK99 have shown that the SZ power spectrum scales as $`C_l\propto \mathrm{\Omega }_b^2\sigma _8^6h`$, and is thus very sensitive to this normalization. This scaling relation also allows us to compare our results to the SCDM calculation of Persi et al. The amplitude of their power spectrum, rescaled to the same value of $`\sigma _8`$, is within 20% of ours at $`l=1000`$, while its shape is similar to ours. Figure 9 presents a comparison of the SZ power spectra derived from each of the three methods described in §III. For both the simulations and the PS formalism, the SZ power spectra peak around $`l\simeq 2000`$ for the SCDM and $`\mathrm{\Lambda }`$CDM models, and around $`l\simeq 5000`$ for the OCDM model. On the other hand, the constant bias models peak at $`l\simeq 10000`$–$`30000`$: having no mass or $`k`$ cutoff, they retain more power on small scales. In §IV D, we will use this comparison to study the effect of finite resolution and box size of the simulations. For $`200\lesssim l\lesssim 2000`$, the simulation and PS predictions approximately agree for the SCDM and $`\mathrm{\Lambda }`$CDM models. On the other hand, for the OCDM model, the PS prediction is a factor of 3 higher than that from the simulations in this range. This can be traced to the fact that the $`\mathrm{\Lambda }`$CDM simulation yields a larger pressure bias $`b_p`$ (Eq. 16) at low redshifts than the OCDM simulation. By inspecting the figure corresponding to Figure 5 for the OCDM model, we indeed noticed that more power was missing on small scales in this simulation. This is probably due to the fact that the OCDM simulation was started at a higher redshift ($`z=100`$) than the other two simulations ($`z=30`$). Due to truncation errors in the Laplacian and gradient calculations, modes with frequencies close to the Nyquist frequency are known to grow much more slowly even in the linear regime. This effect is reduced if the simulation is started later.
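Both $`P_p(k)`$ and $`b_p(k)`$ are straightforward to extract from gridded fields. The sketch below is ours: it bins the raw $`|\delta _k|^2`$ of a periodic grid in spherical shells, omitting the normalization constants, which cancel in the bias ratio of Eq. (16); the toy fields in the demo are invented.

```python
import numpy as np

def binned_power(field):
    """Spherically binned raw |delta_k|^2 on a periodic grid; overall
    normalisation constants are omitted (they cancel in the bias ratio).
    Index 0 is the empty DC mode."""
    n = field.shape[0]
    dk = np.fft.rfftn(field / field.mean() - 1.0)
    p3 = np.abs(dk)**2
    kx = np.fft.fftfreq(n)[:, None, None] * n
    ky = np.fft.fftfreq(n)[None, :, None] * n
    kz = np.fft.rfftfreq(n)[None, None, :] * n
    ik = np.rint(np.sqrt(kx**2 + ky**2 + kz**2)).astype(int)
    power = np.bincount(ik.ravel(), weights=p3.ravel())
    counts = np.bincount(ik.ravel())
    return power / np.maximum(counts, 1)   # index = integer |k| in grid units

def pressure_bias(pressure, density):
    """b_p(k) = sqrt(P_p / P_rhoDM), Eq. (16), shell by shell."""
    return np.sqrt(binned_power(pressure) / binned_power(density))

# toy demo: a field whose density contrast is exactly doubled has b_p = 2
rng = np.random.default_rng(1)
rho = rng.lognormal(sigma=0.5, size=(64, 64, 64))
p = 1.0 + 2.0 * (rho / rho.mean() - 1.0)
print(pressure_bias(p, rho)[1:6])          # ~[2, 2, 2, 2, 2]
```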
The redshift dependence of the SZ power spectrum is shown in figure 11, for the $`\mathrm{\Lambda }`$CDM model. Most of the SZ fluctuations are produced at low redshifts: at $`l=500`$, about 50% of the power spectrum is produced at $`z\lesssim 0.1`$, and about 90% at $`z\lesssim 0.5`$. The contribution of the warm gas in groups and filaments can be studied by examining figure 12. This figure shows the $`\mathrm{\Lambda }`$CDM power spectrum measured after removing hot regions from the simulation volume, for several cutoff temperatures. Approximately 50% of the SZ power spectrum at $`l=500`$ is produced by gas with $`k_BT\lesssim 5\mathrm{keV}`$. In §IV E, we show that these combined facts give good prospects for the removal (and the detection) of SZ fluctuations from CMB maps. The behavior of the power spectrum for $`l\lesssim 1000`$, in figures 11 and 12, agrees with the results of KK99, who studied the Poisson and clustering contributions separately. At low $`l`$’s, the SZ power spectrum is produced primarily by bright (low-redshift or high-temperature) objects, i.e., by massive clusters, and is thus dominated by the Poisson term. However, after subtracting bright clusters from the SZ map, the correlation term dominates the Poisson term at high redshift. Therefore, the SZ spectrum on large angular scales, measured after subtracting bright spots, should trace clustering at high redshift. This interesting effect will be discussed in detail elsewhere. ### D Limitations of the Simulations It is important to assess the effect of the limitations of the simulations on these results. First, the finite resolution may lower the temperature $`T_\rho `$, since it prevents small-scale structures from collapsing. As we saw in §III B, the resolution limits of the simulations correspond to halo masses of about $`4.5\times 10^{12}\mathrm{\Omega }_mh^{-1}M_{\odot }`$. According to the PS formalism, the contribution to $`T_\rho `$ from halos with masses smaller than this limit is about 0.01–0.02 keV, for $`z\lesssim 4`$ in the $`\mathrm{\Lambda }`$CDM model. The SZ power spectrum at $`l\lesssim 2000`$ is produced mainly at low redshifts, and is therefore little affected by this limitation. Note, however, that $`\overline{y}`$, which is sensitive to small halos at high redshifts, is more affected. Indeed, the contribution to $`\overline{y}`$ by these halos is about $`0.7\times 10^{-6}`$, assuming a gas mass fraction of $`\mathrm{\Omega }_b/\mathrm{\Omega }_m`$. The finite box size and resolution also suppress power in the pressure power spectrum. As we saw in §III A and Figure 5, the simulations lack power for $`k\lesssim 0.1`$ and $`k\gtrsim 2h`$ Mpc<sup>-1</sup>. To test the impact of this suppression, we consider the constant bias model described in §III C. The total SZ power spectrum for this model is shown in figure 10 as the solid line, for the $`\mathrm{\Lambda }`$CDM case. This figure also shows the results of performing the same calculation, but after suppressing power in several ranges of $`k`$ values. The finite box size (keeping only modes with $`k>k_{\mathrm{min}}=0.1h`$ Mpc<sup>-1</sup>) reduces $`C_l`$ slightly for $`l\lesssim 200`$ and $`l\gtrsim 20000`$, and thus does not have a very large effect. On the other hand, the finite resolution ($`k<k_{\mathrm{max}}=2,5,10h`$ Mpc<sup>-1</sup>) reduces $`C_l`$ considerably for $`l\gtrsim 2000`$. The above results can be interpreted as follows. At a given $`l`$, the limited $`k`$-range corresponds to a limited $`z`$-range, $`l/k_{\mathrm{max}}<r(z)<l/k_{\mathrm{min}}`$; a numerical sketch of this correspondence is given below.
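The following sketch (ours) evaluates this correspondence for a flat $`\mathrm{\Lambda }`$CDM distance–redshift relation; the cosmological parameters are placeholders, and the specific values quoted in the text are worked out immediately after.

```python
import numpy as np

C_H0 = 2997.9          # Hubble distance c/H0 in h^-1 Mpc

def redshift_window(l, k_min=0.1, k_max=2.0, Omega_m=0.3, Omega_L=0.7):
    """Redshift range probed at multipole l by modes k_min < k < k_max,
    via the correspondence k = l / r(z) used in the text (flat LCDM)."""
    z = np.linspace(0.0, 20.0, 4001)
    Ez = np.sqrt(Omega_m * (1.0 + z)**3 + Omega_L)
    dz = z[1] - z[0]
    # cumulative trapezoidal integral for the comoving distance r(z)
    r = C_H0 * np.concatenate(
        [[0.0], np.cumsum(0.5 * (1.0 / Ez[1:] + 1.0 / Ez[:-1]) * dz)])
    z_lo = np.interp(l / k_max, r, z)
    z_hi = np.interp(l / k_min, r, z, right=np.inf)  # beyond r(20): open-ended
    return z_lo, z_hi

for l in (100, 1000, 10000):
    print(l, redshift_window(l))   # ~ (0.02, 0.4), (0.2, inf), (4-5, inf)
```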
Let us take $`k_{\mathrm{min}}=0.1`$ and $`k_{\mathrm{max}}=2h\mathrm{Mpc}^{-1}`$, as relevant for the simulations. Then, $`l=100`$, $`l=1000`$ and $`l=10000`$ correspond to $`0.02\lesssim z\lesssim 0.4`$, $`0.2\lesssim z<\mathrm{\infty }`$ and $`5\lesssim z<\mathrm{\infty }`$, respectively. Since most of the contributions to $`C_l`$ come from $`z<0.5`$, the finite $`k_{\mathrm{min}}`$ decreases $`C_l`$ only at low $`l`$, while $`k_{\mathrm{max}}`$ does so over the entire $`l`$ range. We conclude that the limitations of the simulations preclude us from predicting the SZ power spectrum outside of the $`200\lesssim l\lesssim 2000`$ range. These limits can only be improved by using larger simulations. It is however worth noting that there could be more SZ power around $`l\simeq 10000`$. This might then be detectable by future interferometric CMB measurements that have angular resolutions around 1 arcminute, intermediate in scale between the satellite missions and the planned millimeter experiments (ALMA, LMSA). ### E Prospects for CMB Experiments The impact of secondary anisotropies on the upcoming MAP mission was studied by Refregier et al. They showed that discrete sources, gravitational lensing and the SZ effect were the dominant extragalactic foregrounds for MAP. The dotted line on Figure 8 shows the expected noise for measuring the primary CMB power spectrum with the 94 GHz MAP channel, with a band average of $`\mathrm{\Delta }l=10`$. For all models considered, the SZ power spectrum is well below the noise. The rms $`y`$-parameter for the MAP 94 GHz beam (13 arcmin FWHM) is listed in table II for each model, from both the simulations and the PS formalism. The resulting rms RJ temperature fluctuations are of the order of a few $`\mu `$K, compared to a nominal antenna noise of about $`35\mu `$K. The SZ effect will therefore not be a major limitation for estimating cosmological parameters with MAP. For comparison, the residual spectrum from undetected point sources ($`S(94\mathrm{GHz})<2`$ Jy) expected using the model of Toffolatti et al. is shown in figure 8 for the 94 GHz channel. Point sources dominate over the SZ effect at $`l\gtrsim 300`$, but are comparable below that. Moreover, we have shown in figures 11 and 12 that about 50% of the SZ power spectrum at $`l\simeq 500`$ is produced at low redshifts ($`z\lesssim 0.1`$) and by clusters of galaxies ($`k_BT\gtrsim 5\mathrm{keV}`$). This confirms the results of Refregier et al., who predicted that most of the SZ effect could be removed by cross-correlating the CMB maps with existing X-ray cluster catalogs (e.g. XBACS, BCS). Because of its limited spectral coverage, the MAP mission will not permit a separation between the SZ effect and primordial anisotropies. Apart from a handful of clusters which will appear as point sources, it will therefore be difficult to detect the SZ fluctuations directly with MAP. On the other hand, the future Planck surveyor mission will cover both the positive and the negative side of the SZ frequency spectrum, and will thus allow a clear separation of the different foreground and background components. Aghanim et al. have established that, using such a separation, the SZ profiles of individual clusters can be measured down to $`y\simeq 3\times 10^{-7}`$. Moreover, Hobson et al. have estimated that the SZ power spectrum could be measured for $`50\lesssim l\lesssim 1000`$, with a precision per multipole of about 70%. Planck surveyor will therefore provide a precise measurement of the total SZ power spectrum.
This would provide a direct, independent measurement of $`\mathrm{\Omega }_b`$ and $`\sigma _8`$, and would thus help break the degeneracies in the cosmological parameters estimated from primordial anisotropies alone. Note that this measurement might also be feasible, albeit with less precision, with upcoming balloon experiments which also have broad spectral coverage. ### F The Missing Baryon Problem and Feedback The measured abundance of deuterium in low-metallicity systems, together with Big Bang Nucleosynthesis, predicts about twice as many baryons as are observed in galaxies, stars, clusters and neutral gas. These “missing baryons” are likely to be in the form of warm gas in groups and filaments. This component is indeed difficult to observe directly, since it is too cold to be seen in the X-ray band, and too hot to produce absorption lines in quasar spectra. The SZ effect on large scales could however provide a unique probe of this warm gas. One can indeed imagine subtracting the detected clusters from SZ maps, and measuring the power spectrum of the residual SZ fluctuations, which are mainly produced by groups and filaments. For instance, if all clusters with $`k_BT\gtrsim 3`$ keV were removed from the SZ map, the SZ power spectrum would drop by a factor of about 2 for $`l\lesssim 2000`$ (see figure 12). For the Planck Surveyor sensitivity quoted in §IV E, this yields a signal-to-noise ratio per multipole of about 1. The amplitude, if not the shape, of the residual SZ spectrum will thus be easily detected by Planck, yielding constraints on the temperature and density of the missing baryons. In our simulations, we have only included gravitational forces. However, feedback from star and AGN formation can also significantly heat the IGM and thus affect the observed SZ effect. Valageas & Silk (see also references therein) have studied the energy injection produced by photo-ionization, supernovae, and AGN. In their model, AGN are the most efficient, and can heat the IGM by as much as $`10^6`$ K by a redshift of a few. This results in a mean $`y`$-parameter of about $`10^{-6}`$, which is comparable to our value derived from gravitational instability alone. Preheating by feedback can thus increase the amplitude of the SZ effect by a factor of a few, and would be easily detected by Planck. Feedback can therefore be directly measured as an excess in the $`y`$-parameter or in the SZ power spectrum, over the prediction from gravitational instability alone. Energy injection has a large effect on the gas in groups and filaments, compared to that in clusters. We may thus also detect the effects of feedback through the relationship between the X-ray temperature of groups (or their galaxy velocity dispersion) and their SZ temperature. These measurements would then constrain the physics of energy injection. ## V Conclusions We have studied the SZ effect using MMH simulations. Our result for the mean comptonization parameter is consistent with earlier work using the Press–Schechter formalism and hydrodynamical simulations. It is found to be lower than the current observational limit by about one order of magnitude, for all considered cosmologies. The SZ power spectrum is found to be comparable to the primary CMB power spectrum at $`l\gtrsim 2000`$. For the SCDM model, our SZ power spectrum is approximately consistent with that derived by Persi et al., after rescaling for the differing values of $`\sigma _8`$.
We found that groups and filaments ($`k_BT\lesssim 5\mathrm{keV}`$) contribute about 50% of the SZ power spectrum at $`l=500`$. On these scales, about 50% of the SZ power spectrum is produced at $`z\lesssim 0.1`$, and can thus be removed using X-ray cluster catalogs. The SZ fluctuations are well below the instrumental noise expected for the upcoming MAP mission, and should therefore not be a limiting factor. The SZ power spectrum should however be accurately measured by the future Planck mission. Such a measurement will yield an independent measurement of $`\mathrm{\Omega }_b`$ and $`\sigma _8`$, and thus complement the measurements of primary anisotropies. We have compared our simulation results with predictions from the PS formalism. The results from the two methods agree approximately, but differ in the details. The discrepancy could be due to the finite resolution of the simulations, which limits the validity of our predictions to the $`200\lesssim l\lesssim 2000`$ range. We also find discrepancies with other numerical simulations. These issues can only be settled with larger simulations, and by a detailed comparison of different hydrodynamical codes. Such an effort is required for our theoretical predictions to match the precision with which the SZ power spectrum will be measured in the future. A promising approach to measuring the SZ effect on large scales is to cross-correlate CMB maps with galaxy catalogs. Most of the SZ fluctuations on MAP’s angular scales ($`l<1000`$) are produced at low redshifts, and are thus correlated with tracers of the local large-scale structure. Preliminary estimates indicate that such a cross-correlation between MAP and the existing APM galaxy catalog would yield a significant detection. Of course, even larger signals are expected for the Planck Surveyor mission. This would again provide a probe of the gas distributed not only in clusters, but also in the surrounding large-scale structure, and would therefore help solve the missing baryon problem. Moreover, energy injection from star and AGN formation can produce an SZ amplitude in excess of our predictions, which only involve gravitational forces. The measurement of SZ fluctuations or of a cross-correlation signal thus provides a measure of feedback, and can shed light on the process of galaxy formation. ###### Acknowledgements. We thank Uros Seljak and Juan Burwell for useful collaboration and exchanges. We also thank Renyue Cen, Greg Bryan, Jerry Ostriker, Arielle Phillips and Roman Juszkiewicz for useful discussions and comparisons. A.R. was supported in Princeton by the NASA MAP/MIDEX program and the NASA ATP grant NAG5-7154, and in Cambridge by an EEC TMR grant and a Wolfson College fellowship. D.N.S. is partially supported by the MAP/MIDEX program. E.K. acknowledges a fellowship from the Japan Society for the Promotion of Science. Computing support from the National Center for Supercomputing Applications is acknowledged. U.P. was supported in part by NSERC grant 72013704.
# The Sloan Digital Sky Survey and its Archive ## 1. Introduction Astronomy is undergoing a major paradigm shift. Data gathering technology is riding Moore’s law: data volumes are doubling quickly, and becoming more homogeneous. For the first time data acquisition and archival is being designed for online interactive analysis. Shortly, it will be much easier to download a detailed sky map or object class catalog than to wait several months to access a telescope that is often quite small. Several multi-wavelength projects are under way: SDSS, GALEX, 2MASS, GSC-2, POSS2, ROSAT, FIRST and DENIS, each surveying a large fraction of the sky. Together they will yield a Digital Sky of interoperating multi-terabyte databases. In time, more catalogs will be added and linked to the existing ones. Query engines will become more sophisticated, providing a uniform interface to all these datasets. In this era, astronomers will have to be just as familiar with mining data as with observing on telescopes. ## 2. The Sloan Digital Sky Survey The Sloan Digital Sky Survey (SDSS) will digitally map about half of the Northern sky in five spectral bands from ultraviolet to the near infrared. It is expected to detect over 200 million objects. Simultaneously, it will measure redshifts for the brightest million galaxies (see http://www.sdss.org/). The SDSS is the successor to the Palomar Observatory Sky Survey (POSS), which provided a standard reference data set to all of astronomy for the last 40 years. Subsequent archives will augment the SDSS and will interoperate with it. The SDSS project thus consists not only of building the hardware and reducing and calibrating the data, but also of the software to classify, index, and archive the data so that many scientists can use it. The SDSS will revolutionize astronomy, increasing the amount of information available to researchers by several orders of magnitude. The SDSS archive will be large and complex, including textual information, derived parameters, multi-band images, spectra, and temporal data. The catalog will allow astronomers to study the evolution of the universe in great detail. It is intended to serve as the standard reference for the next several decades. After only a month of operation, SDSS found the two most distant known quasars. With more data, other exotic objects will be easy to mine from the datasets. The potential scientific impact of the survey is stunning. To realize this potential, data must be turned into knowledge. This is not easy - the information content of the survey will be larger than the entire text contained in the Library of Congress. The SDSS is a collaboration between the University of Chicago, Princeton University, the Johns Hopkins University, the University of Washington, Fermi National Accelerator Laboratory, the Japanese Participation Group, the United States Naval Observatory, and the Institute for Advanced Study, Princeton, with additional funding provided by the Alfred P. Sloan Foundation, NSF and NASA. The SDSS project is a collaboration between scientists working in diverse areas of astronomy, physics and computer science. The survey will be carried out with a suite of tools developed and built especially for this project - telescopes, cameras, fiber spectrographic systems, and computer software. SDSS constructed a dedicated 2.5-meter telescope at Apache Point, New Mexico, USA. The telescope has a large, flat focal plane that provides a 3-degree field of view.
This design balances the areal coverage of the instrument against the detector’s pixel resolution. The survey has two main components: a photometric survey, and a spectroscopic survey. The photometric survey is produced by drift scan imaging of 10,000 square degrees centered on the North Galactic Cap, using five broadband filters that range from the ultraviolet to the infrared. The effective exposure is 55 sec. The photometric imaging uses an array of 30 2Kx2K imaging CCDs, 22 astrometric CCDs, and 2 focus CCDs. Its 0.4 arcsec pixel size provides a full sampling of the sky. The data rate from the 120 million pixels of this camera is 8 Megabytes per second. The camera can only be used under ideal conditions, but during the 5 years of the survey SDSS will collect more than 40 Terabytes of image data. The spectroscopic survey will target over a million objects chosen from the photometric survey in an attempt to produce a statistically uniform sample. The result of the spectroscopic survey will be a three-dimensional map of the galaxy distribution, in a volume several orders of magnitude larger than earlier maps. The primary targets will be galaxies, selected by a magnitude and surface brightness limit in the r band. This sample of 900,000 galaxies will be complemented with 100,000 very red galaxies, selected to include the brightest galaxies at the cores of clusters. An automated algorithm will select 100,000 quasar candidates for spectroscopic follow-up, creating the largest uniform quasar survey to date. Selected objects from other catalogs will also be targeted. The spectroscopic observations will be done in overlapping 3° circular tiles. The tile centers are determined by an optimization algorithm, which maximizes overlaps at areas of highest target density. The spectroscopic survey will utilize two multi-fiber medium resolution spectrographs, with a total of 640 optical fibers. Each fiber is 3 arcseconds in diameter, and provides spectral coverage from 3900 to 9200 Å. The system can measure 5000 galaxy spectra per night. The total number of galaxy spectra known to astronomers today is about 100,000 - only 20 nights of SDSS data! Whenever the Northern Galactic cap is not accessible, SDSS repeatedly images several areas in the Southern Galactic cap to study fainter objects and identify variable sources. SDSS has also been developing the software necessary to process and analyze the data. With construction of both hardware and software largely finished, the project has now entered a year of integration and testing. The survey itself will take about 5 years to complete. ### 2.1. The SDSS Archives The SDSS will create four main data sets: a photometric catalog, a spectroscopic catalog, images, and spectra. The photometric catalog is expected to contain about 500 distinct attributes for each of one hundred million galaxies, one hundred million stars, and one million quasars. These include positions, fluxes, radial profiles, their errors, and information related to the observations. Each object will have an associated image cutout (“atlas image”) for each of the five filters. The spectroscopic catalog will contain identified emission and absorption lines, and one-dimensional spectra for 1 million galaxies, 100,000 stars, and 100,000 quasars. Derived custom catalogs may be included, such as a photometric cluster catalog or a quasar absorption line catalog. In addition there will be a compressed 1TB Sky Map. These products add up to about 3TB.
The collaboration will release this data to the public after a period of thorough verification. This public archive is expected to remain the standard reference catalog for the next several decades. This long lifetime presents design and legacy problems. The design of the SDSS archival system must allow the archive to grow beyond the actual completion of the survey. As the reference astronomical data set, each subsequent astronomical survey will want to cross-identify its objects with the SDSS catalog, requiring that the archive, or at least a part of it, be dynamic, with a carefully defined schema and metadata. Observational data from the telescopes is shipped on tapes to Fermi National Accelerator Laboratory (FNAL), where it is reduced and stored in the Operational Archive (OA), protected by a firewall, accessible only to personnel working on the data processing. Data in the operational archive is reduced and calibrated via method functions. Within two weeks the calibrated data is published to the Science Archive (SA). The Science Archive contains calibrated data organized for efficient science use. The SA provides a custom query engine that uses multidimensional indices. Given the amount of data, most queries will be I/O limited; the SA design is therefore based on a scalable architecture, ready to use large numbers of cheap commodity servers running in parallel. Science Archive data is replicated to Local Archives (LA) within another two weeks. The data gets into the public archives (MPA, PA) after approximately 1-2 years of science verification and recalibration. A WWW server will provide public access. The Science Archive and public archives employ a three-tiered architecture: the user interface, an intelligent query engine, and the data warehouse. This distributed approach provides maximum flexibility, while maintaining portability, by isolating hardware-specific features. Both the Science Archive and the Operational Archive are built on top of Objectivity/DB, a commercial OODBMS. Querying these archives requires a parallel and distributed query system, of which we have implemented a prototype. Each query received from the User Interface is parsed into a Query Execution Tree (QET) that is then executed by the Query Engine. Each node of the QET is either a query or a set-operation node, and returns a bag of object-pointers upon execution. The multi-threaded Query Engine executes in parallel at all the nodes at a given level of the QET. Results from child nodes are passed up the tree as soon as they are generated. In the case of aggregation, sort, intersection and difference nodes, at least one of the child nodes must be complete before results can be sent further up the tree. In addition to speeding up the query processing, this data push strategy ensures that even in the case of a query that takes a very long time to complete, the user starts seeing results almost immediately, or at least as soon as the first selected object percolates up the tree (Thakar et al. 1999). ### 2.2. Typical Queries The astronomy community will be the primary SDSS user. They will need specialized services. At the simplest level these include the on-demand creation of (color) finding charts, with position information. These searches can be fairly complex queries on position, colors, and other parts of the attribute space. As astronomers learn more about the detailed properties of the stars and galaxies in the SDSS archive, we expect they will define more sophisticated classifications.
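To make the Query Execution Tree of §2.1 concrete, here is a toy evaluator of ours: leaf nodes select object identifiers with a predicate, and internal nodes combine the resulting bags with set operations. The production engine streams partial results upward from multiple threads on Objectivity/DB; none of that machinery is modeled here, and all names and the tiny catalog are invented.

```python
class QueryNode:
    """Leaf: selects the ids of objects satisfying a predicate."""
    def __init__(self, predicate):
        self.predicate = predicate
    def run(self, catalog):
        return {oid for oid, obj in catalog.items() if self.predicate(obj)}

class SetNode:
    """Internal node: combines the bags returned by its children."""
    OPS = {"union": set.union, "intersect": set.intersection,
           "minus": set.difference}
    def __init__(self, op, left, right):
        self.op, self.left, self.right = self.OPS[op], left, right
    def run(self, catalog):
        # the real engine evaluates children in parallel threads and
        # streams partial results upward; here we evaluate sequentially
        return self.op(self.left.run(catalog), self.right.run(catalog))

# toy catalog: id -> (r magnitude, g-r colour)
catalog = {i: obj for i, obj in enumerate([(21.0, 0.3), (22.5, 1.2),
                                           (19.8, 0.9), (23.1, 0.2)])}
bright = QueryNode(lambda o: o[0] < 22.0)
blue = QueryNode(lambda o: o[1] < 0.5)
print(SetNode("intersect", bright, blue).run(catalog))   # {0}
```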
Interesting objects with unique properties will be found in one area of the sky. Astronomers will want to generalize these properties, and search the entire sky for similar objects. A common query will be to distinguish between rare and typical objects. Other types of queries will be non-local, like “find all the quasars brighter than r=22, which have a faint blue galaxy within 5 arcsec on the sky”. Yet another type of a query is a search for gravitational lenses: “find objects within 10 arcsec of each other which have identical colors, but may have a different brightness”. This latter query is a typical high-dimensional query, since it involves a metric distance not only on the sky, but also in color space. Special operators are required to perform these queries efficiently. Preprocessing, like creating regions of attraction, is not practical, given the number of objects, and the fact that the sets of objects these operators work on are dynamically created by other predicates. ## 3. Data Organization Given the huge data sets, the traditional Fortran access to flat files is not a feasible approach for SDSS. Rather, non-procedural query languages, query optimizers, database execution engines, and database indexing schemes must replace traditional “flat” file processing. This “database approach” is mandated both by computer efficiency and by the desire to give astronomers better analysis tools. The data organization must support concurrent complex queries. Moreover, the organization must efficiently use processing, memory, and bandwidth. It must also support the addition of new data to the SDSS as a background task that does not disrupt online access. It would be wonderful if we could use an off-the-shelf SQL, OR, or OO database system for our tasks, but we are not optimistic that this will work. As explained presently, we believe that SDSS requires novel spatial indices and novel operators. It also requires a dataflow architecture that executes queries concurrently using multiple disks and processors. As we understand it, current systems provide few of these features. But it is quite possible that, by the end of the survey, some commercial system will provide these features. We hope to work with DBMS vendors towards this end. ### 3.1. Spatial Data Structures The large-scale astronomy data sets consist primarily of vectors of numeric data fields, maps, time-series sensor logs and images: the vast majority of the data is essentially geometric. The success of the archive depends on capturing the spatial nature of this large-scale scientific data. The SDSS data has high dimensionality – each item has thousands of attributes. Categorizing objects involves defining complex domains (classifications) in this N-dimensional space, corresponding to decision surfaces. The SDSS teams are investigating algorithms and data structures to quickly compute spatial relations, such as finding nearest neighbors, or other objects satisfying a given criterion within a metric distance. The answer set cardinality can be so large that intermediate files simply cannot be created. The only way to analyze such data sets is to pipeline the answers directly into analysis tools. This data flow analysis has worked well for parallel relational database systems (DeWitt 92). We expect these data river ideas will link the archive directly to the analysis and visualization tools.
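The gravitational-lens query quoted above is a good example of such a spatial operator. A brute-force sketch (ours) first finds close pairs on the sky with a k-d tree and then filters on the colour-space metric; the library choice (scipy), the parameter values and the flat-sky small-field treatment are all illustrative assumptions, not the SDSS implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def lens_candidates(ra_deg, dec_deg, colors, max_sep_arcsec=10.0,
                    max_dcolor=0.05):
    """Pairs closer than max_sep_arcsec on the sky whose colour vectors
    agree within max_dcolor (flat-sky approximation for a small field)."""
    sep_deg = max_sep_arcsec / 3600.0
    xy = np.column_stack([ra_deg * np.cos(np.radians(dec_deg)), dec_deg])
    pairs = cKDTree(xy).query_pairs(sep_deg)   # candidate pairs on the sky
    return [(i, j) for i, j in pairs
            if np.linalg.norm(colors[i] - colors[j]) < max_dcolor]

# demo: 2000 objects with 4 colour indices in a 1x1 degree field
rng = np.random.default_rng(2)
n = 2000
ra, dec = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
col = rng.normal(size=(n, 4))
print(len(lens_candidates(ra, dec, col)))
```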
The typical search of these multi-Terabyte archives evaluates a complex predicate in k-dimensional space, with the added difficulty that constraints are not necessarily parallel to the axes. This means that the traditional indexing techniques, well established with relational databases, will not work, since one cannot build an index on all conceivable linear combinations of attributes. On the other hand, one can use the fact that the data are geometric and every object is a point in this k-dimensional space (Samet 1990a,b). Data can be quantized into containers. Each container has objects of similar properties, e.g. colors, from the same region of the sky. If the containers are stored as clusters, data locality will be very high: if an object satisfies a query, it is likely that some of the object’s “friends” will as well. There are non-trivial aspects of how to subdivide when the data has large density contrasts (Csabai et al. 1997). These containers represent a coarse-grained density map of the data. They define the base of an index tree that tells us whether containers are fully inside, outside or bisected by our query. Only the bisected container category is searched, as the other two are wholly accepted or rejected. A prediction of the output data volume and search time can be computed from the intersection. The SDSS data is too large to fit on one disk or even one server. The base-data objects will be spatially partitioned among the servers. As new servers are added, the data will be repartitioned. Some of the high-traffic data will be replicated among servers. It is up to the database software to manage this partitioning and replication. In the near term, designers will specify the partitioning and index schemes, but we hope that in the long term, the DBMS will automate this design task as access patterns change. There is great interest in a common reference frame for the sky that can be universally used by different astronomical databases. The need for such a system is indicated by the widespread use of the ancient constellations - the first spatial index of the celestial sphere. The existence of such an index, in a more computer-friendly form, will ease cross-referencing among catalogs. A common scheme that provides a balanced partitioning for all catalogs may seem impossible, but there is an elegant solution, a ‘shoe that fits all’, which subdivides the sky in a hierarchical fashion. Our approach is described in detail by Kunszt et al. (1999). ### 3.2. Broader Metadata Issues There are several issues related to metadata for astronomy datasets. One is the database schema within the data warehouse, another is the description of the data extracted from the archive, and the third is a standard representation to allow queries and data to be interchanged among several archives. The SDSS project uses Platinum Technology’s Paradigm Plus, a commercially available UML tool, to develop and maintain the database schema. The schema is defined in a high-level format, and a script generator creates the .h files for the C++ classes, and the .ddl files for Objectivity/DB. This approach enables us to easily create new data model representations in the future (SQL, IDL, XML, etc.). About 20 years ago, astronomers agreed on exchanging most of their data in a self-descriptive data format. This format, FITS, standing for the Flexible Image Transport System (Wells 81), was primarily designed to handle images. Over the years, various extensions supported more complex data types, both in ASCII and binary form.
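As an illustration of such a binary extension, a small FITS binary table can be written and read back in a few lines. The sketch below uses the modern astropy.io.fits package purely for demonstration - it is not the toolset discussed in this paper - and the column names are invented.

```python
import numpy as np
from astropy.io import fits

# write a small binary-table FITS file (column names are invented)
ra = np.array([185.0, 185.1], dtype=np.float64)
dec = np.array([15.2, 15.3], dtype=np.float64)
rmag = np.array([21.3, 19.8], dtype=np.float32)
cols = fits.ColDefs([
    fits.Column(name="RA", format="D", unit="deg", array=ra),
    fits.Column(name="DEC", format="D", unit="deg", array=dec),
    fits.Column(name="R_MAG", format="E", array=rmag),
])
fits.BinTableHDU.from_columns(cols).writeto("objects.fits", overwrite=True)

# read it back; the header is the self-descriptive part of the format
with fits.open("objects.fits") as hdul:
    table = hdul[1].data                 # HDU 0 is the primary header unit
    print(table["R_MAG"], hdul[1].header["TTYPE3"])
```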
FITS format is well supported by all astronomical software systems. The SDSS pipelines exchange most of their data as binary FITS files. Unfortunately, FITS files do not support streaming data, although data could be blocked into separate FITS packets. We are currently implementing both an ASCII and a binary FITS output stream, using such a blocked approach. We expect large archives to communicate with one another via a standard, easily parseable interchange format. We plan to define the interchange formats in XML, XSL, and XQL. The Operational Archive exports calibrated data to the Science Archive as soon as possible. Datasets are sent in coherent chunks. A chunk consists of several segments of the sky that were scanned in a single night, with all the fields and all objects detected in the fields. Loading data into the Science Archive could take a long time if the data were not clustered properly. Efficiency is important, since about 20 GB will be arriving daily. The incoming data are organized by how the observations were taken. In the Science Archive they will be inserted into the hierarchy of containers as defined by the multi-dimensional spatial index, according to their colors and positions. Data loading might bottleneck on creating the clustering units - databases and containers - that hold the objects. Our load design minimizes disk accesses, touching each clustering unit at most once during a load. The chunk data is first examined to construct an index. This determines where each object will be located, and creates a list of databases and containers that are needed. Then data is inserted into the containers in a single pass over the data objects. ### 3.3. Scalable Server Architectures Accessing large data sets is primarily I/O limited. Even with the best indexing schemes, some queries must scan the entire data set. Acceptable I/O performance can be achieved with expensive, ultra-fast storage systems, or with many commodity servers operating in parallel. We are exploring the use of commodity servers and storage to allow inexpensive interactive data analysis. We are still exploring what constitutes a balanced system design: the appropriate ratio between processor, memory, network bandwidth, and disk bandwidth. Using the multi-dimensional indexing techniques described in the previous section, many queries will be able to select exactly the data they need after doing an index lookup. Such simple queries will just pipeline the data and images off of disk as quickly as the network can transport it to the astronomer’s system for analysis or visualization. When the queries are more complex, it will be necessary to scan the entire dataset or to repartition it for categorization, clustering, and cross comparisons. Experience will teach us the ratio between processor power, memory size, IO bandwidth, and system-area-network bandwidth. Our simplest approach is to run a scan machine that continuously scans the dataset, evaluating user-supplied predicates on each object (Acharya 95). Consider building an array of 20 nodes, each with 4 Intel Xeon 450 MHz processors, 256MB of RAM, and 12x18GB disks (4TB of storage in all). Experiments show that one such node is capable of reading data at 150 MBps while using almost no processor time (Hartman 99). If the data is spread among the 20 nodes, they can scan the data at an aggregate rate of 3 GBps. This half-million dollar system could scan the complete (year 2004) SDSS catalog every 2 minutes. By then these machines should be 10x faster.
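A toy version of such a scan machine is sketched below (ours): threads stand in for the 20 nodes, each scanning its local partition with a user-supplied predicate, and the qualifying rows are merged as they arrive. The toy catalog and its column layout are invented.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def scan_partition(partition, predicate):
    """Evaluate a user-supplied predicate against one node's share of
    the data and return the qualifying rows."""
    return partition[predicate(partition)]

def scan_machine(partitions, predicate, workers=20):
    # each 'node' scans its local data; results are merged as they arrive
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(scan_partition, partitions,
                           [predicate] * len(partitions))
        return np.concatenate(list(results))

# demo: 20 partitions of a toy catalog with columns (r magnitude, redshift)
rng = np.random.default_rng(1)
parts = [rng.random((10_000, 2)) * [5, 0.2] + [18, 0] for _ in range(20)]
bright = scan_machine(parts, lambda t: t[:, 0] < 19.0)
print(len(bright))
```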
Such a system should give near-interactive response to most complex queries that involve single-object predicates. Many queries involve comparing, classifying or clustering objects. We expect to provide a second class of machine, called a hash machine, that performs comparisons within data clusters. Hash machines redistribute a subset of the data among all the nodes of the cluster. Then each node processes each hash bucket at that node. This parallel-clustering approach has worked extremely well for relational databases in joining and aggregating data. We believe it will work equally well for scientific spatial data. The hash phase scans the entire dataset, selects a subset of the objects based on some predicate, and “hashes” each object to the appropriate buckets - a single object may go to several buckets (to allow objects near the edges of a region to go to all the neighboring regions as well). In a second phase all the objects in a bucket are compared to one another. The output is a stream of objects with corresponding attributes. These operations are analogous to relational hash-join, hence the name (DeWitt 92). Like hash joins, the hash machine can be highly parallel, processing the entire database in a few minutes. The application of the hash machine to tasks like finding gravitational lenses or clustering by spectral type or by redshift-distance vector should be obvious: each bucket represents a neighborhood in these high-dimensional spaces. We envision a non-procedural programming interface to define the bucket partition and analysis functions. The hash machine is a simple form of the more general data-flow programming model, in which data flows from storage through various processing steps. Each step is amenable to partition parallelism. The underlying system manages the creation and processing of the flows. This programming style has evolved both in the database community (DeWitt 92, Graefe 93, Barclay 95) and in the scientific programming community with PVM and MPI (Gropp 98), and has matured into a general programming model, as typified by a river system (Arpaci-Dusseau 99). We propose to let astronomers construct dataflow graphs where the nodes consume one or more data streams, filter and combine the data, and then produce one or more result streams. The outputs of these rivers go either back to the database or to visualization programs. These dataflow graphs will be executed on a river machine similar to the scan and hash machines. The simplest river systems are sorting networks. Current systems have demonstrated that they can sort at about 100 MBps using commodity hardware, and at 5 GBps using thousands of nodes and disks (Sort benchmark). With time, each astronomy department will be able to afford local copies of these machines and the databases, but for now, they will be a network service. The scan machine will be interactively scheduled: when an astronomer has a query, it is added to the query mix immediately. All data that qualifies is sent back to the astronomer, and the query completes within the scan time. The hash and river machines will be batch scheduled. ### 3.4. Desktop Data Analysis Most astronomers will not be interested in all of the hundreds of attributes of each object. Indeed, most will be interested in only 10% of the entire dataset - but different communities and individuals will be interested in a different 10%.
We plan to isolate the 10 most popular attributes (3 Cartesian positions on the sky, 5 colors, 1 size, 1 classification parameter) into small ‘tag’ objects, which point to the rest of the attributes. Then we will build a spatial index on these attributes. These will occupy much less space, and can thus be searched more than 10 times faster, if no other attributes are involved in the query. Large disks are available today, and within a few years 100GB disks will be common. This means that all astronomers can have a vertical partition of 10% of the SDSS on their desktops. This will be convenient for targeted searches and for developing algorithms. But full searches will still be much faster on the server machines, because the servers will have much more IO bandwidth and processing power. Vertical partitioning can also be applied by the scan, hash, and river machines to reduce data movement and to allow faster scans of popular subsets. We also plan to offer a 1% sample (about 10 GB) of the whole database that can be used to quickly test and debug programs. Combining partitioning and sampling converts a 2 TB data set into 2 gigabytes, which can fit comfortably on desktop workstations for program development. It is obvious that, with multi-terabyte databases, not even the intermediate data sets can be stored locally. The only way this data can be analyzed is for the analysis software to communicate directly with the Data Warehouse, implemented on a server cluster, as discussed above. Such an Analysis Engine can then process the bulk of the raw data extracted from the archive, and the user needs only to receive a drastically reduced result set. Given all these efforts to make the server parallel and distributed, it would be inefficient to ignore IO or network bottlenecks at the analysis level. Thus it is obvious that we need to think of the analysis engine as part of the distributed, scalable computing environment, closely integrated with the database server itself. Even the division of functions between the server and the analysis engine will become fuzzy - the analysis is just part of the river flow described earlier. The pool of available CPUs will be allocated to each task. The analysis software itself must be able to run in parallel. Since it is expected that scientists with relatively little experience in distributed and parallel programming will work in this environment, we need to create a carefully crafted application development environment to aid the construction of customized analysis engines. Data extraction also needs to be considered carefully. If our server is distributed and the analysis is on a distributed system, the extracted data should go directly from one of the servers to one of the many Analysis Engines. Such an approach will also distribute the network load better. ## 4. Summary Astronomy is about to be revolutionized by having a detailed atlas of the sky available to all astronomers. With the SDSS archive it will be easy for astronomers to pose complex queries to the catalog and get answers within seconds, and within minutes if the query requires a complete search of the database. The SDSS datasets pose interesting challenges for automatically placing and managing the data, for executing complex queries against a high-dimensional data space, and for supporting complex user-defined distance and classification metrics.
The SDSS project is “riding Moore’s law”: the data set we started to collect today - at a linear rate - will be much more manageable tomorrow, with the exponential growth of CPU speed and storage capacity. The scalable archive design presented here will be able to adapt to such changes. #### Acknowledgments. We would like to acknowledge support from the Astrophysical Research Consortium, the HSF, NASA and Intel’s Technology for Education 2000 program, in particular George Bourianoff (Intel). ## References Arpaci-Dusseau, R., Arpaci-Dusseau, A., Culler, D. E., Hellerstein, J. M., Patterson, D. A. 1998, “The Architectural Costs of Streaming I/O: A Comparison of Workstations, Clusters, and SMPs”, Proc. Fourth International Symposium on High-Performance Computer Architecture (HPCA). Arpaci-Dusseau, R. H., Anderson, E., Treuhaft, N., Culler, D. E., Hellerstein, J. M., Patterson, D. A., Yelick, K. 1999, “Cluster I/O with River: Making the Fast Case Common”, to appear in IOPADS ’99. Acharya, S., Alonso, R., Franklin, M. J., Zdonik, S. B. 1995, “Broadcast Disks: Data Management for Asymmetric Communications Environments”, SIGMOD Conference 1995: 199-210. Barclay, T., Barnes, R., Gray, J., Sundaresan, P. 1994, “Loading Databases Using Dataflow Parallelism”, SIGMOD Record 23(4): 72-83. DeWitt, D. J., Gray, J. 1992, “Parallel Database Systems: The Future of High Performance Database Systems”, CACM 35(6): 85-98. Csabai, I., Szalay, A. S. and Brunner, R. 1997, “Multidimensional Index for Highly Clustered Data with Large Density Contrasts”, in Statistical Challenges in Astronomy II, eds. E. Feigelson and A. Babu (Wiley), 447. Gropp, W., Huss-Lederman, S. 1998, “MPI the Complete Reference: The MPI-2 Extensions”, Vol. 2, MIT Press, ISBN 0262571234. Graefe, G. 1993, “Query Evaluation Techniques for Large Databases”, ACM Computing Surveys 25(2): 73-170. Hartman, A. 1999, private communication. Kunszt, P. Z., Szalay, A. S., Csabai, I. and Thakar, A. 1999, “The Indexing of the SDSS Science Archive”, this volume, \[O1-03\]. Samet, H. 1990a, Applications of Spatial Data Structures: Computer Graphics, Image Processing, and GIS, Addison-Wesley, Reading, MA, ISBN 0-201-50300-0. Samet, H. 1990b, The Design and Analysis of Spatial Data Structures, Addison-Wesley, Reading, MA, ISBN 0-201-50255-0. The Sort Benchmark: http://research.microsoft.com/barc/SortBenchmark/ Szalay, A. S. and Brunner, R. J. 1997, “Exploring Terabyte Archives in Astronomy”, in New Horizons from Multi-Wavelength Sky Surveys, IAU Symposium 179, eds. B. McLean and D. Golombek, p. 455. Thakar, A., Kunszt, P. Z. and Szalay, A. S. 1999, “Multi-threaded Query Agent and Engine for a Very Large Astronomical Database”, this volume, \[P1-05\]. Wells, D. C., Greisen, E. W., and Harten, R. H. 1981, “FITS: A Flexible Image Transport System”, Astron. and Astrophys. Suppl., 44, 363-370.
# A comparison of estimators for the two–point correlation function ## 1 Introduction The two–point correlation function of galaxies has become one of the most popular statistical tools in astronomy and cosmology. If the current paradigm, where the initial Gaussian fluctuations grew by gravitational instability, is correct, the two–point correlation function of galaxies is directly related to the initial mass power spectrum. While the role of the two–point correlation function is central, estimators for extracting it from a set of spatial points are confusingly abundant in the literature. We have collected the nine most important forms used in the areas of mathematics and astronomy. The difference between them lies mainly in their respective methods of edge correction. The multitude of choices might appear confusing to the practicing observational astronomer, partly because of the lack of a clear criterion to distinguish between the estimators. For instance, one estimator could have a smaller variance under certain circumstances, but it could be biased. Therefore, before doing any numerical experiments, we agreed upon the method of ranking the different estimators. The cumulative probability distribution of the measured value lying within a certain tolerance of the “true” value goes beyond the concepts of bias and variance, and even takes into account any non–Gaussian behavior of the statistics. This is the mathematical formulation of the simple idea that an estimator which is more likely to give values closer to the truth is better. After the above criterion was agreed upon, the plan to elucidate the confusion was clear: collect the different forms of estimators (next section), perform a numerical experiment in several subsamples of a large simulation (§3), and determine the cumulative probability of measuring values close to the true one, thereby ranking the different estimators (§4). ## 2 The estimators Astrophysical studies favor estimators based on counting pairs, while most of the mathematical research is focused on geometric edge correction (second subsection). The following subsections collect nine of the most successful and widespread recipes from both genres. ### 2.1 Pairwise estimators Following Szapudi and Szalay (1998) (hereafter SS), let us define the pair–counts with a function $`\mathrm{\Phi }`$ symmetric in its arguments $$P_{DR}(r)=\sum _{𝐱\in D}\sum _{𝐲\in R}\mathrm{\Phi }_r(𝐱,𝐲).$$ (1) The summation runs over coordinates of points in the data set $`D`$ and points in the set $`R`$ of randomly distributed points, respectively. This letter considers the two–point correlation function, for which the appropriate definition is $`\mathrm{\Phi }_r(x,y)=[r\le d(x,y)<r+\mathrm{\Delta }]`$, where $`d(x,y)`$ is the separation of the two points, and \[condition\] equals $`1`$ when the condition holds, $`0`$ otherwise. $`P_{DD}`$ and $`P_{RR}`$ are defined analogously, with $`x`$ and $`y`$ taken entirely from the data and random samples, under the restriction that $`x\ne y`$. Let us introduce the normalized counts $`DD(r)=P_{DD}(r)/(N(N-1))`$, $`DR(r)=P_{DR}(r)/(NN_R)`$, $`RR(r)=P_{RR}(r)/(N_R(N_R-1))`$, with $`N`$ and $`N_R`$ being the total number of data and random points in the survey volume.
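A minimal numerical sketch of these normalized counts (ours; the k-d tree is our implementation choice, and the counting is over ordered pairs, matching the $`N(N-1)`$ and $`NN_R`$ normalizations above):

```python
import numpy as np
from scipy.spatial import cKDTree

def normalized_counts(data, rand, edges):
    """DD(r), DR(r), RR(r) in the separation bins given by `edges`.
    count_neighbors returns cumulative ordered pair counts with d <= r;
    the self-pairs at zero separation never enter the differenced bins."""
    n, nr = len(data), len(rand)
    td, tr = cKDTree(data), cKDTree(rand)
    dd = np.diff(td.count_neighbors(td, edges)) / (n * (n - 1.0))
    dr = np.diff(td.count_neighbors(tr, edges)) / (n * float(nr))
    rr = np.diff(tr.count_neighbors(tr, edges)) / (nr * (nr - 1.0))
    return dd, dr, rr

# toy example: 1000 'data' and 5000 random points in a unit box
rng = np.random.default_rng(3)
data = rng.random((1000, 3))
rand = rng.random((5000, 3))
edges = np.linspace(0.01, 0.2, 11)
DD, DR, RR = normalized_counts(data, rand, edges)
```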
With the above preparation, the pairwise estimators used in what follows are the natural estimator $`\widehat{\xi }_\mathrm{N}`$, and the estimators due to Davis and Peebles (1983) $`\widehat{\xi }_{\mathrm{DP}}`$, Hewett (1982) $`\widehat{\xi }_{\mathrm{He}}`$, Hamilton (1993) $`\widehat{\xi }_{\mathrm{Ha}}`$, and Landy and Szalay (1993) (hereafter LS) $`\widehat{\xi }_{\mathrm{LS}}`$: $$\widehat{\xi }_\mathrm{N}=\frac{DD}{RR}-1,\widehat{\xi }_{\mathrm{DP}}=\frac{DD}{DR}-1,\widehat{\xi }_{\mathrm{He}}=\frac{DD-DR}{RR},\widehat{\xi }_{\mathrm{Ha}}=\frac{DD\,RR}{DR^2}-1,\widehat{\xi }_{\mathrm{LS}}=\frac{DD-2DR+RR}{RR}.$$ (2) Note that Hewett’s estimator could be rendered equivalent to the LS estimator if the original asymmetric definition of $`DR`$ were symmetrized; the version we use is the one consistent with the notation laid out above. In the case of an angular survey, an optimal weighting scheme can be adapted to any of the above estimators (e.g., Colombi et al. 1998). This is inversely proportional to the errors expected at a particular pair separation, essentially equivalent to the Feldman et al. (1994) weight. ### 2.2 Geometric Estimators Alternative estimates of the two–point correlation function from $`N`$ data points $`𝐱\in D`$ inside a sample window $`𝒲`$ may be written in the form $$\widehat{\xi }(r)+1=\frac{|𝒲|}{N(N-1)}\sum _{𝐱\in D}\sum _{𝐲\in D}\frac{\mathrm{\Phi }_r(𝐱,𝐲)}{4\pi r^2\mathrm{\Delta }}\omega (𝐱,𝐲).$$ (3) $`|𝒲|`$ is the volume of the sample window and the sum is restricted to pairs of different points $`𝐱\ne 𝐲`$. For a suitably chosen weight function $`\omega (𝐱,𝐲)`$ these edge corrected estimators are approximately unbiased. Such weights are the Ripley (1976)–Rivolo (1986) weight $`\omega _\mathrm{R}`$, the Ohser and Stoyan (1981)–Fiksel (1988) weight $`\omega _\mathrm{F}`$, and the Ohser (1983) weight $`\omega _\mathrm{O}`$. $$\omega _\mathrm{R}(𝐱,𝐲)=\frac{4\pi r^2}{\text{area}(\partial B_r(𝐱)\cap 𝒲)},\omega _\mathrm{F}(𝐱,𝐲)=\frac{|𝒲|}{\gamma _𝒲(𝐱-𝐲)},\omega _\mathrm{O}(𝐱,𝐲)=\frac{|𝒲|}{\overline{\gamma _𝒲}(|𝐱-𝐲|)}$$ (4) where $`\text{area}(\partial B_r(𝐱)\cap 𝒲)`$ is the fraction of the surface area of the sphere $`\partial B_r(𝐱)`$ with radius $`r=|𝐱-𝐲|`$ around $`𝐱`$ inside $`𝒲`$, the set–covariance $`\gamma _𝒲(𝐳)=|𝒲\cap 𝒲_𝐳|`$ is the volume of the intersection of the original sample $`𝒲`$ with the set $`𝒲_𝐳`$ shifted by $`𝐳`$, and $`\overline{\gamma _𝒲}(r)`$ is the isotropized set–covariance. We will consider the estimators $`\widehat{\xi }_\mathrm{R}`$, $`\widehat{\xi }_\mathrm{F}`$, and $`\widehat{\xi }_\mathrm{O}`$ based on these weights. A detailed description of these estimators may be found in Stoyan et al. (1995) and Kerscher (1999). The Minus or reduced sample estimator, employing no weighting scheme at all, may be obtained by looking only at the $`N^{(r)}`$ points $`D^{(r)}`$ which are further than $`r`$ from the boundaries of $`𝒲`$: $$\widehat{\xi }_\mathrm{M}(r)+1=\frac{|𝒲|}{N}\frac{1}{N^{(r)}}\sum _{𝐱\in D^{(r)}}\sum _{𝐲\in D}\frac{\mathrm{\Phi }_r(𝐱,𝐲)}{4\pi r^2\mathrm{\Delta }}$$ (5) Estimators of this type are used by Sylos Labini et al. (1998). It can be shown that the natural estimator $`\widehat{\xi }_\mathrm{N}`$ is the Monte–Carlo version of the Ohser estimator $`\widehat{\xi }_\mathrm{O}`$. Similarly, the geometric counterparts of the LS and Hamilton estimators may be constructed (Kerscher, 1999). This allowed us to cross-check our programs. Focusing on improved number density estimation, Stoyan and Stoyan (2000) also arrived at the geometrical version of the Hamilton estimator.
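Given normalized counts (for instance from the sketch after Eq. (1)), the five pairwise estimators of Eq. (2) are one-liners; the function and key names below are ours:

```python
def pairwise_estimators(dd, dr, rr):
    """The five pairwise estimators of Eq. (2), applied bin by bin
    to arrays of normalized counts DD, DR, RR."""
    return {
        "natural":       dd / rr - 1.0,
        "davis-peebles": dd / dr - 1.0,
        "hewett":        (dd - dr) / rr,
        "hamilton":      dd * rr / dr**2 - 1.0,
        "landy-szalay":  (dd - 2.0 * dr + rr) / rr,
    }

# e.g. xi_ls = pairwise_estimators(DD, DR, RR)["landy-szalay"]
```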
## 3 The comparison To compare these estimators for typical cosmological situations we use the cluster catalogue generated from the $`\mathrm{\Lambda }`$CDM Hubble–volume simulation (Colberg et al., 1998). In order to investigate the effects of shape, clustering, and the amount of random data used, we have always varied one parameter at a time, starting from a fiducial sample. Rectangular subsamples were extracted, exhausting the full simulation cube: the fiducial cubic subsamples C, slices S, pencil beams P, and cubic samples with cutout holes H, all with approximately the same volume and with approximately 430 clusters each. Cutout holes, around bright stars etc., arise naturally in all realistic surveys. The pattern of holes used for this study was directly mapped from a $`19^{}\times 19^{}`$ patch of the EDSGC survey to one of the faces of the simulation sub–cube. The holes were then continued across the subsample, parallel to the sides, corresponding to a distant-observer approximation. The physical size of the holes roughly corresponded to a redshift survey at a depth of about $`300h^{-1}\mathrm{Mpc}`$. All these point sets are “fully sampled” and may be considered volume limited samples. In addition, Poisson samples N, i.e. without clustering, were generated. All calculations employed $`N_R=100`$k random points for the pairwise estimators, unless otherwise noted. This was sufficient for all indicators to converge. The calculations were repeated for sample C with $`N_R=1`$k and $`N_R=10`$k random points, denoted R1 and R10, respectively, to investigate the speed of convergence of the different estimators with respect to the random point density. The parameters for the samples are summarized in Table 1. The two–point correlation function $`\xi _{\mathrm{per}}`$ extracted from all clusters inside the $`3h^{-1}\mathrm{Gpc}`$ cube provided our reference or “true value”. Since the simulation was carried out with periodic boundary conditions, the cluster distribution is also periodic, and therefore the torus boundary correction is exact (Ripley, 1988). The nine estimators defined above for the two–point correlation function were determined from each of the $`n_S`$ subsamples. For a given radial bin $`r`$ we computed the deviation $`|\widehat{\xi }_{}(r)-\xi _{\mathrm{per}}(r)|`$ of the estimated two–point correlation function $`\widehat{\xi }_{}(r)`$ from the reference $`\xi _{\mathrm{per}}(r)`$. The empirical distribution of these deviations provides an objective basis for comparing the utility of the estimators. The large number of samples enabled the numerical estimation of the probability $`P(|\widehat{\xi }_{}-\xi _{\mathrm{per}}|<d)`$ that the deviation $`|\widehat{\xi }_{}-\xi _{\mathrm{per}}|`$ is smaller than a tolerance $`d`$. The larger this probability, the more likely the estimator is to fall within the predetermined tolerance in a single sample. In general it could happen that the rank of two estimators reverses as the tolerance varies, but, as will be shown in the next section, this is quite atypical. This procedure is more general than considering only bias and variance, which yields a full description only if the above distribution is the integral of a Gaussian. Note that a small bias is negligible for practical purposes if the variance dominates the distribution of the deviations. It is worthwhile to note that the Gaussian assumption yields a surprisingly good description of the deviations $`|\widehat{\xi }_{}(r)-\xi _{\mathrm{per}}(r)|`$. 
For estimators of the closely related product density, asymptotic Gaussianity of the deviations was proven by Heinrich (1988). Fig. 1 shows the distribution $`P(|\widehat{\xi }_{}(r)-\xi _{\mathrm{per}}(r)|<d)`$ for the samples described in Table 1. Three typical scales are displayed to illustrate the general behavior. The principal conclusions to be drawn are the following: Small scales ($`r=4.4h^{-1}\mathrm{Mpc}`$): the effect of any boundary correction scheme is negligible, and, as expected, all the estimators exhibit nearly identical behavior. The same is true for the samples S, P, N, and H (not shown). However, some of the estimators are more sensitive to the density of random points, especially the Hamilton estimator, followed by the Davis–Peebles estimator. They show stronger deviations for the R1 and R10 samples due to the poor sampling of the $`DR`$ term (see also Pons–Bordería et al. 1999). This effect persists on large scales as well. Intermediate scales ($`r=31h^{-1}\mathrm{Mpc}`$): similar to the small scales. The Minus estimator shows a stronger deviation, becoming even more pronounced for the S, P, and H samples, since the effective remaining volume decreases. Large scales ($`r=115h^{-1}\mathrm{Mpc}`$): edge corrections become important, and the estimators exhibit clear differences in their distributions of the deviations for the samples C and N. For a given probability, the Minus estimator shows the largest deviations, followed by the Natural, Fiksel and Ohser estimators. Significantly smaller deviations are obtained for the Davis–Peebles and Hewett estimators, and yet smaller ones for the Rivolo estimator. Finally, the Hamilton and LS estimators display the smallest deviations and thus the best edge correction. The latter two distributions nearly overlap. The above conclusions are robust and only weakly influenced by the presence of cutout holes, as seen from the H sample. The geometry of the subsamples has a non–trivial effect on the distributions. While the deviations are increased in the S and P samples, the differences between the estimators are reduced in the S samples, becoming negligible in the P samples. In both cases, the Minus estimator is no longer usable, since the counts $`N^{(r)}`$ equal zero, whereas the Fiksel estimator is biased for such geometries on large scales (this is implicitly shown in the work of Ohser (1983)). Following Szapudi and Szalay (1999), the variance of the LS estimator may be calculated for a Poisson process: $$\sigma _{\mathrm{LS}}^2(r)=\frac{2}{V_\mathrm{\Delta }(r)\overline{\rho }^2},$$ (6) with $`V_\mathrm{\Delta }(r)=\int _𝒲\mathrm{d}^3x\int _𝒲\mathrm{d}^3y\mathrm{\Phi }_r(𝐱,𝐲)`$. The $`\sigma _{\mathrm{LS}}`$ calculated for the considered samples is also shown in Fig. 1 to illustrate how much discreteness effects contribute to the distribution of the deviations. For our choice of sample parameters, the discreteness contribution, i.e. the deviation of a corresponding Poisson sample, is always within a factor of a few of other important contributions to the variance, such as finite volume and edge effects. In general, the ratio of discreteness effects to the full variance depends in a complicated non–linear fashion on the number of clusters in the sample, the shape of the survey, integrals over the two–point correlation function and its square, and the three– and four–point correlation functions (see Szapudi et al. 2000 for the exact calculation). 
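Eq. (6) can be evaluated without geometry-specific integrals by reusing the random catalogue: $`RR(r)`$ is a Monte–Carlo estimate of $`V_\mathrm{\Delta }(r)/|𝒲|^2`$, so with $`N=\overline{\rho }|𝒲|`$ data points expected in the window one has $`\sigma _{\mathrm{LS}}^2(r)\approx 2/(N^2RR(r))`$. A sketch reusing the imports and counting conventions of the code above (the function name is ours):

```python
def sigma_ls_poisson(rand, edges, n_data):
    """Poisson-limit error bar of the LS estimator, Eq. (6).

    RR(r) estimates V_Delta(r)/|W|^2, so with rho = n_data/|W| the
    variance 2/(V_Delta rho^2) reduces to 2/(n_data^2 RR(r))."""
    NR = len(rand)
    RR = np.histogram(pdist(rand), bins=edges)[0] * 2.0 / (NR * (NR - 1))
    return np.sqrt(2.0 / (n_data**2 * RR))
```

The accuracy of this shortcut is limited by the size of the random catalogue, in the same way as that of the estimators themselves.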
Varying the side length of the cubic samples from $`300h^{-1}\mathrm{Mpc}`$, $`375h^{-1}\mathrm{Mpc}`$, $`600h^{-1}\mathrm{Mpc}`$ to $`1h^{-1}\mathrm{Gpc}`$, we explored the influence of the size of the sample on the discreteness effects. In all cases the rank order of the estimators remained the same. ## 4 Summary and Conclusion For a sample with 222,052 clusters extracted from the Virgo Hubble volume simulation a reference two–point correlation function was determined. In over 500 subsamples several estimators for the two–point correlation function were employed, and the results compared with the reference value. On small scales all the estimators are comparable. On large scales the LS and the Hamilton estimators significantly outperform the rest, showing the smallest deviations for a given cumulative probability. While the two estimators yield almost identical results for an infinite number of random points, the Hamilton estimator is considerably more sensitive to the number of random points employed than the LS version. From a practical point of view the LS estimator is thus preferable. The rest of the estimators can be divided into three categories: the first runners-up are the estimators from Rivolo, Davis–Peebles and Hewett, but already with a significantly increased variance. Even larger deviations are present for the Natural, Fiksel, and Ohser recipes. The Minus estimator has the largest deviation. Although it was shown that for special point processes both the LS and Hamilton estimators are biased (Kerscher, 1999), the present numerical experiment demonstrates that this is irrelevant for realistic galaxy and cluster point processes, as the bias has an insignificant effect on the distribution of deviations. Pons–Bordería et al. (1999) did not recommend one estimator for all cases. In contrast, through our extensive numerical treatment the LS estimator emerges as a clear recommendation. The above considerations apply to volume limited samples. When the correlation function is estimated directly from a flux limited sample with an appropriate minimum variance pair weighting (Feldman et al. 1994), the Hamilton estimator has the advantage of being independent of the normalization of the selection function. The differences between the estimators become smaller for the slice and insignificant for the pencil beam samples. At first sight this is counterintuitive: the difference between the estimators is largely due to edge corrections, and less compact surveys obviously have more edges. However, on the large scales considered, S and P become essentially two– and one–dimensional, and the weight $`\omega (𝐱,𝐲)\approx \overline{\omega }(r)`$ is equal for most of the pairs separated by $`r`$. Since the geometric estimators are approximately unbiased, they employ essentially the same weight $`\overline{\omega }(r)`$ on large scales, and consequently show the same distribution of deviations. This argument also applies to the pair estimators, since they may be written in terms of these weights (Kerscher, 1999). All the above numerical investigations are intimately related to the problem of calculating the expected errors on estimators for correlation functions. To include all contributions, such as edge, discreteness, and finite volume effects, the method of Colombi et al. (1994), Szapudi and Colombi (1996), Colombi et al. (1998), and Szapudi et al. (1999) has to be extended to the two–point correlation function. Such a calculation was performed by Szapudi et al. (2000) (see also Stoyan et al. 1993, Bernstein 1994 and Hamilton 1993 for approximations) and should be used for ab initio error calculations. It is worth mentioning that one of the most widely used methods in the literature, the “bootstrap”, is based on a misunderstanding of the concept. For bootstrap in spatial statistics, a whole sample takes the role of one point in the original bootstrap procedure. This means that replicas of the original survey would be needed to fulfill the promise of the bootstrap method. Choosing points (i.e. individual galaxies, clusters, etc.) randomly from one sample, as usually done, yields a variance with no obvious relation to the variance sought (see also Snethlage 1999). The role of the random samples is to represent the shape of the survey in a Monte–Carlo fashion. A practical alternative is to put a fine grid on the survey and calculate the quantities $`DD,DW`$, and $`WW`$, where $`D`$ now represents bin–counts and $`W`$ the indicator function taking the value one for pixels inside the survey, and zero otherwise. According to SS, all the above estimators have an analogous “grid” version (see also Hamilton 1993), which can be obtained formally by the substitution $`R\to W`$. In practice grid estimators can be more efficient than pair counts, and, except for a slight perturbation of the pair separation bins, they both yield almost identical results on scales larger than a few pixel sizes. The usual way of estimating the power spectrum, using a folding with the Fourier transform of the sample geometry, is equivalent to the grid version of the LS estimator. Hence, such a power spectrum analysis extracts the same amount of information from the data as an analysis with the two–point correlation function using the grid version of the LS estimator. The results are only displayed with respect to a different basis. Similarly, Karhunen–Loève (KL) modes form another set of basis functions (Vogeley and Szalay, 1996). The uncorrelated power spectrum (Hamilton, 2000) and the KL modes are the methods of choice for cosmological parameter estimation. The KL modes allow for a well–defined cut–off, and therefore reduce the computational needs in a maximum likelihood analysis. However, geometrical features of the galaxy and cluster distribution show up directly in the two–point correlation function and may be interpreted easily. Each bin of the two–point correlation function contains direct information on pairs separated by a certain distance, an intuitively simple concept that is more suitable for studying and controlling (expected or unexpected) systematics (geometry, luminosity, galaxy properties, biases) than any other representation. In this sense the correlation function is a tool complementary to the power spectrum. ## Acknowledgments We are grateful to the Virgo Supercomputing Consortium http://star-www.dur.ac.uk/~frazerp/virgo/virgo.html, who made the Hubble volume simulation data available for our project. The simulation was performed on the T3E at the Computing Centre of the Max-Planck Society in Garching. We would like to thank Simon White and the referee Andrew Hamilton for useful suggestions and discussions. MK would like to thank Claus Beisbart and Dietrich Stoyan for interesting and helpful discussions. IS was supported by the PPARC rolling grant for Extragalactic Astronomy and Cosmology at Durham while there. MK acknowledges support from the Sonderforschungsbereich für Astroteilchenphysik SFB 375 der DFG. AS has been supported by NSF AST9802980 and NASA LTSA NAG653503.
# Shape Avoiding Permutations ## 1 Introduction ### 1.1 Outline The Robinson-Schensted(-Knuth) correspondence is a bijection between permutations in $`S_n`$ and pairs of standard Young tableaux of the same shape (and size $`n`$). This common shape is called the shape of the permutation. A permutation $`\pi =(\pi _1,\mathrm{},\pi _n)`$ in $`S_n`$ avoids a permutation $`\sigma =(\sigma _1,\mathrm{},\sigma _m)`$ in $`S_m`$ if there is no subsequence $`(\pi _{i_1},\mathrm{},\pi _{i_m})`$ of $`\pi `$ such that $`\pi _{i_j}>\pi _{i_k}`$ iff $`\sigma _j>\sigma _k`$ ($`j,k`$). $`\pi `$ avoids a shape $`\mu `$ if it avoids all the permutations of shape $`\mu `$. This paper deals with the relation between the property “$`\pi `$ does not avoid a given shape $`\mu `$” and the property “$`\lambda =\mathrm{𝑠ℎ𝑎𝑝𝑒}(\pi )`$ contains $`\mu `$ as a sub-shape”. It turns out that, in general, neither of these properties implies or contradicts the other; but in certain important cases, such implications do hold. These cases include, e.g., rectangular shapes and hook shapes (either for $`\lambda `$ or for $`\mu `$). These positive results are then applied to get asymptotic bounds related to the Stanley-Wilf conjecture on pattern-avoiding permutations (see Corollaries 4 and 5 in Subsection 1.2, and Subsection 7.2). Use is made of the Berele-Regev asymptotic evaluation of the number of standard Young tableaux contained in a “thick hook”. The rest of the paper is organized as follows. The main results are listed in Subsection 1.2. Standard notations and necessary background are given in Section 2. In Section 3 we motivate our investigation by a “false conjecture”. In Section 4 we show that this “false conjecture” is correct for rectangular shapes. Using this knowledge we consider the general case in Section 5. Families of shapes, for which an exact evaluation may be obtained, are presented in Section 6. Section 7 concludes the paper with final remarks and open problems. ### 1.2 Main Results For rectangular shapes the following holds. Theorem 1. If $`\pi `$ is a permutation of rectangular shape $`(m^k)`$, and $`\mu `$ is an arbitrary shape, then: $`\mu `$ is the shape of some subsequence of $`\pi `$ if and only if $`\mu (m^k)`$. See Theorem 4.1 below. Using Theorem 1 we prove the following general result. Theorem 2. For any permutation $`\pi `$ in $`S_n`$ and any partition $`\mu =(\mu _1,\mathrm{},\mu _k)`$ of $`m`$ : If $`(\mu _1^k)\mathrm{𝑠ℎ𝑎𝑝𝑒}(\pi )`$ then $`\mu `$ is the shape of some subsequence of $`\pi `$. See Theorem 5.1 below. For hook shapes a stronger result is proved. Theorem 3. Let $`m`$ and $`k`$ be positive integers and let $`n4km`$. Then for any hook $`\mu =(m,1^{k1})`$ and any permutation $`\pi `$ in $`S_n`$ : $`\pi `$ has a subsequence of shape $`\mu `$ if and only if $`\mu \mathrm{𝑠ℎ𝑎𝑝𝑒}(\pi )`$. See Theorem 6.1 below. Denote by $`\mathrm{𝑎𝑣𝑜𝑖𝑑}_n^\mu `$ the size of the set of all $`\mu `$-avoiding permutations in $`S_n`$. Combining Theorem 2 with the Berele-Regev asymptotic estimates \[BR\] the following bounds are proved. Corollary 4. For any fixed partition $`\mu =(\mu _1,\mathrm{},\mu _k)`$, $$\mathrm{max}\{\text{ht}(\mu ),\text{wd}(\mu )\}\underset{n\mathrm{}}{lim\; inf}(\mathrm{𝑎𝑣𝑜𝑖𝑑}_n^\mu )^{1/2n}$$ and $$\underset{n\mathrm{}}{lim\; sup}(\mathrm{𝑎𝑣𝑜𝑖𝑑}_n^\mu )^{1/2n}\text{ht}(\mu )+\text{wd}(\mu ),$$ where the height of $`\mu `$ $`\text{ht}(\mu ):=k1`$, and the width of $`\mu `$ $`\text{wd}(\mu ):=\mu _11`$. See Corollary 5.2 below. 
It should be noted that this result is related to the Stanley-Wilf conjecture (see Subsection 7.2). For hook shapes we have a sharper estimate. Corollary 5. For any pair of positive integers $`m`$ and $`k`$ $$\underset{n\mathrm{}}{lim}(\mathrm{𝑎𝑣𝑜𝑖𝑑}_n^{(m,1^{k1})})^{1/2n}=\mathrm{max}\{m1,k1\}.$$ See Corollary 6.5 below. ## 2 Preliminaries Two classical partial orders on the set of partitions are considered in this paper. Let $`\lambda =(\lambda _1,\mathrm{})`$ and $`\mu =(\mu _1,\mathrm{})`$ be two partitions (not necessarily of the same number). We say that $`\mu `$ is contained in $`\lambda `$, denoted $`\mu \lambda `$, if $$\mu _i\lambda _i(i).$$ We say that $`\mu `$ is dominated by $`\lambda `$, denoted $`\mu \lambda `$, if $$\underset{j=1}{\overset{i}{}}\mu _j\underset{j=1}{\overset{i}{}}\lambda _j(i).$$ Clearly, $`\mu \lambda \mu \lambda `$. The partition conjugate to $`\lambda `$ is $`\lambda ^{}=(\lambda _1^{},\mathrm{})`$, where $`\lambda _i^{}=\mathrm{max}\{j|\lambda _ji\}`$; i.e., the conjugate partition is obtained by interchanging rows and columns in $`\lambda `$. Lemma 2.1. \[Md Ch. I (1.11)\] If $`\lambda `$ and $`\mu `$ are partitions of the same number $`n`$ then $$\mu \lambda \lambda ^{}\mu ^{}.$$ Corollary 2.2. If $`\lambda `$ and $`\mu `$ are partitions of the same number $`n`$, satisfying $$\mu \lambda \text{ and }\mu ^{}\lambda ^{}$$ then $`\lambda =\mu `$. Define the shape of a sequence of integers to be the common shape of the two tableaux obtained via the Robinson-Schensted-Knuth correspondence. See \[Sa §3.3, St §7.11\]. The following theorem is well known. Schensted’s Theorem. \[Sc\] For any partition $`\lambda `$ and any permutation $`\pi `$ of shape $`\lambda `$, the length of the longest increasing subsequence of $`\pi `$ is equal to $`\lambda _1`$, and the length of the longest decreasing subsequence of $`\pi `$ is equal to $`\lambda _1^{}`$. Schensted’s Theorem was generalized by Greene. Greene’s Theorem. \[Gr\] Let $`\pi `$ be a permutation of shape $`\lambda =(\lambda _1,\mathrm{},\lambda _t)`$. Then, for all $`i`$: $$\underset{j=1}{\overset{i}{}}\lambda _j=\text{ maximal size of a union of }i\text{ increasing subsequences in }\pi ,$$ and $$\underset{j=1}{\overset{i}{}}\lambda _j^{}=\text{ maximal size of a union of }i\text{ decreasing subsequences in }\pi .$$ ## 3 Motivation Let $`\mu `$ be a partition of $`m`$, and let $`C^\mu `$ be the set of all permutations in $`S_m`$ of shape $`\mu `$. A permutation in $`S_n`$ is a $`\mu `$-avoiding permutation if it avoids all the permutations in $`C^\mu `$; denote the set of these permutations by $`\mathrm{𝐴𝑣𝑜𝑖𝑑}_n^\mu `$. The only permutation in $`S_m`$ having shape $`(m)`$ is the identity permutation, i.e., a monotone increasing sequence. Schensted’s Theorem, stated in the previous section, is thus equivalent to the following statement. Fact 3.1. For any pair of positive integers $`mn`$ $$\mathrm{𝐴𝑣𝑜𝑖𝑑}_n^{(m)}=\underset{\{\lambda n|(m)\lambda \}}{}C^\lambda ,$$ and similarly for $`(1^m)`$ instead of $`(m)`$. In other words, the set of permutations in $`S_n`$ avoiding $`(m)`$ is the union of all Knuth cells of shapes not containing $`(m)`$. One may be tempted to think that this is a general phenomenon. “False Conjecture” (First Version). For any pair of positive integers $`mn`$ and any partition $`\mu `$ of $`m`$ $$\mathrm{𝐴𝑣𝑜𝑖𝑑}_n^\mu =\underset{\{\lambda n|\mu \lambda \}}{}C^\lambda .$$ Equivalently, “False Conjecture” (Second Version). 
For any permutation $`\pi S_n`$ of shape $`\lambda `$, the following two assertions hold: * For any partition $`\mu \lambda `$ there exists a subsequence of $`\pi `$ of shape $`\mu `$. * The shape of any subsequence of $`\pi `$ is contained in $`\lambda `$. Clearly, (1) is equivalent to the inclusion $$\mathrm{𝐴𝑣𝑜𝑖𝑑}_n^\mu \underset{\{\lambda n|\mu \lambda \}}{}C^\lambda ,$$ while (2) is equivalent to the reverse inclusion $$\underset{\{\lambda n|\mu \lambda \}}{}C^\lambda \mathrm{𝐴𝑣𝑜𝑖𝑑}_n^\mu .$$ Note that Greene’s Theorem implies the weaker result that the shape of any subsequence of $`\pi `$ is dominated by $`\lambda `$. Unfortunately, the following examples show that both parts of the “False Conjecture” are false in general. Example 3.2. The permutation $`\pi =(65127843)`$ has shape $`\lambda =(4,2,1^2)`$, but has no subsequence of shape $`\mu =(4,1^3)`$. Example 3.3. The permutation $`\pi =(25314)`$ has shape $`\lambda =(3,1^2)`$, but has a subsequence of shape $`\mu =(2^2)`$. Both examples can be extended to shapes $`\lambda `$ of arbitrarily large size. A central discovery in this paper is that the above “False Conjecture” is nevertheless correct in some important cases. This will be used to deduce asymptotic estimates. ## 4 Rectangular Shapes A rectangular shape is a shape of the form $`(m^k)`$, where $`m`$ and $`k`$ are positive integers. In this section we show that the “False Conjecture” is true whenever $`\lambda `$ is a rectangular shape. Theorem 4.1. If $`\pi `$ is a permutation of rectangular shape $`(m^k)`$, and $`\mu `$ is an arbitrary shape, then: $`\mu `$ is the shape of some subsequence of $`\pi `$ if and only if $`\mu (m^k)`$. In order to prove Theorem 4.1 we need the following consequence of Greene’s Theorem. Lemma 4.2. Let $`\pi `$ be a permutation of shape $`\lambda `$. * If $`\pi `$ contains a disjoint union of $`k`$ increasing subsequences of lengths $`\mathrm{}_1\mathrm{}_2\mathrm{}\mathrm{}_k`$ then $`(\mathrm{}_1,\mathrm{},\mathrm{}_k)\lambda .`$ * If $`\pi `$ contains a disjoint union of $`k`$ decreasing subsequences of lengths $`\mathrm{}_1\mathrm{}_2\mathrm{}\mathrm{}_k`$ then $`(\mathrm{}_1,\mathrm{},\mathrm{}_k)\lambda ^{}.`$ Proof. By Greene’s Theorem, for any $`1ik`$ $$\underset{j=1}{\overset{i}{}}\mathrm{}_j\text{maximal size of a union of }i\text{ increasing subsequences of }\pi =\underset{j=1}{\overset{i}{}}\lambda _j.$$ The proof of the second part is similar. $`\mathrm{}`$ The following lemma characterizes permutations having rectangular shape. Lemma 4.3. * A permutation $`\pi `$ has shape $`(m^k)`$ if and only if the following two conditions are simultaneously satisfied: + $`\pi `$ is a disjoint union of $`k`$ increasing subsequences, each of length $`m`$. + $`\pi `$ is a disjoint union of $`m`$ decreasing subsequences, each of length $`k`$. * If the above conditions hold, then each of the $`k`$ increasing subsequences intersects each of the $`m`$ decreasing subsequences in exactly one element. Proof. (a) Assume that $`\pi `$ has shape $`\lambda `$ and satisfies conditions (a1) and (a2) of the Lemma. By (a1) and Lemma 4.2(a), $`(m^k)\lambda `$. By (a2) and Lemma 4.2(b), $`(k^m)\lambda ^{}`$. Also $`|\lambda |=|(m^k)|=km`$, so by Corollary 2.2, $`\lambda =(m^k)`$. In the other direction: By Greene’s Theorem, if $`\pi `$ has shape $`(m^k)`$ then it is the disjoint union of $`k`$ increasing subsequences $`\alpha _1,\mathrm{},\alpha _k`$ of total size $`km`$. 
By Schensted’s Theorem, each increasing subsequence of $`\pi `$ has size at most $`m`$, and therefore $`|\alpha _1|=\mathrm{}=|\alpha _k|=m`$. Similarly, $`\pi `$ is a disjoint union of $`m`$ decreasing subsequences $`\beta _1,\mathrm{},\beta _m`$ satisfying $`|\beta _1|=\mathrm{}=|\beta _m|=k`$. (b) Each increasing subsequence $`\alpha _i`$ intersects each decreasing subsequence $`\beta _j`$ in at most one element, and since these $`km`$ intersections cover all elements of $`\pi `$ they are all nonempty. $`\mathrm{}`$ Proof of Theorem 4.1. Let $`\pi `$ be a sequence of shape $`\lambda =(m^k)`$. If $`\mu `$ is the shape of some subsequence of $`\pi `$ then this subsequence contains an increasing subsequence of length $`\mu _1`$. Therefore $`\mu _1\lambda _1=m`$. Similarly $`\mu _1^{}\lambda _1^{}=k`$, so that $`\mu (m^k)`$. In the other direction: By Lemma 4.3, $`\pi `$ is a disjoint union of $`k`$ increasing subsequences, of length $`m`$ each, say $`\alpha _1,\mathrm{},\alpha _k`$ (enumerated arbitrarily). Similarly, $`\pi `$ is a disjoint union of $`m`$ decreasing subsequences, say $`\beta _1,\mathrm{},\beta _m`$ (of length $`k`$ each). Also, each $`\alpha _i`$ intersects each $`\beta _j`$ in a unique element; denote it by $`P(i,j)`$. Now let $`\mu (m^k)`$, and define $`\sigma `$ to be the subsequence of $`\pi `$ consisting of all elements $`P(i,j)`$ with $`j\mu _i`$. We claim that $`\sigma `$ has shape $`\mu `$. Indeed, $`\sigma `$ intersects $`\alpha _i`$ in $`\mu _i`$ elements, and therefore (by Lemma 4.2(a)) $`\mu \mathrm{𝑠ℎ𝑎𝑝𝑒}(\sigma )`$. Similarly, $`\sigma `$ intersects $`\beta _j`$ in $`\mu _j^{}`$ elements, and therefore (by Lemma 4.2(b)) $`\mu ^{}\mathrm{𝑠ℎ𝑎𝑝𝑒}(\sigma )^{}`$. Since $`|\mathrm{𝑠ℎ𝑎𝑝𝑒}(\sigma )|=|\mu |`$ by definition, Corollary 2.2 implies that $`\mathrm{𝑠ℎ𝑎𝑝𝑒}(\sigma )=\mu `$ and the proof is complete. $`\mathrm{}`$ The following theorem is complementary. Theorem 4.4. If $`\pi `$ is a sequence of shape $`\lambda `$ and $`(m^k)\lambda `$, then there exists a subsequence of $`\pi `$ of shape $`(m^k)`$. In other words: For any positive integers $`m`$ and $`k`$ $$\mathrm{𝐴𝑣𝑜𝑖𝑑}_n^{(m^k)}\underset{\{\lambda n|(m^k)\lambda \}}{}C^\lambda .$$ Note that Example 3.3 shows that the converse of Theorem 4.4 is false. Proof. Let $`\pi `$ be a sequence of shape $`\lambda `$. By Greene’s Theorem, $`\pi `$ contains a disjoint union of $`k`$ increasing subsequences of total size $`_{j=1}^k\lambda _j`$. Denote this union by $`\overline{\pi }`$, and let $`\mu :=\mathrm{𝑠ℎ𝑎𝑝𝑒}(\overline{\pi })`$. Obviously, there are at most $`k`$ parts in $`\mu `$ (i.e., $`\mu =(\mu _1,\mathrm{},\mu _k)`$ with $`\mu _k0`$) and $`_{j=1}^k\mu _j=_{j=1}^k\lambda _j`$. By Greene’s Theorem, $$\underset{j=1}{\overset{k1}{}}\mu _j=\text{maximal size of a union of }k1\text{ increasing subsequences in }\overline{\pi }$$ $$\text{maximal size of a union of }k1\text{ increasing subsequences in }\pi =\underset{j=1}{\overset{k1}{}}\lambda _j.$$ Hence, $`\mu _k\lambda _k`$. By assumption $`(m^k)\lambda `$, so that $`m\lambda _k`$. We conclude that there are exactly $`k`$ parts in $`\mu `$, and $`\mu _1\mathrm{}\mu _km`$. In other words, $`\mu _1^{}=k`$ and $`(k^m)\mu ^{}`$. Now, by the second part of Greene’s Theorem, $`\overline{\pi }`$ contains a disjoint union of $`m`$ decreasing subsequences of total size $`km`$. Denote this union by $`\widehat{\pi }`$, and denote its shape by $`\nu `$. 
$`\widehat{\pi }`$ is a subsequence of $`\overline{\pi }`$, hence, $$\nu _1^{}=\text{length of maximal decreasing subsequence in }\widehat{\pi }$$ $$\text{length of maximal decreasing subsequence in }\overline{\pi }=\mu _1^{}=k.$$ On the other hand, $$|\nu |=\nu _1^{}+\mathrm{}+\nu _m^{}=km.$$ This shows that the shape of the subsequence $`\widehat{\pi }`$ is $`\nu =(m^k)`$. $`\mathrm{}`$ ## 5 General Shapes Theorem 5.1. For any partition $`\mu =(\mu _1,\mathrm{},\mu _k)`$ of $`m`$ and any positive integer $`n`$, $$\mathrm{𝐴𝑣𝑜𝑖𝑑}_n^\mu \underset{\{\lambda n|(\mu _1^k)\lambda \}}{}C^\lambda .$$ $`(5.1)`$ Proof. Let $`\lambda `$ be a shape such that $`(\mu _1^k)\lambda `$. By Theorem 4.4, any permutation of shape $`\lambda `$ contains a subsequence of shape $`(\mu _1^k)`$. By Theorem 4.1, this subsequence contains a subsequence of shape $`\mu `$. $`\mathrm{}`$ Let $`\mathrm{𝑎𝑣𝑜𝑖𝑑}_n^\mu `$ be the size of the set $`\mathrm{𝐴𝑣𝑜𝑖𝑑}_n^\mu `$. Theorem 5.1 implies the following asymptotic estimates. Corollary 5.2. For any fixed partition $`\mu =(\mu _1,\mathrm{},\mu _k)`$, $$\underset{n\mathrm{}}{lim\; sup}(\mathrm{𝑎𝑣𝑜𝑖𝑑}_n^\mu )^{1/2n}\text{ht}(\mu )+\text{wd}(\mu )$$ $`(5.2)`$ and $$\mathrm{max}\{\text{ht}(\mu ),\text{wd}(\mu )\}\underset{n\mathrm{}}{lim\; inf}(\mathrm{𝑎𝑣𝑜𝑖𝑑}_n^\mu )^{1/2n},$$ $`(5.3)`$ where the height of $`\mu `$ $`\text{ht}(\mu ):=\mu _1^{}1`$, and the width of $`\mu `$ $`\text{wd}(\mu ):=\mu _11`$. Proof. Let $`\lambda `$ be a partition of $`n`$, and let $`f^\lambda `$ be the number of standard Young tableaux of shape $`\lambda `$. By the Robinson-Schensted correspondence $$(f^\lambda )^2=\mathrm{\#}\{\pi S_n|\mathrm{𝑠ℎ𝑎𝑝𝑒}(\pi )=\lambda \}.$$ Combining this fact with Theorem 5.1 we obtain $$\mathrm{𝑎𝑣𝑜𝑖𝑑}_n^\mu \mathrm{\#}\{\pi S_n|(\mu _1^k)\mathrm{𝑠ℎ𝑎𝑝𝑒}(\pi )\}=\underset{\lambda n(\mu _1^k)\lambda }{}(f^\lambda )^2.$$ The asymptotics of the sum on the right hand side was studied by Berele and Regev \[BR, Section 7\]. By \[BR, Theorem 7.21\], for fixed $`\mu _1`$ and $`k`$ $$\underset{\lambda n(\mu _1^k)\lambda }{}(f^\lambda )^2c_1(\mu _1,k)n^{c_2(\mu _1,k)}(\mu _1+k2)^{2n},$$ $`(5.4)`$ when $`n`$ tends to infinity. Here $`c_1(\mu _1,k)`$ and $`c_2(\mu _1,k)`$ are independent of $`n`$. This proves the upper bound (5.2). For the lower bound, note that by Schensted’s Theorem any permutation avoiding $`(\mu _1)`$ also avoids $`\mu `$. Similarly, any permutation avoiding $`(1^k)`$ also avoids $`\mu `$. Thus $$\mathrm{𝐴𝑣𝑜𝑖𝑑}_n^{(\mu _1)}\mathrm{𝐴𝑣𝑜𝑖𝑑}_n^{(1^{\mu _1^{}})}\mathrm{𝐴𝑣𝑜𝑖𝑑}_n^\mu .$$ This implies that (for $`n`$ large enough; e.g., $`n>(\mu _11)(\mu _1^{}1)`$ ) $$\mathrm{𝑎𝑣𝑜𝑖𝑑}_n^{(\mu _1)}+\mathrm{𝑎𝑣𝑜𝑖𝑑}_n^{(1^{\mu _1^{}})}\mathrm{𝑎𝑣𝑜𝑖𝑑}_n^\mu .$$ Combining this inequality with (5.4) proves the lower bound (5.3). $`\mathrm{}`$ Note: For an evaluation of $`\mathrm{𝑎𝑣𝑜𝑖𝑑}_n^{(m)}`$ for $`m4`$ see \[St Exer. 7.16(e)\]. An asymptotic evaluation of $`\mathrm{𝑎𝑣𝑜𝑖𝑑}_n^{(m)}`$ for fixed $`m>4`$ was first done in \[Re\]. ## 6 Other Special Cases ### 6.1 Hooks In this subsection we show that for hook avoiding permutations and $`n`$ large enough the “False Conjecture” is correct. Theorem 6.1. For any hook $`\mu =(m,1^{k1})`$ and $`n>(2m4)(2k4)`$ $$\mathrm{𝐴𝑣𝑜𝑖𝑑}_n^{(m,1^{k1})}=\underset{\{\lambda n|(m,1^{k1})\lambda \}}{}C^\lambda .$$ Note: If either $`m3`$ or $`k3`$ then equality holds for all values of $`n`$. The following analogue of Lemma 4.3 characterizes permutations of hook shape. Lemma 6.2. 
A permutation $`\pi `$ has shape $`(m,1^{k1})`$ if and only if $`\pi `$ is a union of an increasing subsequence of length $`m`$ and a decreasing subsequence of length $`k`$, intersecting in a unique element. Proof. By Schensted’s Theorem, a permutation $`\pi `$ of shape $`(m,1^{k1})`$ contains an increasing subsequence $`\alpha `$ with $`|\alpha |=m`$ and a decreasing subsequence $`\beta `$ with $`|\beta |=k`$, where $`|\alpha \beta ||\pi |=m+k1`$. Since necessarily $`|\alpha \beta |1`$, it follows that $`|\alpha \beta |=1`$. The converse follows similarly from Schensted’s Theorem. $`\mathrm{}`$ Lemma 6.3. Let $`m`$ and $`k`$ be positive integers. * If either $`m3`$ or $`k3`$ then every permutation whose shape contains the hook $`(m,1^{k1})`$ has a subsequence of shape $`(m,1^{k1})`$. * If $`m4`$ and $`k4`$ then every permutation whose shape contains the hook $`(2m3,1^{k1})`$ or the hook $`(m,1^{2k4})`$ has a subsequence of shape $`(m,1^{k1})`$. * For any $`m4`$ and $`k4`$ there exists a permutation whose shape contains $`(2m4,1^{2k5})`$, but it has no subsequence of shape $`(m,1^{k1})`$. Note: The results in (a) and (b) above are best possible, as far as the assumed size of a hook contained in the shape is concerned. For (a) this is clear, and for (b) this is the content of (c). Proof. We shall prove (b); the proof of (a) is similar. (b) Let $`\pi `$ be a permutation whose shape contains the hook $`(2m3,1^{k1})`$, with $`m,k4`$. Then $`\pi `$ has an increasing subsequence $`\alpha `$ of length $`2m3`$ and a decreasing subsequence $`\beta `$ of length $`k`$. If $`\alpha `$ and $`\beta `$ intersect (necessarily in a unique element), then by truncating $`\alpha `$ to $`m`$ elements we get by Lemma 6.2 a subsequence of shape $`(m,1^{k1})`$. Otherwise (i.e., assuming that $`\alpha `$ and $`\beta `$ do not intersect) we will show that the union of $`\alpha `$ and $`\beta `$ contains the required subsequence. Let $`\alpha =(\alpha _1,\mathrm{},\alpha _{2m3})`$ and $`\beta =(\beta _1,\mathrm{},\beta _k)`$, so that $`\alpha _1<\mathrm{}<\alpha _{2m3}`$ and $`\beta _1>\mathrm{}>\beta _k`$. Let $`\mathrm{𝑖𝑛𝑑}(\alpha _i)`$ denote the index of $`\alpha _i`$ in the union of $`\alpha `$ and $`\beta `$ (as a subsequence of $`\pi `$); similarly for $`\mathrm{𝑖𝑛𝑑}(\beta _j)`$. Concerning the element $`\alpha _{m1}`$ there are three possibilities: * There is an index $`1jk1`$ such that $$\mathrm{𝑖𝑛𝑑}(\beta _j)<\mathrm{𝑖𝑛𝑑}(\alpha _{m1})<\mathrm{𝑖𝑛𝑑}(\beta _{j+1}).$$ * $`\mathrm{𝑖𝑛𝑑}(\alpha _{m1})<\mathrm{𝑖𝑛𝑑}(\beta _1).`$ * $`\mathrm{𝑖𝑛𝑑}(\alpha _{m1})>\mathrm{𝑖𝑛𝑑}(\beta _k).`$ We shall deal with case (1); the other cases are similar. Since $`\beta _j>\beta _{j+1}`$, there are now three subcases: * $`\beta _j>\alpha _{m1}>\beta _{j+1}.`$ * $`\alpha _{m1}<\beta _{j+1}.`$ * $`\alpha _{m1}>\beta _j.`$ In case (1a), $`\alpha _{m1}`$ may be added to the decreasing subsequence $`\beta `$, to obtain two intersecting monotone subsequences of lengths $`2m3`$ and $`k+1`$. By truncating these subsequences we will get an increasing subsequence of length $`m`$ intersecting a decreasing subsequence of length $`k`$. In case (1b), $`(\alpha _1,\mathrm{},\alpha _{m1},\beta _{j+1})`$ is an increasing subsequence of length $`m`$ intersecting $`\beta `$. In case (1c), $`(\beta _j,\alpha _{m1},\alpha _m,\mathrm{},\alpha _{2m3})`$ is an increasing subsequence of length $`m`$ intersecting $`\beta `$. By Lemma 6.2, in all cases we obtain a subsequence of $`\pi `$ having shape $`(m,1^{k1})`$. 
(c) The construction extends Example 3.2 (for which $`m=k=4`$): take $`\pi =(\gamma ,\alpha ,\delta ,\beta )`$, where $`\alpha `$ and $`\delta `$ are increasing sequences of length $`m2`$ and $`\beta ,\gamma `$ are decreasing sequences of length $`k2`$: $$\alpha =(1,\mathrm{},m2);\beta =(m+k4,\mathrm{},m1);$$ $$\gamma =(m+2k6,\mathrm{},m+k3);\delta =(m+2k5,\mathrm{},2m+2k8).$$ It is easy to see that an increasing subsequence of $`\pi `$ intersecting $`\gamma `$ must be contained (omitting the intersection element itself) in $`\delta `$, so that its total length is at most $`m1`$. Similar analysis of $`\beta `$ shows that an increasing subsequence of length $`m`$ in $`\pi `$ must be contained in $`(\alpha ,\delta )`$. Analogously, a decreasing subsequence of length $`k`$ must be contained in $`(\gamma ,\beta )`$. The two subsequences cannot intersect. $`\mathrm{}`$ Proof of Theorem 6.1. By Schensted’s Theorem, if a permutation $`\pi `$ has a subsequence of shape $`(m,1^{k1})`$ then it has an increasing subsequence of length $`m`$ and a decreasing subsequence of length $`k`$. On the other hand, a permutation in $`_{\{\lambda n|(m,1^{k1})\lambda \}}C^\lambda `$ has either no increasing subsequence of length $`m`$ or no decreasing subsequence of length $`k`$. Thus, $$\underset{\{\lambda n|(m,1^{k1})\lambda \}}{}C^\lambda \mathrm{𝐴𝑣𝑜𝑖𝑑}_n^{(m,1^{k1})}.$$ For the other direction, assume that $`\pi C^\lambda `$ with $`(m,1^{k1})\lambda `$. Hence, $`\lambda _1m`$ and $`\lambda _1^{}k`$. If either $`m3`$ or $`k3`$ then, by Lemma 6.3(a), $`\pi `$ has a subsequence of shape $`(m,1^{k1})`$. Otherwise (i.e., if $`m4`$ and $`k4`$), by assumption $`(2m4)(2k4)<n=|\lambda |\lambda _1\lambda _1^{}`$, and therefore either $`\lambda _1>2m4`$ or $`\lambda _1^{}>2k4`$. We can now use Lemma 6.3(b). $`\mathrm{}`$ Corollary 6.4. For any pair of positive integers $`m`$ and $`k`$, and for $`n4mk`$ $$\mathrm{𝑎𝑣𝑜𝑖𝑑}_n^{(m,1^{k1})}=\mathrm{𝑎𝑣𝑜𝑖𝑑}_n^{(m)}+\mathrm{𝑎𝑣𝑜𝑖𝑑}_n^{(1^k)}=\underset{\lambda n\lambda _1<m}{}(f^\lambda )^2+\underset{\lambda n\lambda _1^{}<k}{}(f^\lambda )^2,$$ where $`f^\lambda `$ is the number of standard Young tableaux of shape $`\lambda `$. Combining Corollary 6.4 with (5.4) we obtain Corollary 6.5. $$\underset{n\mathrm{}}{lim}(\mathrm{𝑎𝑣𝑜𝑖𝑑}_n^{(m,1^{k1})})^{1/2n}=\mathrm{max}\{m1,k1\}.$$ ### 6.2 Avoiding $`(2^2)`$ In this subsection we compute $`\mathrm{𝑎𝑣𝑜𝑖𝑑}_n^{(2^2)}`$ and show that $$\underset{n\mathrm{}}{lim}(\mathrm{𝑎𝑣𝑜𝑖𝑑}_n^{(2^2)})^{1/2n}=\sqrt{2+\sqrt{2}}.$$ In particular, unlike the case of hooks, neither the lower bound nor the upper bound of Corollary 5.2 gives the correct limit in this case. Example 3.3 shows that for any $`n5`$, $$\underset{\{\lambda n|(2^2)\lambda \}}{}C^\lambda \mathrm{𝐴𝑣𝑜𝑖𝑑}_n^{(2^2)}.$$ However, the opposite inclusion does hold. Proposition 6.6. For any positive $`n`$, $$\mathrm{𝐴𝑣𝑜𝑖𝑑}_n^{(2^2)}\underset{\{\lambda n|(2^2)\lambda \}}{}C^\lambda .$$ Proposition 6.6 is a special case of Theorem 4.4. Here we suggest an independent and more informative proof of this result. Proof. By induction on $`n`$. The claim obviously holds for $`n4`$. Assume that it holds for $`n1`$, for some $`n5`$. For the induction step observe that $`C^{(2^2)}=\{2143,2413,3142,3412\}`$ consists of all permutations in $`S_4`$ for which 1 and 4 are in the ‘middle’. It follows that for any permutation $`\pi `$ in $`S_n`$, if $`\pi _1\{1,n\}`$ and $`\pi _n\{1,n\}`$ then $`\pi `$ is not $`(2^2)`$-avoiding. 
Therefore, if $`\pi S_n`$ is $`(2^2)`$-avoiding then either $`\pi _1\{1,n\}`$ or $`\pi _n\{1,n\}`$. Assume that $`\pi _1\{1,n\}`$. By the induction hypothesis the shape of the subsequence $`(\pi _2,\mathrm{},\pi _n)`$ does not contain $`(2^2)`$ and is therefore a hook $`(r,1^{nr1})`$ for some $`1rn1`$. Adding $`\pi _1=1`$ increases the size of the longest increasing subsequence by 1; thus, by Schensted’s Theorem the resulting shape is $`(r+1,1^{nr1})`$. Adding $`\pi _1=n`$ increases the size of the longest decreasing subsequence by 1; again, by Schensted’s Theorem the resulting shape is $`(r,1^{nr})`$. The case $`\pi _n\{1,n\}`$ is similar. $`\mathrm{}`$ Corollary 6.7. For any positive integer $`n`$ $$\mathrm{𝑎𝑣𝑜𝑖𝑑}_n^{(2^2)}=\frac{1}{2}(2+\sqrt{2})^{n1}+\frac{1}{2}(2\sqrt{2})^{n1}.$$ Proof. It follows from the proof of Proposition 6.6 that $$\mathrm{𝑎𝑣𝑜𝑖𝑑}_n^{(2^2)}=4\mathrm{𝑎𝑣𝑜𝑖𝑑}_{n1}^{(2^2)}2\mathrm{𝑎𝑣𝑜𝑖𝑑}_{n2}^{(2^2)}.$$ The solution of this linear recursion (with appropriate initial values) gives the desired result. $`\mathrm{}`$ ## 7 Final Remarks and Open Problems ### 7.1 Algebraic Structure Let $`R`$ be the set of all representatives of minimal length of left cosets of $`S_m`$ in $`S_n`$ (length here, as usual, is in terms of the Coxeter generators, i.e., adjacent transpositions). For any partition $`\mu `$ of $`m`$, the set $`C^\mu `$ of all permutations of shape $`\mu `$ is a two-sided Kazhdan-Lusztig cell in $`S_m`$. For any $`nm`$ the set of all permutations in $`S_n`$ which are not $`\mu `$-avoiding coincides with the set $`RC^\mu R^1`$. Theorem 5.1 claims that for hook shapes the set $`RC^\mu R^1`$ is a union of two-sided Kazhdan-Lusztig cells. This phenomenon generalizes a beautiful well-known fact: The set $`RC^\mu `$ (or: $`C^\mu R^1`$) is a union of Kazhdan-Lusztig left (resp. right) cells \[Sr, BV Prop. 3.15\]. See also \[GaR, Ro\]. Barbasch and Vogan gave an algebraic proof of this fact by associating the set $`RC^\mu `$ to induced representations. An algebraic interpretation for the results in this paper is required. These and other relations with representation theory deserve further study. ### 7.2 Asymptotics Regev calculated, by considering Schensted’s Theorem, the exact asymptotics of $`\mathrm{𝑎𝑣𝑜𝑖𝑑}_n^{(m)}`$ \[Re\]. In this paper we have generalized this “RSK approach” to prove that for any partition $`\mu `$ there exists a constant $`c(\mu )`$ such that, for any $`n`$, $$\mathrm{𝑎𝑣𝑜𝑖𝑑}_n^\mu c(\mu )^n.$$ Note that from Corollary 5.2 and Corollary 6.7 it also follows that, for $`\mu `$ not strictly contained in $`(2^2)`$, there exists a constant $`\stackrel{~}{c}(\mu )>1`$ such that $`\mathrm{𝑎𝑣𝑜𝑖𝑑}_n^\mu \stackrel{~}{c}(\mu )^n`$ for $`n`$ large enough. A far reaching generalization was conjectured by Stanley and Wilf \[Bo1\]. The Stanley-Wilf Conjecture. For any fixed permutation $`\sigma `$ there exists a constant $`c(\sigma )`$ such that, for any $`n`$ $$\mathrm{𝑎𝑣𝑜𝑖𝑑}_n(\sigma )c(\sigma )^n,$$ where $`\mathrm{𝑎𝑣𝑜𝑖𝑑}_n(\sigma )`$ is the number of all $`\sigma `$-avoiding permutations in $`S_n`$. By a result of Arratia \[Ar\], if this conjecture holds then actually the limit $`lim_n\mathrm{}\mathrm{𝑎𝑣𝑜𝑖𝑑}_n(\sigma )^{1/n}`$ always exists (and is finite). The Stanley-Wilf conjecture holds for all $`\sigma S_3`$ \[K, p. 238\] and all $`\sigma S_4`$ \[Bo1, Bo2\], as well as for many other cases (see \[SSi\], \[Bo3\] and their references). 
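Counts such as $`\mathrm{𝑎𝑣𝑜𝑖𝑑}_n^{(2^2)}`$ above, and more generally $`\mathrm{𝑎𝑣𝑜𝑖𝑑}_n(\sigma )`$, are easy to verify exhaustively for small $`n`$. The sketch below is our code (brute force, so only feasible up to roughly $`n=8`$): it enumerates $`S_n`$, rejects every permutation containing a pattern from $`C^{(2^2)}=\{2143,2413,3142,3412\}`$, and checks the closed formula of Corollary 6.7.

```python
from itertools import combinations, permutations
from math import sqrt

C22 = {(2, 1, 4, 3), (2, 4, 1, 3), (3, 1, 4, 2), (3, 4, 1, 2)}

def pattern(sub):
    """Relative-order pattern (standardization) of a subsequence."""
    order = sorted(range(len(sub)), key=lambda i: sub[i])
    out = [0] * len(sub)
    for rank, i in enumerate(order):
        out[i] = rank + 1
    return tuple(out)

def avoids_22(pi):
    """True iff pi avoids every permutation of shape (2,2)."""
    return all(pattern(sub) not in C22 for sub in combinations(pi, 4))

for n in range(1, 8):
    brute = sum(avoids_22(p) for p in permutations(range(1, n + 1)))
    closed = ((2 + sqrt(2)) ** (n - 1) + (2 - sqrt(2)) ** (n - 1)) / 2
    assert brute == round(closed)  # Corollary 6.7
```

The same loop, with $`C^\mu `$ read off from the Robinson-Schensted correspondence, verifies $`\mathrm{𝑎𝑣𝑜𝑖𝑑}_n^\mu `$ for other small shapes; only its exponential cost keeps it from saying anything about the asymptotics discussed here.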
Recently, Alon and Friedgut \[AF\] have applied Davenport-Schinzel sequences to prove a somewhat weaker version of the conjecture for arbitrary $`\sigma `$. An interesting challenge is to apply the “RSK approach” to attack the Stanley-Wilf Conjecture; namely, to apply Greene’s Theorem and methods presented in this paper to sets avoiding a single permutation. Acknowledgments. The authors thank Noga Alon, Miklos Bóna, Ehud Friedgut, Nati Linial, Alek Vainshtein and Julian West for useful discussions. Special thanks to Amitai Regev for stimulating comments.
# Viscous fingering in liquid crystals: Anisotropy and morphological transitions. ## I Introduction Interfacial instabilities arise in a wide variety of contexts, often of applied interest, such as dendritic growth, directional solidification, flows in porous media, flame propagation, electrodeposition, or bacterial growth . Notwithstanding this disparity, there has been a search for unifying common features concerning the more fundamental problem of their underlying nonequilibrium dynamics. One such feature seems to be the role of anisotropy in determining the observed morphology. Thus, the finding that anisotropy is necessary for the needle crystal to solve the steady-state solidification problem (see e.g. ), and that a critical amount of it is needed to stabilize its tip and (possibly) generate side-branches in related local models , motivated the inclusion of anisotropy in viscous fingering experiments, either by engraving the plates , or by using a liquid crystal . In turn, these experiments in anisotropic viscous fingering confirmed the existence of a tip-splitting / side-branching transition controlled by anisotropy and driving force. Again, theoretical work on the solidification problem and numerical integration of its (nonlocal) dynamics showed that the anisotropy in the surface tension, together with the dimensionless undercooling, controls the transition in this system. In this way, a picture emerged in which some kind of anisotropy can control the tip-splitting or side-branching behavior of different systems. However, it is still not clear how different these anisotropies and systems can be. On the one hand, not all kinds of anisotropy seem to control the transition. The channel walls in a viscous fingering experiment, for instance, are known to play the same role as surface tension anisotropy in free dendritic growth as far as the existence of a single-finger steady state solution with surface tension is concerned. Moreover, even with isotropic surface tension, this steady finger is stable up to a certain critical amount of noise , but no side-branching is observed, in contrast to free dendritic growth. Above this threshold the tip splits, but again no side-branching is observed, in spite of the anisotropy due to the channel walls. It is necessary to introduce some other type of anisotropy to observe the transition to side-branching. On the other hand, some types of anisotropy not directly acting on the surface tension do seem to control the transition in some systems, as is the case of liquid crystal viscous fingering experiments, which varied mainly the anisotropy in the viscosity, as well as etched cells, where the exact effect of the grooves on the free boundary equations is unclear. A connection between surface tension anisotropy (seen to control the transition in the solidification problem both experimentally and in simulations) and other types of anisotropy is therefore clearly lacking. Here we present such a connection for the case of a simple model of a liquid crystal. Specifically, we show that two different viscosities in two perpendicular directions (in addition to some already anisotropic surface tension) can be mapped to a two-fold surface tension anisotropy (times the rescaled original anisotropic surface tension) through a convenient axis rescaling. 
Moreover, we integrate the resulting problem to confirm the existence of the morphological transition also for the viscous fingering equations. The numerics use a previously developed and thoroughly tested phase-field model for viscous fingering. We do find such a transition as a function of the amount of anisotropy and of the value of the dimensionless surface tension itself (i.e., the driving force for fixed surface tension). The results are consistent with experimental results in viscous fingering with a liquid crystal and etched cells , and also with theory and simulations for the solidification problem . The layout of the rest of the paper is as follows: in Sec. II we refer to the special features of liquid crystals concerning viscous fingering and present a simple model for them. We then map this model onto the basic Saffman-Taylor problem with a two-fold anisotropy in the surface tension. In Sec. III we briefly describe the phase-field model used and present the numerical results. Finally, in Sec. IV we discuss their consequences for viscous fingering with a liquid crystal and their consistency with related problems. ## II Model In the nematic phase of a liquid crystal its molecules are locally oriented, giving rise to anisotropy in the viscosity and surface tension. The degree of orientation depends on the proximity of the other phase(s), namely the isotropic (and, for some liquid crystals, the smectic); i.e., it still depends on temperature, and so does the anisotropy, mainly in the viscosity . Therefore one should be able to explain the tip-splitting / side-branching transition as a function of temperature in the nematics by means of the anisotropy in the viscosity alone. In a viscous fingering experiment, the director forms a small angle with the velocity field, except perhaps in the neighborhood of the interface, where it might follow its normal direction. So, as a first approximation, one can consider that there is flow alignment and, therefore, a velocity-dependent viscosity, which would make the flow nonlaplacian. However, in the vicinity of a finger tip we can approximate the direction of the flow by that of the finger, so that we can build a minimal model with only two different viscosities: one in the direction parallel to the finger and one in the perpendicular direction. More details can be found in Ref. . Let us now review the formulation of the Saffman–Taylor equations to account for those two different viscosities in two perpendicular directions $`x`$ and $`y`$. We will do it for the channel geometry, although the result also applies to the circular cell used in the experiments of Ref. with minor changes (i.e., two different viscosities in the radial and tangential directions would also map to the standard viscous fingering equations and the same functional dependence for the surface tension anisotropy). For the sake of generality, we consider both the displaced ($`1`$) and the injected ($`2`$) fluid to have a certain distinct viscosity ($`\mu _1`$,$`\mu _2`$). 
As in the usual Saffman–Taylor problem, in each bulk we assume the flow to be incompressible, $$\stackrel{}{}\stackrel{}{u}=0$$ (1) (where $`\stackrel{}{u}`$ is the fluid velocity in the reference frame moving with the mean interface at $`V_{\mathrm{}}`$, the injection velocity), and also Darcy’s law to hold, but now for two different viscosities $`\mu _{x,i}`$, $`\mu _{y,i}`$, $`u_x`$ $`=`$ $`{\displaystyle \frac{1}{\eta _{x,i}}}_xp`$ (2) $`u_y`$ $`=`$ $`{\displaystyle \frac{1}{\eta _{y,i}}}\left(_yp+\rho _ig_{eff}\right)V_{\mathrm{}},`$ (3) where $`i=1,2`$ stand for each fluid, $`u_x`$, $`u_y`$ are the $`x`$, $`y`$ components of $`\stackrel{}{u}`$, $`p`$ is the pressure, $`\eta _{x,i}=(12/b^2)\mu _{x,i}`$ is an inverse mobility in the $`x`$ direction, $`\eta _{y,i}=(12/b^2)\mu _{y,i}`$, in the $`y`$ direction, $`\rho _i`$, the density, and $`g_{eff}`$, the effective gravity in the plane of the channel. Also as in the usual Saffman–Taylor problem, on the interface the normal velocity is continuous and equals that of the interface, $$\widehat{r}\stackrel{}{u}_1=\widehat{r}\stackrel{}{u}_2=v_n$$ (4) (where $`r`$ is a coordinate perpendicular to it increasing towards fluid $`1`$ and $`v_n`$ its normal velocity), and the pressure has a jump given by Laplace’ law, $$p_1p_2=\sigma (\varphi )\kappa ,$$ (5) with $`\sigma (\varphi )`$ the (anisotropic) surface tension and $`\kappa `$ the interface curvature. Due to Eqs. (1), and (4), the flow can be described by a scalar field $`\psi `$, the stream function, defined even on the interface by $`u_x=_y\psi `$, $`u_y=_x\psi `$ (see e.g. Refs. ). However, because of the different viscosities in the $`x`$ and $`y`$ directions, the problem is nonlaplacian (there is vorticity) in the bulk: $$^2\psi =|\stackrel{}{}\times \stackrel{}{u}|0.$$ (6) To circumvent this, we rescale the $`x`$ and $`y`$ axis by a different factor. We also take advantage to adimensionalize the resulting equations in the same way as in Refs. , so that they can be compared to those in these references, and especially to Ref. in order to generalize the phase-field model described there to the case of anisotropic viscosity. Thus, we perform the following change of variables: $`x=a_x\stackrel{~}{x},`$ (7) $`y=a_y\stackrel{~}{y},`$ (8) $`t={\displaystyle \frac{W}{U_{}}}\stackrel{~}{t},`$ (9) where tildes denote new variables, $`a_x,a_y`$ have units of length, $`U_{}`$ is a velocity, and $`W`$, the channel width. We find $$\stackrel{}{}\stackrel{}{u}=\frac{U_{}}{W}\stackrel{~}{\stackrel{}{}}\stackrel{~}{\stackrel{}{u}}=0,$$ (10) so that the flow is still incompressible and we can define a new stream function $`\stackrel{~}{\psi }=(W/U_{})\left[\psi /(a_xa_y)\right]`$, which will be laplacian in the bulk if and only if \[see Eq. (6)\] the velocity field is potential in each fluid, $$\stackrel{~}{\stackrel{}{u}}=\frac{W}{U_{}}\left[\frac{1}{W^2\stackrel{~}{\eta }_i}\left(\stackrel{~}{\stackrel{}{}}p+a_y\rho _ig\widehat{y}\right)+\frac{V_{\mathrm{}}}{a_y}\widehat{y}\right]$$ (11) which is now the case as long as we choose $`a_x,a_y`$ to be such that $$a_x^2\eta _{x,i}=a_y^2\eta _{y,i}W^2\stackrel{~}{\eta }_i.$$ (12) On the interface, Eq. (4) will be formally unchanged as long as the choice of $`a_x,a_y`$ is the same at both sides. Note that, according to Eq. (12), this implies that the ratio $`m\eta _x/\eta _y`$ must be the same for both fluids. 
In an air-liquid crystal experiment, this is obviously not the case, but, in the limit in which the viscosity of the air is negligible compared to that of the liquid crystal, the anisotropic character of the air viscosity in our model becomes irrelevant. In terms of the stream function, Eq. (4) for the new variables then reads $$_{\stackrel{~}{s}}\stackrel{~}{\psi }_1=_{\stackrel{~}{s}}\stackrel{~}{\psi }_2=\stackrel{~}{v}_n,$$ (13) where $`s`$ is the arclength along the interface and such that $`\widehat{s}\times \widehat{r}=\widehat{x}\times \widehat{y}`$. As for $`_{\stackrel{~}{r}}\stackrel{~}{\psi }`$, the boundary condition for it will be given by that for $`\stackrel{~}{u}_{\stackrel{~}{s}}\stackrel{~}{\widehat{s}}\stackrel{~}{\stackrel{}{u}}`$. Indeed, it will have a jump on the interface due to the fact that $`\stackrel{~}{\stackrel{}{u}}`$ is not potential on the very interface \[see Eq. (11\] because of the jump in $`\stackrel{~}{\eta }_i`$, which gives rise to a singular vorticity on it: $`{\displaystyle \frac{(\stackrel{~}{\eta }_1+\stackrel{~}{\eta _2})(\stackrel{~}{u}_{\stackrel{~}{s},1}\stackrel{~}{u}_{\stackrel{~}{s},2})+(\stackrel{~}{\eta }_1\stackrel{~}{\eta _2})(\stackrel{~}{u}_{\stackrel{~}{s},1}+\stackrel{~}{u}_{\stackrel{~}{s},2})}{2}}`$ (14) $`=`$ $`\stackrel{~}{\eta }_1\stackrel{~}{u}_{\stackrel{~}{s},1}\stackrel{~}{\eta _2}\stackrel{~}{u}_{\stackrel{~}{s},2}`$ (15) $`=`$ $`{\displaystyle \frac{W}{U_{}}}\left\{{\displaystyle \frac{1}{W^2}}_{\stackrel{~}{s}}(p_1p_2)+\left[{\displaystyle \frac{a_y}{W^2}}g(\rho _1\rho _2)+{\displaystyle \frac{(\stackrel{~}{\eta }_1\stackrel{~}{\eta _2})V_{\mathrm{}}}{a_y}}\right]\widehat{y}\stackrel{~}{\widehat{s}}\right\}`$ (16) and therefore, making use of Eq. (5), $`_{\stackrel{~}{r}}\stackrel{~}{\psi }_1_{\stackrel{~}{r}}\stackrel{~}{\psi }_2=\stackrel{~}{u}_{\stackrel{~}{s},1}\stackrel{~}{u}_{\stackrel{~}{s},2}`$ (17) $`=`$ $`{\displaystyle \frac{2}{U_{}}}\left\{{\displaystyle \frac{1}{W^2(\stackrel{~}{\eta }_1+\stackrel{~}{\eta _2})}}_{\stackrel{~}{s}}[\sigma (\varphi )W\kappa ]+\left[{\displaystyle \frac{a_y}{W}}{\displaystyle \frac{g(\rho _1\rho _2)}{(\stackrel{~}{\eta }_1+\stackrel{~}{\eta _2})}}+{\displaystyle \frac{W}{a_y}}cV_{\mathrm{}}\right]\widehat{y}\stackrel{~}{\widehat{s}}\right\}`$ (18) $`c\left(_{\stackrel{~}{r}}\stackrel{~}{\psi }_1+_{\stackrel{~}{r}}\stackrel{~}{\psi }_2\right),`$ (19) where $`c(\stackrel{~}{\eta }_1\stackrel{~}{\eta _2})/(\stackrel{~}{\eta }_1+\stackrel{~}{\eta _2})`$. Now choosing $`a_y/a_x^2=1/W`$, Eq. (12) yields $`m=a_y/W`$, and defining $`U_{}cV_{\mathrm{}}/m+\left[mg(\rho _1\rho _2)\right]/(\stackrel{~}{\eta _1}+\stackrel{~}{\eta _2})`$ we recover the usual result for viscous fingering in a channel (see Refs. 
), $$_{\stackrel{~}{r}}\stackrel{~}{\psi }_1_{\stackrel{~}{r}}\stackrel{~}{\psi }_2=2_{\stackrel{~}{s}}[B(\varphi )W\kappa ]2\widehat{y}\stackrel{~}{\widehat{s}}c\left(_{\stackrel{~}{r}}\stackrel{~}{\psi }_1+_{\stackrel{~}{r}}\stackrel{~}{\psi }_2\right),$$ (20) with $`B(\varphi )\sigma (\varphi )/\left[W^2(\stackrel{~}{\eta }_1+\stackrel{~}{\eta _2})U_{}\right]`$, except for the $`m`$ factors in the definition of $`U_{}`$ (and therefore in $`B(\varphi )`$) and the key fact that $`W\kappa `$ and $`\sigma (\varphi )`$ are still in the old variables and must be rescaled: $`\kappa {\displaystyle \frac{d^2y}{dx^2}}\left[1+\left({\displaystyle \frac{dy}{dx}}\right)^2\right]^{3/2}`$ (21) $`=`$ $`{\displaystyle \frac{a_y}{a_x^2}}{\displaystyle \frac{d^2\stackrel{~}{y}}{d\stackrel{~}{x}^2}}\left[1+\left({\displaystyle \frac{a_y}{a_x}}{\displaystyle \frac{d\stackrel{~}{y}}{d\stackrel{~}{x}}}\right)^2\right]^{3/2}={\displaystyle \frac{1}{W}}{\displaystyle \frac{d^2\stackrel{~}{y}}{d\stackrel{~}{x}^2}}\left[1+m\left({\displaystyle \frac{d\stackrel{~}{y}}{d\stackrel{~}{x}}}\right)^2\right]^{3/2},`$ (22) so that we obtain $$W\kappa =\stackrel{~}{\kappa }\left[\frac{1+\left(d\stackrel{~}{y}/d\stackrel{~}{x}\right)^2}{1+m\left(d\stackrel{~}{y}/d\stackrel{~}{x}\right)^2}\right]^{3/2}=\frac{\stackrel{~}{\kappa }}{\left[1+(m1)\mathrm{cos}^2\stackrel{~}{\varphi }\right]^{3/2}},$$ (23) where $`\stackrel{~}{\varphi }`$ is the angle from $`\widehat{x}`$ to $`\widehat{\stackrel{~}{r}}`$. To summarize, we recover the usual viscous fingering equations, including Eq. (20), which finally reads $$_{\stackrel{~}{r}}\stackrel{~}{\psi }_1_{\stackrel{~}{r}}\stackrel{~}{\psi }_2=2_{\stackrel{~}{s}}[\stackrel{~}{B}(\stackrel{~}{\varphi })\stackrel{~}{\kappa }]2\widehat{y}\stackrel{~}{\widehat{s}}c\left(_{\stackrel{~}{r}}\stackrel{~}{\psi }_1+_{\stackrel{~}{r}}\stackrel{~}{\psi }_2\right),$$ (24) but now with an anisotropic dimensionless surface tension of the form $$\stackrel{~}{B}(\stackrel{~}{\varphi })=\stackrel{~}{B}_0\times \stackrel{~}{\mathrm{\Sigma }}(\stackrel{~}{\varphi })\times \left[\frac{1}{1+(m1)\mathrm{cos}^2\stackrel{~}{\varphi }}\right]^{3/2},$$ (25) where $`\stackrel{~}{B}_0`$ is the dimensionless surface tension of isotropic viscous fingering $$\stackrel{~}{B}_0\frac{\sigma _0}{W^2\left[(\stackrel{~}{\eta }_1\stackrel{~}{\eta _2})V_{\mathrm{}}/m+mg(\rho _1\rho _2)\right]}$$ (26) except for the $`m`$ factors, with $`\sigma (\varphi )\sigma _0\mathrm{\Sigma }(\varphi )`$. This means that, even if the original surface tension is isotropic, the surface tension in the rescaled problem has a two-fold anisotropy with a very specific form, given by the last (third) factor on the r.h.s. of Eq. (25). On the other hand, the possible original anisotropy in the surface tension will change its functional form in the rescaled problem according to $$\stackrel{~}{\mathrm{\Sigma }}(\stackrel{~}{\varphi })=\mathrm{\Sigma }(\varphi )=\mathrm{\Sigma }\left[\mathrm{arctan}(\sqrt{m}\mathrm{tan}\stackrel{~}{\varphi })\right],$$ (27) and the rescaled problem will have the two-fold anisotropy of the mentioned last factor superimposed on the transformed anisotropy of Eq. (27) \[second factor on the r.h.s. of Eq. (25)\]. A similar result was found in a different context, namely for the nematic-smectic B transition, where two different heat diffusivities in two perpendicular directions could be mapped to the same type of anisotropy in the surface tension and the same type of transformation in the original anisotropy . 
However, note that here the assumption is that the growth is in the direction of lowest viscosity (because of flow alignment of the director), which results in growth in the direction of largest surface tension ($\tilde\varphi=\pi/2$), whereas in Ref. the situation is just the opposite: growth was found to be in the direction of lowest diffusivity because that is the direction of lowest capillary length. Also for isotropic diffusivities it is known that steady needle crystals can only grow in the direction of minimal capillary length , although there the anisotropy is assumed to be four-fold. Finally, for the minimal model described above, the original anisotropy in the surface tension would be two-fold, e.g.
$$\Sigma(\varphi)=1-\alpha\cos^2\left(\varphi-\frac{\pi}{2}\right),$$ (28)
so that the transformed anisotropy would read
$$\tilde\Sigma(\tilde\varphi)=1-\frac{m\alpha\cos^2\left(\tilde\varphi-\pi/2\right)}{1+(m-1)\cos^2\tilde\varphi}.$$ (29)

## III Numerical Integration

We now integrate the rescaled problem, namely the Laplace equation for the stream function with the boundary conditions Eqs. (13) and (24). In principle, given an initial condition, we should rescale it, evolve it using the rescaled dynamics, and translate the resulting interface back to the original variables, but we will not perform any rescaling, since the initial condition is free, and the tip-splitting or side-branching character of the result is unaffected by the final translation into the original variables. Instead, we will consider the rescaled problem on its own, and simulate it by means of the following phase-field model,
$$\tilde\epsilon\frac{\partial\psi}{\partial t}=\nabla^2\psi+c\vec\nabla\cdot(\theta\vec\nabla\psi)+\frac{1}{\epsilon}\frac{1}{2\sqrt{2}}\gamma(\theta)(1-\theta^2)$$ (30)
$$\epsilon^2\frac{\partial\theta}{\partial t}=f(\theta)+\epsilon^2\nabla^2\theta+\epsilon^2\kappa(\theta)|\vec\nabla\theta|+\epsilon^2\hat z\cdot(\vec\nabla\psi\times\vec\nabla\theta),$$ (31)
where $\theta$ is the phase field, $\epsilon$, $\tilde\epsilon$ are model parameters which must be small to recover the sharp-interface equations of the rescaled problem, and we have dropped the tildes of the rescaled variables. We have defined $f(\theta)\equiv\theta(1-\theta^2)$, $\frac{\gamma(\theta)}{2}\equiv\hat s(\theta)\cdot\left[\vec\nabla\left(B(\theta)\kappa(\theta)\right)+\hat y\right]$, and $\kappa(\theta)\equiv\vec\nabla\cdot\hat r(\theta)$, with $\hat r(\theta)\equiv\frac{\vec\nabla\theta}{|\vec\nabla\theta|}$ and $\hat s(\theta)\equiv\hat r(\theta)\times\hat z$. This model was introduced for isotropic viscous fingering in Ref. and extensively tested in Ref. . From this work we know that it will yield converged results for the steady fingers (and, in particular, for their widths) if both $\epsilon\lesssim 0.2\sqrt{B_0}$ and $\tilde\epsilon\lesssim 0.2(1-c)$. The only change to be made for the anisotropic case is to set $B(\theta)$ not merely equal to a constant, but to that given by Eq. (25), taking $\varphi=\varphi(\theta)=\arccos[\hat x\cdot\hat r(\theta)]$. This gives $B(\theta)=B(\varphi)+O(\epsilon^3)$, which not only satisfies the desired sharp-interface limit, but also ensures that the introduction of anisotropy will not result in any extra first-order correction to the free boundary problem, so that the above conditions on $\epsilon,\tilde\epsilon$ still hold.
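As an illustration of how Eqs. (30)-(31) can be time-stepped, here is a minimal explicit-Euler sketch on a doubly periodic grid. It is our own construction: the grid size, $dx$, $dt$, the band-shaped initial condition and all parameter values are illustrative, and the paper's actual runs use a channel with reflecting boundary conditions and the scheme of Ref. , not this one.

```python
import numpy as np

N, dx, dt = 128, 0.02, 2.5e-6
eps, eps_t, c, B0, m = 4e-2, 0.2, 0.0, 1e-3, 2.0

yy = np.arange(N)[:, None]*dx*np.ones((1, N))     # y along axis 0
# periodic-compatible band: theta = +1 in the middle, -1 outside
theta = np.tanh((0.25*N*dx - np.abs(yy - 0.5*N*dx))/(np.sqrt(2.0)*eps))
psi = np.zeros((N, N))                             # stream function

def dxc(a): return (np.roll(a, -1, 1) - np.roll(a, 1, 1))/(2*dx)
def dyc(a): return (np.roll(a, -1, 0) - np.roll(a, 1, 0))/(2*dx)
def lap(a):
    return (np.roll(a, -1, 0) + np.roll(a, 1, 0) +
            np.roll(a, -1, 1) + np.roll(a, 1, 1) - 4*a)/dx**2

def step(theta, psi):
    tx, ty = dxc(theta), dyc(theta)
    mod = np.sqrt(tx**2 + ty**2) + 1e-12
    rx, ry = tx/mod, ty/mod                # r_hat = grad(theta)/|grad(theta)|
    kappa = dxc(rx) + dyc(ry)              # kappa(theta) = div r_hat
    sx, sy = ry, -rx                       # s_hat = r_hat x z_hat
    B = B0*(1.0/(1.0 + (m - 1.0)*rx**2))**1.5   # Eq. (25), Sigma = 1, cos(phi) = rx
    Bk = B*kappa
    gamma = 2.0*(sx*dxc(Bk) + sy*(dyc(Bk) + 1.0))   # gamma/2 = s.[grad(Bk)+y]
    px, py = dxc(psi), dyc(psi)
    dpsi = (lap(psi) + c*(dxc(theta*px) + dyc(theta*py))
            + gamma*(1.0 - theta**2)/(2.0*np.sqrt(2.0)*eps))/eps_t      # Eq. (30)
    dtheta = (theta*(1.0 - theta**2)/eps**2 + lap(theta)
              + kappa*mod + (px*ty - py*tx))        # Eq. (31); z.(grad psi x grad theta)
    return theta + dt*dtheta, psi + dt*dpsi

for _ in range(100):
    theta, psi = step(theta, psi)
```

Note that the anisotropy enters only through $B(\theta)$, exactly as stated above; everything else is the isotropic model.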
The same phase-field equations could be used for the circular geometry by reinterpreting the parameters, since an analogous rescaling yields formally the same result. However, the boundary conditions would change. For instance, injection at the center of the cell should also be considered. Anyhow, we choose to simulate the well-controlled situation in which an (unstable) steady finger propagates in a channel of width $W$. This representation is of course exact for single fingers in experiments carried out in the linear Hele–Shaw cell , but only a (good) approximation for the vicinity of a finger in a multifinger configuration (with many fingers) in the circular geometry . We investigate the transition between the tip-splitting and side-branching behaviors as both the dimensionless surface tension $B_0$ and its anisotropy $m-1$ are varied. We are interested mainly in the effect of the anisotropy coming from the viscosity ($m-1$), so we drop that in the original surface tension ($\sigma(\varphi)=\sigma_0$). The runs use equal viscosities in both fluids, $c=0$, for reasons of numerical efficiency ($\tilde\epsilon=0.2$), but we do not expect the viscosity contrast to affect the stability of the tip, for similar reasons for which it does not play any role in the (linear) stability of a flat interface. To check this conjecture we ran simulations with $B_0=10^{-3}$ both for $c=0$ and $c=0.8$, two values of the viscosity contrast for which a dramatic change in the competition dynamics was seen using the same phase-field model , and we found that the transition lies in both cases between $m=2$ and $m=2.25$. These $c=0.8$ runs are indeed the ones shown in Figs. 1 and 2. We use $\epsilon=4\times 10^{-3}$, so that we can simulate accurately with values of the dimensionless surface tension down to $B_0=4\times 10^{-4}$. We first run a steady finger with a large, isotropic dimensionless surface tension $B(\varphi)=B_0=10^{-2}$, large enough for the finger not to destabilize for the amount of (numerical) noise we have, and thus let it reach its steady width and velocity. Once this is achieved (see the inner interface in Figs. 1(a) and 1(b)), we perform a “quench” in surface tension, i.e., we instantly reduce it to some lower value. Simultaneously, we also introduce some amount of anisotropy $m-1$. The subsequent interface evolution for $B_0=10^{-3}$ (and $c=0.8$, $\tilde\epsilon=0.08$) is also shown, within the reference frame moving with the mean interface, in Figs. 1(a) ($m=2$) and 1(b) ($m=2.25$), in the form of snapshots at time intervals of $0.11$. (Simulations used only half of the channel and reflecting boundary conditions at its center, $x=0$). The corresponding $y$ position of the interface at the center of the channel (also in the frame of the mean interface) is plotted against time in Fig. 2. For this value of $B_0$ the finger clearly destabilizes: first its tip widens and flattens (see Fig. 1) and therefore slows down (see Fig. 2), for any value of the anisotropy. (Note that for $t<0$ the tip position would be a straight line in time, since the finger was steady and, in particular, its velocity constant). Then, for $m=2$ the tip continues to flatten and slow down until its curvature [Fig. 1(a)] and eventually its velocity in the frame of the mean interface (lower curve in Fig. 2) reverse their signs.
Finally, the velocity of the interface at the center of the channel seems to reach some negative constant value (again, in the frame moving with the mean interface) corresponding to the growth of two parallel fingers at each side. We identify this reversal of the curvature sign at the center of the finger, and this always convex tip-position vs. time plot, with the tip-splitting morphology. In contrast, for $m=2.25$ the reversal of the curvature sign takes place at some distance from the center of the channel, while at the center the curvature increases again [Fig. 1(b)] and makes it possible for the tip to speed up again as well, giving rise to a change of concavity in the tip-position vs. time plot (upper curve in Fig. 2). We identify this reversal of the curvature sign at a distance from the center of the channel, and this change of concavity in the tip-position vs. time plot, with the side-branching morphology. In this way we systematically explore values of the dimensionless surface tension $B_0$ ranging from $B_0=10^{-2}$ down to $B_0=4\times 10^{-4}$. For each value of $B_0$ we simulate with several values of the anisotropy $m-1$, and we find that there is a relatively sharp transition between the tip-splitting and side-branching morphologies. In Fig. 3 we show for each value of $B_0$ ($x$ axis) the two closest values of $m-1$ ($y$ axis) for which the two different morphologies are observed, namely tip-splitting (circles) and side-branching (triangles). Thus we know that the transition line must lie somewhere between the circles and the triangles, and that above (larger values of $m-1$) and left (lower values of $B_0$) of that transition line the morphology is side-branching, and below and right of it, tip-splitting. This means that the critical anisotropy $m-1$ above which side-branching replaces tip-splitting decreases with decreasing dimensionless surface tension $B_0$. In fact, this critical anisotropy vanishes at $B_0\simeq 5\times 10^{-4}$, and below this value only side-branching is observed, even if one uses negative anisotropies down to $m-1=-0.9$, which correspond to a viscosity larger in the direction of growth of the finger than in the perpendicular one, and which is not the case of the liquid crystal experiments that motivated this study. ($m-1>-1$ is required to keep the two viscosities, and therefore $B(\varphi=0)$, finite and positive). Of course, the specific value of $B_0$ for which the critical anisotropy vanishes could be affected by the fact that a residual (four-fold) grid anisotropy remains, but it seems unavoidable that there is such a (finite) value of $B_0$, since the transition line curves down as $B_0$ is decreased, and for large enough values of $B_0$ or the anisotropy $m-1$ the grid spacing $\Delta x=\epsilon=4\times 10^{-3}$ is far too fine to affect the effective anisotropy. On the other hand, for $B_0\gtrsim 1.4\times 10^{-3}$ and for the time elapsed in our runs, no clear side-branching is actually observed above the transition line extrapolated from lower values of $B_0$, whereas tip-splitting still occurs below that line. For even larger values of $B_0$, namely $B_0\gtrsim 2\times 10^{-3}$, not even tip-splitting is observed within the time elapsed, although the steady finger still destabilizes through the widening and flattening of its tip. Finally, for $B_0=10^{-2}$ the finger is completely stable for the amount of noise we have, as was pointed out before.
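The criterion just described (location of the curvature-sign reversal plus the concavity of the tip trajectory) lends itself to automation. A possible numerical reading of the tip-trajectory part, on an array `y_tip` sampled at a uniform time step, is sketched below (a hypothetical helper of ours, not the analysis code actually used):

```python
import numpy as np

def classify(y_tip, dt):
    """Crude morphology label from the tip position in the mean-interface frame."""
    acc = np.gradient(np.gradient(y_tip, dt), dt)   # concavity of y_tip(t)
    acc = acc[np.abs(acc) > 1e-10]                  # ignore numerically flat parts
    changes = np.count_nonzero(np.diff(np.sign(acc)))
    # no concavity change -> always convex plot -> tip-splitting;
    # a change of concavity -> the tip re-accelerates -> side-branching
    return "side-branching" if changes > 0 else "tip-splitting"
```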
## IV Discussion and conclusions

We have shown that viscous fingering with two different viscosities in two perpendicular directions maps to the standard viscous fingering equations (i.e., those for isotropic viscosity) with an extra two-fold anisotropy in the surface tension, which, together with the hypothesis of flow alignment of the director, leads to growth in the direction of maximal surface tension. We have simulated the resulting problem using a previously developed phase-field model , and we have found that there is a transition from the tip-splitting to the side-branching morphology as either the anisotropy in the surface tension is increased or the dimensionless surface tension is decreased. We now draw the connection with the liquid crystal experiments of Ref. . The observed anisotropy dependence is consistent with the experimental finding that there is a transition from tip-splitting to side-branching and back to tip-splitting with temperature in the nematic phase, since close to the other phases the director alignment, and consequently the anisotropy, weaken . The transition is found to be also reentrant with injection pressure , which is explained there with the hypothesis that too low pressures do not achieve flow alignment, whereas too large ones break down the Hele–Shaw approximation because of the importance of inertial terms in the hydrodynamic equations, which then destroy the flow alignment again. This anisotropy dependence is also consistent with simulations of the boundary layer model and the full solidification problem , as well as with analytical approaches to solidification . As for the dependence on the dimensionless surface tension $B_0$, one first needs to relate the values of $B_0$ used in the channel simulations to the experimental parameters in the circular geometry. To do this we consider a virtual channel whose walls are placed at half the distance between a finger and its nearest neighbors. The channel width $W$ is given by this distance between adjacent finger tips, whereas the effective injection velocity $V_\infty$ turns out to be the ratio between the injection pressure and $R$, the mean distance between a tip and the injection point. Then the following dynamic picture of a typical experiment in the circular cell emerges: initially some fingers develop. If the anisotropy, $m-1$, is strong enough, their tips are stable (which corresponds to a point above the transition line in Fig. 3, where the observed morphology is side-branching). As these fingers grow radially, $W$ increases as $R$, whereas the effective driving force, $(\eta_1-\eta_2)V_\infty$, decreases as $1/R$, so that the dimensionless surface tension $B_0$ they experience is found to decrease as $1/R$. Thus, the corresponding point in Fig. 3 moves to the left (lower values of $B_0$), and the side-branching behavior is preserved. In contrast, if the anisotropy $m-1$ is not strong enough, the tips split (the corresponding point is below the transition line in Fig. 3, where the observed morphology is tip-splitting). As a result the number of fingers increases, which then compensates for the growth of the distance between finger tips as $R$, in such a way that the effective dimensionless surface tension $B_0$ keeps roughly steady during the pattern development, so that the corresponding point in Fig. 3 basically does not move. Thus the transition line is not crossed, and the tip-splitting behavior is also preserved.
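The self-consistency of this picture can be spelled out with the scalings just quoted, since $B_0\propto R/W^2$ when the driving decreases as $1/R$. A tiny sketch follows (ours; prefactors are arbitrary, and the deduction that a constant $B_0$ requires $W\propto\sqrt{R}$, i.e. a finger number growing as $\sqrt{R}$, is our reading of the argument rather than a statement made above):

```python
import numpy as np

R = np.array([1.0, 2.0, 4.0, 8.0])      # mean tip distance from injection point
driving = 1.0/R                          # effective driving force ~ 1/R

W_stable = R                             # stable tips: W grows as R
print(1.0/(W_stable**2*driving))         # B0 ~ 1/R: moves left in Fig. 3

W_split = np.sqrt(R)                     # W ~ sqrt(R) keeps B0 constant, so that
print(1.0/(W_split**2*driving))          # n = 2*pi*R/W ~ sqrt(R) fingers are needed
```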
In this case, we can estimate $B_0$ in the experiments to be of order $10^{-3}$, whereas $m$ is said to be around 2 at the transition . This is indeed very close to our transition point ($B_0=10^{-3}$, $2.05\lesssim m\lesssim 2.1$) in Fig. 3, but it should be taken into account that the value of $B_0$ below which fingers destabilize is known to depend on the amount of noise present . Accordingly, one expects the whole transition curve to be shifted to different values of $B_0$ for a different amount of noise. The latter, however, is uncontrolled both in the experiments and in the simulations. On the other hand, the dependence of the value of the anisotropy at the transition on the driving force has been seen in viscous fingering with an etched cell . However, we find the critical amount of anisotropy for side-branching to replace tip-splitting to vanish at a finite value of $B_0$. Below this value we only observe side-branching. A more realistic model of viscous fingering in a liquid crystal should include a velocity-dependent viscosity. In general the resulting non-Laplacian character of the problem could not be avoided, but in principle it would still be possible to simulate the dynamics by means of a phase-field model.

## Acknowledgements

We are grateful to Á. Buka and T. Tóth-Katona for drawing our attention to this problem and for helpful discussions. We acknowledge financial support from the Dirección General de Enseñanza Superior (Spain), Projects No. PB96-1001-C02-02 and PB96-0378-C02-01, and the European Commission Project No. ERB FMRX-CT96-0085. Simulations have been carried out using the resources at CESCA and CEPBA, coordinated by $\mathrm{C}^4$. R.F. also acknowledges a grant from the Comissionat per a Universitats i Recerca (Generalitat de Catalunya).

## Figure captions

### Fig. 1

Destabilization of the tip of a (stationary) finger after instantly decreasing $B_0$ to $B_0=10^{-3}$ at the time of the first interface shown. Successive interfaces are shown in the reference frame moving with the mean interface at time intervals of 0.11, for $c=0.8$, $\tilde\epsilon=0.08$. The latest interface is represented in bold. (a) Tip-splitting for $m=2$. (b) Side-branching for $m=2.25$.

### Fig. 2

$y$ coordinate of the interface at the center of the channel ($x=0$) in the reference frame of the mean interface as a function of time, corresponding to Figs. 1(a) (lower curve) and 1(b) (upper curve). $t=0$ ($0.88$) corresponds to the first (last) interface shown there.

### Fig. 3

Transition between tip-splitting (circles) and side-branching (triangles) as a function of the surface tension anisotropy $m-1$ and the dimensionless surface tension $B_0$.
# Critical exponents in spin glasses : numerics and experiments

## I Introduction

It is well known that at a continuous phase transition, striking critical behaviour is observed. As the transition temperature is approached from above or below, there are power law singularities in a number of physical parameters (the specific heat, the susceptibility, …). On very general grounds it can be shown that the various critical exponents which govern the singularities are related to each other through scaling relationships. Even more remarkable is the fact that systems which are very different from each other at the microscopic level can be arranged into universality classes : within a given class all members have strictly identical exponents. Classes are defined by a restricted number of parameters - basically the space dimension and the number of components of the order parameter of each system. For standard second order transitions this behaviour can be fully understood, and the exponents calculated a priori thanks to renormalization group theory. This is one of the outstanding achievements of statistical physics (see Fisher for an enlightening historical survey). At standard ferromagnetic transitions, the exponents follow mean field behaviour down to dimension 4 (the upper critical dimension). At dimensions $d$ lower than 4, the exponents have been calculated through the $\epsilon$ expansion, where $\epsilon=4-d$. There are well behaved series in increasing powers of $\epsilon$ which allow one to give renormalization group estimates of the exponents. The values can be compared with those measured from high precision numerical simulations, and contact can be made with the exact values at dimension 2 (where, for the Ising system, the exponents have been known exactly for more than $50$ years). Renormalization group values, numerical values, and experimental values are all in excellent agreement with each other. The only exponent whose value is not quite so thoroughly established is the dynamic exponent $z$ ; even here disagreements between different estimates are small. Universality is a general rule in systems with standard second order transitions, except for a restricted class of two dimensional systems with conflicting interactions. Here it has been shown analytically as well as numerically that the exponents vary continuously as a function of the ratio of the competing interactions . This behaviour can be explained for these particular systems in terms of marginal operators in the renormalization scheme . In the Spin Glass (SG) context, for a long time it was by no means obvious that there were well defined phase transitions at all in real three dimensional materials, or even in 3d model systems. Although the cusp temperature is clearly marked experimentally, the specific heat shows no visible singularity, and the susceptibility does not diverge in the region of the cusp temperature. A very important step forward was the realization in the late 1970s that the appropriate parameter to measure is not the standard linear susceptibility but the non-linear susceptibility . The Miyako group in Sapporo, in pioneering work, showed that there is a divergence of $\chi_{nl}$ at the cusp temperature ; this work and that of other groups convinced the community that there is indeed a bona fide transition in a SG. With the existence of a transition established, estimates were found of the critical exponents. Numerical work soon followed .
It was important to measure these values in order to compare with the renormalization group approach. Does the combination of frustration and randomness which characterizes SGs modify the basic physics of transitions in a fundamental way or not ? This has turned out to be a long story, which is still not finished. Unfortunately, as no clear theoretical guidelines appeared, the enthusiasm for the subject dropped and even the empirical ground rules were not fully established. Our aim here is to show that this study, both numerical and experimental, is well worth pursuing. Before looking in detail at the data, we can first note that the exponents which come out of the numerical or experimental analyses are very different from the classical ferromagnetic values. For the latter, in dimension 3 (for either Ising, XY, or Heisenberg spins) the exponents $\alpha$ and $\eta$ are numerically small, and $z$ is near $2$. For the spin glasses $\alpha$ is strongly negative (around $-2$, which is consistent with the lack of a visible singularity in the specific heat ) and $\eta$ is far from zero ; experimental values range from $+0.4$ to $-0.5$ and simulation values are always distinctly negative. $z$ is strong - generally around $6$. The major differences compared with the standard second order transition values already indicate that the spin glass transition lies in a quite different category. In Ising Spin Glasses (ISGs) the upper critical dimension is $6$, and one could imagine that from dimension $6$ down, a similar renormalization group approach would be valid mutatis mutandis as for the ferromagnets. In fact things are much more complicated. The $\epsilon$ expansion has been calculated to order three , but the successive coefficients in increasing powers of $\epsilon$ grow rapidly, so it is not at all obvious how the total series will sum. Valiant efforts have been made over many years to set the theory on a firm footing using field theory , but so far the only clear result is to confirm that the leading term in $6-d$ from the $\epsilon$ expansion is correct. After that, numerous badly controlled terms proliferate and theory is of little practical help in predicting exponents even at $d=5$. As a predictive theory is lacking, we are forced to turn to numerical and experimental methods so as to establish the empirical values of the exponents. The empirical results show clear violation of the Universality rules.

## II Numerical results

In the following discussion we will concentrate mainly on the systems which have been studied the most fully - ISGs on (hyper)cubic lattices with random unbiased interactions between near neighbours. The definitions of the critical exponents are familiar, with appropriate modifications for spin glasses to take into account the fact that the order parameter is the Edwards-Anderson parameter. The specific heat exponent is $\alpha$, the order parameter exponent below the ordering temperature is $\beta$, and the spin glass susceptibility exponent is $\gamma$. The exponent for the non-linear magnetization at the ordering temperature is $\delta$, with $$M_{nl}\propto H^{2/\delta}$$ (1) The correlation length exponent is $\nu$, and the exponent for the form of the correlation function at the ordering temperature is $\eta$. The relaxation time dynamic exponent is $z$. The scaling relationships between these exponents are $\alpha+2\beta+\gamma=2$, $d\nu=2-\alpha$, $\gamma=(2-\eta)\nu$, $\nu(d-2+\eta)=2\beta$ and $\delta=(d+2-\eta)/(d-2+\eta)$.
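Since several of these relations are used repeatedly below, it is convenient to note that $d$ plus any two independent exponents fix the rest. A small helper (ours, with illustrative input values) makes the bookkeeping explicit:

```python
def exponents(d, eta, nu):
    """Derive the remaining static exponents from d, eta and nu."""
    gamma = (2.0 - eta)*nu
    alpha = 2.0 - d*nu
    beta = nu*(d - 2.0 + eta)/2.0
    delta = (d + 2.0 - eta)/(d - 2.0 + eta)
    assert abs(alpha + 2.0*beta + gamma - 2.0) < 1e-12   # alpha+2beta+gamma = 2
    return dict(alpha=alpha, beta=beta, gamma=gamma, delta=delta)

# e.g. illustrative 3d ISG-like input (eta distinctly negative):
print(exponents(d=3, eta=-0.4, nu=1.3))   # gives alpha ~ -1.9, consistent with ~ -2
```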
Numerically, each of the exponents can in principle be measured independently through temperature dependences on large samples , though frequently they are measured using finite size scaling relationships. Experimentally, $\gamma$, $\alpha$, $\delta$ and the combination $z\nu$ can be measured directly, while $\beta$, $\nu$ and $\eta$ can only be obtained through scaling. There are many other useful relationships ; for instance the relaxation of the autocorrelation function at $T_g$ has an exponent $$x=(d-2+\eta)/2z$$ (2) In addition to the standard critical exponents other exponents can be defined, in particular the stiffness exponent $\theta$. This is not a critical exponent but is defined by the size dependence of changes in energy with boundary conditions. For a spin glass, energy measurements can be made with periodic and anti-periodic boundary conditions. The sample to sample fluctuations of the energy differences scale as $L^\theta$. If the zero temperature $\theta$ is positive, then the ordering temperature is greater than zero. A first type of quasi-numerical approach is furnished by high temperature expansions. The method consists of an extrapolation from a finite number of exact terms in the high temperature series expansion of some thermodynamic function to its asymptotic coefficients. The asymptotic form of the series contains the information on the singularities of the function. The extrapolation is not exact, but excellent results have been obtained in regular systems. The situation is less favourable in disordered systems. Careful analysis of the spin glass susceptibility from a series with a large number of terms (up to 20 in dimension 3) provides a set of estimates for the values of $T_g$ and $\gamma$ obtained from different approximant functions. If the series expansion were infinite, the method would become exact, but in practice the limited length of the series means that the estimates are not perfect. The exponent results become less accurate as one gets further from the upper critical dimension. For dimensions $5$ and $4$ the longest series give high quality estimates which can be used as independent yardsticks to compare with the Monte Carlo data which we will discuss below. The method does not have the same problems (such as thermalization) which are encountered in Monte Carlo simulations, but considerable know-how is needed to calculate long series and to extract reliable exponent estimates from the raw series results. Up to now the only specific ISG series expansion results published are for binomial ($\pm J$) interactions. For dimension $5$ a reliable and accurate ordering temperature and set of exponents is given by . For dimensions $4$ and $3$ the results quoted in are more transparent ; it is clear that the approximant data points cluster satisfactorily, with a strong correlation between estimates for $T_g$ and those for $\gamma$. The most widely used technique for determining critical exponents numerically has been that of Monte Carlo simulation. Many efforts have been made to measure ISG critical exponents accurately, despite considerable technical difficulties. For measurements exploiting the finite size scaling method , each sample is first annealed numerically until the spin system can be judged to be in thermal equilibrium ; then the fluctuations in the autocorrelation function $q(t)=\langle S_i(0)S_i(t)\rangle$ are measured. The precautions necessary are described in .
Long enough times must be used for each part of the procedure, and the time scale defining “long enough” depends on the size $L$ and the temperature $T$. The larger the sample and the lower the temperature, the longer the time scales, so it becomes very difficult to obtain significant data on large samples at low temperatures. Sophisticated update methods have been developed which alleviate this problem to some extent. One must be sure that measurements have been done over a sufficient number of independent samples. Even if numerically high quality data have been obtained, there may be intrinsic corrections to finite size scaling, which mean that the scaling rules (which are valid asymptotically for large sizes) may not yet hold exactly for the range of sizes studied. An important parameter which can be deduced from the equilibrium fluctuations is the spin glass susceptibility $\chi_{SG}=\langle q^2\rangle$, directly related to the non-linear susceptibility. The finite size scaling form is $$\chi_{SG}=L^{2-\eta}f(L^{1/\nu}(T-T_g))$$ (3) Precisely at $T_g$, as a function of size, $$\chi_{SG}(L)\propto L^{2-\eta}$$ (4) $T_g$ can be determined quite accurately as the highest temperature at which $\log(\chi_{SG}/L^2)$ varies linearly with $\log(L)$ up to large $L$. These expressions ignore corrections to finite size scaling. There should be a further factor, so that for instance the spin glass susceptibility is multiplied by $[1+L^{-w}f_L(L^{1/\nu}(T-T_g))+\cdots]$. The correction to scaling exponent $w$ has been estimated at around $2.8$ in the binomial ISG in dimension 3, as against $0.9$ for the 3d Ising ferromagnet and $1.6$ for 3d site percolation . Many authors estimate the critical temperature $T_g$ through the Binder cumulant method . The cumulant for a given $T$ and $L$ is defined by a dimensionless combination of moments of the autocorrelation fluctuations averaged over a large number of samples : $$g_L=\frac{1}{2}\left(3-\langle q^4\rangle/\langle q^2\rangle^2\right)$$ (5) for Ising spins. This cumulant is defined so as to go from zero for a high temperature Gaussian distribution of $q(t)$ values, to $1$ for a unique low temperature state. (Other related cumulants can be defined). For a continuous transition with a critical temperature $T_g$, $g_L(T_g)$ should be independent of size $L$, with values fanning out as a function of $L$ above and below $T_g$. Once $T_g$ has been established accurately, the exponents can be estimated by plotting the whole set of data for $g_L(T)$ and for $\chi_{SG}(L,T)$ in an appropriate scaling form. In practice, sample to sample fluctuations in $\langle q^4\rangle$ are strong, so very large numbers of samples must be measured (the lack of self averaging is very much worse for $\langle q^4\rangle$ than for $\langle q^2\rangle$). The crossing point can be ill determined because the $g_L(T)$ curves do not fan out appreciably at temperatures lower than the crossing point. When this is the case, relatively minor corrections to finite size scaling can modify the apparent crossing temperature. (Correction factors of the form given above should apply to both $\langle q^4\rangle$ and $\langle q^2\rangle$). Finally, the values of the exponents deduced from the scaling plots, which are strongly correlated with the $T_g$ estimate, frequently vary very steeply with the apparent value of $T_g$.
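The cumulant itself is straightforward to evaluate from measured overlaps. A toy sketch (synthetic data of ours standing in for real Monte Carlo output; sample counts are illustrative) shows the two limits quoted above:

```python
import numpy as np

def binder(q):
    """Eq. (5): g_L = (1/2)(3 - <q^4>/<q^2>^2) for Ising spins."""
    q2, q4 = np.mean(q**2), np.mean(q**4)
    return 0.5*(3.0 - q4/q2**2)

rng = np.random.default_rng(0)
print(binder(rng.normal(0.0, 0.1, 100000)))               # Gaussian q: ~0
print(binder(np.where(rng.random(100000) < 0.5, 1, -1)))  # unique state, q = +-q_EA: ~1
```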
In conclusion, the exponent estimates obtained in this traditional way must be treated with considerable caution, even when large scale numerical efforts have been made. Accurate and reliable results can be obtained only in favourable circumstances. When calculations have been made up to large sizes, it is possible to use the scaling rules for the spin glass susceptibility and for the spin correlation length to estimate the critical temperature and the exponents by extrapolations to infinite size . This method should be reliable, as corrections to finite size scaling are kept well under control. Alternative techniques which have been less widely used rely (at least partly) on dynamic measurements. In massive simulations on large samples ($64^3$ spins) Ogielski studied the relaxation of $q(t)$ as a function of temperature. An advantage of this method is that a strict thermal equilibrium state is not necessary ; as long as the anneal has been made over a time $\tau_a$ much longer than the subsequent measuring time over which $q(t)$ is studied, the measured $q(t)$ curve will be the true thermal equilibrium form (see for instance ). Ogielski assumed the standard critical behaviour for $q(t)$, which is $$q(t)=t^{-x}f(t/\tau(T))$$ (6) He estimated $T_g$ from the divergence of the relaxation time $\tau(T)$. With $T_g$ in hand he estimated the critical exponents from a combination of dynamic and equilibrium measurements. One can also exploit the critical behaviour of strictly non-equilibrium dynamics. Suppose a spin glass sample is initially at infinite temperature (so the spins have random orientations) ; it is then annealed from this configuration to the critical temperature $T_g$. The non-equilibrium spin glass susceptibility at a time $t$ after the start of the anneal will increase as $t^h$, where $$h=(2-\eta)/z$$ (7) Analogous non-equilibrium dynamic parameters have been studied extensively in a large number of regular systems . This non-equilibrium scaling behaviour has been established on a very firm theoretical basis . An obvious practical advantage for numerical work is that no preliminary anneal is required. Now it is clearly possible to combine the measurements of the dynamic relaxation exponent $x(T)$ and $h(T)$ at a series of test temperatures $T_i$ to obtain a sequence of apparent exponents $\eta(T_i)$ and $z(T_i)$ : $$\eta=(4x+(2-d)h)/(2x+h)$$ (8) $$z=d/(2x+h)$$ (9) The $\eta(T_i)$ can be compared with independent $\eta(T_i)$ estimates from the equilibrium spin glass susceptibility (equation (4)). Consistency dictates that at the true $T_g$ the two estimates must coincide. This leads to estimates of $T_g$, $\eta$ and $z$ which turn out in practice to be precise (the crossing is clean) and virtually free from pollution by corrections to finite size scaling. Once these parameters are well established, the other exponents such as $\nu$ can be determined from conventional scaling plots with only one unknown parameter. We will refer to this method as the “three scaling rule” technique. The low dimensions present special cases. First, in dimension $1$, where critical temperatures are certainly always zero for short range interactions, the stiffness exponent $\theta$ is exactly $-1$ for continuous distributions of interactions and $0$ for $\pm J$ interactions . In dimension $2$ it is also well established that the ordering temperature $T_g=0$ (except possibly when the interactions are $\pm J$ ).
From the definition of $\eta$, when $T_g$ is zero and for a unique ground state (corresponding to any continuous distribution of interactions, such as the Gaussian distribution), there is an additional scaling rule $$\eta=2-d$$ (10) As fewer spins are involved for a given size $L$ than at high dimension, it is easier to cover a wide range of $L$ for finite size scaling in Monte Carlo simulations. Even better, sophisticated numerical techniques exist to find exact ground states for systems up to large sizes. (Recently exact ground states in dimension 3 have also been obtained up to quite large $L$ ).

## III Exponent values

We will concentrate on the values of $\eta$, and start from the low dimensions. For the 2d binomial ISG, $\eta$ has been estimated to be $0.20\pm 0.02$. Curiously, even if the $T_g$ is small but non-zero, the estimated value of $\eta$ is very similar. For the Gaussian distribution, from the values of the stiffness exponent $\theta(d)$ as a function of dimension, we can estimate the lower critical dimension $d_l$ , the dimension at which $\theta=0$ (Figure 1). Because the Gaussian distribution is continuous, as $d_l$ is close to $2.50$, $\eta(d_l)$ must be close to $-0.50$, from equation (10). Numerous estimates have been given of the exponents for ISGs in dimension 3 with binomial or Gaussian interaction distributions. We have summarized the situation in , where we find that there are strong deviations from finite size scaling for the binomial case and where we obtain reliable exponent estimates principally from the three scaling rule method. Recent data on the binomial case established by extrapolation to infinite size are consistent with the values of . Other estimates relying on the Binder cumulant method are probably affected by corrections to finite size scaling and so are less reliable. We will use the results obtained in dimension 4 to demonstrate the coherence of the different methods when these are used carefully, and the inescapable conclusion that the values of the exponents change with the form of the interaction distribution. Figure 2 shows high precision data for the Binder cumulant in the binomial case. There is a clean intersection of the curves at $T=2.00\pm 0.01$. Figure 3 shows the intersection of the two curves for $\eta(T_i)$ used in the three scaling rule method described above. The intersection is at precisely the same temperature, validating this technique. From the intersection point we can deduce $T_g$ and $\eta=-0.30\pm 0.02$. As the slope of the curve for $\eta(T_i)$ estimated from the dynamic exponents $x$ and $h$ is much weaker than that of the curve deduced from the equilibrium finite size scaling spin glass susceptibility, the precision on the value of $\eta$ is much higher using this method rather than working only with the pure equilibrium susceptibility and Binder cumulant data. The series expansion data are in quite independent agreement with these Monte Carlo results. From the results plotted in Figure 3 of , for this value of $T_g$ one would expect $\gamma=2.1\pm 0.1$, or $\nu=0.92\pm 0.05$ from the scaling relation. This value of $\nu$ is perfectly consistent with direct scaling of the Monte Carlo spin glass susceptibility data. Figure 4 shows Binder cumulant data for the 4d Gaussian case ; it can be seen that the intersection point lies at $T=1.785\pm 0.01$. Figure 5 shows the three scaling rule intersection.
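A compact way to see how the three scaling rules interlock: Eqs. (8) and (9) invert the definitions of $x$ and $h$, and the equilibrium relation $\gamma=(2-\eta)\nu$ ties the series value of $\gamma$ to the Monte Carlo $\eta$. A quick numerical sketch (ours; the 4d binomial numbers are those quoted above):

```python
def eta_z(d, x, h):
    """Eqs. (8)-(9): eta and z from the dynamic exponents x and h."""
    return (4.0*x + (2.0 - d)*h)/(2.0*x + h), d/(2.0*x + h)

# round trip with made-up eta, z, checking that Eqs. (8)-(9) really invert
# x = (d-2+eta)/2z and h = (2-eta)/z:
d, eta0, z0 = 4, -0.30, 5.0
x, h = (d - 2.0 + eta0)/(2.0*z0), (2.0 - eta0)/z0
assert all(abs(a - b) < 1e-12 for a, b in zip(eta_z(d, x, h), (eta0, z0)))

# equilibrium cross-check for the 4d binomial case:
gamma, eta = 2.1, -0.30
print("nu = gamma/(2 - eta) =", round(gamma/(2.0 - eta), 3))   # ~0.91, cf. 0.92(5)
```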
Again the agreement is excellent, but the value of $\eta$ at the intersection point, $\eta=-0.44\pm 0.02$, is considerably more negative than for the binomial case. Figure 6 shows a direct plot of $\log(\chi_{SG}/L^2)$ against $\log(L)$. This has to be straight at $T_g$. It can be seen that $T_g$ must lie just below $T=1.79$, and $\eta$ can be estimated from the corresponding slope. We can note that there is excellent agreement point by point between the data of and the present results wherever comparisons can be made. Using a scaling plot we find $\nu=1.08\pm 0.10$. The 4d binomial exponent values given here are consistent with, but more accurate than, those obtained in . For the Gaussian system, the value of $\eta=-0.35\pm 0.05$ quoted in is low because of a marginally overestimated $T_g$. A careful analysis of the data on these two systems in dimension 4 but with different interaction distributions shows conclusively that the exponents are different, so universality does not hold. In dimension 5, there is excellent agreement between series estimates and Monte Carlo estimates for the binomial system. In Figure 7 we display results for $\eta$ as a function of dimension for $\pm J$ and Gaussian interactions, including the point at $d_l$ for the Gaussian case and the point at $d=2$ for the $\pm J$ case. The point at $d=6$ corresponds to the upper critical dimension value $\eta=0$. The straight line starting from this point is the leading term in the $\epsilon$ expansion : $\eta(d)=-(6-d)/3+\cdots$ . It is clear that the data demonstrate that the exponents for the two systems, binomial and Gaussian, follow regular curves as a function of dimension. Independent results are consistent with each other. However, the values at each dimension are different for the two sets of interactions, so universality is clearly violated. Results for other sets of interactions confirm this variation. Critical exponents have also been measured in the Ising phase diagram as a function of the ratio $p$ of the number of ferromagnetic to antiferromagnetic bonds. In the ferromagnetic region to the right of the Nishimori line, the static exponents do not appear to vary with $p$, but the dynamic exponent $z$ does change continuously and very significantly . We can note that the Migdal-Kadanoff renormalization approach (which should be exact for a hierarchical lattice) has been used to measure effective ordering temperatures and exponents for four different ISG interaction distributions . For dimension 3, a diamond hierarchical lattice, and a renormalization factor $b=2$, the ordering temperatures are in excellent agreement with Monte Carlo estimates on cubic lattices. However, the Migdal-Kadanoff calculations lead to a universal saddle point critical temperature and exponent values. This result seems to be closely related to the hypotheses intrinsic to the Migdal-Kadanoff method.

## IV Spin Glass Experiments

Estimates of exponents have been made by many experimental groups, in general using slightly different protocols and on different materials. The non-linear susceptibility is defined as $$\chi_{nl}=\chi_0-M/H$$ (11) where $\chi_0$ is the zero field magnetic susceptibility. At $T_g$, $\chi_{nl}$ should behave as $H^{2/\delta}$, and above $T_g$ $$M=\chi_0H+\chi_2H^3+\chi_4H^5+\cdots$$ (12) with $\chi_2$ diverging as $t^{-\gamma}$, where $t$ is the reduced temperature. The Suzuki equation of state is $$\chi_{nl}=t^\beta g(H^2/t^{\beta+\gamma})$$ (13) Both d.c.
and a.c. magnetization techniques have been used . For instance, Monod and Bouchiat and Bouchiat used d.c. measurements entirely, taking care to stay in a field range where the non-linear magnetization remained less than 10% of the linear magnetization, so as not to corrupt the data. Svedlindh et al first analysed low field and low frequency a.c. measurements to fit $\gamma$ and $T_g$. (The ordering temperature was in agreement with the low d.c. field cusp temperature). They then used the equation of state with d.c. measurements up to moderate fields, with $\beta$ as the only fit parameter. In a second set of experiments they measured the field and frequency dependence of the a.c. susceptibility and obtained an estimate of the dynamic exponent $z\nu$. In a sophisticated experiment, Lévy measured the Fourier transform spectrum of the magnetization response to a 0.1 Hz field, picking up a series of non-linear susceptibilities from the different harmonics. He could deduce accurate values of static and dynamic exponents. The experiments have to be performed with care, particular attention being paid to the proper identification of the critical temperature and to the necessity of remaining in a suitably low field range throughout. Recent measurements in which the exponent $\delta$ was measured in a range of different materials using one single protocol gave excellent confirmation of earlier data by other teams (except for the case of AuFe, where an early experiment had given values of exponents out of line with all other results). These results validate the earlier measurements and show that the considerable spread of values of exponents reported for different materials is not due to artefacts in the different measuring procedures. As in the numerical data, it is evident that the expected universality of exponents breaks down. The finite temperature $T_g$ values for real material spin glasses have been somewhat of a mystery for some years. These systems are Heisenberg, and reliable numerical work has demonstrated that Heisenberg spin glasses in dimension 3 should have zero temperature ordering. Kawamura has made the interesting suggestion that the ordering process in real life Heisenberg materials is basically a chiral spin glass ordering. This ordering would not be visible directly to magnetization experiments if there were no anisotropy. However, in all real systems random anisotropy (of the DM type ) is always present, and by coupling the chiral degrees of freedom with the magnetism, an anisotropy, however weak, reveals the chiral order. The critical exponents for pure Heisenberg chiral ordering in dimension 3 have been estimated numerically . The best values are around $\nu=1.25$ and $\eta=+0.7$. It can be noted that $\nu$ is similar to the Monte Carlo Ising values which we have presented, while $\eta$ is strongly positive, rather than negative as seen in the numerical Ising work. A plausible hypothesis is one in which the exponents change progressively from chiral-like for weak anisotropy to Ising-like for strong anisotropy. In this scenario, the value of $\nu$ should remain relatively stable for all the materials, while the value of $\eta$ should vary progressively, becoming gradually more negative as anisotropy increases. So far the qualitative trend for systems where both anisotropy and exponents have been measured is in excellent agreement with this picture. All the experimental exponent values fall within the chiral limit at one end and the Ising limit at the other end.
For the three alloy systems AgMn, CuMn and AuFe, the anisotropies are weak, moderate and strong respectively. The $\nu$ values are similar, near $1.3$, while the $\eta$ values are about $0.4$, $0.1$ and $-0.1$. The trend of the exponent values is clearly in the sense predicted by the scenario.

## V Conclusion

The main lesson which can be drawn from this overview of numerical and experimental exponent data in spin glasses is that transitions in these glassy systems are quite different from those in regular systems with standard second order transitions. The values of the exponents are far from those in regular systems, and the breakdown of Universality is manifest in carefully analysed data. The statistical physics community has been very loth to accept evidence against universality, because the renormalization scenario appears to give such an appealingly general picture of behaviour at transitions. However, the fact that it has proved extremely difficult at the field theory level to produce predictions for spin glasses is a strong indication that unexpected behaviour cannot be excluded a priori. What possible mechanism could lead to this breakdown ? It has been found analytically that Ising spin systems with ferromagnetic interactions on hierarchical lattices show no universality . For the spherical model on graphs of non-integer dimension, the exponents vary continuously with the spectral dimension of the graph . We can speculate that in spin glasses the effective dimension of the system at criticality could depend on the form of the interaction distribution. It would be unfortunate if this phenomenon were left unexplored because of preconceptions as to the physical laws which should hold for complex systems. If it could be accepted that critical behaviour is much richer in glassy systems than at conventional second order transitions without frustration, an important new field of investigation should open out. What control parameters affect the exponents and why ? What are the implications for the physics of the glass transition in the most general sense ?
# Charge and orbital ordering in underdoped La1-xSrxMnO3

## Abstract

We have explored spin, charge and orbitally ordered states in La$_{1-x}$Sr$_x$MnO$_3$ ($0<x<1/2$) using model Hartree-Fock calculations on $d$-$p$-type lattice models. At $x=1/8$, several charge and orbitally modulated states are found to be stable and almost degenerate in energy with a homogeneous ferromagnetic state. The present calculation indicates that a ferromagnetic state with a charge modulation along the $c$-axis, which is consistent with the experiment by Yamada et al. , might be responsible for the anomalous behavior around $x=1/8$.

La$_{1-x}$Sr$_x$MnO$_3$ has been studied extensively because of its interesting magnetic and electric properties . The antiferromagnetic insulator LaMnO$_3$ evolves into a ferromagnetic metal with substitution of Sr for La or with hole doping . Underdoped La$_{1-x}$Sr$_x$MnO$_3$ with $x\sim 1/8$, which is located between the antiferromagnetic insulating region and the ferromagnetic metallic region, shows many anomalous behaviors . La$_{0.875}$Sr$_{0.125}$MnO$_3$ is a paramagnetic insulator above $T_{CA}$ (180 K) and has a canted antiferromagnetic state below it . Recently, it has been found that La$_{0.875}$Sr$_{0.125}$MnO$_3$ becomes a ferromagnetic insulator below 140 K . One important question is why the hole-doped system can exist as an insulator. The superstructure observed by Yamada et al. indicates that charge ordering is responsible for the insulating behavior. However, it is still controversial whether charge ordering is realized in La$_{0.875}$Sr$_{0.125}$MnO$_3$ or not. Ahn and Millis studied the charge and orbital ordering in La$_{0.875}$Sr$_{0.125}$MnO$_3$ using a model in the strong electron-lattice coupling limit and found that the charge ordering proposed by Yamada et al. can be reproduced using their model . On the other hand, using resonant x-ray scattering, Endoh et al. confirmed that the superlattice peak found by Yamada et al. does not show resonance at the Mn $K$-edge, and concluded that there is no Mn$^{3+}$/Mn$^{4+}$ charge ordering in La$_{0.875}$Sr$_{0.125}$MnO$_3$ . Another interesting question is what is the origin of the ferromagnetism. Since the system is insulating, the simple double exchange mechanism cannot be applied. A model Hartree-Fock (HF) calculation for LaMnO$_3$ has predicted that, if the Jahn-Teller distortion is suppressed, a ferromagnetic insulating state with orbital ordering would be realized . This means that the superexchange interaction between the Mn$^{3+}$ ions can be ferromagnetic because of orbital ordering . Endoh et al. observed orbital ordering below 145 K using an x-ray scattering technique and argued that the orbital ordering is essential for the ferromagnetic and insulating state . However, an orbitally ordered state without charge ordering is expected to be metallic and may not be consistent with the fact that La$_{0.875}$Sr$_{0.125}$MnO$_3$ is insulating. In the doped manganites, the orbital modulation should couple with the charge modulation, in a similar way that the spin modulation couples with the charge modulation in the doped cuprates .
In order to understand the electronic structure of the ferromagnetic and insulating state in the doped manganites, it is necessary to consider the complicated interplay between the charge and orbital orderings. In this paper, we study the possibility of charge and orbitally ordered states in underdoped La$_{1-x}$Sr$_x$MnO$_3$ using model HF calculations and explore the origin of the ferromagnetic insulating state in underdoped La$_{1-x}$Sr$_x$MnO$_3$. We use the multi-band $d$-$p$ model with 16 Mn and 48 oxygen sites, in which the full degeneracy of the Mn 3$d$ orbitals and the oxygen 2$p$ orbitals is taken into account . The Hamiltonian is given by
$$H=H_p+H_d+H_{pd},$$ (1)
$$H_p=\sum_{k,l,\sigma}\epsilon_k^p\,p_{k,l\sigma}^{\dagger}p_{k,l\sigma}+\sum_{k,l>l^{\prime},\sigma}V_{k,ll^{\prime}}^{pp}\,p_{k,l\sigma}^{\dagger}p_{k,l^{\prime}\sigma}+\mathrm{H.c.},$$ (2)
$$H_d=\epsilon_d\sum_{i,m\sigma}d_{i,m\sigma}^{\dagger}d_{i,m\sigma}+u\sum_{i,m}d_{i,m\uparrow}^{\dagger}d_{i,m\uparrow}d_{i,m\downarrow}^{\dagger}d_{i,m\downarrow}$$ (3)
$$+u^{\prime}\sum_{i,m\neq m^{\prime}}d_{i,m\uparrow}^{\dagger}d_{i,m\uparrow}d_{i,m^{\prime}\downarrow}^{\dagger}d_{i,m^{\prime}\downarrow}$$ (4)
$$+(u^{\prime}-j^{\prime})\sum_{i,m>m^{\prime},\sigma}d_{i,m\sigma}^{\dagger}d_{i,m\sigma}d_{i,m^{\prime}\sigma}^{\dagger}d_{i,m^{\prime}\sigma}$$ (5)
$$+j^{\prime}\sum_{i,m\neq m^{\prime}}d_{i,m\uparrow}^{\dagger}d_{i,m^{\prime}\uparrow}d_{i,m\downarrow}^{\dagger}d_{i,m^{\prime}\downarrow}$$ (6)
$$+j\sum_{i,m\neq m^{\prime}}d_{i,m\uparrow}^{\dagger}d_{i,m^{\prime}\uparrow}d_{i,m^{\prime}\downarrow}^{\dagger}d_{i,m\downarrow},$$ (7)
$$H_{pd}=\sum_{k,m,l,\sigma}V_{k,lm}^{pd}\,d_{k,m\sigma}^{\dagger}p_{k,l\sigma}+\mathrm{H.c.}$$ (8)
$d_{i,m\sigma}^{\dagger}$ are creation operators for the 3$d$ electrons at site $i$. $d_{k,m\sigma}^{\dagger}$ and $p_{k,l\sigma}^{\dagger}$ are creation operators for Bloch electrons with wave vector $k$, constructed from the $m$-th component of the 3$d$ orbitals and from the $l$-th component of the 2$p$ orbitals, respectively. The intra-atomic Coulomb interaction between the 3$d$ electrons is expressed using the Kanamori parameters $u$, $u^{\prime}$, $j$ and $j^{\prime}$ . The transfer integrals between the Mn 3$d$ and oxygen 2$p$ orbitals, $V_{k,lm}^{pd}$, are given in terms of the Slater-Koster parameters $(pd\sigma)$ and $(pd\pi)$. The transfer integrals between the oxygen 2$p$ orbitals, $V_{k,ll^{\prime}}^{pp}$, are expressed by $(pp\sigma)$ and $(pp\pi)$. Here, the ratio $(pd\sigma)/(pd\pi)$ is $-2.16$. $(pp\sigma)$ and $(pp\pi)$ are fixed at $-0.60$ and $0.15$, respectively, for the undistorted lattice. When the lattice is distorted, the transfer integrals are scaled using Harrison's law . The charge-transfer energy $\Delta$ is defined by $\epsilon_d^0-\epsilon_p+nU$, where $\epsilon_d^0$ and $\epsilon_p$ are the energies of the bare 3$d$ and 2$p$ orbitals and $U$ ($=u-20j/9$) is the multiplet-averaged $d$-$d$ Coulomb interaction. $\Delta$, $U$, and $(pd\sigma)$ for LaMnO$_3$ are 4.0, 5.5, and $-1.8$ eV, respectively, which are taken from the photoemission study . In Fig. 1, the energies of the spin, charge and orbitally ordered states are compared with those of the ferromagnetic and $A$-type antiferromagnetic states, which are plotted as functions of the hole concentration $x$. At $x=1/8$, several charge ordered states exist as stable solutions. Schematic drawings of the ferromagnetic charge-ordered states are shown in Fig. 2. The unit cell consists of four layers, $z$ = 0, 1/4, 1/2, and 3/4, along the $c$-axis. Each layer has four different Mn sites.
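Before turning to the ordered solutions in detail, it may help to illustrate the method. The full multi-band $d$-$p$ Hartree-Fock problem is too large for a short example, but the self-consistency loop at its core has the same shape as in this toy single-band Hubbard chain with the mean-field decoupling $u\,n_{\uparrow}n_{\downarrow}\to u(\langle n_\uparrow\rangle n_\downarrow+n_\uparrow\langle n_\downarrow\rangle-\langle n_\uparrow\rangle\langle n_\downarrow\rangle)$. This is a sketch of ours; the chain length, $t$, $u$ and the filling are illustrative and unrelated to the parameters quoted above.

```python
import numpy as np

L, t, u, n_per_spin = 16, 1.0, 4.0, 8    # half-filled chain

def hf_pass(n_up, n_dn):
    """One Hartree-Fock pass: diagonalize each spin channel in the mean field."""
    out = {}
    for s, n_other in (("up", n_dn), ("dn", n_up)):
        H = np.zeros((L, L))
        for i in range(L):
            H[i, (i + 1) % L] = H[(i + 1) % L, i] = -t   # nearest-neighbor hopping
            H[i, i] = u*n_other[i]                        # mean-field potential
        _, v = np.linalg.eigh(H)
        out[s] = np.sum(v[:, :n_per_spin]**2, axis=1)     # fill the lowest levels
    return out["up"], out["dn"]

stag = np.cos(np.pi*np.arange(L))        # Neel-like seed to break the symmetry
n_up, n_dn = 0.5 + 0.4*stag, 0.5 - 0.4*stag
for _ in range(200):                     # iterate with simple linear mixing
    new_up, new_dn = hf_pass(n_up, n_dn)
    n_up, n_dn = 0.5*(n_up + new_up), 0.5*(n_dn + new_dn)

print("staggered moment:", np.mean((n_up - n_dn)*stag))   # nonzero: AF solution
```

In the calculations reported here, the same kind of loop runs over all Mn 3$d$ and O 2$p$ orbitals of the 16-Mn supercell, with different ordered seeds selecting the different self-consistent solutions.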
In the charge-ordered states, the hole-rich planes ($z$ = 0 and 1/2) and the hole-poor planes ($z$ = 1/4 and 3/4) are alternatingly stacked along the $c$-axis. In the hole-poor plane, either a $d_{3x^2-r^2}$-like or a $d_{3y^2-r^2}$-like orbital is mainly occupied at each site, giving the $3x^2-r^2/3y^2-r^2$-type orbital ordering. This orbital ordering in the hole-poor plane is essentially the same as that in LaMnO$_3$, although it is weak compared to that in LaMnO$_3$. While the orbital orderings at $z$ = 0 and at $z$ = 3/4 are out of phase for CO1 [see Fig. 2(a)], those are in phase for CO2, as shown in Fig. 2(b). Probably, the orbital ordering in the hole-poor plane is related to the observation by Endoh et al. . In the hole-rich plane, for CO1, the Mn$^{4+}$-like sites form a kind of stripe, as shown in Fig. 2(a), and the extra holes sit at the oxygen sites between the Mn$^{4+}$-like sites. For the CO2 state, the Mn$^{4+}$-like sites form a square lattice and the extra holes are distributed over the oxygen sites surrounding the Mn$^{4+}$-like site. The orbital ordering at the Mn$^{3+}$-like sites in the hole-rich plane is very weak and depends on the orbital ordering in the hole-poor plane. These two ferromagnetic states with the charge and orbital modulations are degenerate in energy within the accuracy of the present calculation and are the lowest in energy among the charge-ordered states obtained in the present model calculations. Since LaMnO$_3$ is a charge-transfer-type Mott insulator, the Mn$^{4+}$-like site has approximately four electrons. For example, in the CO1 state, the number of 3$d$ electrons at the Mn$^{3+}$-like sites in the hole-poor plane is $\sim$ 4.08, and those at the Mn$^{3+}$-like and the Mn$^{4+}$-like sites in the hole-rich plane are $\sim$ 4.03 and $\sim$ 4.01, respectively. This calculated result can explain why resonant x-ray scattering cannot distinguish between the Mn$^{3+}$-like and Mn$^{4+}$-like sites . These ferromagnetic states with the charge modulations are still metallic without lattice distortion. We have studied the effect of the lattice distortion which is shown in the right column of Fig. 3. The hole-poor plane ($z$ = 1/4) can couple with the Jahn-Teller distortion of LaMnO$_3$. On the other hand, in the hole-rich plane ($z$ = 0) for CO1, the shift of the oxygen ions sitting between the Mn$^{4+}$-like sites causes a doubling along the stripe of the Mn$^{4+}$-like sites and is expected to open a band gap. Actually, we found that a small shift of these oxygens of less than 0.1 Å (Fig. 3), which gives the superstructure along the $c$-axis and is consistent with the experiment by Yamada et al. , can open a band gap for the two charge-modulated ferromagnetic states. The lattice distortion shown in Fig. 3 is enough to open a band gap for the CO2 state, although a breathing-type distortion at the Mn$^{4+}$-like sites is expected to be more effective. These ferromagnetic states with the lattice distortions are strong candidates for the ferromagnetic and insulating state found in La$_{0.875}$Sr$_{0.125}$MnO$_3$. A sketch of the ferrimagnetic state accompanied by the charge and orbital ordering is shown in Fig. 4(a). In this arrangement, each hole-rich Mn$^{4+}$-like site is surrounded by six hole-poor Mn$^{3+}$ sites. This can be viewed as a lattice of the orbital polaron which is displayed in Fig.
5(a). The superexchange interaction between the Mn<sup>3+</sup>-like and Mn<sup>4+</sup>-like sites is ferromagnetic, and that between the Mn<sup>3+</sup>-like sites is antiferromagnetic in this ferrimagnetic state. The orbital polaron might be related to the $`12\AA `$ magnetic clusters observed in La<sub>0.67</sub>Ca<sub>0.33</sub>MnO<sub>3</sub>. It is expected that the existence of this orbital polaron makes the magnetic interaction along the $`c`$-axis ferromagnetic and gives three-dimensional ferromagnetic coupling. Since the Mn<sup>3+</sup>-like sites should be accompanied by the Jahn-Teller distortion, the orbital polaron might also be related to the Jahn-Teller polaron observed in La<sub>1-x</sub>Ca<sub>x</sub>MnO<sub>3</sub> with $`x<0.5`$. On the other hand, in La<sub>1-x</sub>Ca<sub>x</sub>MnO<sub>3</sub> with $`x>0.5`$, the number of Mn<sup>4+</sup>-like sites is larger than that of Mn<sup>3+</sup>-like sites. In such a case, the orbital polaron in which a Mn<sup>4+</sup>-like site is surrounded by four Mn<sup>3+</sup>-like sites \[see Fig. 5(b)\] is expected to be relevant. This orbital polaron gives two-dimensional ferromagnetic coupling. Indeed, it has been reported that Nd<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub> with $`x>0.5`$ has the $`A`$-type antiferromagnetic state, in which the ferromagnetic $`ab`$-planes are antiferromagnetically coupled along the $`c`$-axis. This orbital polaron might be relevant in the $`A`$-type antiferromagnetic state of Nd<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub>. At $`x`$ = 1/4, the homogeneous ferromagnetic state is very stable and no ferromagnetic charge-modulated state was obtained. However, it is found that an antiferromagnetic state with charge and orbital ordering exists as a stable solution, which is schematically shown in Fig. 4(b). This charge-ordered state can be viewed as a lattice of orbital polarons coupled antiferromagnetically. Although this state is higher in energy than the homogeneous ferromagnetic state, as shown in Fig. 1, a lattice distortion of breathing type may stabilize the charge-ordered state relative to the ferromagnetic state. There are two possible ways to describe the charge-ordered states: a polaron-lattice language and a charge-density-wave language. When the electron-lattice coupling term dominates the other terms, the polaron-lattice picture is appropriate. The ferrimagnetic charge-ordered state obtained above can be viewed as an orbital polaron lattice and is expected to couple strongly with lattice distortions such as the Jahn-Teller distortion. On the other hand, the charge-density-wave language becomes more appropriate when the kinetic energy term is relevant. It is natural to speculate that the electron-lattice coupling is weak in the ferromagnetic charge-modulated state compared to the orbital polaron lattice. The present calculation, which neglects lattice distortions, indicates that a kind of Umklapp process can give the modulation along the $`c`$-axis and that the charge-density-wave picture might be relevant in La<sub>0.875</sub>Sr<sub>0.125</sub>MnO<sub>3</sub>. Since the number of 3$`d`$ electrons at the Mn<sup>4+</sup>-like site is almost the same as that at the Mn<sup>3+</sup>-like site, the electron-lattice coupling is expected to be small. Indeed, the observed lattice modulation along the $`c`$-axis is very small. This is also consistent with the experimental observation that resonant x-ray scattering fails to distinguish between Mn<sup>3+</sup>-like and Mn<sup>4+</sup>-like sites.
The present calculation fails to give a finite band gap without an extra lattice distortion, suggesting that even a weak electron-lattice coupling is still important for opening the band gap. Here, it should be noted that perfect nesting is not required in the present system, because the Coulomb interaction between the $`3d`$ electrons is very large and comparable to the kinetic energy term; in this sense, the magnetic coupling between two Mn sites can be viewed as a kind of superexchange coupling. In conclusion, we have studied possible charge- and orbitally-ordered states in underdoped La<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub> using model HF calculations. It has been found that the ferromagnetic state with the charge ordering along the $`c`$-axis, which is consistent with the experiment by Yamada et al., is stable at $`x`$=1/8. It has been argued that the charge and orbital ordering in the ferrimagnetic state can be interpreted as an orbital polaron lattice. In order to clarify the interplay between the charge/orbital ordering and the lattice distortion, the underdoped manganites should be studied in the future using a more realistic model which includes the electron-lattice interaction. The authors would like to thank J. L. García-Muñoz for useful discussions. This work was supported by the Netherlands Organization for Fundamental Research of Matter (FOM) and by the European Commission TMR network on Oxide Spin Electronics (OXSEN).
no-problem/9912/cond-mat9912457.html
ar5iv
text
# A simple model for the metal-insulator transition in a two-dimensional electron gas ## Abstract We introduce an elementary model for the electrostatic self-consistent potential in a two-dimensional electron gas. By considering the perpendicular degree of freedom arising from the electron tunneling out of the system plane, we predict a threshold carrier density above which this effect is relevant. The predicted value agrees remarkably well with the onset for the insulator to quasi-metallic transition recently observed in several experiments in SiO<sub>2</sub>–Si and AlGaAs–GaAs heterojunctions. The Anderson transition in disordered solids still raises great interest among researchers. While the metal-insulator transition (MIT) is well established in three-dimensional (3D) disordered systems, the situation is not completely understood in one (1D) and two (2D) dimensions. Several decades ago, a number of papers yielded the general belief that in those systems all eigenstates are exponentially localized and a MIT no longer exists. Although this seems to be correct in most low-dimensional systems, there are known exceptions to this rule. Thus, it has been found that extended states may appear in 1D random systems upon introducing either short-range or long-range correlations in the disorder. These purely theoretical considerations were put forward to explain the high conductivity of doped polyaniline as well as the transport properties of random semiconductor superlattices. The Anderson transition induced by diagonal disorder at the band center in finite 2D systems was already studied by Yoshino and Okazaki. The relevance of these midgap states for the metallic conductance of 2D systems has been discussed in detail by Licciardello and Thouless. These authors suggested that between the mobility edges there is a tendency for the conductance to decrease slowly as the sample size is increased, and that there may be no absolute minimum metallic conductance. However, recent experiments have provided clear evidence of a MIT-like transition in high-quality 2D electron and hole systems. After the pioneering work by Kravchenko et al. on electrons in Si and more recently by Hanein et al. on holes in GaAs, it has become clear that 2D gases undergo a crossover from an insulating regime at low density to a metalliclike behavior at high density, where the quasi-metallic phase is characterized by a strong decrease of the resistivity as the temperature decreases. Discrepancies between the standard one-parameter scaling theory, which establishes that the 2D gas should be insulating, and the above-mentioned experiments are usually attributed to a strong electron-electron interaction. A number of theoretical models have been proposed to explain the observed transition in 2D systems, including new liquid phases, a Wigner glass, a spin-orbit-induced transition, decoherence due to quantum fluctuations, a superconducting phase and an anyon superconducting model. The basic ingredient of these models is the assumption that the system is purely 2D and, consequently, new phenomena are to be considered to explain the observed transition. In this work we take a different route by considering the electronic motion in the perpendicular direction. In so doing, we find the conditions under which this degree of freedom could be relevant. Surprisingly, the critical carrier density leading to perpendicular motion is close to that at which the transition is observed in experiments.
Our main aim is then to point out the importance of this perpendicular degree of freedom, which should be taken into account in more elaborate models. Our approach is based on the competition between the confining potential, appearing at the heterojunction even at zero gate voltage, and the repulsive potential arising from the excess carriers at nonzero gate voltage. To proceed, let us start by considering the heterojunction at zero gate voltage. Since we are only interested in the basic phenomena without entering into details, we make use of a simple variational Hartree calculation presented in standard textbooks. This will make our reasoning clearer while keeping a good qualitative description of the physics involved. The envelope function of the lowest subband of a 2D electron gas is reasonably well accounted for by the Fang-Howard trial function: $$\chi (z)=\sqrt{\frac{b^3}{2}}z\mathrm{exp}\left(-\frac{1}{2}bz\right),$$ (1) where $`z`$ is the coordinate along the growth direction (perpendicular to the heterojunction) and $`b`$ is determined by minimizing the kinetic energy plus the Hartree energy per electron (see Ref. for details). In terms of the effective Bohr radius, $`a^{*}`$, and the 2D electron density, $`n_s^\mathrm{o}`$, the variational parameter is roughly given by $$b=\left(\frac{16\pi n_s^\mathrm{o}}{a^{*}}\right)^{1/3},$$ (2) while the maximum value of the Hartree potential is $$V_0=3\frac{e^2n_s^\mathrm{o}}{ϵb},$$ (3) $`ϵ`$ being the dielectric constant of the medium. The Hartree potential increases smoothly from zero up to $`V_0`$ on increasing the distance $`z>0`$ from the heterojunction. Thus, at zero gate voltage the electrons lie in the lowest subband and are confined, forming a 2D gas. The lowest subband energy is $$\epsilon _1=2.5\frac{e^2n_s^\mathrm{o}}{ϵb}.$$ (4) The situation may be different when the gate voltage induces an excess carrier density $`\mathrm{\Delta }n_s`$, as we will show below. This excess charge is not compensated by any other charge close to the heterojunction, thus leading to a local negative charge density. As a crude first approximation, we assume that this excess charge density is confined to a plane close to the heterojunction. By solving the Poisson equation, one can obtain the potential energy due to this charged plane, namely $`eFz`$ for $`z>0`$, where $$F=\frac{e\mathrm{\Delta }n_s}{2ϵ}.$$ (5) Due to this repulsive potential, electrons may tunnel through the barrier formed by the Hartree potential plus $`eFz`$ and escape. To account for the tunneling process, we replace the actual potential by that depicted in Fig. 1. This replacement is not essential in the calculations, since both the confining Hartree potential and the repulsive potential due to the charged plane are known, but it allows us to obtain closed analytical expressions for the transmission coefficient. Since the Fermi energy is usually very small in this system, we can also safely neglect it when calculating the transmission coefficient and take $`E=\epsilon _1`$, as indicated in Fig. 1. Thus, the classical turning points are $`z_1`$ and $`z_2=z_1+(V_0-\epsilon _1)/eF`$, shown in the figure.
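For orientation, Eqs. (2)-(4) are easy to evaluate numerically; the following minimal Python sketch does so in Gaussian (CGS) units. The GaAs-like material parameters in the example (effective mass $`0.067m`$, dielectric constant 12.9) are illustrative placeholders, not values taken from any particular experiment.

```python
import math

# Physical constants in Gaussian/CGS units
E_CHARGE = 4.803e-10          # electron charge [esu]
M_E = 9.109e-28               # free-electron mass [g]
HBAR = 1.055e-27              # reduced Planck constant [erg s]
ERG_TO_MEV = 1.0 / 1.602e-15  # 1 meV = 1.602e-15 erg

def fang_howard(n_s, m_eff, eps):
    """Evaluate Eqs. (2)-(4): variational parameter b, barrier height
    V_0 and lowest subband energy eps_1 for a 2D density n_s [cm^-2].

    m_eff -- effective mass in units of the free-electron mass
    eps   -- static dielectric constant of the host material
    """
    # Effective Bohr radius a* = eps hbar^2 / (m* e^2)
    a_star = eps * HBAR**2 / (m_eff * M_E * E_CHARGE**2)
    b = (16.0 * math.pi * n_s / a_star) ** (1.0 / 3.0)  # Eq. (2) [cm^-1]
    v0 = 3.0 * E_CHARGE**2 * n_s / (eps * b)            # Eq. (3) [erg]
    eps1 = 2.5 * E_CHARGE**2 * n_s / (eps * b)          # Eq. (4) [erg]
    return b, v0 * ERG_TO_MEV, eps1 * ERG_TO_MEV

# Illustrative GaAs-like parameters (assumed): m* = 0.067 m, eps = 12.9
b, v0, e1 = fang_howard(n_s=1.0e11, m_eff=0.067, eps=12.9)
print(f"b = {b:.2e} cm^-1, V0 = {v0:.2f} meV, eps1 = {e1:.2f} meV")
```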
From the WKB semi-classical approximation, the transmission coefficient is given by $$T=\mathrm{exp}\left[-2\sqrt{\frac{2m^{*}}{\hbar ^2}}\int _{z_1}^{z_2}dz\sqrt{V_0-\epsilon _1-eF(z-z_1)}\right]$$ (6) $$=\mathrm{exp}\left[-\frac{4}{3}\sqrt{\frac{2m^{*}}{\hbar ^2}}\frac{(V_0-\epsilon _1)^{3/2}}{eF}\right].$$ (7) The onset for the crossover from 2D to 3D behavior can be determined from the condition that the transmission probability is large. We then assume that this crossover appears when the exponent in (7) is of the order of unity in absolute value. This occurs for a critical density $`n_s^c=\mathrm{\Delta }n_s+n_s^\mathrm{o}`$ such that $$\frac{4}{3}\sqrt{\frac{2m^{*}}{\hbar ^2}}\frac{(V_0-\epsilon _1)^{3/2}}{eF}\simeq 1.$$ (8) Since $`F`$ is a function of $`\mathrm{\Delta }n_s`$, we can readily determine from the above condition the value of the critical density above which the probability of tunneling out of the well is large. Using (3), (4) and (5) we finally obtain $`n_s^c\simeq (5/3)n_s^\mathrm{o}`$. Since the carrier density at zero gate voltage is related to the Fermi energy $`E_F^\mathrm{o}`$ through the 2D density of states, $`n_s^\mathrm{o}=m^{*}E_F^\mathrm{o}/\pi \hbar ^2`$, we finally arrive at the condition $$n_s^c\simeq 7E_F^\mathrm{o}m^{*}\times 10^{11}\mathrm{cm}^{2},$$ (9) where the Fermi energy is measured in meV and the effective mass in units of the free-electron mass. Now let us compare our qualitative prediction (9) with the experimental values. A 2D electron gas in Si has been studied by Kravchenko et al., who observed the transition at an electron density of $`0.85\times 10^{11}`$cm<sup>-2</sup>. The Fermi energy was $`E_F=0.6`$meV and the effective mass $`m^{*}=0.2m`$. Inserting both values in (9) we obtain a critical density $`n_s^c\simeq 0.84\times 10^{11}`$cm<sup>-2</sup>. On the other hand, a 2D hole gas in GaAs was demonstrated by Hanein et al. to undergo a MIT at a hole density of $`0.15\times 10^{11}`$cm<sup>-2</sup>. Taking the values $`E_F=0.04`$meV and $`m^{*}=0.4m`$ we get from (9) that $`n_s^c\simeq 0.11\times 10^{11}`$cm<sup>-2</sup>. The agreement should be regarded as surprisingly good, in view of the crude approximation we made to obtain it. In conclusion, our model points out the relevance, under some circumstances, of the perpendicular degrees of freedom in the so-called 2D electron gases. As soon as the electron gas becomes a non-perfect 2D system, scaling theories predict the occurrence of a MIT like that recently observed. The authors warmly thank I. Gómez for his useful comments and criticisms, and E. Diez, C. Kanyinda-Malu and M. Hilke for helpful conversations. FDA was supported by CAM under Project 07N/0034/98. JCF was supported by a CICOPS fellowship of the University of Pavia.
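As a footnote to the comparison with experiment above, Eq. (9) can be checked in a few lines; a trivial sketch, using only the numbers quoted in the text:

```python
def critical_density(e_fermi_mev, m_eff):
    """Critical carrier density from Eq. (9), in units of 10^11 cm^-2.

    e_fermi_mev -- Fermi energy at zero gate voltage [meV]
    m_eff       -- effective mass in units of the free-electron mass
    """
    return 7.0 * e_fermi_mev * m_eff

# Si electrons (Kravchenko et al.): E_F = 0.6 meV, m* = 0.2 m
print(critical_density(0.6, 0.2))   # ~0.84, observed ~0.85
# GaAs holes (Hanein et al.): E_F = 0.04 meV, m* = 0.4 m
print(critical_density(0.04, 0.4))  # ~0.11, observed ~0.15
```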
no-problem/9912/astro-ph9912336.html
ar5iv
text
# The Lick Observatory Supernova Search ## Introduction Located at Lick Observatory atop Mount Hamilton east of San Jose, California, the 0.75-m Katzman Automatic Imaging Telescope (KAIT) is a robotic telescope dedicated to the Lick Observatory Supernova Search (LOSS) and the monitoring of variable celestial objects. It is equipped with a CCD camera and an automatic autoguider (that is, the autoguider is able to find its own guide stars). KAIT is the third robotic telescope in the Berkeley Automatic Imaging Telescope (BAIT) program. The predecessors to KAIT were two telescopes developed at the Leuschner Observatory, which is located about 10 miles east of the campus of the University of California, Berkeley. KAIT inherits the operational concept and the majority of the software from its two predecessors. More thorough descriptions of the BAIT system can be found in references 1–4. LOSS discovered its first supernova in 1997 (SN 1997bs in NGC 3627; Treffers et al. 1997). Its performance improved dramatically in 1998, when 19 supernovae (SNe) were discovered. In 1999, 35 SNe had been discovered by mid-December. Multicolor photometry of SNe is an important scientific goal of KAIT. Because of the early discoveries of most of the LOSS SNe, many good light curves have been obtained. We report our hardware and software setups for LOSS in Section 2, the SN search in Section 3, and the discoveries and follow-up observations in Section 4. ## The Hardware and Software of LOSS KAIT has a 30-inch diameter primary with a Ritchey-Chrétien mirror set. The focal ratio is $`f/8.2`$, which results in a plate scale of $`33.2^{\prime \prime }`$ mm<sup>-1</sup> at the focal plane. The telescope has a very compact design; it is lightweight and slews fast. An off-axis guider designed by one of us (RRT) enables the telescope to obtain long exposures. The CCD camera is an Apogee AP7 with a SITe 512$`\times `$512 pixel back-illuminated chip. It is thermoelectrically cooled to about $`60^{\circ }`$C below the ambient temperature. The quantum efficiency (QE) is good (peak 60%) and flat from 3000 Å to 8000 Å. The field of view is $`6.7^{\prime }\times 6.7^{\prime }`$ with a scale of $`0.8^{\prime \prime }`$ pixel<sup>-1</sup>. Observations done by KAIT are fully robotic. All hardware (telescope, filters, autoguider, CCD camera, slit, dome, weather station, etc.) is automatically controlled by the software (see Richmond, Treffers, and Filippenko 1993 for details). Dark, bias, and twilight flatfield observations are also done automatically. A focusing routine finds a good focus for the telescope in twilight, then runs every 90 minutes during the night. The images are automatically transferred to the U.C. Berkeley campus to be processed. The appropriate template image is subtracted from each galaxy image, and new objects are detected in the resulting images. The most promising candidates are reobserved during the same night, while others require human evaluation before rescheduling. ## The Supernova Search The LOSS galaxy sample includes about 5,000 nearby galaxies. Mosaic images are taken for some large, nearby galaxies. An automatic scheduler selects targets to be observed during the night according to their observation history. Follow-up observations of SNe or routine monitoring of some other objects (active galactic nuclei, variable stars, etc.) are also scheduled at the same time. We have optimized the system in every possible way to increase the observation efficiency.
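As an aside, the optics numbers quoted above are mutually consistent, as a two-line check shows. In the sketch below, the 24 μm pixel size assumed for the SITe chip is an inference, chosen because it reproduces the quoted $`0.8^{\prime \prime }`$ pixel<sup>-1</sup> scale:

```python
# Consistency check (illustrative) of the quoted KAIT optics numbers.
PLATE_SCALE = 33.2         # arcsec per mm at the f/8.2 focal plane
N_PIX, PIX_UM = 512, 24.0  # chip format; 24-micron pixel size assumed

pixel_scale = PLATE_SCALE * PIX_UM / 1000.0  # -> ~0.80 arcsec/pixel
fov_arcmin = N_PIX * pixel_scale / 60.0      # -> ~6.8 arcmin on a side
print(f"{pixel_scale:.2f} arcsec/pixel, {fov_arcmin:.1f} arcmin FOV")
```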
The search images are taken through a hole in the filter wheel (i.e., no filter is used). This greatly increases the observation efficiency compared to observations through an $`R`$-band filter. The exposure time for the search images is only 25 seconds, but because of the high QE of the CCD camera we still reach a limiting magnitude of $`\sim 19`$ (sometimes deeper). The order of galaxies to be observed is optimized so as to minimize the accumulated movement of the telescope and dome during the night. Currently the observing efficiency is about 75 images per hour. KAIT can obtain more than 1,000 images during a winter night. Because of our high observing efficiency, all the sample galaxies are observed every 3 to 5 days in periods of good weather. This ensures that most of the LOSS SNe are discovered considerably before their maxima. Follow-up observations of SNe usually start the night after their discovery. A detailed logging system has also been designed to keep track of the observation history of every galaxy, which is very useful for statistical studies (e.g., SN rates). ## The LOSS discoveries in 1998 and 1999 In 1998, LOSS discovered 19 supernovae, 4 novae, 2 dwarf novae and 1 comet. In 1999 (through mid-December), it discovered 35 supernovae, 7 novae, 2 dwarf novae and 1 comet. For a detailed list of the LOSS discoveries, please visit the LOSS Web page at http://astron.berkeley.edu/~bait/kait.html. Multicolor photometric observations of SNe are always emphasized in LOSS. Our goal is to build up a multicolor database for nearby SNe. So far, light curves have been obtained for 11 SNe in 1998 and 12 SNe in 1999. Examples of the LOSS discoveries and their light curves are presented in Figure 1. Our supernova research at UC Berkeley is supported by NSF grant AST-9417213 and NASA grant GO-7434.
no-problem/9912/cond-mat9912437.html
ar5iv
text
# Singularity spectra of rough growing surfaces from wavelet analysis ## 1 Introduction The great technological importance of epitaxial crystal growth has inspired much theoretical research over the past decade on the kinetic roughening of surfaces during growth. The investigation of this effect, which is undesirable in practical applications, promises deep insight into statistical physics far from thermal equilibrium, see e.g. for an overview. We focus on a full-diffusion Monte-Carlo model of homoepitaxial growth of a hypothetical material with simple cubic lattice structure under solid-on-solid conditions, i.e. the effects of overhangs and displacements are neglected. The crystal can then be described by a two-dimensional array of integers which denote the height $`f(\stackrel{}{x})`$ of the surface. On each site, new particles are deposited with a rate $`r_a`$. Particles on the surface hop to nearest-neighbour sites with Arrhenius rates $`\nu _0\mathrm{exp}(-(E_b+nE_n)/(k_BT))`$, where $`E_b`$ and $`E_n`$ are the binding energies of a particle to the substrate and to its $`n`$ nearest neighbours. $`\nu _0`$ is the attempt frequency, and $`k_BT`$ has its usual meaning. In contrast to earlier investigations of similar models, we permit the desorption of particles from the surface with rates $`\nu _0\mathrm{exp}(-(E_d+nE_n)/(k_BT))`$, where $`E_d>E_b`$. The aim of this publication is twofold: We will first discuss the advantages of the wavelet analysis compared to the structure function (SF) approach, which has to date been the sole method used in the investigation of multiaffine surfaces. Then, we will apply this formalism to investigate the influence of desorption on kinetic roughening. We conclude with some remarks on the relevance of universality classes for our results. ## 2 Scaling concepts The standard approach of dynamic scaling assumes that the statistical properties of a growing surface before saturation remain invariant under a simultaneous transformation of spatial extension $`\stackrel{}{x}`$, height $`f(\stackrel{}{x})`$ and time $`t`$, $$\stackrel{}{x}\rightarrow b\stackrel{}{x}=\stackrel{}{x}^{\prime };\quad f\rightarrow b^\alpha f=f^{\prime };\quad t\rightarrow b^zt=t^{\prime };$$ (1) where $`b`$ is an arbitrary positive constant. This implies that a part of the surface smaller than the correlation length $`\xi (t)\propto t^{1/z}`$ can be regarded as self-affine with Hurst exponent $`\alpha `$. A popular method of measuring $`\alpha `$ uses height-height correlation functions of (theoretically) arbitrary order $`q`$: $$G(q,\stackrel{}{l},t):=\left\langle \left|f(\stackrel{}{x},t)-f(\stackrel{}{x}+\stackrel{}{l},t)\right|^q\right\rangle _\stackrel{}{x}\propto l^{q\alpha }g(l/\xi (t)),$$ (2) where $`g(x)\rightarrow \text{const.}`$ for $`x\rightarrow 0`$ and $`g(x)\propto x^{-q\alpha }`$ for $`x\rightarrow \infty `$. In practice, $`q=2`$ is the most common choice. In principle, there are two different ways to measure $`\alpha `$: The local approach determines $`\alpha `$ from the initial slope of $`\mathrm{ln}(G(q,\stackrel{}{l},t))`$ versus $`\mathrm{ln}(l)`$ for small $`l`$. The global approach analyzes the dependence of the surface width $`w=\sqrt{\left\langle (f(\stackrel{}{x},t)-\left\langle f(\stackrel{}{x},t)\right\rangle )^2\right\rangle _\stackrel{}{x}}`$ in the saturation regime on the system size $`N`$: $`w_{sat}(N)\propto N^{\alpha _g}`$. Before saturation, the surface width increases like $`w\propto t^\beta `$, where $`\beta =\alpha /z`$.
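To make Eq. (2) and the local measurement of $`\alpha `$ concrete, the following minimal numpy sketch estimates $`G(q,l,t)`$ along one lattice axis of a height array and extracts the exponent from its initial slope; periodic boundary conditions are assumed, as in the model, and the random array at the end is merely a placeholder for a simulated surface.

```python
import numpy as np

def structure_function(f, q=2, max_l=None):
    """Height-height correlation G(q, l) = <|f(x) - f(x + l)|^q>,
    estimated along one axis of the height array f with periodic
    boundary conditions (cf. Eq. (2))."""
    max_l = max_l or f.shape[0] // 2
    g = np.empty(max_l)
    for l in range(1, max_l + 1):
        g[l - 1] = np.mean(np.abs(f - np.roll(f, l, axis=0)) ** q)
    return g

def local_alpha(f, q=2, fit_range=(1, 8)):
    """Exponent alpha(q) from the initial slope of log G(q, l):
    G ~ l^(q alpha) for l << xi(t), hence alpha = slope / q."""
    lmin, lmax = fit_range
    ls = np.arange(lmin, lmax + 1)
    g = structure_function(f, q=q, max_l=lmax)[lmin - 1:]
    slope = np.polyfit(np.log(ls), np.log(g), 1)[0]
    return slope / q

# A multiaffine surface yields a q-dependent hierarchy alpha(q):
f = np.random.rand(256, 256)  # placeholder for a simulated surface
print([round(local_alpha(f, q), 3) for q in (1, 2, 3, 4)])
```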
An alternative which avoids the simulation of different system sizes uses the complete functional dependence of equation 2: $`\alpha _g`$ and $`z`$ are chosen such that the curves of $`G(2,\stackrel{}{l},t)/l^{2\alpha _g}`$ versus $`l/t^{1/z}`$ collapse onto a unique function $`g`$ within a large range of $`t`$ and $`l`$. However, a careful analysis of simulation data has shown that several models of epitaxial growth show significant deviations from this simple picture. First, one obtains different values of $`\alpha `$ from the local approach than from the global one, a phenomenon which is called anomalous scaling. Second, one often finds multiscaling: height-height correlation functions of different order yield a hierarchy of $`q`$-dependent exponents $`\alpha (q)`$ when determined from the initial power-law behaviour of $`G(q,\stackrel{}{l},t)`$. These observations can be interpreted within the mathematical framework of multifractality: The Hölder exponent $`h(\stackrel{}{x_0})`$ of a function $`f`$ at $`\stackrel{}{x}_0`$ is defined as the largest exponent such that there exist a polynomial of order $`n<h(\stackrel{}{x}_0)`$ and a constant $`C`$ which yield $`|f(\stackrel{}{x})-P_n(\stackrel{}{x}-\stackrel{}{x}_0)|\le C|\stackrel{}{x}-\stackrel{}{x}_0|^{h(\stackrel{}{x}_0)}`$ in the neighbourhood of $`\stackrel{}{x}_0`$. The Hölder exponent is a local counterpart of the Hurst exponent: a self-affine function with Hurst exponent $`\alpha `$ has $`h(\stackrel{}{x})=\alpha `$ everywhere. However, in the case of a multiaffine function, different points $`\stackrel{}{x}`$ might be characterized by different Hölder exponents. This general case is characterized by the singularity spectrum $`D(h)`$, which denotes the Hausdorff dimension of the set of points where $`h`$ is the Hölder exponent of $`f`$. ## 3 The wavelet approach to multifractality There is a deep analogy between multifractality and thermodynamics, where the scaling exponents play the role of energy, the singularity spectrum corresponds to entropy, and $`q`$ plays the role of inverse temperature. So, theoretically, $`D(h)`$ might be calculated via a Legendre transform of $`\alpha (q)`$: $`D(h)=\text{min}_q(qh-q\alpha (q)+2)`$, a method which has been called the structure function (SF) approach. However, its practical application raises fundamental difficulties: First, to obtain the complete singularity spectrum, one needs $`\alpha (q)`$ for positive and negative $`q`$. But as $`|f(\stackrel{}{x},t)-f(\stackrel{}{x}+\stackrel{}{l},t)|`$ might become zero, $`G(q,\stackrel{}{l},t)`$ is in principle undefined for $`q<0`$. Therefore, only the left, ascending part of $`D(h)`$ is accessible to this method. Additionally, the results of the SF method can easily be corrupted by polynomial trends in $`f(\stackrel{}{x})`$. It might be due to these difficulties that, to our knowledge, no attempt to determine the singularity spectrum of growing surfaces from $`\alpha (q)`$ has ever been made. Although it has been argued that the $`\alpha (q)`$ collapse onto a single $`\alpha `$ in the limit $`t\rightarrow \infty `$, which characterizes the asymptotic universality class of the model, we are convinced that deeper insight into fractal growth on experimentally relevant finite timescales can be gained from a detailed knowledge of the $`D(h)`$ spectrum. To this end, we follow the strategy suggested by Arnéodo et al., which circumvents the problems of the SF approach and permits a reliable measurement of the complete $`D(h)`$.
Mathematically, the wavelet transform of a function $`f(\stackrel{}{x})`$ of two variables is defined as its convolution with the complex conjugate of the wavelet $`\psi `$, which is dilated with the scale $`a`$ and rotated by an angle $`\theta `$: $$T_\psi [f](\stackrel{}{b},\theta ,a)=C_\psi ^{-1/2}a^{-2}\int d^2x\,\psi ^{*}(a^{-1}𝐑_\theta (\stackrel{}{x}-\stackrel{}{b}))f(\stackrel{}{x}).$$ (3) Here $`𝐑_\theta `$ is the usual 2-dimensional rotation matrix, and $`C_\psi =(2\pi )^2\int d^2k|\stackrel{}{k}|^{-2}|\widehat{\psi }(\stackrel{}{k})|^2`$ is a normalization constant, whose existence requires square integrability of the wavelet $`\psi (\stackrel{}{x})`$ in Fourier space. Apart from this constraint, the wavelet can (in principle) be an arbitrary complex-valued function. Introducing the wavelet $`\psi _\delta (\stackrel{}{x})=\delta (\stackrel{}{x})-\delta (\stackrel{}{x}+\stackrel{}{n})`$, where $`\stackrel{}{n}`$ is an arbitrary unit vector, one easily obtains $$T_{\psi _\delta }[f](\stackrel{}{b},\theta ,a)=C_{\psi _\delta }^{-1/2}\left[f(\stackrel{}{b})-f(\stackrel{}{b}+a𝐑_\theta \stackrel{}{n})\right],\quad \int d^2b\left|T_{\psi _\delta }[f](\stackrel{}{b},\theta ,a)\right|^q\propto G(q,a𝐑_\theta \stackrel{}{n}).$$ (4) Consequently, a calculation of the moments of the wavelet transform of the surface yields the SF approach as a special case. To avoid its weaknesses, two major improvements are necessary: First, we use a class of wavelets with a greater number of vanishing moments $`n_\stackrel{}{\mathrm{\Psi }}`$ than $`\psi _\delta (\stackrel{}{x})`$. This increases the range of accessible Hölder exponents and improves the insensitivity to polynomial trends in $`f(\stackrel{}{x})`$. We introduce a two-component version of the wavelet transform $$\stackrel{}{T}_\stackrel{}{\mathrm{\Psi }}[f](\stackrel{}{b},a)=\frac{1}{a^2}\int d^2x\left(\begin{array}{c}\mathrm{\Psi }_1(a^{-1}(\stackrel{}{x}-\stackrel{}{b}))\\ \mathrm{\Psi }_2(a^{-1}(\stackrel{}{x}-\stackrel{}{b}))\end{array}\right)f(\stackrel{}{x}),$$ (5) where the analyzing wavelets $`\mathrm{\Psi }_1`$, $`\mathrm{\Psi }_2`$ are defined as partial derivatives of a radially symmetrical convolution function $`\mathrm{\Phi }(\stackrel{}{x})`$: $`\mathrm{\Psi }_1(\stackrel{}{x})=\partial \mathrm{\Phi }/\partial x`$, $`\mathrm{\Psi }_2(\stackrel{}{x})=\partial \mathrm{\Phi }/\partial y`$. Then $`\stackrel{}{T}_\stackrel{}{\mathrm{\Psi }}[f](\stackrel{}{b},a)`$ can be written as the gradient of $`f(\stackrel{}{x})`$, smoothed with a filter $`\mathrm{\Phi }`$, with respect to $`\stackrel{}{b}`$. This definition becomes a special case of equation 3 when multiplied with $`\stackrel{}{n}_\theta =(\mathrm{cos}\theta ,\mathrm{sin}\theta )^T`$, yet allows for an easier numerical computation (for simplicity, the irrelevant constant $`C_\mathrm{\Psi }`$ has been omitted). For example, $`\mathrm{\Phi }`$ can be a Gaussian, for which $`n_\stackrel{}{\mathrm{\Psi }}=1`$, or $`\mathrm{\Phi }_1(\stackrel{}{x})=(2-\stackrel{}{x}^2)\mathrm{exp}(-\stackrel{}{x}^2/2)`$, which has two vanishing moments. Second, the integration over $`\stackrel{}{b}`$ in equation 4 is undefined for $`q<0`$, since the wavelet coefficients might become zero. The basic idea is to replace it with a discrete summation over an appropriate partition of the wavelet transform which takes nonzero values only, but preserves the relevant information on the Hölder regularity of $`f(\stackrel{}{x})`$. In the following, we will give a brief outline of the rather involved algorithm and refer the reader to for more details and a mathematical proof.
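For the Gaussian choice of $`\mathrm{\Phi }`$, the two-component transform of Eq. (5) is simply the gradient of $`f`$ smoothed at scale $`a`$ and can be computed with standard derivative-of-Gaussian filters. A minimal sketch follows; the overall $`a`$-dependent normalization of Eq. (5), which matters when exponents are extracted, is glossed over here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def wavelet_transform(f, a):
    """Two-component wavelet transform for a Gaussian convolution
    function Phi: the gradient of f smoothed at scale a (in pixels),
    computed with first-derivative-of-Gaussian kernels (cf. Eq. (5)).
    Returns the components (T1, T2) and the modulus M = |T|."""
    t1 = gaussian_filter(f, sigma=a, order=(1, 0), mode='wrap')
    t2 = gaussian_filter(f, sigma=a, order=(0, 1), mode='wrap')
    return t1, t2, np.hypot(t1, t2)

# The WTMM analysis described below then locates, at each scale a,
# the local maxima of M along the direction of (T1, T2).
f = np.random.rand(128, 128)  # placeholder surface
t1, t2, m = wavelet_transform(f, a=4.0)
```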
The wavelet transform modulus maxima (WTMM) are defined as local maxima of the modulus $`M_\stackrel{}{\mathrm{\Psi }}[f](\stackrel{}{b},a):=|\stackrel{}{T}_\stackrel{}{\mathrm{\Psi }}[f](\stackrel{}{b},a)|`$ in the direction of $`\stackrel{}{T}_\stackrel{}{\mathrm{\Psi }}[f](\stackrel{}{b},a)`$ for fixed $`a`$. These WTMM lie on connected curves, which trace structures of size $`a`$ on the surface. The strength of each curve is characterized by the maximal value of $`M_\stackrel{}{\mathrm{\Psi }}[f](\stackrel{}{b},a)`$ along the line, the so-called wavelet transform modulus maxima maximum (WTMMM). While proceeding from large to small $`a`$, successively smaller structures are resolved. Connecting the WTMMM at different scales yields the set $`\mathcal{L}`$ of maxima lines $`l`$, which lead to the locations of the singularities of $`f(\stackrel{}{x})`$ in the limit $`a\rightarrow 0`$. The partition functions $$Z(q,a)=\underset{l\in \mathcal{L}(a)}{\sum }\left(\underset{(\stackrel{}{b},a^{\prime })\in l,\,a^{\prime }\le a}{\mathrm{sup}}M_\stackrel{}{\mathrm{\Psi }}[f](\stackrel{}{b},a^{\prime })\right)^q\propto a^{\tau (q)}\quad \text{for}a\rightarrow 0$$ (6) are defined on the subset $`\mathcal{L}(a)`$ of lines which cross the scale $`a`$. From the analogy between the multifractal formalism and thermodynamics, $`D(h)`$ is calculated via a Legendre transform of the exponents $`\tau (q)`$, which characterize the scaling behaviour of $`Z(q,a)`$ on small scales $`a`$: $`D(h)=\text{min}_q(qh-\tau (q))`$. Additionally, $`\tau (q)`$ itself has a physical meaning for some $`q`$: $`\tau (0)`$ is the fractal dimension of the set of points where $`h(\stackrel{}{x})<\infty `$, while the fractal dimension of the surface $`f(\stackrel{}{x})`$ itself equals $`\text{max}(2,1-\tau (1))`$. ## 4 Results In our simulations, we choose the parameters $`\nu _0=10^{12}/s`$, $`E_b=0.9\text{eV}`$ and $`E_n=0.25\text{eV}`$, and a temperature $`T=450\text{K}`$. To study the influence of desorption, we consider three models with different activation energies $`E_d`$: in model A desorption is forbidden, i.e. $`E_d=\infty `$. Models B and C have $`E_d=1.1\text{eV}`$ and $`E_d=1.0\text{eV}`$, respectively. We simulate the deposition of $`2\times 10^4`$ monolayers at a growth rate of one monolayer per second on a lattice of $`N\times N`$ unit cells using periodic boundary conditions, our standard value being $`N=512`$. To check for finite-size effects, we have also simulated $`N=256`$. In all presented results, averages over 6 independent simulation runs have been performed. Although we have used an optimized algorithm, these simulations consumed several weeks of CPU time on our workstation cluster. First, we have checked our results for artifacts resulting from properties of the analyzing wavelet rather than from the analyzed surface by using different convolution functions $`\mathrm{\Phi }_n`$: $`\mathrm{\Phi }_0`$ is the Gaussian function, and the $`\mathrm{\Phi }_n`$, $`n\ge 1`$, are products of Gaussians and polynomials which have been chosen such that the first $`n`$ moments vanish. Then, the analyzing wavelets have $`n_{\stackrel{}{\mathrm{\Psi }}_n}=n+1`$ vanishing moments. We find (figure 1a) that the $`\tau (q)`$-curve obtained with $`\mathrm{\Phi }_0`$ deviates significantly from those obtained with $`\mathrm{\Phi }_1`$, $`\mathrm{\Phi }_2`$ and $`\mathrm{\Phi }_3`$. The latter agree apart from small differences which are mainly due to the discrete sampling of the wavelet in the numerical implementation of the algorithm.
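Given $`\tau (q)`$ sampled on a grid of $`q`$ values, the singularity spectrum follows from a direct numerical Legendre transform; schematically, under the assumption that the sampled $`q`$ range brackets the relevant slopes:

```python
import numpy as np

def singularity_spectrum(q, tau, h_grid):
    """Numerical Legendre transform D(h) = min_q [q h - tau(q)],
    evaluated on a grid of candidate Hoelder exponents h."""
    return np.array([np.min(q * h - tau) for h in h_grid])

# Consistency check: a monofractal surface with Hurst exponent 0.4
# has tau(q) = 0.4 q - 2, so the transform gives D(h = 0.4) = 2.
q = np.linspace(-5.0, 5.0, 101)
tau = 0.4 * q - 2.0
h = np.linspace(0.0, 1.0, 51)
d = singularity_spectrum(q, tau, h)
print(d[np.argmin(np.abs(h - 0.4))])  # -> 2.0
```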
This is explained by the theoretical result that $`d\tau (q)/dq=n_\stackrel{}{\mathrm{\Psi }}`$ for $`q<q_{crit.}<0`$ if the number of vanishing moments of the analyzing wavelet is too small. Consequently, the agreement of the other curves proves their physical relevance. Figure 1b shows averages of $`\tau (q)`$ curves obtained with the convolution functions $`\mathrm{\Phi }_1`$, $`\mathrm{\Phi }_2`$, $`\mathrm{\Phi }_3`$ from surfaces after $`2\times 10^4`$ s of growth on an initially flat substrate. For all our models, their nonlinear behaviour reflects the multiaffine surface morphology. From the fact that these curves are reproduced within statistical errors in simulations with $`N=256`$, we conclude that finite-size effects can be neglected. Clearly, desorption reduces the slope of $`\tau (q)`$, although only a small fraction of the incoming particles is desorbed: $`0.18\%`$ in model B and $`2.57\%`$ in model C, with slightly higher values at earlier times. The corresponding singularity spectra are shown in figure 2a. They have a typical shape whose descending part seems to be symmetrical to the ascending part and which changes at most slightly, while the whole spectra are shifted towards smaller Hölder exponents as desorption becomes more important. We emphasize that we find no evidence for a time dependence of the singularity spectra within the range $`9700\,\mathrm{s}\le t\le 2\times 10^4\,\mathrm{s}`$, so that our results do not support the idea of an asymptotic regime characterized by a single exponent $`\alpha `$. However, the accessible time range of computer simulations is limited, so that we cannot finally disprove the existence of such a regime. The multifractal formalism has replaced the unique scaling exponent $`\alpha `$ of spatial extension in the simple picture of dynamic scaling (equation 1) with a wide spectrum of Hölder exponents. By analogy, one might find it necessary to replace the scaling exponent $`\beta `$ with a distribution of temporal counterparts of $`h`$. To answer this question, we investigate the probability distribution function (PDF) $`P(f-\left\langle f\right\rangle ,t)`$ of surface heights. Dynamical scale invariance with a single $`\beta `$ demands that $$P(f-\left\langle f\right\rangle ,t)=\stackrel{~}{P}\left(\frac{f-\left\langle f\right\rangle }{t^\beta }\right)\frac{1}{t^\beta },$$ (7) i.e. the rescaled PDFs $`Pt^\beta `$ should collapse onto a single function $`\stackrel{~}{P}`$ when plotted as a function of $`(f-\left\langle f\right\rangle )/t^\beta `$ within a large time range. We measure $`\beta `$ from the increase of the surface width with time, which follows a power law for $`t\gtrsim 150\,\mathrm{s}`$ in models A and B ($`\beta _A=0.19\pm 0.01`$, $`\beta _B=0.17\pm 0.01`$) and for $`150\,\mathrm{s}<t<7500\,\mathrm{s}`$ in model C ($`\beta _C=0.11\pm 0.01`$), which then starts to approach the final saturation regime. The high quality of the data collapse of the PDFs shown in figure 2b proves that the scaling form (7) holds, showing that a single exponent describes the scaling behaviour of $`P(f-\left\langle f\right\rangle ,t)`$. This parallels the finding of Krug for the one-dimensional Das Sarma-Tamborenea model. Finally, the WTMM method, which is a precise tool to investigate local scaling properties of surfaces, might help to gain some insight into the phenomenon of anomalous scaling. The conventional picture notes the difference between the global $`\alpha _g`$ and a “local $`\alpha `$” which is determined from the power-law behaviour of $`G(2,\stackrel{}{l},t)`$ for small $`l`$ and, within the multifractal formalism, simply corresponds to a Hölder exponent on the ascending part of the singularity spectrum.
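The collapse test implied by Eq. (7) amounts to rescaling the measured height histograms; a schematic numpy sketch (the bin count is an arbitrary choice):

```python
import numpy as np

def rescaled_pdf(f, t, beta, bins=60):
    """Height PDF rescaled according to Eq. (7): histogramming the
    variable x = (f - <f>) / t^beta with density=True directly
    yields P~(x) = P(f - <f>, t) * t^beta."""
    x = (f - f.mean()).ravel() / t**beta
    p, edges = np.histogram(x, bins=bins, density=True)
    return 0.5 * (edges[:-1] + edges[1:]), p

# Curves for surfaces at different times t should fall onto a single
# function P~ if Eq. (7) holds with one value of beta.
```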
We have determined the global scaling exponents $`\alpha _g`$ and $`z`$ from the data collapse of the scaled height-height correlation function $`G(2,\stackrel{}{l},t)`$ and find agreement within statistical errors between $`\alpha _g`$ and that value of the Hölder exponent, $`h_m`$, which maximizes $`D(h)`$ (table 1). This empirical result can be explained with a saddle-point argument: We calculate the surface width $$w^2=\frac{1}{N^2}\int d^2x(f(\stackrel{}{x})-\left\langle f\right\rangle )^2=\frac{1}{N^2}\int d\stackrel{~}{h}\underset{I(\stackrel{~}{h})}{\underbrace{\int d^2x\,\delta (\stackrel{~}{h}-h(\stackrel{}{x}))(f(\stackrel{}{x})-\left\langle f\right\rangle )^2}}.$$ (8) Since $`I(\stackrel{~}{h})`$ grows like $`N^{D(\stackrel{~}{h})}`$ with the system size, in large systems the integral over $`\stackrel{~}{h}`$ will be dominated by $`I(h_m)`$. That means that $`w`$, and therefore the global scaling properties of the surface, are governed by the subset of points which has the greatest fractal dimension. Consequently, the surface will behave like a self-affine surface with Hurst exponent $`h_m`$ on length scales comparable to the system size. ## 5 Conclusions Table 1 summarizes our results. Model A, without desorption, confirms the results in , which were obtained with slightly different activation energies on smaller systems and over shorter timescales. Models B and C show that desorption is an important process which, although it affects only a small fraction of the adsorbed particles, must not be neglected, since it alters the scaling properties of the surfaces by reducing $`\beta `$ and by shifting the singularity spectrum towards smaller Hölder exponents. Since the scaling behaviour depends strongly on the height of the energy barrier for desorption, and the singularity spectra have no measurable tendency to narrow with time, our results cannot be used to make any decision on the asymptotic universality class of the investigated model. However, they show that the paradigm of a few universality classes characterized by a small number of exponents, which are independent of details of the model, is not adequate to capture the features of kinetic roughening on experimentally relevant timescales of a few hours of growth. We are convinced that the application of new mathematical tools like the wavelet analysis will help to find a better description of fractal growth phenomena in the future. We thank A. Arnéodo and J. M. López for providing us with recent preprints before publication and A. Freking for a critical reading of the manuscript.
no-problem/9912/astro-ph9912432.html
ar5iv
text
# The thermal history of the intergalactic medium The data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. ## 1 INTRODUCTION According to the standard big bang model, the primordial hydrogen and helium comprising the intergalactic medium (IGM) was hot and highly ionized at early times. As the universe expanded, the hot plasma cooled adiabatically, becoming almost completely neutral at a redshift of $`z\sim 10^3`$. The IGM remained neutral until the first stars and quasars began to produce ionizing photons. Eventually, the ionizing radiation became intense enough to reionize hydrogen and later, because of its higher ionization potential, to fully reionize helium. Since the thermal evolution of the IGM depends strongly on its reionization history, it can be used as a probe of the end of the ‘dark ages’ of cosmic history, when the first stars and quasars were formed \[Miralda-Escudé & Rees 1994, Hui & Gnedin 1997, Haehnelt & Steinmetz 1998\]. The absence of Gunn-Peterson absorption \[Gunn & Peterson 1965\] in quasar spectra, i.e. the complete absorption of quasar light blueward of the H i and He ii Ly$`\alpha `$ wavelengths, requires that hydrogen must have been highly ionized by $`z\sim 5`$ \[Schneider, Schmidt & Gunn 1991, Songaila et al. 1999\] and helium by $`z\sim 2.5`$ (Davidsen, Kriss & Zheng 1996). Measurements of the He ii Ly$`\alpha `$ opacity suggest that helium may have been reionized around $`z\sim 3`$ \[Heap et al. 2000, Reimers et al. 1997, Jakobsen et al. 1994, Davidsen et al. 1996, Anderson et al. 1999\]. This would fit in with evidence for a hardening of the UV background around this time, as derived from the ratio of Si iv/C iv in high-redshift quasar absorption lines \[Songaila & Cowie 1996, Songaila 1998\], although both the observational result and its interpretation are still controversial \[Boksenberg, Sargent & Rauch 1998, Giroux & Shull 1997\]. The resonant Ly$`\alpha `$ absorption by residual low levels of neutral hydrogen along the line of sight to a quasar produces a forest of absorption lines. Although many of the basic observational facts about the Ly$`\alpha `$ forest at high redshift ($`z\sim 2`$–5) had been established before the 10 m telescope era, the advent of the Keck telescope has led to much larger data samples at much higher signal-to-noise ratio than hitherto available \[e.g. Hu et al. 1995, Lu et al. 1996, Kirkman & Tytler 1997\]. The observational progress has been matched on the theoretical side by semi-analytic models \[e.g. Bi & Davidsen 1997\] and cosmological hydro-simulations \[e.g. Cen et al. 1994, Zhang, Anninos & Norman 1995, Petitjean, Mücket & Kates 1995, Hernquist et al. 1996, Miralda-Escudé et al. 1996\], which together with the new data are now beginning to yield significant quantitative cosmological constraints (see Rauch 1998 for a recent review). These simulations show that the low column density ($`N\lesssim 10^{14.5}\mathrm{cm}^{2}`$) absorption lines arise in a smoothly varying IGM of low density contrast ($`\delta \lesssim 10`$), which contains most of the baryons in the universe. Since the overdensity is only mildly non-linear, the physical processes governing this medium are well understood and relatively easy to model.
On large scales the dynamics are determined by gravity, while on small scales gas pressure is important. Since shock heating is unimportant for the low-density gas, the interplay between photoionization heating and adiabatic cooling due to the expansion of the universe results in a tight temperature-density relation, which is well described by a power-law for densities around the cosmic mean, $`T=T_0(\rho /\overline{\rho })^{\gamma -1}`$ \[Hui & Gnedin 1997\]. This relation is generally referred to as the ‘equation of state’ (even though the true equation of state is that of an ideal gas). For models with abrupt reionization, the IGM becomes nearly isothermal ($`\gamma \approx 1`$) at the redshift of reionization. After reionization, the temperature at the mean density ($`T_0`$) decreases while the slope ($`\gamma -1`$) increases, because higher density regions undergo increased photoheating and expand less rapidly. Eventually, the imprints of the reionization history are washed out and the equation of state approaches an asymptotic state, $`\gamma =1.62`$, $`T_0\propto \left[\mathrm{\Omega }_bh^2/\sqrt{\mathrm{\Omega }_mh^2}\right]^{1/1.7}`$ \[Miralda-Escudé & Rees 1994, Hui & Gnedin 1997, Theuns et al. 1998\]. However, the timescale for recombination cooling in the low density IGM is never small compared to the age of the universe for $`z\lesssim 20`$, and inverse Compton cooling of free electrons off the cosmic microwave background is only efficient for $`z\gtrsim 5`$. Consequently, unless both hydrogen and helium were fully reionized at redshifts considerably higher than this, the gas will have retained some memory of when and how it was reionized. A standard way of analyzing Ly$`\alpha `$ forest spectra is to decompose them into a set of distinct absorption lines, assumed to have Voigt profiles (e.g. Carswell et al. 1987). Various broadening mechanisms, such as Hubble broadening (the differential Hubble flow across the absorber) and peculiar and thermal velocities, contribute to the line widths (Meiksin 1994; Hui & Rutledge 1999; Theuns, Schaye & Haehnelt 2000). However, there exists a lower limit to the line width, set by the temperature of the gas. Because the physical density of the IGM correlates strongly with the column density of the absorption lines, this results in a cut-off in the distribution of line widths ($`b`$-parameters) as a function of column density, which traces the equation of state of the gas (Schaye et al. 1999, hereafter STLE; Ricotti, Gnedin & Shull 2000; Bryan & Machacek 2000). Hence we can infer the equation of state of the IGM by measuring the minimum Ly$`\alpha `$ line width as a function of column density. Here, we measure the $`b(N)`$ cut-off in nine high-resolution, high S/N quasar spectra, spanning the redshift range 2.0–4.5. We use hydrodynamic simulations to calibrate the relations between the parameters of the $`b(N)`$ cut-off and the equation of state. By applying these relations to the observations, we are able to measure the evolution of the equation of state over the observed redshift range. We find that the thermal evolution of the IGM is drastically different from that predicted by current models. The temperature peaks at $`z\sim 3`$, which, together with supporting evidence from measurements of the He ii opacity and the Si iv/C iv ratios, we interpret as evidence for the second reionization of helium (He ii $`\rightarrow `$ He iii). Ricotti et al. \[Ricotti et al. 2000\] recently applied a similar technique to published lists of Voigt profile fits.
A comparison with the method and results of Ricotti et al. is given in section 8. This paper is organized as follows. In sections 2 and 3 we describe the observations and the simulations, respectively. We discuss the difference between evolution of the $`b`$-distribution and evolution of the temperature in section 4. In section 5 we briefly describe our method for measuring the equation of state, before we present our results in section 6. Systematic errors are discussed in section 7. Finally, we discuss and summarize the main results in section 8. ## 2 OBSERVATIONS We analyzed a sample of nine quasar spectra, spanning the redshift range $`z_{\mathrm{em}}=2.14`$–4.55 (Table 1). The spectra of Q1100−264 and APM 08279+5255 were kindly provided by R. Carswell and S. Ellison, respectively. All spectra were taken with the high-resolution spectrograph (HIRES, Vogt et al. 1994) on the Keck telescope, except the spectrum of Q1100−264, which was taken with the UCL echelle spectrograph of the Anglo Australian Telescope. Details on the data and reduction procedures, as well as the continuum fitting, can be found in Carswell et al. \[Carswell et al. 1991\] for Q1100−264, Ellison et al. \[Ellison et al. 1999\] for APM 08279+5255, and Barlow & Sargent \[Barlow & Sargent 1997\] and Rauch et al. \[Rauch et al. 1997\] for the others. The nominal velocity resolution (FWHM) was $`8\mathrm{km}\mathrm{s}^{1}`$ for Q1100−264 and $`6.6\mathrm{km}\mathrm{s}^{1}`$ for the others, and the data were rebinned onto $`0.04\mathrm{\AA }`$ pixels on a linear wavelength scale. The signal-to-noise ratio per pixel is typically about 50, except for Q1100−264, for which it is about 20. In order to avoid confusion with the Ly$`\beta `$ forest, only the region of a spectrum between the quasar’s Ly$`\beta `$ and Ly$`\alpha `$ emission lines was considered. In addition, spectral regions close to the quasar (typically 8–10 $`h^{1}\mathrm{Mpc}`$, but 21 $`h^{1}\mathrm{Mpc}`$ for APM 08279+5255 and 32 $`h^{1}\mathrm{Mpc}`$ for Q1100−264) were omitted to avoid proximity effects. Regions thought to be contaminated by metals and damped Ly$`\alpha `$ lines were removed (metal line regions were identified by correlating with metal lines redwards of the quasar’s Ly$`\alpha `$ emission line and with strong H i lines). The absorption features in the remaining spectral regions were fitted with Voigt profiles using the same automated version of VPFIT \[Webb 1987, Carswell et al. 1987\] as was used for the simulated spectra. Using a fully automatic fitting program invariably results in a few ‘bad fits’. However, given that there is no unique way of decomposing intrinsically non-Voigt absorption lines into a set of discrete Voigt profiles, it is essential to apply the same algorithm to simulated and observed spectra. Since ‘bad fits’ will also occur in the synthetic spectra, we have made no attempt to correct them. The Ly$`\alpha `$ forest of a single quasar spans a considerable redshift range ($`\mathrm{\Delta }z\sim 0.5`$). In order to minimize the effects of redshift evolution and S/N variation across a single spectrum, we divided each Ly$`\alpha `$ forest spectrum into two parts of equal length. STLE showed that their algorithm for measuring the cut-off of the $`b(N)`$ distribution is relatively insensitive to the number of absorption lines; the statistical variance is almost the same for e.g. 150 and 300 lines. Hence little information is lost by analyzing narrow redshift bins if the absorption line density is high.
The two halves of the spectra were analyzed separately, and each was compared with its own set of simulated spectra (see section 3). For the two lowest redshift quasars the number of absorption lines is too small to split the data in half. Hence each quasar, except for Q1100−264 and Q2343+123, provides two nearly independent data sets. The complete set of absorption line samples is listed in Table 2. The median, minimum and maximum redshifts of the absorption lines used to determine the $`b(N)`$ cut-off are listed in columns 2–4. The minimum column density considered was set to $`10^{12.5}\mathrm{cm}^{2}`$ for all samples, since blends dominate at lower column densities. The maximum column densities are listed in column 5; they were determined by the following considerations: (a) the cut-off can be measured more accurately if the column density interval is larger; (b) we only want to measure the cut-off for column densities that correspond to the density range for which the gas follows a power-law temperature-density relation. The total number of absorption lines in each sample is listed in column 6 (only lines for which VPFIT gives relative errors in both the $`b`$-parameter and the column density of less than 0.25 are considered). Finally, column 7 lists the pivot column density of the power-law cut-off, $`b=b_{N_0}(N/N_0)^{\mathrm{\Gamma }-1}`$, that was fitted to the data (cf. section 5). Scatter plots of the $`b(N)`$-distribution for all observed samples (Table 2) are shown in Fig. 1. The solid lines are the measured cut-offs; the vertical dashed lines indicate the maximum column density used for fitting the cut-off. There are clear differences between the samples. We will show in section 4 that even a non-evolving $`b`$-distribution would imply a strong thermal evolution. Several samples contain a few lines that fall far below the cut-off. These lines, which have no significant effect on the measured cut-off, are most likely blends or unidentified metal lines. Fig. 2 shows the effective optical depth, $`\tau _{\mathrm{eff}}\equiv -\mathrm{ln}\left\langle F\right\rangle `$, of the observed samples as a function of (decreasing) redshift. The ionizing background in the simulations was rescaled to match these effective optical depths. The scatter is small, considering that most data points represent just half of a Ly$`\alpha `$ forest spectrum. Note that Rauch et al. \[Rauch et al. 1997\] studied the opacity of the forest using seven of the nine quasars from this sample. They found a slightly less rapid increase with redshift, because they rebinned the data into three redshift bins, centered on $`z=2`$, 3 and 4. ## 3 SIMULATIONS In order to calibrate the relation between the parameters of the $`b(N)`$ cut-off and the effective equation of state, we have simulated eight variants of the currently favoured flat, scale-invariant, cosmological constant dominated cold dark matter model, which vary only in their heating rates (Table 3). The calibration was repeated for each of the observed samples of absorption lines listed in Table 2. Synthetic spectra were computed along 1200 random lines of sight through the simulation box at the nearest redshift output ($`\mathrm{\Delta }z=0.25`$). The background flux was rescaled such that the mean effective optical depth in the simulated spectra matches that of the observed sample. Each spectrum was convolved with a Gaussian with full width at half maximum (FWHM) identical to that of the observations and resampled onto pixels of the same size.
The noise properties of the observed spectrum were computed as a function of flux and imposed on the simulated spectra. The resulting spectra were continuum fitted as described in Theuns et al. \[Theuns et al. 1998\]. Finally, Voigt profiles were fitted using the same automated version of VPFIT as was used for the observed spectra. We will refer to the sample of lines drawn from the synthetic spectra of simulation X, designed to mimic the observed spectrum Y, as model X-Y, e.g. model L1-1442a. All models have a total matter density $`\mathrm{\Omega }_m=0.3`$, vacuum energy density $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$, baryon density $`\mathrm{\Omega }_bh^2=0.019`$, present-day Hubble constant $`H_0=65\mathrm{km}\mathrm{s}^{1}\mathrm{Mpc}^{1}`$, and the amplitude of the initial power spectrum is normalized to $`\sigma _8=0.9`$. The IGM is assumed to be of primordial composition with a helium abundance of 0.24 by mass and is photoionized and photoheated by the UV background from quasars. The exact cosmology and UV background are unimportant for this analysis, as is the normalization (see section 7). Although these parameters may affect the equation of state of the IGM in the simulations, they do not change its relation to the $`b(N)`$ cut-off. The numerical simulations used in this paper follow the evolution of a periodic, cubic region of the universe and are performed with a modified version of HYDRA \[Couchman, Thomas & Pearce 1995\], which uses smoothed particle hydrodynamics \[Lucy 1970, Gingold & Monaghan 1977\]. The simulations employ $`64^3`$ gas particles and $`64^3`$ cold dark matter particles in a box of comoving size 3.85 Mpc, so the particle mass is $`1.14\times 10^6\mathrm{M}_{\odot }`$ for the gas and $`6.51\times 10^6\mathrm{M}_{\odot }`$ for the dark matter. In our reference model, L1, the gas is photoionized and photoheated by the UV background from quasars as computed by Haardt & Madau (1996, hereafter HM), in the optically thin limit. Models L0.3, L2 and L3 are identical, except that we have multiplied the helium photoheating rates (column $`ϵ_{\mathrm{He}}`$ in Table 3) by factors of 1/3, 2 and 3, respectively (keeping the ionization rates constant). The effective helium photoheating rate may be higher than computed in the optically thin limit because of radiative transfer effects (e.g. Abel & Haehnelt 1999). Model Lx is identical to model L1, except that we have included Compton heating by the hard X-ray background as computed by Madau & Efstathiou \[Madau & Efstathiou 1999\] (column $`_\mathrm{X}`$ in Table 3). For a highly ionized plasma, the energy input per particle from Compton scattering of free electrons is independent of the density. Hence, Compton heating tends to flatten the effective equation of state. We have used this fact to artificially construct models with low values of $`\gamma `$, by multiplying the X-ray heating rates by (unrealistic) factors of 2.5 and 5 for models Lx2.5 and Lx5, respectively. Finally, model L1e is identical to model L1, except that we have set the ionization and heating rates for H i and He i for redshifts between 6 and 10 equal to those at $`z=6`$. In this model H i and He i ionize early (at $`z=10`$), which drives $`\gamma `$ to larger values. In addition to the models listed in Table 3, we have performed some simulations to investigate possible systematic effects.
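The post-processing of the synthetic spectra described above (smoothing to the instrumental resolution, rebinning, adding noise and matching the effective optical depth) is conceptually simple; the sketch below is a simplified illustration with placeholder parameters, since in the actual analysis the noise r.m.s. is a measured function of the flux rather than a constant.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def tau_eff(flux):
    """Effective optical depth tau_eff = -ln<F> of a normalized flux array."""
    return -np.log(np.mean(flux))

def process_spectrum(flux, dv_pix, fwhm_kms, rebin, noise_sigma, rng):
    """Degrade a synthetic, continuum-normalized flux array so that it
    mimics the data: convolve with a Gaussian line-spread function,
    rebin onto coarser pixels and add Gaussian noise."""
    sigma_pix = fwhm_kms / 2.355 / dv_pix  # FWHM -> Gaussian sigma [pixels]
    smooth = gaussian_filter1d(flux, sigma_pix, mode='wrap')
    n = (smooth.size // rebin) * rebin
    out = smooth[:n].reshape(-1, rebin).mean(axis=1)
    return out + rng.normal(0.0, noise_sigma, out.size)

# Illustrative values only; FWHM = 6.6 km/s as for the HIRES spectra.
# The ionizing background is rescaled until tau_eff matches the data.
rng = np.random.default_rng(1)
mock = process_spectrum(np.ones(4096), dv_pix=1.0, fwhm_kms=6.6,
                        rebin=2, noise_sigma=0.02, rng=rng)
```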
We simulated model L1 twice with lower resolutions ($`2\times 54^3`$ and $`2\times 44^3`$ particles) and model L3 in a larger box, but with the same resolution ($`5h^{-1}\mathrm{Mpc}`$ and $`2\times 128^3`$ particles instead of $`2.5h^{-1}\mathrm{Mpc}`$ and $`2\times 64^3`$ particles). Model L1 was simulated twice more with lower normalizations of the initial power spectrum ($`\sigma _8=0.65`$ and 0.4 instead of 0.9).

## 4 Evolution of the $`b`$-distribution vs thermal evolution

Williger et al. \[Williger et al. 1994\] and Lu et al. \[Lu et al. 1996\] found that the $`b`$-parameters at $`z\simeq 4`$ are smaller than at $`z=2`$–3. Kim et al. \[Kim et al. 1997\] showed that the increase in the line widths with decreasing redshift continues over the range $`z=3.5`$ to 2.1. It is tempting to interpret these results as evidence for an increase in the temperature $`T_0`$ with decreasing redshift. However, we will show in this section that the $`b`$-values are smaller at higher redshift even for models in which $`T_0`$ is higher, as is the case for models in which the universe is fully reionized by $`z=4`$. As pointed out by STLE, any statistic that is sensitive to the temperature of the absorbing gas will in general depend on both the amplitude, $`T_0`$, and the slope, $`\gamma `$, of the equation of state. This is because temperature is a function of density and the absorbing gas is, in general, not all at the mean density of the universe. After reionization, $`T_0`$ will decrease and $`\gamma `$ will increase with time. Consequently, the evolution of the temperature at a given overdensity can be very different from the evolution of $`T_0`$. This is illustrated in Fig. 3, where the temperature $`T_\delta `$ at a density contrast $`\delta \equiv \rho /\overline{\rho }-1`$ is plotted as a function of redshift for model L1. Even though the temperature at the mean density decreases with time (solid line), the temperature at a density contrast as little as 2 remains almost constant. The general expansion of the universe ensures that the column density corresponding to a fixed overdensity is a strongly increasing function of redshift. In fact, most of the evolution of the Ly$`\alpha `$ forest can be understood in terms of the resulting scaling of the optical depth (e.g. Hernquist et al. 1996; Davé et al. 1999; Machacek et al. 1999). When interpreting the evolution of the $`b`$-distribution, one therefore has to keep in mind that: (a) at fixed column density, absorption lines at higher redshift will correspond to absorbers of smaller overdensities; (b) the evolution of the temperature at a fixed overdensity depends on the evolution of both $`T_0`$ and $`\gamma `$. Together these effects can conspire to make the $`b`$-parameters smaller at higher redshift, even when the temperature $`T_0`$ is higher. Fig. 4 shows that this will happen for models in which the IGM is fully reionized at the observed redshifts ($`z\simeq 4`$). The discussion in this section shows that to derive the evolution of $`T_0`$ using a statistic that is sensitive to the temperature of the absorbing gas, one needs to determine the evolution of: (1) the temperature of the gas; (2) the overdensity of the gas and (3) the slope of the equation of state. We will see that the uncertainty in $`\gamma `$ is the limiting factor.

## 5 Method

STLE demonstrated that the observed cut-off in the distribution of $`b`$-parameters as a function of column density can be used to measure the equation of state of the IGM.
In particular, they showed that the $`b(N)`$ cut-off can be fitted by a power-law, $`b=b_{N_0}(N/N_0)^{\mathrm{\Gamma }-1}`$, whose parameters $`\mathrm{log}b_{N_0}`$ and $`\mathrm{\Gamma }-1`$ are proportional to $`\mathrm{log}T_{\delta (N_0)}`$ and $`\gamma -1`$ respectively. For each observed sample, we first calibrate these relations using the simulations and then use them to convert the observed cut-offs into measurements of the equation of state. For each observed sample of absorption lines (Table 2) we go through the following procedure. First mock spectra are generated from the 8 simulations listed in Table 3. The synthetic spectra are processed to give them the same characteristics (mean absorption, resolution, pixel size and noise properties) as the corresponding observed spectra. These are then fitted with Voigt profiles using the same automated fitting package that was used for the observations. For each of the eight simulated sets of absorption lines, the $`b(N)`$ cut-off is fitted using the iterative procedure developed by STLE. We then use these 8 simulations to calibrate the relations between the parameters of the $`b(N)`$ cut-off, $`(b_{N_0},\mathrm{\Gamma })`$, and the parameters of the equation of state, $`(T_{\delta (N_0)},\gamma )`$. We measure the density contrast corresponding to the pivot column density, $`N_0`$, by using the fact that $`\mathrm{log}T_{\delta (N_0)}\propto \mathrm{log}b_{N_0}`$ (STLE), essentially because both thermal broadening and Jeans smoothing scale as the square root of the temperature. Fig. 5 illustrates our method for measuring $`T_\delta `$ and $`\delta (N_0)`$. In the left panel the intercept of the cut-off, measured at $`N_0=10^{14.0}\mathrm{cm}^{-2}`$ in this example, is plotted as a function of $`\mathrm{log}T_0`$ for the simulations of sample 1422a (filled circles). As expected, the relation between the intercept, $`\mathrm{log}b_{14.0}`$, and $`\mathrm{log}T_0`$ is not very tight. The large scatter arises because $`\mathrm{log}b_{14.0}`$ is not proportional to $`\mathrm{log}T_0`$, but to $`\mathrm{log}T_{\delta (N_0)}`$, where $`\delta (N_0)`$ is the density contrast corresponding to the pivot column density $`N_0=10^{14.0}\mathrm{cm}^{-2}`$. Indeed, if we plot $`\mathrm{log}b_{14.0}`$ as a function of $`\mathrm{log}T_{\delta =1.6}`$ (right panel), the scatter becomes very small, implying that $`\delta (N_0)\simeq 1.6`$. Changing the value of $`\delta `$ shifts the data points in the plot horizontally, the amount depending on the value of $`\gamma `$ in the simulation corresponding to the data point. Because we do not know a priori what the density contrast corresponding to $`N_0`$ is, we vary $`\delta `$ and see for which value the scatter is minimal. The inset in the right panel shows the total $`\chi ^2`$ of the linear least squares fit to the data points as a function of the density contrast. For this example the scatter is minimal for $`\delta =1.6`$, which is the density contrast used in the right panel. Using the optical-depth-weighted density–column density relation introduced by STLE, we find that the column density $`N=10^{14.0}\mathrm{cm}^{-2}`$ does indeed correspond to a density contrast of about 1.6. The dot-dashed lines in Fig. 5 show the $`b`$-value expected for pure thermal broadening, $`b=\sqrt{2k_BT/m_p}`$, where $`k_B`$ is the Boltzmann constant and $`m_p`$ is the proton mass. The relation between $`\mathrm{log}b_{14.0}`$ and $`\mathrm{log}T_{\delta =1.6}`$ lies slightly above the pure thermal broadening line, but has the same slope.
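In code, the scatter-minimizing $`\delta `$ scan just described can be sketched as follows (Python; the array and function names are ours, and the conversion to $`T_0`$ anticipates eq. (1) below):

```python
import numpy as np

def calibrate(logb, T0_sim, gamma_sim, deltas=np.linspace(0.0, 5.0, 251)):
    """Scan trial density contrasts: for each delta, compute log T_delta for
    every simulation, fit log b_N0 = a + m*log T_delta by least squares and
    keep the delta that minimizes the summed squared residuals."""
    best = None
    for d in deltas:
        logT = np.log10(T0_sim) + (gamma_sim - 1.0) * np.log10(1.0 + d)
        m, a = np.polyfit(logT, logb, 1)
        chi2 = np.sum((logb - (a + m * logT)) ** 2)
        if best is None or chi2 < best[0]:
            best = (chi2, d, m, a)
    return best  # (chi2, delta, slope ~ 0.5, intercept)

def logT0_from_cutoff(logT_delta, gamma, delta):
    """Eq. (1) below: log T_0 = log T_delta - (gamma - 1) * log(1 + delta)."""
    return logT_delta - (gamma - 1.0) * np.log10(1.0 + delta)
```

The measured cut-off intercept of an observed sample is then converted to $`\mathrm{log}T_\delta `$ with the fitted linear relation, and to $`\mathrm{log}T_0`$ with the second function.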
The fact that the relation lies just above the pure thermal broadening line, with the same slope, implies that the widths at the cut-off are dominated by thermal broadening, with an additional component that also scales as $`T^{1/2}`$. We identify this last component as the differential Hubble flow across the absorber, whose size is set by the Jeans smoothing scale and does indeed depend on the square root of the temperature. We consistently found that minimizing the scatter in the $`\mathrm{log}b_{N_0}`$-$`\mathrm{log}T_\delta `$ relation by varying $`\delta `$ results in a relation that has a slope of about 0.5 and that the value of $`\delta `$ found agrees well with direct measurements of the density contrast corresponding to the column density $`N_0`$. We therefore use this procedure to estimate $`\delta `$ and $`\mathrm{log}T_\delta `$ and conservatively estimate the error in the density contrast to be $`\sigma (\mathrm{log}(1+\delta ))=0.15`$. Although for this work the error from the determination of the density–column density relation does not contribute significantly to the total error in $`T_0`$, it may well become the limiting factor when a larger sample of quasars is used. Having measured $`T_\delta `$, $`\delta `$ and $`\gamma `$ we can compute $`T_0`$,

$$\mathrm{log}T_0=\mathrm{log}T_\delta -(\gamma -1)\mathrm{log}(1+\delta ).$$ (1)

Each measurement of $`\mathrm{log}b_{N_0}`$ and $`\mathrm{\Gamma }`$ comes with associated errors, which are determined from the bootstrap distribution (cf. STLE). These errors can be converted directly into errors in $`\mathrm{log}T_\delta `$ and $`\gamma `$ respectively, using the linear relations between the cut-off and the equation of state, determined from the simulations. To these errors we add in quadrature the residual scatter of the data points around the linear fit. The error in $`\mathrm{log}T_0`$ is then given by

$$\mathrm{\Delta }^2(\mathrm{log}T_0)=\mathrm{\Delta }^2(\mathrm{log}T_\delta )+\left[\mathrm{\Delta }(\gamma )\mathrm{log}(1+\delta )\right]^2+\left[(\gamma -1)\mathrm{\Delta }(\mathrm{log}(1+\delta ))\right]^2.$$ (2)

After having measured the thermal evolution of the IGM, we ran a simulation designed to match the observations (dashed lines in Fig. 6). The open square in Fig. 5 corresponds to this simulation. The difference between the evolution in the calibrating simulations and the true evolution is particularly large at $`z\simeq 3`$ (cf. Fig. 6a), the redshift of model 1422a. The fact that the open square follows the same relation as the other data points confirms that a model whose equation of state matches the one determined from the observations using the methods described in this section does indeed have the observed $`b(N)`$ cut-off.

## 6 RESULTS

The measured evolution of the temperature at the mean density and the slope of the effective equation of state are plotted in Fig. 6. From $`z\simeq 4`$ to $`z\simeq 3`$, $`T_0`$ increases and the gas becomes close to isothermal ($`\gamma \simeq 1.0`$). This behavior differs drastically from that predicted by models in which helium is fully reionized at higher redshift. For example, the solid curves correspond to our reference model, L1, which uses a uniform metagalactic UV-background from quasars as computed by HM and which assumes the gas to be optically thin. In this simulation, both hydrogen and helium are fully reionized by $`z\simeq 4.5`$ and the temperature of the IGM declines slowly as the universe expands. Such a model can clearly not account for the peak in the temperature at $`z\simeq 3`$ (the reduced $`\chi ^2`$ values for the solid curves are 6.8 for $`T_0`$ and 3.6 for $`\gamma `$).
Instead, we associate the peak in $`T_0`$ and the low value of $`\gamma `$ with reheating due to the second reionization of helium (He ii $`\to `$ He iii). If reionization of He ii happens locally on a timescale that is short compared to the recombination timescale, which for He iii is of the order of the age of the universe at $`z\simeq 3`$, then the energy density injected by photoionization will be proportional to the gas density. Consequently, the temperature increase will be independent of the density and the equation of state of the IGM will become more isothermal. The change in the slope of the equation of state at $`z\simeq 3`$ is thus physically consistent with our interpretation of the peak in the temperature at the same redshift. The dashed lines in Fig. 6 are for a model that was constructed to fit the data (reduced $`\chi ^2`$ is 0.22 for $`T_0`$ and 1.38 for $`\gamma `$). This model, for which stellar sources ionize H i and He i by $`z\simeq 5`$ and quasars ionize He ii at $`z\simeq 3.2`$, has a much softer spectrum at high redshift. Before reionization, when the gas is optically thick to ionizing photons, the mean energy per photoionization is much higher than in the optically thin limit \[Abel & Haehnelt 1999\]. We have approximated this effect in this simulation by enhancing the photoheating rates during reionization, thus raising the temperature of the IGM. Since the simulation assumes a uniform ionizing background, the temperature has to increase abruptly (i.e. much faster than the gas can recombine) in order to make $`\gamma `$ as small as observed. In reality, the low-density gas may be reionized by harder photons, which will be the first ionizing photons to escape from the dense regions surrounding the sources. This would lead to a larger temperature increase in the more dilute, cooler regions, resulting in a decrease of $`\gamma `$ even for a more gradual reionization. Furthermore, although reionization may proceed fast locally (as in our small simulation box), it may be patchy and take some time to complete. Hence the steep temperature jump indicated by the dashed line, although compatible with the data, should be regarded as illustrative only. The globally averaged $`T_0`$ could well increase more gradually, which would also be consistent with the data. Note that if the reionization is patchy, it would give rise to large spatial fluctuations in the temperature. Because we measure the temperature-density relation from the lower cut-off of the $`b(N)`$-distribution, our results should be regarded as lower limits to the average temperature. Absorption lines arising in a local, hot ionization bubble would not necessarily raise the observed cut-off in the $`b(N)`$-distribution. The errors in Fig. 6 can be directly traced back to the corresponding $`b(N)`$-distributions (Fig. 1). Take for example 0827a, which has an extremely low value of $`\gamma `$, with a very large error. This is clearly due to the gap in the $`b`$-distribution at $`\mathrm{log}N\simeq 12.5`$–13.0. The lack of lines in that region could be a statistical fluctuation, or it could be an indication of large variations in the temperature of the IGM.

## 7 SYSTEMATICS

In this section we will investigate whether there are any systematic effects that could affect our results.

### 7.1 Numerical resolution

The $`b`$-distribution has been shown to be very sensitive to numerical resolution \[Theuns et al. 1998, Bryan et al. 1999\].
It is therefore important to check that the lower limit to the line widths in our simulations is not set by the numerical resolution. We have resimulated our reference model, L1, which is the second coldest model, twice more at a lower resolution. The resolution was decreased by decreasing the number of particles from $`2\times 64^3`$ to $`2\times 54^3`$ and $`2\times 44^3`$ respectively, while keeping the size of the simulation box constant. The resulting probability distributions for the intercept and the slope of the $`b(N)`$ cut-off are plotted in Fig. 7 for our lowest and highest redshift samples. Only for the lowest resolution simulation do the differences become noticeable. The intercept increases slightly and the slope becomes slightly shallower, indicating that the lines at the low column density end are not resolved. We conclude that the simulations used for this work have sufficient resolution to provide an accurate determination of the effective equation of state of the IGM.

### 7.2 Simulation box size

In order to investigate the effect of the simulation box size, we resimulated model L3 using a larger box, but with the same resolution ($`5h^{-1}\mathrm{Mpc}`$ and $`2\times 128^3`$ particles instead of $`2.5h^{-1}\mathrm{Mpc}`$ and $`2\times 64^3`$ particles). The effect of increasing the size of the simulation box is more difficult to determine than the effect of numerical resolution. When the box size is increased, the gas becomes slightly hotter ($`T_0`$ increases by a few per cent), presumably because shock heating is more effective due to the larger infall velocities. This effect is small and, since it does not affect the relation between the $`b(N)`$ cut-off and the equation of state, it is unimportant for this work. We find that using the larger simulation box results in higher derived values of $`T_0`$. The effect is negligible at $`z\simeq 4`$ and increases to $`\mathrm{\Delta }\mathrm{log}T_0\simeq 0.05`$ ($`\mathrm{\Delta }T_0/T_0\simeq 0.12`$) at $`z\simeq 2`$. The change in the derived value of $`\gamma `$ is very small ($`\lesssim 0.05`$). Although $`T_0`$ and $`\gamma `$ may change a bit more if we were to increase the box size further, the small difference between the two box sizes indicates that the effect of the box size is insignificant compared to the statistical errors.

### 7.3 Cosmology

STLE showed that the relation between the cut-off in the $`b(N)`$ distribution and the equation of state is independent of the assumed cosmology. However, the initial power spectra of the models investigated by STLE were all normalized to match the observed abundance of galaxy clusters at $`z=0`$. Bryan & Machacek \[Bryan & Machacek 2000\] claimed that the $`b`$-distribution depends strongly on the amplitude of the power spectrum, as predicted by the model of Hui & Rutledge \[Hui & Rutledge 1999\]. However, Theuns et al. \[Theuns et al. 2000\] found the dependence to be very weak, provided that the line fitting is done using an algorithm, like VPFIT, that attempts to deblend absorption lines into a set of thermally broadened components. Although the absorption features do become broader for models with less small-scale power, the curvature in the line centers does not change much. Consequently, the total number of Voigt profile components used by VPFIT will generally increase, but the fits to the line centers will change very little. To quantify the effect of decreasing the amount of small-scale power, we resimulated our reference model L1 twice using a lower normalization of the initial power spectrum.
We find that normalizing to $`\sigma _8=0.65`$ instead of $`\sigma _8=0.9`$ changes the derived values of $`T_0`$ by less than 3 per cent for all redshifts. The change in $`\gamma `$ is never greater than 0.1. For the extreme case of $`\sigma _8=0.4`$, the derived values of $`T_0`$ are about 15 per cent lower and $`\gamma `$ differs by about 0.15. We conclude that the effect of the uncertainty in the normalization of the primordial power spectrum is small.

### 7.4 Continuum fitting and the mean absorption

Errors in the continuum fit of the observed spectra will lead to errors in the effective optical depth. Underestimating the observed continuum will decrease the measured effective optical depth. Decreasing the mean absorption in the simulations will increase the density corresponding to a given column density. Hence the slope of the cut-off will remain unchanged, but the intercept will increase, although the effect is small (STLE). Increasing the intercepts of the calibrating simulations will decrease the derived temperature $`T_\delta `$. The higher derived density contrast will work in the same direction (provided that $`\gamma >1`$), resulting in a lower $`T_0`$. The measured effective optical depth for sample 2343 seems to be relatively high compared to the other samples (Fig. 2). We tried scaling the synthetic spectra to the effective optical depth corresponding to the dashed line in Fig. 2 at the redshift of 2343, which is about 30 per cent lower than the measured value. This resulted in an increase of $`\mathrm{log}T_0`$ by 0.04 ($`\mathrm{\Delta }T_0/T_0\simeq 0.09`$), while leaving $`\gamma `$ unchanged. In addition to errors in the continuum fit of the observed spectra, the continuum fitting of the synthetic spectra could also lead to systematic errors. We checked this by repeating the analysis of the simulations corresponding to our highest redshift sample, 2237b, but this time without continuum fitting the synthetic spectra. In this case the derived value of $`T_0`$ would be 9 per cent lower, while the value of $`\gamma `$ would be higher by 0.07. Hence systematic effects in the continuum fitting of the observed and the synthetic spectra are unlikely to be important.

## 8 SUMMARY AND DISCUSSION

We have measured the cut-off in the distribution of line widths ($`b`$) as a function of column density ($`N`$) in a set of nine high-quality Ly$`\alpha `$ forest spectra, spanning the redshift range 2.0–4.5. We emphasized that the evolution of the temperature of the intergalactic medium (IGM) cannot be derived directly from the evolution of the $`b(N)`$ distribution; the decrease with redshift of the overdensity corresponding to a fixed column density has to be taken into account. We therefore used hydrodynamic simulations to calibrate the relations between the $`b(N)`$ cut-off and the temperature-density relation of the low density gas. The calibration was done separately for each observed spectrum, using synthetic spectra that were processed to have the same characteristics as the observed spectrum. Crucially, Voigt profiles were fitted to the real and simulated spectra using the same automated fitting package (a modified version of VPFIT). We have checked possible systematic errors arising from the finite numerical resolution, the finite size of the simulation box, the amplitude of the initial power spectrum and continuum fitting errors in both synthetic and observed spectra. In all cases, the effects were small compared to the statistical errors.
The measured thermal evolution differs drastically from the scenario predicted by current models of the ionizing background from quasars, in which helium is fully reionized by $`z\simeq 4.5`$. The temperature at the mean density, $`T_0`$, increases from $`z\simeq 4`$ to $`z\simeq 3`$, after which it decreases again (Fig. 6). The slope of the equation of state reaches a minimum at $`z\simeq 3`$, where it becomes close to isothermal. More data at $`z\simeq 3`$ are needed to determine whether the rise in $`T_0`$ is sharp or gradual. These results suggest that the low density IGM was reheated from $`z\simeq 4`$–3, which we interpret as reheating associated with the reionization of He ii. These results are in qualitative agreement with those reported recently by Ricotti et al. \[Ricotti et al. 2000\]. They used a method which relies on the assumption that only thermal broadening contributes to the line widths of the absorption lines at the peak of the $`b`$-distribution and used approximate simulation techniques to determine the density–column density relation. By applying their method to published lists of Voigt profile fits they found that at $`z\simeq 3`$, $`\gamma `$ is smaller than would be expected if the reionization of helium had been completed at high redshift. Ricotti et al. have only three data points in the range $`z=2`$–4, which all overlap at the 0.5$`\sigma `$ level, so we cannot compare the shape of the temperature evolution. However, it is interesting that they measure a temperature at $`z=2`$ and $`z=4`$ that is about seventy per cent higher than is reported here, although their error bars are sufficiently large to agree with our results at the 1$`\sigma `$ level. Since Jeans smoothing contributes to the line widths, as Fig. 5 indicates (see also Theuns et al. 2000), their method should lead to an overestimate of the temperature (by about sixty per cent for the example in Fig. 5). The contribution of Jeans smoothing to the line width may be larger for the lines near the peak of the $`b`$-distribution than indicated in Fig. 5, which is for the narrower lines near the cut-off. Although the reionization of helium may not be the only process which can explain the peak in $`T_0`$ at $`z\simeq 3`$, it appears to be the only process that can simultaneously account for the observed decrease in $`\gamma `$. Galactic winds, for example, would be less important in the low density regions, and would therefore result in an increase in the slope of the equation of state. Another possible explanation is a hardening of the ionizing background at $`z\simeq 3`$, as might be expected because of the increase in the number of quasars. If helium had already been ionized, this would still raise the temperature somewhat, but the effect would be stronger in the high density regions where the gas recombines faster. Hence this would also lead to an increase in $`\gamma `$, contrary to what is observed. Recently, it was shown \[Theuns et al. 1998, Bryan et al. 1999\] that high resolution simulations of the standard cold dark matter model, using the ionizing background computed by HM, produce a larger fraction of narrow lines than observed at $`z\simeq 3`$. Different authors have proposed different solutions to this problem. Theuns et al. \[Theuns et al. 1998\] suggested that the gas temperature in the simulations was too low, while Bryan et al. \[Bryan et al. 1999\] argued that the amplitude of primordial fluctuations was too high.
For a given reionization history, the temperature in the simulations could for example be increased by increasing the baryon density and the age of the universe \[Theuns et al. 1999\], including Compton heating by the hard X-ray background \[Madau & Efstathiou 1999\] and possibly photo-electric heating by dust grains \[Nath, Sethi & Shchekinov 1999\]. A comparison of our reference model L1, which uses the HM ionizing background (solid lines in Fig. 6), with the data clearly shows that the model underestimates the temperature at $`z\simeq 3`$. Since the above mentioned mechanisms for increasing the temperature do not change the overall shape of the thermal evolution, a change in the reionization history is required to bring the data and simulations into agreement. Furthermore, the photoheating rates around reionization need to be enhanced to account for the fact that heating by photons with energies significantly above the ionization potential is important when the gas is not optically thin \[Abel & Haehnelt 1999\], as is generally assumed in cosmological simulations. There are two other lines of evidence for late reionization of He ii. The first are direct measurements of the optical depth from He ii Ly$`\alpha `$ absorption, which have so far been obtained for four quasars \[Jakobsen et al. 1994, Davidsen et al. 1996, Reimers et al. 1997, Anderson et al. 1999, Heap et al. 2000\]. These observations already provide strong evidence for a drop in the mean absorption from $`z\simeq 3.0`$ to 2.5. The second piece of evidence concerns a change in the spectral shape of the ionizing background. As He ii is ionized, the mean free path of hard UV photons will increase and the spectrum of the UV background will become harder. Songaila & Cowie \[Songaila & Cowie 1996\] and Songaila \[Songaila 1998\] have reported a rapid increase with decreasing redshift of the Si iv/C iv ratio at $`z\simeq 3`$, which they interpreted as evidence for a sudden reionization of He ii. However, Boksenberg et al. \[Boksenberg, Sargent & Rauch 1998\] found only a gradual change with redshift. The interpretation of this metal line ratio is complicated because local stellar radiation is likely to be important \[Giroux & Shull 1997\]. It should be kept in mind that these three different types of observations probe different physical structures. Our results apply to density fluctuations around the cosmic mean, the effective optical depth depends mostly on the neutral fraction in the voids and the metal line ratios probe the high density peaks. These structures will probably not be ionized simultaneously. After the ionization front breaks through the haloes surrounding the source of He ii ionizing photons, e.g. a quasar, it will propagate quickly into the voids. The filaments, where the recombination rate is much higher, will get ionized more slowly, starting from the outside \[Miralda-Escudé, Haehnelt & Rees 2000, Gnedin 2000\]. The IGM will still be optically thick to He ii Ly$`\alpha `$ photons when only a small neutral fraction remains in the voids. Hence a drop in the He ii Ly$`\alpha `$ optical depth at $`z\simeq 3`$ would suggest that the reionization of the voids, which cover most of the volume, is complete \[Miralda-Escudé 1998\]. It is important to note that the peak in $`T_0`$ at $`z\simeq 3`$ does not imply that the temperature of the general IGM reaches a maximum. In fact, our results imply that the temperature of slightly overdense gas ($`\delta \simeq 2`$) is almost constant, because the slope of the equation of state, $`\gamma `$, is at a minimum when $`T_0`$ is at a maximum.
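This near-constancy follows directly from the power-law equation of state, $`T=T_0(1+\delta )^{\gamma -1}`$. A quick numerical illustration (the $`T_0`$ and $`\gamma `$ values below are made up, but follow the qualitative trend measured here):

```python
def T_delta(T0, gamma, delta):
    """Temperature at density contrast delta for T = T0*(1+delta)**(gamma-1)."""
    return T0 * (1.0 + delta) ** (gamma - 1.0)

# T0 falls while gamma rises: the temperature at delta = 2 barely moves
for z, T0, gamma in [(4.0, 1.50e4, 1.2), (2.0, 1.08e4, 1.5)]:
    print(f"z={z}: T0={T0:.0f} K, T(delta=2)={T_delta(T0, gamma, 2.0):.0f} K")
```

With these numbers $`T_0`$ drops by almost 30 per cent while $`T_{\delta =2}`$ is essentially unchanged.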
Detailed modeling, probably in the form of large hydrodynamical simulations which include radiative transfer, is required to see whether the various observational constraints can be fit into a consistent picture. However, the ingredients necessary to explain our discovery of a peak in the temperature of the low density IGM at $`z\simeq 3`$ are clear even from our crude model: a softer background at high redshift to delay helium reionization and enhanced heating rates compared to the optically thin limit. Once the evolution of helium heating is understood, the measurements of the temperature at higher redshifts can be used to constrain the epoch of hydrogen reionization. Finally, we would like to note that because of their hard spectrum, quasars tend to ionize helium shortly after hydrogen, although the delay depends on the clumpiness of the IGM \[Madau, Haardt & Rees 1998\]. It may therefore be difficult to postpone the reionization of helium until $`z\simeq 3`$ if quasars were responsible for reionizing hydrogen at $`z>5`$. Hence the mounting evidence for helium reionization at $`z\simeq 3`$ suggests that hydrogen was reionized by stars.

## ACKNOWLEDGMENTS

We would like to thank Bob Carswell for letting us use the spectrum of Q1100$`-`$264 and for helping us with VPFIT. We are also grateful to Sara Ellison for letting us use the HIRES spectra of APM 08279+5255. We thank Martin Haehnelt, Lam Hui, Jordi Miralda-Escudé and Martin Rees for stimulating discussions. JS thanks the Isaac Newton Trust, St. John’s College and PPARC for support, WLWS acknowledges support from NSF under grant AST-9900733 and GE thanks PPARC for the award of a senior fellowship. Research was conducted in cooperation with Silicon Graphics/Cray Research utilising the Origin 2000 supercomputer COSMOS, which is a UK-CCC facility supported by HEFCE and PPARC. This work has been supported by the TMR network on ‘The Formation and Evolution of Galaxies’, funded by the European Commission.
# Coronal heating and emission mechanisms in AGN ## 1. Introduction Observations of the central regions of AGN show that a significant fraction of their bolometric luminosity comes out in hard X-rays (from $`0.1`$ keVall the way up to a few 100 keV) and sometimes up to 1 GeV. According to the standard paradigm, AGN are powered by accretion onto their central black hole. An accretion disk around a supermassive black hole (in an AGN) leads to the production of a strong optical/ultraviolet continuum, the so–called ’blue bump’. Such a component is attributed to quasi-blackbody emission (e.g. see Koratkar & Blaes 1999 for relevant modifications to the blackbody spectrum for an accretion disk). The effective absorptive optical depth in a disk is typically $`\tau >>1`$ which implies that photons are close enough to being in thermal equilibrium with the electrons to produce a blackbody–like spectrum. The luminosity of this component scales as $`L\pi r_g\sigma T^4`$ where $`r_g`$ is the Schwarzchild radius. This implies gas temperatures in the disk of the order of $$T5\times 10^5L_{44}^{1/4}\left(\frac{L}{L_{Edd}}\right)^{1/2}K,$$ (1) where $`L=10^{44}L_{44}`$. $`L_{Edd}`$ is the Eddington luminosity and the temperature decreases with increasing luminosity (or increasing black hole mass). It is evident from Eqn. 1 that if AGN generated their energy solely by accretion of matter in thermodynamic equilibrium, the highest temperatures achieved would be of the order of $`10^5\mathrm{K}`$ and negligible X-ray emission would be expected. Phenomenologically, therefore, we know that there must be an efficient mechanism for transferring the energy released in an accretion disk into a plasma component that is far from thermodynamic equilibrium with the ambient radiation and that radiates the high energy portion of AGN spectrum. Although there are many uncertainties concerning how such energy transfer occurs, we know there must be mechanisms that can sustain the presence of a very hot plasma near an accretion disk: e.g. the Sun which has a the surface temperature of only $`5500\mathrm{K}`$, is surrounded by a magnetically-dominated corona with a temperature of $`23\times 10^6\mathrm{K}`$. Here, I address the issue of why we expect hot electrons to be present in AGN. I will discuss how AGN coronae formation can be understood as a direct consequence of the internal dynamics of an accretion disk where shock-like events (magnetic reconnection and MHD processes) are responsible for heating the coronal plasma. I will examine the relevant radiative processes in AGN that are responsible for the production of the X-ray emission that we observe. In particular I will discuss the relevance of these processes for both AGN coronae and hot advection-dominated accretion flows (ADAFs) and their relative importance for different regimes of source luminosities. ### 1.1. The X-ray emission Before discussing in more detail the proposed picture of coronae formation I will briefly review the observed characteristics of the X-ray emission in AGN and the information that these give when trying to construct a model for coronae. Assuming for now the existence of a hot plasma (see Section 2), it is well established that the X-ray continuum in AGN can be explained by thermal Comptonization of the soft UV radiation (e.g Haardt & Maraschi 1993). There is evidence also that this X-ray continuum is reprocessed in a cold medium (e.g. 
the accretion disk) and gives rise to a reflection bump at around $`30\mathrm{keV}`$ and a broad iron (Fe $`K\alpha `$) emission line at 6.4 keV. The presence of these features in the spectrum places constraints on the geometry of the X-ray emitting region and tells us that the hot plasma has to be situated above the colder accretion disk. Also, the different ratios of soft luminosity (attributed to the accretion disk) to hard X-ray luminosity imply that the hot coronal plasma is not a slab but consists of localized active regions (e.g. Haardt, Ghisellini & Maraschi 1994). This is also consistent with the characteristically short X-ray variability timescales observed in Seyfert galaxies (as short as a few hours), which imply that enormous amounts of energy are released in a very short time in flare-like events. Finally, the average X-ray spectra of Seyfert galaxies show a high-energy cutoff, usually above 100 keV, which can be reproduced quite well by models of thermal Comptonization. The absence of a conspicuous electron pair annihilation line indicates that most of the hot electrons in a corona are thermal. Whatever the processes that operate in coronae to heat the plasmas are, they do not accelerate a large number of electrons. Alternatively, mechanisms exist for efficiently thermalizing the electron population (e.g. Svensson & Ghisellini 1998).

## 2. Accretion disk coronae

In recent years significant progress has been made in understanding accretion disks and how angular momentum transport operates, with the identification of the Balbus–Hawley instability (e.g. Balbus & Hawley 1997) for weakly magnetized disks. Thanks to this fundamental progress, we can now think more confidently of coronae formation as a direct consequence of the internal dynamics of an accretion disk, much the same way the solar corona is thought to be heated by dynamical processes lower in the Sun’s atmosphere. More specifically, Balbus & Hawley have identified an instability in weakly magnetized accretion flows that is responsible for the transport of angular momentum. The way such a magneto-rotational instability works is by producing strong amplification of the seed magnetic fields and in this way channelling the energy present in the system into magnetic energy (see Figure 1). The formation of a corona can be understood as an efficient way for a disk to saturate the Balbus-Hawley instability and to dissipate the accretion energy/angular momentum into particles, which can then radiate it away. The built-up magnetic energy is dissipated into particles locally in the disk and partly builds up strong magnetic flux tubes, leading to a net vertical flux of magnetic energy which inevitably escapes from the disk to form a magnetically-dominated corona. The idea that strong flux tubes become buoyant and are expelled from the disk to form magnetic coronae has been proposed in the past (Stella & Rosner 1984, Coroniti 1981, Galeev, Rosner & Vaiana 1979), but can only now be integrated in a deeper understanding of accretion phenomena.

### 2.1. Coronal heating: magnetic reconnection

Within the context of such a model, the question of how the coronal plasma heats up to X-ray emitting temperatures can be assessed. Such coronae (e.g. ensembles of flux tubes) contain a very small amount of mass and are magnetically dominated. By definition, the magnetic flux tubes become buoyant when $`\beta \equiv 8\pi P/B^2>1`$, where $`B`$ is the magnetic field strength and $`P`$ the gas pressure in the disk.
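As a rough numerical check of Eq. (1) above (with the exponents as reconstructed there), a minimal sketch:

```python
def disk_temperature(L44, eddington_ratio):
    """Characteristic disk blackbody temperature of Eq. (1):
    T ~ 5e5 * L44**(-1/4) * (L/L_Edd)**(1/2) K."""
    return 5.0e5 * L44 ** -0.25 * eddington_ratio ** 0.5

# a Seyfert-like case: L = 1e44 erg/s radiating at 10% of Eddington
print(disk_temperature(1.0, 0.1))  # ~1.6e5 K: far too cool for hard X-rays
```

This makes the point quantitative: even a luminous disk stays near $`10^5`$ K, so the observed hard X-rays require a separate hot component.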
The typical speed of the rising flux tubes is then given by their Alfvén speed, $`V_A=B/\sqrt{4\pi \rho }>c_\mathrm{s}`$, and is by definition always larger than the relevant sound speed ($`c_\mathrm{s}`$), implying (in a simple view) that the buoyant magnetic energy has to be dissipated in shocks. So, whereas the core of the disk is usually dominated by subsonic turbulence, the coronal gas above the disk is, inevitably, supersonic. More realistically we would expect this energy to be dissipated in shocks at reconnection sites, where strong impulsive heating occurs when magnetic field lines are brought together. Reconnection can occur either ‘spontaneously’ in a given magnetic loop or can be ‘driven’ when more than one magnetic tube is brought together. A reconnection site is thought to host a collection of particle acceleration and heating processes (e.g. direct Joule heating near the X-point, slow shock acceleration, Fermi magnetic mirroring in turbulent outflows, conduction, downstream fast shocks, etc.), but the detailed physics of how reconnection occurs is still an unsolved MHD problem (for the case of Petschek reconnection). Although the general physical picture of accretion disk coronae described above provides us with an understanding of why we expect to find hot plasmas above accretion disks, there remain many uncertainties. These include the question of which pressure is relevant for magnetic field amplification and buoyancy: it is not clear whether magnetic fields build up to equipartition such that $`B^2/8\pi \simeq P_{tot}`$ or $`B^2/8\pi \simeq P_{gas}`$. Also, it is uncertain what fraction of the magnetic energy is dissipated into $`e^{-}`$ and $`p`$. It is clear that when energy dissipation occurs one needs to treat the plasma as a 2-temperature medium: different wave-particle interactions will heat electrons and protons differently. One can construct 2-T AGN coronae if the protons contain most of the energy (Di Matteo, Blackman & Fabian 1997), but no clear-cut arguments can be made to support their plausibility over one-temperature models. The same problems exist in the case of ADAF plasmas, where the 2-T condition is a crucial assumption but is, at this stage, yet to be proven. Coronal plasmas are often collisionless. It is not clear, therefore, whether the electrons are thermalized or whether dissipative processes can accelerate particles efficiently. In other words, the importance of direct heating versus acceleration in either coronal or ADAF plasmas cannot be determined. In AGN coronae or ADAFs, $`V_A`$ can approach $`c`$, and one should really consider the effects of relativistic MHD. Such effects are usually not taken into account. Important recent results from numerical simulations (Miller & Stone 1999) do indeed show the formation and heating of magnetized coronae above accretion disks. In particular, Miller & Stone have shown that when weak $`B`$ fields are amplified in the disk via MHD turbulence driven by the Balbus–Hawley instability, some of the magnetic energy is dissipated locally but a good fraction escapes due to buoyancy and forms a strongly magnetized corona above the disk. Most of the energy in their simulations is dissipated at a few scale heights above the disk, and strong shocks are continuously produced, making the corona hot up to X-ray emitting temperatures.
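The claim that the coronal flow is supersonic follows from the plasma $`\beta `$: since $`V_A^2/c_\mathrm{s}^2=B^2/(4\pi \gamma P)=2/(\gamma \beta )`$, a magnetically dominated corona ($`\beta <1`$) necessarily has $`V_A>c_\mathrm{s}`$. A one-line check (our own illustration):

```python
import numpy as np

def alfven_over_sound(beta, gamma=5.0 / 3.0):
    """V_A/c_s for plasma beta = 8*pi*P/B**2: (V_A/c_s)**2 = 2/(gamma*beta)."""
    return np.sqrt(2.0 / (gamma * beta))

for beta in (0.1, 1.0, 10.0):
    print(beta, alfven_over_sound(beta))  # 3.46, 1.10, 0.35
```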
Their results on the impulsive heating of coronal plasmas are in accordance with simple analytical estimates (Di Matteo 1998) on the occurrence of an ion–acoustic instability, associated with slow shocks in Petschek magnetic reconnection, in flare-like events in a magnetically-dominated corona. Such an instability can be shown to result in a violent release of energy, heating the coronal plasma to canonical X-ray emitting temperatures (of a few $`\times 10^9`$ K).

## 3. Emission mechanisms

In the previous sections I have discussed the vertical structure of an accretion disk and how its internal dynamics can lead to the formation of a highly-dynamic, magnetically dominated and heated corona. Both in AGN coronae and in ADAFs (also magnetized and with hot $`\sim 10^9\mathrm{K}`$ electrons; see Narayan, Quataert & Mahadevan 1999 for a recent review) the relevant interactions and associated emission mechanisms are: particle–photon $`\to `$ Compton processes; particle–magnetic field $`\to `$ cyclo/synchrotron emission; and particle–particle $`\to `$ bremsstrahlung emission. Inverse-Compton scattering of disk photons off the hot electrons is usually the dominant process in most AGN. The importance of inverse-Compton processes scales as $`U_{rad}\mathrm{exp}(y)`$, where $`y\equiv 4(kT/m_ec^2)\tau `$, $`\tau =n_e\sigma _Tr`$ and the energy density $`U_{rad}\sim L/(R^2c)`$ is usually attributed to the external soft photon flux coming from the disk. Bremsstrahlung instead scales as $`n_e^2T^{1/2}`$, where $`n_e`$ is the electron number density, and dominates IC only in very low luminosity objects, e.g.

$$IC>BREM\iff \frac{L}{L_{Edd}}>\frac{10^{-5}}{\sqrt{\theta }}\frac{r}{r_s}$$ (2)

(see also Section 3.2), where $`\theta `$ is the dimensionless electron temperature.

### 3.1. Synchrotron emission and Comptonization in coronae and ADAFs

Both in the case of an AGN corona and an ADAF, magnetic fields are close to their equipartition values and synchrotron emission should be taken into account. In both cases electrons are considered to be thermal. Thermal synchrotron is heavily self-absorbed up to a frequency $`\nu _s\propto T^2B`$. Equipartition arguments (in the case of a supermassive black hole with $`M\sim 10^7\mathrm{M}_{\odot }`$) imply $`B^2/8\pi \simeq P_{gas}+P_{rad}`$, i.e. $`B\simeq 10^3`$–$`10^5`$ Gauss, and for canonical corona temperatures of $`10^9\mathrm{K}`$, synchrotron emission peaks in the Infrared/Optical bands (Di Matteo, Celotti & Fabian 1997; see Figure 2a). The synchrotron soft photon flux is inverse-Compton scattered up to X-ray energies by the hot electrons (dotted line in Figure 2a). In most cases, though, synchrotron inverse Compton does not dominate the X-ray emission because the energy density due to the soft disk photon field dominates the scattering. Due to the high self-absorption, the synchrotron energy density $`U_{syn}<B^2/8\pi `$, which, given the equipartition arguments, implies $`U_{syn}<U_{disk}\simeq P_{rad}`$ (Fig. 4a), and Comptonization of the soft disk photons dominates the X-ray emission. Given the strong dependences of thermal synchrotron emission on both temperature and $`B`$, and the very dynamical structure of the corona, estimates of an ‘average’ $`T`$ and $`B`$, which are usually employed in these calculations, are likely to be unrealistic.
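A small sketch of these scalings (Python; the form of the threshold follows Eq. (2) as reconstructed above, so treat the numbers as indicative only):

```python
import numpy as np

def compton_y(theta, tau):
    """Compton parameter of a thermal plasma, y ~ 4*theta*tau (for tau < 1);
    the Comptonized power scales roughly as U_rad * exp(y)."""
    return 4.0 * theta * tau

def ic_beats_brems(L_over_LEdd, theta, r_over_rs):
    """Eq. (2): inverse Compton dominates bremsstrahlung when
    L/L_Edd > 1e-5 * (r/r_s) / sqrt(theta)."""
    return L_over_LEdd > 1e-5 * r_over_rs / np.sqrt(theta)

# a corona with kT ~ 100 keV (theta ~ 0.2) and tau ~ 0.3 at r ~ 10 r_s
print(compton_y(0.2, 0.3))            # ~0.24
print(ic_beats_brems(0.01, 0.2, 10))  # True: Seyfert-like, IC-dominated
print(ic_beats_brems(1e-6, 0.2, 10))  # False: very dim, brems-dominated
```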
As shown by the above relations, the importance of synchrotron emission and its inverse-Compton component might be highly enhanced if flares are at different temperatures and some are hotter and/or have higher magnetic fields than the values usually assumed from global arguments. It is plausible that a non-thermal population of particles could be present, which would also significantly enhance the synchrotron and its IC component (but this has not been taken into account in current models). In contrast, in an ADAF the synchrotron photons are, in most cases, the only source of soft photons for Comptonization (even if the ADAF is matched to a thin disk at large distances, as in the models by Esin et al. 1997 and Quataert et al. 1999, its contribution is negligible; e.g. see Figure 3). Comptonization of the synchrotron component in an ADAF can explain the observed X-ray emission in some low-luminosity AGN. Figure 3 shows the case for M81 and NGC 4579, both of which have an estimated mass for the central black hole, detected hard X-ray emission, and optical/UV emission too low to allow for the presence of a geometrically thin, optically thick accretion disk close to the black holes (Quataert et al. 1999). In general, in a standard ADAF, Comptonization becomes important for $`\dot{m}\lesssim \dot{m}_{crit}`$, above which the hot flow cannot exist. In the high $`\dot{m}`$ regime considered here, the characteristic electron scattering optical depth $`\tau `$ of the ADAF becomes of order unity, since $`\tau \propto \dot{m}`$. As $`\tau `$ decreases with decreasing $`\dot{m}`$, bremsstrahlung becomes the dominant process (see Figure 3).

### 3.2. Bremsstrahlung emission in elliptical galaxy nuclei

Equation 2 shows that bremsstrahlung emission can only be important in sources with very low luminosities (or low radiative efficiencies). The nuclear regions of elliptical galaxies provide excellent environments in which to study the physics of low-luminosity accretion. There is now strong evidence, from high-resolution optical spectroscopy and photometry, that black holes with masses of $`10^8`$–$`10^{10}\mathrm{M}_{\odot }`$ reside at the centers of bulge dominated galaxies, with the black hole mass being roughly proportional to the mass of the stellar component (e.g. Magorrian et al. 1998). X-ray studies of elliptical galaxies also show that they possess extensive hot gaseous halos, which pervade their gravitational potentials. Given the large black hole masses inferred, some of this gas must inevitably accrete at rates which can be estimated from Bondi’s spherical accretion theory. Such accretion should, however, give rise to far more nuclear activity (e.g. quasar-like luminosities) than is observed, if the radiative efficiency is as high as 10 per cent (e.g. Fabian & Canizares 1988), as is generally postulated in standard accretion theory. Accretion with such high radiative efficiency need not be universal, however. As suggested by several authors (Rees et al. 1982; Fabian & Rees 1995), the final stages of accretion in elliptical galaxies may occur via an advection-dominated accretion flow (ADAF; Narayan & Yi 1995, Abramowicz et al. 1995) at roughly the Bondi rates. Within the context of such an accretion mode, the quiescence of the nuclei in these systems is not surprising; when the accretion rate is low, the radiative efficiency of the accreting (low density) material will also be low. Other factors may also contribute to the low luminosities observed. As discussed by Blandford & Begelman (1999; and emphasized observationally by Di Matteo et al.
1999a), and shown numerically by Stone, Pringle & Begelman (1999), winds may transport energy, angular momentum and mass out of the accretion flows, resulting in only a small fraction of the material supplied at large radii actually accreting onto the central black holes. If the accretion from the hot interstellar medium in elliptical galaxies (which should have relatively low angular momentum) proceeds directly into the hot, advection-dominated regime, and low-efficiency accretion is coupled with outflows (Di Matteo et al. 1999a), the question arises of whether any of the material entering into the accretion flows at large radii actually reaches the central black holes. The present observational data generally provide little or no evidence for detectable optical, UV or X-ray emission associated with the nuclear regions of these galaxies. The discovery of hard X-ray emission from a sample of six nearby elliptical galaxies (Allen, Di Matteo & Fabian 1999), including the dominant galaxies of the Virgo, Fornax and Centaurus clusters (M87, NGC 1399 and NGC 4696, respectively), and NGC 4472, 4636 and 4649 in the Virgo cluster, has important implications for the study of quiescent supermassive black holes. The ASCA data for all six sources provide clear evidence for hard, power-law emission components, with photon indices in the range $`\mathrm{\Gamma }=0.6`$–1.5 and intrinsic 1–10 keV luminosities of $`2\times 10^{40}`$–$`2\times 10^{42}`$ erg s$`^{-1}`$ (Allen et al. 1999). This potentially new class of accreting X-ray source has X-ray spectra significantly harder than Seyfert nuclei and bolometric luminosities relatively dominated by their X-ray emission. We argue that the X-ray power law emission is most likely to be due to accretion onto the central supermassive black holes, via low-radiative efficiency accretion (Allen et al. 1999, Di Matteo et al. 1999b). The broad band spectral energy distributions for these galaxies, which accrete from their hot gaseous halos at rates comparable to their Bondi rates, can be explained by low-radiative efficiency accretion flows in which a significant fraction of the mass, angular momentum and energy is removed from the flows by winds. The observed suppression of the synchrotron components in the radio band (Di Matteo et al. 1999a; excluding the case of M87) and the systematically hard X-ray spectra, which are interpreted as thermal bremsstrahlung emission, support the conjecture that significant mass outflow is a natural consequence of systems accreting at low radiative efficiencies (see the representative cases of NGC 4649 and M87 in Figure 4 and, for all of the objects, Di Matteo et al. 1999b). The presence of outflows in the hot flows completely suppresses the importance of Comptonization in ADAFs, and bremsstrahlung becomes (irrespective of the accretion rate, cf. Figure 3) the dominant X-ray emission mechanism. A representation of the effects of outflows on the ADAF spectra is shown in Figure 4.

## ACKNOWLEDGEMENTS

TDM acknowledges support for this work provided by NASA through Chandra Fellowship grant number PF8-10005 awarded by the Chandra Science Center, which is operated by the Smithsonian Astrophysical Observatory for NASA under contract NAS8-39073.

## REFERENCES

Abramowicz M.A., Chen X., Kato S., Lasota J.P., Regev O., 1995, ApJ, 438, L37
Allen S.W., Di Matteo T., Fabian A.C., 1999, MNRAS, in press
Balbus S.A., Hawley J.F., 1997, in Accretion Processes in Astrophysical Systems: Some Like it Hot!
Eighth Astrophysics Conference, College Park, MD, October 1997, edited by Stephen S. Holt and Timothy R. Kallman, AIP Conference Proceedings 431, p. 79
Blandford R.D., Begelman M.C., 1999, MNRAS, 303, L1
Coroniti F.V., 1981, ApJ, 244, 587
Di Matteo T., 1998, MNRAS, 299, L15
Di Matteo T., Blackman E.G., Fabian A.C., 1997, MNRAS, 291, L23
Di Matteo T., Celotti A., Fabian A.C., 1997, MNRAS, 291, 805
Di Matteo T., Fabian A.C., Rees M.J., Carilli C.L., Ivison R.J., 1999a, MNRAS, 305, 492
Di Matteo T., Quataert E., Allen S.W., Narayan R., Fabian A.C., 1999b, MNRAS, in press
Fabian A.C., Canizares C.R., 1988, Nature, 333, 829
Fabian A.C., Rees M.J., 1995, MNRAS, 277, L55
Ghisellini G., Haardt F., Svensson R., 1998, MNRAS, 297, 348
Galeev A.A., Rosner R., Vaiana G.S., 1979, ApJ, 229, 318
Haardt F., Maraschi L., 1993, ApJ, 413, 507
Haardt F., Maraschi L., Ghisellini G., 1994, ApJL, 432, L92
Koratkar A., Blaes O., 1999, Publ. Astron. Soc. of Pacific, 111, 1
Magorrian J. et al., 1998, AJ, 115, 2285
Lee J.C., Fabian A.C., Reynolds C.S., Brandt W.N., Iwasawa K., 1999, MNRAS, submitted
Miller K.A., Stone J.M., 1999, ApJ, submitted
Narayan R., Yi I., 1995, ApJ, 444, 231
Narayan R., Mahadevan R., Quataert E., 1998, in Abramowicz M.A., Bjornsson G., Pringle J.E., eds, Theory of Black Hole Accretion Disks, Cambridge University Press, p. 148
Quataert E., Di Matteo T., Narayan R., Ho L., 1999, ApJL, 525, L89
Rees M.J., Phinney E.S., Begelman M.C., Blandford R.D., 1982, Nature, 295, 17
Reynolds C.S., Fabian A.C., Inoue H., 1995, MNRAS, 276, 1311
Stone J.M., Pringle J.E., Begelman M.C., 1999, MNRAS, in press
Stella L., Rosner R., 1984, ApJ, 277, 312
# The Vela pulsar proper motion revisited with HST astrometryBased on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA, Inc. under contract No NAS 5-26555. ## 1 Introduction The Vela pulsar is one of the isolated neutron stars with the widest observational database, spanning from radio waves to high-energy $`\gamma `$-rays. Nevertheless, the value of its distance is still in debate. The canonical value of 500 parsec, derived by Milne (1968) from the radio signals dispersion measure, has been recently questioned by several independent investigations. Studies of both the kinematics of the host supernova remnant (Cha et al. 1999; Bocchino et al. 1999) and constraints on the neutron star radius imposed by the pulsar soft X-ray spectrum (Page et al. 1996) have suggested a significant downward revision of the distance. In both cases, a value of $``$ 250 parsec appears more likely than the canonical one. A model-free evaluation of the distance which could settle the question can be obtained only by measuring the annual parallax of the pulsar. To this aim we have been granted a triplet of consecutive HST/WFPC2 observation to be performed six months apart at the epochs of the maximum parallactic displacement. In this paper we present a first result of our program: an accurate re-assessment of the pulsar proper motion. This is another open point in the Vela pulsar phenomenology: although certainly present, its actual value is still uncertain. Previous estimates obtained through radio and optical observations led to conflicting results (see Tab.1 for a summary), spanning from 38 mas yr<sup>-1</sup> (Ögelman et al.1989) to 116 mas yr<sup>-1</sup> (Fomalont et al. 1992). These discrepancies are due both to the rather poor angular resolution of the first optical images of the field, which reduced the accuracy of relative astrometry, and to the timing irregularities of the Vela pulsar signal, which possibly affected the reliability of radio positions. However, the newly operational southern VLBI has already been used on the Vela pulsar yielding preliminary results of vastly improved accuracy (Legge, 1999). ## 2 The data analysis The best way to gauge the angular displacement of an object between different epochs is to perform relative optical astrometry measurements. This needs: * a set of “good” reference stars, accurately positioned, to provide the relative reference frame for each image * a reliable procedure to align the reference frames In our relative astrometry analysis we have used all the images of the Vela pulsar field collected so far by the HST (see Tab.2). The observations, all taken through the F555W “V filter”, have been obtained either using the original WFPC and the WFPC2, with the pulsar positioned in one of the WFC chips or in the PC one, respectively. Observation #1 has been retrieved from the ST-ECF database and recalibrated on-the-fly by the archive pipeline using the most recent reference data and tables, while observations #2 and #3 were obtained as part of our original parallax program, included in cycle 6 but never completed. The program is now being repeated in cycle 8 and image #4 is indeed the first of the new triplet of WFPC2 observations. For each epoch, cosmic-ray free images were obtained by combining coaligned exposures. 
The choice of reference stars was limited to the chip containing the pulsar optical counterpart (see Caraveo & Mignani 1999 for a qualitative discussion of the inter-chip astrometry). 25 common stars were selected in the PC images of 1997, 1998 and 1999; 19 of them were identified in the 1993 WFC image. This defines our set of reference stars. Their coordinates were calculated by 2-D Gaussian fitting of the intensity profile. The evaluation of the positioning errors was conservative; for each reference star the fit was repeated several times on centering regions of growing areas, until the errors showed no more dependence on the background conditions (see Caraveo et al. 1996). Uncertainties on the centroid positions were typically of order 0.02$`÷`$0.06 pixel (i.e. 1$`÷`$3 mas) per coordinate in the WFPC2 images and 0.03$`÷`$0.06 pixel (i.e. 3$`÷`$6 mas) per coordinate in the WFC one. The values of the coordinates of the reference stars have then been corrected for the effects of the significant geometrical distortion of the WFC and WFPC2 CCDs. This correction has been applied following two different mappings of the instrument field of view: (i) the solution determined by Gilmozzi et al. (1995), implemented in the STSDAS task metric, and (ii) the solution determined by Holtzman et al. (1995). The procedures turned out to be equivalent, with the rms residuals on the reference star coordinates after image superposition (see below) consistent within a few tenths of a mas. As a reference for the image superposition, we have chosen the 1997 one. Given the abundance of reference stars, the alignment has been performed following the traditional astrometric approach, consisting of a linear transformation with 5 free parameters, i.e. 2 independent translation factors, 2 scale factors for $`X`$ and $`Y`$ and a rotation angle. The residuals of the reference star positions clustered around 0 and appeared randomly distributed on the field of the image, showing no systematic effect. Thus, we are confident that our analysis is bias free and reliable. As expected, the average rms residual on the reference star positions is higher in the WFC-to-PC superposition (of order 4 mas) than in the PC-to-PC case (of order 2 mas). We note that in all cases the relative orientation of the images coincides, within a few hundredths of a degree, with the difference between the corresponding telescope roll angles. The alignment procedure has been repeated after applying different image sharpening algorithms (e.g. Gaussian filtering), yielding very similar results.

## 3 The proper motion

The displacement of the Vela pulsar position can be immediately appreciated in the isophotal plot of Fig. 2, which shows the zoomed superposition of all the frames taken at the different epochs. To measure the pulsar proper motion, we have performed linear fits to its RA/Dec displacements vs time. As a first step, we used all the available points to have a longer time span. However, owing to the coarser angular resolution of the 1993 image, taken with the WFC, almost indistinguishable results are obtained using only the 1997-1999 points, all obtained with the PC. We also tried a direct comparison of the 1997 and 1999 points, taken exactly on the same day of the year, i.e. with identical parallax factors, which should lead to the clearest measurement of the proper motion.
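A compact sketch of the plate solution and of the proper-motion fit derived from it in the next section (Python; the names and the toy interfaces are ours, not the authors’ pipeline):

```python
import numpy as np
from scipy.optimize import least_squares

def align(ref, obs):
    """Five-parameter transformation (2 shifts, 2 scales, 1 rotation)
    mapping observed reference-star positions onto the reference frame."""
    def model(p, xy):
        tx, ty, sx, sy, th = p
        c, s = np.cos(th), np.sin(th)
        x, y = xy.T
        return np.column_stack([tx + sx * (c * x - s * y),
                                ty + sy * (s * x + c * y)])
    sol = least_squares(lambda p: (model(p, obs) - ref).ravel(),
                        x0=[0.0, 0.0, 1.0, 1.0, 0.0])
    rms = np.sqrt(np.mean(sol.fun ** 2))
    return sol.x, rms

def proper_motion(epochs, ra_mas, dec_mas):
    """Linear fits of the aligned RA/Dec displacements versus epoch;
    returns (mu_alpha*cos(dec), mu_delta, total mu, position angle)."""
    mu_a = np.polyfit(epochs, ra_mas, 1)[0]
    mu_d = np.polyfit(epochs, dec_mas, 1)[0]
    mu = np.hypot(mu_a, mu_d)
    pa = np.degrees(np.arctan2(mu_a, mu_d)) % 360.0
    return mu_a, mu_d, mu, pa
```

For the components quoted below ($`\mu _\alpha \mathrm{cos}\delta =-46`$, $`\mu _\delta =+24`$ mas yr$`^{-1}`$) this gives $`\mu \simeq 52`$ mas yr$`^{-1}`$ at position angle $`\simeq 297^{\circ }`$; the standard conversion $`v_t\simeq 4.74\times 10^3\mu [\mathrm{mas}\mathrm{yr}^{-1}]d[\mathrm{pc}]`$ km s$`^{-1}`$ then yields roughly 130 km s$`^{-1}`$ at 500 pc.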
Since all the proper motion values obtained through the different image processing/frame superposition/displacement fits are consistent within $`2`$ mas yr<sup>-1</sup> of each other, we have conservatively assumed the value of 2 mas yr<sup>-1</sup> (per coordinate) as our overall error estimate. Thus, our final estimate of the Vela pulsar proper motion is:

$$\mu _\alpha \mathrm{cos}\delta =-46\pm 2\mathrm{mas}\mathrm{yr}^{-1}$$

$$\mu _\delta =24\pm 2\mathrm{mas}\mathrm{yr}^{-1}$$

for a total proper motion of:

$$\mu =52\pm 3\mathrm{mas}\mathrm{yr}^{-1}$$

with a position angle of $`297^{\circ }\pm 2^{\circ }`$.

## 4 Conclusions

Analysis of HST data has yielded the most accurate measure of the Vela pulsar proper motion. Our value is in excellent agreement with that of Nasuti et al. (1997), but the errors are now reduced. What is worth mentioning here is that our data cover a time span much shorter than the 20-year interval of the previous work. Indeed, two years of HST observations are more than enough to improve on 20 years of observations from the ground. Once more, this is a clear demonstration of the excellent potential of HST astrometry. Transforming the proper motion into a transverse velocity requires knowledge of the object’s distance ($`v_t=4.74\mu d`$ km s<sup>-1</sup>, with $`\mu `$ in arcsec yr<sup>-1</sup> and $`d`$ in parsec). At the canonical, although uncertain, 500 pc distance the implied velocity would be $`\sim 130`$ km s<sup>-1</sup>, somewhat on the low side for the fast-moving pulsar family (Lorimer, Bailes & Harrison 1997). A reduction of the pulsar distance would similarly reduce the transverse speed to an embarrassingly low value. Nailing down the pulsar proper motion is the first step toward assessing the annual parallactic displacement of the source. Our next observations will, hopefully, yield a direct measurement of the Vela pulsar distance.

###### Acknowledgements.

Part of this work was done at the ST-ECF in Garching. ADL wishes to thank the ECF for the hospitality and support during that period. ADL is sincerely grateful to the Collegio Ghislieri of Pavia (Italy), to the Pii Quinti Sodales association of Pavia and to the Maximilianeum Stiftung of Munich, which offered truly essential support and warm hospitality during his stay in Germany.
# REVIEW OF THEORY TALKS AT XXIX INTERNATIONAL SYMPOSIUM ON MULTIPARTICLE DYNAMICS

## 1 Introduction

The main trend of recent years in theoretical applications of QCD to multiparticle production processes, which was well represented at this Symposium, can be summarised as: ”From small to large distances, or the interplay of perturbative and nonperturbative aspects of QCD”. The basis of QCD is by now rather well established through comparison of its predictions with experiments sensitive to small distances (large virtualities or momentum transfers), which can be described using QCD perturbation theory. Large-distance dynamics, on the other hand, is still an open problem, and we cannot claim to understand QCD without solving it. Main efforts have therefore been concentrated on the investigation of different dynamical aspects of QCD for a broad variety of multiparticle production phenomena. One of the crucial problems is to understand which degrees of freedom are relevant in different processes. Are they point-like quarks and gluons, ”reggeized” quarks and gluons, or rather white (colour-singlet) objects, the Pomeron and reggeons? The plan of my talk follows the problems mentioned above. In Section 2 I shall concentrate mainly on small-distance processes and the physics of jets. In Section 3 the problem of the Pomeron and its manifestations in diffractive processes and multiparticle production will be reviewed. There has been substantial progress in this field during the last 2 years, and more than 50% of the theoretical talks at this Symposium discussed different aspects of this problem. In particular, in Section 4 I shall consider shadowing effects in small-x physics and their relation to heavy ion interactions, though I shall not discuss the field of heavy ion interactions in detail, as it was perfectly reviewed by J.Stachel. Other theoretical ideas and their applications to multiparticle production processes will be discussed in Section 5. I would like to apologize to the many speakers of this Conference whose interesting contributions I was unable to cover in this talk, and for possible misinterpretations of some results included in my summary.

## 2 Perturbative QCD, jets and power corrections

Large-distance dynamics is present in all physical processes, even in such typically ”small distance” reactions as $`e^+e^{-}`$ annihilation at large energies or deep inelastic scattering. The factorization property of QCD allows one to separate the contributions from small and large distances. For example, the cross section for jet production in hadronic collisions can be expressed as a sum of convolutions of partonic distributions in the colliding hadrons with the corresponding hard cross sections. These cross sections can be calculated in QCD perturbation theory. The dependence of the partonic distributions on the scale $`\mu `$ of a process can be determined using the renormalisation group equations and can be described perturbatively at large $`\mu `$, while the initial conditions (the values of the partonic distributions at a fixed scale $`\mu _0`$) are determined by both small- and large-distance dynamics and in general cannot be predicted by perturbative QCD. Due to the confinement of quarks and gluons in QCD, they are observed as jets of hadrons, and the transition from partons to hadrons is a necessary step in theoretical calculations. Impressive agreement of perturbative QCD with experimental data has been demonstrated at this Symposium.
HERA data on the proton structure function $`F_2`$ are well described by the QCD evolution equations over a broad region of $`Q^2`$ and provide information on the distributions of quarks and gluons at very small x. Cross sections for jet production are in agreement with PQCD calculations at the Tevatron, at HERA and in $`e^+e^{-}`$ annihilation. Infrared-safe characteristics of jets are well described by perturbative QCD if power corrections are taken into account (see below). Substantial progress in the separation of quark and gluon jets has been achieved in recent years, and the trend towards the asymptotic QCD prediction for the multiplicities of these jets,

$$\frac{\overline{n}_g}{\overline{n}_q}=\frac{C_A}{C_F}=\frac{9}{4}$$ (1)

is confirmed. Different aspects of the nonperturbative physics of jet hadronization were discussed in several talks at this Symposium. It was shown by S.Chun that a model based on the area law gives a good description of the relative yields of different hadrons. The importance of spin-spin interactions for particle production was emphasized by P.Chliapnikov. The role of nontrivial color connections between different partons, which lead to $`1/N_c^2`$ corrections to the leading planar configurations, was studied by Q.-B Xie. A model based on stochastic branching and the local parton-hadron duality (LPHD) hypothesis was developed by A.H.Chan. The LPHD hypothesis was formulated many years ago by Azimov et al. and has proved very useful for the description of some global properties of multiparticle production in hard processes. An interesting new application of LPHD was reported at this Conference by W.Ochs. He calculated the probability of events with a rapidity gap in $`e^+e^{-}`$ annihilation. At the parton level such events, with no partons over a large interval $`\mathrm{\Delta }y`$, are suppressed by a Sudakov factor. If LPHD is correct, a suppression of the same type should also exist for hadrons. It is clear that LPHD cannot be true in all situations, and it is very important to understand why it works and where it fails. First experimental indications of violations of LPHD were discussed in several talks at this Symposium. Hadronisation effects in the properties of jets are closely related to power corrections to these quantities, which were found to be essential for an accurate description of jet observables. This problem clearly reflects the interplay between PQCD and large-distance physics. There is hope that these corrections can be described in terms of a quantity $`\alpha _0(\mu )`$, the average value of the strong coupling $`\alpha _s(k^2)`$ in the region of small virtualities $`k^2<\mu ^2`$. Such an approach is useful only if $`\alpha _0(\mu )`$ is universal across different observables. The corrections were indeed found to be approximately universal for jets observed at LEP and at HERA, though the HERA experiments indicate some differences between the values of $`\alpha _0(\mu )`$ extracted from different observables. In my opinion the agreement is better than one could expect for such a simple model of power corrections. A perturbative approach to the production of events with large rapidity gaps between jets in hadronic collisions was discussed by G.Sterman. He proposed to characterize the hadronic activity in the rapidity region between the jets by the energy flow $`Q_c`$, and showed that perturbative QCD can describe the region where $`Q_c`$ is much smaller than the $`p_T`$ of the jets but larger than the characteristic hadronic scale.
Cross sections of semihard interactions in hadronic collisions increase rapidly with energy, and multiparton interactions become important at superhigh energies. This problem was addressed in several talks. Properties of multiparton distributions were discussed by D.Treleani and G.Calucci. These distributions are characterized by the momentum fractions $`x_i`$ and impact parameters $`b_i`$ of all the partons. It was pointed out that the simplest, uncorrelated distribution fails to reproduce the experimental data on the cross section for double parton interactions, and that possible correlations between valence quarks and gluons can lead to agreement with experiment. Applications of models with multiple partonic interactions to total interaction cross sections in h-h, $`\gamma h`$ and $`\gamma \gamma `$ collisions were considered by G.Pancheri, and to multiparticle production by R.Ugoccioni. W.D.Walker demonstrated that multiple interactions are needed to understand the Tevatron data on multiplicity distributions. An impressive collection of new results on interactions of real and virtual photons was presented by the LEP experiments. A fast increase of these cross sections with energy is observed at the highest energies. It is difficult to reconcile the increase observed by the L3 group for $`\gamma \gamma `$ interactions with theoretical models based on an eikonalized version of the mini-jet model. First results on the interaction of highly virtual photons provide a good testing ground for recent predictions based on NLO BFKL Pomeron calculations. Spin effects in the interaction of a virtual photon with a proton were discussed by N.Nikolaev. He showed that the simplest two-gluon-exchange diagrams for vector meson production by highly virtual photons lead to a spin structure of the $`\gamma ^{*}V`$ transition which is in agreement with HERA data (see also). In particular, the model leads to a definite pattern of violation of s-channel helicity conservation. It was also shown in this talk that, due to double-Pomeron exchange, the structure function $`g_2`$ has a singular behaviour as $`x\to 0`$. As a result, the Burkhardt-Cottingham sum rule and the Wandzura-Wilczek relations are violated.

## 3 Pomeron

The notion of the Pomeron was introduced into particle physics in the framework of Regge theory long ago. There is a revival of interest in the Pomeron problem due to the small-x physics studied at HERA. The Pomeron plays an important role in theoretical descriptions of high-energy interactions; however, there is no unique definition of this object. I shall therefore first discuss the existing definitions of the Pomeron. They can be divided into two categories: a) The Pomeron is the Regge pole with the largest intercept $`\alpha _P(0)`$ and vacuum quantum numbers. It contributes to the high-energy amplitudes of elastic scattering and other diffractive processes. In this approach multipomeron exchanges, which lead to moving cuts in the complex angular momentum plane $`j`$, also exist. They are especially important in the case of a ”supercritical” Pomeron (when $`\alpha _P(0)>1`$) to restore the unitarity of the theory and to satisfy the Froissart limit. b) The Pomeron is the singularity at $`j=1`$ (not a Regge pole in general) which satisfies the constraints of unitarity and analyticity and describes diffractive processes asymptotically. I prefer the first definition for the following reasons: i) It relates high-energy scattering to the hadronic spectrum. ii) It is natural in the $`1/N`$ expansion in QCD.
iii) The multiparticle content of the Pomeron is known (short-range correlations). iv) The Gribov reggeon diagram technique allows one to estimate the amplitudes for multipomeron exchanges, and the AGK (Abramovsky, Gribov, Kancheli) cutting rules relate them to multiparticle production processes. Such an approach leads to a successful phenomenology. It is very important to understand the dynamics of reggeons and of the Pomeron in QCD. A useful framework to classify all diagrams in QCD is the 1/N expansion, where N is either the number of colors $`N_c`$ or of light flavors $`N_f`$. In this approach the reggeons $`\rho ,\omega ,A_2,\mathrm{}`$ are connected to planar diagrams, while the Pomeron is related to cylinder-type diagrams. Diagrams with an exchange of $`n`$ Pomerons in the t-channel correspond to multicylinder configurations, which are $`\sim (1/N^2)^n`$ and are small in the large-N limit. In realistic calculations these contributions should be taken into account. This classification leads to many predictions for high-energy hadronic interactions, which are in good agreement with experiment. The calculation of reggeon and Pomeron trajectories in QCD with an account of nonperturbative effects is a difficult problem (for some recent results see below). Perturbative calculations of the Pomeron in QCD were carried out by L.Lipatov and collaborators (the BFKL Pomeron) many years ago. The Pomeron is related to a sum of ladder-type diagrams with exchange of reggeized gluons. Reggeization of gluons (as well as quarks) is an important property of QCD (at least in perturbation theory). In the leading approximation the expression for the intercept of the Pomeron is well known:

$$\mathrm{\Delta }\equiv \alpha _P(0)-1=\frac{4N_c\mathrm{ln}2}{\pi }\alpha _s$$ (2)

In this approximation it is not clear which value of $`\alpha _s`$ to use, and for $`\alpha _s=0.2`$ the intercept of the Pomeron is substantially above unity, $`\mathrm{\Delta }0.5`$. This leads to a fast increase of total cross sections $`\sim (s/s_0)^\mathrm{\Delta }`$ with energy. Arguments were given that the next-to-leading corrections to the Pomeron intercept should be large. This is connected to the observation that the average rapidity intervals in the gluon ladder are relatively small for realistic values of $`\alpha _s`$, while the LO expressions are valid for large rapidity intervals. The NLO corrections were calculated last year and strongly modify the LO result for $`\mathrm{\Delta }`$:

$$\mathrm{\Delta }=2.77\alpha _s(1-6.5\alpha _s)$$ (3)

For $`\alpha _s>0.15`$, $`\mathrm{\Delta }`$ becomes negative. It is clear that the origin of the large NLO corrections should be clearly established, and a resummation of these effects is necessary. Some results in this direction were presented at this Conference. It was pointed out by V.Kim that the results for $`\mathrm{\Delta }`$ depend on the choice of renormalization scheme and renormalization scale (the original result was obtained in the $`\overline{MS}`$ scheme). The choice of the more physical BLM scheme leads to more stable results: $`\mathrm{\Delta }`$ is practically independent of $`Q^2`$, with $`\mathrm{\Delta }0.17`$. Another approach was developed by M.Ciafaloni et al. and presented by D.Colferai. They perform a partial resummation of subleading corrections using a renormalization group analysis. In the case of two-scale processes (like DIS) an intercept of the ”hard Pomeron” $`\alpha _P(Q^2)`$ is introduced and investigated. This quantity can determine the behaviour of the structure function $`F_2`$ at large $`Q^2`$ and not too small x.
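To make the size of the NLO correction concrete, Eqs. (2) and (3) can be evaluated side by side. A minimal numerical sketch; the sample values of $`\alpha _s`$ are illustrative only:

```python
import numpy as np

N_C = 3  # number of colors

def delta_lo(alpha_s):
    """LO BFKL intercept, Eq. (2): 4 N_c ln2 / pi * alpha_s."""
    return 4 * N_C * np.log(2) / np.pi * alpha_s

def delta_nlo(alpha_s):
    """NLO estimate, Eq. (3): 2.77 alpha_s (1 - 6.5 alpha_s)."""
    return 2.77 * alpha_s * (1 - 6.5 * alpha_s)

for a in (0.10, 0.15, 0.20):
    print(f"alpha_s = {a:.2f}:  LO {delta_lo(a):+.3f}  NLO {delta_nlo(a):+.3f}")

# alpha_s = 0.20 gives Delta_LO ~ +0.53 but Delta_NLO ~ -0.17: the
# correction overwhelms the leading term, hence the need for resummation.
```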
It is pointed out that the intercept of the leading Regge pole $`\alpha _P`$ (which of course should not depend on $`Q^2`$) depends on the dynamics in the nonperturbative region. The Pomeron problem is a clear manifestation of the interplay of soft and hard mechanisms in QCD. The importance of nonperturbative effects, and especially of chiral symmetry breaking, for the dynamics of the Pomeron was emphasized by A.White. He uses the powerful tool of reggeon unitarity to investigate the interactions of reggeized gluons and quarks in QCD. An important role of the special U(1) anomaly was demonstrated. A new factorization formula for high-energy scattering amplitudes was obtained by Ya.Balitsky. It allows one to formulate an effective action, which can be used to calculate higher-order perturbative corrections to the BFKL Pomeron and unitarization effects. This approach is effective for small coupling $`\alpha _s`$ and large fields. A related approach was developed by L.McLerran and collaborators and was reported at this Conference in the talks of J.Jalilian-Marian and R.Venugopalan. It is very important to understand the role of multigluon exchanges in the asymptotic behaviour of scattering amplitudes. This problem was studied in the eikonal approximation by H.Fried. An interesting attempt to calculate the glueball spectrum using methods developed in superstring theory was presented by R.Brower. The leading Regge trajectories of the glueball spectra can be related to the Pomeron Regge pole. Recently Yu.Simonov and I calculated the glueball spectrum using the method of vacuum correlators. The predicted masses of the lowest glueballs are in perfect agreement with lattice calculations. We emphasize the importance of mixing between gluons and quarks in the low-t region. The mixing effects allow one to obtain a phenomenologically acceptable intercept of the Pomeron trajectory and lead to an interesting pattern of vacuum trajectories in the positive-t region. I think that the Pomeron in QCD has a very rich and interesting dynamics.

## 4 Shadowing effects in the small-x region and ”hard” diffraction

Experiments at HERA have clearly demonstrated a fast increase of the densities of quarks and gluons as x decreases. At very large densities the partons will interact and shadow each other. This will lead to a suppression of the fast rise of the parton densities and finally to their saturation as $`x\to 0`$. The same effects can be viewed in the target rest frame as a result of coherent multiple interactions of the initial quark-gluon fluctuation of the virtual photon with the target (note that a fluctuation of a virtual photon at small x has a very long lifetime $`\tau \sim 1/mx`$). In the framework of reggeon theory these rescatterings correspond to multipomeron exchanges in the $`\gamma ^{*}p(\gamma ^{*}A)`$ elastic scattering amplitudes. The role of multipomeron exchanges for dense parton systems was discussed by E.Gotsman and B.Gay Ducati. An equation which includes all multipomeron exchanges in the double logarithmic approximation has been obtained from the dipole picture. It coincides with the AGL equation obtained earlier using the Glauber-Mueller approach and can be considered a candidate for a unitarized evolution equation at small x. The effects of the rescatterings (or screening corrections) on the structure function $`F_2(x,Q^2)`$ were considered in detail by E.Gotsman, and the problem of saturation of the parton densities was discussed.
Rescatterings in reggeon theory are closely related to diffractive production (large rapidity gaps). Experimental results from HERA on these processes were discussed at this Symposium by A.Zhokin and K.Piotrzkowski. There has been considerable interest in processes of ”hard” diffraction in recent years. If the Pomeron is a factorizable object, then one can introduce the Pomeron structure function $`F_P(\beta ,Q^2)`$, which characterizes the distribution of quarks in the Pomeron. Experiments at HERA found that the effective intercept of the exchanged object for large rapidity gap events is $`\mathrm{\Delta }_{eff}=0.15÷0.2`$ at large $`Q^2`$. This value is larger than the corresponding values in soft diffraction. The structure function of the Pomeron was also determined by H1 and ZEUS. A model for the distribution of quarks and gluons in the Pomeron was considered by F.Hautmann. It is based on a perturbative QCD approach to this problem, with the assumption that small transverse sizes dominate the initial distribution of quarks and gluons in the Pomeron. The $`Q^2`$ dependence was calculated using standard QCD evolution. Note that for inelastic diffraction multipomeron exchanges are also present (and are even more important than for elastic amplitudes), so that in general the amplitudes of these processes are not factorizable. A simultaneous self-consistent description of both $`F_2(x,Q^2)`$ in a broad region of $`Q^2`$ and diffractive production $`F_2^{D3}(x,Q^2,\beta ,x_P)`$ is a difficult problem. First results in this direction were presented at this Symposium. The problem of ”saturation” at large $`Q^2`$ and x much smaller than available at HERA is still not completely solved. The region of $`ln(1/x)`$ and $`Q^2`$ where saturation happens should be well defined, and the question of whether $`\sigma _{\gamma ^{*}p}`$ is large ($`\sim Const`$) or still small ($`\sim 1/Q^2`$) should be settled. Results on hard diffraction at the Tevatron were also presented at this Conference. Diffractive production of jets, W-bosons, b-quarks and $`J/\psi `$-mesons is observed at the $`1\%`$ level. These signals are a factor $`\sim 5÷10`$ smaller than expected from Regge factorization. This damping is expected, due to large shadowing effects for inelastic diffraction in hadronic collisions. A similar suppression takes place for the total cross section of diffractive dissociation. Theoretical estimates show that these shadowing effects due to multipomeron exchanges influence mostly the total rate and s-dependence of diffractive processes, but have little effect on the mass or $`\beta `$ dependence. From this point of view, the very fast increase of diffractive jet production at very small $`\beta `$ observed by the CDF group looks very interesting. It contradicts the parametrisation of the gluon distribution in the Pomeron proposed by H1. The same comparison should be done for other gluon parametrisations proposed in the literature. Note that direct information on the small-$`\beta `$ behaviour of the partonic distributions in the Pomeron (especially for gluons) is practically absent at HERA. An important testing ground for Regge factorization and its violation is provided by the central production of jets, heavy quarks, etc. in hadronic interactions with two large rapidity gaps (double Pomeron exchange). Experimental information on this process is still very limited and more data are clearly needed.
Experimental information on diffractive production, including hard diffraction, can be understood using the prescription of ”flux renormalisation” introduced by K.Goulianos. At present it does not have a clear theoretical basis, and it is necessary to understand why it works in many situations. Another explanation of ”Dino’s paradox” was proposed by Chung-I Tan. He emphasizes the role of the ”flavouring” of the Pomeron, which accounts for preasymptotic effects due to the delayed thresholds of heavy-state production. In this approach it is possible to describe the slow rise of $`\sigma ^{SD}`$ at very high energies. It would be interesting to see how this approach reproduces the main observational facts for hard diffractive processes. Shadowing of dense parton systems in the small-x region is especially important for nuclei, where the parton density at a given impact parameter is larger by a factor $`\sim A^{1/3}`$. Nuclei are also convenient for a study of these effects, as the effects can easily be extracted from the A-dependence of the nuclear structure functions. This problem was discussed in several talks. Though the models considered in these talks are rather different, their predictions look similar. In particular, for heavy nuclei ($`A\sim 200`$), $`Q^2\sim 5GeV^2`$ and $`x\sim 10^{-4}`$ there is a suppression factor $`\sim 0.5÷0.6`$ due to shadowing. This result is also important for heavy ion collisions at RHIC and the LHC, as it reduces the density of produced minijets (and hadrons). The same effects were considered in the string fusion model in the talk of M.Braun. He discussed a possible phase transition due to the percolation of strings and its influence on fluctuations in heavy ion collisions.

## 5 Models for multiparticle production and phenomenological applications

In this section I shall consider some new developments in models of multiparticle production and applications of existing theoretical ideas to different aspects of high-energy interactions. A model of color mutations with self-similar dynamics for particle production in soft processes was discussed by R.Hwa. The general organisation of diagrams is similar to that used in the $`1/N`$-expansion approach: the Pomeron corresponds to the cylinder contribution, and multipomeron exchanges in the eikonal approximation are also taken into account in this model. The dynamics of multiparticle production for a single cylinder differs from that of string models. This is especially important for local (in rapidity) properties of particle production. The model reproduces the experimental data on intermittency, which pose a problem for existing string models. Applications of existing models based on the $`1/N`$ expansion, reggeon theory and string dynamics to cosmic ray physics were presented by R.Engel. Comparison of the predictions of these models with existing cosmic ray data indicates possible problems for the models at superhigh energies. The Pomeron in perturbative QCD is related to the exchange of an even number of gluons in the t-channel. Exchanges of an odd number of gluons lead to a singularity in the $`j`$-plane with negative signature and C-parity, usually called the ”odderon”. Recent perturbative calculations established that in the LO approximation the intercept of the odderon is below unity, but very close to it. Experimental observation of manifestations of the odderon would be an important check of perturbative QCD predictions (note that lattice calculations indicate that nonperturbative glueball trajectories of this type have a very low or even negative intercept).
It was shown in the talk of C.Merino that an asymmetry in the distribution of charm jets produced in diffractive photoproduction is sensitive to the odderon contribution. Interesting applications of small-x QCD physics to superhigh-energy $`\nu N(\nu A)`$ interactions and to the attenuation of $`\nu `$ traversing the Earth were discussed by A.Stasto. Such calculations are important for $`\nu `$-astronomy as well as for the investigation of atmospheric neutrinos. I think that this Symposium demonstrated that our field, QCD studies of multiparticle production processes, is very rich and active. The most topical problems now concern the connection between soft and hard dynamics in QCD. There are many interesting relations between different fields, such as small-x DIS and heavy ion collisions. New experiments at RHIC, and later at the LHC, will give new impetus to this field of research. I would like to thank the organizers of this Symposium, and especially Chung-I Tan, for the invitation to participate and to give the theoretical review talk at the Symposium.
# Is the exponential distribution a good approximation of dusty galactic disks?

## 1 Introduction

Modeling the dust and stellar content of spiral galaxies is a crucial step for the correct interpretation of the observations. The amount of interstellar dust embedded in spiral galaxies, the way the dust is distributed within them, and the extinction effects of the dust on the starlight are some of the questions that can be answered by performing radiative transfer modeling of individual spiral galaxies. One very important issue in such an analysis is the right choice of the stellar and dust distributions. In particular, the galactic disk is a quite complex system, where stars and dust are mixed together, usually in a spiral formation. For this reason, one has to use realistic distributions able to reproduce the observations quite accurately. On the other hand, simple mathematical expressions for these distributions are chosen in order to keep the number of free parameters to a minimum. For the distribution of starlight in the disk of spiral galaxies, the exponential function is very widely used. This simple mathematical expression is able to describe the distribution of stars in both directions, radially and perpendicular to the disk. Decomposition techniques used by different authors to separate the bulge and disk components strongly support this argument. For galaxies seen face-on (and at moderate inclination angles), radial profile fitting (e.g. Freeman 1970), fitting to azimuthally averaged profiles (e.g. Boroson 1981), as well as ellipse fitting to 2D images (e.g. de Jong 1995) show that an exponential in the radial distance $`R`$ is a good representation of galactic disks, with only small deviations mainly due to the spiral structure of the galaxy (see Serna 1997). Other works, like those of Shaw & Gilmore (1989) and de Grijs (1997), dealing with the modeling of edge-on galaxies, support the idea that exponential functions are good representations also in the $`z`$ direction (vertical to the disk). Performing radiative transfer modeling of edge-on galaxies, Xilouris et al. (1997, 1998, 1999) found that exponential functions for the luminosity density of the stars in the disk as well as for the extinction coefficient give an excellent description of the observations. The advantage of modeling galaxies in the edge-on configuration is that the integration of light along the line of sight cancels out most of the structure of the galaxies (i.e. spiral structure) and therefore allows simple functions such as exponentials to give a good representation of the observations. Thus, although in the face-on configuration a large variation between arm and interarm regions might be present for both the stars and the dust (White & Keel 1992, Corradi et al. 1996, Beckman et al. 1996, Gonzalez et al. 1998), in the edge-on case an average description of the galaxy characteristics can be obtained quite accurately. We investigate the validity of this argument by comparing the exponential distributions with more realistic distributions which include spiral structure. In Sect. 2 we describe the method that we use to address this problem and in Sect. 3 we present the results of our calculations. Finally, in Sect. 4 we summarize our work.

## 2 Method

The method that we follow in this work consists of two basic steps. In the first step, model galaxies with realistic spiral structure are constructed.
After a visual inspection of the face-on appearance of these models to check the spiral pattern, we create their edge-on images, which are then treated as real observations. In the second step we fit these “observations” with a galaxy model where the galactic disk is now described by the widely used plain exponential model. In this way, a comparison between the parameters derived from the fitting procedure and those used to produce the artificial “observations” can be made, and thus a quantitative answer can be given about the validity of the plain exponential model as an approximation to galactic disks.

### 2.1 Artificial spiral galaxies

We adopt a simple, yet realistic, distribution of stars and dust in the artificial galaxy. A simple expression is needed in order to keep the number of free parameters as small as possible and thus have better control of the problem. A realistic spiral structure is that of logarithmic spiral arms (Binney & Merrifield 1998). Thus, a simple but realistic artificial spiral galaxy is constructed by imposing logarithmic spiral arms as a perturbation on an exponential disk. In this way, the azimuthally averaged face-on profile of the artificial galaxy has an exponential radial distribution. For the stellar emissivity we use the formula

$$L(R,z)=L_s\mathrm{exp}\left(-\frac{R}{h_s}-\frac{|z|}{z_s}\right)$$

$$\times \left\{1+w_s\mathrm{sin}\left[\frac{m}{\mathrm{tan}(p)}\mathrm{log}(R)-m\varphi \right]\right\}$$

$$+L_b\mathrm{exp}(-7.67B^{1/4})B^{-7/8}.$$ (1)

In this expression the first part describes an exponential disk, the second part gives the spiral perturbation and the third part describes the bulge, which in projection is the well-known $`R^{1/4}`$ law (Christensen 1990). Here $`R`$, $`z`$ and $`\varphi `$ are the cylindrical coordinates, $`L_s`$ is the stellar emissivity per unit volume at the center of the disk, and $`h_s`$ and $`z_s`$ are the scalelength and scaleheight, respectively, of the stars in the disk. The amplitude of the spiral perturbation is described by the parameter $`w_s`$. When $`w_s=0`$ the plain exponential disk is recovered, while the spiral perturbation becomes stronger with larger values of $`w_s`$. Another parameter that defines the shape of the spiral arms is the pitch angle $`p`$. Small values of $`p`$ mean that the spiral arms are tightly wound, while larger values produce a looser spiral structure. The integer $`m`$ gives the number of spiral arms. For the bulge, $`L_b`$ is the stellar emissivity per unit volume at the center, while $`B`$ is defined by

$$B=\frac{\sqrt{R^2+z^2(a/b)^2}}{R_e},$$ (2)

with $`R_e`$ being the effective radius of the bulge and $`a`$ and $`b`$ the semi-major and semi-minor axes, respectively, of the bulge. For the dust distribution we use a formula similar to that adopted for the stellar distribution in the disk, namely

$$\kappa (R,z)=\kappa _\lambda \mathrm{exp}\left(-\frac{R}{h_d}-\frac{|z|}{z_d}\right)$$

$$\times \left\{1+w_d\mathrm{sin}\left[\frac{m}{\mathrm{tan}(p)}\mathrm{log}(R)-m\varphi \right]\right\},$$ (3)

where $`\kappa _\lambda `$ is the extinction coefficient at wavelength $`\lambda `$ at the center of the disk and $`h_d`$ and $`z_d`$ are the scalelength and scaleheight, respectively, of the dust. Here $`w_d`$ gives the amplitude of the spiral perturbation of the dust. Note that the angle $`\varphi `$ here need not be the same as that in Eq. (1). The stellar arm and the dust arm may have a phase difference between them.
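For orientation, the face-on appearance of the perturbed disk of Eq. (1) is easy to render numerically. A minimal sketch (bulge omitted; the scalelength is set to unity as a placeholder, while the amplitude, arm number and pitch angle follow the values adopted later in the text):

```python
import numpy as np

H_S, W_S, M = 1.0, 0.3, 2      # h_s is a placeholder; w_s, m from the text
PITCH = np.radians(20.0)       # one of the three pitch angles considered

x, y = np.meshgrid(np.linspace(-4, 4, 512), np.linspace(-4, 4, 512))
R = np.hypot(x, y) + 1e-9      # avoid log(0) at the center
phi = np.arctan2(y, x)

# disk part of Eq. (1) in the midplane; integrating exp(-|z|/z_s) over z
# only rescales by 2*z_s, so this map is proportional to the face-on
# surface brightness of the disk
face_on = np.exp(-R / H_S) * (
    1.0 + W_S * np.sin(M / np.tan(PITCH) * np.log(R) - M * phi))
```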
For the parameters describing the exponential disks of the stars and the dust, as well as the bulge characteristics, we use the mean values derived from the B-band modeling of seven spiral galaxies presented in Xilouris et al. (1999). Since the dominant spiral structure in galaxies is that of two spiral arms (Kennicutt 1981; Considere & Athanassoula 1988; Puerari & Dottori 1992), we only consider models with $`m=2`$. Galaxies with strong one-arm structure do exist, but they constitute a minority (Rudnick & Rix 1998). For the parameter $`w_d`$ we take the value 0.4. With this value the optical depth calculated in the arm region is roughly twice that in the inter-arm region, in good agreement with studies of overlapping galaxies (e.g. White & Keel 1992). For $`w_s`$ we use the value 0.3, resulting (with the extinction effects included) in a spiral arm amplitude of $`0.1-0.2`$ mag, which is a typical amplitude seen in radial profiles of face-on spiral galaxies and reproduces the desired strength of the spiral arms (Rix & Zaritsky 1995). For the pitch angle $`p`$ we consider the cases of $`10\mathrm{°},20\mathrm{°}`$ and $`30\mathrm{°}`$, which give a wide variety of spiral patterns, from tightly wound to loosely wound. All the parameters mentioned above are summarized in Table 1. The radiative transfer is done in the way described by Kylafis & Bahcall (1987; see also Xilouris et al. 1997). As described in detail in these references, the radiative transfer code is capable of dealing with both absorption and scattering of light by the interstellar dust, and also allows for various distributions of the stars and the dust. Using the model described above and the parameters given in Table 1, we produce the images shown in Fig. 1. The top three panels of this figure show the face-on surface brightness distribution of such a galaxy for the three values of the pitch angle ($`10\mathrm{°},20\mathrm{°}`$ and $`30\mathrm{°}`$), from left to right. The spiral structure is evident in these images, with the spiral arms more tightly wound for $`p=10\mathrm{°}`$ and looser for $`p=30\mathrm{°}`$. The middle three panels of Fig. 1 show the distribution of the optical depth when the galaxy is seen face-on, for the three different values of the pitch angle mentioned above. In these pictures one can follow the spiral pattern all the way to the center of the galaxy, since the bulge is assumed to contain no dust. Finally, the last three panels of Fig. 1 show the corresponding edge-on appearance of the galaxies shown face-on in the top three panels. One thing that is very obvious from Fig. 1 (top and middle panels) is that the galaxy is no longer axisymmetric, as it is in the plain exponential disk model. The spiral structure that is now embedded in the model as a perturbation of the disk has broken this symmetry. Thus, in order to do a full analysis of the problem, we have to examine the galaxy from different azimuthal views (position angles). To do so we have created nine edge-on model galaxies (for each of the three pitch angles considered here), covering the range from $`0\mathrm{°}`$ to $`160\mathrm{°}`$ in position angle with a step of $`20\mathrm{°}`$. For the definition of the position angle, see Fig. 2. Since the galaxy has exactly the same appearance in the interval from $`180\mathrm{°}`$ to $`360\mathrm{°}`$, we only consider the range of position angles mentioned above.
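The central edge-on optical depth used below can be obtained by integrating Eq. (3) numerically along the central line of sight. A minimal sketch: the dust scalelength is set to unity and $`\kappa _\lambda `$ is normalized so that the unperturbed disk has $`\tau ^e=27`$; these are stand-ins, not the actual Table 1 values.

```python
import numpy as np

H_D = 1.0                    # dust scalelength (arbitrary units)
KAPPA = 27.0 / (2.0 * H_D)   # normalized so the unperturbed tau_e = 27
W_D, M = 0.4, 2              # spiral amplitude and number of arms (text)
PITCH = np.radians(20.0)     # one of the three pitch angles considered

def kappa(R, phi):
    """Eq. (3) evaluated in the galactic plane, z = 0."""
    spiral = 1.0 + W_D * np.sin(M / np.tan(PITCH) * np.log(R) - M * phi)
    return KAPPA * np.exp(-R / H_D) * spiral

def tau_edge_on(phi0):
    """Central edge-on optical depth for position angle phi0 (radians)."""
    s = np.linspace(1e-3, 20.0 * H_D, 20000)   # radial grid on each side
    return np.trapz(kappa(s, phi0), s) + np.trapz(kappa(s, phi0 + np.pi), s)

taus = [tau_edge_on(np.radians(pa)) for pa in range(0, 180, 20)]
print(np.round(taus, 1))   # the values scatter around 27, cf. Fig. 3
```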
To demonstrate this asymmetry more quantitatively, we have computed the central edge-on optical depth ($`\tau ^e`$) for all nine model galaxies. Unlike the plain exponential disk model, where $`\tau ^e`$ can be calculated analytically ($`\tau ^e=2\kappa _\lambda h_d`$), here we have to integrate Eq. (3) numerically along the line of sight that passes through the center. The value of $`\tau ^e`$ is shown in Fig. 3 as a function of the position angle. In order to have full coverage in position angle (from $`0\mathrm{°}`$ to $`360\mathrm{°}`$), the values calculated in the interval ($`0\mathrm{°}`$ \- $`180\mathrm{°}`$) were repeated in the interval ($`180\mathrm{°}`$ \- $`360\mathrm{°}`$). In this figure, the three models constructed with pitch angles $`10\mathrm{°},20\mathrm{°}`$ and $`30\mathrm{°}`$ are denoted by circles, squares and diamonds, respectively. In all three cases, a variation of the optical depth with position angle is evident. The largest variation, $`\sim 5\%`$, is found for the case where the pitch angle is $`30\mathrm{°}`$. It is obvious that all the values lie around the true value of 27 used to construct the galaxy (see Table 1).

### 2.2 The fitting procedure

The edge-on images created as described earlier are now treated as “observations”, and with a fitting procedure we seek the values of the parameters of the plain exponential disk that give the best possible representation of these “observations”. The fitting algorithm is a modification of the Levenberg-Marquardt routine taken from the MINPACK library. The whole procedure is described in detail in Xilouris et al. (1997). Preliminary tests have shown that the derived values of the parameters describing the bulge are essentially identical to the real values used to construct the model images. In order to simplify the fitting process, and since we are only interested in the disk, the bulge parameters were kept constant during the fit. Six parameters are free to vary. These are the scalelength and scaleheight of the stellar disk together with its central surface brightness ($`h_s`$, $`z_s`$ and $`I_s`$, respectively), as well as the scalelength and scaleheight of the dust and the central edge-on optical depth ($`h_d`$, $`z_d`$ and $`\tau ^e`$, respectively).

## 3 Results

Figure 4 shows six graphs. The top left graph gives the variation of the deduced edge-on optical depth of the galaxy as we observe it from different angles. From this graph one can see that the optical depth deduced from different points of view differs by no more than 6% from the mean value. Furthermore, a comparison with Fig. 3, which shows the real value of the optical depth, reveals that there is no systematic error in the derived value. The deviations are equally distributed around $`\tau ^e=27`$, which is what we would have without the spiral structure. The variation is of the same order of magnitude regardless of the pitch angle. The top right graph in Fig. 4 presents the deduced central luminosity of the disk. This graph shows that the variation of the inferred central luminosity of the stellar disk is very small and depends only weakly on the pitch angle. In the middle left graph of Fig. 4 the derived scalelength of the dust is presented. The variation of the derived value is about 5% for the 10-degree pitch angle and goes up to 17% for the 30-degree pitch angle.
The increase of the variation with increasing pitch angle is expected, because for large pitch angles the spiral arms are loosely wound, making the galaxy less axisymmetric. As a result, from some points of view the dust seems more concentrated toward the center of the galaxy, and from other points of view more extended. In the middle right graph of Fig. 4, which shows the scalelength of the stars, it is evident that the same effect occurs for the stars as well. Certain points of view give the impression of a more centrally condensed disk, while others give that of an extended disk. The fact that we have taken the stellar and the dust spiral structure to be in phase (i.e. the dust spiral arms are neither trailing nor leading the stellar spiral arms) causes the deduced scalelengths of the dust and the stars to also vary in phase. The case of a phase difference is examined below. The bottom left and bottom right graphs of Fig. 4 show the variation of the scaleheight of the dust and the stars, respectively. The variation of both scaleheights is negligible. This is an attribute of the formula we used for our artificial galaxy: since the spiral perturbation we added to the exponential disks is not a function of $`z`$, it is expected that in the $`z`$ direction our artificial galaxy behaves exactly like the exponential model. There are indications (van der Kruit & Searle 1981; Wainscoat et al. 1989) that the dust arms are not located exactly on the stellar arms. Thus, we re-created the edge-on images, but this time the stellar spiral arms were set to lead the dust arms by 30 degrees. We then fitted the new images with the exponential model, and the deduced parameters are shown in Fig. 5. As in Fig. 4, the top left graph of Fig. 5 shows the optical depth as a function of position angle. A comparison of this graph with the corresponding graph in Fig. 4 reveals that the variation of the values derived from the new set of images is significantly larger. The origin of this effect is the fact that the dust is either in front of the stars (for some position angles) or behind the stars (for others). A strong dependence on the pitch angle is also evident. Note, however, that the mean of all the derived values is unaffected. The same effect can be seen in the top right graph of Fig. 5, where the derived central luminosity of the disk is plotted. The variation of the central luminosity is again larger than in the previous case, but the mean value is equal to the true one. In the middle left graph of Fig. 5 we show the scalelength of the dust as a function of the position angle. The variation of the derived dust scalelength can be as much as 25% for a galaxy with a pitch angle of 30 degrees. But again the mean value over all position angles is identical to the one we used to create the images. In the middle right graph one can see that the scalelength of the stars also varies, by as much as 30%, but the mean of all the derived values is the correct one. The left and right bottom graphs show the variation of the scaleheights of the dust and the stars, which is about 10% in the worst case of a 30-degree pitch angle and is practically zero for a galaxy with more tightly wound arms. The most important conclusion from all the graphs is that the derived values of all quantities distribute evenly around the real values used to create the artificial images.
## 4 Summary

In our attempt to investigate how significant the spiral structure is when doing radiative transfer modeling of spiral galaxies seen edge-on, we constructed a model galaxy with very prominent spiral arms in the disk. This quite realistic image of the galaxy is treated as an observation, and the widely adopted exponential model for the galactic disk is fitted to the data. This analysis shows that the plain exponential disk model is a very accurate description of galactic disks seen edge-on, with only small deviations of its parameters from the real ones (typically a few percent). Furthermore, the deviations from the real parameters would average out if we could see the same galaxy from several points of view. This is of course impossible for an individual galaxy, but it suggests that if the exponential model is used for a statistical study of many edge-on galaxies, no systematic error is introduced. Thus, we conclude that the exponential model is a very good approximation of galactic disks.
# Mesoscopic Physics of Granular Flows

## Acknowledgements

We thank Georges Debregeas, Cristophe Josserand, Dan Mueth, Sidney Nagel, Heinrich Jaeger, Leo P. Kadanoff, Tomas A. Witten for fruitful discussions. This work was supported in part by the MRSEC Program of the National Science Foundation under Award Number NSF DMR-9808595.
# Diffuse Galactic Continuum Gamma Rays

## I Introduction

This paper discusses recent studies of the diffuse continuum emission and their connection with cosmic-ray physics. The basic question concerns the origin of the intense continuum emission along the Galactic plane observed by EGRET, COMPTEL and OSSE. The answer is surprisingly uncertain. A comprehensive review can be found in Hunter\_review . The present work uses observational results given in Strong98 ; StrongMattox96 ; Kinzer ; new imaging and spectral results from COMPTEL are reported in Bloemen2000 . Most of the analysis reported here is based on the modelling approach described in sm98 ; smr98 . First we present some results from cosmic-ray isotopic composition which bear directly on the $`\gamma `$-ray models. We then discuss the problems which arise when trying to fit the $`\gamma `$-ray spectrum, and present possible solutions, both at high and low energies. The low-energy (1–30 MeV) situation is addressed in more detail in sm2000 , and additional references can be found at website . Our basic approach is to construct a unified model which is as realistic as possible, using information on the gas and radiation fields in the Galaxy and current ideas on cosmic-ray propagation, including possible reacceleration; we use these to predict many different types of observations: direct measurements in the heliosphere of cosmic-ray nuclear isotopes, antiprotons, positrons and electrons, and astronomical measurements of $`\gamma `$-rays and synchrotron radiation. Any given model has to be tested against all of these data, and it is a challenge to find even one which is consistent with all observations. In fact we will show that the full range of observations can only be accommodated by additional components, such as $`\gamma `$-ray point sources, and also by differences between local direct measurements and the large-scale Galactic properties of cosmic rays.

## II Cosmic ray nucleons

First we show results from CR composition which are relevant to the propagation of cosmic rays. For a given halo size (defined here as the $`z`$ value at which the cosmic-ray density goes essentially to zero) the parameters of the diffusion/reacceleration model can be adjusted to fit the important secondary/primary ratios, illustrated in Fig 1 for a halo size of 4 kpc. In addition we can use the constraints on the halo size given by the radioactive CR species <sup>10</sup>Be and <sup>26</sup>Al, Fig 2. For details of the Ulysses results on radioactive nuclei see Connell2000 ; Connell98a ; Connell98b ; Simpson98 . Based on Ulysses <sup>10</sup>Be data, a range for the halo height of 4–10 kpc was derived in sm98 ; sm99 . This is consistent with other analyses Ptuskin98 ; Webber98 . New results from the Advanced Composition Explorer satellite (ACE) will constrain the halo size better, but the above range is consistent with ACE results as presented in ACE . Other radioactive nuclei (<sup>36</sup>Cl and <sup>54</sup>Mn) will provide further independent information; at present one can only say that they are consistent with the other nuclei. Having obtained sets of propagation parameters based on isotopic composition, we can proceed to use the model to study diffuse $`\gamma `$-rays.

## III Gamma rays

Figure 3 shows the diffuse spectrum of the inner Galaxy for what we call a ‘normal’ or ‘conventional’ CR spectrum, which is consistent with direct measurements of high-energy electrons and synchrotron spectral indices (Figs 5, 6; see smr98 ; sm2000 ).
Clearly this model does not fit the $`\gamma `$-ray data at all well. Consider first the well-known problem of the high-energy ($`>`$ 1 GeV) EGRET excess Hunter97 . One obvious solution is to invoke $`\pi ^o`$-decay from a harder nucleon spectrum than observed in the heliosphere, which might for example be the case if the local nucleon spectrum were dominated by a local source not typical of the large-scale average. Then the local measurements would give essentially no information on the Galactic-scale spectrum. One can indeed fit the EGRET excess if the Galactic proton (and Helium) spectrum is harder than measured by about 0.3 in the index (Fig 3). But there are two critical tests of this hypothesis, provided by secondary antiprotons and positrons. It was shown in msr98 that such a hard nucleon spectrum produces too many antiprotons. The new MASS91 measurements Basini99 , which give the absolute antiproton spectrum from 3.7 to 24 GeV, have clinched this test, as shown in Fig 4. Quite independently, secondary positrons give a similar test, which the hard nucleon hypothesis equally fails (Fig 4). Again new data, this time from the HEAT experiment Barwick , give a good basis for this test. We conclude that there are significant problems if one wants to explain the GeV excess with $`\pi ^o`$-decay. This illustrates the importance of considering all the observable consequences of any model. It is in any case difficult to imagine such spectral variations of nucleons, given the large diffusion region and the isotropy of CR nucleons. An alternative idea, first investigated in detail in PohlEsposito , is inverse Compton (IC) emission from a hard electron spectrum. The point is that the electron spectrum we measure locally may not be representative of the large-scale Galactic spectrum, due to the large spatial fluctuations which arise because of the large energy losses at high energies. What is measured directly may therefore depend only on the chance locations of the nearest electron sources, and the average interstellar spectrum could be very different; in particular it could be much harder. An injection spectral index around 1.8 is required (Fig 5), and the corresponding $`\gamma `$-ray spectrum is shown in Fig 7. Note that modern theories of SNR shock acceleration can give hard electron injection spectra Baring , so such behaviour is not entirely unexpected. To predict the IC emission reliably, we also need an updated model of the interstellar radiation field; we have recomputed it smr98 using new information from IRAS, COBE, and stellar population models. There is still much scope for further improvement in the ISRF calculations, however. Note that for these hard electron spectra IC dominates above 1 GeV and is everywhere a very significant contributor, while bremsstrahlung is relegated to third position, in contrast to the more conventional picture (presented e.g. in Strong97 ). Even if we can fit the inner Galaxy spectrum, the critical test is the spatial distribution: from Fig 8 one can see that the model can indeed reproduce the longitude and latitude profiles. In fact it can reproduce the latitude profile up to the Galactic pole (Fig 9), which is not the case for models with less IC. This can be seen as one proof of the importance of IC. But there is at least one problem associated with the hard electron spectrum hypothesis.
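Why an injection index near 1.8 produces the required IC spectrum can be seen from a standard textbook estimate (it is not taken from the papers cited above): at the relevant energies IC and synchrotron losses, with $`b(E)E^2`$, steepen the equilibrium electron spectrum by one power, and the IC photon index then follows from the usual single-scattering kinematics:

```latex
% steady state with injection q(E) \propto E^{-\Gamma} and losses b(E) \propto E^2
N(E) \simeq \frac{1}{b(E)}\int_E^\infty q(E')\,dE' \;\propto\; E^{-(\Gamma+1)},
\qquad
\alpha_{\mathrm{IC}} \simeq \frac{(\Gamma+1)+1}{2} \approx 1.9
\quad \text{for } \Gamma = 1.8 .
```

This estimate is also consistent with the $`E^{1.88}`$ IC spectrum found in the correlation analysis discussed in the next paragraph.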
A recent reanalysis of the full EGRET data for the Orion molecular clouds Digel determined the $`\gamma `$-ray emissivity of the gas, and this also shows the GeV excess, which would not be expected since it should not involve IC. This could be a critical test. Perhaps the increased radiation field in the Orion star-forming region could boost the IC, and this ought to be investigated in detail. An earlier analysis correlating EGRET high-latitude $`\gamma `$-rays with 408 MHz survey data Chen found evidence for IC with an $`E^{-1.88}`$ spectrum. This is very much in accord with the present models. More recently a study Dixon which used a wavelet analysis to look for deviations from the Hunter et al. Hunter97 model provided evidence for a $`\gamma `$-ray halo with a form similar to that expected from IC. An effect which may be important at high latitudes is the enhancement due to the anisotropy of the ISRF and the fact that an observer in the plane preferentially sees downward-travelling electrons, due to the kinematics of IC ms2000 . This can enhance the flux by as much as 40% for a large halo. Even in the plane it can have a significant effect. Note that the halo sizes considered here imply an increased contribution from Galactic emission at high latitudes, which will affect determinations of the isotropic extragalactic emission. A more precise evaluation of these implications is in progress. We mention finally the low energies, for which a detailed account is given in smr98 ; sm2000 . Conventionally one invoked a soft electron injection spectrum, $`E^{-2.1}`$ or steeper, which could then explain the 1–30 MeV emission as the sum of bremsstrahlung and IC. However, it seems impossible to find an electron spectrum which reproduces the $`\gamma `$-rays without violating the synchrotron constraints, unless there is a very sharp upturn below 200 MeV; but even then it fails to give the intensities measured by OSSE below 1 MeV. Therefore a source contribution appears to be the most likely explanation.
# A Gamma-Ray Burst Bibliography, 1973-1999 ## I Introduction I have been tracking the gamma-ray burst literature for about the past twenty-one years, keeping the authors, titles, references, and key subject words in a machine-readable form. The present version updates previous ones reported in 1994, 1996, and 1998 hurley94 ; hurley96 ; hurley98 . In its current form, this information is in a Microsoft Word 97 “doc” format. My purpose in doing this was first, to be able to retrieve rapidly any articles on a given topic, and second, to be able to cut and paste references into manuscripts in preparation. The following journals have been scanned on a more or less regular basis starting with the 1973 issues: Advances in Physics\* Annals of Physics\* Astronomical Journal\* Astronomische Nachrichten\* Astronomy and Astrophysics (letters, main journal, and supplement series)\* Astronomy and Astrophysics Review\* Astronomy Letters\* (formerly Soviet Astronomy Letters) Astronomy Reports\*(formerly Soviet Astronomy) Astrophysical Journal (letters, main journal, and supplements)\* Astrophysical Letters and Communications Astrophysics and Space Science\* ESA Bulletin\* ESA Journal\* IEEE Transactions on Nuclear Science\* Journal of Astrophysics and Astronomy\* Monthly Notices of the Royal Astronomical Society\* Nature Nuclear Instruments and Methods in Physics Research Section A\* Observatory\* Physical Review (main journal A and letters)\* Proceedings of the Astronomical Society of Australia\* Publications of the Astronomical Society of Japan\* Publications of the Astronomical Society of the Pacific\* Reports on Progress in Physics\* Science\* Scientific American Sky & Telescope The asterisks indicate journals which are scanned using the on-line version of Current Contents. In addition, the following journals have been scanned, but in many cases less regularly, particularly in the past: Annals of Geophysics Astrofizika Bulletin of the American Astronomical Society Bulletin of the American Physical Society Chinese Astronomy Cosmic Research Journal of Atmospheric and Terrestrial Physics Journal of the British Interplanetary Society Journal of the Royal Astronomical Society of Canada Progress in Theoretical Physics Solar Physics Soviet Physics The above lists are not exhaustive. For example, where theses, newspaper articles, or internal reports have come to my attention, I have included them, too. To be included, an article had to have something to do with gamma-ray burst theory, observation, or instrumentation, or be closely related to one of these topics (e.g., merging neutron stars, AXPs, SGRs, the Bursting Pulsar), and must have been published in some form. With only a few exceptions, preprints which were never published have not been included. ## II Organization of the Bibliography The overall organization is chronological by year. Within a given year, articles published in journals are listed first, in alphabetical order by first author. Then come theses and conference proceedings articles. The latter are listed in the order in which they appear in the proceedings. The entries are numbered consecutively, so that paper copies which are kept on file can be retrieved quickly. However, to avoid having to renumber this entire file when a new article is added, numbers are skipped at the end of each year and reserved for later inclusion. The complete author list follows, as it appears in the journal, along with the title, journal, volume number, page number, and year. 
A line containing key words follows this. These are generally not the same key words as the ones listed in the journal, nor are they taken from the title or any particular list. Rather, they are meant to reflect the true content of the article, and provide a list of machine-searchable topics. In general, however, key words have not been included for conference proceedings articles. ## III A Few Interesting Statistics The number of articles published each year since 1973 is shown in figure 1. Starting with a modest article per month in 1973, it began to exceed one per day in 1994. Several milestones are indicated as the probable causes of sudden increases in the publication rate. The apparent decreases in the rates in 1995 and 1997 are in fact due to a 2-year periodicity in the publications caused by the influx of a large number of articles from the Huntsville Workshop proceedings. In keeping with this publication rate, the bibliography is updated on an approximately daily basis. Note that there are still about as many papers published as there are gamma-ray bursts. The cumulative total is shown in figure 2. The sheer volume of the literature has necessitated the development of a program which can search for and extract particular articles. I have written such a program in Microsoft Word Basic (a variant of the BASIC programming language). It allows one to extract all articles between two dates whose entries contain a particular key phrase and write them to a separate file. ## IV Availability The IPN web site ssl.berkeley.edu/ipn3/index.html contains a version of this bibliography. More up-to-date versions in plain ASCII, “doc”, and “rich text” (rtf) formats can be made available to interested parties as time permits. Please contact me at khurley@sunspot.ssl.berkeley.edu to request copies, and indicate your preference for the format. I would appreciate it if users would communicate errors and omissions to me. This work was carried out under JPL Contract 958056 and CGRO guest investigator grant NAG5-7810.
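An extraction utility of the kind described in Sec. III is easy to reproduce in a modern scripting language. The sketch below, in Python, assumes a hypothetical plain-ASCII export of the bibliography in which entries are separated by blank lines and each entry contains its publication year; the file names and entry layout are illustrative placeholders, not the actual Word Basic implementation.

```python
import re

def extract_entries(path, start_year, end_year, phrase):
    """Return all entries between two years whose text contains a key
    phrase (case-insensitive), mimicking the Word Basic macro."""
    with open(path, encoding="ascii", errors="replace") as f:
        blocks = f.read().split("\n\n")          # blank line between entries
    hits = []
    for block in blocks:
        match = re.search(r"\b(19|20)\d{2}\b", block)
        if match is None:
            continue
        year = int(match.group(0))
        if start_year <= year <= end_year and phrase.lower() in block.lower():
            hits.append(block.strip())
    return hits

# Example: write all 1991-1997 entries mentioning SGRs to a separate
# file, as the macro does.
if __name__ == "__main__":
    with open("sgr_subset.txt", "w") as out:
        out.write("\n\n".join(extract_entries("grb_bibliography.txt",
                                              1991, 1997, "SGR")))
```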
no-problem/9912/astro-ph9912278.html
ar5iv
text
# Finding typical high redshift galaxies with the NOT ## 1. Damped Ly$`\alpha `$ Absorbers and high redshift galaxies Damped Ly$`\alpha `$ Absorbers (DLAs) are QSO absorption line systems with HI column density larger than $`2\times 10^{20}cm^{-2}`$. Such very large column density absorption occurs in regions of self-shielded, cooled gas, i.e. where we expect stars to form. Hence DLAs are prime candidates for being the progenitors of present day galaxies. This hypothesis is strengthened by the fact that the neutral gas content of DLAs at high redshift, within the uncertainties, is known to be the same as that of visible matter in present day galaxies (Wolfe et al., 1995). Hence DLAs, being HI column-density selected galaxies, are truly representative of the progenitors of present day galaxies. There are primarily two pieces of information we wish to obtain via the study of DLAs: (i) the size and (ii) the stellar content of typical high redshift galaxies. Concerning (i), it is a long-standing controversy whether DLAs are large fully formed disk galaxies or small merging galaxy subunits (e.g. Wolfe et al., 1986, Haehnelt et al., 1998). The actual size of typical high redshift galaxies will give us information about the nature of the dark matter that forms the haloes containing the baryons. Concerning (ii), it has become clear that the star formation histories of all Local Group members differ from that of the Milky Way and differ amongst each other. Therefore we cannot expect any single galaxy in the Local Group to be a good tracer of the global star formation history (e.g. Tolstoy, 1998). Another approach that has been pursued heavily in the last few years has been to try to obtain the global star formation history via the study of so-called Lyman break galaxies (LBGs) in the early universe. LBGs are found using a technique based on the fact that young, star forming galaxies will have a strong spectral break at the Lyman limit, which at high redshift is redshifted into the optical window (Steidel et al., 1996). LBGs need to be bright enough for spectroscopic confirmation of their high redshift, so they are typically brighter than R(AB)=26. Assuming that DLAs arise in gaseous discs associated with LBGs, we can compare DLAs and LBGs by calculating how faint we need to integrate down the extrapolation of the luminosity function of LBGs in order to explain the observed probability for a QSO line of sight to cross a DLA. Results of this calculation are presented in Fynbo et al., 1999, and summarized here. At $`z=3`$ we find that 70-90% of DLA galaxy counterparts are fainter than the current limit for spectroscopic confirmation of LBG candidates of R(AB)=26. Hence LBGs are highly atypical high redshift galaxies, probably the progenitors of present day bright cluster galaxies (Baugh et al., 1998). Studying high redshift DLAs is therefore the only way to obtain information about the nature of typical (in that they contain the baryons found in galaxies today) galaxies in the early universe. The most obvious method by which to determine the sizes and stellar contents of DLAs is to detect emission from them.
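The bookkeeping behind the quoted 70-90% can be sketched numerically: if the galaxies follow a Schechter luminosity function and the DLA cross-section scales with luminosity as a power law, the fraction of the total absorption cross-section contributed by galaxies fainter than a given limit is a ratio of two incomplete gamma-function integrals. The Python fragment below is a minimal sketch; the faint-end slope, the cross-section index and the luminosity limit are illustrative placeholders, not the parameters adopted by Fynbo et al. (1999).

```python
import numpy as np
from scipy.integrate import quad

def faint_fraction(alpha=-1.25, t=0.4, L_lim=0.1):
    """Fraction of the DLA absorption cross-section from galaxies fainter
    than L_lim (in units of L*), for a Schechter luminosity function
    phi(L) ~ (L/L*)**alpha * exp(-L/L*) and cross-section ~ (L/L*)**t.
    Requires alpha + t > -1 for the integral to converge at L = 0."""
    weight = lambda x: x ** (alpha + t) * np.exp(-x)
    faint, _ = quad(weight, 0.0, L_lim)
    bright, _ = quad(weight, L_lim, np.inf)
    return faint / (faint + bright)

# With these placeholder parameters most of the cross-section comes from
# galaxies well below L*, i.e. below any realistic spectroscopic limit.
print(f"{faint_fraction():.2f}")
```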
## 2. Imaging of DLAs with the NOT From an observational point of view the main problems in studying emission from DLAs are (i) that they are very faint and (ii) the presence of a much brighter QSO at a distance of only 0-3 arcsec on the sky. In the spectrum of the background QSO, DLAs at redshifts $`z\gtrsim 2`$ produce regions of 15-25Å (the width depending on the HI column density) of saturated absorption. Hence imaging in a narrow filter with a width corresponding to the width of the damped absorption line will circumvent problem (ii). If the DLA is a Ly$`\alpha `$ emitter it will be relatively easy to detect against the modest sky background in the narrow band filter, which helps to circumvent problem (i). Narrow band imaging of DLAs has been pursued for more than a decade (e.g. Lowenthal et al., 1995), but only recently with success. The DLA at $`z=2.81`$ towards the $`z_{em}=2.79`$ QSO PKS0528-250 (Møller and Warren, 1993, 1998, Warren and Møller, 1996) was detected with narrow band imaging using the ESO 3.6m telescope and confirmed by spectroscopy on the ESO NTT. Here we describe our results on narrow and broad band imaging of the DLAs towards Q0151+048A (z<sub>abs</sub>=1.9342) and PKS1157+014 (z<sub>abs</sub>=1.9436). These two DLAs were chosen because they are $`z_{abs}\approx z_{em}`$ systems, as is the DLA towards PKS0528-250. The QSO redshifts are $`z_{em}=1.921`$ and $`z_{em}=1.978`$ for Q0151+048A and PKS1157+014 respectively. Moreover, Q0151+048 is very interesting in being a physical QSO pair (not a lensed system) with two QSOs at nearly the same redshift (Meylan et al., 1990). The B component has redshift $`z_{em}=1.937`$ (Møller, Warren and Fynbo, 1998). Moreover, the DLA towards PKS1157+014 has one of the highest HI column densities ($`6\times 10^{21}cm^{-2}`$) of all known DLAs. The NOT was the perfect instrument for these DLAs due to its high spatial resolution and the very high UV sensitivity of the Loral CCD at the wavelength of redshifted Ly$`\alpha `$ ($`\sim `$3600Å). The DLA towards Q0151+048 was imaged in narrow-band, U and I over four nights in September 1996 with StanCam. We obtained 5$`\sigma `$ point source detection limits of n(3567)=24 (corresponding to $`5.0\times 10^{-17}ergs^{-1}cm^{-2}`$ for Ly$`\alpha `$ at the absorption redshift), I(AB)=25.7 and U(AB)=26.0 respectively. The three left panels in fig. 1 show $`100\times 60arcsec^2`$ extractions from the combined I-frame (top), U-frame (middle) and narrow band frame (bottom). North is up and east is to the left. Seen are the two QSOs in the center of the frames and a candidate $`z=1.93`$ Ly$`\alpha `$ emitting galaxy 40<sup>′′</sup> east of the QSOs. In the lower frame we have subtracted the point-spread functions of the two QSOs, so that the extended Ly$`\alpha `$ emission from the DLA is clearly seen. The DLA towards PKS1157+014 was imaged in narrow-band, U and I over two nights in March 1998 with Alfosc. We obtained 5$`\sigma `$ point source detection limits of n(3567)=23.2 (corresponding to $`7.5\times 10^{-17}ergs^{-1}cm^{-2}`$ for Ly$`\alpha `$ at the absorption redshift), I(AB)=25.9 and U(AB)=25.3 respectively. The three right panels in fig. 1 show extractions from the combined I-frame (top), U-frame (middle) and narrow band frame (bottom) with the same field size as the left frames, but with north to the left and east down. Seen are the QSO in the center and two candidate $`z=1.94`$ Ly$`\alpha `$ emitting galaxies. In the lower frame the QSO has completely vanished due to the extremely strong damped absorption line. There is no evidence for Ly$`\alpha `$ emission from the DLA at impact parameters smaller than 10<sup>′′</sup>.
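The correspondence between a narrow-band magnitude limit and a Ly$`\alpha `$ line-flux limit quoted above is just the AB zero-point combined with the filter width. A small Python sketch follows; the filter width of about 23 Å is an assumption chosen to reproduce the quoted $`5.0\times 10^{-17}ergs^{-1}cm^{-2}`$ at n(3567)=24, since the exact filter parameters are not given in the text.

```python
C_ANGSTROM = 2.998e18            # speed of light in Angstrom / s

def line_flux_limit(m_ab, lam, dlam):
    """Convert a narrow-band AB magnitude limit into an emission-line
    flux limit: f = f_nu * dnu, with dnu = c * dlam / lam**2."""
    f_nu = 10.0 ** (-0.4 * (m_ab + 48.6))   # erg s^-1 cm^-2 Hz^-1
    dnu = C_ANGSTROM * dlam / lam ** 2      # filter width in Hz
    return f_nu * dnu                       # erg s^-1 cm^-2

# n(3567) = 24 with an assumed ~23 A wide filter gives ~5e-17 erg/s/cm^2,
# matching the quoted StanCam detection limit.
print(f"{line_flux_limit(24.0, 3567.0, 23.0):.2e}")
```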
## 3. Discussion It is not yet possible to draw general conclusions about the nature of DLA galaxies based only on the few DLA galaxy counterparts currently studied in emission. We do, however, note that in all three DLA fields studied with narrow band imaging so far we have found one or more candidate galaxies at the DLA redshift. In the right frames of fig. 1 the two galaxies seem to be aligned with the QSO. As noted by Møller and Warren, 1998, there is growing evidence for filamentary structure in the distribution of Ly$`\alpha `$ emitting galaxies at high redshift. Fig. 2 is an updated version of their Fig. 6, showing alignments in 5 galaxy groups at high redshift, including the field around PKS1157+014. This trend is in agreement with N-body simulations of hierarchical structure formation, where galaxies predominantly form along filaments (e.g. Evrard et al., 1994). ## References

Baugh C.M., Cole S., Frenk C.S., Lacey C.G., 1998, ApJ, 498, 504
Evrard A.E., Summers F.J., Davis M., 1994, ApJ, 422, 11
Fynbo J.U., Møller P., Warren S.J., 1999, MNRAS, 305, 849
Haehnelt M.G., Steinmetz M., Rauch M., 1998, ApJ, 495, 647
Lowenthal J.L., Hogan G.J., Green R.F., Woodgate B., Caulet A., Brown L., Bechthold J., 1995, ApJ, 451, 484
Meylan G., Djorgovski G., Weir N., Shaver P., 1990, The Messenger, 59, 47
Møller P., Warren S.J., 1993, A&A, 270, 43
Møller P., Warren S.J., Fynbo J.U., 1998, A&A, 330, 19
Møller P., Warren S.J., 1998, MNRAS, 299, 661
Steidel C.C., Giavalisco M., Pettini M., Dickinson M., Adelberger K.L., 1996, AJ, 112, 352
Tolstoy E., 1998, in ’Dwarf Galaxies & Cosmology’, eds. T.X. Thuan, C. Balkowski, V. Cayatte, J. Tran Thanh Van, in press
Warren S.J., Møller P., 1996, A&A, 311, 25
Wolfe A.M., Turnshek D.A., Smith H.E., Cohen R.D., 1986, ApJS, 61, 249
Wolfe A.M., Lanzetta K.M., Foltz C.B., Chaffee F.H., 1995, ApJ, 454, 698
no-problem/9912/cond-mat9912432.html
ar5iv
text
# Solution of real-axis Eliashberg equations with different pair symmetries and tunneling density of states ## Abstract The real-axis direct solution of the Eliashberg equations for the retarded electron-boson interaction in the half-filling case and in the presence of impurities is obtained for six different symmetries of the order parameter: $`s`$, $`s+\mathrm{i}d`$, $`s+d`$, $`d`$, $`anisotropic`$-$`s`$ and $`extended`$-$`s`$. The spectral function is assumed to contain an isotropic part $`\alpha _{is}^2F\left(\mathrm{\Omega }\right)`$ and an anisotropic one $`\alpha _{an}^2F\left(\mathrm{\Omega }\right)`$ such that $`\alpha _{an}^2F\left(\mathrm{\Omega }\right)=g\alpha _{is}^2F\left(\mathrm{\Omega }\right)`$, where $`g`$ is a constant, and the Coulomb pseudopotential $`\mu ^{*}`$ is set to zero for simplicity. The density of states is calculated for each symmetry at $`T=2,4,40`$ and $`80`$ K. The resulting curves are compared to those obtained by analytical continuation of the imaginary-axis solution of the Eliashberg equations and to the experimental tunneling curves of optimally-doped Bi 2212 crystals. In this paper, we make use of the Migdal-Eliashberg theory for strong electron-boson coupling to discuss the effect of different possible symmetries of the order parameter on the tunneling curves of copper-oxide superconductors. Because of the layered structure of these materials, we can suppose the quasiparticle wavevectors $`𝐤`$ and $`𝐤^{\prime }`$ to lie in the CuO<sub>2</sub> plane and call $`\varphi `$ and $`\varphi ^{\prime }`$ their azimuthal angles in this plane. Then we solve the Eliashberg equations (EE) using a single-band approximation with a nearly-circular Fermi line. In the real-axis formalism the EE take the form of a set of coupled integral equations for the order parameter $`\mathrm{\Delta }(\omega ,\varphi )`$ and the renormalization function $`Z(\omega ,\varphi )`$, containing the retarded interaction $`\alpha ^2(\mathrm{\Omega },\varphi ,\varphi ^{\prime })F(\mathrm{\Omega })`$ and the Coulomb pseudopotential $`\mu ^{*}(\varphi ,\varphi ^{\prime })`$ . We hypothesize that these two quantities contain an $`isotropic`$ and an $`anisotropic`$ part, and we expand both of them in terms of basis functions. Actually, even though we are able to solve the EE for an arbitrary constant value of $`\mu ^{*}`$, we set it to zero for simplicity. The spectral function expanded at the lowest order is then expressed by: $`\alpha ^2(\mathrm{\Omega },\varphi ,\varphi ^{\prime })F(\mathrm{\Omega })`$=$`\alpha _{is}^2F(\mathrm{\Omega })\psi _{is}\left(\varphi \right)\psi _{is}\left(\varphi ^{\prime }\right)`$ +$`\alpha _{an}^2F(\mathrm{\Omega })\psi _{an}\left(\varphi \right)\psi _{an}\left(\varphi ^{\prime }\right)`$ where the basis functions $`\psi _{is}\left(\varphi \right)`$ and $`\psi _{an}\left(\varphi \right)`$ are chosen as follows: $`\psi _{is}\left(\varphi \right)`$=1; $`\psi _{an}\left(\varphi \right)`$=$`\sqrt{2}\mathrm{cos}\left(2\varphi \right)`$ for the $`d`$-wave, $`\psi _{an}\left(\varphi \right)`$=$`8\sqrt{2/35}\mathrm{cos}^4\left(2\varphi \right)`$ for the $`anisotropic`$-$`s`$, and $`\psi _{an}\left(\varphi \right)`$=$`2\sqrt{2/3}\mathrm{cos}^2\left(2\varphi \right)`$ for the $`extended`$-$`s`$ .
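The basis functions quoted above are normalized so that the angular average of $`\psi ^2`$ over the Fermi line equals unity. A quick numerical check (a Python sketch, not part of the EE solver itself) confirms this for all three anisotropic choices:

```python
import numpy as np

phi = np.linspace(0.0, 2.0 * np.pi, 100000, endpoint=False)
basis = {
    "isotropic":     np.ones_like(phi),
    "d-wave":        np.sqrt(2.0) * np.cos(2.0 * phi),
    "anisotropic-s": 8.0 * np.sqrt(2.0 / 35.0) * np.cos(2.0 * phi) ** 4,
    "extended-s":    2.0 * np.sqrt(2.0 / 3.0) * np.cos(2.0 * phi) ** 2,
}
for name, psi in basis.items():
    # <psi^2> = (1/2pi) * integral of psi^2 over the Fermi line
    print(f"{name:14s}  <psi^2> = {np.mean(psi ** 2):.6f}")   # -> 1.000000
```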
For simplicity again, we suppose that $`\alpha _{an}^2F(\mathrm{\Omega })`$=$`g\alpha _{is}^2F(\mathrm{\Omega })`$, where $`g`$ is a constant . Thus, the electron-boson coupling constants for the $`isotropic`$-wave channel and the $`anisotropic`$-wave one, which are given by $`\lambda _{is,an}`$= $`(1/\pi )\int _0^{2\pi }d\varphi \psi _{is,an}^2(\varphi )\int _0^{+\mathrm{\infty }}d\mathrm{\Omega }\alpha _{is,an}^2F(\mathrm{\Omega })/\mathrm{\Omega }`$, turn out to be proportional: $`\lambda _{an}`$=$`g\lambda _{is}`$. We are interested in solutions of the real-axis EE of the form: $`\mathrm{\Delta }(\omega ,\varphi )`$=$`\mathrm{\Delta }_{is}(\omega )`$+$`\mathrm{\Delta }_{an}(\omega )\psi _{an}\left(\varphi \right)`$ and $`Z(\omega ,\varphi )`$=$`Z_{is}(\omega )`$+$`Z_{an}(\omega )\psi _{an}\left(\varphi \right)`$. The equations are reported explicitly elsewhere . Here we suppose $`Z_{an}(\omega )`$ to be identically zero . The numerical solution of the real-axis EE is performed by an iterative procedure. In view of a comparison with the experimental tunneling curves obtained in Bi 2212 break junctions, we take $`\alpha _{is}^2F(\mathrm{\Omega })`$=$`(\lambda _{is}/\lambda _{\mathrm{Bi2212}})\alpha ^2F(\mathrm{\Omega })_{\mathrm{Bi2212}}`$ , where $`\alpha ^2F(\mathrm{\Omega })_{\mathrm{Bi2212}}`$ has been experimentally determined in a previous paper and rescaled to give $`T_c`$=97 K. Once $`\mathrm{\Delta }(\omega ,\varphi )`$ and $`Z(\omega ,\varphi )`$ have been determined, we calculate the quasiparticle density of states $`N(\omega )`$= $`\left(1/2\pi \right)\int _0^{2\pi }d\varphi \mathrm{Re}\left(\omega /\sqrt{\omega ^2-\mathrm{\Delta }(\omega ,\varphi )^2}\right)`$, whose convolution integral with the Fermi distribution is the quantity that must be compared to the experimental tunneling data.
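The angular average in $`N(\omega )`$ is straightforward to evaluate once $`\mathrm{\Delta }(\omega ,\varphi )`$ is known. The Python sketch below performs only this last step, for a static, frequency-independent gap $`\mathrm{\Delta }(\varphi )=\mathrm{\Delta }_0\psi (\varphi )`$ and a small broadening that regularizes the square root; in the full calculation the retarded, complex $`\mathrm{\Delta }(\omega ,\varphi )`$ of the EE solution would simply replace the static gap used here.

```python
import numpy as np

def dos(omega, delta0, symmetry="d", eta=1e-3, n_phi=4096):
    """Angle-averaged quasiparticle DOS,
    N(w) = (1/2pi) int dphi Re[ w / sqrt(w^2 - Delta(phi)^2) ],
    for a static gap Delta(phi) = delta0 * psi(phi)."""
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    if symmetry == "d":
        psi = np.sqrt(2.0) * np.cos(2.0 * phi)
    elif symmetry == "s":
        psi = np.ones_like(phi)
    else:
        raise ValueError(f"unknown symmetry: {symmetry}")
    w = omega + 1j * eta                       # small broadening
    return np.real(w / np.sqrt(w ** 2 - (delta0 * psi) ** 2)).mean()

# d-wave: V-shaped DOS inside the gap, with a peak near the gap maximum
# delta0*sqrt(2); s-wave: hard gap with a square-root singularity.
ws = np.linspace(0.0, 3.0, 301)
n_d = [dos(w, 1.0, "d") for w in ws]
n_s = [dos(w, 1.0, "s") for w in ws]
```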
Incidentally, the explicit solution shows that the symmetry of $`\mathrm{\Delta }(\omega ,\varphi )`$ is affected by the choice of the coupling constants $`\lambda _{is}`$ and $`\lambda _{an}`$ and, for some particular values of $`\lambda _{is}`$ and $`\lambda _{an}`$, by the starting values of $`\mathrm{\Delta }_{is}(\omega )`$ and $`\mathrm{\Delta }_{an}(\omega )`$. The figure reports the theoretical normalized conductance at $`T`$=4 K for the six symmetries analyzed, together with the Bi 2212 experimental data obtained in break-junction tunneling experiments . The $`d`$-wave curve gives the best fit of the peak at the gap edge and of the conductance behaviour inside the gap, suggesting that the symmetry of Bi 2212 could be pure $`d`$-wave. Nevertheless, none of the symmetries studied here is able to give, at the same time, $`T_c`$=92-94 K and a peak at 35-41.5 meV, as experimentally observed by STM on Bi 2212 and recently reported in the literature . The same theoretical curves have also been obtained at $`T`$=2, 40 and 80 K. The $`anisotropic`$-$`s`$ and $`extended`$-$`s`$ curves are highly temperature-dependent, and some fine structures evident at $`T`$=2 K are already indistinguishable at 4 K. For $`T\gtrsim T_c/3`$ it is practically impossible to distinguish between $`d`$-wave and mixed symmetries, while the $`s`$-wave curve remains clearly distinct. The curves at $`T`$=2 K can be compared to those obtained by analytically continuing the imaginary-axis solutions $`\mathrm{\Delta }(\mathrm{i}\omega _n)`$ and $`Z(\mathrm{i}\omega _n)`$. In general, at this low temperature, the analytical continuation gives a *reasonable agreement* with the real-axis solutions. The agreement is satisfactory for the $`s+\mathrm{i}d`$, $`d`$ and $`anisotropic`$-$`s`$ cases, and becomes a little worse in the $`extended`$-$`s`$ case. The imaginary-axis $`s`$-wave curve is instead markedly shifted (by about 3 meV) toward higher energies, and thus fails to approximate the real-axis solution. In the ($`s+d`$)-wave symmetry, even the shape of the curve is very different in the two cases . Finally, we have studied the effect of *non-magnetic impurities* on the tunneling density of states by solving the appropriate real-axis EE . We have found that this effect is strongest on the $`d`$-wave component of the order parameter. A small amount of impurities in the unitary limit gives rise to a finite conductance at zero bias in the $`d`$-wave curve, and thus improves its agreement with our experimental points . In the non-unitary case, the $`d`$-wave peak is further lowered, broadened and shifted toward lower energies. In the same conditions, all the mixed-symmetry curves are modified in non-trivial ways, e.g. the $`s+\mathrm{i}d`$ and the $`anisotropic`$-$`s`$ curves become practically indistinguishable from the $`s`$-wave one, but are shifted toward lower energies. Finally, the $`s`$-wave curve is nearly unaffected by this kind of impurities, apart from a small shift of the gap toward higher energies.
no-problem/9912/cond-mat9912381.html
ar5iv
text
# Phase-Locking of Vortex Lattices Interacting with Periodic Pinning ## Abstract We examine Shapiro steps for vortex lattices interacting with periodic pinning arrays driven by AC and DC currents. The vortex flow occurs by the motion of the interstitial vortices through the periodic potential generated by the vortices that remain pinned at the pinning sites. Shapiro steps are observed for fields $`B_\varphi <B<2.25B_\varphi `$, with the most pronounced steps occurring for fields where the interstitial vortex lattice has a high degree of symmetry. The widths of the phase-locked current steps as a function of the magnitude of the AC driving are found to follow a Bessel function, in agreement with theory. Vortex lattices interacting with periodic pinning arrays show a wide range of interesting commensurability or matching effects when the number of vortices is a multiple or rational multiple of the number of pinning sites. These pinning arrays can be created with lithographic techniques, in which arrays of microholes or ”antidots” and magnetic dot arrays can act as pinning sites. For small pinning sites only one vortex is trapped on a site, as observed in transport measurements , Lorentz-microscopy experiments and simulations . Additional vortices sit in the areas between the pins, and under the influence of an applied driving force they can flow between the vortices that have remained trapped at the pinning sites. The flowing interstitial vortices experience a periodic potential caused by the repulsive interactions from the vortices at the pinning sites. The motion of the driven interstitial vortices is then analogous to an overdamped particle moving down a tilted washboard. With the addition of an AC driving current, interference effects in the form of Shapiro steps can be expected to occur when the frequency of the particles moving over the washboard matches one of the harmonics of the driving frequency . Recently Shapiro steps have been observed for driven vortices moving in samples with a periodic array of pinning sites at twice the matching field $`B=2B_\varphi `$, where $`B_\varphi `$ is the field for which there is one vortex per pinning site. The height of these current steps (range of phase-locking) strongly suggests that the vortex motion consists of the interstitial vortices moving in the periodic potential from the pinned vortices. Shapiro steps have also been observed by Martinoli et al. for vortices moving over a one-dimensional periodic potential created from a periodic thickness modulation. It has further been proposed that Shapiro steps can be seen for vortices in driven flux-transformers. In this work we investigate numerically and analytically the Shapiro steps for driven vortices in thin-film superconductors with periodic pinning arrays. The vortex lattice consists of the pinned vortices at the pinning sites and the sublattice of vortices that sit in the interstitial region. As a function of increasing drive we observe the interstitial vortices moving in one-dimensional channels between the pinning sites. With a superimposed AC drive we observe Shapiro steps. We find that for certain commensurate fields, such as $`B=2B_\varphi `$, the system can be modeled as an overdamped driven pendulum, with the associated phase locking. We find numerically that the widths of the steps depend on the magnitude of the AC driving as a Bessel function, in agreement with theory. The Shapiro steps are most pronounced for highly symmetric interstitial vortex lattice arrangements.
For $`B>2B_\varphi `$ the steps wash out due to complicated vortex configurations leading to nontrivial flow patterns. We numerically integrate the overdamped equation of motion for a vortex $`i`$ $$𝐟_i=𝐟_i^{vv}+𝐟_i^{vp}+𝐟_d+𝐟_{ac}=𝐯_i.$$ (1) Here the total force acting on vortex $`i`$ is $`𝐟_i`$. The vortex-vortex interaction potential is logarithmic, $`U_v=-\mathrm{ln}(r)`$, and the force on vortex $`i`$ from all the other vortices is $`𝐟_i^{vv}=-\sum _{j\ne i}^{N_v}\nabla _iU_v(r_{ij})`$ . We impose periodic boundary conditions and evaluate the periodic long-range logarithmic interaction with an exact and fast converging sum . The pinning is modeled as attractive parabolic wells with $`𝐟_i^{vp}=-(f_p/r_p)|𝐫_i-𝐫_k^{(p)}|\mathrm{\Theta }\left((r_p-|𝐫_i-𝐫_k^{(p)}|)/\lambda \right)\widehat{𝐫}_{ik}^{(p)}`$. Here, $`\lambda =1`$, $`\mathrm{\Theta }`$ is the step function, $`𝐫_k^{(p)}`$ is the location of pinning site $`k`$, $`f_p`$ is the maximum pinning force and $`\widehat{𝐫}_{ik}^{(p)}=(𝐫_i-𝐫_k^{(p)})/|𝐫_i-𝐫_k^{(p)}|`$. The pinning is placed in a rectangular array ($`L_x,L_y`$) with the ratio of the pinning radius $`r_p`$ to the pinning lattice constant $`L_y`$ being $`r_p/L_y=0.164`$, close to the ratio $`0.2`$ used in the experiments . The pinning is placed in a $`4\times 4`$ array, and the initial vortex configurations are obtained by annealing from a high-temperature state, where the vortices are liquid, and cooling to $`T=0`$. For certain parameters we have also considered simulations with pinning arrays up to $`10\times 10`$ and found only minor differences. We only consider the case $`B>B_\varphi `$, so that the vortex motion is strictly from the flow of interstitial vortices. The driving force $`𝐟_d`$ represents the Lorentz force from an applied current. We gradually increment $`𝐟_d`$ from zero, simulating each DC current value for 17500 time steps (the normalized time step is $`dt=0.003072`$) to obtain the average of the vortex velocities. The resulting DC force-velocity curve is proportional to the DC current-voltage curve. The AC offset is added as $`f_a\mathrm{cos}(\omega t)`$. We conduct a series of simulations in which the amplitude $`f_a`$ is varied. In this work both the DC and AC driving forces are in the $`x`$-direction. We first consider the $`B=2B_\varphi `$ case, where the interstitial vortices form a perfectly ordered square sub-lattice. The vortex trajectories above depinning are shown in the upper inset of Fig. 1 for this case. Here the interstitial vortices travel in one-dimensional paths between the pinned vortex sub-lattice. Further, the moving interstitial vortex lattice retains the same square symmetry as the pinned interstitial vortex lattice. Fig. 1 shows typical simulation results for the voltage response $`V_x=(1/N_v)\sum _{i=1}^{N_v}𝐯_i\widehat{𝐱}`$ versus the applied DC driving force at several different AC amplitudes for $`B=2B_\varphi `$. The simulation parameters are $`\omega =1.6276`$ and $`L_x=L_y=1.83`$. For zero AC driving the vortex velocities increase linearly with the DC driving force. With applied AC driving there are clear steps where the vortex velocities remain constant over a finite range of DC driving, indicative of phase-locking of the vortex motion. The widths of the steps depend on the magnitude of the AC drive.
In order to demonstrate that the phase-locking of the interstitial vortex motion is indeed closely related to the well-known Shapiro steps in the AC-driven pendulum equation, we first make the observation from the inset in Fig. 1 that the interstitial vortices are moving one-dimensionally along the x-direction on the symmetry line between the pinned vortices, $`y=\frac{L_y}{2}`$, where $`L_y`$ is the distance between two pinning centers along the y-direction. This allows us to write the equation of motion for the unpinned vortices as $`{\displaystyle \frac{d}{dt}}x_i-f_i^{vv}(x,y={\displaystyle \frac{L_y}{2}})`$ $`=`$ $`f_d+f_{ac},`$ (2) where we have neglected the pinning interaction and motion in the transverse direction. We make the additional assumptions that the unpinned vortices form a perfect rectangular lattice, meaning that they effectively do not interact due to symmetry, and that the pinned vortices are pinned exactly to their pinning sites; i.e., that the pinned vortices have no dynamics and form a perfect rectangular lattice with dimensions $`L_x`$ and $`L_y`$. Under these assumptions each moving vortex obeys the following equation of motion : $`{\displaystyle \frac{d}{dt}}x_i-{\displaystyle \frac{\pi }{L_x}}{\displaystyle \sum _k}{\displaystyle \frac{\mathrm{sin}(2\pi \frac{x}{L_x})}{\mathrm{cosh}\left(2\pi \frac{L_y}{L_x}(k+\frac{1}{2})\right)-\mathrm{cos}(2\pi \frac{x}{L_x})}}`$ $`=`$ $`f_d+f_{ac}.`$ (3) Considering only the leading terms in the above sum, we can simplify the interaction between pinned and unpinned vortices to yield the equation $`{\displaystyle \frac{d}{dt}}x-{\displaystyle \frac{2\pi }{L_x}}\mathrm{sech}\left(\pi {\displaystyle \frac{L_y}{L_x}}\right)\mathrm{sin}(2\pi {\displaystyle \frac{x}{L_x}})=f_d+f_a\mathrm{cos}(\omega t),`$ (4) where we have kept only the contributions $`k=-1,0`$, and allowed for a relative error in the force of $`\mathrm{sech}\left(\pi \frac{L_y}{L_x}\right)`$, which is obviously small as long as $`L_y/L_x`$ is not small. This equation describes the driven overdamped pendulum, and we can therefore apply the standard procedure for evaluating phase-locking ranges between a pendulum and an AC drive. Assuming phase-locking in which the pendulum (vortex) moves with a frequency $`n\omega `$, we insert the following ansatz (valid for large AC amplitudes) into the above equation, $`x(t)=x_0+n\omega \frac{L_x}{2\pi }t+\frac{f_a}{\omega }\mathrm{sin}\omega t`$, and equate the DC components of the resulting expression, yielding the relationship between the applied DC force and the phase, $`2\pi x_0/L_x`$, for a given integer $`n`$: $`nL_x{\displaystyle \frac{\omega }{2\pi }}-{\displaystyle \frac{2\pi }{L_x}}\mathrm{sech}\left(\pi {\displaystyle \frac{L_y}{L_x}}\right)J_n\left({\displaystyle \frac{2\pi f_a}{\omega L_x}}\right)\mathrm{sin}(2\pi {\displaystyle \frac{x_0}{L_x}})`$ $`=`$ $`f_d,`$ (5) where $`J_n`$ is the $`n`$th order Bessel function of the first kind. The size of the range, $`\mathrm{\Delta }f_d`$, in $`f_d`$ for which the vortex motion may stay locked to the AC drive’s $`n`$th harmonic is then given by the extreme values of $`\mathrm{sin}(2\pi \frac{x_0}{L_x})`$: $`\mathrm{\Delta }f_d`$ $`=`$ $`{\displaystyle \frac{4\pi }{L_x}}\mathrm{sech}\left(\pi {\displaystyle \frac{L_y}{L_x}}\right)\left|J_n\left({\displaystyle \frac{2\pi f_a}{\omega L_x}}\right)\right|.`$ (6) By conducting a series of simulations with different AC driving amplitudes we can compare our simulated step widths with those predicted from equation (6).
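Since equation (4) is just the AC-driven overdamped pendulum, the phase-locked plateaus and the Bessel-function step widths of equation (6) can be reproduced with a few lines of numerics. The Python sketch below (explicit Euler) measures the mean velocity while the DC force is ramped; plateaus appear at multiples of $`\omega L_x/2\pi `$. The model parameters echo those quoted above, but the integration lengths are illustrative choices.

```python
import numpy as np

def mean_velocity(f_d, f_a, omega, L_x=1.83, L_y=1.83,
                  dt=0.003, n_steps=300000, n_skip=50000):
    """Time-averaged velocity of the overdamped pendulum, equation (4):
    dx/dt = A*sin(2*pi*x/L_x) + f_d + f_a*cos(omega*t)."""
    A = (2.0 * np.pi / L_x) / np.cosh(np.pi * L_y / L_x)   # sech prefactor
    x, v_sum = 0.0, 0.0
    for n in range(n_steps):
        v = (A * np.sin(2.0 * np.pi * x / L_x)
             + f_d + f_a * np.cos(omega * n * dt))
        x += v * dt
        if n >= n_skip:                 # discard the transient
            v_sum += v
    return v_sum / (n_steps - n_skip)

# Ramp the DC drive: Shapiro steps show up as plateaus of <v> at
# multiples of omega*L_x/(2*pi), with widths following equation (6).
for f_d in np.linspace(0.0, 0.8, 41):
    print(f"{f_d:.3f}  {mean_velocity(f_d, f_a=0.5, omega=1.6276):.4f}")
```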
In Fig. 2a we plot the widths of the locking ranges for the harmonics $`n=0,1,2`$ predicted for our parameters from equation (6) (solid lines), together with the widths of the simulated locking ranges for $`n=0`$, $`n=1`$ and $`n=2`$ (plotting symbols as in the figure). There is very good agreement between the simulation data and the predicted curves. We note that although equation (6) is derived for a single moving interstitial vortex, at $`B=2B_\varphi `$ the interstitial vortex lattice is symmetric (see Fig. 1a), so the interstitial-interstitial vortex interactions cancel. We also obtain good agreement between the widths predicted from equation (6) and the simulations for the higher harmonics $`n>2`$, which are not shown here. We note that the agreement between the simulation data and the predicted behavior is not expected to be exact, since the force that the interstitial vortices experience from the pinned vortex lattice is not strictly sinusoidal. We have also tested equation (6) for different ratios of $`L_x/L_y`$ by considering a rectangular pinning array with $`L_x/L_y=2`$. The ratio of the step widths for the two directions is $`52`$, in good agreement with the theoretical prediction of $`57`$. The agreement is still good when we compare the simulated ranges of phase-locking with those predicted by equation (6) for the same parameters as above, but with $`L_x=2L_y=3.66`$. Since the vortices are forced in the x-direction, this is a case where the harmonic potential approximation made in equation (4) is not expected to be as good as for the square lattice case, $`L_x=L_y`$. Figure 2b shows that simulations at the second matching field, $`B=2B_\varphi `$ (open markers), show smaller-than-predicted ranges of locking, suggesting that the assumptions of the analysis are not well satisfied. However, performing the same simulations at the matching field with one additional interstitial vortex (filled markers) reveals locking ranges very close to what is predicted. This underlines that the harmonic potential assumption made in equation (4) is reasonable even for $`L_x=2L_y`$. Closer examination of the dynamics at $`B=2B_\varphi `$ (open markers) shows that internal modes in the moving vortex lattice are being excited and the assumption of cancelation of interstitial vortex interactions becomes invalid, which is responsible for the deviation between simulations and our prediction in figure 2b for $`B=2B_\varphi `$. It is, of course, important to emphasize that the overall features of the locking range are still predicted well by equation (6). The above analysis suggests that whenever the interstitial vortex lattice is rectangular, so that the interstitial vortex interactions cancel, Shapiro steps should be observed and should be well approximated by equation (6) when $`\mathrm{sech}(\pi L_y/L_x)`$ is small. Square interstitial vortex arrangements are found at $`B/B_\varphi =2,1.5,1.25,1.0625`$. For other filling fractions the interstitial vortex lattice is not symmetrical and the interstitial-interstitial interactions do not cancel, leading to some deviations from the predicted phase-locking (the locking range is usually smaller than predicted). This is illustrated in Fig. 3, where we show the widths of the Shapiro steps for different filling fractions at fixed AC amplitude and frequency. The Shapiro step widths for the different symmetrical vortex configurations at $`B/B_\varphi =2,1.5`$ and $`1.0625`$ are essentially identical.
For $`B/B_\varphi =1.375`$ and $`1.68`$ the steps are considerably reduced and some fractional Shapiro steps also appear. We find in general that for $`B_\varphi <B<2B_\varphi `$, the filling fractions that produce square interstitial vortex lattices have the same Shapiro step widths as at $`B=2B_\varphi `$. Interestingly, for $`B>2B_\varphi `$ we find that the step widths remain the same as at $`B=2B_\varphi `$; however, there is a component in $`V_x`$ of the steps that increases linearly with increasing $`f_d`$. For increasing magnetic fields this linear increase in $`V_x`$ of the steps grows until $`B\simeq 2.25B_\varphi `$, when the steps can no longer be discerned. This linear increase suggests that only a portion of the vortices are phase locked. The images (not shown) from the simulations suggest that the extra vortices which have been added to the $`B=2B_\varphi `$ sub-lattice cause an additional soliton-like motion which moves at a different speed than the interstitial vortices. To examine this we plot in figure 4 the time-dependent vortex velocities for two separate interstitial vortices along the same row at the $`n=1`$ step ($`f_d=0.39`$). In Figs. 4a and 4b, for $`B=2B_\varphi `$, the signals for the two particles are identical, indicating that the vortices are moving in phase. In Figs. 4c and 4d we plot the signal from a row containing an extra vortex for $`B=2.0625B_\varphi `$. Here the same oscillation as in Figs. 4a and 4b is seen, indicating that phase-locking is occurring; however, there is an additional lower-frequency oscillation superimposed. The soliton-like nature of this disturbance can be seen by noting that this extra oscillation is out of phase between the two vortices, similar to a kink soliton on a Frenkel-Kontorova chain . In conclusion, we have observed Shapiro steps in the current-voltage characteristics of driven vortex lattices interacting with periodic pinning. At $`B=2B_\varphi `$, where the vortex motion consists of the one-dimensional flow of interstitial vortices between the pinned vortices, Shapiro steps are observed in agreement with recent experiments . We show that for certain filling fractions the equation of motion for a driven interstitial vortex can be mapped to an AC-driven overdamped pendulum. We derive the widths of the Shapiro steps as a function of the relevant experimental parameters, and find excellent agreement between theory and simulations. For filling fractions where interstitial-interstitial vortex interactions become relevant the step widths are reduced. For $`B>2B_\varphi `$ the steps begin to vanish due to an additional soliton-like flow and other dynamical complexity. Acknowledgments: We thank C.J. Olson for critical reading of this manuscript. This work was supported by the Director, Office of Advanced Scientific Computing Research, Division of Mathematical, Information, and Computational Sciences of the U.S. Department of Energy under contract number DE-AC03-76SF00098 as well as CLC and CULAR (Los Alamos National Laboratory).
no-problem/9912/astro-ph9912012.html
ar5iv
text
# 1 Introduction The interest in the problem of the dynamics of stellar systems with a double massive centre is due to the discoveries of double nuclei in the centers of several galaxies, with various separations in projection – from 2 pc for M31 up to 800 pc for Markarian 273; a double massive object was also discovered in the centre of Arp 220. A massive binary system located in the core of a galaxy can lead to a number of dynamical effects. In the present paper we investigate the role of double centers, i.e. binary massive bodies situated in the center of N-body systems, in the dynamical instability of gravitating systems, using the Ricci criterion (introduced in ) for the estimation of the relative instability (chaos) of those systems. This criterion, for example, made it possible to establish in that a regular central field increases the instability of N-body systems. It is remarkable that this result was obtained via numerical study of a relatively small number of particles and was later confirmed by extensive simulations on powerful computers . The effect of the central regular field is crucial for the understanding of the relaxation, mixing and evolution of galactic cores. The situation is complicated in the sense that there are effects of both types – acting to increase and to decrease the chaos in the system. The available numerical methods range from Lyapunov numbers and KS-entropy, approximate expansions and frequency maps, up to powerful methods based on direct solutions , , . The adequacy and efficiency of a given method in each particular case is itself an interesting problem. For example, the accuracy of the estimation of Lyapunov numbers is limited for a number of reasons, particularly due to the exponential growth of the errors at large enough N, inevitable in any iterated numerical procedure. Geometrical methods based on the theorems of the theory of dynamical systems provide an alternative way to study N-body systems, by reducing the problem to that of the geometry of the phase and configuration spaces of the system . In physical problems this method was initially used by Krylov, and for N-body gravitational systems by Gurzadyan and Savvidy . In the latter papers it was shown that spherical N-body systems are K-systems, i.e. mixing systems with exponential instability. The statistical properties play a key role in the understanding of the relaxation and evolution of many astrophysical objects - from the Solar system to galaxies and clusters of galaxies. First, we will briefly describe the Ricci curvature formalism and the algorithm of the numerical calculations. ## 2 The Ricci curvature criterion An N-body gravitating system with potential $$V=-\sum _{i<j}^{N}Gm_im_j/r_{ij},$$ ($`r_{ij}`$ is the distance between the particles with masses $`m_i`$ and $`m_j`$) can be transformed, via a variational principle, to a geodesic flow in a Riemannian space. The behavior of close geodesics in this space is described by the Jacobi equation $$\nabla _u\nabla _un+\mathrm{Riem}(n,u)u=0,$$ where $`u`$ is the velocity of the geodesics, $`n`$ is the separation vector of close geodesics and $`\nabla `$ denotes the covariant derivative. For the normal component of the deviation one can obtain the following equation $$\frac{d^2\|n\|^2}{ds^2}=-2K_{u,n}\|n\|^2+2\|\nabla _un\|^2,$$ where $$K_{u,n}=\frac{\langle \mathrm{Riem}(n,u)u,n\rangle }{\|n\|^2\|u\|^2-\langle n,u\rangle ^2}$$ is the so-called two-dimensional curvature (Riem is the Riemannian curvature).
If $`K_{u,n}`$ is strongly negative in all two-directions $`(u,n)`$ and everywhere in a compact manifold, the system possesses maximally strong instability properties, is isomorphic to a Bernoulli shift, and is an Anosov system . However this condition is too strong and therefore is not fulfilled for real physical systems. It is reasonable therefore to look for a weaker criterion using some average deviation of geodesics and a mean curvature of the manifold $`M`$. Consider the Ricci curvature $$r_u(s)=R_{\mu \nu }\frac{u^\mu u^\nu }{u^2},$$ where $`R_{\mu \nu }`$ is the Ricci tensor. The criterion of relative instability based on the Ricci curvature reads (Gurzadyan and Kocharian ): The geodesic $`\gamma _1(s)`$ with velocity $`u_1`$ is more unstable than the geodesic $`\gamma _2(s)`$ with velocity $`u_2`$ within some interval $`[0;S_1]`$, if $$r=\frac{1}{3N}\underset{s}{\mathrm{inf}}[r_u(s)]$$ and $$r_1<r_2,r_1<0.$$ The advantage of this criterion is that it is checkable via computer simulations of many-dimensional systems, including N-body systems. Note also that, as distinct from the Lyapunov numbers, this criterion describes the local (in time) properties of the system. ## 3 Numerical simulations In our computer experiments we estimated the Ricci curvature for several evolving configurations, i.e. we traced the variation of the Ricci curvature in time. The formula for the Ricci curvature of N-body systems can be derived from the formulae given in the previous section and has the following form , , : $$r_u(s)=-\frac{3N-2}{2}\frac{W_{ik}u^iu^k}{W}+\frac{3}{4}(3N-2)\frac{(W_iu^i)^2}{W^2}-\frac{\mathrm{\Delta }W}{2W^2}-\frac{3N-4}{4}\frac{|\partial W|^2}{W^3},$$ where $$W=E-V;W_i=\frac{\partial W}{\partial q_i};$$ $$W_{ik}=\frac{\partial ^2W}{\partial q_i\partial q_k};\mathrm{\Delta }W=\underset{i}{\sum }W_{ii};$$ $$|\partial W|^2=\underset{i}{\sum }\left(\frac{\partial W}{\partial q_i}\right)^2.$$
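The formula above is straightforward to evaluate for a given configuration. The Python sketch below computes the gradient of $`W`$ analytically and contracts the Hessian along $`u`$ with a central difference; for the Newtonian point-mass potential the Laplacian term vanishes ($`\mathrm{\Delta }W=-\mathrm{\Delta }V=0`$ away from collisions) and is therefore omitted. Units with G = 1; the test configuration here is an illustrative random one, not the concentric-cube setup described below.

```python
import numpy as np

def potential(q, m, G=1.0):
    """Newtonian potential energy V of a configuration q (shape N x 3)."""
    dq = q[:, None, :] - q[None, :, :]
    r = np.sqrt((dq ** 2).sum(-1))
    iu = np.triu_indices(len(m), k=1)
    return -G * (np.outer(m, m)[iu] / r[iu]).sum()

def ricci_u(q, u, m, E, eps=1e-4, G=1.0):
    """Ricci curvature r_u in the direction u (shape N x 3, unit norm),
    from the formula above with W = E - V.  The Delta W term is dropped,
    since Delta W = 0 for point-mass Newtonian potentials."""
    n3 = q.size                                   # 3N degrees of freedom
    W = E - potential(q, m, G)
    dq = q[:, None, :] - q[None, :, :]
    r = np.sqrt((dq ** 2).sum(-1))
    np.fill_diagonal(r, np.inf)
    gradW = -G * ((np.outer(m, m) / r ** 3)[:, :, None] * dq).sum(1)
    Wu = (gradW * u).sum()                        # W_i u^i
    Wuu = ((E - potential(q + eps * u, m, G)) - 2.0 * W
           + (E - potential(q - eps * u, m, G))) / eps ** 2  # W_ik u^i u^k
    gW2 = (gradW ** 2).sum()                      # |dW/dq|^2
    return (-(n3 - 2) / 2.0 * Wuu / W
            + 0.75 * (n3 - 2) * Wu ** 2 / W ** 2
            - (n3 - 4) / 4.0 * gW2 / W ** 3)

rng = np.random.default_rng(1)
q = rng.normal(size=(8, 3))
u = rng.normal(size=(8, 3)); u /= np.linalg.norm(u)
print(ricci_u(q, u, np.ones(8), E=-0.5))
```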
We built the systems with $`N=22`$ using the scheme described in , i.e. we considered systems in which the particles were located at the apexes of concentric cubes of unit side. The velocities of the particles were chosen in such a way that the system has no net rotational momentum. We estimated the variation of the Ricci curvature with time for the following configurations (Figures 1-3): 1. Homogeneous, i.e. all particles have the same mass $`m`$; 2. With one central massive particle of mass $`M`$, while the other $`N-1`$ particles have the same mass $`m\ll M`$; 3. With two massive particles of masses $`m_1`$ and $`m_2`$ situated in the central part of the system, while the other $`N-2`$ particles have the same mass $`m\ll m_1,m_2`$. Our calculations showed that the systems with a massive center are more unstable than the homogeneous ones, thus confirming the conclusions in , , , . Note the growth of the instability with the increase of the central mass $`M`$. Figures 1-3 illustrate the comparative instability of the three different types of systems mentioned above. The most unstable among the initial configurations is the system with double central masses. Note that as the systems evolve the Ricci curvature tends to zero for all three systems; however, the rate of approach is most rapid for the system with double centers. Physically it is clear that the third type of system has to dissolve more quickly for a small number of particles. Just this tendency has been noticed in the numerical experiments, namely that the approach of the Ricci curvature to zero becomes slower with the increase of $`N`$. In other words, the double massive central objects make the system more unstable initially; however, the system then evolves more quickly to its final dissolved state with regular orbits. These tendencies have been confirmed in numerous experiments with configurations with various initial conditions. ## 4 Conclusions Thus we used the Ricci curvature criterion to study the relative instability of three types of N-body systems: homogeneous systems, those with one central mass, and those with two central massive bodies. The following main conclusions have been drawn from the numerical experiments: 1. The presence of the second massive central object makes the system more unstable as compared with systems having a single massive center and with homogeneous ones. 2. The system with double massive objects evolves more quickly towards dissolution, i.e. to a more globally regular situation. 3. The greater the ratio of the mass of the massive objects to the mass of the remaining particles, the greater the difference in the initial instability and in the rate of evolution. The main conclusion, however, is the efficiency of the Ricci curvature criterion for the study of such complex many-dimensional systems – N-body systems – by means of simple numerical experiments. We thank V.G.Gurzadyan and S.J.Aarseth for valuable discussions. This work is supported in part by an INTAS grant.
no-problem/9912/astro-ph9912170.html
ar5iv
text
# Unipolar outflows and global meridional circulations in rotating accretion flows ## 1 Introduction The considerable recent interest in models of geometrically thick rotating accretion flows has been motivated by the ability of these models to explain unusual observational properties of some accreting black hole candidates. There are two major classes of such models. (a) At low accretion rates, the optically thin accretion flow is not able to radiate efficiently and the internal energy stored in the flow is advected into the black hole. Models of such flows have been developed by Ichimaru (1977), Rees et al. (1982), Narayan & Yi (1994), Abramowicz et al. (1995), and others; see the recent reviews by Narayan, Mahadevan & Quataert (1998) and Kato, Fukue & Mineshige (1998). These models are called advection dominated accretion flows (ADAFs) and have been applied to explain the spectral properties of low luminosity high-energy sources. (b) At high accretion rates the flow is optically thick. The liberated binding energy is converted to radiation which is trapped by the flow and advected into the black hole, as shown by Katz (1977) and Begelman (1978) for spherical accretion. Rotating accretion flows of this type are called thick discs, Polish doughnuts, or slim discs. They have been developed by Abramowicz, Jaroszyński & Sikora (1978), Jaroszyński, Abramowicz & Paczyński (1980), Abramowicz et al. (1988) and others. Most of the ADAF models have been constructed in a one-dimensional approach, which restricts the properties of the solutions by considering only the vertically-averaged, purely inward motion: the multidimensional character of the flow is missing. Narayan & Yi (1995a) pointed out the possible importance of polar outflows in ADAFs. Analytic studies of accretion flows with polar outflows were performed by Xu & Chen (1997) and Blandford & Begelman (1999) in a self-similar approach, and numerical two-dimensional studies have recently been carried out by Igumenshchev, Chen & Abramowicz (1996), Igumenshchev & Abramowicz (1999, hereafter IA99) and Stone, Pringle & Begelman (1999). These numerical studies demonstrated that in the case of small or moderate viscosity ($`\alpha \lesssim 0.1`$), non-radiative accretion flows are convectively unstable and accompanied by irregular bipolar outflows. They do not form powerful unbound winds. At large viscosity ($`0.1\lesssim \alpha <1`$) flows are stable and may have (but do not have to have) strong outflows. In this Letter we present results from two-dimensional axisymmetric hydrodynamical simulations of non-radiative rotating accretion flows with large viscosity. The flows form powerful unipolar or bipolar outflows, can be stationary or nonstationary depending both on $`\alpha `$ and on the adiabatic index $`\gamma `$, and self-organize into global meridional circulation cell(s). The energy required to support the circulation is extracted from matter accreted into the black hole with efficiency $`10^{-3}-10^{-2}`$. ## 2 Numerical method We compute models by solving the non-relativistic hydrodynamical equations $$\frac{d\rho }{dt}+\rho \nabla \stackrel{}{v}=0,$$ (1) $$\rho \frac{d\stackrel{}{v}}{dt}=-\nabla P-\rho \nabla \mathrm{\Phi }+\nabla 𝚷,$$ (2) $$\rho \frac{de}{dt}=-P\nabla \stackrel{}{v}+Q,$$ (3) where $`\rho `$ is the density, $`\stackrel{}{v}`$ is the velocity, $`\mathrm{\Phi }=-GM/r`$ is the Newtonian gravitational potential for a central point mass $`M`$, $`e`$ is the specific internal energy, $`𝚷`$ is the viscous stress tensor with all components included, and $`Q`$ is the dissipation function.
We adopt the ideal gas equation of state, $`P=(\gamma -1)\rho e`$, and consider only the shear viscosity, assuming the $`\alpha `$-prescription, $$\nu =\alpha \frac{c_s^2}{\mathrm{\Omega }_K},$$ (4) where $`c_s=\sqrt{P/\rho }`$ is the isothermal sound speed, and $`\mathrm{\Omega }_K=\sqrt{GM/r^3}`$ is the Keplerian angular velocity. Details of the numerical technique used to solve (1)-(3) in axial symmetry were discussed by IA99. We use a spherical grid $`N_r\times N_\theta =130\times 50`$ with the inner radius at $`r_{in}=3r_g`$ and the outer radius at $`r_{out}=8000r_g`$, where $`r_g=2GM/c^2`$ is the gravitational radius of the black hole. The grid extends from $`0`$ to $`\pi `$ in the polar direction. To model the relativistic Roche lobe overflow, which governs the flow near the black hole (Abramowicz 1981), in Newtonian gravity we adopt absorbing boundary conditions at $`r=r_{in}`$, together with the condition that the derivatives $`d(v_\theta /r)/dr`$ and $`d(v_\varphi /r)/dr`$ vanish. The latter means that there is no viscous energy flux from the inner boundary associated with the ($`r\theta `$) and ($`r\varphi `$) components of the shear stress. At $`r_{out}`$ we apply free outflow boundary conditions by interpolating all dynamical variables behind $`r_{out}`$. In the calculations we assume a source of matter with a constant ejection rate. The source is located around $`\theta =\pi /2`$ in the vicinity of $`r_{out}`$. Matter is ejected there with angular momentum equal to $`0.95`$ times the Keplerian angular momentum. Due to viscous spreading, part of the ejected matter moves inwards and forms the accretion flow. The other part leaves the computational domain freely through the outer boundary. We start the computation of a model from an initial state, the choice of which is not crucial, and follow the evolution until a quasi-stationary flow pattern is established. Typically, this takes a few viscous time scales estimated at $`r_{out}`$. ## 3 Two-dimensional hydrodynamical models Four models with various values of $`\alpha `$ and $`\gamma `$ are listed in Table 1, which also lists the type of the outflow, the stability and the efficiency $`ϵ`$ defined by (8). All of the models have powerful outflows, launched at radial distances $`10-100r_g`$. Models A and B are stationary, and Model C shows a stable time-averaged global flow pattern, perturbed from time to time by hot convective bubbles that originate very close to $`r_{in}`$. Model D is less stationary, with significant convective activity originating in the innermost part of the flow. The flow patterns with bipolar outflows in Models A and D are quite similar to those discussed by IA99 (their Models 1 and 5, respectively), despite differences in $`\alpha `$ and $`\gamma `$. The most interesting feature of Models B and C is the unipolar outflow. Figure 1 presents the flow pattern for Model B. Matter contained within the calculation domain forms a global meridional circulation cell with spatial scale $`r_{out}`$. Most of the stream-lines of the flow start at the source of matter at $`r_{out}`$, go inward, approach some minimum radius near the equatorial plane, turn outward, and leave the computational domain through the upper hemisphere. In Model B we observe one large circulation cell (toroidal in three dimensions), whereas in Model C the large circulation cell co-exists with a smaller one of opposite vorticity. The circulation is powered by the one-sided outflow generated in the vicinity of the black hole.
The part of the outflow which is close to the polar axis is most efficiently accelerated and becomes supersonic at a radial distance $`\sim 1000r_g`$. This part of the flow contains a small mass fraction and has outward directed velocities which are larger than the escape velocity, $`v_r>v_{esc}=\sqrt{2GM/r}`$. Obviously, it can escape to infinity, even if cooling processes become efficient at large distances $`r\gtrsim 1000r_g`$. However, most of the outflowing matter forms a ‘subsonic wind’. The evolution of this wind at large distances will be governed by cooling processes, which are not considered in our models. What drives such powerful outflows and circulations? Obviously, the power is extracted from the rotating accretion flow with the help of a mechanism which redistributes energy and momentum between different parts of the fluid. In our non-radiative viscous models only the shear stress can provide such a mechanism. Due to the stress, the inner, more rapidly rotating parts of the accretion flow pass a fraction of their kinetic (not only rotational) energy to the outer parts. The importance of outward energy transport supported by convection in geometrically thick accretion discs was independently recognized by Bardeen (1973), Abramowicz (1974) and Bisnovatyi-Kogan & Blinnikov (1977), and first studied in detail by Paczyński & Abramowicz (1982), with a follow-up by Różyczka & Muchotrzeb (1982). However, these models do not include inward advection. In later works (Begelman & Meier 1982; Narayan & Yi 1995a; Honma 1996; Kato & Nakamura 1998; Manmoto et al. 2000) advection was included, but the convective outward energy flux was found to be weak, and was always dominated by the assumed purely inward directed advective energy flux. In the ‘self-similar’ models of ADAFs (Narayan & Yi 1994) the viscous energy flux, $$\dot{E}_{visc}(r)=-2\pi r^2\int _0^\pi \left(v_r\mathrm{\Pi }_{rr}+v_\theta \mathrm{\Pi }_{r\theta }+v_\varphi \mathrm{\Pi }_{r\varphi }\right)\mathrm{sin}\theta d\theta ,$$ (5) is positive, directed outward, and is exactly balanced by the inward advection of energy with the corresponding flux $$\dot{E}_{adv}(r)=2\pi r^2\int _0^\pi \rho v_r\left(\frac{v^2}{2}+W-\frac{GM}{r}\right)\mathrm{sin}\theta d\theta ,$$ (6) where $`W`$ is the specific enthalpy. Thus, the total energy flux, $$\dot{E}=\dot{E}_{adv}+\dot{E}_{visc},$$ (7) is zero everywhere, $`\dot{E}(r)=0`$. Abramowicz, Lasota & Igumenshchev (1999) demonstrated that $`\dot{E}=0`$ is an artifact of the assumed self-similarity: ADAFs that obey physically reasonable boundary conditions have, in general, $`\dot{E}\ne 0`$.
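The diagnostics (5)-(7) are shell integrals over the polar angle and are simple to evaluate from simulation output. Below is a minimal Python sketch, assuming axisymmetric shell profiles on a uniform θ grid (a rectangle-rule quadrature); it illustrates the definitions rather than reproducing the production diagnostics of the code described above.

```python
import numpy as np

def energy_fluxes(r, theta, rho, v, Pi, W, GM=1.0):
    """Energy fluxes through a sphere of radius r, equations (5)-(7).
    v = (v_r, v_th, v_ph) and Pi = (Pi_rr, Pi_rth, Pi_rph) are 1-D
    arrays over a uniform polar grid theta; W is the specific enthalpy."""
    v_r, v_th, v_ph = v
    dtheta = theta[1] - theta[0]
    shell = lambda f: 2.0 * np.pi * r ** 2 * np.sum(f * np.sin(theta)) * dtheta
    e_visc = -shell(v_r * Pi[0] + v_th * Pi[1] + v_ph * Pi[2])      # eq. (5)
    bern = 0.5 * (v_r ** 2 + v_th ** 2 + v_ph ** 2) + W - GM / r
    e_adv = shell(rho * v_r * bern)                                 # eq. (6)
    return e_adv, e_visc, e_adv + e_visc                            # eq. (7)
```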
Our models have a specific geometry of flow which is very different from pure equatorial inflow. In Figure 2 we show the flow pattern for Model B in the vicinity of the inner boundary. In this figure the arrows show the velocity directions, and the ellipse is the projection of the radius $`r=R_A`$ at the equatorial plane. Matter which crosses the equatorial plane inside $`R_A`$ accretes into the black hole at the rate $`\dot{M}`$. Using the analogy of the flow pattern in Figure 2 with Bondi-Hoyle accretion, we call $`R_A`$ the ‘accretion radius’. In the case of Model B, $`R_A\simeq 10r_g`$. Model C has a similar flow pattern, but $`R_A`$ varies with time around the average value $`\simeq 80r_g`$. The flow geometry shown in Figure 2 allows a significant transport of energy by the shear stress from the matter accreted into the black hole to the matter outflowing to the upper hemisphere. Indeed, the stream lines of the matter which is eventually inflowing and of that which is outflowing are located close together until the accreting matter reaches $`r\simeq R_A`$. During this phase, efficient momentum and energy exchange takes place. We shall characterize the outward energy transport $`\dot{E}`$ by the ‘accretion efficiency’ $$ϵ=\dot{E}/\dot{M}c^2,$$ (8) which is different from the standard definition of the radiative efficiency, $$ϵ_{rad}=L/\dot{M}c^2,$$ (9) where $`L`$ is the total luminosity of the accretion flow. Assuming that matter falls freely into the black hole inside $`R_A`$ and that only the binding energy at the orbit $`r=R_A`$ is liberated, one can obtain an estimate of the maximum accretion efficiency, $$ϵ\simeq \frac{1}{4}\frac{r_g}{R_A}.$$ (10) In the case of Models B and C, formula (10) predicts $`ϵ\simeq 0.02`$ and $`0.004`$, respectively. Both values are significantly larger than the prediction of $`ϵ_{rad}`$ for ADAFs, $`ϵ_{rad}\sim 10^{-4}`$ (Narayan & Yi 1995b). $`\dot{E}(r)`$, $`\dot{E}_{adv}(r)`$ and $`\dot{E}_{visc}(r)`$ for Model B are shown in Figure 3. In a steady state, $`\dot{E}`$ must be constant. The variation in $`\dot{E}`$ seen in Figure 3 is about $`10\%`$. This can be partially explained by a small non-stationarity of the flow, and by small inaccuracies in our numerical code, which does not exactly conserve the total energy. Test calculations have shown that this inaccuracy gives an error of less than $`5\%`$. From Figure 3, one can see that for Model B, $`ϵ\simeq 0.01`$, which is only a factor of 2 smaller than predicted by estimate (10). The viscous flux $`\dot{E}_{visc}`$ (the dashed line in Fig.3) is always positive and dominates the advective flux $`\dot{E}_{adv}`$ (solid line) inside the radius $`\sim 30r_g`$. At larger radii the energy is transported outward mainly by advection. Note that $`\dot{E}_{adv}`$ changes sign at $`r\simeq 5r_g`$. Inside this radius the inward energy advection compensates a fraction of the outward viscous energy flux. In all models, the time averaged net accretion rate $`\dot{M}(r)`$ through successive spheres of radius $`r`$ is constant inside $`1000r_g`$ to good accuracy. However, the rates of mass inflow $`\dot{M}_{in}`$ and outflow $`\dot{M}_{out}=\dot{M}_{in}-\dot{M}`$ are both increasing functions of $`r`$. In Model B, $`\dot{M}_{in}(r)`$ is well approximated by a power law with index $`\beta \simeq 0.5`$ in the radial range $`10-1000r_g`$. Models A, C and D show a similar power law behaviour for $`\dot{M}_{in}(r)`$, but with slightly different indices. Such a fast outward increase of $`\dot{M}_{in}`$ and $`\dot{M}_{out}`$ indicates the importance of global circulation motions in the dynamics of the accretion flow in the models considered. Only a small part of the matter circulating in the computational domain is accreted by the black hole. Most of the matter in the case of the (quasi-)stable Models A, B and C, which starts at the source near the outer boundary, escapes through the outer boundary after one cycle of circulation. This behaviour is in agreement with the results of Blandford & Begelman (1999), but unlike these authors, we see powerful outflows only in the case of large viscosity, $`\alpha \gtrsim 0.1`$. We also note that the radial dependence of $`\dot{M}_{in}`$ and $`\dot{M}_{out}`$ in our simulations is in good agreement with the results of Stone et al. (1999), although in their models accretion flows are dominated by small-scale vortices and circulation motions.
## 4 Discussion Our numerical models can be applied to the two well-studied accretion regimes mentioned in the Introduction: optically thick and optically thin. The matter in optically thick flows is radiation dominated and characterized by $`\gamma =4/3`$, so Models C and D can be relevant in this case. In optically thin flows the ‘effective’ adiabatic index ranges from about 3/2 to 5/3, depending on the strength of the magnetic field (see Narayan & Yi 1995b). Flow patterns of ‘optically thin’ models consist of large-scale stable subsonic circulation cell(s) (in the $`(r,\theta )`$ plane): two equatorially symmetric cells in Model A and one cell in Model B. Only a small fraction of the outflowing matter forms the unbound supersonic unipolar outflow in Model B. A fraction $`ϵ\simeq 0.01`$ of the total energy of the matter accreted by the black hole is extracted and remains in the form of kinetic and thermal energy of the circulating matter. Although the present numerical models include no energy losses, one can speculate about the fate of subsonic outflows when energy losses are important, using the recent results obtained by Allen, Di Matteo & Fabian (1999) and Di Matteo et al. (1999). They have noted that in optically thin accretion flows coupled with strong outflows, the radiative energy losses are dominated by bremsstrahlung. The most efficient cooling in such accretion flows occurs at the maximum radius where this flow exists. Taking into account this result and the results of our numerical simulations we propose the following scenario of black hole accretion accompanied by global meridional circulation of matter. Subsonic outflows originate near to the black hole and are mainly supported by the pressure gradient force while the matter radiates weakly. At large radial distances, where the dynamical time scale of flows becomes comparable with the bremsstrahlung cooling time, the matter cools down and its outward motion cannot be supported any longer by the pressure gradient. The matter turns back and forms the inflowing part of the circulation cell(s). A schematic illustration of such a flow pattern in the case of one circulation cell near to the black hole is shown in Figure 4. Assuming that all of the thermal energy carried by outflows is radiated away, we can roughly estimate the radiative efficiency to be $`ϵ_{rad}\simeq ϵ\simeq 0.01`$. Note that this scenario can only be correct if the thermal instability develops slowly at the outer region of circulation cell(s), but this is what one would expect. Basic properties of accretion flows with circulations can roughly be estimated. Let $`R_c`$ be the spatial scale of the circulation cells which contain matter with the average density $`\rho _c`$. Comparing the bremsstrahlung cooling time for matter with the virial temperature and the dynamical time scale $`t_d=R_c/V_c`$ (where $`V_c=\beta V_K`$ is the mean circulation velocity at $`R_c`$, $`V_K`$ is the Keplerian velocity and the value of the parameter $`\beta \sim \alpha \sim 0.1`$ is taken from numerical models), we can estimate $`\rho _c\propto R_c^{-2}`$ and consequently, the mass involved in circulations, $$M_c\simeq \left(\frac{\beta }{0.1}\right)\left(\frac{R_c}{10^3r_g}\right)\left(\frac{M}{10^9M_{\odot }}\right)^2M_{\odot },$$ (11) and the bremsstrahlung luminosity, $$\frac{L}{L_{Edd}}\simeq 3\times 10^{-5}\left(\frac{\beta }{0.1}\right)^2\left(\frac{10^3r_g}{R_c}\right)^{3/2}.$$ (12) Thus, the less massive and the more compact the object, the higher its luminosity.
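A hedged numerical reading of estimates (11) and (12), at the fiducial values used in the text (the $`3\times 10^{-5}`$ normalization is as reconstructed above):

```python
def M_c(beta=0.1, Rc=1e3, M9=1.0):
    """Eq. (11): circulating mass in solar masses (Rc in units of r_g)."""
    return (beta / 0.1) * (Rc / 1e3) * M9**2

def L_over_LEdd(beta=0.1, Rc=1e3):
    """Eq. (12): bremsstrahlung luminosity in Eddington units."""
    return 3e-5 * (beta / 0.1)**2 * (1e3 / Rc)**1.5

print(M_c(), L_over_LEdd())    # -> 1.0, 3e-05 at the fiducial values
print(L_over_LEdd(Rc=1e2))     # ~ 9.5e-04: a 10x smaller cell is ~30x brighter
```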
In a steady state, the external mass supply to the circulation cells is equal to the mass accretion rate $`\dot{M}=L/ϵc^2`$. Without the mass supply, the characteristic lifetime of the circulation cells is $$t_c=\frac{M_c}{\dot{M}}\simeq 10^3\left(\frac{0.1}{\beta }\right)\left(\frac{ϵ}{0.01}\right)\left(\frac{R_c}{10^3r_g}\right)^{5/2}\left(\frac{M}{10^9M_{\odot }}\right)\mathrm{yrs}.$$ (13) Note that in estimates (11)-(13), we ignore the mass and energy losses due to the supersonic unipolar/bipolar outflows, and due to a wind which starts on the ‘outer surface’ of the circulation cells and carries away all of the excess angular momentum. Estimates (11) and (12) agree quite well with the observed data for the core of the elliptical galaxy M87 (Allen et al. 1999). Also, our finding of unipolar outflows from accreting black holes can be used to explain the one-sided jets observed in M87 and other objects. ### Acknowledgments We thank Marek Abramowicz and John Miller for stimulating discussions and comments on drafts of this paper. We gratefully acknowledge hospitality at the International School for Advanced Studies in Trieste, where a part of this work was done.
# A Coordinated Radio Afterglow Program ## Introduction BeppoSAX revolutionized gamma-ray burst (GRB) astronomy not only through its discovery of X-ray afterglows but also through the dissemination of accurate and timely GRB positions to ground-based observers, who then conduct searches for afterglows at optical and radio wavelengths. Our collaboration uses the interferometer facilities of the Very Large Array (VLA), the Australia Telescope Compact Array (ATCA), the Very Long Baseline Array (VLBA) and the Owens Valley Radio Observatory (OVRO) Interferometer. At high frequencies, we use single dish telescopes which include the James Clerk Maxwell Telescope (JCMT) and the OVRO 40-m Telescope. All afterglow searches begin with the VLA in the northern hemisphere (dec. $`>-45^{\circ }`$, $`\sigma _{\mathrm{rms}}=45`$ $`\mu `$Jy in 10 min., FOV $`\sim 5^{\prime }`$) and the ATCA in the southern hemisphere (dec. $`<-45^{\circ }`$, $`\sigma _{\mathrm{rms}}=45`$ $`\mu `$Jy in 240 min., FOV $`\sim 5^{\prime }`$), typically at a frequency of 8.5 GHz, which provides a balance between sensitivity and field-of-view. Follow-up programs at the other radio facilities are begun after a VLA or ATCA transient is discovered. As with quasars, radio observations provide unique diagnostics complementary to those obtained at X-ray and optical wavelengths. Our collaboration has discovered all known radio afterglows to date, leading to a number of important results: the direct demonstration of relativistic expansion of the ejecta (Frail et al. 1997a), evidence for a reverse shock (Kulkarni et al. 1999), the first true calorimetry of a GRB explosion (Frail, Waxman & Kulkarni 1999), the discovery of optically obscured events (Taylor et al. 1998), the first unambiguous evidence that the ejecta are collimated in jets (Harrison et al. 1999), and the discovery of a possible link between supernovae and GRBs (Kulkarni et al. 1998). ## Radio Afterglow Statistics Since 1997 we have observed 19 GRBs with the VLA and detected a total of eight radio afterglows (see Figure 1, Tables 1 and 2). The peak fluxes ($`F_{peak}`$) of the detections range from 1200 $`\mu `$Jy to 150 $`\mu `$Jy. This small range of $`F_{peak}`$ values suggests that our ability to detect radio afterglows is severely limited by the sensitivity of the telescope. The “lifetime” (i.e. $`t_{max}`$) of the radio afterglows is signal-to-noise limited but it is clear, at least among bursts of comparable brightness, that $`t_{max}`$ varies substantially. Of special note are the three GRBs (970828, 981226, and 990506) which have no optical counterparts (i.e. the XR class). These may represent an important group of GRBs whose optical emission is extincted by dust. There are 11 GRBs for which a VLA search of the error box failed to detect a radio afterglow (see Table 2). The peak fluxes given in the table are conservative upper limits for a radio afterglow on a timescale of 1 to 30 days and at frequencies between 1.4 and 8.5 GHz. These non-detections vary in quality depending on the size of the error circle but most observations had sufficient sensitivity to detect radio afterglows with fluxes comparable to those listed in Table 1. There have been two radio afterglow detections made at the ATCA (see Table 1). The possible relation of GRB 980425 to SN1998bw makes it a rather unusual event, so we do not include it in the detection statistics. The upper limits of the six ATCA non-detections in Table 3 were not sufficient to have detected the weaker radio afterglows in Table 1.
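The sensitivity-limited interpretation can be made concrete with simple signal-to-noise arithmetic, using only the numbers quoted above:

```python
# With a typical rms of 45 uJy, the faintest detection is only a ~3 sigma event.
sigma_rms = 45.0                   # uJy, quoted above for VLA/ATCA searches
for f_peak in (1200.0, 150.0):     # brightest and faintest detections, uJy
    print(f"F_peak = {f_peak:.0f} uJy -> S/N = {f_peak / sigma_rms:.1f}")
# -> 26.7 and 3.3; afterglows even a factor ~2 fainter than the weakest
#    detection would fall below a conventional detection threshold.
```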
## Summary In summary, our coordinated program has been very successful in detecting radio afterglows from GRBs. In particular: * Six gamma-ray bursts are seen at X-ray, optical and radio wavelengths (GRB 970508, GRB 980329, GRB 980519, GRB 980703, GRB 990123, GRB 990510) (see Figure 1). * Of the 23 X-ray afterglows, nine have been detected at radio wavelengths (XOR + XR), a rate of 39%. At the VLA the detection rate is 8/19 or 42%. The small range in the observed peak flux densities suggests that our ability to detect radio afterglows is mainly limited by the sensitivity of the telescopes (VLA and ATCA). * Of the 23 X-ray afterglows, ten have been detected at optical wavelengths (XOR + XO), a rate of 43%. The detection rate of well-localized GRBs is comparable at optical and radio wavelengths. * There exists a growing class (XR) of “dark” GRBs which have X-ray and radio afterglows but no known optical afterglow. These may represent an important group of GRBs whose optical emission is extincted by dust.
## 1 Introduction Revealing the mechanism which operates in Yang-Mills theories for confining quarks to color singlet particles is one of the most important tasks of modern quantum field theory. A knowledge of this mechanism will help to understand nuclear forces from first principles and will, e.g., have a strong impetus on the present understanding of hadron physics. In the last twenty years of active research, many proposals have been launched to explain quark confinement (see [1-7] for an incomplete list). Nowadays large scale computer simulations assign top priority to the so-called color superconductor mechanism. This mechanism applies in the Abelian gauges, which allow for a residual U(1) gauge degree of freedom. Projection is performed to reduce the full SU(2) to compact U(1) gauge theory. After this projection, monopoles which carry quantized color-magnetic charges with respect to the residual U(1) group naturally appear as degrees of freedom. The color-superconductor mechanism operates as follows: a condensation of these monopoles implies a (dual) Meissner effect. Color-electric flux is squeezed into flux tubes implying that the potential between two static color sources is linearly rising at large distances. Modern computer facilities provide the testing grounds: evidence has been accumulated that the dual superconductor picture captures parts of the roots of quark confinement. Unfortunately, the color-superconductor picture suffers from the following drawback: color states which are neutral with respect to the residual U(1) gauge are insensitive to the condensate of the color magnetic monopoles and therefore do not acquire a confining potential. In this case, one would expect additional ”light” states in the particle spectrum besides hadrons and glueballs. This contradiction with experiment requires a refinement of the dual superconductor picture. For this purpose, it was argued that all color-magnetic monopoles which are defined by different Abelian projection schemes condense while only those monopoles which correspond to the gauge fixing at hand are manifest. The idea of the condensation of ”hidden” monopole degrees of freedom might conceptually solve the ”neutral particle problem”, but conceals the non-Abelian nature of the superconductor mechanism. In this paper, we take a new look at the non-Abelian Meissner effect by refraining from a residual U(1) gauge group which is uniquely embedded into the SU(2) gauge group all over space-time. Instead, space-time is decomposed into regions in each of which the embedding of the residual U(1) group into the SU(2) gauge group is chosen to yield the minimal error by a subsequent projection. Particles which are ”neutral” with respect to the U(1) subgroup in one particular region carry charge in another region. If the average over all configurations is performed during the Monte-Carlo sampling, all particles are confined on distances which exceed the intrinsic length scales of the regions. For putting this idea on solid grounds, we propose a new type of gauge (referred to as m-gauge) which appears as a generalization of the Maximal Abelian gauge (MAG). In the m-gauge, the orientation of the U(1) subgroup in SU(2) is specified by a unit-color vector $`\stackrel{}{m}`$ which depends on space-time. In a first step, we show by means of numerical calculations that the m-projected theory still bears quark confinement at full strength.
Secondly, for putting the m-gauge into a proper context, we investigate, by virtue of a gauge fixing parameter, a gauge fixing which smoothly interpolates between the MAG and the m-gauge. For a wide range of the gauge fixing parameters, the vacuum decomposes into regions of (approximately) aligned color vectors $`\stackrel{}{m}`$. The paper is organized as follows: in the next section, we will introduce the novel type of gauge fixing and the corresponding projection. The numerical errors induced by projection are studied for the case of MAG and the case of m-gauge, respectively. In section 3, we reveal the color ferromagnetic correlations of the vector $`\stackrel{}{m}`$ present in the m-gauge. The interpolating gauges are introduced and the vacuum structure in these gauges is discussed. Conclusions are left to the final section. ## 2 M-gauge ### 2.1 The new gauge fixing and projection A particular configuration of the SU(2) lattice gauge theory is represented by a set of link matrices $`U_\mu (x)\in SU(2)`$, which are transformed by a gauge transformation $`\mathrm{\Omega }(x)\in SU(2)`$ into $$U_\mu (x)\to U_\mu ^\mathrm{\Omega }(x)=\mathrm{\Omega }(x)U_\mu (x)\mathrm{\Omega }^{\dagger }(x+\mu ).$$ (1) Our new proposal for the gauge fixing condition is given by $`S_{\mathrm{fix}}`$ $`=`$ $`{\displaystyle \frac{1}{2}}{\displaystyle \sum _{\mu ,\{x\}}}\text{tr}\left\{U_\mu ^\mathrm{\Omega }(x)m(x)\left(U_\mu ^\mathrm{\Omega }\right)^{\dagger }(x)m(x)\right\}\to \text{maximum},`$ (2) $`m(x)`$ $`=`$ $`m^a(x)\tau ^a,\stackrel{}{m}^T(x)\stackrel{}{m}(x)=1,`$ where $`\tau ^a`$ are the SU(2) Pauli matrices. For a given configuration $`U_\mu (x)`$ we allow for a variation of the gauge matrices $`\mathrm{\Omega }(x)`$ and of the auxiliary unit vector $`m^a(x)`$ for maximizing the functional $`S_{\mathrm{fix}}`$. Note that a reflection of the vector, i.e., $`\stackrel{}{m}(x)\to -\stackrel{}{m}(x)`$, does not change the gauge fixing functional. Identifying the points $`\stackrel{}{m}`$ and $`-\stackrel{}{m}`$ of the sphere $`S_2`$ defines a projective space $`RP_2`$ which carries the gauge fixing information. Furthermore, $`S_{\mathrm{fix}}`$ is invariant under a multiplication of the gauge matrix $`\mathrm{\Omega }`$, under consideration, with a center element of the SU(2) gauge group, i.e., $`\mathrm{\Omega }\to (-1)\mathrm{\Omega }`$. This implies that the theory after gauge fixing possesses a residual $`Z_2`$ gauge invariance, at least. It turns out that a further invariance is unlikely to exist for generic link configurations $`U_\mu (x)`$ (see discussion in subsection 2.2). The concept of projection is to reduce the number of degrees of freedom while preserving the confining properties of SU(2) gauge theory. It might be easier in the reduced theory to reveal the mechanism of confinement than resorting to the full SU(2) gauge theory. In the present case, we define the projected links $`\widehat{U}_\mu (x)`$ by $$\widehat{U}_\mu (x):=N\left[U_\mu ^\mathrm{\Omega }(x)+m(x)U_\mu ^\mathrm{\Omega }(x)m(x)\right],$$ (3) where the normalization $`N`$ is obtained by demanding $`\widehat{U}_\mu (x)\widehat{U}_\mu ^{\dagger }(x)=1`$.
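To make the projection (3) concrete, here is a minimal numerical sketch (our own notation and code, not the authors'): an SU(2) link is built from a quaternion, $`m(x)`$ from a unit color vector, and the projected link is checked to be unitary and to commute with $`m`$, which is the content of the component formulas derived in the next paragraph.

```python
import numpy as np

tau = np.array([[[0, 1], [1, 0]],
                [[0, -1j], [1j, 0]],
                [[1, 0], [0, -1]]], dtype=complex)   # Pauli matrices

def su2(a0, avec):
    """U = a0 + i a.tau, with a0^2 + |a|^2 = 1."""
    return a0 * np.eye(2) + 1j * np.einsum('i,ijk->jk', avec, tau)

def m_project(U, mvec):
    """Eq. (3): U_hat = N [U + m U m], normalized back into SU(2)."""
    m = np.einsum('i,ijk->jk', mvec, tau)
    P = U + m @ U @ m
    return P / np.sqrt(np.linalg.det(P))   # N fixes U_hat U_hat^dag = 1

rng = np.random.default_rng(0)
q = rng.normal(size=4); q /= np.linalg.norm(q)      # random link
mvec = rng.normal(size=3); mvec /= np.linalg.norm(mvec)
U = su2(q[0], q[1:])
Uhat = m_project(U, mvec)
m = np.einsum('i,ijk->jk', mvec, tau)
print(np.allclose(Uhat @ Uhat.conj().T, np.eye(2)))  # True: unitary
print(np.allclose(m @ Uhat @ m, Uhat))               # True: only the
# components "parallel" to m survive the projection
```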
It is convenient for an illustration of the gauge fixing (2) and the projection (3) to decompose the link variable as $$U_\mu ^\mathrm{\Omega }(x):=a_\mu ^{(0)}(x)+i\stackrel{}{a}_\mu (x)\cdot \stackrel{}{\tau },\left(a_\mu ^{(0)}(x)\right)^2+\stackrel{}{a}_\mu ^2(x)=1\forall \mu ,x$$ (4) In this case, the gauge fixing condition $`S_{\mathrm{fix}}`$ (2) becomes $$S_{\mathrm{fix}}=\sum _{\mu ,\{x\}}\left\{\left(a_\mu ^{(0)}(x)\right)^2-\stackrel{}{a}_\mu ^2(x)+2\left(\stackrel{}{m}\cdot \stackrel{}{a}_\mu \right)^2\right\}\to \text{maximum}.$$ (5) Representing the vector $`\stackrel{}{a}_\mu (x)`$ by components parallel to $`\stackrel{}{m}(x)`$ and perpendicular to $`\stackrel{}{m}(x)`$, i.e., $`\stackrel{}{a}_\mu (x)=(a_\mu ^{\parallel },a_\mu ^1,a_\mu ^2)^T`$, the condition (5) is equivalent to $$S_{\mathrm{fix}}=\sum _{\mu ,\{x\}}\left\{1-2\left[\left(a_\mu ^1(x)\right)^2+\left(a_\mu ^2(x)\right)^2\right]\right\}\to \text{maximum}.$$ (6) This equation tells us that the gauge fixing (2) minimizes the link components $`\stackrel{}{a}_\mu (x)`$ perpendicular to the vector $`\stackrel{}{m}(x)`$. Inserting (4) in (3), one finds for the projected link variables $`\widehat{U}_\mu (x)`$ $`=`$ $`2N\left[a_\mu ^{(0)}(x)+i\left(\stackrel{}{m}(x)\cdot \stackrel{}{a}_\mu (x)\right)\stackrel{}{m}(x)\cdot \stackrel{}{\tau }\right]`$ (7) $`=`$ $`2N\left[a_\mu ^{(0)}(x)+ia_\mu ^{\parallel }(x)\stackrel{}{m}(x)\cdot \stackrel{}{\tau }\right].`$ (8) Projecting link configurations is a two-step process: firstly, one exploits the gauge degrees of freedom to minimize the link components $`a_\mu ^1(x)`$, $`a_\mu ^2(x)`$, perpendicular to $`\stackrel{}{m}(x)`$, and, secondly, these components $`a_\mu ^1(x)`$, $`a_\mu ^2(x)`$ are dropped for obtaining the projected link variable $`\widehat{U}_\mu (x)`$. For quantifying the error of this projection, we introduce $$\omega :=\frac{1}{N_{\mathrm{link}}}\left\langle S_{\mathrm{fix}}^{\mathrm{max}}[U]\right\rangle _U,$$ (9) where $`N_{\mathrm{link}}`$ is the number of lattice links and $`S_{\mathrm{fix}}^{\mathrm{max}}[U]`$ is the maximum value of the gauge fixing functional $`S_{\mathrm{fix}}`$ (2) for a given link configuration $`U_\mu (x)`$. The brackets in (9) denote the Monte-Carlo average over all link configurations. An inspection of the equations (6) and (8) tells us that projection yields the exact result if $`\omega `$ possesses the largest value possible, i.e., $`\omega =1`$. The error increases if $`\omega `$ decreases. Note that the space-time dependence of $`m(x)`$ relative to the link variables $`U_\mu ^\mathrm{\Omega }(x)`$ in the functional $`S_{\mathrm{fix}}`$ (2) is obtained by the demand for minimizing the error induced by projection. This demand dictates that $`m(x)`$ cannot be identified with a so-called Higgs field which figures in general Abelian gauges. For an illustration of this fact, let $`m(x)`$, $`\mathrm{\Omega }(x)`$ be the configurations which maximize $`S_{\mathrm{fix}}[U]`$ (2) for a given link configuration $`U_\mu (x)`$, and let us assume that the fields $`m(x)`$, $`\mathrm{\Omega }(x)`$ are uniquely defined (this is the generic case; see subsection 2.2). We introduce $`U_\mu ^V(x)`$ as the link variables which are obtained from $`U_\mu (x)`$ by the gauge transformation $`V(x)`$, and repeat the gauge fixing procedure with the $`U_\mu ^V(x)`$ as basis.
If $`m^V(x)`$, $`\mathrm{\Omega }^V(x)`$ denote the configurations which correspond to the maximum of $`S_{\mathrm{fix}}[U^V]`$, one finds $`\mathrm{\Omega }^V(x)=\mathrm{\Omega }(x)V^{\dagger }(x)`$ and $`m^V(x)=m(x)`$. The latter relation tells us that $`m(x)`$ only depends on the gauge invariant parts of the link variables $`U_\mu (x)`$, and therefore encodes physical information. By contrast, an auxiliary Higgs field transforms homogeneously under the gauge transformation $`V(x)`$. Let us compare our new gauge, defined by (2), with the Maximal Abelian gauge (MAG). The latter gauge can be obtained from the gauge condition (2) if one does not allow a variation of $`m(x)`$ with space-time. For constant vectors $`\stackrel{}{m}`$, one might choose $`\stackrel{}{m}`$ to point in the 3-direction in color space without a loss of generality. For an SU(2) gauge theory in four dimensions, one associates four link variables with each space-time point, and therefore counts $`12`$ degrees of freedom at each lattice site ($`9`$ physical and $`3`$ gauge degrees of freedom). The MAG projection effectively reduces the SU(2) gauge theory to a U(1) one. After projection, the number of degrees of freedom per site is therefore $`4`$. In the new gauge, presented here, naive counting yields four Abelian links and the unit vector $`\stackrel{}{m}(x)`$, i.e., $`6`$ degrees of freedom at each site. We finally comment on the Gribov problem to round out this subsection. Note that the following remarks also apply to the class of general Abelian gauges, and that, in particular, the practical problem in implementing the gauge is not a specialty of the new gauge proposed in the present letter. Although the gauge fixing starting from the condition (2) is conceptually free of Gribov ambiguities if one seeks out the global maximum of the functional $`S_{\mathrm{fix}}`$ (2), one recovers the Gribov problem in practice when the algorithm fails to detect the global maximum. In the context of variational gauges, several strategies have been proposed for evading this problem. One possibility is introducing a Laplacian version of the gauge fixing condition for adapting the problem to the numerical capabilities. Results are available in the literature for the case of Landau gauge, for the case of MAG and for the case of the center gauge. Another possibility is to introduce quantum gauge fixing for putting the gauge fixing which is implemented by the algorithm in the proper context. Here, we will not perform a detailed study of the ”practical” Gribov problem in the context of the new gauge (2). In the present first investigation, we will only check whether the numerical results (see next section) are stable against random gauge transformations on the link variables before invoking the gauge fixing algorithm. ### 2.2 Quality of projection Our numerical simulations were performed using the Wilson action and a lattice with $`12^4`$ space-time points. For $`\beta `$-values in the scaling window, i.e., $`\beta \in [2.1,2.5]`$, 200 heat-bath steps were performed for initialization. When gauge fixing is requested by the application of interest, we used a standard iterative procedure with over-relaxation for finding the maximum value of the gauge fixing functional $`S_{\mathrm{fix}}`$ (2). Once we have obtained configurations $`m(x)`$, $`\mathrm{\Omega }(x)`$ which maximize the functional $`S_{\mathrm{fix}}`$, we distort these configurations and redo the gauge fixing.
Repeating this procedure (for a given link configuration $`U_\mu (x)`$) several times, we find unique fields $`m(x)`$, $`\mathrm{\Omega }(x)`$ at the maximum of $`S_{\mathrm{fix}}`$. This provides numerical evidence that the (local) maximum of $`S_{\mathrm{fix}}`$ is stable against small gauge transformations, and that flat directions in the configuration space of $`m(x)`$, $`\mathrm{\Omega }(x)`$ do not exist for generic link configurations. A thorough study of the Faddeev-Popov determinant in the case of the m-gauge is required for a rigorous proof of this fact. This is left to future work. In a first investigation, we calculated the Creutz ratios $`\chi _{kk}`$ with the help of the expectation values of square Wilson loops of side length $`r=ka`$, where $`a(\beta )`$ is the lattice spacing and $`k`$ is an integer. It is convenient for the extrapolation to the continuum limit to introduce the scale $`\mathrm{\Lambda }`$ via $$\mathrm{\Lambda }^2=0.12\frac{1}{a^2(\beta )}\mathrm{exp}\left\{-\frac{6\pi ^2}{11}\left(\beta -2.3\right)\right\},$$ (10) which is a renormalization group invariant quantity when one-loop scaling applies in the asymptotic $`\beta `$-region. The normalization of $`\mathrm{\Lambda }^2`$ is chosen for reproducing the full SU(2) string tension, i.e. $`\mathrm{\Lambda }^2\simeq \sigma `$ for $`\beta \gtrsim 2.1`$. Figure 1 shows our numerical data for $`\chi _{kk}`$ in units of $`\mathrm{\Lambda }^2`$ as function of $`r\mathrm{\Lambda }`$. It turns out that the data of the simulation employing the full SU(2) Wilson action is best fitted by (solid line) $$\chi _{kk}\mathrm{\Lambda }^{-2}=\gamma _1+\gamma _2/r^2.$$ (11) This ansatz for $`\chi _{kk}`$ is expected by relating Creutz ratios to the derivative of the static quark potential. The second term of the latter equation refers to the Coulomb interaction while the first term is present due to a non-vanishing string tension. Figure 1 also shows the data points for $`\chi _{kk}`$ calculated from links $`\widehat{U}_\mu (x)`$ which are obtained from m-projection (see (7)). These data are contrasted with the result for $`\chi _{kk}`$ calculated with MAG projected links. In any case, the asymptotic, i.e. $`r\mathrm{\Lambda }\gg 1`$, value for the string tension is reproduced within the numerical accuracy. A striking feature of figure 1 is that the data from m-projected links agree with the full result for the sizes $`r`$ explored in figure 1. For a quantitative study of the error induced by projection, we calculated $`\omega `$ (9) for the case of the m-projection and the MAG projection, respectively (see table 1). One observes that $`\omega `$ is much bigger for the m-projection than for the case of MAG projection. In particular, the components of the (gauge fixed) links $`U^\mathrm{\Omega }`$ which are dropped by projection, i.e. $`U^\mathrm{\Omega }-\widehat{U}`$, are roughly $`5\%`$ in size while the generic error due to projection in the case of MAG is of order $`30\%`$. One therefore expects that not only the string tension but also other physical observables which are calculated with m-projected links are well described. On one hand this feature is highly desired for constructing effective theories which cover a wide span of low energy properties of Yang-Mills theory. On the other hand, the introduction of additional degrees of freedom (compared with the case of MAG) obscures those which are responsible for confinement.
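A short sketch of the scale setting of Eq. (10) and the fit form of Eq. (11); the values of $`\gamma _1`$, $`\gamma _2`$ below are placeholders, not fit results from this work.

```python
import numpy as np

def aLambda(beta):
    """a(beta)*Lambda from Eq. (10): (a*Lambda)^2 = 0.12 exp{-(6 pi^2/11)(beta-2.3)}."""
    return np.sqrt(0.12 * np.exp(-(6.0 * np.pi**2 / 11.0) * (beta - 2.3)))

def chi_fit(r_Lambda, gamma1=1.0, gamma2=0.5):
    """Eq. (11): string-tension plateau plus Coulomb-like 1/r^2 term."""
    return gamma1 + gamma2 / r_Lambda**2

for beta in (2.1, 2.3, 2.5):
    print(beta, aLambda(beta))
# a*Lambda shrinks by exp(-(3 pi^2/11)*0.2) ~ 0.58 per step of 0.2 in beta,
# i.e. the lattice spacing follows the one-loop scaling assumed below in Eq. (14).
```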
## 3 Color alignment in m-gauge ### 3.1 M-vector correlations Let us assume that we are investigating a physical observable which possesses a correlation length $`\xi `$ by comparing the full with the m-projected theory. If the color vector $`\stackrel{}{m}(x)`$ is uniquely oriented in a space-time domain of size $`l\gg \xi `$, one would recover the standard MAG scenario (provided that $`l`$ is bigger than the scale set by the critical temperature, i.e., $`l>0.7`$fm, for avoiding the Casimir effect). In particular, the dual superconductor picture would be expected to operate if the string tension is the quantity of interest. For investigating the existence of such domains, and, if so, for relating the infra-red physics in the m-gauge to the well studied physics in MAG, a thorough study of the space-time correlations of the vectors $`\stackrel{}{m}(x)`$ is highly desired. The space-time dependence of the gauge transformation $`\mathrm{\Omega }`$ which maximizes the functional $`S_{\mathrm{fix}}`$ (2) induces a correlation of the unit vectors $`\stackrel{}{m}(x)`$ in space-time. For revealing these correlations, we numerically calculated the probability distribution of finding a particular scalar product $`\eta `$ of two vectors $`\stackrel{}{m}`$ located at neighboring sites of distance $`a(\beta )`$. The raw data of this distribution are shown in the left panel of figure 2. A random distribution of vectors would correspond to $`dP/d\eta =1`$. We clearly observe a maximum of the probability distribution at $`\eta =1`$. We find a color ferromagnetic correlation between the color vectors $`\stackrel{}{m}(x)`$. The value at the maximum position increases for increasing $`\beta `$, i.e., for a decreasing distance $`a(\beta )`$ between the neighboring vectors. This indicates that $`\stackrel{}{m}(x)`$, which so far constitutes a lattice vector model, will become a smooth field in the continuum limit. For an interpretation of these results in the scaling limit $`a(\beta )\to 0`$, it is useful to parameterize the probability distribution as follows: $$\frac{dP}{d\eta }=1+c\mathrm{exp}\left\{-\frac{a(\beta )}{L}\right\}f(\eta ),$$ (12) where the function $`f(\eta )`$ satisfies without a loss of generality the constraints $$\int _0^1f(\eta )d\eta =0,\text{ }f(1)=1.$$ (13) The crucial finding is that the constants $`L`$ and $`c`$ as well as the function $`f(\eta )`$ are universal, i.e., independent of the renormalization point specified by $`\beta `$ (see figure 2 right panel and figure 3). Comparing the numerical data for $`a(\beta )/L`$ with the asymptotic one-loop $`\beta `$-dependence (dashed line in figure 3) $$a(\beta )\simeq 0.16\text{fm}\mathrm{exp}\left\{-\frac{3\pi ^2}{11}\left(\beta -2.3\right)\right\}$$ (14) (where a string tension $`\sigma =(440\mathrm{MeV})^2`$ was used as reference scale), the extrapolation to the continuum limit yields $$L=0.12(2)\text{fm},\text{ }c=0.50(6).$$ (15) Note that for observing color ferromagnetic domains a probability distribution $`dP/d\eta (\beta \to \mathrm{})`$ is required which diverges at $`\eta =1`$. However, our numerical data obtained in the scaling window $`\beta \in [2.1,2.5]`$ agree with a finite value for $`dP/d\eta `$ at $`\eta =1`$. Further numerical investigations (e.g. of the volume dependence of the distribution) are necessary for a definite conclusion on this issue.
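The universality claim of Eqs. (12)-(15) can be tested in practice along the following lines (a sketch with assumed data arrays): divide the measured $`dP/d\eta -1`$ at each $`\beta `$ by the suppression factor $`\mathrm{exp}\{-a(\beta )/L\}`$; universality of $`c`$ and $`f(\eta )`$ then means the rescaled curves collapse onto $`c\,f(\eta )`$.

```python
import numpy as np

def a_fm(beta):
    """Lattice spacing in fm, one-loop form of Eq. (14)."""
    return 0.16 * np.exp(-(3.0 * np.pi**2 / 11.0) * (beta - 2.3))

L_FM = 0.12   # continuum value of L from Eq. (15), in fm

def collapse(dP_deta, beta):
    """Rescale a measured histogram dP/deta onto the universal c*f(eta)."""
    return (dP_deta - 1.0) / np.exp(-a_fm(beta) / L_FM)

for beta in (2.1, 2.3, 2.5):
    print(beta, round(a_fm(beta), 3), round(a_fm(beta) / L_FM, 2))
# -> a = 0.274, 0.160, 0.093 fm, i.e. a/L = 2.28, 1.33, 0.78 across the window.
```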
To conclude this section: we find color ferromagnetic interactions between neighboring color vectors $`\stackrel{}{m}`$ which increase for decreasing distance $`a(\beta )`$, thus indicating that the vector field $`\stackrel{}{m}(x)`$ is smooth in the continuum limit. In the scaling limit, we find that these correlations extend over a range of roughly $`0.12`$fm. The color ferromagnetic interaction between the vectors is (most likely) not strong enough to induce the formation of color ferromagnetic domains in space-time. ### 3.2 Interpolating gauges Subsection 2.2 has demonstrated that the m-gauge is well adapted for projection. Unfortunately, the distribution of the auxiliary color vectors $`\stackrel{}{m}(x)`$ does not support an arrangement of these vectors in domains of constant orientation, therefore impeding an interpretation of the m-gauge as a local realization of MAG. For taking full advantage of the elaborated studies of physics in MAG, we generalize the m-gauge condition (2) for allowing a smooth interpolation between the MAG and the m-gauge by virtue of a gauge fixing parameter $`\kappa `$. The generalized gauge fixing action (we thank Torsten Tok for helpful discussions on useful extensions of $`S_{\mathrm{fix}}`$ (2)) is $`S_{\mathrm{fix}}`$ $`=`$ $`{\displaystyle \frac{1}{2}}{\displaystyle \sum _{\mu ,\{x\}}}\text{tr}\left\{U_\mu ^\mathrm{\Omega }(x)m(x)\left(U_\mu ^\mathrm{\Omega }\right)^{\dagger }(x)m(x)\right\}`$ (16) $`+`$ $`\kappa {\displaystyle \sum _{\mu ,\{x\}}}\left[\stackrel{}{m}(x)\cdot \stackrel{}{m}(x+\mu )\right]^2\to \text{maximum},`$ (17) Note that the additional term (17) also respects the reflection symmetry $`\stackrel{}{m}(x)\to -\stackrel{}{m}(x)`$. For $`\kappa =0`$, one recovers the m-gauge (2). For $`\kappa \gg 1`$, on the other hand, there is a large penalty in the action $`S_{\mathrm{fix}}`$ for non-uniformly oriented color vectors $`\stackrel{}{m}`$. One therefore obtains the MAG for sufficiently large $`\kappa `$. For quantifying the color ferromagnetic interaction strength, we introduce $$\mu :=\frac{1}{N_{\mathrm{link}}}\sum _{\mu ,\{x\}}\left|\stackrel{}{m}(x)\cdot \stackrel{}{m}(x+\mu )\right|.$$ (18) One finds $`\mu =1/2`$ for a random distribution of $`\stackrel{}{m}\in RP_2`$, and retrieves the MAG for $`\mu =1`$. Figure 4 shows our numerical results for the ”quality of projection”, i.e. $`\omega `$ (9), and $`\mu `$ as function of $`\kappa `$ for a $`12^4`$ lattice and for $`\beta =2.4`$. As expected, the strength parameter $`\mu `$ gradually increases with rising $`\kappa `$ while $`\omega `$ monotonically decreases. The minimal error by projection is obtained in the m-gauge ($`\kappa =0`$). The strength parameter $`\mu `$ at large values of $`\kappa `$ indicates that regions of uniformly oriented color vectors $`\stackrel{}{m}`$ form. Figure 4 also shows the fraction of vector pairs $`\stackrel{}{m}(x)`$, $`\stackrel{}{m}(x+\mu )`$ which possess a scalar product larger than $`0.95`$ ($`0.99`$). The data are obtained on a $`12^4`$ lattice and for $`\beta =2.4`$. For illustrating the regions of aligned color vectors at large values of $`\kappa `$, figure 5 presents the spatial orientation of the color vectors $`\stackrel{}{m}`$ for one Monte-Carlo sample at a given time slice. The sample was obtained for $`\kappa =0.6`$. A reference vector was chosen at the center of the spatial hypercube.
If the scalar product of a vector $`\stackrel{}{m}`$ located at the position $`x`$ with the reference vector exceeds $`0.95`$, an elementary cube which is spanned by the four points $`x`$, $`x+\mu `$, $`\mu =1,\mathrm{},3`$ is marked. One observes that a particular region of (approximately) aligned vectors $`\stackrel{}{m}`$ is multi-connected and extends all over the lattice universe. This property of the regions of alignment does not match its analog in solid state physics, i.e. the Weiss domains of ferromagnetism. ## 4 Conclusions In the Maximal Abelian gauge (MAG), a uniquely oriented color vector $`\stackrel{}{m}`$ defines the embedding of the residual U(1) into the SU(2) gauge group. Evidence has been accumulated that in this case an (Abelian) dual Meissner effect confines particles which carry color-electric charge with respect to the U(1) subgroup. Since colored states which are, however, neutral from the viewpoint of the U(1) gauge group escape the confining forces provided by the dual superconductor mechanism, a refinement, i.e., a non-Abelian version, of the dual Meissner effect is highly desired. The concept of ”hidden monopoles” is one possibility. By generalizing the MAG gauge condition, we have here proposed another possibility for a non-Abelian version of the dual Meissner effect. The new gauge (m-gauge) admits a space-time dependent embedding, characterized by the color vector $`\stackrel{}{m}(x)`$, of the residual U(1) into the SU(2) gauge group. The space-time dependence of $`\stackrel{}{m}(x)`$ is self-consistently chosen to achieve the minimal error induced by projection. It turns out that the color vector $`\stackrel{}{m}(x)`$ does not change under ”small” gauge transformations of the link variable. Thus, the field $`\stackrel{}{m}(x)`$ carries gauge invariant information encoded in the link variables. Our numerical results show color ferromagnetic correlations of these vectors $`\stackrel{}{m}`$ which extend over a range of $`0.12(2)`$fm. The strength of these correlations seems to be too small for causing the formation of color ferromagnetic domains. For relating the m-gauge to the MAG, we have introduced a class of gauges which smoothly interpolates between the MAG and the m-gauge by virtue of a gauge fixing parameter $`\kappa `$. For a wide span of $`\kappa `$, the vacuum decomposes into multi-connected regions which are characterized by uniquely oriented vectors, and which extend all over the lattice universe. The internal structure of these regions defines an intrinsic length scale $`l_0`$. Each region bears the potential of an Abelian Meissner effect which operates with respect to the residual U(1) subgroup of SU(2). Colored states which do not feel a confining force in one particular region generically carry charge in another sector of space-time. We speculate that, on performing the Monte-Carlo sampling, all colored states are confined on length scales bigger than the intrinsic size $`l_0`$ of the regions of color alignment. Note that the size $`l_0`$ is controlled by the gauge parameter $`\kappa `$. The requirement that the average size be a physical quantity defines the ”running” of the gauge parameter, i.e., the function $`\kappa (\beta )`$. The actual size of the regions of color alignment in physical units then defines the renormalized value $`\kappa _R`$, which must be fixed by a renormalization condition.
In summary, for a class of gauge conditions specified by a gauge parameter $`\kappa >0`$, the vacuum consists of regions of aligned color vectors $`\stackrel{}{m}(x)`$. The m-gauge appears as the limiting case $`\kappa =0`$. In this case, the error induced by projection is minimal at the expense of additional degrees of freedom as compared with the MAG. This fact renders the identification of the degrees of freedom relevant for quark confinement more difficult than in the MAG, but makes the m-gauge a convenient starting point for formulating an effective theory covering a wide span of low energy properties of SU(2) Yang-Mills theory. Acknowledgments: We gratefully acknowledge helpful discussions with M. Engelhardt, M. Quandt, H. Reinhardt and T. Tok. We are indebted to H. Reinhardt for support.
# Incommensurate Geometry of the Elastic Magnetic Peaks in Superconducting La1.88Sr0.12CuO4 ## Acknowledgments We acknowledge K. Machida, T. Imai, H. Fukuyama, S. Maekawa, T. Tohyama, J. M. Tranquada, and G. Aeppli for valuable discussions. This work was supported in part by a Grant-In-Aid for Scientific Research from the Japanese Ministry of Education, Science, Sports and Culture, and by a Grant for the Promotion of Science from the Science and Technology Agency and also supported by CREST and the US-Japan cooperative research program on Neutron Scattering. Work at Brookhaven National Laboratory was carried out under contract No. DE-AC02-98CH10886, Division of Material Science, U. S. Department of Energy. Present address: Research Institute for Scientific Measurements, Tohoku University, Katahira 2-1-1, Aoba-ku, Sendai 980-8577, Japan. Present address: Institute for Materials Research, Tohoku University, Katahira 2-1-1, Aoba-ku, Sendai 980-8577, Japan. Present address: Center for Neutron Research, NIST, Gaithersburg, MD 20899.
# Universal Distributions for Growth Processes in 1+1 Dimensions and Random Matrices ## Abstract We develop a scaling theory for KPZ growth in one dimension by a detailed study of the polynuclear growth (PNG) model. In particular, we identify three universal distributions for shape fluctuations and their dependence on the macroscopic shape. These distribution functions are computed using the partition function of Gaussian random matrices in a cosine potential. Growth processes lead to a rich variety of macroscopic patterns and shapes. As has been recognized for some time, growth may also give rise to intriguing statistical fluctuations comparable to thermal fluctuations at a critical point. One of the most prominent examples is the Kardar-Parisi-Zhang (KPZ) universality class. In essence one models a stable phase which grows into an unstable phase through aggregation, as for example in Eden type models where perimeter sites of a given cluster are filled up randomly. In real materials, mere aggregation is often too simplistic an assumption and one would have to take other dynamical modes, such as surface diffusion, at the stable/unstable interface into account. In our letter we remain within the KPZ class. From the beginning there has been evidence that in one spatial dimension KPZ growth processes are linked to exactly soluble models of two-dimensional statistical mechanics. Kardar mapped growth to the directed polymer problem. The replica trick then yields the Bose gas with attractive $`\delta `$–interaction which in one dimension can be solved through the Bethe ansatz. In further work, considerably generalized subsequently, the statistical weights for the local slopes of a particular discrete growth model were mapped onto the six vertex model. To solve the six vertex model one diagonalizes the transfer matrix, again, through the Bethe ansatz, which also allows for a study of finite size scaling. Unfortunately none of these methods go beyond what corresponds to the free energy in the six vertex model and the associated dynamical scaling exponent $`\beta =1/3`$. In this letter we point out that within the KPZ universality class the polynuclear growth (PNG) model plays a distinguished role: it maps onto random permutations, the height being the length of the longest increasing subsequence of such a permutation, and thereby onto Gaussian random matrices. We use these mappings to obtain an analytic expression for certain scaling distributions, which then leads to an understanding of how the self-similar height fluctuations depend on the initial conditions and to a more refined scaling theory for KPZ growth. PNG is a simplified model for layer by layer growth. One starts with a perfectly flat crystal in contact with its super-saturated vapor. Once in a while a supercritical nucleus is formed, which then spreads laterally by further attachment of particles at its perimeter sites. Such islands coalesce if they are in the same layer and further islands may be nucleated upon already existing ones. The PNG model ignores the lateral lattice structure and assumes that the islands are circular and spread at constant speed. The nucleation rate and the lateral speed can be set to one by the appropriate choice of space-time units. We specialize to a one-dimensional surface, returning to higher dimensions at the end. The height, $`h(x,t)`$, at time $`t`$ above the point $`x`$ on the substrate is counted in lattice spacings.
The upward steps of $`h`$ move deterministically with velocity $`-1`$, the downward steps with velocity $`+1`$, and they annihilate upon touching. Through a nucleation event at $`(x,t)`$, randomly in space-time, $`h`$ increases at $`x`$ by one unit thereby creating a new up-down pair of steps. To explain the mapping from PNG to permutations it is convenient to first use a droplet geometry, where a single island starts spreading from the origin and further nucleations take place only above this ground layer. The initially flat substrate and other initial conditions will be handled along the lines of this blueprint. We want to compute the height $`h(x,t)`$ of the droplet. Clearly it is determined by the set of nucleation events inside the rectangle $`R_{(x,t)}=\{(x^{\prime },t^{\prime }):|x^{\prime }|\le t^{\prime }\text{ and }|x-x^{\prime }|\le t-t^{\prime }\}`$. In lightlike coordinates, $`r=(t^{\prime }+x^{\prime })/\sqrt{2}`$, $`s=(t^{\prime }-x^{\prime })/\sqrt{2}`$, the rectangle $`R_{(x,t)}`$ equals $`[0,R]\times [0,S]`$ with $`R=(t+x)/\sqrt{2}`$, $`S=(t-x)/\sqrt{2}`$. We label the nucleation events as $`(r_n,s_n)`$, $`n=1,\mathrm{},N`$, such that $`0\le r_1<\mathrm{}<r_N\le R`$. The corresponding order in the second coordinate $`s`$, $`0\le s_{p(1)}<\mathrm{}<s_{p(N)}\le S`$, defines then a permutation $`p`$ of length $`N`$, compare with Fig. 1. There is a simple rule for determining the number of the layer in which each nucleation event is located. Points in layer $`1`$ are obtained by scanning the permutation $`(p(1),\mathrm{},p(N))`$ from left to right and marking all those entries which are smaller than the smallest seen so far. After deleting the subsequence of the first layer, the second layer is obtained by repeating this construction. One marks those of the remaining entries of the permutation which are in decreasing order. At the end the permutation $`p`$ has been subdivided into decreasing subsequences. In the example of Fig. 1 we have the permutation $`(4,7,5,2,8,1,3,6)`$. The first decreasing subsequence is $`(4,2,1)`$, corresponding to the nucleation events in the first layer. The remaining subsequences are $`(7,5,3)`$ and $`(8,6)`$. The height $`h(x,t)`$ is the number of these subsequences and therefore the length of the longest increasing subsequence of $`p`$. In a dual picture one draws a directed path from $`(0,0)`$ to $`(x,t)`$, joining nucleation events by straight lines, with the restriction that both coordinates $`r`$ and $`s`$ are increasing along the path. Equivalently, the path must be in the forward light cone at each nucleation event. This is the celebrated directed polymer, here in the context of the PNG model. If to each path we assign, as its negative energy, the number of nucleation centers traversed, then $`h(x,t)`$ equals the ground state energy of the directed polymer. The PNG model is thus in the strong coupling regime. The nucleation events have density one and are independently and uniformly distributed in the rectangle $`R_{(x,t)}`$ with area $`\lambda =(t^2-x^2)/2`$. This induces a Poisson distribution for the length, $`N`$, of the permutation, $`\text{Prob}\{N=n\}=e^{-\lambda }\lambda ^n/n!`$, and for a given length $`n`$ each permutation has the same probability, namely $`1/n!`$. Thus the problem of computing the distribution of the height $`h(x,t)`$ is converted into determining the statistics of the length, $`l`$, of a longest increasing subsequence of a random permutation.
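The layer rule and the height-LIS correspondence can be made concrete with a short sketch (ours, not from the paper): patience sorting produces exactly the greedy decreasing subsequences described above, and the number of piles is the length of a longest increasing subsequence, hence the height. The same routine samples the Poissonized length statistics used below.

```python
from bisect import bisect_left
import numpy as np

def png_layers(perm):
    """Greedy decreasing subsequences of perm (= PNG layers).

    The number of piles equals the length of a longest increasing
    subsequence, i.e. the droplet height h(x,t)."""
    piles, tops = [], []              # tops[i] = current top of pile i
    for v in perm:
        i = bisect_left(tops, v)      # leftmost pile whose top exceeds v
        if i == len(piles):
            piles.append([v]); tops.append(v)
        else:
            piles[i].append(v); tops[i] = v
    return piles

# The example of Fig. 1: permutation (4,7,5,2,8,1,3,6)
layers = png_layers([4, 7, 5, 2, 8, 1, 3, 6])
print(layers)          # [[4, 2, 1], [7, 5, 3], [8, 6]]
print(len(layers))     # 3 = h(x,t) = length of a longest increasing subsequence

# Length statistics for Poissonized random permutations: for large lam the
# rescaled variable (l - 2*sqrt(lam))/lam**(1/6) approaches a fixed distribution.
rng = np.random.default_rng(0)
lam = 400.0
chi = [(len(png_layers(rng.permutation(rng.poisson(lam))))
        - 2.0 * np.sqrt(lam)) / lam**(1.0 / 6.0) for _ in range(2000)]
print(np.mean(chi), np.var(chi))
```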
Since to leading order $`h(x,t)\sim t`$, $`l`$ must be of order $`\sqrt{\lambda }`$ and the fluctuations should be of order $`\lambda ^{1/6}`$, if we accept $`\beta =1/3`$ for KPZ growth in $`1+1`$ dimension. The same construction can be carried out for an initially flat substrate. By translation invariance, it suffices to study $`H(t)=h(0,t)`$. The rectangle $`R_{(0,t)}`$ is now replaced by the triangle $`T_t=\{(x^{\prime },t^{\prime }):|x^{\prime }|\le t-t^{\prime },t^{\prime }\ge 0\}`$. To relate to the directed polymer we add the mirror image relative to $`t=0`$, including the nucleation events, to obtain the square $`R_t=\{(x^{\prime },t^{\prime }):|x^{\prime }|\le t-|t^{\prime }|\}`$. Then $`2H(t)`$ equals again the ground state energy of the directed polymer from $`(0,-t)`$ to $`(0,t)`$. However the statistics of nucleation centers inside $`R_t`$ is constrained to satisfy the reflection symmetry relative to $`t=0`$. For a random permutation with Poisson distributed length $`N`$, $`\langle N\rangle =\lambda `$, the length $`l`$ of the longest increasing subsequence satisfies the amazing identity $$\text{Prob}\{l\le m\}=e^{-\lambda }\int _{m\times m}dU\mathrm{exp}\left(\sqrt{\lambda }\text{Tr}(U+U^{-1})\right),$$ (1) where the integration is uniformly over all $`m\times m`$ unitary matrices. A proof can be found for example in the literature. The partition function in (1) appeared before in the context of quantum gravity and has a third order phase transition at $`m\simeq 2\sqrt{\lambda }`$ with finite size scaling governed by the Painlevé II equation. Baik *et al* prove that $`l\simeq 2\sqrt{\lambda }+\lambda ^{1/6}\chi _2`$ for large $`\lambda `$, where $`\chi _2`$ is a random variable distributed according to the GUE Tracy-Widom distribution, i. e. the distribution of the largest eigenvalue of a complex hermitian random matrix. One has $`\text{Prob}\{\chi _2\le x\}=F_2(x)=e^{g(x)}`$, where $`g^{\prime \prime }(x)=-\text{u}(x)^2`$, $`g(x)\to 0`$ as $`x\to \mathrm{}`$, and $`\text{u}(x)`$ is the global positive solution of the Painlevé II equation $`\text{u}^{\prime \prime }=2\text{u}^3+x\text{u}`$. Its asymptotics are $`\text{u}(x)\sim \sqrt{-x/2}`$ for $`x\to -\mathrm{}`$ and $`\text{u}(x)\sim \text{Ai}(x)`$ for $`x\to \mathrm{}`$, $`\text{Ai}(x)`$ the Airy function. To translate to the PNG model we introduce the growth velocity $`v(u)`$, depending on the macroscopic slope $`u=h/x`$, and the static roughness $`A(u)`$, which for PNG are $`v(u)=\sqrt{2+u^2}`$, $`A(u)=\sqrt{2+u^2}`$ in our units. Then $$h(v^{\prime }(u)t,t)\simeq \left(v(u)-uv^{\prime }(u)\right)t+(\frac{1}{2}v^{\prime \prime }(u)A(u)^2t)^{1/3}\chi _2$$ (2) in the limit of large $`t`$. We emphasize that all nonuniversal factors are given through the model dependent quantities $`v(u)`$, $`A(u)`$ and remark that (2) is also confirmed by the rigorous result of Johansson for a discrete growth model equivalent to the totally asymmetric simple exclusion process. For the flat substrate one might expect to have the same fluctuation law as for the droplet, since in both cases the mean curvature vanishes on a microscopic scale. A result of Baik and Rains tells us however that the fluctuations are GOE. More precisely, there is a similar formula for $`\text{Prob}\{l\le m\}`$ as (1) in the case that the random permutation $`p`$ is reflection symmetric relative to the anti-diagonal, $`p(N+1-p(k))=N+1-k`$. The asymptotic analysis then yields $`l\simeq 4\sqrt{\lambda }+(2\lambda )^{1/6}\chi _1`$, where $`\chi _1`$ is distributed as the largest eigenvalue of a real symmetric random matrix.
Translated to the surface this means $$H(t)=\sqrt{2}t+(t/\sqrt{2})^{1/3}\chi _1.$$ (3) One has $`\text{Prob}\{2^{2/3}\chi _1\le x\}=F_1(2^{2/3}x)=e^{(f(x)+g(x))/2}`$, $`g(x)`$ as above and $`f^{\prime }(x)=\text{u}(x)`$, $`f(x)\to 0`$ for $`x\to \mathrm{}`$. The distributions of $`\chi _2`$ and $`\chi _1`$ are plotted in Fig. 2. Superimposed are Monte Carlo data for the PNG model, which differ noticeably from the analytical curves only in the tails, where the statistics become poor. We conclude that the droplet and the flat substrate have the same scaling form but distinct universal distributions. The flat substrate, although used in many simulations, and the droplet are rather special as initial conditions. From a statistical mechanics point of view stationary growth would be regarded as singled out, which for PNG corresponds to initial conditions where the up and down steps are random with densities $`\sqrt{2}`$ each. Physically, another natural initial condition is to have a staircase configuration representing a tilted surface. In addition we could have sources, for example additional nucleation events at the origin. The mapping to the directed polymer works as before. Our crucial observation is that such other initial conditions translate in essence to defect lines and/or boundary potentials for the directed polymer. To illustrate, we discuss only one special geometry. As for the droplet we consider random nucleations of density one in the square $`R_{(0,t)}`$. In addition there are random nucleations at the two lower edges $`\{s=0\}`$ and $`\{r=0\}`$ with constant line densities $`\rho _+`$, resp. $`\rho _-`$. Thus the path of minimal energy, with starting point at $`(0,0)`$, sticks for a while at one of the two edges and then enters the bulk to reach $`(0,t)`$ eventually. If $`\rho _+<1`$, $`\rho _-<1`$, it does not pay to stay at the edges, and from the bulk we have GUE energy fluctuations according to (2). On the other hand if $`\rho =\mathrm{max}\{\rho _+,\rho _-\}>1`$, the optimal path stays for a length $`t(1-1/\rho ^2)/\sqrt{2}`$ at the edge with the higher density. Since the edge events are random, the $`t^{1/3}`$ bulk fluctuations are dominated by the Gaussian $`\sqrt{t}`$ edge fluctuations. Parenthetically we remark that for regularly spaced edge points one should recover GOE. The length distribution along the critical lines $`\rho _+=1`$, $`\rho _-<1`$, resp. $`\rho _+<1`$, $`\rho _-=1`$, was identified by Baik and Rains, in a generalization of the earlier techniques. They obtain GOE² fluctuations, i.e. the distribution of the maximum of two independent GOE random variables. The path of minimal energy stays for a length of order $`t^{1/3}`$ at the density one edge. At the critical point $`\rho _+=\rho _-=1`$, the polymer has a choice between the left and right edge. By a limiting procedure one obtains the universal distribution for the energy fluctuations, $`F_0(x)=\text{Prob}\{\chi _0<x\}`$, with $$F_0(x)=\left[1-(x+2f^{\prime \prime }+2g^{\prime \prime })g^{\prime }\right]e^{g+2f}.$$ (4) An interpretation in terms of the eigenvalue distribution of random matrices has yet to be found. In Fig. 2 we plot the distribution of $`\chi _0`$. Superimposed are simulation data for the PNG model, taken before the analytic result had been obtained. The first four moments of $`\chi _j`$, $`j=0,1,2`$, are listed in Table I. Of interest are also the asymptotics of the probability densities $`F_j^{\prime }(x)`$.
From Painlevé II we obtain $`\mathrm{log}F_j^{\prime }(x)\simeq -c_j|x|^3/12`$ for $`x\to -\mathrm{}`$ and $`\mathrm{log}F_j^{\prime }(x)\simeq -d_jx^{3/2}/3`$ for $`x\to \mathrm{}`$, up to logarithmic corrections, with prefactors $`c_j=1,2,1`$ and $`d_j=2,4,4`$ for $`j=0,1,2`$, respectively. We have to translate back to surface growth. For stationary growth with zero slope, in the space-time picture, the height lines cross the forward light cone with the densities $`\rho _+=1=\rho _-`$ and the intersection points are Poisson distributed. Thus for the directed polymer with edge densities the critical point is precisely stationary growth with zero slope. If $`\rho _+\rho _-=1`$, $`\rho _+\ne 1`$, we have also stationary growth but now with slope $`u=(\rho _--\rho _+)/\sqrt{2}`$. As argued already the fluctuations along the line $`x=0`$ are then Gaussian $`\sqrt{t}`$. For the $`t^{1/3}`$ fluctuations one has to record height differences along the line $`\{x=v^{\prime }(u)t\}`$, as can be seen from a similarity transformation. In Fig. 3 we illustrate the macroscopic shape for general boundary sources. If $`\rho _+=0=\rho _-`$, we have the droplet discussed before. Nonzero boundary sources enforce flat segments tangential to the droplet shape. The profile at $`x=0`$ is curved for $`\rho _+<1`$, $`\rho _-<1`$, flat otherwise, the marginal case corresponding to the critical lines. Our detailed study of the PNG model suggests the following scaling theory for all growth models in the KPZ universality class. First of all we require a self-similar macroscopic shape. Locally this leaves only two possibilities, either a flat piece or a curved piece with a shape determined through the slope dependent growth velocity. We draw a ray from the center of symmetry. If the surface at the point of intersection with the ray has non-zero curvature, then the height fluctuations in this direction are GUE with scaling form (2). If the curvature is zero, we have to know the roughness of the initial conditions, i. e. $`|h(x,0)-h(0,0)|\sim |x|^\alpha `$ with roughness exponent $`\alpha `$, and/or the corresponding roughness for boundary sources. If $`\alpha =0`$ the height fluctuations are GOE and the general scaling form is as in (2) with $`\chi _2`$ replaced by $`\chi _1`$. If $`\alpha =1/2`$ the height fluctuations are Gaussian with variance proportional to $`t`$, except along the line $`\{x=v^{\prime }(u)t\}`$, where they again have the scaling form (2) with the random variable $`\chi _2`$ replaced by $`\chi _0`$, as defined in (4). The intermediate cases $`0<\alpha <\frac{1}{2}`$ have not been studied systematically. Also the fluctuations at the endpoints of flat pieces have still to be classified. There are two exceptions. One is the case of Fig. 3, which has GOE², and the second one is the half-droplet with an external source at $`x=0`$. Translating to PNG one finds GOE, GSE, and Gaussian depending on the strength of the source. Our constructions carry over immediately to higher dimensions, as can be seen most directly in the polymer picture. The square is replaced by a $`(d+1)`$–dimensional (hyper)cube with uniformly distributed nucleation centers, the polymer running from the lower to the upper tip. For the PNG model this corresponds to droplet growth with islands having the shape of a regular simplex (a triangle in $`2+1`$, a tetrahedron in $`3+1`$, and so on). One axis of the cube defines the order $`1,\mathrm{},N`$, while the remaining $`d`$ axes define the permutations $`p_i(1),\mathrm{},p_i(N)`$, $`i=1,\mathrm{},d`$.
Increasing means now increasing in all coordinates, i.e. $`j<j^{\prime }`$ and $`p_i(j)<p_i(j^{\prime })`$ for all $`1\le i\le d`$. The length of the longest increasing subsequence equals, again, the height of the droplet. At present we study numerically the statistics of this length with the goal of obtaining more precise information on scaling than previous investigations. In conclusion, we have obtained distinct scaling functions for the PNG model, which depend on the choice of initial conditions. By universality we argued that from the knowledge of the self-similar curvature one can infer the type of height fluctuations. It would be of interest to study also joint probability distributions of the height at distinct space-time points. Perhaps such a program could identify the universal field theory hiding behind KPZ growth in one dimension. We are grateful to J. Baik and E. M. Rains for making their results available to us prior to publication and to C. Tracy for help on Painlevé II.
# RUHN-99-7 Summary Talk at Chiral ’99 (Talk given at Chiral ’99, Sept 13-18, 1999, Taipei, Taiwan.) ## I Introduction By a rough count this was the third in the Chiral’XX series of conferences started in Rome in 1992. I guess that a summary ought to first reorder points made by various speakers by topics and then try to abstract generally accepted conclusions and identify issues on which agreement is lacking. As for the first step, the data was subjected to severe cuts: there were several very interesting talks outside the narrow topic of massless fermions on the lattice which I shall not mention. From the talks that do concern massless lattice fermions I shall pick only what I think I understood; this is a major cut. I apologize in advance for omissions and misunderstandings. The coarsest classification of the topics is into two classes: * Chiral gauge theories. * Vector-like theories with global chiral symmetries. ## II Chiral gauge theories Let’s walk through a list of issues of principle on which I shall present a status report and, at times, my personal opinion in a different font. * There exists no complete construction of asymptotically free chiral gauge theories where the symmetry that is gauged is perturbatively non-anomalous. * There is a disparity in beliefs on whether we have passed the point of “physical plausibility”. By this I mean that, as physicists, we have established so many features that the remainder of the problem can be “shipped over” to mathematical physics, where in due time (hopefully $`<\mathrm{\infty }`$) all hairy technicalities will be nailed down. But, we no longer have serious doubts about the outcome. Most of us would agree, for example, that the RG framework is far beyond physical plausibility. Nevertheless there is no mathematical proof beyond perturbation theory that there always exists a hierarchy of fixed points ordered by degrees of stability with appropriate connecting flows, etc. My opinion is: * The older approaches still are below the point of “physical plausibility”. On the other hand, the new approach is past the point of “physical plausibility”. I think many of us here disagree on this assessment. * There exists only one new approach. It is obvious, even if not represented at this conference, that there are some workers worldwide that would disagree with this. * I think that most criticisms of the new approach are rooted in the difficulty of making the new approach look completely conventional. ### II-1 Unconventional features of the new approach The new approach is unconventional in that the chiral fermion determinant is (at the first step at least) not gauge invariant, but the fermion propagator is gauge covariant. This implies that the fermion determinant and the fermion propagator are not related in the conventional manner. In the continuum this issue also exists although it is hidden behind the overall formal character of the path integral formulation. Fujikawa, in his work on anomalies, associated this feature with the fermion integration measure rather than with the determinant, but this separation is artificial because we see only the product of the “measure” and the fermion determinant, at least to any order in perturbation theory. Nevertheless Fujikawa’s view contains a deep insight, not so much in the terminology, but because it tells us precisely what I just mentioned above: the fermion propagators are well behaved under gauge transformations, only the fermion determinant is not so (in the anomalous case).
In diagrams this means that anomalies only come from triangular fermion loop insertions, and when phrased in this way it sounds less surprising. But, on the lattice there is no such thing as an integration measure for fermions: there are no infinities and Grassmann integration has nothing to do with measures. So, on the lattice one must do something somewhat unconventional to make the fermion determinant break gauge invariance while the fermion propagator does not. In the continuum, when anomalies cancel, we can get rid of the gauge violation in the fermion determinant and we might expect a totally conventional formulation to hold. There are some conjectures about how to ultimately achieve this on the lattice, but nobody has done it yet. I think that to actually achieve this in full detail will end up having been unnecessary. * The new approach requires us to choose bases in subspaces of a finite (if the lattice is finite) dimensional vector space. This choice depends on the gauge background. The definition of the spaces is gauge covariant but the choice of bases is not. In my opinion * the ambiguity in phase choice that results from the above is best interpreted as a descendant from an ambiguity in an underlying path integral of conventional appearance but over an infinite number of fermion lattice fields. There is no doubt that this is a possible interpretation (could the Heat Kernel approach of provide another interpretation?), because the new construction has been obtained from a system containing an infinite number of fermions and integrating all but the lightest out. The effective theory governing the lightest fermion can be formulated directly, and then the infinite number of fermions picture is no longer necessary in the framework of Euclidean field theory. But, if one wishes to give some argument for why the theory should be unitary after taking the continuum limit and subsequently analytically continuing to real time, the single known way to date is to go back to the infinite number of fermions language, where one has a familiar form of lattice unitarity, at least at a formal level. * It is at the stage of making the phase choice that the obstructive role of anomalies shows up. It is also at this stage that possibly new obstructions could come in, “non-perturbative anomalies” . I believe that * no such problems will occur in many “good” theories, but I won’t exclude cases we would deem good today, but find out that they are bad tomorrow. Some complications in finite volume in two dimensions might contain a hint in this direction. It is important to emphasize that the fermions enter the action bilinearly. The bilinearity has significant consequences and the entire new approach is dependent on it. Bilinearity means that all one needs to know about the fermions is their propagator, the fermion determinant and the possible ’t Hooft vertices, all functions of the gauge background. In trivial topology, there are no ’t Hooft vertices to worry about, and bilinearity gives a simple prescription for the result of the integral over fermions for any set of fermionic observables. This is the content of Wick’s theorem. The extension to nontrivial topology with the help of inserting ’t Hooft vertices requires some extra functions (zero modes). If we have the propagators, the zero modes (when present) and the fermionic determinant, we know all there is to know, and whether we also employ an action and Grassmann integration is a matter of notation but not substance. 
What is unconventional for a lattice theory is that the fermion propagator does not fully define the fermion determinant. Just as in the Euclidean continuum, it does define the absolute value of the determinant. The phase of the determinant however needs to be determined separately. The main conceptual obstacle overcome by the overlap construction was concretely realizing this apparently paradoxical situation. ### II-2 Phase choice and fine tuning * What is missing at the moment in the asymptotically free context is a fully natural choice of the phase of the chiral determinant making it explicit that if anomalies cancel gauge invariance can be exactly preserved but, if they do not, such a choice cannot be made by locally changing some operators. But, we have some partial results: * If anomalies do not cancel one can show that a good definition of phase, at least within one framework, is impossible. * In the case of $`U(1)`$, if anomalies do cancel, at least in a rather formal infinite lattice setting, one can find a good definition of the phase of the chiral determinant. I believe that * the problem of finding a good phase is almost entirely a technical problem. I also believe that it is a hard technical problem, at least at finite volume. Let us now turn to the issue of fine tuning, which generated much discussion. First of all, even the concept of fine tuning isn’t perfectly well defined. I’ll adopt the following definition: fine tuning is the need to choose some functions of field variables which, when viewed as a series in elementary functions of fields, contain numerical coefficients that have to be of some exact value, with no deviations admitted. The numerical values of the coefficients are not directly determined by a symmetry principle. * The solution to the technical problem of phase choice, according to all conjectures and results to date, requires fine tuning somewhere. I believe that * if a solution to the technical problem exists, that solution defines a neighborhood, a region in coupling space, so that for any point in it the correct continuum limit will emerge after gauge averaging. So, you only need to be in a good neighborhood, not exactly at its center. This, in my definition, eliminates fine tuning, but we had some disagreements both on whether this can work and on whether, if it does work, it really is natural. The basic way this is pictured to work is that in the anomaly free case one can do a strong coupling type of expansion in the deviation from the ideal point in the center of the neighborhood. One cannot see this work in weak coupling perturbation theory. * Currently there is an effort to define the phase of the chiral determinant in a perfect way. Kikukawa’s work on the $`\eta `$-invariant , Lüscher’s attempts in the non-abelian case, including their respective conjectures, are all part of this effort. The conjectures I presented in my talk are an earlier, somewhat different attempt in the same direction. In my attempt I tried to restrict all fine tuning to gauge covariant operators, while in the newer way one fine tunes at the non-gauge covariant level. In practice I think one will need to rely on the existence of the “good neighborhood” and try to guess a phase choice residing in it. There is numerical evidence that the Brillouin-Wigner phase convention (maybe more appropriately termed the Pancharatnam convention), at least in two dimensions, provides a realistic possibility. 
### II-3 Future * A successful conclusion of any approach to find a perfect phase choice would constitute a significant result in mathematical physics. Some personal opinions: * I am not convinced that we need many people working on this. We should all be happy if this issue is taken out of the way by somebody. The likelihood that new physics would emerge from a full solution of this problem is not high. * Technically, things might simplify if one starts by considering more closely a mathematical construction directly at infinite lattice volume. ## III Vector-like gauge theories with massless fermions In this area there has been a substantial amount of progress recently and contributions have been both original and coming from many people. The activity here is closely connected to numerical QCD and therefore of potential importance to particle phenomenology. * I think in this area there are easier open problems. On the other hand there are no fundamental open issues even at the level of mathematical physics (like the phase choice in the chiral case). We can have confidence in the basic premise that we now know how to formulate QCD with exactly massless quarks on the lattice. ### III-1 Numerical QCD We have heard about two basic implementations of the new way to make fermions massless. * Domain Wall Fermions (DWF), the more traditional approach, were reviewed by Christ . * Overlap fermions, a bit newer, were discussed by Edwards, Liu and McNeile . What are the advantages of these new methods, when compared to employing Wilson fermions, say? * Small quark masses are attainable without exceptional penalties and without having to go to staggered fermions with the associated flavor identification difficulties. But, the price is still high. Actually, with DWF we only saw something like $`\frac{m_\pi }{m_\rho }\approx 0.5`$ while we really would like $`\frac{m_\pi }{m_\rho }\approx 0.25`$. To go so low a prohibitively large number of slices in the extra dimension seems to be required . On the other hand we heard a report of attaining $`\frac{m_\pi }{m_\rho }\approx 0.2`$ with overlap fermions . * My guess is that the overlap went to lower masses because of the so-called projection technique, which allows a numerically accurate representation of the sign function down to very small arguments. This could be done also with DWF, but would be costly, because the transfer matrix is more complicated than the Hermitian Wilson Dirac operator. It would be illuminating if DWF people were to test the projection method in their framework, if only to potentially identify the badness of their implicit approximation to the sign function at the origin as a possible source of the problems they encounter when trying to go to lower quark masses. * Related to my comment above, we have also seen first steps in the design of an HMC dynamical simulation method for overlap fermions incorporating the projection technique . * One has very clean lattice versions of topological effects and the related $`U(1)_A`$ problem. Both DWF and overlap work give very nice results. For example, we saw that indeed $`U(1)_A`$ is not restored at $`T>T_c`$ , that Random Matrix models work as expected also at non-zero topology and that the condensate $`\overline{\psi }\psi `$ behaves as expected . * It is potentially very advantageous to have a formulation where operator mixing is restricted just like in the continuum. This can provide substantial numerical progress on matrix elements. 
There are good previous results on the Kaon B-parameter and surprising new results on $`\frac{ϵ^{\prime }}{ϵ}`$ . * A natural question is then what can be done with the overlap in this context. There is a big factor difference in the machine sizes that are applied to DWF versus overlap, so we may have to wait for quite a while. * A cloud on the horizon has been discussed extensively . It has to do with the fact that the density of eigenvalues of the hermitian Wilson Dirac operator $`H_W`$ at zero seems not to vanish on the lattice at any coupling. This might indicate a serious problem since the definition of the overlap Dirac operator involves the sign function of $`H_W`$. The problem also directly affects DWF, making absurdly large numbers of slices necessary. The overlap permits a simpler fix. But, the problem isn’t serious so long as one works at fixed physical volume. In that case, taking the scaling law shown by Edwards , we immediately see that, in principle, going with the lattice $`\beta `$ to infinity at fixed physical volume will eliminate the low lying states of $`H_W^2`$. How to avoid the problem at low values of $`\beta `$, say $`5.85,6.0,6.2`$, is an open and practically important question. Several options were discussed, including changing the pure gauge action and changing the form of $`H_W`$. In this context there might be some relevance in the new exact bounds on the spectrum of $`H_W^2`$ which were not yet complete at the time of the conference. These bounds were derived also using eigenvalue flow equations. Such equations were emphasized by Kerler in his talk . * The main advantage of DWF over overlap fermions is the lower cost in dynamical simulations. It seems possible to combine the good features of DWF with those of overlap fermions using various tricks mentioned by Edwards . There are many possibilities and we should be imaginative. ### III-2 Non-QCD * Kaplan discussed DW formulations of SUSY theories with no matter. In the continuum, with $`𝒩=1`$ supersymmetry, the masslessness of the gaugino is known to imply supersymmetry at the renormalized level. * Going to higher $`𝒩`$ supersymmetries employing dimensional reduction might not work . * The fermion Pfaffian related to the lattice gluinos was shown to be non-negative, thus eliminating a potential thorny numerical problem . * Lower dimensional theories might provide interesting playgrounds . In particular some simple 3 dimensional gauge theories with massless fermions might have interesting symmetry breaking patterns. ### III-3 Ginsparg-Wilson Relation, Index * The Ginsparg-Wilson relation is an algebraic requirement best thought of in terms of Kato’s pair . We had some discussion about the GW-overlap equivalence and the role of the operator $`R`$ in the GW relation, see . * The following formula for the index is reminiscent of the continuum treatment of Fujikawa. $$\mathrm{Index}=Tr[sf(h^2)]$$ (1) where, $$h=\frac{1}{2}\left[\gamma _5+\mathrm{sign}(H_W)\right],s=\frac{1}{2}\left[\gamma _5-\mathrm{sign}(H_W)\right],h\equiv \gamma _5D_o,$$ (2) and $`f(0)=1`$. There might be some connection between this and Fujikawa’s talk here , which centered on the operator $`s`$ (the formula $`s=\gamma _5-h=\gamma _5(1-D_o)`$ is slightly different because of different conventions involving factors of two). * We saw an analytical calculation showing that the lattice reproduces the correct anomalies even in backgrounds which are non-trivial topologically . Previously, this has been checked only numerically and in two dimensions. 
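To make the algebra of Eqs. (1)-(2) concrete, here is a toy numerical check, a sketch rather than lattice code: with $`f1`$ the index formula reduces to $`Trs=Tr[\mathrm{sign}(H_W)]/2`$, since $`Tr\gamma _5=0`$. The matrices below are small random Hermitian stand-ins for $`\gamma _5`$ and $`H_W`$; for a genuine overlap operator the trace is an integer counting the zero modes of $`D_o`$.

```python
import numpy as np

# Toy check of Index = Tr[s f(h^2)] with f == 1: since Tr gamma_5 = 0,
# Tr s reduces to -Tr[sign(H_W)]/2. gamma_5 and H_W are random Hermitian
# stand-ins, not a lattice Dirac operator.
rng = np.random.default_rng(1)
gamma5 = np.diag([1.0, 1.0, -1.0, -1.0])
A = rng.normal(size=(4, 4))
H_W = 0.5 * (A + A.T)

w, V = np.linalg.eigh(H_W)
sign_HW = V @ np.diag(np.sign(w)) @ V.T      # matrix sign function

h = 0.5 * (gamma5 + sign_HW)
s = 0.5 * (gamma5 - sign_HW)
print(np.trace(s), -0.5 * np.trace(sign_HW))  # identical
```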
### III-4 Future There clearly is more to do and we have some good prospects for progress. On the numerical front further investigations of ways to implement the overlap Dirac operator, or of some equivalent object, are called for. While DWF are easy to visualize, and indeed produce, in the limit of an infinite number of slices, the sign function of $`\mathrm{log}T_W`$ where $`T_W`$ is a transfer matrix and $`\mathrm{log}T_W`$ is the same as $`H_W`$ up to lattice corrections, I see a danger in the concentration of large amounts of computer power on this one version of the new way to put fermions on the lattice. Once too many cycles are invested in DWF, better ways will get suppressed for a long time and, if any of the hints we are already seeing develop into serious obstacles, there will be no developed alternatives. This would cause delays in translating the beautiful theoretical progress we are witnessing into better practical number acquisition. In short, I urge DWF implementers to be more broad-minded; control over a large machine comes with a large responsibility. ## IV Conclusions It is rare that a subfield of theoretical physics solves one of its longstanding problems in a direct and “honest” way, rather than redefining it. Such a rare event has taken place in the context of lattice fermions. The solution may have implications for physics beyond the SM, because it is a way to fully regulate a chiral gauge theory, outside perturbation theory. This lattice theoretical development holds promise also for SM phenomenology because it could change substantially the methods of numerical QCD. At the moment there are some tensions in the field surrounding issues of priority and implementation. These problems would get solved if we had: * More imagination. * More young people. * More computing power. ## V Acknowledgments My research at Rutgers is partially supported by the DOE under grant \# DE-FG05-96ER40559. I wish to express my appreciation of the immense hospitality and great effort invested by the organizers of Chiral 99 in Taipei. In particular I think I speak for all of us when I profess the chiral community’s indebtedness to Ting-Wai Chiu for doing so much to produce an inspiring and enjoyable meeting.
# SCUBA observations of NGC 6946 Based on observations at the James Clerk Maxwell Telescope. JCMT is operated by The Joint Astronomy Centre on behalf of the Particle Physics and Astronomy Research Council of the United Kingdom, the Netherlands Organisation for Scientific Research, and the National Research Council of Canada. ## 1 Introduction Until recently, the main source of Far-Infrared (FIR) data for spiral galaxies has come from the IRAS satellite. The availability of instruments capable of observing at $`\lambda >100\mu `$m, like those onboard the satellite ISO, has allowed the detection of large amounts of cold dust, much colder than IRAS was able to detect. Spiral galaxies are found to have a dust content similar to that of the Galaxy (Alton et al. 1998a ). Cold temperatures (T$`<`$20K) can be reached by diffuse dust heated by the general interstellar radiation field, while dust close to star-forming regions is hotter (T$`>`$50K; Whittet WhittetBook1992 (1992)). Since diffuse dust is the main contributor to the internal extinction in a galaxy, observations of cold dust help to trace its opacity. Additionally, ISO observations have suggested that the dust distribution is more extended than the stellar disk (Alton et al. 1998a ; Bianchi, Davies & Alton 1999b ). If this is confirmed, observations of the distant universe may be severely biased, because of the large cross section of the dust disks. Unfortunately, the poor resolution (2′) of the current FIR images does not allow detailed studies of the spatial distribution of dust. Higher resolution can be achieved in the sub-mm, but a high sensitivity is required because of the fainter dust emission. High sensitivity and resolution are both characteristics of the recently developed SCUBA sub-mm camera. Only a few large nearby galaxies have been observed with SCUBA, notably the highly inclined galaxy NGC 7331 (Bianchi et al. BianchiMNRAS1998 (1998)) and the edge-on galaxy NGC 891 (Alton et al. 1998b ; Israel et al. IsraelA&A1999 (1999)). The observed dust emission is found to correlate well with the molecular gas phase, dominant in the centre. However, a dust component associated with the atomic gas is needed to explain the dust and gas column density at large galactocentric distance along the major axis of NGC 891 (Alton et al. AltonSub1999 (1999)). In this Letter we present SCUBA observations of the face-on galaxy NGC 6946. Because the galaxy is larger than the camera field of view, images have been produced with the scan-mapping technique, chopping within the observed field. The observations and the data reduction needed to restore the source signal are described in the next section. The description and the discussion of the results are given in Section 3. ## 2 Observations and data reduction NGC 6946 was observed at 450 $`\mu `$m and 850 $`\mu `$m, during April 10, 11 and June 17, 18, 19, 20, 1998. SCUBA consists of two bolometer arrays of 91 elements optimised to observe at 450 $`\mu `$m and 37 elements optimised at 850 $`\mu `$m, covering a field of view of about 2.3 arcmin (Holland et al. HollandMNRASprep1998 (1999)). The camera, mounted on the Nasmyth focus of the telescope, can be used simultaneously at both wavelengths, by means of a dichroic beamsplitter. In the scan-map mode, the telescope scans the source at a rate of 24 arcsec per second, along specific angles to ensure a fully sampled map. Meanwhile the secondary chops with a frequency of 7.8 Hz within the observed field. 
While this ensures a correct subtraction of the sky background, the resulting maps unfortunately have the profile of the source convolved with the chop. The profile of the source is restored by deconvolving the chop from the observed map by means of Fourier Transform (FT) analysis. Scan-maps of NGC 6946 presented here are fully sampled over an area of 8′x8′. Each set of observations consisted of six scans, with different chop configurations: chop throws of 20″, 30″ and 65″ along RA and Dec are needed to retrieve the final image. Data have been reduced using the STARLINK package SURF (Jenness & Lightfoot JennessMan1997 (1999)). Images were first flat-fielded to correct for the different sensitivities of the bolometers. Noisy bolometers were masked and spikes from transient detections removed by applying a 5-$`\sigma `$ clip. A correction for atmospheric extinction was applied, using measures of the atmospheric opacities taken several times during the nights of observation. The zenith optical depth varied during the six nights, with $`\tau _{450}=0.4`$–$`2.5`$ and $`\tau _{850}=0.1`$–$`0.5`$. The 450 $`\mu `$m opacity on the last night was too high ($`\tau >3`$) for the source to be detected and therefore the relative maps were not used for this wavelength. Because of the chopping in the source field, each bolometer sees a different background: a baseline, estimated from a linear interpolation at the edges of the scan, has been subtracted from each bolometer. Sky fluctuations were derived from the time sequence of observations for each bolometer, after the subtraction of a model of the source, obtained from the data themselves. The images have then been corrected by subtracting the systematic sky variations from each bolometer. Data taken with the same chop configuration were rebinned together into a map in an equatorial coordinate frame, to increase the signal to noise. Six maps with 3″ pixels were finally obtained for each wavelength, combining 33 and 25 observations, at 850 and 450 $`\mu `$m, respectively. In each of the six maps the signal from the source is convolved with a different chop function. The final deconvolved image is retrieved using the Emerson II technique (Holland et al. HollandMNRASprep1998 (1999); Jenness, Lightfoot & Holland JennessProc1998 (1998)). Essentially, for each rebinned image, the FT of the source is derived by dividing the FT of the map by the FT of the chop function, a simple sine-wave. Since the division boosts the noise near the zeros of the sine-wave, different chop configurations are used. For the chosen chop throws, the FTs of the chop functions do not have coincident zeros, apart from the zero frequency. A smoother FT of the source can therefore be obtained, and the final image is retrieved by applying an inverse FT. Unfortunately the deconvolution introduces artifacts in the images, like a curved sky background. This may be due to residual, uncorrected sky fluctuations at frequencies close to zero, where all the chop FTs go to zero. Work to solve this problem is ongoing (Jenness, private communication). To enhance the contrast between the sky and the source, we have modelled a curved surface from the images, masking all the regions where the signal was evidently coming from the galaxy. The surface has then been subtracted from the image. Calibration was achieved from scan-maps of Uranus, which were reduced in the same way as the galaxy. 
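Before turning to the calibration details, here is a minimal one-dimensional sketch of the Emerson II restoration just described (toy source profile, pixel units, noiseless; the actual SURF processing works on two-dimensional, noisy maps and slightly different chop conventions). A chopped map is the source convolved with a pair of displaced delta functions, whose FT is a sine wave; the three chop throws are combined with weights proportional to the squared modulus of the chop FT so that the zeros of one sine are covered by the others, and only the mean level (zero frequency) is irrecoverable.

```python
import numpy as np

# 1-D sketch of the Emerson (dual-beam) restoration: the chop FT is
# 2i sin(pi c u) for throw c; several throws are combined so that the
# only common zero is u = 0, where the map mean is lost.
n = 256
x = np.arange(n)
source = np.exp(-0.5 * ((x - 128) / 6.0) ** 2)       # fake galaxy profile

throws = [20, 30, 65]                                 # chop throws, pixels
u = np.fft.fftfreq(n)
num = np.zeros(n, dtype=complex)
den = np.zeros(n)
for c in throws:
    chop_ft = 2j * np.sin(np.pi * c * u)              # FT of the chop
    chopped = np.fft.ifft(np.fft.fft(source) * chop_ft)   # simulated map
    num += np.fft.fft(chopped) * np.conj(chop_ft)     # weighted combination
    den += np.abs(chop_ft) ** 2
den[den == 0] = np.inf                                # zero mode is lost
restored = np.fft.ifft(num / den).real
print(np.max(np.abs(restored - (source - source.mean()))))   # ~1e-15
```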
Integrated flux densities of Uranus were derived, for each observing period, using the STARLINK package FLUXES (Privett, Jenness & Matthews PrivettMan1998 (1998)) for JCMT planetary fluxes. Comparing data for each night we derived a relative error in calibration of 8 per cent and 17 per cent, for 850 $`\mu `$m and 450 $`\mu `$m respectively. From the planet profile, the beam size was estimated: FWHMs of 15.2″ and 8.7″ were measured for the beam at 850 and 450 $`\mu `$m, respectively. To increase the signal to noise, the 850 $`\mu `$m image has been smoothed with a gaussian of 9″, thus degrading the beam to a FWHM of 17.7″. The 450 $`\mu `$m image has been smoothed to the same resolution as for the 850 $`\mu `$m one, to facilitate the comparison between features present in both. The sky $`\sigma `$ in the smoothed images is 3.3 mJy beam<sup>-1</sup> at 850 $`\mu `$m and 22 mJy beam<sup>-1</sup> at 450 $`\mu `$m. The final images, after removing the curved background and smoothing, are presented in Fig. 1. For each wavelength, the grey scale shows all the features $`>`$1-$`\sigma `$, while contours start at 3-$`\sigma `$ and have steps of 3-$`\sigma `$. ## 3 Results and discussion The 850 $`\mu `$m image shows a bright nucleus and several features that clearly trace the spiral arms (in Fig. 1 the sub-mm contours are overlaid on a U-band image of the galaxy (Trewhella TrewhellaThesis1998 (1998))). As already seen in optical images (Tacconi & Young TacconiApJ1990 (1990)), the spiral arms originating in the northeast quadrant are more pronounced than the others, where only regions with bright HII regions have detectable emission in the sub-mm. The 850 $`\mu `$m image presents a striking similarity to the <sup>12</sup>CO(2-1) emission map in Sauty, Gerin & Casoli (SautyA&A1998 (1998)), observed with the IRAM 30m radiotelescope with a comparable resolution (13″). The image of molecular line emission is also shown in Fig. 1, with the 850 $`\mu `$m contours overlaid. The similarity with the sub-mm image is hardly surprising, since the molecular gas is the dominant component of the ISM over the optical disk of NGC 6946 (Tacconi & Young TacconiApJ1986 (1986)). The nucleus is elongated in the north-south direction, as observed for the central bar of molecular gas (Ishizuki et al. IshizukiApJ1990 (1990); Regan & Vogel ReganApJL1995 (1995)). Emission associated with a more diffuse atomic gas component cannot be detected, for several reasons. First of all, the face-on inclination of the galaxy: since dust is optically thin to its own emission, a faint component can be observed only if the dust column density is large. This is the case for the highly inclined galaxies NGC 7331 (Bianchi et al. BianchiMNRAS1998 (1998)) and NGC 891 (Alton et al. 1998b ), where a higher signal to noise was obtained coadding a smaller number of observations. The large face-on galaxy M51 has been observed using the scan-map mode and confirms the necessity of long integrations (Tilanus, private communication). Furthermore, chopping inside the source field removes not only the emission from the sky but also that from possible components with a shallow gradient: this may be the case for dust associated with the flat HI distribution in NGC 6946 (Tacconi & Young TacconiApJ1986 (1986)). Finally, a faint diffuse emission could have been masked by the mentioned artifacts and subtracted together with the curved background. 
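(Returning to the beam smoothing above: for Gaussian profiles the FWHMs of the beam and of the smoothing kernel add in quadrature, so the quoted 17.7″ follows from a one-line check, assuming the beam is well approximated by a Gaussian.)

```python
import math
# Convolving two Gaussians adds their FWHMs in quadrature:
# a 15.2" beam smoothed with a 9" Gaussian gives the quoted 17.7" beam.
print(math.hypot(15.2, 9.0))   # -> 17.66
```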
The 450 $`\mu `$m image is much noisier than the 850 $`\mu `$m one, because of the larger sky emission at this wavelength. Only a central region of 0′.75 x 1′.5 can be clearly detected, although most of the features at a 3-$`\sigma `$ level correspond to regions emitting in the long wavelength image. The temperature from the two sub-mm fluxes can be measured only for the central region with significant 450 $`\mu `$m flux. Sub-mm fluxes are 1.2 Jy at 850 $`\mu `$m and 9.3 Jy at 450 $`\mu `$m. We checked for the contribution of the strong <sup>12</sup>CO(3-2) line emission at 346 GHz to the 850 $`\mu `$m flux using the observation of the NGC 6946 centre in this line reported by Mauersberger et al. (MauersbergerA&A1999 (1999)) for a beam of 21″. Converting from the original units to Jansky (Braine et al. BraineA&A1995 (1995)) and averaging over the 30 GHz bandwidth of the 850 $`\mu `$m filter (Matthews MatthewsMan1999 (1999)), a flux density of 80 mJy/beam is derived. As pointed out by the referee, the pointing of the Mauersberger et al. observations was offset from the strongly concentrated central emission by nearly one beamwidth in the SW direction. Using the <sup>12</sup>CO(2-1) image as a template of the higher state emission, we corrected for the offset and derived the flux for the central region, larger than the beam. A total contribution of 0.6 Jy is derived for the 346 GHz line (50% of the 850 $`\mu `$m flux). However, this large contribution is due to the high density and gas temperature of the central region. In fact, for 850 $`\mu `$m fluxes in larger apertures, the contamination is much smaller: Israel et al. (IsraelA&A1999 (1999)) derive a contribution of only 4% to the total sub-mm flux of NGC 891. Therefore, derivations of the cold dust temperature at large galactic radii (Alton et al. 1998b ) are not severely biased. We did not correct the 450 $`\mu `$m flux for the contribution of the <sup>12</sup>CO(6-5) line, which lies at the edge of the filter (Israel et al. IsraelA&A1999 (1999)). After the correction, the dust temperature of the central region is T=34$`\pm `$6 K, where the large quoted error comes from the calibration uncertainties. Here and in the following, dust temperatures and masses are computed using the emissivity law $`Q_{\mathrm{em}}(\lambda )`$ derived by Bianchi, Davies & Alton (1999a ) from observations of diffuse FIR emission and estimates of optical extinction in the Galaxy. For a wavelength dependence of the emissivity $`\lambda ^{\beta }`$, changing smoothly from $`\beta =1`$ to $`\beta =2`$ at 200 $`\mu `$m (Reach et al. ReachApJ1995 (1995)), they obtain $`Q_{\mathrm{ext}}(V)/Q_{\mathrm{em}}(\text{100 }\mu \text{m})=2390\pm 190`$, where $`Q_{\mathrm{ext}}(V)`$ is the extinction efficiency in the V-band ($`Q_{\mathrm{ext}}(V)\approx 1.5`$; Casey CaseyApJ1991 (1991)). Lacking information outside of the centre, a mean temperature for a larger aperture can be derived from the lower resolution IRAS and ISO images at 100 $`\mu `$m and 200 $`\mu `$m (Alton et al. 1998a ). The total flux inside the B-band half light aperture (5′ in diameter) is 240$`\pm `$40 Jy at 100 $`\mu `$m and 280$`\pm `$40 Jy at 200 $`\mu `$m (Bianchi, Davies & Alton 1999b ). The temperature from the IRAS and ISO fluxes is T=24$`\pm `$2 K. 
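As a sketch of how such a single-temperature fit works, the following solves for T from the quoted 100 and 200 $`\mu `$m fluxes with a modified blackbody. Taking a single $`\beta =1`$ slope between 100 and 200 $`\mu `$m is this sketch's simplification of the smoothly varying emissivity law (it steepens to $`\beta =2`$ only beyond 200 $`\mu `$m), so the result is only indicative; SciPy is assumed available.

```python
import numpy as np
from scipy.optimize import brentq

# Dust temperature from the 100 and 200 micron fluxes quoted above
# (240 Jy and 280 Jy), for S_nu ~ Q_em(lambda) B_nu(T) with
# Q_em ~ lambda^-beta, beta = 1 between the two bands (an assumption
# of this sketch, not the paper's exact law).
h, k, c = 6.626e-34, 1.381e-23, 2.998e8

def planck(nu, T):
    return nu**3 / np.expm1(h * nu / (k * T))    # constants drop in ratios

def flux_ratio(T, lam1=100e-6, lam2=200e-6, beta=1.0):
    nu1, nu2 = c / lam1, c / lam2
    return (lam1 / lam2) ** (-beta) * planck(nu1, T) / planck(nu2, T)

T = brentq(lambda T: flux_ratio(T) - 240.0 / 280.0, 5.0, 100.0)
print(T)   # ~25 K, consistent with the quoted T = 24 +/- 2 K
```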
We derived a point-to-point correlation between the 850 $`\mu `$m flux and the <sup>12</sup>CO(2-1) line, resampling the sub-mm image to the same pixel size as the line emission map (10″, roughly equivalent to both beam sizes) and using all positions with signals larger than 3-$`\sigma `$ in both observations. A linear correlation is found (Fig. 2). Assuming a mean dust grain radius $`a=`$ 0.1$`\mu `$m and mass density $`\rho `$=3 g cm<sup>-3</sup> (Hildebrand HildebrandQJRAS1983 (1983)), the emissivity of Bianchi et al. (1999a ) (using the Bianchi et al. emissivity with $`Q_{\mathrm{em}}\lambda ^2`$ at any wavelength results in dust column densities smaller by only 15%) and T=24K, the dust column density and hence the mass along the line of sight can be easily computed. The molecular gas column density has been derived from the <sup>12</sup>CO(2-1) emission using a conversion factor appropriate for the <sup>12</sup>CO(1-0) emission in the general ISM of the Galaxy (X=1.8$`\times `$10<sup>20</sup> cm<sup>-2</sup> K<sup>-1</sup> km<sup>-1</sup> s; Maloney MaloneyProc1990 (1990)) and a line ratio I(2-1)/I(1-0)=0.4 (Casoli et al. CasoliA&A1990 (1990)). The slope of the linear correlation can then be converted into a gas-to-dust mass ratio of 170$`\pm `$20, a value very close to the local Galactic one (160; Sodroski et al. SodroskiApJ1994 (1994)). This confirms the association of dust with the locally dominant phase of the galactic ISM. The dust content of NGC 6946 has been studied by carrying out an energy balance between the stellar emission in the optical and the FIR dust emission, with the help of radiative transfer models. If an exponential disk is used to model the dust distribution, a central face-on optical depth $`\tau _V\approx 5`$ is needed to explain the FIR emission (Evans EvansThesis1992 (1992); Trewhella TrewhellaThesis1998 (1998); Bianchi et al. 1999b ). The 850 $`\mu `$m image clearly shows that the dust distribution is more complex, but still the column densities derived from the sub-mm flux support the idea of an optically thick dust distribution. Under the same assumptions as in the previous paragraph, the diffuse component of the north-east spiral arms at a 3-$`\sigma `$ level corresponds to a V-band optical depth $`\tau _V\approx 2.2`$. The quite high optical depth corresponding to the sky noise ($`\tau _V\approx 0.7`$) shows how difficult it is to obtain sub-mm images of dust emission in the outskirts of face-on galaxies, even for a high sensitivity instrument like SCUBA. Thus, possible extended dust distributions (Alton et al. 1998a ) are better revealed through deep sub-mm imaging of edge-on galaxies, where the dust column density is maximized. However, the high inclination makes the interpretation of the dust emission along the line of sight more complex. ###### Acknowledgements. It is a pleasure to thank Gerald Moriarty-Schieven and Tim Jenness, for their support during observations and data reduction, respectively. The paper has also benefited from the help of U. Klaas and R. Chini.
# Probing strange stars and color superconductivity by 𝑟-mode instabilities in millisecond pulsars ## Abstract $`R`$-mode instabilities in rapidly rotating quark matter stars (strange stars) lead to specific signatures in the evolution of pulsars with periods below 2.5 msec, and may explain the apparent lack of very rapid pulsars. Existing data seem consistent with pulsars being strange stars with a normal quark matter phase surrounded by an insulating nuclear crust. In contrast, quark stars in a color-flavor-locked (CFL) phase are ruled out. Two-flavor color superconductivity (2SC) is marginally inconsistent with pulsar data. Starting with Andersson’s realization that rotating relativistic stars are generically unstable against the $`r`$(otational)-mode instability , a series of papers have investigated the many implications for gravitational radiation detection and the evolution of pulsars . Originally it appeared that young, hot neutron stars would spin down to rotation periods of order 10 msec within their first year of existence. In contrast, thousands of years would be required for any $`r`$-mode driven spin-down of a hot strange star or a neutron star with significant quark matter content, and the rotation period would not increase above 3 msec in this case, making young, rapid pulsars potential “smoking guns” for quark matter (meta)stability . Decisive for the localization of the instability regimes were the viscosities damping the modes, with strange matter characterized by a huge bulk viscosity relative to nuclear matter. Recently Bildsten and Ushomirsky pointed out that a very important effect damping $`r`$-modes in neutron stars had been overlooked. This is damping due to viscosity in the boundary layer between the oscillating fluid and the nearly static crust, which is more than $`10^5`$ times stronger than that from the shear in the interior. Matching of boundary conditions at the crust is particularly important for $`r`$-modes, since these are characterized by significant horizontal flows. As a result, $`r`$-modes in neutron stars are only important for rotation periods faster than 2 msec, and only for very high core temperatures (with the possible exception of the very brief time span before the crust forms). Unless it turns out that neutron stars are able to spin down significantly before their crust solidifies, this means that young, rapid pulsars could be neutron stars after all; not just strange stars. But other, perhaps even better, probes of strange stars result from the $`r`$-mode instability, as demonstrated below. Furthermore, pulsar data turn out to be very sensitive probes of color superconductivity in quark matter . These prospects are pursued in the following. In contrast to neutron stars, the quark matter fluid in a strange star need not be at rest at the base of the crust, and therefore the $`r`$-modes in a quark matter star are not damped significantly by “surface rubbing”. If strange quark matter is absolutely stable (having lower energy per baryon than nuclear matter), strange stars may be bare, consisting of quark matter fluid all the way to the surface. In this case, no surface rubbing takes place. But even in the more likely case, where gas from the supernova explosion or later accretion reaches the surface, the crust formed floats on top of a huge electrostatic potential, separated from the quark surface. 
This is a result of the strong interactions confining the quarks much more tightly than the electrostatic forces holding the electrons, so that electrons create an atmosphere of a few hundred fermi thickness , effectively separating quark matter from the nuclear crust. Some viscosity results from the interaction between the outer part of the electron atmosphere and the base of the crust, but as illustrated later, due to the low density of the strange star crust relative to that of a neutron star, this effect is only dominant when other sources of viscosity are exponentially suppressed, as in the case of a color-flavor-locked quark phase. Rapidly rotating millisecond pulsars (periods below 2.5–3 msec) are unstable to the $`r`$-mode instability for core temperatures in the range of $`\mathrm{a}\mathrm{few}\times 10^5`$–$`\mathrm{a}\mathrm{few}\times 10^7`$K, if they are quark stars with a normal fluid quark phase. Interestingly, the fastest pulsars known may be just outside this window of instability, reaching it within $`10^4`$ years due to cooling. When a pulsar reaches the instability window it will slow down by gravitational wave emission on a time scale of $`10^4`$–$`10^5`$ years to a period near 2.5 msec, slowing down further on a much longer time scale due to magnetic dipole braking. As demonstrated below, the $`r`$-mode spin-down would be characterized by a so-called braking index with an unusually high value of $`N\approx 9`$, a clear observational indication of the process. Some pile-up of pulsar periods near 3 msec, and an underrepresentation of short periods, would be expected in this scenario, apparently consistent with the data. In contrast, quark matter with diquark pairing into a color-flavor locked phase would have an exponential reduction in the viscosities, expanding the $`r`$-mode instability region to encompass low-mass x-ray binaries (LMXB’s) as well as many known pulsars, which should then rapidly spin down, in disagreement with observations. Unless an as yet unconsidered viscous effect could prevent this , it seems that these pulsars cannot be quark stars with properties expected for CFL. A 2-flavor color superconducting phase (2SC) has less dramatic consequences, but still seems marginally ruled out by the data. Metastable quark matter in CFL or 2SC phases cannot be ruled out, however, since a hybrid star with quark matter in the interior and nuclear matter in the outer layers, probably with a mixed phase in between, must obey the crust boundary condition as an ordinary neutron star, leading to surface rubbing. The critical rotation frequency for a given stellar model as a function of temperature follows from $$\frac{1}{\tau _{\mathrm{gw}}}+\frac{1}{\tau _{\mathrm{sv}}}+\frac{1}{\tau _{\mathrm{bv}}}+\frac{1}{\tau _{\mathrm{sr}}}=0,$$ (1) where $`\tau _{\mathrm{gw}}<0`$ is the characteristic time scale for energy loss due to gravity wave emission, $`\tau _{\mathrm{sv}}`$ and $`\tau _{\mathrm{bv}}`$ are the damping times due to shear and bulk viscosities, and $`\tau _{\mathrm{sr}}`$ is the surface rubbing time scale. Surface rubbing is decisive for neutron stars , whereas $`1/\tau _{\mathrm{sr}}=0`$ for bare quark stars, and is suppressed by more than 5 orders of magnitude even for strange stars with maximal crust. Ref. used an analytic description of $`r`$-mode instability in uniform stars to derive the characteristic time scales for strange stars. 
A strange star has nearly constant density except for masses very close to the gravitational instability limit, so a polytropic equation of state with a low index, $`n`$, provides a very good approximation. The case $`n=0`$ corresponding to constant density was discussed in , whereas $`n=1`$ was studied in . The time scale for gravity wave emission is $$\tau _{\mathrm{gw}}=-3.26(1.57)\mathrm{s}\left(\pi G\overline{\rho }/\mathrm{\Omega }^2\right)^3,$$ (2) where prefactors outside (inside) parentheses correspond to $`n=1`$ (0), $`G`$ is the gravitational constant, $`\mathrm{\Omega }`$ is the angular rotation frequency, and $`\overline{\rho }`$ is the mean density. With the shear viscosity coefficient taken from , the time scale for shear viscous damping is $$\tau _{\mathrm{sv}}=5.37(2.40)\times 10^8\mathrm{s}(\alpha _S/0.1)^{5/3}T_9^{5/3}.$$ (3) Here, $`T_9`$ denotes the temperature in units of $`10^9`$K, and $`\alpha _S`$ is the strong coupling. The bulk viscosity of strange quark matter depends mainly on the rate of $`u+ds+u`$, which is the fastest of the reactions trying to reestablish weak equilibrium between the massive strange quarks and the much lighter up and down quarks. To very good approximation the bulk viscosity is given by $`\zeta =\alpha T^2/[(\kappa \mathrm{\Omega })^2+\beta T^4]`$, with $`\alpha `$ and $`\beta `$ given in . For the dominant $`r`$-mode, $`\kappa =2/3`$. A low (high) $`T`$-limit is relevant when the first (second) term in the denominator dominates. In cgs units, the low-$`T`$ limit is $`\zeta ^{\mathrm{low}}\approx 3.2\times 10^3m_{100}^4\rho T^2(\kappa \mathrm{\Omega })^{2}`$, where $`m_{100}`$ is the strange quark mass in units of 100 MeV. The high-$`T`$ limit takes over for $`T>10^9`$K. Here, $`\zeta ^{\mathrm{high}}\approx 3.8\times 10^{61}m_{100}^4\rho ^{1}T^{2}`$. For the bulk viscous damping time the approximation used in has turned out to be too crude, since bulk viscosity coupling to the $`r`$-modes happens at second order. Lindblom et al. reevaluated $`\tau _{\mathrm{bv}}`$ for a strange star in the low-$`T`$ limit and found $$\tau _{\mathrm{bv}}^{\mathrm{low}}=0.886\mathrm{s}\left(\pi G\overline{\rho }/\mathrm{\Omega }^2\right)T_9^{2}m_{100}^{4},$$ (4) The prefactor here is 7 times smaller than used in , and the scaling with $`\mathrm{\Omega }`$ is opposite, resulting in some changes in the results, though not in the conclusions of . But now also the high-$`T`$ limit becomes important. In this limit (not considered in ) $$\tau _{\mathrm{bv}}^{\mathrm{high}}=0.268\mathrm{s}\left(\pi G\overline{\rho }/\mathrm{\Omega }^2\right)^2T_9^2m_{100}^{4}.$$ (5) Because $`\tau _{\mathrm{bv}}^{\mathrm{high}}`$ increases with temperature, it can lead to $`r`$-mode instability for very high $`T`$. Figure 1 shows the regions of $`r`$-mode (in)stability in a plot of pulsar spin frequency ($`\nu \mathrm{\Omega }/(2\pi )`$) versus temperature for a strange star with mass $`M=1.4M_{\mathrm{}}`$ and radius $`R=10`$km. An $`n=1`$ polytrope was assumed, but conclusions are not changed for uniform density. Also indicated are the positions of LMXB’s, presumed to be old pulsars being spun up by accretion to eventually become rapid millisecond pulsars, as well as the positions of the two most rapidly spinning pulsars known, with periods of 1.5578 and 1.6074 msec ($`\nu \approx 642`$ and 622 Hz). It should be stressed that the core temperatures are uncertain upper limits derived from x-ray limits on the surface temperatures, increased by roughly two orders of magnitude to include effects of an insulating crust. 
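As an illustration of how the instability boundary of Fig. 1 follows from Eq. (1) with the time scales above, here is a minimal numerical sketch (not the paper's code). It uses the $`n=1`$ prefactors, $`m_{100}=2`$, $`\alpha _S=0.1`$, and combines the two bulk limits as $`\tau _{\mathrm{bv}}=\tau _{\mathrm{bv}}^{\mathrm{low}}+\tau _{\mathrm{bv}}^{\mathrm{high}}`$, which follows approximately from the two limits of $`\zeta `$; SciPy's root finder is assumed available, and the quoted frequency is only indicative.

```python
import numpy as np
from scipy.optimize import brentq

# Critical r-mode frequency from 1/tau_gw + 1/tau_sv + 1/tau_bv = 0
# for M = 1.4 Msun, R = 10 km (mean density in cgs units).
G = 6.674e-8
rho = 1.4 * 1.989e33 / (4.0 / 3.0 * np.pi * 1e18)   # g cm^-3
pgr = np.pi * G * rho                                # pi G rho, s^-2
m100, a_s = 2.0, 0.1

def growth_minus_damping(Omega, T9):
    tau_gw = 3.26 * (pgr / Omega**2) ** 3            # |tau_gw|
    tau_sv = 5.37e8 * (a_s / 0.1) ** (5.0 / 3.0) * T9 ** (5.0 / 3.0)
    tau_low = 0.886 * (pgr / Omega**2) * T9**-2 * m100**-4
    tau_high = 0.268 * (pgr / Omega**2) ** 2 * T9**2 * m100**-4
    return 1.0 / tau_gw - 1.0 / tau_sv - 1.0 / (tau_low + tau_high)

T9 = 1e-3                                            # T = 10^6 K
Om_c = brentq(growth_minus_damping, 100.0, 3e4, args=(T9,))
print(Om_c / (2 * np.pi))   # ~550 Hz, i.e. a period below ~2 msec
```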
The actual numbers are taken from , valid for a neutron star model, but similar limits apply to strange stars with a significant crust. Bare strange stars or stars with a thin crust would have a core temperature close to the surface temperature, moving them closer to or even inside the region of $`r`$-mode instability. Completely bare strange stars are very poor emitters of radiation below the quark matter plasma frequency of 20 MeV , but even a thin crust/atmosphere would allow normal thermal radiation. For simplicity both categories are denoted “bare”, but conclusions based on surface temperatures only relate to strange stars with a tiny layer of surface pollution. One notes that the LMXB’s are well within the region stable against $`r`$-mode instabilities, allowing them to accrete and speed up unhindered by the instability. The rapid pulsars are also apparently in the stable regime (at least for $`m_{100}=2`$), but as the time scale for cooling to $`10^7`$K is only around $`10^4`$years , they should soon enter the unstable region and start spinning down. Since $`\tau _{\mathrm{gw}}\tau _{\mathrm{cool}}`$, the pulsars should follow a track indistinguishable from the curve marking the instability region, corresponding to an unusually high braking index of $`N\approx 9`$. This value follows because $`\mathrm{\Omega }T^{1/2}t^{1/8}`$ (the latter comes from $`t\tau _{\mathrm{cool}}10^4\mathrm{yr}T_9^{4}`$ for standard neutrino cooling of a quark star ). Thus $`\dot{\mathrm{\Omega }}\frac{1}{8}t^{9/8}`$, $`\ddot{\mathrm{\Omega }}\frac{1}{8}\frac{9}{8}t^{17/8}`$, and the braking index $`N\mathrm{\Omega }\ddot{\mathrm{\Omega }}/\dot{\mathrm{\Omega }}^2=9`$. The star will reach a spin frequency of 400Hz (2.5 msec rotation period) within a cooling time scale of $`10^5`$years. Notice that pulsars with the highest spin frequencies need less time to reach the region of instability. This may explain why no frequencies above 642Hz have been observed, and the spin-down to 2.5–3 msec by the $`r`$-mode instability may lead to some clustering of observed rotation periods around this value (not inconsistent with data, but the statistics is not overwhelming due to the low number of objects). No similar effects arise in the case of ordinary neutron stars, where $`r`$-mode instabilities only seem to work at frequencies above 500Hz, and then mainly for $`T10^{10}`$K (dashed curve in Fig. 1). Notice that the agreement with pulsar data only remains valid if an insulating crust allows the bulk temperature of the pulsar to be some two orders of magnitude higher than the observed upper limits on the surface temperature (about $`6\times 10^5`$K and $`9\times 10^5`$K for the pulsars plotted) to locate the pulsars to the right of the $`r`$-mode instability range. A position inside or to the left of this regime seems ruled out (the latter because pulsars spun up in the LMXB-domain would have to cross the instability regime before reaching a position to the left). Therefore, strange stars without a significant crust (having comparable surface and bulk temperatures) are ruled out as models for these rapid pulsars unless they are completely bare and therefore hidden in x-rays. Superfluidity in the quark phase completely changes the behavior. 
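(Before turning to the superfluid case, a quick symbolic check of the braking index derived above; sympy is assumed available and is purely an illustration, not part of the original analysis.)

```python
import sympy as sp

# With Omega ~ t^(-1/8), the braking index N = Omega * Omega'' / Omega'^2
# evaluates to exactly 9, as stated in the text.
t = sp.symbols('t', positive=True)
Omega = t ** sp.Rational(-1, 8)
N = sp.simplify(Omega * sp.diff(Omega, t, 2) / sp.diff(Omega, t) ** 2)
print(N)   # 9
```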
If quark pairing is characterized by an energy gap, $`\mathrm{\Delta }`$, reaction rates involving two quarks (as relevant for bulk as well as shear viscosities) are suppressed by a factor $`\mathrm{exp}(2\mathrm{\Delta }/T)`$, assuming equal behavior for all quark flavors, as expected in a high density color-flavor locked phase. This increases the bulk viscous time scale by $`\mathrm{exp}(2\mathrm{\Delta }/T)`$ and $`\tau _{\mathrm{sv}}`$ (including screening) by $`\mathrm{exp}(\mathrm{\Delta }/(3T))`$, significantly increasing the parameter space where the $`r`$-mode instabilities are active. In fact at low $`T`$ the viscosity is now determined by shear due to electron-electron scattering or by surface rubbing. The time scale for electron shear is $`\tau _{\mathrm{sv}}^{ee}2.95\times 10^7\mathrm{s}(\mu _e/\mu _q)^{14/3}T_9^{5/3}`$. In Fig. 2 (dashed curve) the effect of electron shear is maximized, using a very high $`\mu _e/\mu _q=0.1`$. Surface rubbing due to the electron atmosphere being carried along by the $`r`$-modes in the quark phase, scattering mainly on phonons in the nuclear crust, corresponds to a viscous time scale $`\tau _{\mathrm{sr}}1.42\times 10^8\mathrm{s}T_9(\nu /1\mathrm{k}\mathrm{H}\mathrm{z})^{1/2}`$ for a crust with maximal density (dash-dot curve). For lower crust base density, the effect of surface rubbing is reduced further. Figure 2 shows $`r`$-mode instabilities in strange stars dominated by a CFL phase with $`\mathrm{\Delta }=1`$MeV. Much higher energy gaps (50–100 MeV) are expected in recent studies of color superconductivity , but as seen from Fig. 2, even a value of 1 MeV is incompatible with pulsar data, since basically all rapid pulsars are located in the unstable regime, and therefore should spin down by gravitational wave emission in a matter of hours, cf. $`\tau _{\mathrm{gw}}`$. Clearly in contradiction with the facts. At lower density, the high mass of the $`s`$-quark relative to $`u`$ and $`d`$ prevents creation of the CFL phase; instead two color states of $`u`$ and $`d`$ may pair, creating a 2-flavor color superconducting phase (2SC) that introduces energy gaps for 4 out of 9 quark color-flavor states. If the corresponding energy gap is of any significance, the states with a gap can be safely ignored compared to the remaining unpaired $`s`$-quarks and one color of $`u`$ and $`d`$. This reduces the rate of the weak reaction $`u+sd+u`$ by a factor 1/9, increasing $`\tau _{\mathrm{bv}}`$ by a factor 9. The strong scattering rates responsible for the shear viscosity are reduced by $`(5/9)^{1/3}`$, thus increasing $`\tau _{\mathrm{sv}}`$ by $`(9/5)^{1/3}`$. This expands the domain of $`r`$-mode instability as shown in Fig. 3, to an extent where some rapid pulsars are in the unstable zone, in disagreement with data. It is fair to say, though, that the uncertainties and approximations involved may be large enough that 2SC quark matter stars may not be ruled out completely. The $`r`$-mode instability thus provides several interesting tests of the hypothesis of stable quark matter stars (strange stars). If strange quark matter is absolutely stable, pulsars would be expected to consist of quark matter. Data on pulsar rotation properties are consistent with this if the quark matter is non-superfluid (but only for strange stars with a thick crust or completely bare strange stars). 
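To get a feel for the size of the pairing suppression invoked above, the following two-line estimate evaluates the exponential factors at a representative temperature; the numbers are this sketch's illustration, not a fit from the paper.

```python
import numpy as np

# At T = 10^9 K, k_B T ~ 0.086 MeV, so even a modest gap Delta = 1 MeV
# multiplies the quark bulk-viscosity rate by exp(-2 Delta/(k_B T)) and
# the quark shear rate by exp(-Delta/(3 k_B T)), leaving electron shear
# and surface rubbing as the only effective damping.
kB = 8.617e-11                 # MeV per K
Delta, T = 1.0, 1e9            # MeV, K
x = Delta / (kB * T)
print(np.exp(-2 * x), np.exp(-x / 3))   # ~8e-11 and ~2e-2
```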
The lack of observed very rapid pulsars may be due to the $`r`$-mode instability, and rapid pulsars reaching the region of instability will spin down in a characteristic manner that can be tested by observations. Strange stars in a color-flavor locked phase are, in contrast, not permitted by pulsar data. Most rapid pulsars would be $`r`$-mode unstable, and should spin down within hours, which clearly they do not. So if strange quark matter is stable, it may be concluded that a CFL phase, and probably a 2SC phase as well, is not reached at densities relevant in pulsars, i.e. up to a few times nuclear density. These arguments do not rule out a color superconducting phase at such densities if quark matter is only metastable, because then a pulsar, even if it contains quark matter, will not have the separation of the crust characteristic of a strange star. Thus, such a star is susceptible to the full surface rubbing effect, and will not be $`r`$-mode unstable to a similar degree. A detailed study of $`r`$-modes in such hybrid stars would be interesting. This work was supported in part by the Theoretical Astrophysics Center under the Danish National Research Foundation. I thank the referees and Krishna Rajagopal for comments on an earlier version.
# 1 Introduction A precise prediction of the cross sections for high-energy $`e^+e^{}`$ scattering frequently requires an estimation of the radiative corrections beyond the lowest order calculations. Among the various corrections from the electro-weak interactions it is known that the QED radiative corrections to the initial-state particles give the greatest contribution in general. Sometimes an all-order summation is necessary to reach the needed precision. In the leading-logarithmic approximation the higher-order summation of the QED corrections can be done easily thanks to the factorization theorem. The theorem given by ref. guarantees that the final-state corrections result in only a small contribution to the total cross section. For the $`e^+e^{}`$ annihilation processes we have tools that are universally applicable because only the initial-state QED corrections are involved: the structure function and the parton shower methods widely used in high energy physics today. The recent experiments, however, require the higher order corrections for multi-particle final states, such as the four-fermion productions at LEP2. Even though the exact calculations of the higher order corrections are very difficult or impossible, it is still possible to include the biggest QED corrections by making use of those tools, as long as the process is dominated by the annihilation. On the other hand, for the non-annihilation processes it has not been well investigated how to apply these tools. The only examples are the Bhabha scattering, to which the structure function has been applied in and the parton shower in . In the present work it will be shown that the structure function and the parton shower can also be universal tools for any non-annihilation process once the evolution energy scale is settled. First we apply and examine both methods for the two-photon process, $`e^+e^{}e^+e^{}\mu ^+\mu ^{}`$, in the next section. The obtained total and differential cross sections are compared with those given by the $`O(\alpha )`$ corrections in section 3. These methods must be universal for any process, again thanks to the factorization theorem. The energy scale, however, with which the radiative correction is evolved should be carefully chosen for the non-annihilation processes. A unique and definite way to find this energy scale is explained in section 4. ## 2 Calculation method ### 2.1 Structure Function Method The structure function (SF) for the initial-state radiative correction (ISR) is well established for the case of the $`e^+e^{}`$ annihilation processes. The observed cross section corrected by ISR can be expressed by using the SF as $`\sigma _{total}(s)`$ $`=`$ $`{\displaystyle \int _0^1}dx_1{\displaystyle \int _0^1}dx_2D_{e^{}}(x_1,s)D_{e^+}(x_2,s)\sigma _0(x_1x_2s),`$ (1) where $`D_{e^\pm }(x,s)`$ is the electron (positron) structure function, with $`x_{1,2}`$ being the energy fractions of $`e^\pm `$. 
The SF up to $`O(\alpha ^2)`$ is given by $`D(x,s)`$ $`=`$ $`{\displaystyle \frac{\beta }{2}}\left(1-x\right)^{\frac{\beta }{2}-1}\left[1+{\displaystyle \frac{3}{8}}\beta +\beta ^2\left({\displaystyle \frac{9}{128}}-{\displaystyle \frac{\pi ^2}{48}}\right)\right]-{\displaystyle \frac{\beta }{4}}\left(1+x\right)`$ (2) $`+`$ $`\left({\displaystyle \frac{\beta }{4}}\right)^2\left[2(1+x)\mathrm{ln}{\displaystyle \frac{1}{1-x}}-{\displaystyle \frac{1+3x^2}{2(1-x)}}\mathrm{ln}x-{\displaystyle \frac{5+x}{2}}\right],`$ $`\beta `$ $`=`$ $`{\displaystyle \frac{2\alpha }{\pi }}\left(\mathrm{ln}{\displaystyle \frac{s}{m_e^2}}-1\right).`$ (3) In deriving this formula, which is given by solving the Altarelli-Parisi equation in the LL approximation, we have used an ad hoc trick to get more accuracy. The factor $`\beta =(2\alpha /\pi )\mathrm{ln}(s/m_e^2)`$ is replaced by $`\beta =(2\alpha /\pi )(\mathrm{ln}(s/m_e^2)-1)`$ to match the perturbative calculations. Let us apply the SF method to the two-photon process, $`e^{}(p_{})+e^+(p_+)`$ $``$ $`e^{}(q_{})+e^+(q_+)+\mu ^{}(k_{})+\mu ^+(k_+).`$ (4) For the forward scattering of $`e^\pm `$, the multi-peripheral diagrams shown in Fig.1 give the dominant contribution to the total cross section. Thus only the multi-peripheral diagrams are taken into account in this work. In this case the corrected cross section is given by $`\sigma _{total}(s,t_\pm )`$ $`=`$ $`{\displaystyle \int dx_{I}dx_{F}dx_{I+}dx_{F+}D_{e^{}}(x_{I},Q_{}^2)D_{e^{}}(x_{F},Q_{}^2)}`$ (5) $`D_{e^+}(x_{I+},Q_+^2)D_{e^+}(x_{F+},Q_+^2)\sigma _0(\widehat{s},\widehat{t}_\pm ).`$ Here $`Q_\pm ^2`$ is the energy scale to be fixed, with which the SF should be evolved. Since these functions are common to the initial and the final radiation from the $`e^\pm `$’s in the leading-logarithmic (LL) approximation, we shall drop the subscript $`e^\pm `$ from the SF hereafter. After (before) the photon radiation the initial (final) momenta $`p_\pm `$ ($`q_\pm `$) become $`\widehat{p}_\pm `$ ($`\widehat{q}_\pm `$) in the following ways $`\widehat{p}_{}`$ $`=`$ $`x_{I}p_{},\widehat{q}_{}={\displaystyle \frac{1}{x_{F}}}q_{},`$ (6) $`\widehat{p}_+`$ $`=`$ $`x_{I+}p_+,\widehat{q}_+={\displaystyle \frac{1}{x_{F+}}}q_+,`$ (7) respectively. Then the CM energy squared ($`s=(p_{}+p_+)^2`$) and the momentum transfer squared ($`t_\pm =(p_\pm -q_\pm )^2`$) are scaled as follows, $`\widehat{s}=x_{I}x_{I+}s,\widehat{t}_\pm `$ $`=`$ $`{\displaystyle \frac{x_{I\pm }}{x_{F\pm }}}t_\pm .`$ (8) Note that the SF behaves like $`\delta (1-x)`$ when $`\beta 0`$, that is, $`{\displaystyle \int _0^1}dx{\displaystyle \frac{\beta }{2}}(1-x)^{\frac{\beta }{2}-1}f(x)`$ $`=`$ $`f(1)+{\displaystyle \int _0^1}dx\left[(1-x)^{\frac{\beta }{2}}-1\right]f^{\prime }(x)`$ (9) $`=`$ $`f(1)+{\displaystyle \frac{\beta }{2}}{\displaystyle \int _0^1}dx\mathrm{ln}(1-x)f^{\prime }(x)+O(\beta ^2),`$ where $`f(x)`$ is an arbitrary smooth function. The choice of the energy scale in the SF is not a trivial matter. As pointed out in Ref. , it is natural to use $`-t_\pm `$ instead of $`s`$ for the two-photon processes. The justification of this choice is given by comparing them with the perturbative calculations. This could be done in the region where the soft-photon approximation of the total cross section is valid. Since the corrections for the $`e^{}`$ and $`e^+`$ sides must be symmetric, it is enough to consider only those from the $`e^{}`$ side. The total correction will be obtained by doubling them. 
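As a concrete check of Eqs. (2)-(3), the following sketch integrates $`D(x,s)`$: the singular $`(1-x)^{\beta /21}`$ prefactor is integrated analytically and the regular remainder numerically, and the $`O(\beta )`$ and $`O(\beta ^2)`$ pieces cancel so that the SF integrates to unity. The energy choice is illustrative and SciPy is assumed available; the name `soft_norm` is ours.

```python
import numpy as np
from scipy.integrate import quad

# Check that the O(alpha^2) structure function of Eq.(2)-(3) is
# normalized to 1 up to O(beta^3).
alpha, me = 1.0 / 137.036, 0.511e-3            # GeV
s = 200.0 ** 2                                  # e.g. LEP2, GeV^2
beta = 2 * alpha / np.pi * (np.log(s / me**2) - 1.0)

def D_regular(x):
    # non-singular part of D(x, s)
    return (-beta / 4 * (1 + x)
            + (beta / 4) ** 2 * (2 * (1 + x) * np.log(1 / (1 - x))
                                 - (1 + 3 * x**2) / (2 * (1 - x)) * np.log(x)
                                 - (5 + x) / 2))

# analytic integral of the (1-x)^(beta/2-1) term times its bracket
soft_norm = 1 + 3 * beta / 8 + beta**2 * (9.0 / 128 - np.pi**2 / 48)
total = soft_norm + quad(D_regular, 0.0, 1.0, limit=200)[0]
print(total)   # -> 1.000...
```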
Then Eq.(5) can be simplified as $`\sigma(s)=\int dx_I\,dx_F\,D(x_I,Q^2)D(x_F,Q^2)\,\sigma_0(x_Is).`$ (10) The integrations are performed in the region $`1-x_{I,F}\ll 1`$. Since the Born cross section of this process ($`\sigma_0(\widehat{s})`$) is a smooth function of $`\widehat{s}`$, the cross section in the soft photon approximation is written as $`\sigma_{soft}=\sigma_0(s)\int_{1-\frac{k_c}{E}}^1 dx_I\int_{(1-\frac{k_c}{E})/x_I}^1 dx_F\,D(x_I,Q^2)D(x_F,Q^2)=\sigma_0(s)\int_0^{\frac{k_c}{E}}dy\,H(y,Q^2),`$ (11) $`H(y,Q^2)=\int_{1-y}^1\frac{dx_F}{x_F}D(x_F,Q^2)D\left(\frac{1-y}{x_F},Q^2\right),`$ (12) where $`k_c`$ is the maximum energy of the sum of the initial and the final photon energies and $`E=p^0\simeq q^0`$. The function $`H`$ is called the radiator and can be obtained easily from SF as $`H(x,s)=D(1-x,s)|_{\beta\rightarrow 2\beta}=\beta x^{\beta-1}\left[1+\frac{3}{4}\beta+\frac{\beta^2}{4}\left(\frac{9}{8}-\frac{\pi^2}{3}\right)\right]-\beta\left(1-\frac{x}{2}\right)+\frac{\beta^2}{8}\left[-4(2-x)\ln x-\frac{1+3(1-x)^2}{x}\ln(1-x)-6+x\right].`$ (13) The integration of the function $`H`$ in the small $`k_c/E`$ region gives $`\int_0^{\frac{k_c}{E}}dy\,H(y,Q^2)=1+\frac{\alpha}{\pi}\left[-2l(L-1)+\frac{3}{2}(L-1)\right]+O(\alpha^2),`$ (14) where $`L=\ln(Q^2/m_e^2)`$ and $`l=\ln(E/k_c)`$. Then the cross section in the soft photon approximation up to $`O(\alpha)`$ is obtained, $`\sigma_{soft}=\sigma_0(s)\left\{1+\frac{\alpha}{\pi}\left[-2l(L-1)+\frac{3}{2}(L-1)\right]\right\}.`$ (15) This expression is compared with the perturbative calculation given by Berends, Daverveldt and Kleiss (BDK hereafter). In the BDK program the multi-peripheral diagrams and their $`O(\alpha)`$ corrections, namely the self-energy corrections to $`e^\pm`$, the vertex corrections for the $`e^\pm e^\pm\gamma`$ vertices, the soft- and hard-photon emission and the vacuum polarization of the virtual photons, are calculated. The corrections from a photon bridging two different charged lines are not given, because the contributions from the box diagrams with photon exchange between $`e^+`$ and $`e^-`$ are known to be small. The LL approximation of the virtual correction factors (vertex $`+`$ soft photon) is $`2\mathrm{Re}F_1+\delta_s\simeq\frac{\alpha}{\pi}\left(-2l(L_t-1)+\frac{3}{2}L_t-2\right),`$ (16) where $`L_t=\ln(-t/m_e^2)`$ and $`t=(p_--q_-)^2`$. By comparing Eqs.(15) and (16) one concludes that the energy scale of SF should be $`Q^2=-t`$. This comparison shows that once the proper energy scale is found, the SF can reproduce the correct evolution of the soft-photon emission along the electron line up to $`O(\alpha)`$. However, there remains some mismatch in the constant term. This can be compensated by multiplying the SF by an overall factor, usually called the K-factor. When the SF is evolved with $`-t`$ this factor is found to be $`1-\alpha/2\pi`$. Then the total K-factor for both $`e^+`$ and $`e^-`$ must be $`1-\alpha/\pi`$. It should be noted that the assumption $`-t/m_e^2\gg 1`$ does not hold for the forward scattering of the two-photon process.
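The expansion (14) can be checked numerically against the resummed radiator. Below is a small sketch of ours; the singular $`x^{\beta-1}`$ piece is integrated analytically, and the values chosen for $`E`$, $`k_c`$ and $`Q^2`$ are arbitrary illustrations.

```python
import numpy as np
from scipy.integrate import quad

ALPHA = 1.0 / 137.036
ME = 0.511e-3                              # GeV

def radiator(x, Q2):
    """Radiator H(x,Q^2) of Eq.(13); x is the radiated energy fraction."""
    b = 2.0 * ALPHA / np.pi * (np.log(Q2 / ME**2) - 1.0)
    delta = 1.0 + 0.75 * b + (b**2 / 4.0) * (9.0 / 8.0 - np.pi**2 / 3.0)
    return (b * x ** (b - 1.0) * delta
            - b * (1.0 - 0.5 * x)
            + (b**2 / 8.0) * (-4.0 * (2.0 - x) * np.log(x)
                              - (1.0 + 3.0 * (1.0 - x) ** 2) / x * np.log1p(-x)
                              - 6.0 + x))

def soft_integral(eps, Q2):
    """Integral of H over [0, eps]; the singular piece is done analytically."""
    b = 2.0 * ALPHA / np.pi * (np.log(Q2 / ME**2) - 1.0)
    delta = 1.0 + 0.75 * b + (b**2 / 4.0) * (9.0 / 8.0 - np.pi**2 / 3.0)
    rest, _ = quad(lambda x: radiator(x, Q2) - b * x ** (b - 1.0) * delta,
                   0.0, eps)
    return eps**b * delta + rest

E, kc, Q2 = 100.0, 10.0, 900.0             # illustrative values, GeV units
L, l = np.log(Q2 / ME**2), np.log(E / kc)
print(soft_integral(kc / E, Q2))                           # resummed radiator
print(1.0 + ALPHA / np.pi * (-2.0 * l + 1.5) * (L - 1.0))  # expansion, Eq.(14)
# the two numbers agree up to O(alpha**2) terms
```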
When this assumption fails, the LL terms in the SF are no longer the dominant ones. To find the region where the approximation is valid, the LL terms of the form factor $`F_1`$ are compared with the exact form given by BDK in Fig.2. While they agree well in the high-$`Q^2`$ region as expected, they show a large deviation in the region $`Q^2/m_e^2<10`$. In order to make SF well-defined, the energy evolution must be truncated at some point, say $`L=1`$. ### 2.2 Parton Shower Method Instead of the analytic formula of the structure function, a Monte Carlo method based on the parton shower algorithm in QED (QEDPS) can be used to solve the Altarelli-Parisi equation in the LL approximation. The detailed algorithm of the QEDPS is found in Ref. for the $`e^+e^-`$ annihilation processes and in Ref. for the Bhabha process. For the two-photon process the energy scale for the parton shower evolution is also $`-t`$. The truncation at $`L=1`$ is again imposed. One difference between SF and QEDPS is that the ad hoc replacement $`L\rightarrow L-1`$, which was realized by hand for SF, cannot be done for QEDPS. This causes a deviation of the K-factor from the SF method. The K-factor for QEDPS is given by $`1-4\alpha/\pi`$. Another significant difference between these two is that QEDPS can treat the transverse momentum of emitted photons correctly by imposing the exact kinematics at the $`e\rightarrow e\gamma`$ splitting. This does not affect the total cross sections very much when the final $`e^\pm`$ are not subject to any cut. However, the finite recoil of the final $`e^\pm`$ may have a large effect on the tagged cross sections, as shown below. In return for the exact kinematics at the $`e\rightarrow e\gamma`$ splitting, the $`e^\pm`$ are no longer on-shell after photon emission. On the other hand, the matrix element of the hard scattering process must be calculated with on-shell external particles. A trick to map the off-shell four-momenta of the initial $`e^\pm`$ to on-shell ones is therefore needed. The following method is used in the calculations. 1. $`\widehat{s}=(\widehat{p}_-+\widehat{p}_+)^2`$ is calculated, where $`\widehat{p}_\pm`$ are the four-momenta of the initial $`e^\pm`$ after the photon emission by QEDPS. $`\widehat{s}`$ can be positive even for the off-shell $`e^\pm`$. (When $`\widehat{s}`$ is negative, that event is discarded.) 2. All four-momenta are generated in the rest frame of the initial $`e^\pm`$ after the photon emission. The four-momenta of the initial $`e^\pm`$ in this frame are $`\stackrel{~}{p}_\pm`$, where $`\stackrel{~}{p}_\pm^2=m_e^2`$ (on-shell) and $`\widehat{s}=(\stackrel{~}{p}_-+\stackrel{~}{p}_+)^2`$. 3. All four-momenta are rotated and boosted to match the three-momenta of $`\stackrel{~}{p}_\pm`$ with those of $`\widehat{p}_\pm`$. This method respects the direction of the final $`e^\pm`$ rather than the CM energy of the collision. The total energy is not conserved because of the virtuality of the initial $`e^\pm`$. ## 3 Numerical Calculations ### 3.1 No-cut Case First the total cross section of $`e^+e^-\rightarrow e^+e^-\mu^+\mu^-`$ without any experimental cut is considered. The exact matrix element is generated by the GRACE system. Only the multi-peripheral diagrams are generated for the test of the approximation methods. The phase-space integration of the matrix element squared is carried out numerically by BASES using an adaptive Monte Carlo method. It is trivially confirmed that the Born cross sections agree with the BDK results within the statistical error of the numerical integration.
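The on-shell mapping of steps 1-3 can be sketched as follows. This is our own schematic reconstruction, not the actual QEDPS code; in particular, aligning the on-shell pair along the rest-frame direction of the off-shell electron is our assumption about how the rotation of step 3 is realized.

```python
import numpy as np

ME = 0.511e-3                                 # GeV

def boost(p, b):
    """Boost four-vector p = (E, px, py, pz) by velocity 3-vector b."""
    b2 = b @ b
    if b2 == 0.0:
        return p.copy()
    g = 1.0 / np.sqrt(1.0 - b2)
    bp = b @ p[1:]
    return np.array([g * (p[0] + bp),
                     *(p[1:] + ((g - 1.0) * bp / b2 + g * p[0]) * b)])

def map_to_onshell(p_minus, p_plus):
    """Replace off-shell post-radiation e-/e+ momenta by on-shell ones
    with the same s-hat, following steps 1-3 of the text."""
    P = p_minus + p_plus
    shat = P[0] ** 2 - P[1:] @ P[1:]
    if shat <= 4.0 * ME**2:                   # step 1: discard the event
        return None
    b = P[1:] / P[0]                          # velocity of the pair system
    pm_rest = boost(p_minus, -b)              # step 2: go to the rest frame
    u = pm_rest[1:] / np.sqrt(pm_rest[1:] @ pm_rest[1:])
    E = 0.5 * np.sqrt(shat)
    q = np.sqrt(E**2 - ME**2) * u             # on-shell back-to-back pair
    # step 3: boost back along the total momentum of the off-shell pair
    return boost(np.array([E, *q]), b), boost(np.array([E, *(-q)]), b)
```

As in the text, this construction preserves the directions at the price of exact energy conservation, since the incoming momenta are off-shell.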
The convolution of SF or QEDPS with the Born cross section is not a straightforward task because of the complicated four-body kinematics. For a simple two-body process such as Bhabha scattering, the momentum transfer ($`t`$) can be taken as one of the integration variables, and the integration in Eq.(5) is then easily performed. It is, however, practically impossible to use this technique for the case of the four-body kinematics. Hence we have to admit the following approximation: 1. Eight random numbers obtained from BASES correspond to the eight independent variables of the phase-space integration. 2. All the kinematical variables are determined with no radiative effect. The $`t_\pm`$'s are also fixed at this stage. 3. The radiative correction factors of Eq.(5) are determined from the $`t_\pm`$'s, $`x_{I-}`$ and $`x_{I+}`$. 4. A new CM energy is obtained from the fixed $`x_{I-}`$ and $`x_{I+}`$. 5. All the kinematical variables are re-calculated from the same set of random numbers and the new CM energy. In this method the evolution energy scale of the radiator is not exactly the same after the re-calculation with the new CM energy. However, this difference must be beyond the LL order. The total cross sections without any experimental cuts with SF and QEDPS are summarized in Table 1. The BDK program also includes the correction from the vacuum polarization of the photon propagator. In order to compare the SF and QEDPS results with BDK, this correction is removed. After including the K-factor, the results of our two methods are in good agreement with the $`O(\alpha)`$ calculation (without the vacuum polarization). The effect of the vacuum polarization can be included if one uses the running QED coupling in the SF method. The $`ff\gamma`$ coupling evolved by the renormalization group equation is given by $`g_{ff\gamma}(t_\pm)=g_{ff\gamma}(0)\left(1-\frac{\alpha}{3\pi}\sum_i C_ie_i^2\log\frac{t_\pm}{m_i^2}\mathrm{\Theta}(t_\pm-m_i^2)\right)^{-1},`$ (17) where $`\alpha=1/137.036`$ is the QED coupling at zero momentum transfer, $`C_i`$ the color factor, $`e_i`$ the electric charge in units of the $`e^+`$ charge, and $`m_i`$ the mass of the $`i`$-th fermion. The index $`i`$ runs over all massive fermions. Only those fermions whose mass squared is smaller than $`t_\pm`$ are taken into account, through the step function. The quark masses are chosen so as to match the vacuum polarization in the BDK program. The results are shown in Table 2. The deviation from the BDK program is typically around 0.5%. The differential cross sections with respect to the $`e^-`$ energy and angle, the CM energy of the final four fermions and the invariant mass of the $`\mu^\pm`$-pair at the CM energy of 200 GeV are shown in Fig.3. The two programs give consistent distributions. In order to check the recoil effect of the final $`e^-`$ due to the photon emission by QEDPS, the $`e^-`$ polar angle is compared between SF and QEDPS in Fig.4. The cross sections with the $`e^-`$ angle between $`10^{\circ}`$ and $`20^{\circ}`$ are found to be 6.00 (5.80) pb for QEDPS (SF) at the CM energy of 200 GeV. It is not surprising that the agreement is worse than for the total cross sections, because SF includes no recoil at all. If a wrong energy scale, $`s=(p_-+p_+)^2`$, is used instead of $`t_\pm`$ as the energy evolution scale in the ISR tool, one may get an over-estimation of the ISR effect, as quantified below.
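Before turning to that comparison, note that the running coupling of Eq.(17) is simple to tabulate. In the sketch below the fermion masses are illustrative placeholders, not the BDK-matched values mentioned above.

```python
import numpy as np

ALPHA0 = 1.0 / 137.036

# (color factor C_i, charge e_i in units of the e+ charge, mass in GeV);
# the quark masses here are illustrative only
FERMIONS = [(1, -1.0, 0.511e-3), (1, -1.0, 0.10566), (1, -1.0, 1.77686),
            (3, 2 / 3, 0.05), (3, -1 / 3, 0.07), (3, -1 / 3, 0.15),
            (3, 2 / 3, 1.5), (3, -1 / 3, 4.5), (3, 2 / 3, 174.0)]

def g_factor(t):
    """g(t)/g(0) of Eq.(17); t is the momentum-transfer scale in GeV^2."""
    s = sum(C * e * e * np.log(t / m**2) for C, e, m in FERMIONS if t > m**2)
    return 1.0 / (1.0 - ALPHA0 / (3.0 * np.pi) * s)

print(g_factor(100.0))   # ~1.05 at a (10 GeV)^2 scale
```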
At the CM energy of 200 GeV, SF with the energy scale $`s`$ gives a total cross section of $`257.8`$ nb instead of $`262.6`$ nb with the correct energy scale. ### 3.2 Single-tagging Case The same comparison is done for the $`e^-`$-tagging case. The experimental cuts applied are: For the $`e^-`$, * $`10^{\circ}<\theta_{e^-}<170^{\circ}`$, * $`E_{e^-}>1`$ GeV. For the $`\mu^\pm`$, * $`10^{\circ}<\theta_{\mu^\pm}<170^{\circ}`$, * $`E_{\mu^\pm}>1`$ GeV, * $`M_{\mu\mu}>1`$ GeV. The total cross sections with the above cuts at the CM energy of 200 GeV are calculated to be $`1.169\pm 0.004`$ pb ($`1.13\pm 0.01`$ pb) by GRACE with QEDPS (by BDK). The vacuum polarization, i.e. the running $`\alpha`$, is included. This small discrepancy may come from the finite recoil of the final $`e^-`$ caused by the soft photon emissions in QEDPS. The differential cross sections are also compared in Fig.4. The results of GRACE with QEDPS are in good agreement with BDK. ## 4 Energy Scale Determination The factorization theorem for the QED radiative corrections in the LL approximation is valid independently of the structure of the matrix element of the kernel process. Hence SF and QEDPS must be applicable to any $`e^+e^-`$ scattering process. However, the choice of the energy scale in SF and QEDPS is not a trivial issue. For a simple process like the two-photon process with only the multi-peripheral diagrams, considered so far, the evolution energy scale could be determined by making use of the exact perturbative calculations. However, this is not always possible for more complicated processes. Hence a way to find a suitable energy scale without knowing the exact loop calculations should be established. First let us look at the general consequence of the soft photon approximation. The soft photon cross section (including both the real and the virtual photon effects) is given by the Born cross section multiplied by a correction factor which, in the LL order, reads $`\frac{d\sigma_{soft}(s)}{d\mathrm{\Omega}}=\frac{d\sigma_0(s)}{d\mathrm{\Omega}}\times\left|\mathrm{exp}\left[\frac{\alpha}{\pi}\mathrm{ln}\left(\frac{E}{k_c}\right)\sum_{i,j}\frac{e_ie_j\eta_i\eta_j}{\beta_{ij}}\mathrm{ln}\left(\frac{1+\beta_{ij}}{1-\beta_{ij}}\right)\right]\right|^2,`$ (18) $`\beta_{ij}=\left(1-\frac{m_i^2m_j^2}{(p_ip_j)^2}\right)^{\frac{1}{2}},`$ (19) where the $`m_j`$'s ($`p_j`$'s) are the masses (momenta) of the charged particles, $`k_c`$ the maximum energy of the soft photon (the boundary between soft and hard photons), $`E`$ the beam energy, and $`e_j`$ the electric charge in units of the $`e^+`$ charge. The factor $`\eta_j`$ is $`-1`$ for the initial particles and $`+1`$ for the final particles. The indices ($`i,j`$) run over all the charged particles in the initial and final states. For the process (4) one can see that the soft-photon factor in Eq.(18) with the ($`p_-q_-`$)-term reproduces Eq.(16) in the LL approximation. This implies that one is able to read off the possible evolution energy scale in SF from Eq.(18) without explicit loop calculations. However, one may ask why the energy scale $`s=(p_-+p_+)^2`$ does not appear in the soft-photon correction, even though such terms are included in Eq.(18). When we applied SF to the two-photon process in the previous section, we ignored the terms coming from a photon bridging two different charged lines; the justification is given below.
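The eikonal sum of Eq.(18) is easy to evaluate per phase-space point, which is what the numerical scale test described below relies on. A minimal sketch of ours follows; the overall normalization conventions for the double sum and the $`i=j`$ self-terms vary between references, so only the relative weights of the different invariant logarithms should be read off.

```python
import numpy as np

ALPHA = 1.0 / 137.036
G = np.diag([1.0, -1.0, -1.0, -1.0])          # Minkowski metric

def eikonal_sum(legs):
    """Double sum over charged legs in Eq.(18).

    legs: list of (e_j, eta_j, p_j), eta_j = -1 (initial) or +1 (final),
    p_j = (E, px, py, pz).  The i = j limit of (1/beta) ln((1+beta)/(1-beta))
    is 2, used when beta is numerically tiny.
    """
    S = 0.0
    for ei, hi, pi in legs:
        for ej, hj, pj in legs:
            dot = pi @ G @ pj
            b2 = 1.0 - (pi @ G @ pi) * (pj @ G @ pj) / dot**2
            if b2 < 1e-12:
                term = 2.0
            else:
                b = np.sqrt(b2)
                # (1+b)/(1-b) written as (1+b)**2/(1-b**2) for stability
                term = np.log((1.0 + b) ** 2 / (1.0 - b2)) / b
            S += ei * ej * hi * hj * term
    return S

def soft_factor(legs, E, kc):
    """Soft-photon correction factor of Eq.(18) in one fixed convention."""
    return np.exp(2.0 * ALPHA / np.pi * np.log(E / kc) * eikonal_sum(legs))
```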
This is because the contributions from the box diagrams with photon exchange between $`e^+`$ and $`e^-`$ are known to be small. Fortunately the infrared part of the loop correction is already included in Eq.(18), so there is no need to know the full form of the loop diagram. For the two-photon processes, if one looks at two terms, for example the ($`p_-p_+`$)- and the ($`q_-p_+`$)-terms, the momentum of the $`e^-`$ is almost the same before and after the scattering ($`p_-\simeq q_-`$). The only difference appears in $`\eta_j\eta_k=+1`$ for the ($`p_-p_+`$)-term and $`\eta_j\eta_k=-1`$ for the ($`q_-p_+`$)-term. Then these terms compensate each other after summing them up for the forward scattering, which is the dominant kinematical region of this process. This is why the energy scale $`s=(p_-+p_+)^2`$ does not appear in the soft-photon correction despite the fact that it exists in Eq.(18). When some experimental cuts are imposed, for example when the final $`e^-`$ is tagged at a large angle, this cancellation is not perfect but only partial, and the energy scale $`s`$ must appear in the soft-photon correction. In this case the annihilation-type diagrams will also contribute to the matrix elements. Then the usual SF and QEDPS for the annihilation processes are justified to be used for the ISR with the energy scale $`s`$. One can check which energy scale is dominant under the given experimental cuts by numerically integrating the soft-photon cross section given by Eq.(18) over the allowed kinematical region. Thus, in order to determine the energy scale it is sufficient to know the infrared behavior of the radiative process using the soft-photon factor. In some regions of the phase space two or more energy scales may be involved in the soft-photon cross section with comparable contributions. In such regions a simple SF and QEDPS are not applicable. ## 5 Conclusions Two practical tools to incorporate the QED radiative corrections for non-annihilation processes are developed by means of the structure function and the parton shower. These programs are applied to the two-photon process, $`e^+e^-\rightarrow e^+e^-\mu^+\mu^-`$. The results are compared with the perturbative calculation at $`O(\alpha)`$ and show a good agreement. These tools should be applicable to any non-annihilation process universally. It is demonstrated that the energy scale for the evolution, which depends on the dominant diagrams in the kinematical region of interest, can be determined with the help of the well-known formula for the soft photon factor. As an example we have also treated real $`W`$ production in $`e^+e^-`$ annihilation. The application of these tools to more complicated processes, like the four-fermion final states including single-$`W`$ production, is left to future publications.
no-problem/9912/astro-ph9912361.html
# Extracting Neutron Star Properties from X-ray Burst Oscillations ## Introduction Shortly after the launch of the Rossi X-ray Timing Explorer (RXTE) in late 1995, single kilohertz brightness oscillations were discovered in RXTE count-rate time series data from thermonuclear X-ray bursts in several neutron-star low-mass X-ray binaries. These oscillations are remarkably coherent and their frequencies are very stable from burst to burst in a given source SSZ98 . These oscillations are therefore thought to be at the stellar spin frequency or its first overtone. This suggests that the oscillations are caused by rotational modulation of a hot spot produced by non-uniform nuclear burning and propagation. Analysis of these oscillations can therefore constrain the mass and radius of the star and yield valuable information about the speed and type of thermonuclear propagation. In turn, this has implications for strong gravity and dense matter, and for astrophysical thermonuclear propagation in other contexts, such as classical novae and Type Ia supernovae. A comparison of theoretical waveforms with the observations is required to extract this fundamental information. Here we exhibit waveform calculations that we have produced using general relativistic ray-tracing codes. We survey the effects of parameters such as the spot size, the stellar compactness, and the stellar rotational velocity, and demonstrate that our computations fit the phase lag data from SAX J1808–3658 well. ## Computational Method To compute observed light curves, we do general relativistic ray tracing from points on the surface to the observer at infinity in a way similar to, but more general than, PFC83 and ML98 . For simplicity, we assume that the exterior spacetime is Schwarzschild, that the surface is dark except for the hot spot or spots, and that there is no background emission. The amplitudes would be reduced by a constant factor if there were background emission. The angular dependence of the specific intensity at the surface depends on both radiation transfer effects and Doppler boosting (see WML99 ). For each phase of rotation we compute the projected area of many small elements of a given finite-size spot. We then build up the light curve of the entire spot by superposing the light curves of all the small elements. The grid resolution of the spot is chosen so that the effect of having a finite number of small elements can alter the value of the computed oscillation amplitudes by a fraction no larger than $`10^{-4}`$. After computing the oscillation waveform using the above approach, we Fourier-analyze the resulting light curve to determine the oscillation amplitudes and phases as a function of photon energy at different harmonics. ## Results Panel (a) of Figure 1 shows the fractional rms amplitudes at the first two harmonics as a function of spot size and stellar compactness for one emitting spot centered on the rotational equator, as seen by a distant observer in the rotational plane. These curves demonstrate that a common result of the hot-spot model is large-amplitude brightness oscillations with a high contrast in strength between the dominant harmonic and weaker harmonics, as is observed in several sources. The curves for the first harmonic illustrate the general shape of most of the first-harmonic curves. Initially, the amplitude depends only weakly on spot size.
However, once the spot grows to an angular radius of $`40^{\circ}`$ there is a steep decline in the oscillation amplitude which flattens out only near the tail of the expansion. This expected behavior appears to be in conflict with the decline in amplitude observed by Strohmayer, Zhang, & Swank (1997) from 4U 1728–34, in which the initial decline is steep. The cause of this could be that the initial velocity of propagation is large, or that the observed amplitude is diminished significantly by isotropization of the beam due to scattering (Weinberg, Miller, & Lamb 1999). Panel (b) of Figure 1 shows the fractional rms amplitude at the second harmonic under the same assumptions but for two identical, antipodal emitting spots. The range in spot size here is $`0^{\circ}`$–$`90^{\circ}`$, since two antipodal spots of $`90^{\circ}`$ radii cover the entire stellar surface. Note that in this situation there is no first harmonic. These figures show that when there is only one emitting spot, the fundamental is always much stronger than higher harmonics. Thus, a source such as 4U 1636–536 with a stronger first overtone than fundamental M99 must have two nearly antipodal emitting spots. As described in detail in WML99 , we confirm the results of PFC83 and ML98 that the rms amplitude decreases with increasing compactness until $`R/M\simeq 4`$, then increases due to the formation of caustics. We also find that a finite surface rotational velocity increases the amplitude at the second harmonic substantially, while having a relatively small effect on the first harmonic (left panel of Figure 2). As an application to data, in the right panel of Figure 2 we use our models to fit phase lag data from the millisecond accreting X-ray pulsar SAX J1808–3658. The waveforms from the accreting spot are expected to be similar to the waveforms from burst brightness oscillations, and the signal to noise for this source greatly exceeds that from burst sources such as Aql X-1 F99 . As is apparent from the figure, the fit is excellent. Further data, especially from a high-area timing mission, could be used to constrain the stellar mass or radius from phase lag data.
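Although the general relativistic ray tracing itself is beyond a short sketch, the harmonic analysis used throughout can be illustrated with a flat-spacetime toy light curve; the toy also shows why two antipodal spots suppress the odd harmonics. The numbers below are purely illustrative and carry none of the relativistic effects discussed above.

```python
import numpy as np

def fractional_rms(flux, n_harm=3):
    """Fractional rms amplitudes of the first harmonics of a light curve
    sampled uniformly over exactly one rotation period."""
    c = np.fft.rfft(flux) / len(flux)        # c[k] = (A_k / 2) e^{i phi_k}
    return [np.sqrt(2.0) * abs(c[k]) / c[0].real for k in range(1, n_harm + 1)]

phase = np.linspace(0.0, 1.0, 1024, endpoint=False)
mu = np.cos(2.0 * np.pi * phase)             # cosine of the viewing angle
one_spot = np.maximum(mu, 0.0)               # point spot, equator, edge-on
two_spots = np.abs(mu)                       # identical antipodal spots

print(fractional_rms(one_spot))              # strong fundamental + overtones
print(fractional_rms(two_spots))             # odd harmonics vanish
```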
no-problem/9912/astro-ph9912444.html
# Mass loss from a magnetically driven wind emitted by a disk orbiting a stellar mass black hole ## I Introduction Most of the sources which are now discussed to explain GRBs (the coalescence of two compact objects or the collapse of a massive star to a black hole (collapsar) Narayan92 ; Woosley93 ; Paczynski98 ) lead to the same system: a stellar mass black hole surrounded by a thick debris torus. The release of energy by such a configuration can come from the accretion of disk material by the black hole or from the rotational energy of the black hole extracted by the Blandford-Znajek mechanism. The released energy is first injected into a relativistic wind and then converted into gamma–rays, via the formation of shocks probably within the wind itself Rees94 ; Daigne98 . The wind is finally decelerated by the external medium, which leads to a shock responsible for the afterglow emission observed in the X-ray, optical and radio bands Wijers97 . The production of the relativistic wind is a very complex question because of the very low baryonic load that has to be achieved in order to reach high values of the terminal Lorentz factor. Just a few ideas have been proposed and none appears to be fully conclusive. A first possibility to extract the energy from accretion is the annihilation of neutrino–antineutrino pairs emitted by the hot disk along the rotation axis of the system, which is a region strongly depleted in baryons due to centrifugal forces. The low efficiency of this process however requires high neutrino luminosities and therefore short accretion time scales Ruffert97 . Another possibility to extract the energy from accretion is to assume that the magnetic field in the disk is amplified by differential rotation to very large values ($`B\sim 10^{15}\,\mathrm{G}`$). A magnetically driven wind could then be emitted from the disk, with a fraction of the Poynting flux being eventually transferred to matter. The energy can also be extracted from the rotational energy of the black hole by the Blandford-Znajek mechanism Lee99 . We present here an exploratory study of the case where a magnetically driven wind is emitted by the disk. Matter is heated at the base of the wind (by $`\nu\overline{\nu}`$ annihilation, viscous dissipation, magnetic reconnection, etc.) and then escapes, guided along the magnetic field lines. Section II describes a "toy model" to explore the behavior of such a wind. Despite its extreme simplicity, we expect that it can help to identify the key parameters controlling the baryonic load. Our results are presented in section III and discussed in section IV in the context of different scenarios for GRBs.
## II A "Toy Model" We solve the wind equations with the following simplifications: (i) we assume a geometrically thin disk and a poloidal magnetic field with the simplest geometry (straight lines making an angle $`\theta`$ with the disk); (ii) we consider that a stationary regime has been reached by the wind; (iii) we use non-relativistic equations (to obtain the mass loss rate we just need to solve them up to the sonic point, where $`v<0.1c`$) but we adopt the Paczyński-Wiita potential for the black hole $$\mathrm{\Phi}_{\mathrm{BH}}=-\frac{GM_{\mathrm{BH}}}{r-r_\mathrm{S}}\quad\mathrm{with}\quad r_\mathrm{S}=\frac{2GM_{\mathrm{BH}}}{c^2}.$$ (1) We write the flow equations (continuity, Euler and energy equations) in a frame corotating with the foot of the field line, anchored at a radius $`r_0`$ in the disk: $`\rho vs(x)=\dot{m},`$ (2) $`v\frac{dv}{dx}=g(x)r_0-\frac{1}{\rho}\frac{dP}{dx},`$ (3) $`v\frac{d\epsilon}{dx}=\dot{q}(x)r_0+v\frac{P}{\rho^2}\frac{d\rho}{dx},`$ (4) where $`x=\mathrm{}/r_0`$, $`\mathrm{}`$ being the distance along the magnetic field line, and $`\rho`$, $`P`$, $`\epsilon`$ and $`v`$ are the density, pressure, specific internal energy and velocity in the flow. The total acceleration $`g(x)`$ includes both gravitational and centrifugal terms. In this exploratory study the power deposited per unit mass $`\dot{q}(x)`$ only takes into account the heating and cooling due to neutrinos. We assume that the inner part of the disk is optically thick (which is probably justified for compact object mergers but is more questionable for collapsars, except for a low $`\alpha`$-viscosity ($`\alpha<0.01`$) Popham99 ). We include the following processes: neutrino capture on free nucleons, neutrino scattering on relativistic electrons and positrons and neutrino–antineutrino annihilation (heating); neutrino emission by nucleons and annihilation of electron–positron pairs (cooling). The temperature distribution in the disk corresponds to a geometrically thin, optically thick disk: $$T_\nu(r)=T_{*}\left(\frac{r_{*}}{r}\right)^{3/4}\left(\frac{1-\sqrt{\frac{r_{in}}{r}}}{1-\sqrt{\frac{r_{in}}{r_{*}}}}\right)^{1/4}\quad(T_{*}\ \mathrm{is\ the\ temperature\ at}\ r_{});$$ (5) The section of the wind $`s(x)`$ is easily related to the field geometry because the field and stream lines are coincident. We adopt the equation of state computed by Bethe80 , which includes nucleons, relativistic electrons and positrons, and photons. The acceleration $`g(x)`$ along a field line is negative up to $`x=x_1`$ for angles larger than $`\theta_1\simeq 60^{\circ}`$ ($`60^{\circ}`$ is the exact value for a Newtonian instead of a Paczyński–Wiita black hole potential). For $`x>x_1`$, $`g(x)`$ is dominated by the centrifugal force. The sonic point of the flow is located at a distance $`x_s`$ just below $`x_1`$ (the relative difference never exceeds $`1\%`$). We solve the flow equations in a classical way by inward integration along the field line. We start at the sonic point by fixing trial values of the temperature $`T_s`$ and the density $`\rho_s`$, from which we get the velocity $`v_s`$ and the position $`x_s`$ (from the condition of regularity at $`x=x_s`$) and then the value of the mass loss rate $`\dot{m}`$. We observe that at some position $`x_{cr}`$, the velocity $`v`$ begins to fall off rapidly while $`T`$ reaches a maximum $`T_{max}\simeq T_\nu(r_0)`$.
We adjust $`T_s`$ and $`\rho_s`$ so that $`x_{cr}`$ is as close as possible to $`0`$ and $`T_{max}`$ to $`T_\nu(r_0)`$. ## III Results We have studied the dependence of the mass loss rate $`\dot{m}`$ on the different model parameters and found the following expression: $$\dot{m}(r)\simeq 3.8\times 10^{13}\left(\frac{M_{\mathrm{BH}}}{2.5\,\mathrm{M}_{\odot}}\right)\left(\frac{T_\nu(r)}{2\,\mathrm{MeV}}\right)^{10}f\left[\frac{r}{r_\mathrm{g}};\theta(r)\right]\ \mathrm{g}/\mathrm{cm}^2/\mathrm{s}.$$ (6) The geometrical function $`f`$ is normalized in such a way that it is equal to unity for $`r=4r_\mathrm{g}`$ and $`\theta(r)=85^{\circ}`$. The very strong dependence of $`\dot{m}`$ on $`T_\nu(r)`$ (tenth power) is in agreement with what is found for neutrino-driven winds in spherical geometry Duncan86 . Figure 1 shows that $`\dot{m}`$ also strongly depends on the inclination angle. The other important parameters are the position in the disk and the mass of the black hole, while $`\dot{m}`$ depends only weakly on all other parameters like the size of the optically thick region (here $`r_{\mathrm{in}}=3r_\mathrm{g}`$ and $`r_{\mathrm{out}}=10r_\mathrm{g}`$). In the more general case where the source of heating is not restricted to neutrino processes but can also include viscous dissipation, magnetic reconnection, etc., we have obtained a very simple and general analytical approximation for $`\dot{m}`$ Daigne2000 : $$\dot{m}\simeq\frac{\dot{e}}{\mathrm{\Delta}\mathrm{\Phi}}\delta,$$ (7) where $`\dot{e}`$ is the rate of energy deposition (in $`\mathrm{erg}/\mathrm{cm}^2/\mathrm{s}`$) between the plane of the disk ($`x=0`$) and the sonic point ($`x=x_s\simeq x_1`$), $`\mathrm{\Delta}\mathrm{\Phi}`$ is the difference of potential (gravitational + centrifugal) between $`x=0`$ and $`x=x_1`$, and $`\delta`$ is a factor close to unity depending on the distribution of energy injection between $`x=0`$ and $`x=x_s`$. We can now estimate the average Lorentz factor $`\overline{\mathrm{\Gamma}}=\dot{E}/\dot{M}c^2`$ at infinity. The total mass loss rate $`\dot{M}`$ and the power injected into the wind $`\dot{E}`$ are given by $$\dot{M}=2\int_{r_{in}}^{r_{out}}\dot{m}\,2\pi r\,dr=2.6\times 10^{26}\left(\frac{M_{\mathrm{BH}}}{2.5\,\mathrm{M}_{\odot}}\right)^3\left(\frac{T_{*}}{2\,\mathrm{MeV}}\right)^{10}F_{\mathrm{geo}}\ \mathrm{g}/\mathrm{s}$$ (8) $$\mathrm{and}\quad\dot{E}=2\times 10^{51}\left(\frac{\mathrm{\Omega}/4\pi}{0.1}\right)\left(\frac{f_\gamma}{0.05}\right)^{-1}\left(\frac{\dot{E}_\gamma}{10^{51}/4\pi\ \mathrm{erg}/\mathrm{s}/\mathrm{sr}}\right)\ \mathrm{erg}/\mathrm{s},$$ (9) where $`F_{\mathrm{geo}}=\int_{r_{in}/r_\mathrm{g}}^{r_{out}/r_\mathrm{g}}f[x;\theta(x)]\,x\,dx`$ is a function of the field geometry only; $`\dot{E}_\gamma`$ is the burst power in gamma–rays, $`\mathrm{\Omega}/4\pi`$ is the beaming factor and $`f_\gamma`$ is the efficiency for the conversion of kinetic energy into gamma–rays. The wind is powered by accretion, but at the same time the disk is heated by viscous dissipation and cools by emitting neutrinos.
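For orientation, the scaling (6) can be integrated directly over the disk; the sketch below sets the geometrical factor $`f`$ to unity, so only the order of magnitude of Eq.(8) should be expected to come out (the published normalization folds the geometry into $`F_{\mathrm{geo}}`$).

```python
import numpy as np
from scipy.integrate import quad

MSUN, G_N, C = 1.989e33, 6.674e-8, 2.998e10           # cgs units

def t_nu(r, T_star, r_star=4.0, r_in=3.0):
    """Disk temperature profile of Eq.(5); r in units of r_g, T in MeV."""
    return (T_star * (r_star / r) ** 0.75
            * ((1.0 - np.sqrt(r_in / r))
               / (1.0 - np.sqrt(r_in / r_star))) ** 0.25)

def mdot_local(r, T_star, M=2.5):
    """Surface mass-loss rate of Eq.(6) in g/cm^2/s, with f set to 1."""
    return 3.8e13 * (M / 2.5) * (t_nu(r, T_star) / 2.0) ** 10

def mdot_total(T_star, M=2.5, r_in=3.0, r_out=10.0):
    """Direct integration of Mdot = 2 int mdot 2 pi r dr, in g/s."""
    rg = G_N * M * MSUN / C**2                        # gravitational radius, cm
    val, _ = quad(lambda r: mdot_local(r, T_star, M) * 2.0 * np.pi * r,
                  r_in, r_out)
    return 2.0 * val * rg**2

print(mdot_total(2.0))   # a few 1e26 g/s: the order of magnitude of Eq.(8)
```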
We assume that these neutrino losses represent a fraction $`\alpha`$ of the power $`\dot{E}`$ injected into the wind, so that we can estimate $`T_{*}`$ at $`r_{*}=4r_\mathrm{g}`$: $$\dot{E}_\nu=\alpha\dot{E}=2\int_{r_{in}}^{r_{out}}\frac{7}{8}\sigma T_\nu^4(r)\,2\pi r\,dr$$ (10) $$\mathrm{and}\quad T_{*}=1.72\,\alpha^{\frac{1}{4}}\left(\frac{M_{\mathrm{BH}}}{2.5\,\mathrm{M}_{\odot}}\right)^{-\frac{1}{2}}\left(\frac{\mathrm{\Omega}/4\pi}{0.1}\right)^{\frac{1}{4}}\left(\frac{f_\gamma}{0.05}\right)^{-\frac{1}{4}}\left(\frac{\dot{E}_\gamma}{10^{51}/4\pi\ \mathrm{erg}/\mathrm{s}/\mathrm{sr}}\right)^{\frac{1}{4}}\,\mathrm{MeV}.$$ (11) From equations (8), (9) and (11), we can calculate the average Lorentz factor $$\overline{\mathrm{\Gamma}}=\frac{8500}{F_{\mathrm{geo}}}\,\alpha^{-\frac{5}{2}}\left(\frac{M_{\mathrm{BH}}}{2.5\,\mathrm{M}_{\odot}}\right)^2\left(\frac{\dot{E}_\gamma}{10^{51}/4\pi\ \mathrm{erg}/\mathrm{s}/\mathrm{sr}}\right)^{-\frac{3}{2}}\left(\frac{\mathrm{\Omega}/4\pi}{0.1}\right)^{-\frac{3}{2}}\left(\frac{f_\gamma}{0.05}\right)^{\frac{3}{2}}.$$ (12) The value of $`F_{\mathrm{geo}}`$ is $`\simeq 56`$ for a constant inclination $`\theta=85^{\circ}`$ and $`\simeq 250`$ if $`\theta`$ decreases from $`90^{\circ}`$ to $`80^{\circ}`$ between $`r=3`$ and $`10r_\mathrm{g}`$. We therefore conclude that large terminal Lorentz factors can be reached only if several severe constraints are satisfied: (i) low $`F_{\mathrm{geo}}`$ values, i.e. quasi-vertical field lines; (ii) low $`\alpha`$ values, i.e. a good efficiency for energy injection into the wind with little dissipation; (iii) a low value of $`\mathrm{\Omega}/4\pi`$, i.e. the necessity of beaming. With the more general equation (7) we can obtain another simple and useful constraint: if the power $`\dot{e}`$ deposited below the sonic point represents a fraction $`\chi`$ of the total power $`\dot{e}_{\mathrm{tot}}`$ injected into the wind, we have $$\mathrm{\Gamma}\simeq\frac{\dot{e}_{\mathrm{tot}}}{\dot{m}c^2}\simeq\frac{\mathrm{\Delta}\mathrm{\Phi}/c^2}{\delta\chi}.$$ (13) For $`r=4r_\mathrm{g}`$ and $`\theta=85^{\circ}`$, we obtain $`x_1=2.182`$ and $`\mathrm{\Delta}\mathrm{\Phi}/c^2=0.18`$, which implies that $`\chi`$ should not exceed $`\sim 10^{-3}`$ to have $`\mathrm{\Gamma}>100`$! ## IV Discussion This study is clearly limited by its crude assumptions. However, the severe constraints we get show how difficult it may be to produce a relativistic MHD wind from the disk. An optimistic view of our results would be to consider that this difficulty could just be a way to explain the apparent discrepancy between the observed rate of GRBs and the birthrate of sources in the collapsar scenario, most collapsars failing to give a GRB. A more pessimistic point of view would be to conclude that the baryonic load of such winds is never sufficiently low, so that the winds remain non-relativistic. If one chooses to rely on the Blandford-Znajek mechanism to power the wind Lee99 , it should however be checked that this process is not "contaminated" by frozen material carried along magnetic field lines coming from the disk and trapped by the black hole.
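To make the steepness of the constraints (i)-(iii) concrete, Eq.(12) can be evaluated directly; a small sketch, using the fiducial parameter values of the text:

```python
def gamma_bar(alpha, F_geo, M=2.5, E_gamma=1.0, omega=0.1, f_gamma=0.05):
    """Average Lorentz factor of Eq.(12); E_gamma in units of
    10^51/4pi erg/s/sr, omega = Omega/4pi."""
    return (8500.0 / F_geo * alpha ** -2.5 * (M / 2.5) ** 2
            * E_gamma ** -1.5 * (omega / 0.1) ** -1.5
            * (f_gamma / 0.05) ** 1.5)

for F_geo in (56.0, 250.0):                  # the two geometries quoted above
    for alpha in (0.5, 0.1):
        print(F_geo, alpha, round(gamma_bar(alpha, F_geo)))
```

Lowering $`\alpha`$ from 0.5 to 0.1 raises $`\overline{\mathrm{\Gamma}}`$ by a factor $`5^{5/2}\simeq 56`$, which illustrates why a good injection efficiency is so critical.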
no-problem/9912/astro-ph9912318.html
## 1 Why Galaxies in X-rays? Galaxies are key objects for the study of cosmology, the life cycle of matter, and stellar evolution. Insights into the nature and evolution of the Universe have been gained by using galaxies to trace the distribution of matter in large scale structures; mass measurements, whenever feasible, have revealed the presence of Dark Matter; galaxy evolution and the interaction of galaxies with their environment are responsible for the chemistry of the Universe and ultimately for life. X-ray observations have given us a new key band for understanding these building blocks of the Universe, with implications ranging from the study of extreme physical situations, such as can be found in the proximity of Black Holes, or near the surface of neutron stars; to the interaction of galaxies and their environment; to the measure of parameters of fundamental cosmological importance. The discrete X-ray source population of galaxies gives us a direct view of the end-stages of stellar evolution. Hot gaseous halos are uniquely visible in X-rays. Their discovery in E and S0 galaxies has given us a new, potentially very powerful, tool for the measurement of Dark Matter in galaxies, as well as for local estimates of $`\mathrm{\Omega}`$. Galaxy ecology - the study of the cycling of enriched materials from galaxies into their environment - is inherently an X-ray subject. Escape velocities from galaxies, when thermalized, correspond to kilovolt X-ray temperatures. The X-ray band is where we can directly witness this phenomenon (e.g. in M82 and NGC 253; see , and refs. therein) (fig 1). ## 2 Requirements for an X-ray Telescope Fig. 2 summarizes the requirements for an X-ray telescope that will significantly advance our knowledge of galaxies. The purpose of this telescope is two-fold: 1) Very detailed studies of the X-ray components of nearby galaxies, to gain the needed deeper astrophysical understanding of their properties. Nearby galaxies offer a unique opportunity for studying complete uniform samples of galaxian X-ray sources (e.g. binaries, SNRs, black hole candidates), all at the same distance, and in a variety of environments. This type of information cannot be obtained for Galactic X-ray sources, given our position in the Galaxy. These population studies will be invaluable for constraining the X-ray properties and evolution of different types of sources. 2) Study of deep X-ray fields, where galaxies are likely to be a very large component of the source population. Looking back in time, and comparing these results with the detailed knowledge of the X-ray properties of more nearby objects, we will be able to study galaxy evolution in the X-ray band. We will be able to look at galaxies when substantial outflows were likely to occur and therefore witness the chemical enrichment of the Universe at its most critical time. I discuss below in more detail three key elements of fig. 2: collecting area, angular resolution, and spectral capabilities. Collecting Area - A collecting area in the 10-30 square meter range is needed for both in-depth studies of individual galaxies in the nearby Universe (fig. 3) and for looking back in time (fig. 4).
Angular Resolution - Arcsecond or better angular resolution is a must, to avoid confusion both in the study of nearby galaxies and in the study of deep fields. Chandra images demonstrate the richness of detail one obtains with subarcsecond resolution. With Chandra-like angular resolution galaxies can be picked out easily from unresolved stellar-like objects in deep exposures. Fig. 5 shows the deep X-ray counts that can be reached with a 25 square meter telescope in 100 ks. In the deepest decade galaxies will be a major contributor and may even dominate the counts, if there is luminosity evolution in the X-rays comparable to that observed in the FIR. Based on the HDF results () high-z galaxies may be visible. However, arcsecond resolution is needed to avoid confusion at these faint fluxes (; as demonstrated by recent simulations by G. Hasinger). Such deep exposures would allow the study of the evolution of galaxies in X-rays, and of the evolution of their stellar binary population as well as of their hot gaseous component. Based on the Madau cosmic SFR, White & Ghosh () show that a comparison of the z-dependence of the X-ray and optical luminosity functions is related to the evolution of the X-ray Binary population in galaxies. Moreover, if hot outflows are prominent at early epochs we will have a first-hand account of the metal enrichment of the Universe. Spectral Capabilities - Fig. 6 illustrates the scope of the spectral work one would like to perform. With X-ray spectroscopy we can determine the physical state of hot plasmas as well as their chemical composition. We can also measure the cooler ISM by studying the absorption spectra of background quasars. Spectroscopy goals related to galaxy studies are described in figs. 7 and 8. How these requirements compare to the characteristics of planned future X-ray observatories (Constellation X, under study by the NASA community, and XEUS, under study in Europe) is shown in Figs. 9 and 10. While the spectral and bandwidth characteristics of the missions under study accomplish our goals in both cases, the other requirements fall below those we need. XEUS has the required large collecting area, while the Con-X area is significantly smaller. In both cases the angular resolution is significantly sub-Chandra. Based on what Chandra has shown and on the characteristics of the objects we want to study - galaxies are complex objects - a Chandra-like resolution is a must. The field of view is also small in both cases, especially in the case of Con-X. As we have done in the past, we advocate that the X-ray community consider a large-area, Chandra-like resolution mission to push X-ray astronomy from an exploratory discipline to a discipline on a par with the other wavelength astronomies. Both the scientific potential of the studies that can be performed with such a telescope, and more directly the exciting discoveries resulting from Chandra's high resolution images, support this project. Acknowledgements. I acknowledge partial travel support from the Chandra Science Center. Parts of this talk were also given at the Cosmic Genesis Workshop held at Sonoma State University, Oct 27-30, 1999, and at the meeting Astrophysical Plasmas: Codes, Models & Observations, Oct 25-29, Mexico City, and will be included in the proceedings of these meetings. Part of the material presented here has also been presented at the XEUS Symposium 1999. I thank my colleagues who have contributed thoughts and material included in this paper.
In particular, I acknowledge fruitful discussions with Martin Elvis, Pat Slane, Josh Grindlay and Paul Gorenstein.
no-problem/9912/chao-dyn9912012.html
## 1 Introduction Dynamical systems come with a variety of behaviours, which have been the object of many studies. In particular, long-term predictability has been a subject of interest since the first considerations on the solar system, and received its modern formulation from Poincaré a century ago . It has been shown that birational maps have a great variety of dynamical behaviours, in spite of their apparent simplicity: we need only the four basic operations to compute the successive points, and we can even dispense with the division if we work with homogeneous coordinates. The purpose of this note is to relate different characterizations of dynamical systems for birational maps. The first one is the existence of invariants, which allow for a Liouville-like form of integrability. Another one is the sensitivity to initial conditions, which is related to the derivative. The last one is the "algebraic entropy" . Common factors in the homogeneous coordinates of the image of a point result in simplifications of its expression and decrease the degree of the $`n`$-th iterate of the transformation from the $`d^n`$ value obtained without simplifications. This notion is directly linked with Arnold's complexity , since the degree of a map gives the number of intersections of the image of a line and a hyperplane. A zero value of the algebraic entropy corresponds to a sub-exponential growth of the successive degrees and, in all known cases, to a degree depending polynomially on $`n`$. A first proof of such a polynomial bound on the degrees was given in . It has been observed on numerous examples through the determination of the images of a generic line. This would result from the conjectured existence of recurrence relations on the degrees. This work is motivated by the observation that whenever the iterates are confined to low-dimensional subvarieties by conserved quantities, it has been possible to find that the algebraic entropy is zero. Moreover, in many cases the growth is simply quadratic. Here I show how this polynomial behaviour of the degrees can be derived from the existence of invariant subvarieties which are abelian varieties. In the simplest case of rational or elliptic curves, the exact exponent will be proven. The first section will make precise what is called integrability in this context of discrete time dynamics. Next will come an analytic derivation of a quite general bound on the growth of the degrees of the iterates, based on the expression of the degree of a variety in terms of its volume. The last section makes use of the notion of addition on an elliptic curve to build iterates whose degrees grow as a quadratic polynomial. The practical use of such constructions and the foreseeable extensions are the subject of the conclusion. ## 2 Action-angle variables Even if it is intuitively clear that integrability has to do with the existence of a sufficient number of conserved quantities, the question remains of what is sufficient. A discrete time analogue of Hamiltonian mechanics has been proposed , allowing for a direct generalization of the answer provided by Liouville . However, the resulting class of systems is too restrictive for our purpose and we cannot rely on a definitive answer. Since even two-dimensional algebraic varieties can have birational maps with complex dynamical behaviour, the only safe case is the one of one-dimensional invariant varieties.
In this case, the induced birational maps on the invariant varieties are in fact holomorphic: singular varieties of birational maps are of codimension at least two, and there are no such subvarieties in curves. It remains to classify automorphisms of infinite order of the curves, which exist only for curves of genus 0 or 1. Higher genus curves have but a finite number of automorphisms. The only possible automorphisms therefore have finite order and, as dynamical systems, they are periodic. In the genus 0 case, the curve is the Riemann sphere and the automorphisms are homographic transformations. By a reparameterization, we can always obtain one of the cases $`g(z)=z+\alpha`$ or $`g(z)=\lambda z`$, according to the number of fixed points of the transformation. In the multiplicative case, we can describe it as a translation if we set $`z=\mathrm{exp}\,t`$. The point depends on the parameter $`t`$ with the period $`2\pi i`$ and now the transformation is a simple addition on $`t`$. Non-singular genus 1 curves are holomorphically isomorphic to a torus $`\mathbb{C}/(\mathbb{Z}+\tau\mathbb{Z})`$. There is therefore a parameterization of these curves by doubly periodic functions of $`z`$. In terms of this parameter, the infinite order automorphisms are translations $`z\rightarrow z+\beta`$. In all these cases, the map is described by a simple translation in a parameter which can be one to one, simply periodic or doubly periodic. The $`n`$-th iterate of the map can be simply expressed as the translation by the same parameter multiplied by $`n`$. However there is a price to pay, especially in the genus 1 case, since we no longer have algebraic functions, but a parameterization by Weierstraß functions or other elliptic functions. Nevertheless, the scheme is that in adapted coordinates, the transformation takes the simple form: $$\varphi(X,\theta)=(X,\theta+\sigma(X)),$$ (1) with $`X`$ designating the invariants of the integral curve and $`\theta`$ a parameterization of this curve. In these coordinates, the $`n`$-th iterate of $`\varphi`$ is easy to write: $$\varphi^n(X,\theta)=(X,\theta+n\sigma(X)).$$ (2) From the explicit form (2) of $`\varphi^n`$, its differential is deduced to be: $$d(\varphi^n)=\left(\begin{array}{cc}1&0\\ n\sigma^{\prime}&1\end{array}\right).$$ (3) The important property is that the matrix elements grow at most linearly in $`n`$, giving a computable behaviour for long times, since numerical errors get multiplied by small numbers. In generic systems, there would be hyperbolic fixed points of $`\varphi^N`$. At these points, the differential of $`\varphi^{kN}`$ grows exponentially. Small errors in the first few iterations can have sizable effects after a moderate number of iterations. There are natural generalizations of this scheme for higher dimensional varieties, but in the absence of an equivalent of the symplectic structure which is essential in the theorem of Liouville, they will not be necessary outcomes of the existence of invariant varieties.
An example is the following bijective transformation of $`\mathbb{C}^2/(\mathbb{Z}^2+\tau\mathbb{Z}^2)`$ (the product of two elliptic curves with the same modulus): $$(\theta_1,\theta_2)\rightarrow(2\theta_1+\theta_2,\theta_1+\theta_2).$$ (4) The following section is equally valid for the case of abelian invariant varieties if the restriction of the map to these varieties is a simple translation. The formulas (1,2,3) apply with a multidimensional $`\theta`$. Examples of such systems were shown to exist in , with a map which reduces to an addition on the Jacobian of a curve. It is however not clear if these systems can be expressed as maps on projective spaces, since they are a priori defined on the quotient of a linear space of matrices by gauge transformations. In the case which gives one-dimensional Jacobians, it is possible to fix the gauge, and the system reduces to the one associated to the eight-vertex model of Baxter . In the other cases, it is not clear whether such a choice is possible in general, without restricting to a lower dimensional part of the Jacobian. Part of the symmetry group of a three dimensional model of statistical mechanics gives a dynamical system in $`𝐏^9`$ with two-dimensional invariant varieties . From the images of the orbits, it seems that the restriction of the map to an invariant variety is a shift on a torus of complex dimension two, but this has yet to be proved. ## 3 Analytic method. ### 3.1 Fixed points An immediate consequence of the form (2) of the compositions of $`\varphi`$ is that the fixed points of the iterates are not isolated, but form whole varieties. If $`p`$ is a fixed point of $`\varphi^n`$, it means that $`n\sigma(X)`$ is one of the periods of the parameterization of the invariant curve it belongs to, and all points of this curve are fixed points of $`\varphi^n`$. Moreover, the equation determining the fixed point variety reduces to $`\sigma_n(X)=0`$. This is as many equations as there are components in $`\theta`$. The codimension of the fixed point varieties is therefore the dimension of the invariant varieties. In the bidimensional case, the fixed point variety is of the same dimension as the invariant varieties and reduces to a product of invariant varieties. This factorization is however not straightforward, since the corresponding values of the invariants are generically irrational algebraic numbers. For the determination of the individual invariant varieties, the factorization of the equation of the fixed point variety must be done in an algebraic extension of $`\mathbb{Q}`$ which has to be determined. In higher dimensions, the fixed point variety would correspond to a variety in the space of invariants, so that it would be still more difficult to deduce the equations of individual invariant varieties. However, the main problem in determining invariant varieties is to find their covariance factor, that is the factor they get multiplied by when applying the transformation. The fixed point varieties are invariant and their covariance factors are powers of the covariance factors of the invariant varieties. Moreover, the study of the differential of the map at fixed points should be a good starting point for a proof of the existence of invariant varieties. ### 3.2 Analytic proof of a polynomial growth of the degrees The degree $`d`$ of an algebraic subvariety is proportional to its volume in the projectively invariant Kähler metric of $`𝐏^N`$ .
The basic idea is that the volume forms induced on complex subvarieties by a Kähler metric are simply exterior powers of the associated Kähler form. The Kähler form being closed, this volume is invariant under deformations and, in particular, can be computed from the retraction of the variety to a linear space $`L`$ of the same dimension. In a generic situation, points outside a lower dimensional subspace of $`L`$ have $`d`$ preimages by the retraction, so the volume of a variety is $`d`$ times the volume of a linear space. Since all linear spaces are related by $`U(N+1)`$ transformations which leave the metric invariant, the volume of a linear space is a universal constant. This gives an analytic way of computing the degrees. The degree of the map $`\varphi`$ is the degree of the image of a generic complex line $`L`$, and this degree can be computed by an integration, since it is proportional to the area $`S`$: $$S=\int_{\varphi^n(L)}\omega=\int_L(\varphi^n)^{*}\omega.$$ (5) In this formula $`\omega`$ denotes the Kähler form. The definition of its pull-back $`(\varphi^n)^{*}\omega`$ involves the differential $`d\varphi^n`$ of $`\varphi^n`$ to transform the tangent vectors of $`L`$ into tangent vectors of its image. More precisely, the form $`\omega`$ and the differential $`d\varphi`$ are sections of fiber bundles with values at a point $`x`$ denoted respectively by $`\omega_x`$ and $`d\varphi_x`$. $`\omega_x`$ is a bilinear form on the tangent space $`T_xM`$, $`d\varphi_a`$ is a linear map from $`T_aM`$ to $`T_{\varphi(a)}M`$. $`\varphi^{*}\omega`$ is given by $$(\varphi^{*}\omega)_a(X,Y)=\omega_{\varphi(a)}(d\varphi_aX,d\varphi_aY),$$ (6) with $`X`$, $`Y`$ in $`T_aM`$. From the formula (3) for $`d\varphi^n`$ it follows that the integral in (5) has a simple dependence on $`n`$. The Kähler form must be evaluated at the variable point $`\varphi^n(a)`$, with the tangent vectors expressed on a basis associated to the action-angle coordinates. $`𝐏^N`$ is compact, $`\omega`$ is bounded in any affine coordinate patch from its explicit expression, and the change from action-angle variables to projective coordinates is smooth, so that $`\omega`$ remains bounded when expressed on an adapted basis of the tangent space. The boundedness of $`\omega_x`$ is then sufficient to conclude that the degree of $`\varphi^n`$ is less than some constant times $`n^2`$. ## 4 Algebraic proof ### 4.1 Generalities In the preceding section, I showed limits on the degrees of the iterated maps. The proof however does not show how to build transformations of the given degree. This will be remedied in this section. The basic idea is that eq. (2) can be expressed algebraically, without the introduction of a specific parameterization of the invariant variety. Since every analytic operation on an algebraic variety is algebraic, $`A+B`$ is an algebraic function of $`A`$, $`B`$ and parameters describing the invariant variety. The algebraic translation of eq. (2) is made of operations independent of $`n`$ and of the computation of $`n\sigma`$. The whole $`n`$ dependence comes from this operation, and we want to find a good bound on the degree of this operation. Computing $`2kX`$ from $`kX`$ is a simple addition. $`2^pX`$ can therefore be computed by $`p`$ additions and, for a general $`n`$, the number of additions to perform to calculate $`nX`$ is of order $`\mathrm{log}_2n`$.
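The addition chain just described is the classical double-and-add scheme. A generic sketch, with the composition law passed in as a function, so that it can stand for whatever algebraic addition the curve provides:

```python
def n_times(n, X, add):
    """Compute the n-th multiple of X with O(log2 n) applications of `add`,
    the (associative, commutative) composition law of the group."""
    result = None                     # None stands for the neutral element
    while n > 0:
        if n & 1:
            result = X if result is None else add(result, X)
        n >>= 1
        if n:
            X = add(X, X)             # doubling step
    return result

# toy check with ordinary integer addition: 25*7 in 6 elementary operations
print(n_times(25, 7, lambda a, b: a + b))     # 175
```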
If the degree of the result is multiplied by a constant $`r`$ at each addition, the degree of $`n\sigma`$ will be of order $`r^{\mathrm{log}_2n}`$, which is equal to $`n^{\mathrm{log}_2r}`$. We have therefore built an expression for the image of a point with a degree satisfying the required bound. In fact, this discussion is somewhat naive. The invariant variety depends on the starting point, and care must be taken of the dependence of the addition on the parameters of the invariant variety, which will in turn be some polynomial expressions of the coordinates of the starting point. Going from the point $`pz`$ to the point $`2pz`$ is an operation of degree $`d`$ in the coordinates of the point $`pz`$ and of some degree $`l`$ in the coordinates of the initial point. Let us call $`m_k`$ the degree of the operation $`z\rightarrow 2^kz`$. $`m_k`$ satisfies the following relation: $$m_{k+1}=dm_k+l.$$ (7) It is elementary to verify that $`m_k`$ is also bounded by $`Cd^k`$, except in the trivial case $`d=1`$. The degree of the iterate of order $`n`$ of an integrable mapping is therefore bounded by a polynomial function of $`n`$. In the following, the precise form of this bound will be established in the case of curves. A generalization to higher dimensional cases is in principle straightforward, but rather involved. ### 4.2 Curves The case of rational invariant curves can be easily settled, since the parameterizations of the invariant curves are rational. A birational change of variable allows for a product structure, $`𝐏_1`$ times the space of the invariants. In the case where the $`\theta`$-variable is directly the rational parameter, the expression of $`\varphi^n`$ can be directly read from (2). In this case, the successive iterates all have the same degree. In the other case, the transformation takes the form $`t\rightarrow\lambda(X)t`$. The iterates depend on $`\lambda(X)^n`$ and the degree of the transformation is linear in $`n`$. In the case of elliptic invariant curves, the choice of a special point $`O`$ allows for the identification of the curve with its Jacobian. This determines the addition on the curve: the sum $`S`$ of the points $`A`$ and $`B`$ is the second zero of a meromorphic function with simple poles at these two points and a zero at the origin $`O`$. Such a meromorphic function can be obtained as the ratio of sections of a line bundle. In the simplest cases, these sections are simply linear functions of the homogeneous coordinates. Their zeroes are given by the intersection of the curve and some hyperplane. There are two simple algebraic descriptions of an elliptic curve. The first is a degree 3 plane curve, the second is the intersection of two quadrics in three-dimensional space. In both cases, the explicit determination of multiples of a point will allow for a quadratic bound on the degrees of the successive iterates. Otherwise, we could parameterize the curves by doubly periodic functions, the Weierstraß function for the degree 3 plane curve or $`\theta`$-functions for the biquadratic. Then addition formulas for these functions would translate into algebraic formulas for the determination of the sums, but I do not want to introduce any considerations on these special functions. Higher degree representations are singular, with multiple points. These multiple points correspond to multiple values of the parameter of the curve and are sent to a number of differing points.
They are therefore singular points of the transformation, and it is possible to make birational transformations which resolve those multiple points without introducing other multiple points.

### 4.3 Degree 3 plane curve

Let us consider a curve with the equation:
$$y^2=P_3(x).$$ (8)
$`P_3`$ is a degree three polynomial with no multiple roots. The natural choice for the origin $`O`$ in this case is the point at infinity. This point is common to all invariant curves, so that the parameter $`\sigma (X)`$ cannot be obtained directly as the image of the origin $`O`$, but must be computed as $`\varphi (P)-P`$ for a generic point $`P`$ on the curve. Affine functions of $`x`$ and $`y`$ are sections of a line bundle when restricted to the curve. The zeros of these sections are given by the intersection points of a line and the elliptic curve: there are three of them. To build a meromorphic function with poles in $`A`$ and $`B`$, we first consider the line through $`A`$ and $`B`$: it will cross the curve in a third point $`M`$. We now take a line going through $`M`$ and the origin $`O`$: I claim that the third intersection point of this line with the curve is the sum $`S`$. Indeed, taking the ratio of the two above mentioned linear functions, we get a meromorphic function on the curve with poles in $`A`$ and $`B`$ and zeros in $`O`$ and $`S`$; the zeros in $`M`$ of the two linear functions cancel each other. Lines going through $`O`$ have equation $`x=x_0`$ and intersect the curve in two points of the form $`(x,\pm y)`$. When defining the double of a point, we need to parameterize the tangent to the curve at the given point. It will be given by $`(x+2\alpha y,y+\alpha P_3^{\prime }(x))`$. Assuming a reduced form for $`P_3`$, that is $`P_3=x^3+ax+b`$, the equation for $`\alpha`$ giving a point on the curve reduces to:
$$P_3^{\prime }(x)^2\alpha ^2=8\alpha ^3y^3+12\alpha ^2xy^2.$$ (9)
The non zero solution for $`\alpha`$ can be substituted, and $`y^2`$ expressed in terms of $`x`$ from the equation (8). Remembering that the point $`2z`$ has the opposite $`y`$ from the intersection, we obtain the following coordinates for it:
$$\left(-2x+\frac{P_3^{\prime }(x)^2}{4P_3(x)},\;-y\left(1-\frac{3xP_3^{\prime }(x)}{2P_3(x)}+\frac{P_3^{\prime }(x)^3}{8P_3(x)^2}\right)\right).$$ (10)
The important point here is that the new $`x`$ is of degree 4 in the old $`x`$, without any dependence on $`y`$. As in the preceding section, it is possible to deduce that the degree of $`x`$ for the $`k`$-th iterate is at most quadratic in $`k`$. The degree of $`y`$ does not seem to be as easily bounded. But the very expression of the new $`y`$ shows that its degree is the degree of the old $`y`$ plus 6 times the degree of the old $`x`$. The degree of the variable $`y`$ after a number of steps will therefore be 1 plus a multiple of the sum of the degrees of the preceding $`x`$'s. This is the sum of a geometric series, so it will also be of order $`4^p`$, apart from some constant prefactor. In this case, the integer $`d`$ is 4, that is to say the bound derived in the preceding paragraph is a quadratic one.

### 4.4 Biquadratics in 3 dimensional space

The other possible equation for a non-singular elliptic curve is that of a biquadratic in a three dimensional space. By a proper choice of coordinates and of combinations of the equations, they can be brought to the form:
$$x^2-ay^2-bz^2=0,$$
$$t^2-by^2-az^2=0.$$ (11)
This readily shows the symmetries of the curve, since these two equations are invariant under a change of sign of any of the coordinates. Since the change of sign of all the coordinates is the identity in projective space, this gives a group $`\mathbf{Z}_2\times \mathbf{Z}_2\times \mathbf{Z}_2`$ of symmetries.
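Before exploiting these symmetries, the duplication formula (10) of the preceding subsection can be checked symbolically. The following sketch is my own verification with sympy (not part of the original text); it confirms both that the doubled point lies on the curve and that the new $`x`$ has degree 4 in the old $`x`$:

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b')
P3 = x**3 + a*x + b
P3p = sp.diff(P3, x)

# Coordinates of 2z from the duplication formula (10):
X2 = -2*x + P3p**2 / (4*P3)
fac = 1 - 3*x*P3p/(2*P3) + P3p**3/(8*P3**2)   # new y = -y * fac

# On the curve y**2 = P3, so (y*fac)**2 = P3*fac**2; the doubled point
# must again satisfy Y**2 = P3(X):
check = sp.cancel(P3*fac**2 - (X2**3 + a*X2 + b))
assert check == 0   # holds identically in x, a, b

# Degree count: the new x is a rational function of degree 4 in the old x.
num, den = sp.fraction(sp.cancel(X2))
print(sp.degree(num, x), sp.degree(den, x))   # -> 4 3
```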
Sections of the tautological bundle are just given by linear forms in $`x,y,z,t`$. They are defined, up to a factor, by a plane in this three-dimensional space, that is by three points. The intersection of a plane with the curve will generically consist of 4 points. Computation in the Jacobian can easily be done if we take as zero a point with one of its coordinates equal to 0. There are four of these points on any curve, but they are related by the aforementioned symmetries. A plane is defined by three points, so we choose a point $`R`$ on the curve which will belong to the planes corresponding to the two factors defining a meromorphic function. The plane defined by $`A`$, $`B`$ and $`R`$ will have a fourth intersection with the curve, $`H`$, and the plane going through $`H`$, $`R`$ and $`O`$ will meet the curve at the sum $`S`$: when taking the quotient, the zeroes in $`H`$ and $`R`$ cancel and we get the required meromorphic function with the divisor $`A+B-S-O`$. $`R`$ should be chosen to be one of the points with the same vanishing coordinate as the origin, since in this case $`S`$ is simply related to $`H`$. In this case, the direction of the line $`OR`$ is one of the coordinate axes. It is either the axis where the coordinates of $`O`$ and $`R`$ differ by a sign or, in the case where $`R`$ and $`O`$ coincide, the axis of the null coordinate, since the tangent at this point is simply given by the vector whose only non-zero component is on the null coordinate. Then, if we take for $`S`$ the point obtained from $`H`$ by negating the coordinate along the axis $`OR`$, the lines $`HS`$ and $`OR`$ have the same direction and they are coplanar, as two parallel lines define a plane. A remark which will be useful in the sequel is that, if $`O`$ is the point $`(x_0,y_0,z_0,0)`$, adding the point $`O_z=(x_0,y_0,-z_0,0)`$ to a point simply changes the sign of the $`z`$ and $`t`$ coordinates. With obvious notation, the additions of the $`O_x`$ and $`O_y`$ points are similarly obtained as symmetries of the curve changing the sign of two of the coordinates. Since the negative for the addition on the curve is obtained by changing the sign of the $`t`$ coordinate, all obvious symmetries of the curve can be simply obtained by a combination of the addition of one of the points $`O_i`$ and taking the opposite. The obvious way of adding the same point to itself multiple times is to use a plane tangent to the curve at this point, but in this case we do not end up with a result of sufficiently low degree. It is however possible to consider the plane going through the original point and two of its transforms by symmetries to compute a triple. The plane going through $`A`$, $`A+O_x`$ and $`A+O_y`$ intersects the curve in the point $`3A+O_z`$ ($`O_x+O_y=O_z`$). If we start from three points $`A_1`$, $`A_2`$ and $`A_3`$ on the curve, the plane they define is parameterized by the linear combination $`\alpha _1A_1+\alpha _2A_2+\alpha _3A_3`$. When substituting in the equations of the curve, the $`\alpha _i^2`$ terms disappear, since the $`A_i`$'s satisfy these equations, and we get two linear equations for the three products $`\alpha _1\alpha _2`$, $`\alpha _1\alpha _3`$ and $`\alpha _2\alpha _3`$, whose coefficients are bilinear in the coordinates of the points. These products are then of degree 4 in the points, the $`\alpha _i`$ of degree 8, and the final point is of degree 9. We obtain a variant of the result we announced.
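The degree bookkeeping behind these statements can be checked with elementary arithmetic; the sketch below (my own, not part of the text) iterates the recursion (7) and confirms that a degree-9 tripling step reproduces the quadratic growth discussed next:

```python
def degree_after(k, d, l, m0=1):
    """Iterate m_{k+1} = d*m_k + l of eq. (7), starting from m0."""
    m = m0
    for _ in range(k):
        m = d*m + l
    return m

# The recursion stays within a constant factor of d**k (here d = 4,
# as for the doubling map on the cubic curve):
for k in range(1, 8):
    m = degree_after(k, d=4, l=2)
    print(k, m, m / 4**k)   # the ratio approaches a constant

# Tripling is of degree 9 = 3**2, so the point n*z with n = 3**p is
# reached with degree 9**p = (3**p)**2 = n**2 -- a quadratic bound.
p, n = 5, 3**5
assert 9**p == n**2
```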
Here we proceed by tripling, and the operation is of degree 9, which is the square of 3, so that we still obtain a quadratic bound for the degrees of the iterates of the map. There still remains the problem that, in fact, the origin $`O=(x_0,y_0,z_0,0)`$ of the curve is not algebraically determined. $`x_0^2`$, $`y_0^2`$ and $`z_0^2`$ are expressed as quadratic polynomials in the initial point, but the actual coordinates are square roots. We will have to keep track of the usage of each of these quantities to be sure that the end result only depends on even powers of these coordinates, so that the result is truly an algebraic one. Here again, the symmetries of the curve give us the possibility to simply derive what happens if we change the sign of one of these coordinates. Changing the sign of $`z_0`$, for example, changes $`O`$ to $`O_z`$. Multiplying the image of the origin will give the iterated image of this point. Then, to obtain the image of the starting point $`P`$, we have to add $`P`$ to the image of the origin. But this addition depends on the origin, since $`A+B`$ depends on the fact that $`A+B-S-O`$ is a principal divisor. The change in the origin completely compensates for the change of the point. The conclusion is that the calculation does not depend on the possible choices of signs. Apart from a global sign factor, the calculated point is then invariant, so that it only depends on even powers of the coordinates of the origin and is therefore an algebraic function of the starting point. This is an elementary case of Galois theory, which says that elements of an algebraic extension which are invariant under all elements of the automorphism group (the Galois group) are elements of the base field.

## 5 Conclusion

Integrable mappings have been proven to have zero algebraic entropy. Moreover, the observed quadratic polynomial growth of the degree is seen to be generic from the results of sec. (3.2). This step forward in the study of the dynamical behaviour of birational maps urges us to find a converse result. However, the case of the discrete Painlevé system studied in shows that a zero algebraic entropy does not imply the existence of invariants, even in a two-dimensional setting. Maybe this is due to the non-autonomous character of this map, which implies that fixed points of $`\varphi ^k`$ are not fixed points of all the transformations $`\varphi ^{kp}`$. The derivation of the higher order iterates sketched in sec. (4) has been useful for an alternate proof of the polynomial growth of the degrees. We can wonder whether this calculation can be put to more practical uses. The logarithmic growth of the number of operations should in any case become a clear advantage, both for the precision of the calculation and for the required time. The usefulness of the fixed point varieties for a characterization of the invariant varieties should be probed. But for higher dimensional systems, the explicit equations are very large and difficult to obtain with current algebraic manipulation programs. It should be of interest to obtain such varieties from implicit equations, i.e., without trying to have a formula for the $`\varphi ^n`$. Otherwise, the intersection of some low dimensional variety with the fixed point varieties should be sufficient to determine their dimensionality and therefore the dimension of the invariant varieties. It would also be interesting to have examples of the unusual behaviours which seem possible if the invariant varieties are higher dimensional abelian varieties.
As is pointed out in eq. (4), some dynamical exponents could be positive with a zero algebraic entropy, giving “simple” dynamics with diverging trajectories. If the differential has a more complex structure than in (3), it should also be possible to have a polynomial growth of the degrees, but faster than quadratic. This would however imply a rather long recurrence relation for the degrees, which itself is the sign of a complex scheme for the resolution of the singularities.

## Appendix A Birational maps

In this appendix, I want to recall some properties of birational maps which have been described in more detail in and which are relevant to this work. Birational maps are not really maps. Except in the simplest cases, they are not defined everywhere. They are more general correspondences, which can be one to many at some points. But they are nevertheless maps to a good approximation, since they define maps on the complement of an algebraic variety. The projective space $`𝐏^n`$ is the space of complex lines through the origin in $`𝐂^{n+1}`$. Any non-zero element of the line is a set of homogeneous coordinates for this point of projective space. If one of the coordinates is set to 1, the others give the so-called affine coordinates of a part of the projective space. A rational map between the projective spaces $`𝐏^m`$ and $`𝐏^n`$ is simply a homogeneous polynomial map from $`𝐂^{m+1}`$ to $`𝐂^{n+1}`$. The homogeneity degree is called the degree of the map. The image of the line representing a point in $`𝐏^m`$ is either identically zero in $`𝐂^{n+1}`$ or, by the homogeneity of the map, a complex line. In the first case, the projective point is singular for the map and, in the second, the line defines the image in $`𝐏^n`$. Going to affine variables, the map is defined by rational functions, but this leads to spurious singularities, which are just the consequence of saying that some hyperplane of projective space is at infinity. The composition of rational maps requires some care. If we compose the polynomial maps, $`\varphi`$ of degree $`d_\varphi`$ and $`\psi`$ of degree $`d_\psi`$ give a map of degree $`d_\varphi d_\psi`$, $`\psi \circ \varphi`$. But with this composition law, only maps of degree 1 can have an inverse, since $`𝕀`$ is of degree 1. However, the greatest common divisor of the coordinates of the image can be factored out; the map is not modified at the points where it was defined, and it is naturally extended to points where this divisor vanishes, giving a reduced product $`\psi \times \varphi`$:
$$\psi \circ \varphi =m(\psi ,\varphi )\,\psi \times \varphi .$$ (12)
$`m(\psi ,\varphi )`$ is called a multiplier, and its degree gets subtracted from $`d_\varphi d_\psi`$ to give the degree of the reduced product $`\psi \times \varphi`$. A birational map is a rational map with an inverse for this $`\times`$-product. The multiplier for the product of a map and its inverse is the equation of the variety where the map is not bijective. When considering iterations of a rational map, we always consider this reduced product, which allows for a non-trivial sequence of the degrees of the successive iterates.
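The reduced product (12) is easy to demonstrate on a classical example; the sketch below (my own, using sympy) squares the standard quadratic Cremona involution of $`𝐏^2`$, factors out the common divisor of the homogeneous coordinates, and exhibits the multiplier as the equation of the locus where the map fails to be bijective:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def cremona(p):
    """The standard quadratic Cremona involution of P^2."""
    u, v, w = p
    return (v*w, u*w, u*v)

def reduce_map(p):
    """Factor out the gcd of the homogeneous coordinates, as in eq. (12)."""
    g = sp.gcd(sp.gcd(p[0], p[1]), p[2])
    return tuple(sp.cancel(c/g) for c in p), g

square = cremona(cremona((x, y, z)))   # naive composition, degree 4
reduced, multiplier = reduce_map(square)

print(square)       # (x**2*y*z, x*y**2*z, x*y*z**2)
print(reduced)      # (x, y, z): the reduced product is the identity
print(multiplier)   # x*y*z: vanishes exactly where the involution
                    # is not bijective (the three coordinate lines)
```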
## 1 Introduction

Recently, new stable solitons in Yang-Mills theories were constructed whose electric charges and magnetic charges are not proportional to each other. These new solitons exist only when more than one adjoint Higgs field is involved, so they are natural in Yang-Mills theories with extended supersymmetry. Their classical aspects have been studied in the context of $`N=4`$ supersymmetric Yang-Mills theories. Of these dyons, some preserve 1/4 of the $`N=4`$ supersymmetry, and thus are known as 1/4 BPS dyons, while others do not preserve any supersymmetry at all. It is also well known that the supersymmetric Yang-Mills theories arise as a low energy description of parallel D3 branes in the type IIB string theory. In this context, dyons arise as string webs ending on D3 branes. For instance, the more traditional 1/2 BPS monopoles and dyons are represented by straight $`(p,q)`$ string segments ending on a pair of D3 branes, while a 1/4 BPS dyon corresponds to a properly oriented, planar web of strings ending on more than two D3 branes. The stable non-BPS states are realized as the most general form of string web, which is typically nonplanar. Some non-BPS dyons can be thought of as deformations of 1/4 BPS dyons, obtained when the D3 positions themselves are moved and become nonplanar in the six transverse directions. (A numerical study of such non BPS dyons as field theory configurations has been performed within a spherically symmetric ansatz, and the resulting brane configurations were found to agree with those of the type IIB string theory.) The details of the 1/4 BPS field configurations have been explored in Ref. . The BPS equations satisfied by these field configurations consist of two pieces: one is the old magnetic BPS equation for some 1/2 BPS monopole configuration, while the other is a covariant Laplace equation for the additional Higgs field in the 1/2 BPS monopole background. Because of this two-tier structure of the BPS equations, the parameter space of a 1/4 BPS dyon is identical to the moduli space of the first, magnetic BPS equation. The only subtlety here is that some of the monopole parameters transmute into classical electric charges as we compare the two. The main lesson we learn from this fact is that the nonrelativistic dynamics that incorporates 1/4 BPS dyonic states can be formulated as dynamics on the same old monopole moduli space, but with new interactions. This is true at least when the monopole rest mass dominates over the electric part of the energy. The kinetic term of the modified effective Lagrangian is given by the moduli space metric of the underlying 1/2 BPS monopoles, while the potential term comes from the additional Higgs field and is found to be a half of the squared norm of a Killing vector. The effective Lagrangian has a BPS bound of its own. Classically and quantum mechanically, the 1/4 BPS dyons arise as specific bound states that saturate such a low energy BPS bound. This new dynamics incorporates the 1/4 BPS dyons as well as the more traditional 1/2 BPS monopoles and dyons. There have been a couple of derivations of this effective low energy Lagrangian of monopoles that produces 1/4 BPS dyonic configurations. The first such derivation relied on the relation between the BPS energy of 1/4 BPS configurations and the conserved electric charges. The trick here is to realize that the BPS energy can be estimated in two different ways.
One comes from the field theory, and the other from realizing 1/4 BPS states as bound states of monopoles in the low energy sense. By comparing the former, exact, formula to the latter, one can in fact identify the form of the potential term as one half of the electric part of the BPS energy, expressed in terms of Higgs expectation values and monopole moduli space geometry. The resulting potential is exact within the nonrelativistic approximation. The potential leads to the long range attraction or repulsion between dyons. Shortly thereafter, there appeared an alternate derivation by two of the authors. Here, the field theoretic Lagrangian is calculated for given initial field data, which consist of a 1/4 BPS configuration and its field velocity. The Lagrangian, after being integrated over space, turns out to give the sum of minus the rest mass and kinetic parts consisting of terms quadratic and linear in the velocities of the moduli coordinates. This is somewhat similar to the consideration of the zero mode dynamics of a particle with nonzero momentum. After shifting the cyclic coordinates related to the conserved electric charges, one gets the low energy Lagrangian, including the potential energy. In both of the previous derivations, the 1/4 BPS dyons were used as the convenient stepping stone that leads to the above low energy Lagrangian. However, the dynamics is really that of monopoles, which naturally produces 1/4 BPS dyons as bound states. The effective Lagrangian was successfully understood because the states therein were understood very well by other means. This curious state of affairs begs the question of whether there exists a more fundamental derivation of the dynamics, based only on the properties of 1/2 BPS monopoles. As we will see, there is indeed such a derivation. In particular, since the method does not rely on BPS properties of monopole bound states, it is applicable to situations where bound states are typically non BPS. Such outcomes are generic when more than two Higgs fields take independent vacuum expectation values. In fact, we are going to find the exact low energy Lagrangian when all six Higgs fields are turned on. On the other hand, note that the low energy dynamics is meaningful only when the kinetic and the potential energies are much smaller than the rest mass of the monopoles. Because of this, one combination of the Higgs fields must be chosen to be large, so that monopoles have rest masses much larger than the electric and the kinetic parts of the energy. This separates the six Higgs fields into one with a large expectation value and five remaining ones with small and independent expectation values. In this new derivation, the 1/2 BPS monopole configurations are primary and of order one. As we take the expectation values of the additional five Higgs fields to be small, we take them to be quantities of order $`\eta \ll 1`$, where $`\eta`$ characterizes the ratio of the additional Higgs expectation values to the first Higgs field, which shows up in the magnetic BPS equation. We then solve the field equations for the five additional Higgs fields classically, to leading order in $`\eta`$, in a given static monopole background. The problem again reduces to a second order Laplace equation for each of the five Higgs fields. We put the result back into the field theory Lagrangian to obtain the potential term as a function of the Higgs expectation values. Of course, independently of this, we also consider slow motions of monopoles and derive the kinetic term as well.
The resulting action is accurate to order $`\eta ^2`$ and $`v^2`$, where $`v`$ is the typical monopole velocity. The modified effective Lagrangian is again based on the monopole moduli space since, to leading order, the above two computations, of kinetic and potential terms, do not interfere with each other. In particular, the kinetic term is given by the same moduli space metric. The potential is now half the sum of the squared norms of five Killing vectors. These five Killing vector fields are picked out by the five additional, small Higgs expectation values. For generic but small vev's of these five Higgs fields, even the lowest energy configuration with generic electric charges is non BPS. The view we take here is similar to the method of obtaining the Coulomb potential between two massive point particles. In that case, we consider the limit where the electric coupling $`e`$ is small and the velocity $`v`$ of the charged particles is small. Then, the leading solution of the Maxwell equation is Coulombic. To order $`v^2+e^2`$, the Lagrangian is the standard kinetic energy with the Coulomb potential. The retarded or relativistic effects would be of order $`v^2e^2,v^4`$ and, thus, negligible. Recall that the planar string web for the 1/4 BPS dyonic configurations is made of webs of fundamental strings and D strings. The key cause for the web is the attractive force between fundamental and D strings. For 1/4 BPS supersymmetry, the orientations of all the string vertices should be consistent. In the dyonic picture, the 1/4 BPS configurations correspond to dyons at finite separation, with a delicate balance between the Higgs force and the electromagnetic force. They appear naturally as the BPS configurations of the low energy dynamics. Thus, they are the lowest energy configurations for a given set of electric charges, which characterize the BPS energy. In the non BPS case, non planar string webs are again formed from the fundamental strings and D strings. In the low energy Lagrangian, they would correspond to the lowest energy configuration for the given set of electric charges, which cannot saturate the BPS energy bound. The supersymmetric completion of the low energy dynamics should have eight real supercharges. For the cases with one Higgs expectation value and with two Higgs expectation values, the supersymmetric low energy dynamics are known. The former is completely determined by the monopole moduli space metric, which happens to be hyperKähler, while the latter also involves an additional potential, determined by a single linear combination of the triholomorphic Killing vector fields on the moduli space. When all six Higgs fields are involved, the low energy dynamics involves up to five linearly independent combinations of such Killing vectors. The final goal of this paper is to write down this supersymmetric low energy Lagrangian explicitly<sup>4</sup><sup>4</sup>4 The supersymmetric quantum mechanics can be obtained as the dimensional reduction of the six dimensional (0,8) supersymmetric sigma model to a one dimensional quantum mechanics, by the Scherk-Schwarz mechanism. This suggests that the form of the supersymmetric Lagrangian is the most general nonlinear sigma model with eight real supercharges., which completes the low energy interaction of monopoles in $`N=4`$ Yang-Mills theory, to the extent that the nonrelativistic approximation makes sense. The plan of the paper is as follows. In Sec. 2, we review the 1/2 BPS monopoles and the 1/4 BPS equations. In Sec.
3, we derive the effective Lagrangian for the non BPS configurations. In Sec. 4, we explore this Lagrangian. In Sec. 5, we find the supersymmetric completion of the Lagrangian. In Sec. 6, we conclude with some remarks.

## 2 BPS Bound and Primary BPS Equation

We begin with the $`N=4`$ supersymmetric Yang-Mills theory. We choose a compact semi-simple group $`G`$ of rank $`r`$. We divide the six Higgs fields into $`b`$ and $`a_I`$ with $`I=1,\mathrm{},5`$. The bosonic part of the Lagrangian is given by
$$L=\frac{1}{2}\int d^3x\,\mathrm{tr}\left\{𝐄^2+(D_0b)^2+(D_0a_I)^2\right\}-\frac{1}{2}\int d^3x\,\mathrm{tr}\left\{𝐁^2+(𝐃b)^2+(𝐃a_I)^2+\left(i[a_I,b]\right)^2+\underset{I<J}{\sum }(i[a_I,a_J])^2\right\},$$ (1)
where $`D_0=\partial _0-iA_0`$, $`𝐃=\nabla -i𝐀`$, and $`𝐄=\partial _0𝐀-𝐃A_0`$. The four vector potential $`(A_0,𝐀)=(A_0^aT^a,𝐀^aT^a)`$ and the group generators $`T^a`$ are traceless hermitian matrices such that $`\mathrm{tr}T^aT^b=\delta ^{ab}`$. As shown in Ref. , there is a BPS bound on the energy functional, which is saturated only when configurations satisfy
$$𝐁=𝐃b,$$ (2)
$$𝐄=c_I𝐃a_I,$$ (3)
$$D_0b-i[c_Ia_I,b]=0,$$ (4)
$$D_0(c_Ia_I)=0,$$ (5)
together with the Gauss law,
$$𝐃\cdot 𝐄-i[b,D_0b]-i[c_Ia_I,D_0(c_Ia_I)]=0,$$ (6)
where $`c_I`$ is a unit vector in five dimensions. In addition, the rest of the Higgs fields should be trivial on this configuration, or
$$D_0(a_I-c_Ic_Ja_J)=0,$$ (7)
$$𝐃(a_I-c_Ic_Ja_J)=0,$$ (8)
$$[b,a_I-c_Ic_Ja_J]=0.$$ (9)
This condition implies that the 1/4 BPS configuration should be planar. The BPS energy is then
$$Z=𝐛\cdot 𝐠+c_I𝐚_I\cdot 𝐪,$$ (10)
where $`𝐛`$ and $`𝐚_I`$ are vacuum expectation values of the Higgs fields, while $`𝐠`$ and $`𝐪`$ are the magnetic and electric charges, respectively. Equation (2) is the old BPS equation for 1/2 BPS monopoles and is called the primary BPS equation. The BPS bound is saturated if the additional equations are satisfied. For 1/4 BPS configurations, the additional equations come from the energy bound and the Gauss law, which can be put into a single equation,
$$𝐃^2(c_Ia_I)-[b,[b,c_Ia_I]]=0,$$ (11)
which is called the secondary BPS equation. In addition, the part $`a_I-c_Ic_Ja_J`$ orthogonal to the $`c_I`$ vector should commute with all other fields and be constant in spacetime. One last step necessary to solve for the 1/4 BPS dyon is to put $`A_0=c_Ia_I`$. In the type IIB string realization of $`U(N)`$ Yang-Mills theories, the above BPS equations imply that the corresponding 1/4 BPS string web lies on a plane. However, the D3 branes which are not connected to the web do not need to lie on the plane. Even when the D3 branes lie on a single plane, one can find planar string webs which are not BPS, as the orientations of the string junctions are not uniform. In this paper, we consider the special class of non BPS configurations which correspond to non planar webs, and which would have been 1/4 BPS configurations had we put the D3 branes onto a single plane by small deformations of their positions. In addition, we consider string webs which are almost linear. For this case, we do not need to solve the full quadratic field equations. Throughout this paper, we work in the approximation where the quantity
$$\eta \equiv \frac{|𝐚_I|}{|𝐛|}$$ (12)
is much smaller than unity.
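For orientation, the central charge (10) is a simple linear expression in the charges; the sketch below evaluates it numerically for invented data (all numbers are made up), choosing the unit vector $`c_I`$ along the five-vector of projections $`𝐚_I\cdot 𝐪`$, which maximizes the expression (a standard choice, stated here as an assumption rather than taken from the text):

```python
import numpy as np

# Invented data for a rank-2 example: charges and expectation values.
g = np.array([1.0, 0.5])          # magnetic charge vector
q = np.array([0.3, -0.2])         # electric charge vector
b = np.array([2.0, 1.0])          # large Higgs expectation value
a = 0.05 * np.random.default_rng(0).normal(size=(5, 2))   # eta << 1

# Z = b.g + c_I (a_I . q), eq. (10); maximize over the unit 5-vector c_I.
w = a @ q                         # the five projections a_I . q
c = w / np.linalg.norm(w)         # optimal c_I points along w
print("Z =", b @ g + c @ w)       # equals b.g + |w|
```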
Within this approximation, we may solve the field equations in two steps: first, one solves the first-order magnetic BPS equation. Once this is done, the solution of this primary BPS equation describes the collection of 1/2 BPS monopoles, and the Higgs field $`b`$ takes the form
$$b\simeq 𝐛\cdot 𝐇-\frac{𝐠\cdot 𝐇}{4\pi r},$$ (13)
asymptotically, where $`𝐇`$ denotes the Cartan subalgebra. We are interested in the case where the expectation value $`𝐛`$ breaks the gauge group $`G`$ maximally, to the Abelian subgroup $`U(1)^r`$. Then, there exists a unique set of simple roots $`𝜷_1,𝜷_2,\mathrm{},𝜷_r`$ such that $`𝜷_A\cdot 𝐛>0`$. The magnetic charge is given by
$$𝐠=4\pi \underset{A=1}{\overset{r}{\sum }}n_A𝜷_A,$$ (14)
where the integers $`n_A\ge 0`$. For each simple root $`𝜷_A`$, there exists a fundamental monopole of magnetic charge $`4\pi 𝜷_A/e`$, which comes with four bosonic zero modes. The integer $`n_A`$ can be thought of as the number of $`𝜷_A`$ fundamental monopoles. The moduli space of such 1/2 BPS configurations has dimension $`4\sum _An_A`$. We will consider the case where all $`n_A`$ are positive, so that the monopoles do not separate into mutually noninteracting subgroups. Let us denote the moduli space coordinates by $`z^m`$. If we parameterize the BPS monopole solutions by the moduli coordinates $`z`$, $`A_\mu (𝐱,z^m)=(𝐀(𝐱,z^m),b(𝐱,z^m))`$ with $`\mu =1,2,3,4`$, the zero modes are in general of the form
$$\delta _mA_\mu =\frac{\partial A_\mu }{\partial z^m}+D_\mu ϵ_m,$$ (15)
where $`D_\mu ϵ_m=\partial _\mu ϵ_m-i[A_\mu ,ϵ_m]`$, with the understanding that $`\partial _4=0`$. The zero modes around the 1/2 BPS configurations are determined by the perturbed primary BPS equation plus a gauge fixing condition,
$$𝐃\times \delta _m𝐀=𝐃\delta _mb-i[\delta _m𝐀,b],$$ (16)
$$D_\mu \delta _mA_\mu =0,$$ (17)
which force the actual zero modes to be a sum of the two terms in (15). Given this definition of zero modes, one can define a natural metric on the moduli space spanned by the collective coordinates $`z`$,
$$g_{mn}(z)=\int d^3x\,\mathrm{tr}\,\delta _mA_\mu \delta _nA_\mu .$$ (18)
With such a metric, the Lagrangian (1) for the monopoles of the primary BPS equation can be expanded for small velocities as
$$\overline{L}=-𝐠\cdot 𝐛+\mathcal{L}+\mathrm{},$$ (19)
where the first term is minus the rest mass of the monopoles,
$$𝐠\cdot 𝐛=\frac{1}{2}\int d^3x\,\mathrm{tr}\left\{𝐁^2+(𝐃b)^2\right\}.$$ (20)
Ignoring the other five Higgs fields, the low energy dynamics that actually dictates the motion of these 1/2 BPS configurations would be given by the purely kinetic, nonrelativistic Lagrangian
$$\mathcal{L}=\frac{1}{2}g_{mn}(z)\dot{z}^m\dot{z}^n.$$ (21)
As there are $`r`$ unbroken global $`U(1)`$ symmetries, the corresponding electric charges should be conserved. In other words, $`\mathcal{L}`$ should have $`r`$ cyclic coordinates corresponding to these gauge transformations. In particular, we can choose a basis such that the cyclic coordinate $`\xi ^A`$ ($`A=1,\mathrm{},r`$) corresponds to the center of mass phase of the monopoles of root $`𝜷_A`$. In geometrical terms, the cyclic coordinates $`\xi ^A`$ generate the Killing vectors
$$K_A\equiv \frac{\partial }{\partial \xi ^A}.$$ (22)
Finally, let us divide the moduli coordinates $`z^m`$ into $`\xi ^A`$ and the rest, $`y^i`$, upon which the Lagrangian (21) can be rewritten as
$$\mathcal{L}=\frac{1}{2}h_{ij}(y)\dot{y}^i\dot{y}^j+\frac{1}{2}L_{AB}(y)(\dot{\xi }^A+w_i^A(y)\dot{y}^i)(\dot{\xi }^B+w_j^B(y)\dot{y}^j),$$ (23)
which defines the quantities $`h`$, $`L`$, and $`w`$. In particular,
$$L_{AB}=g_{mn}K_A^mK_B^n.$$ (24)
Notice that all metric components are independent of $`\xi ^A`$.
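As a toy illustration of the decomposition (23)-(24), the following sketch (with an invented positive-definite metric, not a real monopole metric) extracts $`L_{AB}`$ as the restriction of $`g_{mn}`$ to the two cyclic directions and evaluates the quadratic form built from it, which will reappear as the potential in the next section:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy moduli space: four coordinates y^i plus r = 2 cyclic ones xi^A,
# with an invented positive-definite metric g_mn at a point.
M = rng.normal(size=(6, 6))
g = M @ M.T + 6*np.eye(6)

# The Killing vectors K_A = d/dxi^A are the last two coordinate directions.
K = np.zeros((2, 6))
K[0, 4] = K[1, 5] = 1.0

L = K @ g @ K.T                     # L_AB = g_mn K_A^m K_B^n, eq. (24)
assert np.allclose(L, g[4:, 4:])    # just the cyclic block of the metric

a = 0.1 * rng.normal(size=(5, 2))   # five small coupling vectors a_I^A
V = 0.5 * np.einsum('ia,ab,ib->', a, L, a)
print(L, V)
```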
## 3 Additional Higgs and Monopole Dynamics

Let us now explore the low energy dynamics of monopoles when the additional Higgs fields $`a_I`$ are turned on. When the expectation values $`𝐚_I`$ are turned on, the monopole solutions of the primary BPS equation are not, in general, solutions to the full field equations. Monopoles exert static forces on other monopoles. For sufficiently small $`𝐚_I`$, these forces arise from the extra potential energy due to nontrivial $`a_I`$ fields; the combined effect of $`𝐚_I`$ and of the monopole background induces some nontrivial behavior in $`a_I`$, which “dresses” the monopoles and contributes to the energy of the system. To find this potential, we imagine a static configuration of monopoles, which are held fixed by some external force. Let us try to dress it with a time-independent $`a_I`$ field at the smallest possible cost in energy. The energy functional for such $`a_I`$ fields is
$$\mathrm{\Delta }E=\frac{1}{2}\int d^3x\,\mathrm{tr}\left\{(𝐃a_I)^2+(i[a_I,b])^2\right\},$$ (25)
to the leading order, where we ignore terms of higher power in $`\eta`$, such as $`[a_I,a_J]^2`$. We can find the minimal “dressing” field $`a_I`$ by solving the second order equation
$$D^2a_I-[b,[b,a_I]]=0.$$ (26)
Solving this for $`a_I`$ and inserting the result back into the energy functional above, we should find the minimal cost in energy for the static monopole configuration. The same type of second order equation appeared in the construction of 1/4 BPS dyons, where the projected Higgs field $`c_Ia_I`$ obeys such an equation. However, we must emphasize that we are performing a very different task here. Specifically, in the construction of 1/4 BPS dyons, the BPS equations force $`c_Ia_I`$ to be identified with the time-component gauge field $`A_0`$, which determines the electric charges. Here we are simply solving for the reaction of the scalar fields $`a_I`$ to the given monopole configuration. Using Tong's trick, we notice that $`𝐃a_I`$ and $`i[b,a_I]`$ can be thought of as global gauge zero modes $`D_\mu a_I`$, which satisfy the gauge fixing condition $`D_\mu D_\mu a_I=0`$. Thus, $`D_\mu a_I`$ can be regarded as a linear combination of gauge zero modes, and subsequently each $`a_I`$ picks out a linear combination of the $`U(1)`$ Killing vector fields on the moduli space, which are
$$K_A^m\frac{\partial }{\partial z^m}=\frac{\partial }{\partial \xi ^A}.$$ (27)
More precisely, each $`K_A`$ corresponds to a gauge zero mode,
$$K_A^m\delta _mA_\mu ,$$ (28)
and each $`D_\mu a_I`$ is a linear combination of them,
$$D_\mu a_I=a_I^AK_A^m\delta _mA_\mu ,$$ (29)
when we expand the asymptotic value as $`𝐚_I=\sum _Aa_I^A𝝀_A`$, where the $`𝝀_A`$ are the fundamental weights, such that $`𝝀_A\cdot 𝜷_B=\delta _{AB}`$. We then express the potential energy $`𝒱`$, obtained by minimizing the functional $`\mathrm{\Delta }E`$ in Eq. (25) in the monopole background, in terms of the monopole moduli parameters as
$$𝒱=\frac{1}{2}\int d^3x\,\mathrm{tr}\left\{(a_I^AK_A^m\delta _mA_\mu )(a_I^BK_B^n\delta _nA_\mu )\right\}=\frac{1}{2}g_{mn}a_I^AK_A^ma_I^BK_B^n.$$ (30)
The value of this potential depends on the monopole configuration we started with, and it is this dependence that induces the static forces on the monopoles. The low energy effective Lagrangian was purely kinetic when the $`a_I`$ were absent.
In the presence of the $`a_I`$ and of their expectation values $`𝐚_I`$, however, the Lagrangian picks up a potential term,
$$\mathcal{L}=\frac{1}{2}g_{mn}\dot{z}^m\dot{z}^n-𝒱,$$ (31)
which can be written more explicitly as
$$\mathcal{L}=\frac{1}{2}g_{mn}(z)\dot{z}^m\dot{z}^n-\frac{1}{2}g_{mn}(z)a_I^AK_A^ma_I^BK_B^n=\frac{1}{2}h_{ij}(y)\dot{y}^i\dot{y}^j+\frac{1}{2}L_{AB}(y)(\dot{\xi }^A+w_i^A(y)\dot{y}^i)(\dot{\xi }^B+w_j^B(y)\dot{y}^j)-\frac{1}{2}L_{AB}(y)a_I^Aa_I^B,$$ (32)
where the index $`I`$ runs from 1 to 5 and labels the five potential terms. The procedure we employed here should be a very familiar one. When we talk about, say, the Coulombic interaction between charged particles, we also fix the charge distribution by hand, and then estimate the potential energy it costs. Of course, there is a possibility of further interaction terms involving velocities of the moduli as well as the $`a_I`$ fields, but in the low energy approximation used here, the only relevant terms of such kind would be of order $`v\eta`$. However, it is clear that neither the backreaction of $`a_I`$ on the magnetic background nor the time-dependence of the $`a_I`$ can produce such a term. Thus, to the leading quadratic order in $`v`$ and $`\eta`$, the above Lagrangian captures all bosonic interactions among monopoles in the presence of the $`𝐚_I`$.<sup>5</sup><sup>5</sup>5While the low energy dynamics turns out to be quite simple, there is a subtlety in reconstructing the actual field configuration for a given low energy motion on the moduli space. For the magnetic part of the configuration, $`A_\mu`$, the trajectory on moduli space can be represented reliably by allowing time-dependence of the moduli parameters. Namely, the time-dependent field configuration would be $`A_\mu =\stackrel{~}{A}_\mu (𝐱;z_m(t))`$, where $`\stackrel{~}{A}_\mu (𝐱;z)`$ is the solution of the primary BPS equation. For the additional Higgs fields, however, the naive ansatz $`a_I=\stackrel{~}{a}_I(𝐱;z_m(t))`$ does not work, where $`\stackrel{~}{a}_I(𝐱;z_m)`$ solves the static second order equation (26) in the background of $`\stackrel{~}{A}_\mu (𝐱;z)`$. Such an ansatz would involve fluctuations of nonnormalizable modes, as $`\stackrel{~}{a}_I`$ has a $`z_m`$-dependent $`1/r`$ tail. Rather, the actual time-dependence of the $`a_I`$ field has much nicer large $`r`$ behavior, as can be seen easily by solving the full field equation for $`a_I`$ order by order in $`v`$.

## 4 1/4 BPS and Non BPS Configurations

The total energy of the field configuration within this nonrelativistic approximation is then
$$E=𝐛\cdot 𝐠+\mathcal{E},$$ (33)
where the nonrelativistic energy $`\mathcal{E}`$ is derived from $`\mathcal{L}`$ and can be written as
$$\mathcal{E}=\frac{1}{2}g_{mn}(z)\left(\dot{z}^m\dot{z}^n+a_I^AK_A^ma_I^BK_B^n\right).$$ (34)
The energy $`\mathcal{E}`$ has a BPS bound of its own. With an arbitrary five dimensional unit vector $`c_I`$, we can rewrite the energy as
$$\mathcal{E}=\frac{1}{2}g_{mn}(\dot{z}^m-c_Ia_I^AK_A^m)(\dot{z}^n-c_Ja_J^BK_B^n)+\frac{1}{2}g_{mn}a_I^{\perp A}K_A^ma_I^{\perp B}K_B^n+c_Ig_{mn}\dot{z}^ma_I^AK_A^n,$$ (35)
where $`a_I^{\perp A}=a_I^A-c_Ic_Ja_J^A`$ is the part of $`a_I^A`$ orthogonal to $`c_I`$.
Since there are $`r`$ $`U(1)`$ symmetries, with Killing vectors $`K_A^m`$, there are $`r`$ conserved charges,
$$q_A=K_A^m\frac{\partial \mathcal{L}}{\partial \dot{z}^m}=g_{mn}K_A^m\dot{z}^n.$$ (36)
As the metric $`g_{mn}`$ is positive definite, there is a bound on the energy,
$$\mathcal{E}\ge |c_Ia_I^Aq_A|.$$ (37)
This bound is saturated when
$$\dot{z}^m-c_Ia_I^AK_A^m=0,$$ (38)
$$a_I^{\perp A}=a_I^A-c_Ic_Ja_J^A=0.$$ (39)
The second equation is satisfied if, for instance, only one additional Higgs field is relevant, while the first equation implies that the conserved charges are
$$q_A=g_{mn}K_A^mc_Ia_I^BK_B^n.$$ (40)
Quantum counterparts of such BPS configurations have been explored in Ref. . In field theory terms, these BPS states of the low energy dynamics preserve 1/4 of the field theory supersymmetries. These 1/4 BPS configurations describe static dyons spread out in space such that the electromagnetic force and the Higgs force are in delicate balance. They are the BPS configurations of the low energy effective action when, in effect, only one linear combination of the Killing vector fields, $`c_Ia_I^AK_A`$, is relevant. For more general cases, when $`a_I^{\perp A}`$ cannot be taken to be zero, the BPS bound is not saturated. Nevertheless, there must exist a lowest energy state for any given charge, which would correspond to a stable non BPS state. (These non BPS configurations correspond to string webs which are not planar.) Such a stable dyonic configuration can be found classically by considering the nonrelativistic energy functional. The energy functional for a given set of electric charges
$$q_A=\frac{\partial \mathcal{L}}{\partial \dot{\xi }^A}$$ (41)
is
$$\mathcal{E}=\frac{1}{2}h_{ij}\dot{y}^i\dot{y}^j+U_{\mathrm{eff}}(y),$$ (42)
where the effective potential is
$$U_{\mathrm{eff}}=\frac{1}{2}L^{AB}(y)q_Aq_B+\frac{1}{2}L_{AB}a_I^Aa_I^B,$$ (43)
with the inverse of $`L_{AB}`$ denoted by $`L^{AB}`$. The minimum of the energy is achieved by configurations which are static in $`y^i`$ and satisfy
$$\frac{\partial }{\partial y^i}U_{\mathrm{eff}}(y)=0.$$ (44)
In general, the family of stable solutions $`y^i`$ for a given $`q_A`$, if they exist, will form a submanifold of the moduli space. However, it is not clear whether there will always be $`q_A`$ satisfying Eq. (44) for some $`y^i`$. In fact, it is known that in some cases with too large values of $`q_A`$ there is no solution to such equations. The general analysis of Eq. (44) will be complicated. One case where it can be solved explicitly is when the magnetic background contains only one fundamental monopole of each kind; that is, suppose that, for each simple root $`𝜷_A`$, $`A=1,\mathrm{},r`$, we have one fundamental monopole at $`𝐱_A`$ with $`U(1)`$ phase $`\xi _A`$. Denote the relative position vectors between adjacent (in the Lie algebra sense) monopoles by $`𝐫_A=𝐱_{A+1}-𝐱_A`$ for $`A=1,\mathrm{},r-1`$, and also define the corresponding relative phases $`\zeta _A`$. For the phases, the redefinition is such that the charges $`\stackrel{~}{q}_A`$, associated with $`\zeta _A`$, are related to the $`q_A`$ by $`\stackrel{~}{q}_A=q_{A+1}-q_A`$. The metric is then decomposed into two decoupled pieces,
$$ds^2=\frac{1}{\sum _Am_A}\left(d(\underset{A=1}{\overset{r}{\sum }}m_A𝐱_A)^2+\frac{16\pi ^2}{e^4}d(\underset{A=1}{\overset{r}{\sum }}\xi _A)^2\right)+\underset{A=1}{\overset{r-1}{\sum }}\underset{B=1}{\overset{r-1}{\sum }}\left(C^{AB}d𝐫_A\cdot d𝐫_B+C_{AB}(d\zeta ^A+𝐰(𝐫_A)\cdot d𝐫_A)(d\zeta ^B+𝐰(𝐫_B)\cdot d𝐫_B)\right),$$ (45)
where $`m_A`$ are the masses of the $`r`$ fundamental monopoles.
The $`(r-1)\times (r-1)`$ matrices $`C^{AB}`$ and $`C_{AB}`$ are inverses of each other,
$$\underset{B=1}{\overset{r-1}{\sum }}C^{AB}C_{BC}=\delta _C^A,$$ (46)
and are explicitly known,
$$C^{AB}=\mu ^{AB}+\delta ^{AB}\frac{\lambda _A}{|𝐫_A|},$$ (47)
with the reduced mass matrix $`\mu _{AB}`$, $`A,B=1,\mathrm{},r-1`$, and some coupling constants $`\lambda _A`$. The vector potential $`𝐰(𝐫)`$ is the Dirac potential,
$$\nabla \frac{1}{r}=\nabla \times 𝐰(𝐫).$$ (48)
In the new coordinates, the potential also decomposes into two parts, one of which is independent of the moduli coordinates,
$$U_{\mathrm{eff}}(r^A)=\frac{1}{2}\underset{A,B=1}{\overset{r}{\sum }}L^{AB}(y)q_Aq_B+\frac{1}{2}\underset{A,B=1}{\overset{r}{\sum }}L_{AB}a_I^Aa_I^B=\mathrm{constant}+\frac{1}{2}\underset{A,B=1}{\overset{r-1}{\sum }}C^{AB}\stackrel{~}{q}_A\stackrel{~}{q}_B+\frac{1}{2}\underset{A,B=1}{\overset{r-1}{\sum }}C_{AB}\stackrel{~}{a}_I^A\stackrel{~}{a}_I^B.$$ (49)
The vacuum expectation values in the new basis, $`\stackrel{~}{a}_I^A`$, $`A=1,\mathrm{},r-1`$, are found from the $`a_I^A`$, $`A=1,\mathrm{},r`$, using the relationship
$$\underset{A=1}{\overset{r}{\sum }}a_I^Aq_A=\underset{A=1}{\overset{r-1}{\sum }}\stackrel{~}{a}_I^A\stackrel{~}{q}_A+\stackrel{~}{a}_I^0\frac{\sum _{A=1}^rm_Aq_A}{\sum _{A=1}^rm_A},$$ (50)
with $`\stackrel{~}{a}_I^0`$ to be determined from this as well. The minimum of the potential is found by looking for the critical point,
$$0=\frac{\partial }{\partial 𝐫_C}U_{\mathrm{eff}}=\underset{A,B=1}{\overset{r-1}{\sum }}\frac{1}{2}\frac{\partial C^{AB}}{\partial 𝐫_C}\left(\stackrel{~}{q}_A\stackrel{~}{q}_B-\underset{I=1}{\overset{5}{\sum }}\underset{A^{},B^{}=1}{\overset{r-1}{\sum }}C_{AA^{}}C_{BB^{}}\stackrel{~}{a}_I^{A^{}}\stackrel{~}{a}_I^{B^{}}\right).$$ (51)
As $`\partial C^{AB}/\partial 𝐫_C=-\delta ^{AB}\delta _{BC}\lambda _C𝐫_C/(r_C)^3`$, the condition reduces to
$$0=\frac{\partial }{\partial 𝐫_C}U_{\mathrm{eff}}=-\frac{\lambda _C𝐫_C}{2r_C^3}\left((\stackrel{~}{q}_C)^2-\underset{I=1}{\overset{5}{\sum }}\underset{A,B=1}{\overset{r-1}{\sum }}C_{CA}C_{CB}\stackrel{~}{a}_I^A\stackrel{~}{a}_I^B\right),$$ (52)
and we find that, at the critical points, the charges are given as functions of the $`\stackrel{}{r}_A`$ as follows:
$$|\stackrel{~}{q}_C|=\sqrt{\underset{I=1}{\overset{5}{\sum }}\underset{A,B=1}{\overset{r-1}{\sum }}C_{CA}(\stackrel{}{r})C_{CB}(\stackrel{}{r})\stackrel{~}{a}_I^A\stackrel{~}{a}_I^B}.$$ (53)
Once this is satisfied, $`\dot{𝐫}_A=0`$ solves the equations of motion, so the solution describes static configurations of many distinct monopoles, each dressed with electric charge. They correspond to stable non BPS dyons in the field theoretic description. By inserting (53) into the effective potential (49), the energy of the configuration is determined as a function of the monopole positions. The remaining overall $`U(1)`$ charge is not determined by the moduli parameters. It should be remarked that these states become 1/4 BPS when only one of the five Higgs vacuum expectation values is nonvanishing, or when their directions are all parallel to each other.
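The critical-point condition (53) is easy to evaluate once $`C^{AB}`$ is built from (47); here is a sketch with invented masses, couplings, separations and vevs (none of these numbers come from the paper) that reads off the equilibrium electric charges:

```python
import numpy as np

# Toy chain of r = 3 fundamental monopoles, hence r - 1 = 2 relative modes.
mu = np.array([[1.5, -0.5],
               [-0.5, 1.2]])        # reduced mass matrix (invented)
lam = np.array([0.8, 0.6])          # couplings lambda_A (invented)
r_sep = np.array([2.0, 3.5])        # separations |r_A| (invented)
a_tilde = 0.1 * np.array([[1.0, 0.3],
                          [0.2, -0.4],
                          [0.0, 0.0],
                          [0.0, 0.0],
                          [0.0, 0.0]])   # vevs a~_I^A; two of five nonzero

C_up = mu + np.diag(lam / r_sep)    # C^{AB}, eq. (47)
C_dn = np.linalg.inv(C_up)          # C_AB, its inverse, eq. (46)

# |q~_C| from eq. (53):
q_eq = np.sqrt(np.einsum('ia,ca,cb,ib->c', a_tilde, C_dn, C_dn, a_tilde))
print(q_eq)   # equilibrium charges as functions of the separations
```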
## 5 Supersymmetric Extension

So far we have concentrated on the bosonic part of the low energy effective Lagrangian in the moduli space approximation. The supersymmetric extension of the bosonic action can be achieved by two different routes. A direct approach is to follow the same strategy as in the bosonic part: namely, first identify the fermionic moduli fluctuations and their coordinates, and then integrate out all other fluctuations except the monopole moduli variables, using the original field theoretic Lagrangian. While such a derivation would be more desirable, the symmetry of the system seems to offer us a shortcut and allows us to fix the SUSY completion of the effective Lagrangian uniquely. We shall follow the latter approach here. Since the background configurations of monopoles preserve half of the 16 supersymmetries of the original $`N`$=4 SYM theory, the quantum mechanics should have four complex, or eight real, supercharges. Furthermore, the low energy effective theory should be consistent with the $`SO(6)`$ R-symmetry of the four dimensional $`N`$=4 SYM theory. Out of the six Higgs fields, we picked out one, $`b`$, associated with the construction of monopoles, so only an $`SO(5)`$ subgroup of $`SO(6)`$ may show up. For instance, when we consider the extreme case of $`𝐚_I=0`$, the SUSY quantum mechanics must have the full $`SO(5)`$ R-symmetry. The additional Higgs expectation values $`𝐚_I`$ spontaneously break the remaining $`SO(5)`$ rotational symmetry of the field theory as well. On the other hand, in the low energy dynamics of monopoles, the $`𝐚_I`$ are small parameters, so the breaking of $`SO(5)`$ is explicit and soft. Thus, $`SO(5)`$ is not a symmetry of the low energy dynamics. Nevertheless, because all the $`𝐚_I`$ are on an equal footing (unlike $`𝐛`$), the low energy dynamics must remain invariant when we rotate the $`𝐚_I`$ in addition to rotating the dynamical degrees of freedom. Thus, although this $`SO(5)`$ is not a symmetry of the low energy theory in the conventional sense, it provides us with an interesting consistency checkpoint. Later we will find and write down this $`SO(5)`$ transformation explicitly. The existence of the four complex supercharges is already quite restrictive. The supersymmetry requires the geometry to be hyperKähler, to begin with, equipped with three complex structures that satisfy
$$\mathcal{I}^{(s)}\mathcal{I}^{(t)}=-\delta ^{st}+ϵ^{stu}\mathcal{I}^{(u)},$$ (54)
$$D_m\mathcal{I}_p^{(s)n}=0.$$ (55)
In the absence of the $`𝐚_I`$, the dynamics would be a sigma model on the hyperKähler moduli space of monopoles. The bosonic potential introduced by the $`𝐚_I`$ can be rewritten in terms of five triholomorphic Killing vector fields, $`G_I\equiv 𝐚_I\cdot 𝐊`$, as
$$\frac{1}{2}\underset{I=1}{\overset{5}{\sum }}G_I^mG_I^ng_{mn}.$$ (56)
Alvarez-Gaume and Freedman discussed how such Killing vector fields can be incorporated into the supersymmetric Lagrangian while maintaining four complex supercharges, in the two-dimensional context. In this two-dimensional setting, they showed that up to four triholomorphic Killing vectors can be accommodated. This result presumably has something to do with the fact that the supersymmetric Lagrangian can also be obtained via the Scherk-Schwarz dimensional reduction from the six dimensional (0,8) nonlinear sigma model action presented in Ref. . Since we are considering quantum mechanics instead of a two-dimensional field theory, this suggests that one should be able to incorporate up to five such Killing vectors into the effective Lagrangian.
Thus, generalizing their result to quantum mechanics, we obtain the following unique supersymmetric completion of the low energy dynamics:
$$\mathcal{L}=\frac{1}{2}\left(g_{mn}\dot{z}^m\dot{z}^n+ig_{mn}\overline{\psi }^m\gamma ^0D_t\psi ^n+\frac{1}{6}R_{mnpq}\overline{\psi }^m\psi ^p\overline{\psi }^n\psi ^q-g_{mn}G_I^mG_I^n-iD_mG_{In}\overline{\psi }^m(\mathrm{\Omega }^I\psi )^n\right),$$ (57)
where $`\psi ^m`$ is a two component Majorana spinor, $`\gamma ^0=\sigma _2`$, $`\gamma ^1=i\sigma _1`$, $`\gamma ^2=i\sigma _3`$, and $`\overline{\psi }=\psi ^T\gamma ^0`$. The operators $`\mathrm{\Omega }_I`$ are defined respectively by $`\mathrm{\Omega }_4=\delta _n^m\gamma _{\alpha \beta }^1`$, $`\mathrm{\Omega }_5=\delta _n^m\gamma _{\alpha \beta }^2`$, and $`\mathrm{\Omega }_s=i\mathcal{I}_n^{(s)m}\delta _{\alpha \beta }`$ for $`s=1,2,3`$. The supersymmetry algebra by itself requires some properties of the $`G_I`$, in addition to the hyperKähler properties of $`g_{mn}`$. $`G_I`$ must satisfy
$$D_mG_{In}+D_nG_{Im}=0,$$ (58)
or equivalently $`\mathcal{L}_{G_I}g=0`$; that is, $`G_I`$ must be a Killing vector. In addition, the rotated versions $`(\mathcal{I}^{(s)}G_I)_m`$ must also satisfy
$$D_m(\mathcal{I}^{(s)}G_I)_n-D_n(\mathcal{I}^{(s)}G_I)_m=0.$$ (59)
Taken together with the Killing property of $`G_I`$ and the closedness of the Kähler forms, this also implies that the $`G_I`$ are triholomorphic,
$$\mathcal{L}_{G_I}\mathcal{I}^{(s)}=0.$$ (60)
Of course, for the specific case of monopole dynamics, these two conditions are satisfied because each $`K_A`$ is a triholomorphic Killing vector field on the moduli space. One last requirement on the $`G_I`$ from the SUSY algebra is
$$G_I^m\mathcal{I}_{mn}^{(s)}G_J^n=0$$ (61)
for $`s=1,2,3`$. This condition is met for triholomorphic $`G_I`$, provided that the commutators vanish, $`[G_I,G_J]=0`$. Since the $`K_A`$ all commute among themselves, this last condition is also satisfied in the above low energy dynamics of monopoles. When quantized, the spinors $`\psi ^E=e_m^E\psi ^m`$, with vielbein $`e_m^E`$, commute with all the bosonic dynamical variables, in particular with the $`p`$'s that are the canonical momenta of the coordinates $`z`$. The remaining fundamental commutation relations are
$$[z^m,p_n]=i\delta _n^m,\qquad \{\psi _\alpha ^E,\psi _\beta ^F\}=\delta ^{EF}\delta _{\alpha \beta }.$$ (62)
(Consequently, the bosonic momenta $`p`$ do not commute with $`\psi ^m`$.) It is straightforward to show that the Lagrangian (57) is invariant under the N=4 supersymmetry transformations
$$\delta _{(0)}z^m=\overline{ϵ}\psi ^m,\qquad \delta _{(0)}\psi ^m=i\dot{z}^m\gamma ^0ϵ-\mathrm{\Gamma }_{np}^m\overline{ϵ}\psi ^n\psi ^p-i(G^I\mathrm{\Omega }^I)^mϵ,$$ (63)
$$\delta _{(s)}z^m=\overline{ϵ}_{(s)}(\mathcal{I}^{(s)}\psi )^m,\qquad \delta _{(s)}\psi ^m=i(\mathcal{I}^{(s)}\dot{z})^m\gamma ^0ϵ_{(s)}+\mathcal{I}_l^{(s)m}\mathrm{\Gamma }_{np}^l\overline{ϵ}_{(s)}(\mathcal{I}^{(s)}\psi )^n(\mathcal{I}^{(s)}\psi )^p-i(G^I\mathcal{I}^{(s)}\mathrm{\Omega }^I)^mϵ_{(s)},$$ (64)
where $`ϵ`$ and $`ϵ_{(s)}`$ are spinor parameters and no summation convention is used for the index $`s=1,2,3`$. For the supercharges, let us first define the supercovariant momenta by
$$\pi _m\equiv p_m-\frac{i}{2}\omega _{EFm}\overline{\psi }^E\gamma ^0\psi ^F,$$ (65)
where $`\omega _{EFm}`$ is the spin connection.
The corresponding N=4 SUSY generators, in real spinors, are then
$$Q_\alpha =\psi _\alpha ^m\pi _m-(\gamma ^0\mathrm{\Omega }^I\psi )_\alpha ^mG_m^I,$$ (66)
$$Q_\alpha ^{(s)}=(\mathcal{I}^{(s)}\psi )_\alpha ^m\pi _m-(\gamma ^0\mathcal{I}^{(s)}\mathrm{\Omega }^I\psi )_\alpha ^mG_m^I.$$ (67)
These charges satisfy the N=4 superalgebra:
$$\{Q_\alpha ,Q_\beta \}=\{Q_\alpha ^{(s)},Q_\beta ^{(s)}\}=2\delta _{\alpha \beta }\mathcal{H}-2(\gamma ^0\gamma ^1)_{\alpha \beta }𝒵_4-2(\gamma ^0\gamma ^2)_{\alpha \beta }𝒵_5,$$ (68)
$$\{Q_\alpha ,Q_\beta ^{(s)}\}=2\gamma _{\alpha \beta }^0𝒵_s,\qquad \{Q_\alpha ^{(1)},Q_\beta ^{(2)}\}=2\gamma _{\alpha \beta }^0𝒵_3,$$ (69)
$$\{Q_\alpha ^{(2)},Q_\beta ^{(3)}\}=2\gamma _{\alpha \beta }^0𝒵_1,\qquad \{Q_\alpha ^{(3)},Q_\beta ^{(1)}\}=2\gamma _{\alpha \beta }^0𝒵_2,$$ (70)
where the Hamiltonian $`\mathcal{H}`$ and the central charges $`𝒵_I`$ read
$$\mathcal{H}=\frac{1}{2}\left(\frac{1}{\sqrt{g}}\pi _m\sqrt{g}g^{mn}\pi _n+g^{mn}G_m^IG_n^I-\frac{1}{4}R_{mnpq}\overline{\psi }^m\gamma ^0\psi ^n\overline{\psi }^p\gamma ^0\psi ^q+iD_mG_n^I\overline{\psi }^m\mathrm{\Omega }^I\psi ^n\right),$$ (71)
$$𝒵_I=G_I^m\pi _m-\frac{i}{2}D_mG_n^I\overline{\psi }^m\gamma ^0\psi ^n.$$ (72)
It is straightforward to see that the SO(5) rotation is realized by the transformation
$$\psi \to e^{\frac{1}{2}\theta _{KL}J_{KL}}\psi ,$$ (73)
where $`\theta _{KL}`$ is antisymmetric in its indices and the corresponding generators, $`J_{KL}(=-J_{LK})`$, denote
$$J_{ab}=ϵ_{abc}\mathcal{I}^{(c)},\quad J_{45}=i\sigma _2,\quad J_{4a}=\sigma _1\mathcal{I}^{(a)},\quad J_{5a}=\sigma _3\mathcal{I}^{(a)},$$ (74)
with $`a,b,c=1,2,3`$. For example, the transformation reads explicitly
$$\psi _\alpha ^m\to \mathrm{cos}\theta \,\psi _\alpha ^m+\mathrm{sin}\theta \,(\sigma _1\mathcal{I}^{(1)}\psi )_\alpha ^m$$ (75)
when only $`\theta _{41}=-\theta _{14}=\theta `$ is nonvanishing. Performing such SO(5) rotations, we obtain a theory with the vacuum expectation values $`𝐚_I^{}=\mathcal{R}_{IJ}𝐚_J`$, where $`\mathcal{R}_{IJ}`$ is the corresponding SO(5) rotation matrix, satisfying $`\mathcal{R}^T\mathcal{R}=I`$. More specifically, the induced transformation of $`G_I`$, which is linear in the vacuum expectation values $`𝐚_I`$, is
$$G_I\to \left(e^{\theta _{KL}𝒥_{KL}}\right)_{IJ}G_J,$$ (76)
where $`(𝒥_{KL})_{IJ}=\frac{1}{2}(\delta _{KI}\delta _{LJ}-\delta _{KJ}\delta _{LI})`$. When all the $`𝐚_I`$ are parallel to each other, one may make only one Higgs expectation value nonvanishing by an appropriate SO(5) rotation; the result corresponds to the 1/4 BPS effective Lagrangian of Ref. . The ten generators of SO(5) in (74) exhaust all the possible covariantly constant, antisymmetric structures present in the N=4 supersymmetric sigma model without potential, so the realization of the R-symmetry is rather unique. The complex form of the supercharges is often useful. For this, we introduce $`\phi ^m\equiv \frac{1}{\sqrt{2}}(\psi _1^m-i\psi _2^m)`$ and define $`Q\equiv \frac{1}{\sqrt{2}}(Q_1-iQ_2)`$. The supercharges in (66) can be rewritten as
$$Q=\phi ^m\pi _m-\phi ^m(G_m^4-iG_m^5)-i\underset{s=1}{\overset{3}{\sum }}G_m^s(\mathcal{I}^{(s)}\phi )^m,$$ (77)
$$Q^{\dagger }=\phi ^{\dagger m}\pi _m-\phi ^{\dagger m}(G_m^4+iG_m^5)+i\underset{s=1}{\overset{3}{\sum }}G_m^s(\mathcal{I}^{(s)}\phi ^{\dagger })^m.$$ (78)
The supercharges $`Q^{(s)}`$ and $`Q^{(s)\dagger }`$ are found similarly from (67). Finally, $`\{Q,Q^{\dagger }\}=\{Q^{(s)},Q^{(s)\dagger }\}=2\mathcal{H}`$, so the Hamiltonian is positive definite. All the central charges appear in other parts of the algebra.
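The quaternionic algebra (54) underlying all of the above can be verified directly in the simplest hyperKähler space, flat ℝ⁴, where the three complex structures are the constant matrices of left multiplication by the quaternion units (a standard representation; the identification with the monopole moduli space structures is only schematic here):

```python
import numpy as np

# Left multiplication by i, j, k on a quaternion (q0, q1, q2, q3):
I1 = np.array([[0,-1,0,0],[1,0,0,0],[0,0,0,-1],[0,0,1,0]], dtype=float)
I2 = np.array([[0,0,-1,0],[0,0,0,1],[1,0,0,0],[0,-1,0,0]], dtype=float)
I3 = np.array([[0,0,0,-1],[0,0,-1,0],[0,1,0,0],[1,0,0,0]], dtype=float)

Is, one = [I1, I2, I3], np.eye(4)
eps = np.zeros((3, 3, 3))
eps[0,1,2] = eps[1,2,0] = eps[2,0,1] = 1
eps[0,2,1] = eps[2,1,0] = eps[1,0,2] = -1

# Check I^(s) I^(t) = -delta^{st} + eps^{stu} I^(u), eq. (54):
for s in range(3):
    for t in range(3):
        rhs = -(s == t)*one + sum(eps[s, t, u]*Is[u] for u in range(3))
        assert np.allclose(Is[s] @ Is[t], rhs)
print("complex structure algebra (54) verified")
```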
In Ref. , the quantum 1/4 BPS wavefunctions have been constructed explicitly and the structure of the supermultiplet identified. This construction relied heavily upon the BPS nature of the states, which satisfy first order equations. This kind of simplification does not occur in the case of the stable non BPS states, so we will leave the analysis of their wavefunctions for future work.

## 6 Conclusion

We have found the complete supersymmetric Lagrangian for the low energy dynamics of 1/2 BPS monopoles when all six adjoint Higgs fields get expectation values. We consider the nonrelativistic dynamics of monopoles, which constrains the five additional Higgs expectation values to be small compared to the first, which gives mass to the monopoles. The bosonic part of the effective dynamics is found by a perturbative expansion of the fields around purely magnetic monopole configurations, and it agrees with the previously found exact Lagrangian when only two Higgs fields, one large and one small, are involved. The supersymmetric extension is constructed and argued to be unique, given the four complex supercharges and a would-be SO(5) R-symmetry that is softly broken by the five Killing potential terms. The dyonic states of the low energy dynamics correspond to webs of strings ending on D3 branes, when realized as type IIB string theory configurations. This is possible for all classical gauge groups of the Yang-Mills theory. When the transverse positions of the D3 branes are planar, only one of the five Killing vectors becomes relevant, and the state saturates a BPS bound which is linearly characterized by the values of the electric charges. These are 1/4 BPS dyons in the field theory sense. For a non planar distribution of D3 branes, on the other hand, at least two of the five Killing vectors are relevant. The resulting non planar web does not saturate a BPS bound, but exists as a stable dyonic state. The state will consist of several distinct monopoles, each dressed with some electric charge that is mostly determined by the inter-monopole separations. For a simple case, we gave a set of algebraic equations that can be used to determine the charge-position relationship. We have not fully explored the low energy effective Lagrangian, even classically. It is expected that there exists a clear correspondence between the energy of a non planar string web and the minimum energy of the stable but non BPS states. The energy of the string web, in an appropriate limit, can be determined as the sum of the length of each segment multiplied by its tension. The detailed comparison will be of interest. It would be interesting to find out the field theoretic configurations for these non BPS dyonic states. The quantum mechanics of the supersymmetric Lagrangian is more involved than that of Ref. , which considered only one Killing potential. Nevertheless, it is of some interest to find the ground state for given electric charges.

Acknowledgments

D.B. is supported in part by Ministry of Education Grant 98-015-D00061. K.L. is supported in part by the SRC program of the SNU-CTP and the Basic Science and Research Program under BRSI-98-2418. D.B. and K.L. are also supported in part by KOSEF 1998 Interdisciplinary Research Grant 98-07-02-07-01-5.
# Pressure and Maxwell Tensor in a Coulomb Fluid B. Jancovici<sup>1</sup> ## Abstract The pressure in a classical Coulomb fluid at equilibrium is obtained from the Maxwell tensor at some point inside the fluid, by a suitable statistical average. For fluids in a Euclidean space, this is a fresh look at known results. But, for fluids in a curved space, a case which is of some interest, the unambiguous results from the Maxwell tensor approach have not been obtained by other methods. KEY WORDS: Coulomb fluids; pressure; Maxwell tensor; curved space. LPT Orsay 99-103 December 1999 <sup>1</sup> Laboratoire de Physique Théorique, Université de Paris-Sud, Bâtiment 210, 91405 Orsay, France (Unité Mixte de Recherche n° 8627 - CNRS). E-mail: Bernard.Jancovici@th.u-psud.fr 1. INTRODUCTION The aim of the present paper is to revisit the concept of pressure for a Coulomb fluid, i.e. a fluid made of particles interacting through Coulomb’s law (electrolyte, plasma,…). We consider only fluids in thermodynamic equilibrium, and assume that classical (i.e. non-quantum) statistical mechanics is applicable. Pressure is often defined as the force per unit area that a fluid exerts on the walls of a (large) vessel containing it. However, pressure may also be defined without reference to any wall. One has to imagine some immaterial plane surface across the fluid; the pressure is then the force per unit area with which the fluid lying on one side of this surface pushes on the fluid lying on the other side. Both definitions agree with each other. From a microscopic point of view, the force between two parts of the fluid is usually described in terms of the interactions between the molecules. Two molecules at a distance $`r`$ from each other are supposed to interact through some potential. This is the standard approach, which is briefly recalled in Section 2. In the case of electromagnetism, Maxwell, following Faraday, came to a different point of view: the forces between two charged objects are mediated by fields. At a given point of space, even in vacuum outside the charges, there is a stress tensor (the Maxwell stress tensor), a local quantity defined in terms of the fields at that point, similar to the stress tensor within some elastic medium. In this picture, every region of “empty” space exerts forces on the regions adjacent to it. In Section 3, it is shown how the pressure at some point inside a Coulomb fluid can be defined and computed from the Maxwell tensor at that point by a suitable statistical average, with an appropriate prescription for obtaining a finite result. This result agrees with the standard one. Section 4 extends the above ideas to the case of two-dimensional models. Section 5 discusses the case of Coulomb fluids living in a curved space. In this case, the Maxwell tensor approach will be shown to be especially appropriate. 2. A SUMMARY OF THE STANDARD APPROACH In the simple case of a fluid made of one species of particles, with a pair interaction $`v(r)`$ depending only on the distance $`r`$, the pressure $`P`$ is found to be, in the thermodynamic limit, $$P=nkT-\frac{1}{6}n^2\int \frac{dv}{dr}\,r\,g(r)\,d𝐫$$ $`(2.1)`$ where $`n`$ is the number density (number of particles per unit volume), $`k`$ is Boltzmann’s constant, $`T`$ is the temperature, and $`g(r)`$ is the pair distribution function. In (2.1), $`nkT`$ is the ideal gas part of the pressure (related to the momentum carried by the particles), while the following term, due to the interactions, is called the excess pressure $`P_{ex}`$.
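As a quick numerical illustration of (2.1), not taken from the original, one may consider a toy fluid with a short-ranged Yukawa pair potential and the crude approximation $`g(r)1`$ (negligible correlations); an integration by parts then reduces the excess term of (2.1) to the familiar mean-field expression $`P_{ex}=(n^2/2)\int v(r)\,d𝐫`$, and a few lines of quadrature confirm the identity:

```python
import numpy as np
from scipy.integrate import quad

# Toy check of the virial formula (2.1) with g(r) = 1 and a Yukawa potential
# v(r) = eps * exp(-r/lam) / r (all parameter values arbitrary; illustration only).
n, eps, lam = 0.3, 1.0, 1.5
v  = lambda r: eps * np.exp(-r/lam) / r
dv = lambda r: -eps * np.exp(-r/lam) * (1.0/r**2 + 1.0/(lam*r))   # dv/dr

P_virial    = -(n**2/6.0) * quad(lambda r: dv(r)*r * 4.0*np.pi*r**2, 0.0, np.inf)[0]
P_meanfield =  (n**2/2.0) * quad(lambda r: v(r)    * 4.0*np.pi*r**2, 0.0, np.inf)[0]
print(P_virial, P_meanfield)    # equal, by integration by parts on (2.1)
```

For the bare Coulomb potential the integral $`\int v(r)\,d𝐫`$ diverges, which is why, for the one-component plasma considered next, the excess pressure involves the background-subtracted correlation $`h=g-1`$.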
The same equation (2.1) is obtained by looking either at the pressure on the walls<sup>(1)</sup> or at the pressure in the bulk fluid<sup>(2)</sup>. Real Coulomb fluids are made of several species of particles (for instance, in an electrolyte, positive and negative ions, plus the solvent molecules). In a classical model, some short-range non-Coulombic interaction must be introduced, for avoiding the collapse on each other of particles of opposite sign. Here, for simplicity, we rather consider only a simplified model, the one-component plasma (OCP)<sup>(3)</sup>: identical point-particles of one sign, each of them carrying an electric charge $`q`$, embedded in a uniform background of opposite sign which ensures overall neutrality. Only the Coulomb interaction is retained, thus the interaction is $`v(r)=q^2/r`$ and $`(dv/dr)r=q^2/r`$. Due to the background, the average charge in a volume element $`d𝐫`$ at a distance $`r`$ of a given particle is $`qn[g(r)1]`$ rather than $`qng(r)`$, and, in the case of the OCP, equation (1) is to be replaced by $$P=nkT+\frac{q^2n^2}{6}\frac{1}{r}h(r)𝑑𝐫$$ $`(2.2)`$ where $`h(r)=g(r)1`$ is the pair correlation function. It may be noted that $`P_{ex}`$ is one third of the potential energy density.<sup>1</sup><sup>1</sup>1For an OCP, there are several non-equivalent possible definitions of the pressure. The pressure (2.2) is the thermal pressure, in the sense of Choquard et al.<sup>(4)</sup> The above standard approach is based on the assumption of an interaction-at-distance $`q^2/r`$. In the next Section, it will be shown how (2.2) can be derived by using the Maxwell tensor. 3. THE MAXWELL TENSOR APPROACH If only electrostatic interactions are retained (magnetic effects are neglected), the Maxwell tensor is<sup>(5)</sup> $$T_{\alpha \beta }=\frac{1}{4\pi }(E_\alpha E_\beta \frac{1}{2}𝐄𝐄\delta _{\alpha \beta })$$ $`(3.1)`$ In (3.1), the Greek indices label the three Cartesian axes $`(x,y,z)`$. $`T_{\alpha \beta }`$ is the $`\alpha `$ component of the force per unit area transmitted, across a plane normal to the $`\beta `$ axis, to the fluid lying on the negative side of this plane. Thus, choosing for $`\beta `$ any axis, say the $`x`$ axis, one obtains for the excess pressure, which is a force along that axis, $$P_{ex}=<T_{xx}>=\frac{1}{8\pi }<E_x^{\mathrm{\hspace{0.17em}2}}E_y^{\mathrm{\hspace{0.17em}2}}E_z^{\mathrm{\hspace{0.17em}2}}>$$ $`(3.2)`$ where $`<\mathrm{}>`$ denotes a statistical average on all particle configurations (the electric field at some point is a function of the particle configuration). Our task is to evaluate the statistical average (3.2) at some point inside the fluid, say at the origin. Let $`\rho ^{(2)}(r_{12})`$ be the statistical average of the product microscopic charge density at $`𝐫_1`$ times microscopic charge density at $`𝐫_2`$ ($`𝐫_{12}=𝐫_2𝐫_1`$). From (3.2), $$P_{ex}=\frac{1}{8\pi }𝑑𝐫_1𝑑𝐫_2\frac{x_1x_2y_1y_2z_1z_2}{r_1^{\mathrm{\hspace{0.17em}3}}r_2^{\mathrm{\hspace{0.17em}3}}}\rho ^{(2)}(r_{12})$$ $`(3.3)`$ In the present case of an OCP, $$\rho ^{(2)}(r_{12})=q^2[n\delta (𝐫_{12})+n^2h(r_{12})]$$ $`(3.4)`$ Using (3.4) in (3.3) gives to $`P_{ex}`$ two contributions $`P_{self}`$ and $`P_{nonself}`$ involving the $`\delta `$ part and the $`h`$ part of (3.4), respectively. $`P_{nonself}`$ gives no difficulty. This is a convergent integral (indeed, $`h`$ is -1 at small $`r_{12}`$ because the particles strongly repel each other, and $`h`$ has a fast decay at large $`r_{12}`$ because remote particles are uncorrelated). 
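Equation (2.2) can also be checked in a solvable limit. In the weak-coupling (Debye-Hückel) regime the OCP correlation function takes the standard form $`h(r)=-(\beta q^2/r)e^{-\kappa r}`$ with $`\kappa ^2=4\pi \beta q^2n`$ and $`\beta =1/kT`$; this input is assumed here, it is not derived in the text. Inserting it into (2.2) gives the known excess pressure $`-nq^2\kappa /6`$, one third of the Debye-Hückel potential energy density, and the quadrature below reproduces the closed form:

```python
import numpy as np
from scipy.integrate import quad

# Debye-Hueckel check of eq. (2.2) (Gaussian units; parameter values arbitrary,
# chosen in the weak-coupling regime where the assumed h(r) is valid).
q, n, beta = 1.0, 0.05, 0.2
kappa = np.sqrt(4.0*np.pi*beta*q**2*n)
h = lambda r: -(beta*q**2/r)*np.exp(-kappa*r)

# Eq. (2.2): P_ex = (q^2 n^2/6) int d3r h(r)/r = (q^2 n^2/6) * 4 pi * int_0^inf r h(r) dr
P_ex = (q**2*n**2/6.0)*4.0*np.pi*quad(lambda r: r*h(r), 0.0, np.inf)[0]
print(P_ex, -n*q**2*kappa/6.0)    # agree: P_ex = -n q^2 kappa / 6
```

With this check done, let us return to the evaluation of the convergent integral $`P_{nonself}`$.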
Because of the rotational symmetry around the origin, it can be rewritten as $$P_{nonself}=\frac{q^2n^2}{24\pi }𝑑𝐫_1𝑑𝐫_2\frac{𝐫_1𝐫_2}{r_1^{\mathrm{\hspace{0.17em}3}}r_2^{\mathrm{\hspace{0.17em}3}}}h(r_{12})$$ $`(3.5)`$ But $$P_{self}=\frac{nq^2}{8\pi }𝑑𝐫\frac{x^2y^2z^2}{r^6}$$ $`(3.6)`$ diverges at small $`r`$. The resolution of the difficulty is that the force that each particle exerts on itself should not be taken into account. Thus, the integral in (3.6) must be regularized by the prescription that no particle sits on the $`x=0`$ plane on which we have chosen to compute the pressure force. This prescription can be enforced by removing from the integration domains a thin slab $`\epsilon <x<\epsilon `$ and taking the limit $`\epsilon 0`$ at the end. This prescription does not change the convergent integral (3.5). But it means that the self part (3.6) must be defined, in cylindrical coordinates $`(x,\rho )`$, as $$P_{self}=\frac{nq^2}{8\pi }\underset{\epsilon 0}{lim}_{|x|>\epsilon }𝑑x_0^{\mathrm{}}2\pi 𝑑\rho \rho \frac{x^2\rho ^2}{(x^2+\rho ^2)^3}$$ $`(3.7)`$ Since the integral on $`\rho `$, performed first, is found to vanish, the result is $`P_{self}=0`$. As to $`P_{nonself}`$, (3.5) can be easily computed by taking as integration variables $`𝐫_1`$ and $`𝐫_{12}`$, and performing the integral on $`𝐫_1`$ first with the result $`4\pi /r_{12}`$. The final result is $$P_{ex}=P_{nonself}=\frac{q^2n^2}{6}𝑑𝐫_{12}\frac{1}{r_{12}}h(r_{12})$$ $`(3.8)`$ in agreement with the standard formula (2.2). An alternative way of calculating $`P_{self}`$ will turn out to be more appropriate for extensions which follow. (3.6) is split into the contributions $`P_0`$ of $`r<r_0`$ and $`P_1`$ of $`r>r_0`$, where $`r_0`$ is the radius of a small sphere centered at the origin. The prescription that no particle sits on the plane $`x=0`$ does not change the convergent part $`P_1`$, which can be computed, using the rotational symmetry, as $$P_1=\frac{nq^2}{24\pi }_{r>r_0}\frac{d𝐫}{r^4}=\frac{nq^2}{6r_0}$$ $`(3.9)`$ It is only in $`P_0`$ that the rotational symmetry is broken by the prescription $`|x|>\epsilon `$, which gives $$P_0=\frac{nq^2}{8\pi }\underset{\epsilon 0}{lim}_{\epsilon <|x|<r_0}𝑑x_0^{\sqrt{r_0^{\mathrm{\hspace{0.17em}2}}x^2}}2\pi 𝑑\rho \rho \frac{x^2\rho ^2}{(x^2+\rho ^2)^3}=\frac{nq^2}{6r_0}$$ $`(3.10)`$ Thus $`P_{self}=P_0+P_1=0`$, and (2.2) is retrieved. 4. TWO-DIMENSIONAL MODELS Two-dimensional models of Coulomb fluids are of interest for at least two reasons. First,some of these models are physically relevant. Second, exact results are available. The two-dimensional case has special features which require the present separate discussion. In two dimensions, the Coulomb interaction (as defined through the Poisson equation) between two charges $`q`$ and $`q^{}`$ is $`qq^{}\mathrm{ln}(r/L)`$, where $`L`$ is some irrelevant length. Since this interaction diverges at $`r=0`$ only mildly, in addition to the OCP it is also possible to consider a two-component plasma (TCP), made of positive and negative point-particles of respective charges $`q`$ and $`q`$, without any additional short-range repulsion (which is stable provided that the coupling constant $`\mathrm{\Gamma }=q^2/kT`$ be smaller than 2). 
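Returning for a moment to the three-dimensional prescription, eqs. (3.7)-(3.10) of the previous section are easy to verify numerically. In the sketch below (units $`n=q=r_0=1`$) the signs are fixed by the requirement $`P_{self}=P_0+P_1=0`$ stated above, i.e. $`P_0=-nq^2/(6r_0)`$ cancelling $`P_1=+nq^2/(6r_0)`$:

```python
import numpy as np
from scipy.integrate import quad

n, q, r0 = 1.0, 1.0, 1.0
f = lambda rho, x: 2.0*np.pi*rho*(x**2 - rho**2)/(x**2 + rho**2)**3

# Eq. (3.7): at fixed x != 0 the rho-integral over (0, inf) vanishes, so P_self = 0
print(quad(f, 0.0, np.inf, args=(0.37,))[0])          # ~ 0 within quadrature error

# Inner integral of eq. (3.10), cut off at the small sphere r = r0: analytically
# pi*(r0^2 - x^2)/r0^4, from the antiderivative pi*u/(x^2 + u)^2 with u = rho^2
for x in (0.2, 0.5, 0.8):
    num = quad(f, 0.0, np.sqrt(r0**2 - x**2), args=(x,))[0]
    print(num, np.pi*(r0**2 - x**2)/r0**4)

# Integrating over -r0 < x < r0 then gives P_0 = -(n q^2/(8 pi)) * 4 pi/(3 r0)
# = -n q^2/(6 r0), which cancels P_1 = +n q^2/(6 r0) of eq. (3.9).
```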
For the OCP, the two-dimensional analog of (2.1), with the background taken into account, is $$P=nkT-\frac{1}{4}n^2\int \frac{dv}{dr}\,r\,h(r)\,d𝐫$$ $`(4.1)`$ Now $`v(r)=-q^2\mathrm{ln}(r/L)`$ and (4.1) becomes $$P=nkT+\frac{1}{4}n^2q^2\int h(r)\,d𝐫$$ $`(4.2)`$ Perfect screening, present in a conductor, says that $$n\int h(r)\,d𝐫=-1$$ $`(4.3)`$ (this means that the polarization cloud around a particle of charge $`q`$ carries the opposite charge $`-q`$). Using (4.3) in (4.2) gives the simple exact equation of state <sup>(6,7)</sup> $$P=n\left(kT-\frac{q^2}{4}\right)$$ $`(4.4)`$ Now, $`P_{ex}=-nq^2/4`$ is no longer related to the potential energy density. We now turn to the Maxwell tensor approach. In two dimensions, the Maxwell tensor is $$T_{\alpha \beta }=\frac{1}{2\pi }(E_\alpha E_\beta -\frac{1}{2}𝐄𝐄\delta _{\alpha \beta })$$ $`(4.5)`$ with Greek indices now labeling two Cartesian axes $`(x,y)`$. (3.3) is replaced by $$P_{ex}=-\frac{1}{4\pi }\int d𝐫_1\,d𝐫_2\,\frac{x_1x_2-y_1y_2}{r_1^2r_2^2}\rho ^{(2)}(r_{12})$$ $`(4.6)`$ where, for an OCP, (3.4) still holds. Now, although (4.6) still converges for large values of $`r_1`$ and $`r_2`$ (because $`\rho ^{(2)}(r_{12})`$ has a fast decay as $`r_{12}`$ increases and its integral vanishes), separating it into self and nonself parts would generate terms separately diverging at infinity. Here, it is more appropriate to split (4.6) in another way, similar to what has been done at the end of Section 3. Namely, in (4.6), one separates the contribution $`P_0`$ of the integration domain $`(r_1,r_2<r_0)`$ and the rest $`P_{ex}-P_0`$. This rest is a convergent integral and, by rotational symmetry, it vanishes. One is left with $`P_0`$, which can be split into its self and nonself parts, with now a nonself part which is convergent and also vanishes by rotational symmetry. Finally, the self part has to be defined in the same way as (3.10), and $$P_0=-\frac{nq^2}{4\pi }\underset{\epsilon \rightarrow 0}{lim}\int _{\epsilon <|x|<r_0}dx\int _{-\sqrt{r_0^2-x^2}}^{\sqrt{r_0^2-x^2}}dy\,\frac{x^2-y^2}{(x^2+y^2)^2}=-\frac{nq^2}{4}$$ $`(4.7)`$ Thus $$P_{ex}=P_0=-\frac{nq^2}{4}$$ $`(4.8)`$ in agreement with (4.4). Similar considerations hold for the TCP, as long as $`\mathrm{\Gamma }<2`$, and the equation of state again is (4.4), where now $`n`$ is the total number density of the particles. The two-dimensional OCP can also be obtained as a limit of the $`\nu `$-dimensional one, as explained in Appendix A. 5. CURVED SPACES The statistical mechanics of a Coulomb fluid living in a curved space is of interest for at least two reasons. First, for doing numerical simulations (necessarily on a finite system) without having to deal with boundary effects, a clever method has been to confine the system on the surface of a sphere (in the two-dimensional case)<sup>(8)</sup> or a hypersphere (in the three-dimensional case)<sup>(9,10)</sup>. Second, for two-dimensional Coulomb fluids on a surface of constant negative curvature (pseudosphere)<sup>(11)</sup>, it is possible to go to the limit of an infinite system while keeping a finite curvature, thus to look at the properties of a curved infinite system (something which cannot be done for a sphere or hypersphere). The present paper actually arose from the question: How to define the pressure of a Coulomb fluid in a curved space, away from any wall?
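The two-dimensional results (4.2)-(4.4) can also be illustrated in the Debye-Hückel regime, where $`h(r)=-\beta q^2K_0(\kappa r)`$ with $`\kappa ^2=2\pi \beta q^2n`$ (again an assumed weak-coupling input, not derived in the text). This $`h`$ satisfies the perfect-screening rule (4.3) exactly, and with it (4.2) reproduces the equation of state (4.4):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import k0

# Assumed 2D Debye-Hueckel correlation: h(r) = -beta q^2 K0(kappa r),
# kappa^2 = 2 pi beta q^2 n (weak coupling; parameter values arbitrary).
q, n, beta = 1.0, 0.02, 0.1
kappa = np.sqrt(2.0*np.pi*beta*q**2*n)
h = lambda r: -beta*q**2*k0(kappa*r)

S = n*quad(lambda r: 2.0*np.pi*r*h(r), 0.0, np.inf)[0]
print(S)                              # -1: the perfect-screening rule (4.3)

P_ex = (n*q**2/4.0)*S                 # eq. (4.2) minus the ideal-gas part
print(P_ex, -n*q**2/4.0)              # eq. (4.4): P_ex = -n q^2/4, independent of T
```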
A formula like (2.1) is based on the interaction-at-distance picture: the force acting on the fluid lying on one side of some immaterial plane is the sum of elementary forces acting on each molecule. This picture cannot be generalized to the case of a curved space, because there is no straightforward way of summing forces (vectors) applied at different points of space. Thus, the Maxwell tensor picture seems to be the only possible one, defining the pressure at a given point of space as a local quantity depending only on the electric field at this point. Three kinds of Coulomb fluids will be considered: The three-dimensional OCP on a hypersphere, the two-dimensional OCP or TCP on a sphere, the two-dimensional OCP or TCP on a pseudosphere. 5.1.OCP on a Hypersphere The hypersphere is the four-dimensional analog of the usual sphere. We consider an OCP living on the three-dimensional “surface” $`S_3`$ of a hypersphere of radius $`R`$. On $`S_3`$, the geodesic distance between two points is $`R\psi `$, where $`\psi [0,\pi ]`$ is the angular distance between these points, as seen from the center of the hypersphere. The volume element between two concentric spheres of radii $`R\psi `$ and $`R(\psi +d\psi )`$ is $`dV=4\pi R^3\mathrm{sin}^2\psi d\psi `$. The total volume of $`S_3`$ is $`V=2\pi ^2R^3`$. Since $`S_3`$ is a compact manifold without boundary, electric potentials and fields can be defined only if the total charge is zero. In particular, the electric field created by one point charge cannot be defined. For overcoming this difficulty, one can consider the OCP as a collection of pseudocharges <sup>(9)</sup>: a pseudocharge is defined as a point charge $`q`$ plus a uniform background of total charge $`q`$. At a point $`M`$ located at a geodesic distance $`R\psi `$ from a pseudocharge located at $`M_0`$, the electric potential created by the pseudocharge is $$\mathrm{\Phi }=\frac{q}{\pi R}\left((\pi \psi )\text{ctn}\psi \frac{1}{2}\right)+V_0$$ $`(5.1)`$ where $`V_0`$ is an arbitrary constant. The corresponding electric field at $`M`$ is $$𝐄=\frac{q}{\pi R^2}\left(\text{ctn}\psi +\frac{\pi \psi }{\mathrm{sin}^2\psi }\right)𝐭$$ $`(5.2)`$ where $`𝐭`$ is the unit vector tangent to the geodesic $`MM_0`$ at $`M`$. From (5.1), the interaction energy between two pseudocharges $`i`$ and $`j`$ at a geodesic distance $`R\psi _{ij}`$ of each other is found to be $$\varphi (\psi _{ij})=\frac{q^2}{\pi R}\left((\pi \psi _{ij})\text{ctn}\psi _{ij}\frac{1}{2}\right)$$ $`(5.3)`$ independent of $`V_0`$. The excess pressure is given by (3.2) where the electric field can be written as $`𝐄=𝐄_i`$, with $`𝐄_i`$ the field created by the i-th pseudocharge. As above, (3.2) can be split into a self part $`P_{self}`$ (made of $`𝐄_i𝐄_i`$ terms) and a nonself part $`P_{nonself}`$ (made of $`𝐄_i𝐄_j(ij)`$ terms). Because of the rotational symmetry, $`P_{nonself}`$ can be written as $$P_{nonself}=\frac{1}{24\pi }\underset{ij}{}𝐄_i𝐄_j=\frac{1}{3}u_{nonself}$$ $`(5.4)`$ where $`u_{nonself}`$ is the nonself part of the potential energy density, which can be reexpressed in terms of the interaction $`\varphi `$ rather than in terms of fields, by the usual integration by parts, as $$u_{nonself}=\frac{1}{2\pi ^2R^3}\underset{i<j}{}\varphi (\psi _{ij})=\frac{n^2}{2}\varphi (\psi )h(\psi )𝑑V$$ $`(5.5)`$ where one can use the pair correlation function $`h`$ rather than the pair distribution function $`g=h+1`$ since $`\varphi 𝑑V=0`$. 
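Two quick symbolic checks of the pseudocharge formulas (5.1)-(5.2), with the subtractions written out explicitly, may be useful at this point (a SymPy sketch, not part of the original). First, the field (5.2) is indeed minus the geodesic gradient of the potential (5.1), $`E=-(1/R)\mathrm{\Phi }/\psi `$; second, Gauss’s theorem holds on $`S_3`$: the flux of $`𝐄`$ through the sphere of geodesic radius $`R\psi `$ equals $`4\pi `$ times the enclosed charge, point charge plus uniform background.

```python
import sympy as sp

psi, q, R = sp.symbols('psi q R', positive=True)
Phi = (q/(sp.pi*R))*((sp.pi - psi)*sp.cot(psi) - sp.Rational(1, 2))   # eq. (5.1), V0 = 0
E = -sp.diff(Phi, psi)/R                                              # minus geodesic gradient
print(sp.simplify(E - (q/(sp.pi*R**2))*(sp.cot(psi) + (sp.pi - psi)/sp.sin(psi)**2)))  # 0

# Gauss's theorem on S3: flux = 4 pi * (q + enclosed background charge),
# with background charge density -q/(2 pi^2 R^3)
flux = E*4*sp.pi*R**2*sp.sin(psi)**2
Q_enc = q*(1 - (psi - sp.sin(psi)*sp.cos(psi))/sp.pi)
print(sp.simplify(flux - 4*sp.pi*Q_enc))                              # 0
```

These identities underlie both the nonself and self parts evaluated next.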
As to $`P_{self}`$, it is a divergent integral which however can be made finite by adapting what has been done at the end of Section 3, namely splitting it into the contribution $`P_0`$ of geodesic distances $`R\psi <R\psi _0`$, and the finite contribution $`P_1`$ of $`R\psi >R\psi _0`$ for which rotational symmetry can be used. In terms of $`E(\psi )`$ given by (5.2), $$P_1=\frac{n}{24\pi }_{\psi _0}^\pi E^2(\psi )4\pi R^3\mathrm{sin}^2\psi d\psi $$ $`(5.6)`$ Evaluating the integral in (5.6) as $`\psi _00`$ gives $$P_1=\frac{nq^2}{6R}\left(\frac{1}{\psi _0}\frac{3}{2\pi }+O(\psi _0)\right)$$ $`(5.7)`$ On the other hand, for $`P_0`$, the curvature effects become negligible as $`\psi _00`$ (more precisely, as shown in Appendix B, they are $`O(\psi _0)`$) and the Euclidean prescription (3.10) can be used, with $`r_0=R\psi _0`$, giving $$P_0=\frac{nq^2}{6R\psi _0}+O(\psi _0)$$ $`(5.8)`$ Finally, the total pressure is $$P=nkT+\frac{n^2}{6}\varphi (\psi )h(\psi )𝑑V\frac{nq^2}{4\pi R}$$ $`(5.9)`$ This is the generalization of (2.2) to the case of a hypersphere. The present evaluation of the pressure makes no explicit use of the self-energy of a pseudoparticle. This is an important remark. Indeed, another possible definition of the pressure would be minus the derivative of the free energy with respect to the volume. But, for evaluating the free energy, it is necessary to define properly the zero of energy for a system of pseudoparticles, and this necessarily involves some heuristic convention about the self-energy of a pseudoparticle. In ref.9. a reasonable convention gave $`3q^2/4\pi R`$ for this self-energy, and the corresponding pressure is identical with (5.9). This pressure does obey the usual relation $$P_{ex}=\frac{1}{3}u$$ $`(5.10)`$ where $`u`$ is the total potential energy density defined with the above convention. However, in ref.10, another reasonable convention (giving a faster approach to the thermodynamic limit as $`R\mathrm{}`$) has been used, with additional terms of order higher than $`1/R`$, and the pressure derived from the corresponding free energy no longer agrees with (5.9). The definition of the pressure in terms of the Maxwell tensor is free from this arbitrariness. 5.2.OCP or TCP on a Sphere The above considerations can be easily adapted to the (simpler) case of two-dimensional Coulomb systems living on the surface of a sphere. Now, the electric potential created by a pseudocharge is $$\mathrm{\Phi }=q\mathrm{ln}\mathrm{sin}(\psi /2)+V_0$$ $`(5.11)`$ $`P_{nonself}`$ and $`P_1`$, convergent integrals, vanish because of the rotational symmetry. One is left with $`P_0`$, for which the curvature effects are negligible and (4.7) holds with the same result (4.8) as in the case of a plane system. This result $`P_{ex}=nq^2/4`$ holds for an OCP, and also for a TCP when $`\mathrm{\Gamma }<2`$. Here too, the free energy depends on an arbitrary convention about the zero of energy. In ref.8, this convention was implicitly made by the way in which (5.11) was used together with the choice $`V_0=q\mathrm{ln}(2R/L)`$, with $`R`$ the radius of the sphere. It is only thanks to this convention that the corresponding free energy has a derivative with respect to the sphere area which correctly gives the equation of state (4.4). 
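As a numerical aside on the hypersphere calculation above, the small-$`\psi _0`$ behaviour (5.7) of $`P_1`$ is easy to confirm by direct quadrature of (5.6) (a sketch in units $`q=R=1`$, arbitrary $`n`$):

```python
import numpy as np
from scipy.integrate import quad

# E(psi) from eq. (5.2), with q = R = 1
E = lambda p: (1.0/np.pi)*(1.0/np.tan(p) + (np.pi - p)/np.sin(p)**2)

def P1(psi0, n=1.0):
    # Eq. (5.6): P_1 = (n/(24 pi)) int_{psi0}^{pi} E^2(psi) 4 pi sin^2(psi) dpsi
    return (n/6.0)*quad(lambda p: (E(p)*np.sin(p))**2, psi0, np.pi)[0]

for psi0 in (0.1, 0.03, 0.01):
    # Eq. (5.7): P_1 -> (n/6)(1/psi0 - 3/(2 pi)) as psi0 -> 0
    print(psi0, P1(psi0), (1.0/6.0)*(1.0/psi0 - 3.0/(2.0*np.pi)))
```

The two columns approach each other as $`\psi _00`$, confirming both the leading $`1/\psi _0`$ divergence and the constant term of (5.7).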
5.3. OCP or TCP on a Pseudosphere Recently, two-dimensional Coulomb systems living on a surface of constant negative curvature (a pseudosphere) were studied.<sup>(11)</sup> Since, unlike a sphere, a pseudosphere is infinite, one has the interesting possibility of considering systems which are both infinite and curved. Let $`a`$ be a length such that the Gaussian curvature of the pseudosphere is $`-1/a^2`$ (instead of $`1/R^2`$ on a sphere). Now, the electric potential and field created by a single point charge $`q`$ exist. At a geodesic distance $`s`$ from this charge, the electric potential is $$\mathrm{\Phi }=-q\mathrm{ln}\mathrm{tanh}\frac{s}{2a}$$ $`(5.12)`$ where the possible additive constant has been fixed by the condition that this potential vanishes at infinity ($`s\mathrm{}`$). The electric field is $$𝐄=\frac{q}{a\mathrm{sinh}(s/a)}𝐭$$ $`(5.13)`$ where $`𝐭`$ is the unit vector tangent to the geodesic. The pressure can be obtained from the Maxwell tensor just as in the case of a sphere, with the same result (4.4), i.e. $`P_{ex}=-nq^2/4`$. This pressure holds not only in the thermodynamic limit, but also at the center of a finite disk. The above result for the pressure calls for some discussion. On a pseudosphere, when the size of a large domain increases, its perimeter grows as fast as its area. As a consequence, there is no uniquely defined thermodynamic limit for the free energy per particle (this limit depends on the shape of the domain and on the boundary conditions). A bulk pressure cannot be defined by differentiating the free energy with respect to the area. In ref. 11, a bulk “pressure” $`p`$ was defined by its virial expansion, with the prescription that the thermodynamic limit of each virial coefficient $`B_k`$ (which seems to exist on a pseudosphere) has to be computed before the virial series in powers of the density $`n`$ is summed. It is now apparent that this $`p`$ is not identical with the pressure $`P`$ obtained from the Maxwell tensor in the form (4.4). We now believe that the correct pressure is $`P`$, while $`p`$ is only a mathematical quantity (seemingly with some interesting properties). Nevertheless, an important result of ref. 11 is true: there is at least one thermodynamic quantity, the bulk energy per particle, which has a series expansion in integer powers of the density, in contrast to the case of a plane system, in which the energy per particle is singular at zero density. 5.4. Why no Trace Anomaly? Some time ago, it was remarked that conducting Coulomb systems are critical-like at any temperature<sup>(12,13)</sup> in some sense: they have long-range electric potential and field correlations, and the free energy of a two-dimensional Coulomb system with an electric potential $`\varphi `$ has logarithmic finite-size corrections similar to the ones which occur<sup>(14)</sup> in a critical system described by a conformal-invariant field theory with a field $`\varphi `$. In ref. 14, it was shown that, for a critical system, this logarithmic correction to the free energy is associated with a trace anomaly of the stress tensor, proportional to the Gaussian curvature of the surface on which the system lives. Thus, at first sight, one might expect that the pressure of a two-dimensional Coulomb system (which is minus one half of the expectation value of the trace of the Maxwell tensor, with a suitably defined self part) would have a term $`O(1/R^2)`$ on a sphere and $`O(1/a^2)`$ on a pseudosphere. Yet, such terms are not present in (4.4). Why?
Actually, for a Coulomb system made of $`N`$ particles on a sphere of radius $`R`$, thus with a density $`n=N/4\pi R^2`$, the free energy $`F`$ has a term $`(kT/6)\mathrm{ln}N`$. The pressure is (with a suitable definition of the zero of energy) a partial derivative at constant $`N`$: $`P=\left(F/(4\pi R^2)\right)_N`$. Thus the $`\mathrm{ln}N`$ term in $`F`$ gives no contribution to the pressure, in agreement with (4.4). However, in field theory, some ultraviolet cutoff (a length $`\eta `$) has to be introduced and the trace of the stress tensor has an expectation value $`<\mathrm{\Theta }>`$ related to $`\left(F/(4\pi R^2)\right)_\eta `$ with now a derivative taken at constant cutoff. When the Coulomb system is described in terms of a field theory, the role of the cutoff is played by the microscopic scale $`\eta =n^{1/2}`$. Thus, the $`\mathrm{ln}N=\mathrm{ln}(n4\pi R^2)`$ term in $`F`$ is associated to a trace anomaly in the field-theoretical $`<\mathrm{\Theta }>`$, not in $`P`$. A related statement is: If the expectation value of the trace of the Maxwell tensor is computed with a field-theoretical measure (the functional integral measure $`𝒟\varphi `$), it has a trace anomaly. This trace anomaly is not present when the measure is the particle configuration space one $`d𝐫_1d𝐫_2\mathrm{}d𝐫_N`$. Similarly, the pressure (4.8) on a pseudosphere has no trace anomaly. 6.CONCLUSION The pressure in a Coulomb fluid has been defined as minus the statistical average of a diagonal element, say $`T_{xx}`$ of the Maxwell tensor. This definition leads to an ill-defined integral, which however can be given a definite value by an appropriate prescription: the fluid is supposed split into two regions separated by a thin empty slab normal to the $`x`$-axis, $`T_{xx}`$ is computed at a point inside this slab, and the limit of a slab of zero thickness is taken at the end. For Coulomb fluids in an Euclidean space, this approach through the Maxwell tensor is just a fresh look on well-known results. But, for Coulomb systems in a curved space, we are not aware of any other way of obtaining an unambiguous value for the pressure. For simplicity, only point-particle systems without short-range forces have been considered. But an extension to systems with hard cores seems feasible. APPENDIX A. THE $`\nu `$-DIMENSIONAL OCP In this Appendix, the excess pressure of a $`\nu `$-dimensional OCP ($`\nu >2`$) is related to its potential energy density. The dimension $`\nu `$ is treated as a continuous variable, and the limit $`\nu 2`$ is taken. The unit of charge is defined such that the electric field at a distance $`r`$ from a unit charge be $`1/r^{\nu 1}`$. Thus, the potential is $`1/(\nu 2)r^{\nu 2}`$. The Maxwell tensor is $$T_{\alpha \beta }=\frac{1}{S_{\nu 1}}(E_\alpha E_\beta \frac{1}{2}𝐄𝐄\delta _{\alpha \beta })$$ $`(\text{A}.1)`$ where $$S_{\nu 1}=\frac{2\pi ^{\nu /2}}{\mathrm{\Gamma }(\nu /2)}$$ $`(\text{A}.2)`$ is the area of the sphere of unit radius. 
In terms of the Maxwell tensor $`T`$, the nonself part of the pressure is $$P_{nonself}=\frac{1}{\nu }<\text{tr}T>_{nonself}=\frac{\nu 2}{\nu }\frac{1}{2S_{\nu 1}}<𝐄^2>_{nonself}$$ $`(\text{A}.3)`$ where the nonself part of the electrostatic energy density is $$\frac{1}{2S_{\nu 1}}<𝐄^2>_{nonself}=\frac{q^2n^2}{2S_{\nu 1}}𝑑𝐫_1𝑑𝐫_2\frac{𝐫_1𝐫_2}{r_1^\nu r_2^\nu }h(r_{12})$$ $`(\text{A}.4)`$ Taking as integration variables $`𝐫_1`$ and $`𝐫_{12}`$, and performing first the integral on $`𝐫_1`$, one finds, as expected, the potential energy density $$\frac{1}{2S_{\nu 1}}<𝐄^2>_{nonself}=\frac{q^2n^2}{2}𝑑𝐫_{12}\frac{1}{(\nu 2)r_{12}^{\nu 2}}h(r_{12})$$ $`(\text{A}.5)`$ As to the self part of the pressure $`P_{self}=<T_{xx}>_{self}`$, it must be defined, like in (3.7), as $$P_{self}=\frac{nq^2}{2S_{\nu 1}}\underset{\epsilon 0}{lim}_{|x|>\epsilon }𝑑x_0^{\mathrm{}}S_{\nu 2}𝑑\rho \rho ^{\nu 2}\frac{x^2\rho ^2}{(x^2+\rho ^2)^\nu }$$ $`(\text{A}.6)`$ Here too, the integral on $`\rho `$, performed first, is found to vanish, thus $`P_{self}=0`$. Therefore, the final result for the excess pressure is $$P_{ex}=\frac{q^2n^2}{2\nu }𝑑𝐫_{12}\frac{1}{r_{12}^{\nu 2}}h(r_{12})$$ $`(\text{A}.7)`$ In the limit $`\nu 2`$, using the perfect screening rule (4.3), one retrieves $$P_{ex}=\frac{nq^2}{4}$$ $`(\text{A}.8)`$ APPENDIX B. CURVATURE EFFECTS IN A SMALL SPHERE In this Appendix, (5.8) is derived. In four-dimensional Euclidean space, with Cartesian coordinates $`(x,y,z,t)`$, the surface $`S_3`$ of a hypersphere of radius $`R`$ centered at the origin usually is parametrized by the hyperspherical coordinates $`(u,v,w)`$ related to the Cartesian ones by $$x=\mathrm{sin}w\mathrm{sin}v\mathrm{cos}u,y=\mathrm{sin}w\mathrm{sin}v\mathrm{sin}u,z=\mathrm{sin}w\mathrm{cos}v,t=\mathrm{cos}w$$ $$0u2\pi ,\mathrm{\hspace{0.17em}\hspace{0.17em}0}v\pi ,\mathrm{\hspace{0.17em}\hspace{0.17em}0}w\pi $$ $`(\text{B}.1)`$ However, here, it is more convenient to parametrize $`S_3`$ by the three independent variables $`(x,y,z)`$. We define $`r=(x^2+y^2+z^2)^{1/2}`$ and $`\rho =(y^2+z^2)^{1/2}`$. A useful relation is $`r^2=R^2\mathrm{sin}^2w`$. Using the Jacobian for the change of coordinates, one finds that the volume element $`dV=R^3\mathrm{sin}^2w\mathrm{sin}vdudvdw`$ becomes $$dV=\frac{dxdydz}{\mathrm{cos}w}$$ $`(\text{B}.2)`$ The hypersphere pole $`x=y=z=0`$ will be called $`O`$. The geodesic distance between $`O`$ and $`(x,y,z)`$ is $`Rw`$. The part $`P_0`$ of the pressure can be evaluated at $`O`$, i.e. the electric field in (3.2) is the one at $`O`$. $`P_0`$ is that part of $`P_{self}`$ which is created by the pseudocharges located at a geodesic distance from $`O`$ smaller than $`R\psi _0`$. The regularization prescription is that there is no particle in a thin slab $`|x|<\epsilon `$. The electric field $`E(w)𝐭`$ created at $`O`$ by a pseudocharge at $`(x,y,z)`$ is given by (5.2) where $`\psi =w`$ and $`𝐭=(x/r,y/r,z/r)`$. Thus, with (B.2) taken into account, the analog of (3.10) is $$P_0=\frac{n}{8\pi }\underset{\epsilon 0}{lim}_{\epsilon <|x|<r_0}_0^{\sqrt{r_0^{\mathrm{\hspace{0.17em}2}}x^2}}\frac{2\pi d\rho \rho }{\mathrm{cos}w}\frac{x^2\rho ^2}{x^2+\rho ^2}E^2(w)$$ $`(\text{B}.3)`$ where $`r_0=R\mathrm{sin}\psi _0`$. 
An expansion in powers of $`r/R`$ gives $$\frac{E^2(w)}{\mathrm{cos}w}=\frac{q^2}{(x^2+\rho ^2)^2}[1+O(r^2/R^2)]$$ $`(\text{B}.4)`$ When the expansion (B.4) is used in (B.3), the leading term of (B.3) is (3.10) and the next term (which gives a convergent integral for which the $`\epsilon `$ regularization is superfluous) is $`O(r_0/R^2)`$. Using $`r_0=R\mathrm{sin}\psi _0`$, one does find $$P_0=\frac{nq^2}{6R\psi _0}+O(\psi _0)$$ i.e.(5.8). ACKNOWLEDGEMENTS The author has benefited from stimulating discussions with J.M.Caillol, J.L.Cardy, A.Comtet, F.Cornu, A.Krzywicki, R.Omnes, and many others. REFERENCES 1. J.-P.Hansen and I.R.McDonald, Theory of Simple Liquids (Academic, London, 1986). 2. P.A.Egelstaff, An Introduction to the Liquid State (Academic, London, 1967). 3. M.Baus and J.-P.Hansen, Phys.Rep. 59:1 (1980). 4. Ph.Choquard, P.Favre, and Ch.Gruber, J.Stat.Phys. 23:405 (1980). 5. J.D.Jackson, Classical Electrodynamics (Wiley, New York, 1962). 6. A.M.Salzberg and S.Prager, J.Chem.Phys. 38:2587 (1963). 7. E.H.Hauge and P.C.Hemmer, Phys. Norv. 5:209 (1971). 8. J.M.Caillol, J.Physique-Lettres 42:L-245 (1981). 9. J.M.Caillol and D.Levesque, J.Chem.Phys. 94:597 (1991). 10. J.M.Caillol, J.Chem.Phys. 111:6528 (1999). 11. B.Jancovici and G.Téllez, J.Stat.Phys. 91:953 (1998). 12. B.Jancovici, G.Manificat, and C.Pisani, J.Stat.Phys. 76:307 (1998). 13. G.Téllez and P.J.Forrester, J.Stat.Phys. 97: 489(1999). 14. J.L.Cardy and I.Peschel, Nucl.Phys.B 300 \[FS 22\]:377 (1988).
# Erratum: Asymptotic entanglement manipulations can be genuinely irreversible. [Phys. Rev. Lett. 84, 4260 (2000)] The proof presented in Ref. that $`E_D<E_f^{\mathrm{}}`$ (irreversibility) holds for some Werner states was based on an invalid lemma of Ref. (cf. the erratum , p. 3 of Ref. ) stating that $`E_{PT}`$ is additive for Werner states. Our proof of irreversibility is therefore incorrect. However, the irreversibility itself holds, as shown recently in Ref. : the authors considered a family of bound entangled states (hence having $`E_D=0`$) and showed that $`E_f^{\mathrm{}}`$ is nonzero for those states. The other results of our paper remain valid, as they do not make use of the lemma in question. They are: * Proof that $`E_n=\mathrm{log}\|\varrho ^{PT}\|`$ is an upper bound on distillable entanglement (Lemma 2 of Ref. ). An independent proof of this fact was found earlier by Werner (Benasque, 1998, private communication); see Ref. . * Proof that $`E_n`$ does not increase under trace-preserving PPT superoperators (Appendix of Ref. ). * Calculation of the value of $`E_{PT}`$ for Werner states (eq. (16) of Ref. ). * Calculation of the measure $`E_n`$ for the isotropic state and for Werner states (eqs. (13) and (15) of Ref. ).
# Nuclear symmetry energy in the presence of hyperons in the nonrelativistic Thomas-Fermi approximation ## Abstract We generalise the finite range momentum and density dependent Seyler-Blanchard nucleon-nucleon effective interaction to the case of the interaction between two baryons. This effective interaction is then used to describe dense hadronic matter relevant to neutron stars in the nonrelativistic Thomas-Fermi approach. We investigate the behaviour of the nuclear symmetry energy in dense nuclear and hyperon matter relevant to neutron stars. It is found that the nuclear symmetry energy always increases with density in hyperon matter, unlike the situation in nuclear matter. This rising characteristic of the symmetry energy in the presence of hyperons may have significant implications for the mass-radius relationship and the cooling properties of neutron stars. We have also noted that, with the appearance of hyperons, the equation of state calculated in this model remains causal at high density. The study of matter far off from normal nuclear matter density is of interest for understanding various properties of neutron stars. The matter density in the core of neutron stars could reach a few times normal nuclear matter density. Our knowledge about dense matter is very much constrained by a single density point in the whole density plane, i.e. normal nuclear matter density, or the saturation density. The empirical values of various properties of symmetric nuclear matter, i.e. the binding energy, the bulk symmetry energy and the compressibility, are only known at this density. All models are fitted to those properties at the saturation density and then extrapolated to the high density regime. The symmetry energy is an essential input in understanding the gross properties of neutron stars. The bulk symmetry energy is defined as the difference between the energy per particle of pure neutron matter and that of symmetric nuclear matter at normal nuclear matter density. The empirical value of the bulk symmetry energy lies in the range 30-40 MeV. The nuclear symmetry energy controls the Fermi momenta of baryons, the particle fractions and the equation of state of dense matter. Since a dense system like a neutron star is an infinite one, the volume and symmetry energy terms in the Bethe-Weizsäcker mass formula contribute to the total energy of the system. As a consequence, the energy of the system is lowered when the system is more symmetric, i.e. when its symmetry energy contribution is smaller. Though various nonrelativistic as well as relativistic models are fitted to the symmetry energy at the saturation density, there is no consensus among the models about the behaviour of the nuclear symmetry energy far off from normal nuclear matter density. It was noted earlier by many authors that, in nonrelativistic models, the symmetry energy initially increased and afterwards either decreased with density or saturated. This was attributed to the role of the tensor interaction in isospin singlet (T=0) nucleon pairs. At low density, the attraction due to the tensor force dominates over the short range repulsion in T=0 nucleon pairs. As a consequence, symmetric nuclear matter is more attractive than pure neutron matter and the symmetry energy increases with density initially. At high density, the tensor interaction in the T=0 channel vanishes and the short range repulsion in isospin singlet nucleon pairs wins over that of isospin triplet pairs.
As a result, the nuclear symmetry energy falls in the high density regime, making pure neutron matter energetically favourable and leading to the disappearance of protons. It was shown by Engvik et al. that the symmetry energy increased with density in the lowest order Brueckner calculations using modern nucleon-nucleon (NN) potentials. Such a behaviour of the symmetry energy was also reported in another Brueckner calculation using realistic nucleon-nucleon potentials . On the other hand, Akmal et al. found that the symmetry energy increased at lower densities and then decreased at high density in the variational chain summation (VCS) method using one such modern NN potential, i.e. A18. The difference between those calculations may stem from the neglect of higher order terms in the Brueckner calculations. Akmal et al. also observed that the proton fraction calculated in the VCS approach using A18 plus a three nucleon interaction increased with density. However, they noted that the overly strong repulsion in the three nucleon force resulted in an overestimate of the proton fraction, or equivalently of the symmetry energy. In relativistic mean field (RMF) models , the symmetry energy always increases with density. Here, the mean $`\rho `$-meson field is responsible for the interaction part of the symmetry energy and it increases with density. However, two main features, the tensor force and the different repulsive strengths in isospin singlet and isospin triplet nucleon pairs, are absent in RMF calculations. In various nonrelativistic models, the fall of the symmetry energy occurs beyond a few times normal nuclear matter density. On the other hand, the formation of hyperons is a possibility at about 2-3 times normal nuclear matter density. Therefore, it may be a serious flaw to consider a dense matter system consisting only of nucleons at high density. Also, nonrelativistic models consisting only of nucleons violate causality at high density. This problem might be rectified with the appearance of hyperons. Hyperons are created at the cost of the nucleons’ energy. With the formation of hyperons, the Fermi momenta (velocities) of nucleons will be reduced. On the other hand, hyperons, being heavier than nucleons, will have smaller Fermi velocities. In this situation, all baryons may be treated as nonrelativistic particles in a dense system. Strange hadron systems were studied extensively in RMF models . Recently, there have been calculations on strange hadronic matter in the nonrelativistic Brueckner approximation using baryon-baryon potentials and also using a phenomenologically constructed energy density functional . In this letter, we investigate the density dependence of the nuclear symmetry energy in the nonrelativistic Thomas-Fermi approximation using a momentum and density dependent finite range Seyler-Blanchard effective interaction. The momentum dependent Seyler-Blanchard nucleon-nucleon effective interaction was extensively applied to the determination of the parameters of the mass formula by Myers and Swiatecki . However, the energy dependence of the single particle potential was too strong because of the strong momentum dependence of the effective interaction. Later, the momentum dependent Seyler-Blanchard effective interaction was modified to include a two-body density dependent term which simulates three body effects, and the energy dependence of the single particle potential was exploited to delineate the momentum and density dependence of the effective interaction .
This modified Seyler-Blanchard (SBM) interaction was used in the description of heavy ion collisions , dense matter properties and neutron stars . Here, we generalise the SBM interaction to the case of baryon-baryon interaction with the inclusion of hyperons in addition to nucleons. Later, we exploit this momentum and density dependent finite range baryon-baryon effective interaction to calculate nuclear symmetry energy in nuclear and hyperon matter relevant to neutron stars. The interaction between two baryons with separation $`r`$ and relative momentum $`p`$ is given by $$V_{eff}(r,\rho ,p)=C_{B_1B_2}[1\frac{p^2}{b^2}d^2(\rho _1+\rho _2)^n]\frac{e^{r/a}}{r/a},$$ (1) where $`a`$ is the range parameter and $`b`$ defines the strength of repulsion in the momentum space; $`d`$ and $`n`$ are two parameters determining the strength of the density dependence; $`\rho _1`$ and $`\rho _2`$ are total baryon densities at the sites of two interacting baryons. We have all the baryons of SU(3) octet and leptons($`e^{}`$, $`\mu ^{}`$) in our calculation. The constituents of matter in neutron stars are highly degenerate and the chemical potentials of baryons and leptons are much larger than the temperature of the system. Therefore, our calculation is confined to zero temperature case. The single particle potential for baryon $`B_1`$ is defined as, $`V_{B_1}(p_1,\rho )`$ $`=`$ $`V_{B_1}^0+p_1^2V_{B_1}^1+V^2`$ (2) $`=`$ $`{\displaystyle \frac{2}{(2\pi )^3}}{\displaystyle 𝑑\stackrel{}{p_2}𝑑\stackrel{}{r_2}V_{eff}[C_{B_1B_1}\mathrm{\Theta }(p_{F_{B_1}}p_2)+\underset{B_2B_1}{}C_{B_1B_2}\mathrm{\Theta }(p_{F_{B_2}}p_2)+V^2]},`$ (3) where, $`V_{B_1}^0`$, $`V_{B_1}^1`$ are the momentum independent and dependent parts of the single particle potential, respectively and $`V^2`$, the rearrangement contribution arising out of the density dependence of the two-body effective interaction, is given by $`V^2={\displaystyle \frac{1}{2}}{\displaystyle 𝑑\stackrel{}{r^{^{}}}\frac{v_2}{\rho }\underset{B_1}{}\rho _{B_1}[C_{B_1B_1}\rho _{B_1}+\underset{B_2B_1}{}C_{B_1B_2}\rho _{B_2}]},`$ (4) with $`v_2=d^2(2\rho )^n\frac{e^{r/a}}{r/a}`$. Here, the total baryon density is denoted by $`\rho `$ and the summations over $`B_1`$ and $`B_2`$ go over all the species of SU(3) baryon octet. The density for baryon $`B`$ is denoted by $`\rho _B`$ and Fermi momentum by $`P_{F_B}`$. The effective mass is defined as, $$m_B^{}=[\frac{1}{m_B}+2V_B^1]^1.$$ (5) We have from equations (1), (2) and (3) $`V_{B_1}^0`$ $`=`$ $`4\pi a^3(d^2(2\rho )^n1)[C_{B_1B_1}\rho _{B_1}+{\displaystyle \underset{B_2B_1}{}}C_{B_1B_2}\rho _{B_2}]`$ (6) $`+`$ $`{\displaystyle \frac{4a^3}{\pi b^2}}[C_{B_1B_1}{\displaystyle \frac{p_{F_{B_1}}^5}{5}}+{\displaystyle \underset{B_2B_1}{}}C_{B_1B_2}{\displaystyle \frac{p_{F_{B_2}}^5}{5}}],`$ (7) $$V_{B_1}^1=\frac{4\pi a^3}{b^2}[C_{B_1B_1}\rho _{B_1}+\underset{B_2B_1}{}C_{B_1B_2}\rho _{B_2}],$$ (8) $$V^2=4\pi a^3d^2n(2\rho )^{n1}\underset{B_1}{}\rho _{B_1}[C_{B_1B_1}\rho _{B_1}+\underset{B_2B_1}{}C_{B_1B_2}\rho _{B_2}].$$ (9) The chemical potential of baryon $`B`$ is given by, $`\mu _B={\displaystyle \frac{P_{F_B}^2}{2m_B^{}}}+V_B^0+V^2.`$ (10) The symmetry energy is an essential ingredient in understanding dense matter. As our calculations are concerned with a system having density far off from normal nuclear matter density, it is very much necessary to know the behaviour of nuclear symmetry energy at high density. 
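To make the bookkeeping of the single-particle quantities above concrete, here is a minimal sketch for a neutron in symmetric nuclear matter. The parameter values are placeholders, since Table I is not reproduced in the text, and the standard attractive Seyler-Blanchard convention is assumed, $`V_{eff}=-C_{B_1B_2}[1-p^2/b^2-d^2(\rho _1+\rho _2)^n]e^{-r/a}/(r/a)`$, so that the Yukawa form factor integrates to $`4\pi a^3`$.

```python
import numpy as np

# Sketch of the single-particle potential, effective mass and chemical potential
# for a neutron in symmetric nuclear matter. Units: hbar = c = 1, lengths in fm,
# energies in fm^-1. All parameter values are placeholders (Table I is not
# reproduced in the text); only the structure of the formulas is illustrated.
a, b, d, n_exp = 0.62, 5.5, 0.95, 1.0/3.0   # range, momentum scale, density dependence
C_nn, C_np = 1.6, 2.3                       # strengths, with C_np > C_nn (T=0 stronger)
m_N = 939.0/197.33                          # nucleon mass in fm^-1

def neutron_fields(rho):
    rho_n = rho_p = 0.5*rho
    pf = (3.0*np.pi**2*rho_n)**(1.0/3.0)                      # Fermi momentum, fm^-1
    V1 = (4.0*np.pi*a**3/b**2)*(C_nn*rho_n + C_np*rho_p)      # momentum-dependent part
    V0 = (4.0*np.pi*a**3*(d**2*(2.0*rho)**n_exp - 1.0)*(C_nn*rho_n + C_np*rho_p)
          + (4.0*a**3/(np.pi*b**2))*(C_nn + C_np)*pf**5/5.0)  # momentum-independent part
    V2 = (4.0*np.pi*a**3*d**2*n_exp*(2.0*rho)**(n_exp - 1.0)
          * 2.0*rho_n*(C_nn*rho_n + C_np*rho_p))              # rearrangement term
    mstar = 1.0/(1.0/m_N + 2.0*V1)                            # effective mass
    mu = pf**2/(2.0*mstar) + V0 + V2                          # chemical potential
    return V0, V1, V2, mstar, mu

V0, V1, V2, mstar, mu = neutron_fields(0.1533)                # at saturation density
print(mstar/m_N, 197.33*mu)     # effective-mass ratio; mu in MeV (illustrative only)
```

With the actual fitted parameters, the same construction yields the saturation properties quoted below; the placeholder numbers only demonstrate how the self-consistency enters through $`\rho `$ and $`p_F`$.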
The energy per nucleon in asymmetric matter may be written as , $$E(\rho ,\beta )=E(\rho ,\beta =0)+\beta ^2E_{sym}(\rho ),$$ (11) where $`\beta =\frac{(\rho _n\rho _p)}{\rho }`$ is the asymmetry parameter and $`\rho _n`$ and $`\rho _p`$ are neutron and proton densities, respectively; $`E(\rho ,\beta =0)`$ and $`E_{sym}(\rho )`$ are energy per nucleon in symmetric matter and nuclear symmetry energy, respectively. It can be shown that the symmetry energy is related to neutron and proton chemical potentials . Neutron and proton chemical potentials are defined respectively as $`\mu _n=\frac{ϵ}{\rho _n}`$ and $`\mu _p=\frac{ϵ}{\rho _p}`$, where $`ϵ=\rho E(\rho ,\beta )`$ is the energy density. The expression of nuclear symmetry energy follows from equation (9) and the definitions of the chemical potentials as, $`\mu _n\mu _p=4\beta E_{sym}(\rho ).`$ (12) Putting the expression for chemical potentials (equation (8)) along with equations (5)-(7) in equation (10), we obtain $$4\beta E_{sym}(\rho )=E_{kin}+V_s,$$ (13) where the kinetic and the interaction parts of the symmetry energy are respectively given by, $$E_{kin}=\left[\frac{P_{F_n}^2}{2m_n^{}}\frac{P_{F_p}^2}{2m_p^{}}\right],$$ (14) and $$V_s=[4\pi a^3(d^2(2\rho )^n1)(\rho _n\rho _p)+\frac{4a^3}{5\pi b^2}(p_{F_n}^5p_{F_p}^5)](C_{nn}C_{np}).$$ (15) The five parameters of nucleon-nucleon interaction in equation (1) - two strength parameters $`C_{BB}`$s (one for $`pp`$ or $`nn`$ interaction and the other for $`np`$ or $`pn`$ interaction), $`a`$, $`b`$ and $`d`$ are determined for a fixed value of $`n=1/3`$ by reproducing the saturation density of normal nuclear matter ($`\rho _0=0.1533`$ $`fm^3`$), the volume energy coefficient for symmetric nuclear matter ($`16.1`$ MeV), asymmetry energy coefficient ($`34`$ MeV), the surface energy coefficient of symmetric nuclear matter ($`18.01`$ MeV) and the energy dependence of the real part of the nucleon-nucleus optical potential. With the above choice of $`n`$, the incompressibility of normal nuclear matter turns out to be $`260`$ MeV. Also, the effective mass ratio ($`m_N^{}/m_N`$) comes out to be 0.61 at normal nuclear matter density in our calculation. The values of parameters for nucleon-nucleon interaction are presented in Table I. Informations about nucleon-hyperon interactions are confined to hypernuclei data . There is a large body of data on binding energies of $`\mathrm{\Lambda }`$-hypernuclei. Analyses of those experimental data on hypernuclei indicate that the potential felt by a $`\mathrm{\Lambda }`$ in normal nuclear matter is $`30`$ MeV. With our two-body baryon-baryon interaction (equation (1)), we determine the strength of nucleon-hyperon and hyperon-hyperon interaction from equation (2) keeping two range parameters ($`a`$ and $`b`$) and the density dependence of the interaction same as that of the nucleon-nucleon interaction. Parameters of nucleon-$`\mathrm{\Lambda }`$ interaction are shown in Table II. Experimental data of $`\mathrm{\Sigma }`$-hypernuclei are scarce and ambiguous because of the strong $`\mathrm{\Sigma }`$N $`\mathrm{\Lambda }`$N decay. It is also assumed that the $`\mathrm{\Sigma }`$ well depth in normal nuclear matter is equal to that of a $`\mathrm{\Lambda }`$ particle. Therefore, the strength of $`\mathrm{\Sigma }`$N interaction is the same as that of $`\mathrm{\Lambda }`$N interaction in our calculation and this is shown in Table II. In emulsion experiments with $`K^{}`$ beams, there are a few events attributed to the formation of $`\mathrm{\Xi }`$-hypernuclei. 
These data can be explained in terms of a potential well of $`25`$ MeV for $`\mathrm{\Xi }`$ particle in symmetric nuclear matter . We obtain the strength parameter ($`C_{BB}`$) of $`\mathrm{\Xi }`$N interaction by fitting the single particle potential to the above mentioned value and present in Table II. There are a few events of $`\mathrm{\Lambda }\mathrm{\Lambda }`$ hypernuclei. Analyses of those events indicate a rather strong hyperon-hyperon interaction. Schaffner et al. constructed single particle potentials on the basis of one boson exchange calculations of Nijmegen group and the well depth of a hyperon in hyperon matter is estimated to be $`40`$ MeV and this is universal for all hyperon-hyperon interactions. The parameters of hyperon-hyperon interaction are given in Table II. In all cases, we notice that interactions involving hyperons, are weaker compared to nucleon-nucleon interaction. The composition of a neutron star is constrained by charge neutrality and baryon number conservation. Also, constituents of matter are in beta-equilibrium. Baryon chemical potentials are related to neutron and lepton chemical potentials through the general relation given by $$\mu _i=b_i\mu _nq_i\mu _l,$$ (16) where $`b_i`$ and $`q_i`$ are the baryon number and charge of i-th baryon species, respectively and ’$`l`$’ stands for electrons and muons. Solving the above mentioned constraints, at a given density, self-consistently, we obtain effective masses, Fermi momenta or chemical potentials which determine the gross properties of neutron stars. Particle abundances of nucleons-only matter relevant to a neutron star are shown in figure 1. Here, we notice that the proton (electron) fraction initially increases with density and decreases at higher densities. Such a behaviour of the proton fraction with density in nonrelativistic models was noted earlier by various authors . They attributed it to the density dependence of nuclear symmetry energy. We will discuss about this later. In figure 2, particle fractions in hyperon matter relevant to a neutron star are plotted with baryon density. The threshold condition for the appearance of hyperons depends not only on their masses but also on their charges and interaction strengths. The threshold condition is given by $$\mu _nq_B\mu _em_B^{}+V_B^0+V^2,$$ (17) where $`\mu _n`$ and $`\mu _e`$ are neutron and electron chemical potentials respectively, $`q_B`$ is the charge of baryon B. The quantities on the right hand side of equation (15) are given by equations (4) - (7). When the left hand side equals to or exceeds the right hand side of equation (15), baryon species B will be populated. Here, we notice that hyperons first appear at 1.5 times normal matter density. Also, it is noted that the electron fraction decreases monotonically because negatively charged hyperons make the neutron star almost charge neutral. On the other hand, the proton fraction is enhanced in hyperon matter compared to the situation in nuclear matter ( see figures 1 and 2). Moreover, the proton fraction does not show any declining tendency at high density as it has been observed in nucleons-only system. We plot absolute proton density with baryon density in figure 3. The dashed line (curve $`a`$) denotes neutron-proton system whereas the solid line (curve $`b`$) represents hyperon system. We find that the proton density always increases with baryon density in hyperon environment. It may be attributed to the behaviour of nuclear symmetry energy with density in hyperon matter. 
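It is instructive to see how the constraints just listed fix the composition in the simplest possible setting. The sketch below (an illustration, not the interacting model of this paper) solves the β-equilibrium condition $`\mu _n=\mu _p+\mu _e`$, following the chemical-potential relation above, together with charge neutrality for a free nonrelativistic nucleon gas plus massless electrons. The resulting proton fraction at normal nuclear matter density is only of order half a percent, which shows how strongly the interacting symmetry energy enhances the proton fraction in the full calculation.

```python
import numpy as np
from scipy.optimize import brentq

hbar_c, m_N = 197.33, 938.9     # MeV fm, MeV

def proton_fraction(rho):
    """Beta-equilibrium proton fraction of a free n-p-e gas at density rho (fm^-3)."""
    pf = lambda r: hbar_c*(3.0*np.pi**2*r)**(1.0/3.0)   # Fermi momentum in MeV
    def f(x):                                           # x = proton fraction
        mu_n = pf(rho*(1.0 - x))**2/(2.0*m_N)           # nonrelativistic nucleons
        mu_p = pf(rho*x)**2/(2.0*m_N)
        mu_e = pf(rho*x)                                # massless electrons, rho_e = rho_p
        return mu_n - mu_p - mu_e                       # = 0 in beta equilibrium
    return brentq(f, 1e-8, 0.5)

print(proton_fraction(0.1533))   # ~0.005 at saturation density for the free gas
```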
Violation of causality at high density is a problem in nonrelativistic models . The speed of sound ($`v^2=\frac{P}{ϵ}`$) in nucleons-only matter becomes superluminal i.e. greater than the velocity of light at high density. With the appearance of additional degrees of freedom in the form of hyperons, Fermi momenta of neutrons and protons are reduced at high density in comparison to the situation with nucleons-only matter. As a consequence, the equation of state now respects causality at densities which might occur at the centers of neutron stars. Nuclear symmetry energy is plotted with baryon density in figure 4. The dashed line (curve $`a`$) represents the calculation for nuclear matter whereas the solid line (curve $`b`$) implies that of hyperon matter. We find that the nuclear symmetry energy in nuclear matter increases initially with density and decreases later at high density. It was pointed out by many authors that the fall of the symmetry energy was due to the greater short-range repulsion in isospin singlet nucleon pairs than that of isospin triplet pairs at high density . In our calculation , there are two strength parameters in the SBM nucleon-nucleon effective interaction i.e. $`C_{nn}`$ ($`C_{pp}`$) which represents isospin triplet state (T=1) and $`C_{np}`$ ($`C_{pn}`$) implying isospin singlet (T=0) state. It is evident from Table I that the strength parameter ($`C_{np}`$) in isospin singlet state is stronger than that of the triplet state ($`C_{nn}`$). It is the interaction term ($`V_s`$) in the symmetry energy (see equation (13)) that regulates the behaviour of nuclear symmetry energy. At lower densities, the interaction term in $`E_{sym}`$ is positive because the repulsive first term in $`V_s`$ is larger than the attractive second term. Therefore, the symmetry energy is increasing at lower densities. On the other hand, the interaction term, $`V_s`$, becomes negative around $`4\rho _0`$ because the second term, in equation (13), which is attractive in nature, wins over the first term. Thus at high density, pure neutron matter is energetically favourable and protons disappear from the system. We observe that the symmetry energy in nuclear matter ( curve a in figure 4) starts falling around density $`4\rho _0`$. On the other hand, the appearance of hyperons is a possibility at about 2-3 times normal nuclear matter density. Therefore, it may not be justified to consider a system consisting only of nucleons at high density. Here, we discuss the density dependence of nuclear symmetry energy including hyperons in our nonrelativistic calculation. Nuclear symmetry energy in presence of hyperons is calculated using equations (11), (12) and (13). In hyperon matter, neutrons and protons couple to a hyperon with the same coupling strength. Therefore, those terms originating from nucleon-hyperon interaction cancel out in the calculation of nuclear symmetry energy in hyperon matter. The solid line (curve $`b`$) in figure 4 represents our calculation of the symmetry energy in hyperon matter. It increases with density. This may be attributed to the behaviour of the interaction term($`V_s`$) in $`E_{sym}`$ (see equation (13)) in a hyperon environment. Hyperons are produced at the cost of the energy of nucleons. Therefore, Fermi momenta of neutrons and protons are reduced with the appearance of hyperons compared to the case of nuclear matter. 
As a result, the repulsive first term of $`V_s`$ (equation (13)) dominates over the attractive second term, leading to a nuclear symmetry energy that rises at all densities. This behaviour of the symmetry energy in hyperon matter is also reflected in the proton fraction (curve b in figure 3). The proton fraction in neutron stars is crucial in determining whether the direct URCA process, which leads to the cooling of neutron stars, can operate . The direct URCA process happens if the proton fraction exceeds the threshold value, i.e. 11 percent. In our calculation including hyperons, this happens around $`3.0\rho _0`$. We have compared our nonrelativistic calculations with those of relativistic mean field models. The nuclear symmetry energy calculated in RMF models increases monotonically with density . In RMF models, the interaction part of the symmetry energy is related to the mean $`\rho `$ meson field, which always increases with density. It is to be noted that the symmetry energy calculated in RMF models rises faster than in nonrelativistic calculations . This may be attributed to the fact that the different repulsive strengths in isospin singlet and isospin triplet nucleon pairs are not taken into account by RMF models. In conclusion, we have studied the nuclear symmetry energy in nuclear and hyperon matter relevant to neutron stars in the nonrelativistic Thomas-Fermi approximation, using a momentum and density dependent finite range Seyler-Blanchard baryon-baryon effective interaction. In nuclear matter, the symmetry energy (proton fraction) initially increases and later falls with density. With the appearance of hyperons, the nuclear symmetry energy increases with density in hyperon matter. The proton fraction follows the same trend as the symmetry energy. The increasing symmetry energy or proton fraction might have important bearings on the mass-radius relationship and the cooling properties of neutron stars. We will report on these aspects in a future publication. Figure Captions FIG. 1. Particle abundances of nucleons-only matter as a function of normalised baryon density. FIG. 2. Particle abundances of hyperon matter as a function of normalised baryon density. FIG. 3. Proton density as a function of normalised baryon density in nucleons-only and hyperon matter. FIG. 4. Nuclear symmetry energy as a function of normalised baryon density in nucleons-only and hyperon matter.
# The Gas Reservoir for present day Galaxies : Damped Ly𝛼 Absorption Systems ## 1. Introduction Damped Ly$`\alpha `$ Absorbers are the QSO absorption line systems with the highest HI column densities. QSO absorption line systems are intergalactic material, or in rare cases even galaxies, that lie along the line of sight to background QSOs. In the spectra of background QSOs, absorption line systems manifest themselves primarily in thousands of Ly$`\alpha `$ absorption lines on the blue side of the QSO Ly$`\alpha `$ emission line - the so called Ly$`\alpha `$ forest. In a simplified picture each Ly$`\alpha `$ absorption line represents an intersecting intergalactic cloud. Ly$`\alpha `$ absorption lines at a wavelength close to the QSO Ly$`\alpha `$ emission line are caused by clouds near the QSO in physical space, whereas Ly$`\alpha `$ absorption lines further towards the blue are less redshifted and hence caused by clouds closer to us along the line of sight (for a recent review see Rauch, 1998). Damped Ly$`\alpha `$ Absorbers (DLAs) are the QSO absorption line systems causing damped Ly$`\alpha `$ absorption. To do that, DLAs must have neutral hydrogen column densities larger than $`2\times 10^{20}`$ cm<sup>-2</sup>, which is comparable to the column density of baryons in normal disk galaxies at the present epoch. A very important result found recently is that most of the baryons that reside in stars in galaxies today were, at high redshift, in cold gas in DLAs (Wolfe et al. 1995). In other words, DLAs constitute the gas reservoir out of which present day galaxies formed. Since DLAs are objects found through the absorption they cause, much information has been collected about their metallicity and dust content through the study of the line strengths of metal lines associated with the DLAs, although the interpretation of the data is still subject to debate (Lu et al., 1996, Kulkarni et al., 1997). Little, however, is known about the sizes and morphologies of the objects. One way to obtain this information is to detect emission from them. From an observational point of view the main problems in studying emission from DLAs are (i) that they are very faint and (ii) the presence of a much brighter QSO at a distance of only 0-3 arcsec on the sky. At a redshift of $`z=2`$ DLAs produce regions of 15-25Å (the width depending on the HI column density) of saturated absorption in the spectrum of the background QSO. Hence imaging in a narrow filter with a width corresponding to the width of the damped absorption line will circumvent problem (ii). If the DLA is a Ly$`\alpha `$ emitter it will be relatively easy to detect against the modest sky background in the narrow band filter, which circumvents problem (i). Narrow band imaging of DLAs has been pursued for more than a decade (e.g. Lowenthal et al., 1995), but only recently with success. The DLA at $`z=2.81`$ towards PKS0528-250 (Møller and Warren, 1993, 1998, Warren and Møller, 1996) and the DLA at $`z=1.934`$ towards Q0151+048A (Møller, Warren and Fynbo, 1998, Fynbo, Møller and Warren, 1998, 1999) have been detected using the narrow filter technique. In the case of the DLA towards Q0151+048A we detected extended Ly$`\alpha `$ emission, which allowed us to obtain the rotation curve of the galaxy (Møller, Fynbo and Warren in prep.).
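As a back-of-the-envelope illustration of the technique (not from the original), the narrow-band filter simply has to be centred on the redshifted Ly$`\alpha `$ wavelength, with a width matched to the 15-25Å saturated trough:

```python
# Centre wavelength of a narrow-band filter tuned to the damped Ly-alpha trough
lya_rest = 1215.67   # Angstrom, rest frame
for qso, z in [("PKS0528-250", 2.81), ("Q0151+048A", 1.934)]:
    print(f"{qso}: z = {z}, filter centre ~ {lya_rest*(1.0 + z):.0f} A, "
          "width ~ 15-25 A to match the saturated absorption")
```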
In this paper we report on results from a new narrow band project aimed at the DLA at $`z=1.943`$ towards PKS1157+014, and compare results for the DLAs with a sample of high redshift galaxies that are selected in a completely independent way - the Lyman-break galaxies.
## 2. Observations
PKS1157+014 was observed with the 2.56m Nordic Optical Telescope (NOT) March 28 - 31 1998. Two of the nights were lost to bad weather. We obtained a total integration time of 10 hours in the narrow band, and 4000 sec in both I and U. The seeing ranged from 0.6 arcsec in I to 0.9 arcsec in the narrow band. Due to the two nights lost to bad weather we did not reach the flux limits we had aimed at. With the data obtained we reach a 5$`\sigma `$ flux limit of $`7.5\times 10^{-17}`$ erg s<sup>-1</sup> cm<sup>-2</sup> in the narrow band and 5$`\sigma `$ limiting magnitudes of 25.9 and 25.3 in I(AB) and U(AB), respectively.
## 3. The field of PKS1157+014
Fig. 1 shows the $`96\times 24`$ arcsec<sup>2</sup> region surrounding PKS1157+014 from the combined I-band, U-band and narrow filter frames. As can be seen, the quasar is not present in the narrow band frame due to the strong absorption line. We do not see any significant Ly$`\alpha `$ emission from the DLA at or near the position of the quasar. We have obtained two more nights on NOT in March 1999, which will allow us to reach a 5$`\sigma `$ detection limit of $`5\times 10^{-17}`$ erg s<sup>-1</sup> cm<sup>-2</sup>, sufficiently deep to detect DLAs like those we have seen in earlier projects. However, we do detect two candidate emission line galaxies, marked by 'S', at signal-to-noise levels between 4 and 5 in the combined narrow-band frame. These very blue and compact emission line galaxies are very similar to the emission line galaxies associated with the DLAs seen in the fields of PKS0528-250 and Q0151+048. In both DLA fields we have studied so far with narrow band imaging we have found one or more galaxies at the redshift of the DLA, indicating that galaxies were members of groups also at high redshift. It is interesting to note how the galaxies seem to be aligned. This is seen in most high redshift groups of Ly$`\alpha `$ emitting galaxies (see Fig. 6 in Møller and Warren, 1998). This trend is in agreement with N-body simulations of hierarchical structure formation, where galaxies predominantly form along filaments (e.g. Evrard et al., 1994).
## 4. Are Lyman-break galaxies and DLAs the same objects?
In the last few years hundreds of high redshift galaxies have been found using a technique completely independent of QSO absorption lines. This technique is based on the fact that young, star-forming galaxies have a strong spectral break at the Lyman limit, which at high redshift is redshifted into the optical window (see Dickinson, 1998, for a recent review). Galaxies found using this method are referred to as Lyman-break galaxies (LBGs). LBGs need to be bright enough for spectroscopic confirmation of their high redshifts, so they are typically brighter than R(AB)=26. Since DLAs and LBGs are selected completely independently from the population of progenitor galaxies, it is very interesting to compare the recent results for the LBGs with results from studies of DLAs. Assuming that DLAs arise in gaseous discs associated with LBGs, one way to perform this comparison is to calculate how far down the extrapolated luminosity function of LBGs we need to integrate in order to explain the observed probability for a QSO line of sight to cross a DLA.
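The sketch below gives a schematic version of this integration. It assumes a Schechter luminosity function and a Holmberg-type size-luminosity relation for the absorbing discs; the parameter values are illustrative assumptions of ours, not those adopted by Fynbo et al. (1999).

```python
# Fraction of the DLA cross-section contributed by galaxies fainter than a
# magnitude limit, for a Schechter luminosity function phi(L) ~ L^alpha
# exp(-L/L*) and an absorption cross-section sigma(L) ~ L^(2t) (Holmberg:
# R ~ L^t). All parameter values are illustrative assumptions.
from scipy.special import gammainc   # regularized lower incomplete gamma

alpha = -1.6                 # assumed faint-end slope of the LBG LF
t = 0.4                      # assumed size-luminosity exponent
m_star, m_lim = 24.5, 26.0   # assumed R(AB) of L* and the LBG spectroscopic limit

x_lim = 10.0 ** (-0.4 * (m_lim - m_star))    # L_lim / L*

# The integral of L^(alpha+2t) exp(-L) is an incomplete gamma function
# of index alpha + 2t + 1:
a = alpha + 2.0 * t + 1.0
print(f"fraction of DLA cross-section below the limit: {gammainc(a, x_lim):.2f}")
```

With these illustrative numbers, about 80% of the DLA cross-section comes from galaxies below the spectroscopic limit, of the same order as the published result quoted next.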
Results of this calculation are presented in Fynbo et al., 1999, and summarised here. At $`z=3`$ we find that 70-90% of DLA galaxy counterparts are fainter than R(AB)=26, which is the current limit for spectroscopic confirmation of LBG candidates. Since DLAs contain close to all the gas that makes up present day galaxies, we conclude that the progenitors of a typical present day galaxy at $`z=3`$ were small and faint, and that the LBGs only constitute the tip of the iceberg of high redshift galaxies in terms of locating the reservoir of cold gas out of which present day galaxies formed. This is also consistent with the results from semi-analytical modelling of galaxy formation, in which LBGs form in very rare high overdensity regions and are the progenitors of present day bright cluster galaxies (e.g. Baugh et al., 1998). Hence, when we wish to study properties such as metallicity, dust content and star formation for the population of progenitor galaxies as a whole, the DLAs are more likely to be representative than the LBGs.
## References
Baugh C.M., Cole S., Frenk C.S., Lacey C.G., 1998, ApJ, 498, 504
Dickinson M., 1998, in press (astro-ph/980264)
Evrard A.E., Summers F.J., Davis M., 1994, ApJ, 422, 11
Fynbo J.U., Møller P., Warren S.J., 1998, In: 'Structure and Evolution of the Intergalactic Medium from QSO Absorption Line Systems', ed. Petitjean P., Charlot S. (Editions Frontieres), p. 408
Fynbo J.U., Møller P., Warren S.J., 1999, MNRAS, 305, 849
Kulkarni V.P., Fall S.M., 1997, ApJL, 484, 7
Lowenthal J.L., Hogan G.J., Green R.F., Woodgate B., Caulet A., Brown L., Bechthold J., 1995, ApJ, 451, 484
Lu L., Sargent W.L.W., Barlow T.A., Churchill C.W., Vogt S.S., 1996, ApJS, 107, 475
Møller P., Warren S.J., 1993, A&A, 270, 43
Møller P., Warren S.J., Fynbo J.U., 1998, A&A, 330, 19
Møller P., Warren S.J., 1998, MNRAS, 299, 661
Rauch M., 1998, ARA&A, 36, 267
Warren S.J., Møller P., 1996, A&A, 311, 25
Wolfe A.M., Lanzetta K.M., Foltz C.B., Chaffee F.H., 1995, ApJ, 454, 698
# New infrared object in the field of the SMC cluster NGC 330
Based on observations with ISO, an ESA project with instruments funded by ESA Member States (especially the PI countries: France, Germany, the Netherlands and the United Kingdom) and with the participation of ISAS and NASA.
## 1 Introduction
NGC 330 is a young ($`\sim `$10-50 Myr; Chiosi et al. 1995; Cassatella et al. 1996; Keller et al. 1999b) populous cluster in the Small Magellanic Cloud (SMC). An intriguing property of this cluster is its rich Be star content (Grebel et al. 1992; Keller et al. 1999c). The Be phenomenon is observed in objects at very different evolutionary stages, such as classical Be stars, Herbig Ae/Be objects, Be supergiants, symbiotic stars or post-AGB objects (see e.g. Zickgraf, 1998). Since the age of NGC 330 is $`\sim `$50 Myr, some of the Be stars observed in this cluster can indeed be classical Be stars, Be supergiants or even Herbig Ae/Be stars. On the other hand, stellar evolution theory predicts that at the age of $`\sim `$50 Myr high mass stars ($`M_{\ast }>7M_{\odot }`$) should already have reached the AGB stage (Fagotto et al. 1994). Since the lifetime of such massive objects on the AGB is very short, possibly less than a few $`10^6`$ years (Blöcker, 1995), some of the objects showing the Be phenomenon in NGC 330 can thus be expected to be massive post-AGB stars. Additional information helping to distinguish between these different evolutionary groups of Be stars can be obtained from observations in the infrared. Except possibly for the classical Be stars, strong emission at infrared wavelengths is typical of most objects showing the Be phenomenon, including Herbig Ae/Be stars, Be supergiants, post-AGB objects and symbiotic stars. Each of these groups can be characterized by different circumstellar properties, such as the dust column density, dust temperature and composition, and so forth (Waters et al. 1998). Indeed, most of these quantities can be constrained from mid-IR observations. In this work we present ISOCAM observations of a new infrared source, the most prominent object in the field of NGC 330 at mid-IR wavelengths. This object appears to be a strong H$`\alpha `$ emission source and is thus a possible Be star candidate. Employing the ISOCAM observations and data available from the literature, we discuss the properties of this object and consider several alternatives for constraining its evolutionary status.
## 2 Observations and results
The new mid-infrared object (MIR1) was discovered during raster imaging observations of the populous cluster NGC 330 with ISOCAM (Cesarsky et al. 1996) on board the ISO satellite (Kessler et al. 1996). Observations were made on May 22, 1997 using the broad-band CAM filters LW1, LW2 and LW10, corresponding to effective wavelengths of 4.5, 6.75 and 11.5 $`\mu `$m, respectively. The raster mode was 5 $`\times `$ 5, with a raster step size equal to 8 pixels (24<sup>′′</sup>) and a pixel field of view (PFOV) of 3<sup>′′</sup>. The fundamental integration time was set to $`t_{\mathrm{int}}`$ = 2.1 sec, with about 15 exposures per raster position. The ISOCAM data were reduced using the CAM Interactive Analysis software (CIA version 3; CIA is a joint development by the ESA Astrophysics Division and the ISOCAM Consortium led by the ISOCAM PI, C. Cesarsky, Direction des Sciences de la Matière, C.E.A., France)
and the photometry was performed with the IRAF APPHOT package. The measured mid-IR fluxes, together with optical photometry collected from the literature, are given in Table 1. The ISOCAM flux errors given in Table 1 are formal APPHOT errors. The absolute photometric uncertainty of the ISOCAM measurements is estimated to be less than 20% (Biviano, 1998). The optical counterpart of the infrared source was identified from the instrumental coordinates of MIR1, which were derived with respect to the positions of 8 field stars on the LW2 CAM frame (the identification accuracy is $`\sim `$1<sup>′′</sup>). The identification chart of MIR1 is given in Fig. 1.
## 3 Discussion
The optical counterpart of MIR1 appears to be the variable star 224 discovered by Balona (1992). During the observing run of six nights when it was monitored by Balona, it faded by 0.2 mag and was distinctly variable within a night. Although periods around the 1 day expected for a Cepheid were indicated, no period gave a satisfactory fit to the data, so the observed scatter and red color led Balona to suspect that it may be a double mode Cepheid on the red edge of the instability strip. Independent observations of NGC 330 (Sebo & Wood 1994) made over a 4 year period verified the variability of MIR1 (their star 515V) with $`\mathrm{\Delta }V\approx 0.5`$ and $`\mathrm{\Delta }I\approx 0.4`$, but again no regular period was evident. Strikingly, the average V magnitude over six days (17.12; Balona, 1992) is very similar to the average V magnitude over $`\sim `$4 years (17.17; Sebo & Wood, 1994). The optical counterpart of MIR1 was found to be a strong H$`\alpha `$ source. Observations in the narrow-band ($`\mathrm{\Delta }\lambda =1.5`$nm) H$`\alpha `$ filter showed that this object (star 485, Keller et al. 1999c) was the second strongest H$`\alpha `$ emitter in the field of NGC 330 after the planetary nebula L305. This object is also listed in the SMC H$`\alpha `$ source catalog of Meyssonnier & Azzopardi (1993) as object 906. The strong H$`\alpha `$ emission and the prominent mid-IR excess are difficult to reconcile with the evolutionary scenario of a classical Cepheid. It is possible that this object is a binary system; however, a discussion of this possibility seems rather premature in view of the scarce observational data. The $`(\mathrm{H}\alpha -R)`$ color index is much larger in MIR1 than in any classical Be star in NGC 330 (Keller et al. 1999c), which, together with the strong mid-IR excess, indicates that MIR1 is unlikely to be a classical Be star. Therefore, we will concentrate on the Be supergiant, Herbig Ae/Be and post-AGB star scenarios instead.
### 3.1 Be supergiant and Herbig Ae/Be star scenarios
One possible alternative for constraining the evolutionary status of MIR1 is the Be supergiant scenario. This is supported by the existence of H$`\alpha `$ emission, which is typical of all types of Be stars. Spectral observations of MIR1 obtained by Keller (1999a) confirm that this object is a very strong H$`\alpha `$ emitter; the spectrum clearly shows the H$`\alpha `$ line but no H$`\gamma `$ or higher lines. Although the observed optical color indices of MIR1 are distinctively different from those of Be supergiants in the Magellanic Clouds (Zickgraf et al. 1992), this may be a consequence of interstellar or circumstellar reddening.
A dereddening procedure employing the reddening-free $`Q`$ parameter yields $`Q_{BVI}\approx -0.09`$ (calculated assuming the standard excess ratio), which indicates that the spectral type of this object (depending on the luminosity class) should be O8-B2. Taking B0 as representative of these values, one obtains $`V_0\approx 14.60`$, $`(B-V)_0\approx -0.30`$ and $`(V-I)_0\approx -0.27`$, $`A_V\approx 2.5`$ and, using the SMC distance modulus of 18.9, $`M_V\approx -4.3`$. Assuming that the bolometric correction for the spectral type B0 is $`BC_V\approx -2.5`$, we derive $`M_{\mathrm{bol}}\approx -6.8`$. Taking into account the errors of the spectral type determination (which set a range of possible $`T_{\mathrm{eff}}`$ between $`\mathrm{20\hspace{0.17em}000}`$ and $`\mathrm{34\hspace{0.17em}000}`$ K), the obtained $`T_{\mathrm{eff}}`$ and $`M_{\mathrm{bol}}`$ are indeed comparable with those of Be supergiants in the MCs (cf. Zickgraf et al. 1992). Keller et al. (1999b) show a HR diagram of the cluster from the HST data, and the Be stars at the cluster turnoff have $`T_{\mathrm{eff}}\approx \mathrm{16\hspace{0.17em}000}`$ K and $`M_{\mathrm{bol}}\approx -5.8`$. They also have one Be star (B13), like a blue straggler, with $`T_{\mathrm{eff}}\approx \mathrm{32\hspace{0.17em}000}`$ K and $`M_{\mathrm{bol}}\approx -6.5`$. These temperatures and luminosities are similar to the ones obtained for MIR1. The derived $`A_V\approx 2.5`$, however, is much higher than the average in the field of NGC 330 (which spans the range from $`E(B-V)=0.03`$ derived by Carney et al. (1985) to $`E(B-V)=0.12`$ obtained by Bessell, 1991), and therefore indicates significant circumstellar extinction. Indeed, the spectral energy distribution of MIR1 shows a strong mid-IR excess (Fig. 2). An estimate of the ratio of the ISO LW10 band flux to the V band flux in MIR1 yields $`F_{12}/F_V\approx 3.3`$. This is comparable with the $`F_{12}/F_V\approx 5.4`$ observed in the 'representative' Be supergiant GG Car (Waters et al. 1998) and could thus be viewed as an additional argument supporting the Be supergiant scenario. Employing the theoretical evolutionary tracks of Fagotto et al. (1994) and making use of the derived $`T_{\mathrm{eff}}`$ and $`M_{\mathrm{bol}}`$, we obtain a stellar mass of $`M_{\ast }\approx `$ 15-20 $`M_{\odot }`$ and an age of 8-14 Myr. The derived age of MIR1 is comparable with the cluster's age (10-20 Myr, Cassatella et al. 1996), suggesting that the candidate Be supergiant could be a cluster member. Prominent mid-IR excesses are also common in Galactic Herbig Ae/Be stars with cool circumstellar shells (group II objects, see Hillenbrand et al. 1992). However, the Herbig Ae/Be scenario seems rather unlikely in the case of MIR1. First, the available observations of NGC 330 do not show any evidence for ongoing star formation in the field. Second, although some Galactic Herbig Ae/Be stars are observed as isolated objects, they are usually low-mass stars (cf. Hillenbrand et al. 1995); therefore the high mass of the possible Herbig Ae/Be candidate ($`\approx 25M_{\odot }`$) inferred from the dereddened photometry of MIR1 and the SMC distance modulus rules out this possibility too.
### 3.2 Post-AGB star scenario
Post-AGB stars have long been recognized as one of the evolutionary groups showing the Be phenomenon. Indeed, strong H$`\alpha `$ emission is typical of most post-AGB objects, and thus the existence of H$`\alpha `$ emission in MIR1 works in favor of this scenario too. Most post-AGB objects show double-peaked spectral energy distributions (e.g., Kwok, 1993; Zhang & Kwok, 1991), similar to the one observed in MIR1 (Fig. 2).
A simple estimate of the infrared luminosity obtained from the blackbody fit to the ISOCAM data yields $`L_{\mathrm{IR}}\approx 1300L_{\odot }`$ with a blackbody dust temperature $`T_\mathrm{d}=360`$ K. An estimate of the dust mass in the circumstellar shell, $`M_\mathrm{d}`$, can then be made using the following expression (Gurzadyan, 1997):
$$\frac{M_\mathrm{d}}{M_{\odot }}=9.21T_\mathrm{d}^{-4}\frac{L_{\mathrm{IR}}}{L_{\odot }}$$ (1)
where $`T_\mathrm{d}`$ and $`L_{\mathrm{IR}}`$ are the dust temperature and the infrared luminosity, respectively. Taking the $`L_{\mathrm{IR}}`$ and $`T_\mathrm{d}`$ values derived above, one obtains $`M_\mathrm{d}\approx 7\times 10^{-7}M_{\odot }`$, which is comparable with the dust masses typical of post-AGB objects (e.g., Pottasch & Parthasarathy, 1988). Two caveats should be noted, however. Firstly, the obtained blackbody dust temperature ($`T_\mathrm{d}=360`$ K) may be considerably overestimated, since its derivation relies on the mid-IR data only and does not take into account any information about the dust radiation at longer wavelengths. Secondly, at the dust temperatures typical of post-AGB objects, a large fraction of the infrared flux is emitted at wavelengths longer than $`12\mu `$m, and thus $`L_{\mathrm{IR}}`$ can be considerably higher than the presently derived value. Therefore, the obtained estimate of $`M_\mathrm{d}`$ indicates only a lower limit for the dust mass in MIR1. The upper limit for the effective temperature of the central star of the possible post-AGB object can be inferred from the following considerations. If MIR1 is assumed to be a normal planetary nebula (i.e., past the PPN stage), the effective temperature of the central star should be at least $`T_{\mathrm{eff}}\approx \mathrm{30\hspace{0.17em}000}`$ K, and the observed $`(B-V)=0.50`$ would then indicate considerable circumstellar extinction. Indeed, a central star with $`T_{\mathrm{eff}}\approx \mathrm{30\hspace{0.17em}000}`$ K should have $`(B-V)_0\approx -0.30`$, and hence $`E(B-V)\approx 0.8`$, that is, $`A_V\approx 2.5`$ and $`M_V\approx -4.0`$. Assuming that the bolometric correction is $`BC_V\approx -3.0`$, one obtains $`M_{\mathrm{bol}}\approx -7.0`$, which is very close to the classical luminosity limit for post-AGB stars ($`M_{\mathrm{bol}}\approx -7.2`$, e.g. Shaw & Kaler, 1989). Thus we conclude that the classical luminosity limit for post-AGB objects sets the upper limit for the effective temperature of the central star at about 30 000 K. The lower limit for the effective temperature of MIR1 can be constrained from the observed SED. The two-blackbody fit to the optical and mid-IR data (see Fig. 2) gives a lower limit estimate of the total luminosity of MIR1, $`L_{\mathrm{tot}}\approx 1800L_{\odot }`$. Using a simple iteration procedure one can obtain the $`(B-V)_0`$, and therefore $`T_{\mathrm{eff}}`$, which would produce the observed $`L_{\mathrm{tot}}`$ with the observed $`M_V\approx -1.8`$. Such a procedure yields $`(B-V)_0\approx -0.14`$, $`A_V\approx 2.0`$, and $`T_{\mathrm{eff}}\approx \mathrm{14\hspace{0.17em}000}`$ K, setting this as the lower limit for the effective temperature of the central star. The obtained temperature range suggests that MIR1 may be a good proto-planetary nebula (PPN) candidate. This is reinforced by the fact that the infrared to total luminosity ratio in MIR1 is $`L_{\mathrm{IR}}/L_{\mathrm{tot}}\approx 0.7`$, considerably higher than the value typical for planetary nebulae ($`L_{\mathrm{IR}}/L_{\mathrm{tot}}\approx 0.3`$, see e.g. Pottasch, 1997).
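As a quick numerical check (ours, not part of the original analysis), Eq. (1) with the fitted values above indeed reproduces the quoted dust mass:

```python
# Evaluate Eq. (1) with the blackbody-fit values quoted in the text.
L_IR = 1300.0    # infrared luminosity in units of L_sun
T_d = 360.0      # blackbody dust temperature in K

M_d = 9.21 * T_d**-4 * L_IR      # Eq. (1): dust mass in units of M_sun
print(f"M_d = {M_d:.1e} M_sun")  # -> ~7e-07 M_sun, as quoted above
```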
Since the presently estimated total luminosity of MIR1 is only $`L_{\mathrm{tot}}\approx 1800L_{\odot }`$, it is rather unlikely that this object is a high mass post-AGB star belonging to NGC 330; instead, it is probably a low mass field star. However, the mass, and thus the evolutionary status, of the possible PPN cannot yet be constrained precisely. Tighter constraints on this scenario should therefore come from future optical spectroscopy of MIR1, which would provide additional information about both the central star and the nebula.
## 4 Conclusions
We present ISOCAM observations of a new infrared object in the field of the young populous cluster NGC 330 in the Small Magellanic Cloud. The discovered IR object, which had previously been identified as a variable star, shows a prominent mid-IR excess, indicating the presence of a dust shell. This, along with the strong H$`\alpha `$ emission, makes the suggestion that this object may be a low-mass Cepheid rather unlikely. We suggest instead that this object may be a low mass field post-AGB star in the proto-planetary nebula stage, or a Be supergiant belonging to the cluster NGC 330. In both cases the expected optical extinction is relatively high ($`A_V\approx `$ 2.0-2.5). We also cannot reject the possibility that this object is an isolated Herbig Ae/Be star. Unfortunately, the presently available observations do not allow us to distinguish clearly between these scenarios. Neither the basic stellar parameters of the detected object nor the physical conditions in the H$`\alpha `$ and dust emitting regions can yet be constrained precisely. Therefore, further photometric observations in the UV, near-IR and far-IR, and especially optical spectroscopy, would be a highly desirable second step towards clarifying the nature of MIR1.
###### Acknowledgements.
We thank Saulius Raudeliūnas and Mudumba Parthasarathy for productive and stimulating discussions, Stefan Keller for providing information about spectral observations of MIR1 prior to publication, and the referee, Michael Bessell, for valuable comments and suggestions. This research was supported in part by grants-in-aid for Scientific Research (C) and for International Scientific Research (Joint Research) from the Ministry of Education, Science, Sports and Culture in Japan.
# A model for parton distributions in hadrons
Contribution to the DIS99 workshop proceedings.
## Abstract
The non-perturbative parton distributions in hadrons are derived from simple physical arguments, resulting in an analytical expression for the valence parton distributions. The sea partons arise mainly from pions in hadronic fluctuations. The model gives new insights and a good description of structure function data.
TSL/ISV-99-0213
Hard processes involving hadrons are calculated by folding perturbative QCD matrix elements with parton distributions describing the probability of finding a quark or a gluon in the hadron. Perturbative QCD evolution describes the dependence of the parton distributions on the hard scale $`Q`$ of the interaction. However, their dependence on the momentum fraction $`x`$ at the lower limit for applying perturbative QCD, $`Q_0\approx 0.5`$-2 GeV, is fitted to data using parameterisations, e.g. of the form
$$f_i(x,Q_0)=N_ix^{a_i}(1-x)^{b_i}(1+c_i\sqrt{x}+d_ix)$$ (1)
The parameters in these functions have no direct physical meaning, making it difficult to interpret the results. To gain understanding of non-perturbative QCD we have developed a physical model for the parton distributions at $`Q_0`$. This is briefly described here, together with our latest developments. The basic physical picture is that a probe with high resolution, compared to the hadron size, will see free quarks and gluons in quantum fluctuations of the hadron. The measuring time is short compared to the life-time of the fluctuation, since the latter is determined by the confinement of quarks and gluons inside the hadron, as illustrated in Fig. 1. This makes it possible to describe the formation of the fluctuations independently of the measuring process. Our approach only intends to provide the four-momentum $`k`$ of a single probed parton. All other information in the hadron wave function is neglected, the other partons being treated collectively as a remnant with four-momentum $`r`$, see Fig. 2. It is convenient to describe the process in the hadron rest frame, where there is no preferred direction and hence spherical symmetry. The probability distribution for finding one parton is taken as a Gaussian, which expressed in momentum space for a parton with four-momentum $`k`$ and mass $`m_i`$ is
$$f_i(k)dk=N(\sigma _i,m_i)e^{-\frac{(k_0-m_i)^2+k_1^2+k_2^2+k_3^2}{2\sigma _i^2}}dk,$$ (2)
where $`\sigma =\frac{1}{d_h}\approx m_\pi `$ is the inverse of the confinement length scale, i.e. the hadron diameter $`d_h`$. The partonic structure is described using the light-cone momentum fraction $`x=\frac{k_+}{p_+}`$ which the parton has in the initial hadron. Since $`x`$ is invariant under boosts in the $`z`$ direction, the same will be true for the calculated parton distributions. There are a number of constraints that must be fulfilled by the parton distributions. The normalisation for valence quarks is given by the sum rules
$$\int _0^1f_i(x)dx=n_i,$$ (3)
and for the gluons by the momentum sum rule
$$\sum _i\int _0^1xf_i(x)dx=1.$$ (4)
There are also the kinematical constraints
$$m_i^2\le j^2<W^2\quad \mathrm{and}\quad r^2>\sum _im_i^2$$ (5)
given by the final partons being on-shell or time-like and by the remnant having to include the remaining partons. These constraints also lead to $`0<x<1`$. The parton model requires that $`W`$ is well above the resonance region and that the resolution of the probe is much larger than the size of the hadron, i.e.
$$W\gg m_p\quad \mathrm{and}\quad Q_0\gg \sigma _i$$ (6)
The scale of the probe must also be large enough, $`Q_0\gg \mathrm{\Lambda }_{QCD}`$, for perturbative QCD to describe the evolution of the parton distributions from the starting scale $`Q_0`$. In Ref. we integrated Eq. (2) numerically to find the parton distributions, since the kinematical constraints, Eqs. (5), are quite complicated in general. The problem is much simpler if the transverse momenta and the masses of the partons are neglected. It is then possible to derive an analytical expression for the parton distributions,
$$f_i(x)=N^{}(\stackrel{~}{\sigma }_i)\mathrm{exp}\left(-\frac{x^2}{4\stackrel{~}{\sigma }_i^2}\right)\mathrm{erf}\left(\frac{1-x}{2\stackrel{~}{\sigma }_i}\right),$$ (7)
where
$$\stackrel{~}{\sigma }=\frac{1}{d_hm_h}\approx \frac{m_\pi }{m_h}.$$ (8)
The valence parton distributions for hadrons are here determined simply by the mass and size of the hadron! The resulting valence distributions for the proton ($`\stackrel{~}{\sigma }\approx 0.15`$) and the pion ($`\stackrel{~}{\sigma }\approx 1`$) are very reasonable, as shown in Fig. 3. Note that the pion distributions are very similar to $`xf(x)=2x(1-x)`$ and that one third of the pion momentum is carried by gluons. The sea partons are described by hadronic fluctuations, e.g. for the proton $`|p\pi ^0\rangle +|n\pi ^+\rangle +\mathrm{}`$, where the probe measures a valence parton in one of the two hadrons. The momentum distribution of pions in a hadronic fluctuation is assumed to follow from the same model as for the valence partons, with the differences that the mass cannot be neglected and that the width $`\sigma _\pi \approx 50`$ MeV is smaller, related to the longer range of pionic strong interactions. Using these valence and sea parton $`x`$ distributions at $`Q_0=0.85`$ GeV, next-to-leading order DGLAP evolution in the CTEQ program was applied to obtain the parton distributions at larger $`Q`$. The proton structure function $`F_2(x,Q^2)`$ can then be calculated and compared with deep inelastic scattering data, as illustrated in Fig. 4 and detailed in Ref. . The model does remarkably well, in view of its simplicity and few parameters (6). Of course, conventional parton density parameterisations give much better fits, probably mainly due to their many more parameters ($`\sim 20`$). The main part of the proton structure is determined by the valence distributions, but the sea gives an important contribution at small $`x`$, as can also be seen from the comparison with the measured $`F_2`$. Including all pion fluctuations will give a flavour asymmetric sea with $`\overline{d}>\overline{u}`$, as also observed experimentally, but the numerical details remain to be investigated. The model predicts the valence parton distributions for all hadrons, but heavy quarks give more complicated analytical expressions. Numerical results on strange and charmed mesons are shown in Ref. . In addition, a study based on Monte Carlo has been made to investigate intrinsic strange and charm quarks in the proton.
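To make Eq. (7) concrete, the short sketch below (ours) evaluates the normalized distribution for the proton and pion widths quoted above and checks the momentum fractions; it reproduces the statement that the two pion valence quarks carry about two thirds of the momentum.

```python
# Evaluate the analytical valence distribution of Eq. (7) and check the
# normalization, Eq. (3), and momentum fractions for the proton and pion.
import numpy as np
from scipy.special import erf
from scipy.integrate import quad

def f_shape(x, sigma):
    """Unnormalized Eq. (7): exp(-x^2/(4 s^2)) * erf((1-x)/(2 s))."""
    return np.exp(-x**2 / (4.0 * sigma**2)) * erf((1.0 - x) / (2.0 * sigma))

for hadron, sigma, n_val in (("proton", 0.15, 3), ("pion", 1.0, 2)):
    norm = quad(f_shape, 0.0, 1.0, args=(sigma,))[0]            # fixes N'
    x_mean = quad(lambda x: x * f_shape(x, sigma), 0.0, 1.0)[0] / norm
    print(f"{hadron}: <x> = {x_mean:.3f}, valence total = {n_val * x_mean:.2f}")
```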
Edinburgh 99/21 DAMTP-1999-170 hep-ph/9912445
# THE CHALLENGE OF SMALL $`x`$
R D Ball, Royal Society University Research Fellow, Department of Physics and Astronomy, University of Edinburgh, EH9 3JZ, Scotland
P V Landshoff, DAMTP, Centre for Mathematical Sciences, Cambridge, CB3 0AW, England
email addresses: rdb@th.ph.ed.ac.uk pvl@damtp.cam.ac.uk
Abstract
We review the current understanding of the behaviour of inclusive cross sections at small $`x`$ and large $`Q^2`$ in terms of Altarelli-Parisi evolution, the BFKL equation, and Regge theory, asking in particular to what extent they are mutually consistent. This report is a summary of various discussions at the Durham phenomenology workshop, September 1999.
December 1999
Introduction
A striking discovery at HERA has been the rapid rise with $`1/x`$ of the proton structure function $`F_2`$ at small $`x`$. If one fits this rise to an effective power $`x^{-\lambda (Q^2)}`$ then, even at quite small values of $`Q^2`$, $`\lambda (Q^2)`$ is found to be significantly greater than the value just less than 0.1 associated with soft pomeron exchange that is familiar in purely hadronic collisions. Moreover, $`\lambda (Q^2)`$ increases rapidly with $`Q^2`$. Similarly, and perhaps equally importantly, the size of the scaling violations is seen to increase dramatically as we go to smaller $`x`$ (see figure 1).
Figure 1: a) Measurements of $`F_2`$ by ZEUS. The curves show a NLO perturbative fit, with scaling violations as predicted by perturbative QCD. b) $`\lambda (Q^2)`$ extracted from ZEUS and E665 data on $`F_2(x,Q^2)`$. The solid line above $`1\mathrm{GeV}^2`$ is from a NLO Altarelli-Parisi fit, while the lines below $`1\mathrm{GeV}^2`$ are from Regge fits.
At first it was believed that $`\lambda (Q^2)`$ could be calculated from the BFKL equation. However it was soon realised that this approach could not explain the observed rise of $`\lambda `$ with $`Q^2`$, nor the large scaling violations. Instead, the experimental data are in good agreement with the double-logarithmic rise
$$F_2(x,Q^2)\sim \mathrm{exp}(\sqrt{(48/\beta _0)\mathrm{ln}1/x\mathrm{ln}\mathrm{ln}Q^2}),$$ (1)
predicted long ago from the lowest-order Altarelli-Parisi equations. The data can also be fitted in Regge theory, by adding the exchange of a 'hard pomeron' to that of the soft pomeron; this achieves an effective power $`\lambda (Q^2)`$ as the result of combining fixed-power terms whose relative weights vary with $`Q^2`$. In this note we review the present difficulties with the BFKL equation and the uncertainties related to the resummation of small $`x`$ logarithms in the Altarelli-Parisi equations, and discuss whether either of these approaches is consistent with Regge theory, and in particular with the assumption that the dominant singularities are Regge poles. The central question concerns the extent to which the behaviour of cross-sections in the small $`x`$ limit may be calculated from perturbative QCD. These are important issues, as the accuracy of any extraction of parton distribution functions from HERA data, and thus of many of the predictions for the LHC, relies crucially on our understanding of them. Most of these analyses are currently based on conventional fixed order perturbation theory.
Figure 2: ZEUS data for $`Q^4F_2^c`$, fitted to a single fixed power of $`x`$.
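As a rough illustration (ours, not from the fits above) of why the double-logarithmic rise (1) implies a $`Q^2`$-dependent effective power, one can differentiate Eq. (1) with respect to $`\mathrm{ln}1/x`$. Here the $`\mathrm{ln}\mathrm{ln}Q^2`$ factor is read as shorthand for $`\mathrm{ln}[\mathrm{ln}(Q^2/\mathrm{\Lambda }^2)/\mathrm{ln}(Q_0^2/\mathrm{\Lambda }^2)]`$, with some starting scale $`Q_0`$; the values of $`\mathrm{\Lambda }`$ and $`Q_0^2`$ below are illustrative choices, not fitted values.

```python
# Effective slope lambda(Q^2) = dlnF2/dln(1/x) implied by the double-log
# rise of Eq. (1): it grows with Q^2, unlike a fixed BFKL power.
import math

beta0 = 11.0 - 2.0 * 4 / 3.0      # one-loop beta-function coefficient, nf = 4
Lam2, Q02, x = 0.2**2, 1.0, 1e-3  # illustrative Lambda^2, Q0^2 (GeV^2) and x

for Q2 in (2.0, 10.0, 100.0):
    zeta = math.log(math.log(Q2 / Lam2) / math.log(Q02 / Lam2))
    lam = math.sqrt((12.0 / beta0) * zeta / math.log(1.0 / x))
    print(f"Q^2 = {Q2:6.1f} GeV^2: lambda_eff ~ {lam:.2f}")
```

With these choices the slope rises from roughly 0.2 at $`Q^2=2`$ GeV<sup>2</sup> to roughly 0.4 at $`Q^2=100`$ GeV<sup>2</sup>, in qualitative agreement with figure 1b.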
The Regge Approach
The ZEUS collaboration has recently published new data on events in which a $`D^{\ast }`$ particle is produced, which they use to extract the contribution $`F_2^c(x,Q^2)`$ to the complete structure function $`F_2(x,Q^2)`$ from events where the $`\gamma ^{\ast }`$ is absorbed by a charmed quark. Their data for $`F_2^c(x,Q^2)`$ have the property that, over a wide range of $`Q^2`$, they can be described by a fixed power of $`x`$:
$$F_2^c(x,Q^2)=f_c(Q^2)x^{-ϵ_0}$$ (2)
with $`ϵ_0\approx 0.4`$ and $`f_c(Q^2)`$ fitted to the data: see figure 2. If the behaviour (2) were literally true, it would imply that the Mellin transform $`F_2^c(j,Q^2)`$ would have a pole at $`j=1+ϵ_0`$. Such poles in the complex angular momentum plane are called Regge poles, and the theory of Regge poles has a long history. It has been used very successfully to correlate together a huge amount of data from soft hadronic reactions: total cross-sections such as $`pp`$ and $`\overline{p}p`$, partial cross-sections such as $`\gamma p\to \rho p`$, differential cross-sections such as $`pp\to pp`$, and diffraction dissociation (events where the final state has a very fast hadron). It is well established that $`j`$-plane amplitudes have a pole near to $`j=\frac{1}{2}`$, resulting from vector and tensor meson exchange, and another singularity, called the soft-pomeron singularity, near to $`j=1`$. It is possible to obtain a good description of the soft hadronic data by assuming that this singularity too is a pole, at $`j=1.08`$. Its dynamical origin is poorly understood; it is presumably the result of some kind of nonperturbative gluonic exchange, or perhaps glueball exchange. While the assumption that the soft-pomeron singularity is a pole describes a large amount of data well, Regge theory admits other types of singularity. For example, powers of logarithms of $`W^2`$ have been used to obtain equally good fits to total-cross-section data. These fits have the advantage that they automatically satisfy standard unitarity bounds when extrapolated to arbitrarily high $`W^2`$, but they have the disadvantage that Regge factorization and quark counting rules become rather harder to understand. Nor can they readily be extended to other applications, such as $`pp`$ and $`\overline{p}p`$ elastic scattering, and diffraction dissociation. Regge theory should be applicable whenever $`W^2`$ is much greater than all the other variables, in particular when $`W^2\gg Q^2`$ (and thus $`x\ll 1`$), even if $`Q^2`$ is large. However, the tensor-meson and soft-pomeron poles are insufficient to fit all the HERA $`F_2`$ data. An excellent fit can be obtained by including a further fixed pole at $`j=1+ϵ_0`$, so that
$$F_2(x,Q^2)=\sum _{i=0,1,2}f_i(Q^2)x^{-ϵ_i}$$ (3)
This ansatz fits the data all the way from photoproduction at $`Q^2=0`$ to $`Q^2=2000`$ GeV<sup>2</sup>, the highest value available at small $`x`$. The soft-pomeron power is $`ϵ_1=0.08`$, the tensor-meson power is $`ϵ_2\approx -0.5`$, while the new power is $`ϵ_0\approx 0.4`$, which we have already seen is what is needed to fit the data for $`F_2^c`$ shown in figure 2. The new leading singularity at $`j=1+ϵ_0`$ is sometimes referred to as the 'hard pomeron' singularity. This name does not explain what causes it: it has often been conjectured that its origin is perturbative QCD, and we will see below the extent to which it is consistent with our current understanding based on the summation and resummation of small $`x`$ logarithms.
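A minimal numerical illustration of the ansatz (3) follows. The powers are those quoted above; Regge theory does not fix the coefficient functions $`f_i(Q^2)`$, so the weights used here are purely hypothetical.

```python
# Eq. (3): F2 as a sum of fixed powers of x. The effective slope
# lambda_eff = dlnF2/dln(1/x) then varies with x (and, through the
# weights f_i(Q^2), with Q^2) even though each power is fixed.
import numpy as np

EPS = (0.4, 0.08, -0.5)            # hard pomeron, soft pomeron, tensor meson

def F2(x, f=(0.002, 0.1, 0.05)):   # hypothetical weights f_i(Q^2)
    return sum(fi * x**(-ei) for fi, ei in zip(f, EPS))

x = np.logspace(-4, -1, 4)
lam_eff = -np.gradient(np.log(F2(x)), np.log(x))
for xi, li in zip(x, lam_eff):
    print(f"x = {xi:7.1e}: lambda_eff ~ {li:.2f}")
```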
Although there is no sign of any contribution from the hard pomeron in data for purely hadronic processes, it does seem to be present in $`F_2(x,Q^2)`$ even at extremely small $`Q^2`$: measurements indicate that even for $`Q^2`$ as low as 0.045 GeV<sup>2</sup>, $`F_2`$ rises quite steeply as $`x`$ decreases. Even at $`Q^2=0`$ the effective power $`\lambda `$ may well be greater than that associated with soft purely-hadronic collisions. Similarly, the data for $`\gamma p\to J/\psi p`$ are described well by the sum of two powers in the amplitude, $`(W^2)^{ϵ_0}`$ and $`(W^2)^{ϵ_1}`$ at $`t=0`$. One does not expect a contribution from tensor meson exchange, because of Zweig's rule. The Regge picture also successfully describes the differential cross-section away from $`t=0`$. The striking feature of these fits is that such a wide variety of different data may be described using a simple parameterization: this suggests a universal underlying mechanism, and raises the hope that the hard component at least might be derivable from perturbative QCD. However, the $`j`$-plane singularities need not be poles, so the $`x`$ dependence need not be given by simple powers of $`x`$: powers of $`\mathrm{ln}1/x`$ could do as well. Furthermore, Regge theory does not determine the coefficient functions $`f_i(Q^2)`$ in (3). Nor is it clear that three terms in (3) will always be enough: as the range in $`x`$ and $`Q^2`$ increases still further, it may be that yet more terms are required. Thus although the $`x`$ and $`Q^2`$ dependence of the existing data can be fitted using a Regge pole ansatz, the uncertainties in any extrapolation outside the existing kinematic range (such as from HERA to the LHC) are difficult to quantify. Moreover, it is not possible using Regge theory alone to predict jet cross sections, or indeed vector boson or top or Higgs production cross sections: we need more dynamics. Our only candidate for a complete theory of strong interactions at high energies is perturbative QCD, and it is to the understanding of perturbative QCD at small $`x`$ that we now turn.
QCD: Resummation of Logs of $`x`$
At first it was hoped that the BFKL equation provided a purely perturbative calculation of the value of $`\lambda (Q^2)`$. This hope was based on the leading contribution to the BFKL kernel $`K(Q^2,k^2)`$ with fixed coupling. Its Mellin transform $`\chi (M)`$ has a minimum at $`M=\frac{1}{2}`$, which gives rise to a power rise of the form $`x^{-\lambda }`$, with $`\lambda =\lambda _0\equiv \chi (\frac{1}{2})=12\mathrm{ln}2\alpha _s/\pi `$, in qualitative agreement with the first data sets. However this agreement was superficial, essentially because the $`Q^2`$ dependence was incorrect (see figure 1): $`\lambda `$ did not rise with $`Q^2`$, but remained fixed. There were suggestions that this was because the BFKL equation did not take sufficient account of energy conservation and of nonperturbative effects: it is difficult to avoid important contributions from soft gluons, which cannot be estimated using perturbation theory. For this reason attempts to improve the kernel by making the coupling run were never entirely successful: running couplings make the equation unstable, leading to unphysical effects. The full extent of the difficulties was reinforced by the calculation of the next-to-leading order correction to the kernel: the correction turned out to be very large and negative, inverting the minimum of the BFKL function $`\chi (M)`$ which was responsible for the power behaviour at leading order (see figure 4a).
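For concreteness, the fixed-coupling leading-order function referred to here is the well-known $`\chi (M)=(3\alpha _s/\pi )[2\psi (1)-\psi (M)-\psi (1-M)]`$, whose minimum at $`M=\frac{1}{2}`$ gives $`\lambda _0`$. A short check (ours, with an illustrative fixed value of $`\alpha _s`$):

```python
# Leading-order fixed-coupling BFKL function chi(M) and its minimum
# lambda_0 = chi(1/2) = 12 ln2 alpha_s / pi. alpha_s = 0.2 is illustrative.
import numpy as np
from scipy.special import psi   # digamma function

alpha_s = 0.2
abar = 3.0 * alpha_s / np.pi

def chi(M):
    return abar * (2.0 * psi(1.0) - psi(M) - psi(1.0 - M))

print(f"chi(1/2)          = {chi(0.5):.3f}")
print(f"12 ln2 alpha_s/pi = {12.0 * np.log(2.0) * alpha_s / np.pi:.3f}")
for M in (0.05, 0.5, 0.95):     # poles at M = 0 and M = 1 flank the minimum
    print(f"chi({M:4.2f}) = {chi(M):6.2f}")
```

The two poles at $`M=0`$ and $`M=1`$ visible here are the unresummed collinear singularities that drive the problems discussed next.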
Since the saddle points of the inverse Mellin transform were now off the real axis, the NLLx equation gave rise to negative cross-sections in the Regge region. This destroyed any faith that might have remained in the leading-order prediction. Various proposals to fix up the BFKL equation have been put forward: for example a particular choice of the renormalization scale, or a different identification of the large logs which are resummed. However the root of the problem is that the perturbative contributions to $`\chi (M)`$ become progressively more and more singular at integer values of $`M`$, due to unresummed logarithms of $`Q^2`$ and $`k^2`$ in the kernel $`K`$. In particular, near $`M=0`$ the expansion oscillates wildly. It follows that a perturbative expansion which sums logarithms of $`x`$ must also resum the large logarithms of $`Q^2`$ to all orders in perturbation theory if it is to be useful.
QCD: Resummation of Logs of $`Q^2`$
The usual way to resum logarithms of $`Q^2`$ is to use the Altarelli-Parisi evolution equations, with the splitting functions calculated at a given fixed order in perturbation theory. If one starts at some initial scale $`Q_0^2`$ with parton distributions that rise less steeply than a power in $`1/x`$, then fixed order evolution to higher $`Q^2`$ leads to distributions that become progressively steeper in $`1/x`$ as $`Q^2`$ increases, in agreement with the $`F_2`$ data from HERA. More significantly, the prediction of the specific form (1) of the rise is in good agreement with the data over a wide region of $`x`$ and $`Q^2`$. This is widely seen as a major triumph for perturbative QCD, as direct evidence for asymptotic freedom: the coefficient $`\beta _0`$ in (1) which determines the slope of the rise is the first coefficient of the QCD $`\beta `$-function.
Figure 3: a) The gluon distribution extracted from a NLO fit to ZEUS data for $`F_2`$. b) The ZEUS data for $`F_2^c`$, compared to the QCD prediction obtained from the gluon a).
The success of fixed-order perturbative QCD in describing the increasingly precise HERA $`F_2`$ data when $`Q^2\gtrsim 1\mathrm{GeV}^2`$ has been confirmed many times by successful NLO fits. From these a gluon distribution may be extracted (see figure 3a), and predictions made for $`F_2^c`$ (figure 3b), dijet production, and $`F_L`$, all of which have now been supported by direct measurements. Clearly fixed order perturbative QCD works well at HERA: none of these predictions is trivial, and all are successful. Of course once $`Q_0^2`$ is as small as $`1\mathrm{GeV}^2`$ or less a perturbative treatment is no longer appropriate, and indeed an instability develops in the NLO gluon distribution at around such a scale (see figure 3a). It is perhaps useful to compare figure 2 with figure 3b: the data are the same in each figure, but the curves in the former are the result of a power fit that assumes a flavour-blind hard pomeron, while those in the latter are from a straightforward parameter-free prediction made using NLO perturbative QCD. Interestingly, the conclusions are also different: the slope of the rise in $`x`$ manifestly increases with $`Q^2`$ in figure 3b (corresponding to the rise of the slopes in figure 1a and figure 3a), while in figure 2 it is fixed.
It is important to realise that the success of the NLO perturbative QCD predictions depends crucially on the nonperturbative input at the initial scale $`Q_0^2\approx 1\mathrm{GeV}^2`$ being 'soft' (not rising too quickly with $`x`$), so that the rise in $`x`$ can be generated dynamically. If instead the rise were input in the form (3), growing as $`x^{-ϵ_0}`$ with $`ϵ_0`$ as large as $`0.4`$, this would, when evolved perturbatively with the NLO anomalous dimension, lead to a $`Q^2`$ dependence which was independent of $`x`$ and thus inconsistent with the data (see figure 1). If one were to insist on such a hard pomeron singularity, one would thus, to be consistent, also have to argue that NLO perturbative QCD cannot be applied in this region. The many quantitative successes of NLO perturbative QCD at HERA [4,24,25] would then have to be considered merely fortuitous. Conversely, if one instead accepts that the success of the perturbative predictions is significant, one has to conclude that the simple assumption (3), that the rightmost singularity in the $`j`$-plane is a simple pole, is incorrect, since the perturbative results rely for their success on a soft input. This said, to obtain reliable predictions for processes at the LHC it is not sufficient to confirm NLO QCD within experimental errors at HERA: we must also be able to understand the theoretical errors. In particular, at small $`x`$ the approximation to the splitting functions given by retaining only the first few terms in an expansion in powers of $`\alpha _s`$ is not necessarily very good: as soon as $`\xi =\mathrm{log}1/x`$ is sufficiently large that $`\alpha _s\xi \sim 1`$, all terms of order $`\alpha _s(\alpha _s\xi )^n`$ (LLx) and $`\alpha _s^2(\alpha _s\xi )^n`$ (NLLx) must also be considered in order to achieve a result which is reliable up to terms of order $`\alpha _s^3`$. In fact $`\alpha _s\xi \gtrsim 1`$ throughout most of the HERA kinematic region, so one might expect these effects to be significant. The fact that empirically they seem to be small is thus a mystery requiring some explanation. This argument may be sharpened by considering the $`j`$-plane singularities of the Mellin transform $`F_2(j,Q^2)`$. At $`n`$-th order in fixed order perturbation theory the iteration of small $`x`$ logarithms in the evolution gives rise to essential singularities of the form
$$(j-1)^{-1}\mathrm{exp}\left(\alpha _s^n/(j-1)^n\right)$$ (4)
The $`j=1`$ singularity thus becomes more severe order by order in perturbation theory. This is not necessarily a problem phenomenologically, since (4) corresponds to a sequence of predictions for measurable quantities such as $`F_2(x,Q^2)`$ that is strictly convergent provided only that $`x>0`$. It follows that although (4) may not be correct actually at the point $`j=1`$, it may be a good numerical approximation to the correct behaviour away from $`j=1`$. Furthermore there is good reason to believe that a resummation over all orders $`n`$ might remove the singularity. The argument is that, if there is a singularity at a fixed point in the complex $`j`$-plane for large values of $`Q^2`$, such as a naive application of (4) might seem to imply, then considerations of analyticity in $`Q^2`$ suggest that it might also be present at small $`Q^2`$.
While this is not completely excluded, the Mellin transform variable $`j`$ is essentially a complex angular momentum, and studies made more than a quarter of a century ago never found any need for a worse singularity than a fixed pole at $`j=1`$ in Compton-scattering amplitudes, with no singularity at all at that point in $`F_2`$. The problem with this argument is that although it suggests that the singularity structure (4) is incorrect, it still does not tell us precisely what or where the rightmost singularities in the $`j`$-plane are. Furthermore, it is clearly not possible to deduce this precisely from the data: to do so we would need experiments of arbitrarily high precision at arbitrarily high energies. It is thus interesting to ask whether we can instead deduce it from perturbative QCD. To do this, we would at least need a sensible resummation of small $`x`$ logarithms. We now discuss the difficult problem of constructing such a resummation.
Figure 4: (a) the BFKL function $`\chi (M)`$ and (b) the corresponding anomalous dimension $`\gamma (N)`$ in various approximation schemes.
QCD: Resummation of Logs of $`x`$ and Logs of $`Q^2`$
Using the BFKL kernel it is possible to deduce the coefficients of the LLx singularities of the splitting function to all orders in perturbation theory, i.e. of all terms in the anomalous dimension $`\gamma (N)`$ of the form $`\alpha _s^n/N^n`$, where $`N=j-1`$. Summing up these singularities converts the sum of poles into a cut starting from $`N=\lambda _0`$, apparently confirming the Regge expectation about the behaviour at $`j=1`$: it is this cut which at fixed coupling gives the power rise of the BFKL pomeron. This procedure may be extended beyond LLx [26,31,32]: the anomalous dimension $`\gamma (\alpha _s,N)`$ in a particular factorization scheme (such as $`\overline{\mathrm{MS}}`$) is related to a BFKL function $`\chi (\alpha _s,M)`$ through the 'duality' relation
$$\chi (\alpha _s,\gamma (\alpha _s,N))=1.$$ (5)
Expanding this relation to NLLx, and using calculations of the coefficient function and gluon normalization and of the NLLx kernel, we can compute the coefficients of all terms of the form $`\alpha _s\alpha _s^n/N^n`$ in the anomalous dimension. Such an approach has several advantages over the direct solution of the BFKL equation: there is a clean factorization of hard and soft processes, running coupling effects are properly taken care of by well formulated renormalization group arguments, and it is easy to arrange for a smooth matching to the large $`x`$ region. However, it was known some time ago that reconciling the summed logarithms with the HERA data was actually rather difficult. Once all the NLLx corrections were known it became clearer why: the expansion in summed anomalous dimensions at LLx, NLLx, … is unstable [32,34], the ratio of NLLx/LLx contributions growing rapidly as $`\xi =\mathrm{log}1/x\to \mathrm{\infty }`$. It follows that the previous theoretical estimates of the size of the effects of the small $`x`$ logarithms based on the fixed order BFKL equation, either at LLx or NLLx, were all hopelessly unreliable. Indeed, any calculation which resums LO and NLO logs of $`Q^2`$, but sums up only LO and NLO logarithms of $`x`$, is seen to be insufficient: some sort of all order resummation of the small $`x`$ logarithms is always necessary. Clearly there are many ways in which such a resummation might be attempted: what is needed are guiding principles to keep it under control.
One such principle is momentum conservation: before using $`\chi (M)`$ to compute the corrections to $`\gamma (N)`$ through the duality eqn. (5), we should first resum all the LO and NLO singularities at $`M=0`$ discussed above, and impose the momentum conservation condition $`\gamma (\alpha _s,1)=0`$, whence (from eqn. (5)) $`\chi (\alpha _s,0)=1`$. Since these are collinear singularities, their coefficients may be determined from the usual LO and NLO anomalous dimensions, again using the duality relation eqn. (5), but this time in the reverse direction. It turns out that when the $`M=0`$ singularities are resummed they account for almost all of $`\chi `$ in the region of $`M=0`$ (see figure 4a): this already explains why the remaining small $`x`$ corrections have not yet been seen at HERA. Small $`x`$ logarithms are simply numerically much less important than collinear logarithms. The second principle is perturbative stability. The instability found at NLLx can be shown to follow inevitably from the shift in the value $`\lambda `$ of $`\chi `$ at the minimum due to subleading corrections. This shifts the position of the singularity from $`N=\lambda _0`$ to $`N=\lambda _0+\mathrm{\Delta }\lambda `$, and this shift must be accounted for exactly if a sensible resummed perturbative expansion is to be obtained. Since in practice the correction $`\mathrm{\Delta }\lambda `$ is of the same order as the leading term $`\lambda _0`$, it seems probable that $`\lambda =\lambda _0+\mathrm{\Delta }\lambda `$ is not calculable in perturbation theory: rather, the value of $`\lambda `$ may be used to parameterise the uncertainty in the value of $`\chi `$ in the vicinity of $`M=\frac{1}{2}`$. This uncertainty is clearly due to the unresummed infrared logarithms at $`M=1`$. In Ref. an attempt is made to resum these singularities through a symmetrization of $`\chi `$ about $`M=\frac{1}{2}`$: $`\chi `$ is then supposedly determined for all $`0\le M\le 1`$, and $`\lambda `$ is given by the height of its minimum. The main shortcoming of this approach is that it makes implicit assumptions about the validity of perturbation theory when $`Q^2`$ is very small. Putting together the two principles of momentum conservation and perturbative stability, we can compute fully resummed NLO anomalous dimensions (see figure 4b). The result depends on the unknown parameter $`\lambda `$. Provided $`\lambda \lesssim 0`$, the corrections to Altarelli-Parisi evolution in the HERA region are tiny: for larger values they may be significant at low $`x`$ and low $`Q^2`$, and it might then be possible to determine $`\lambda `$ from the data. It can be seen from the plot that the singularity structure at $`N=0`$ (and thus $`j=1`$) is still completely undetermined: this is a reflection of the uncertainty in the $`\chi `$ plot at $`M=1`$, which makes it unclear not only what the value of $`\chi `$ at its minimum is, but even whether there is a minimum at all. To determine the position and nature of the rightmost singularities in the $`j`$-plane would presumably require control of $`\chi (M)`$ at $`M=1,2,\mathrm{}`$, which is clearly beyond current perturbative technology. It seems that to make further progress we require either genuine nonperturbative input, or a substantial extension of the perturbative domain.
A possible way in which this might be done, through a new factorization procedure, was explored in Ref. ; the main conclusion was that at small $`x`$ the coupling should run not with $`Q^2`$, but with $`W^2`$. Preliminary calculations suggest that this is not phenomenologically unacceptable. However, much more work remains to be done.
Summary
At low $`Q^2`$ but high $`W^2`$, Regge theory works well and gives nontrivial and successful predictions. At high $`Q^2`$ and small $`x`$, NLO perturbative QCD works well and gives nontrivial and successful predictions, with quantifiable uncertainties due to the need for a controlled resummation of small $`x`$ logarithms. In the same region Regge theory can also fit the data successfully, but without the predictive power of perturbative QCD. Neither Regge theory, nor conventional perturbative QCD, nor even the data, seem able to predict the precise form of cross sections in the Regge limit $`W^2\to \mathrm{\infty }`$ with $`Q^2`$ large. To do this, new ideas will probably be needed.
Acknowledgements: RDB would like to thank Guido Altarelli, Stefano Catani, John Collins, Gavin Salam, Dave Soper, and Andreas Vogt for discussions on this subject, and in particular Stefano Forte for a critical reading of the manuscript.
References
A Donnachie and P V Landshoff, Physics Letters B296 (1992) 227
ZEUS Collaboration, Euro Phys Jour C7 (1999) 609
J Forshaw and D A Ross, Quantum Chromodynamics and the Pomeron, Cambridge University Press (1997), and references therein
R D Ball and S Forte, Physics Letters B335 (1994) 77; B336 (1994) 77
De Rujula et al, Physical Review D10 (1974) 1649
R K Ellis, W J Stirling and B R Webber, QCD and Collider Physics, Cambridge University Press (1996), and references therein
A Donnachie and P V Landshoff, Physics Letters B437 (1998) 408
A Donnachie and P V Landshoff, hep-ph/9910262
ZEUS collaboration: A Breitweg et al, hep-ex/9908012
P D B Collins, Introduction to Regge Theory, Cambridge University Press (1977)
P V Landshoff and O Nachtmann, Zeit Phys C35 (1987) 405; O Nachtmann, Ann Phys 209 (1991) 436; H G Dosch, E Ferreira and A Kramer, Physical Review D5 (1992) 1994
P Desgrolard et al, Physics Letters B309 (1993) 191; B459 (1999) 265; J R Cudell et al, hep-ph/9908218
A Donnachie and P V Landshoff, Nuclear Physics B267 (1986) 690
H1 collaboration: C Adloff et al, Zeit Phys C74 (1997) 221
ZEUS collaboration: talk by C Amelung at DIS99, Zeuthen
J C Collins and P V Landshoff, Physics Letters B276 (1992) 196; M F McDermott, J R Forshaw and G G Ross, Physics Letters B349 (1995) 189; J Bartels, H Lotter and M Vogt, Physics Letters B373 (1996) 215
L P A Haakman et al, Nuclear Physics B518 (1998) 275; Y V Kovchegov and A H Mueller, Physics Letters B439 (1998) 428; N Armesto et al, Physics Letters B442 (1998) 459
V S Fadin and L N Lipatov, Physics Letters B429 (1998) 127
D A Ross, Physics Letters B431 (1998) 161
S J Brodsky et al, JETP Lett 70 (1999) 155; R S Thorne, Physical Review D60 (1999) 054031
C R Schmidt, Physical Review D60 (1999) 074003
G Salam, Jour High Energy Phys 9807 (1998) 19
F Wilczek, Dirac medal lecture, Trieste, 1994, hep-th/9609099
See for example M Botje (ZEUS Collaboration), hep-ph/9905518; V Barone, C Pascaud and F Zomer, hep-ph/9907512
See e.g. M Klein, Lepton-Photon proceedings (Stanford, 1999); P Marage, ICHEP proceedings (Tampere, 1999), hep-ph/9911426
R D Ball and S Forte, Physics Letters B351 (1995) 313
J R Cudell, A Donnachie and P V Landshoff, Physics Letters B448 (1999) 281
P V Landshoff and J C Polkinghorne, Physical Review D5 (1972) 2056
G Altarelli, R D Ball and S Forte, hep-ph/9911273
T Jaroszewicz, Physics Letters B116 (1982) 291
M Ciafaloni, Physics Letters B356 (1995) 74; R D Ball and S Forte, Physics Letters B359 (1995) 362; S Catani, Zeit Phys C70 (1996) 263; Zeit Phys C75 (1997) 665; G Camici and M Ciafaloni, Nuclear Physics B496 (1997) 305
R D Ball and S Forte, Physics Letters B465 (1999) 271
S Catani and F Hautmann, Physics Letters B315 (1993) 157; Nuclear Physics B427 (1994) 475
R D Ball and S Forte, hep-ph/9805315; J Blümlein et al, hep-ph/9806368
R K Ellis, F Hautmann and B R Webber, Physics Letters B348 (1995) 582; R D Ball and S Forte, Physics Letters B358 (1995) 365 and hep-ph/9607291; I Bojak and M Ernst, Nuclear Physics B508 (1997) 731
M Ciafaloni et al, Physics Letters B452 (1999) 372; Physical Review D60 (1999) 114036; Jour High Energy Phys 9910 (1999) 017
R D Ball and S Forte, Physics Letters B405 (1997) 317
R G Roberts, Euro Phys Jour C10 (1999) 697
This research is supported in part by the EU Programme "Training and Mobility of Researchers", Networks "Hadronic Physics with High Energy Electromagnetic Probes" (contract FMRX-CT96-0008) and "Quantum Chromodynamics and the Deep Structure of Elementary Particles" (contract FMRX-CT98-0194), and by PPARC.
# Analysis of negative magnetoresistance. Statistics of closed paths. II. Experiment

## I Introduction

The phenomenon of anomalous magnetoresistance at low temperature in “dirty” metals and doped semiconductors was explained by the theory of quantum corrections to the conductivity. The interference correction to the conductivity gives the main contribution to the negative magnetoresistance in 2D structures at low temperature and low magnetic field. A unique analytical expression for the magnetic field dependence of the negative magnetoresistance has been found in Ref. :

$$\mathrm{\Delta }\sigma (B)=\sigma (B)-\sigma (0)=aG_0\left(\mathrm{\Psi }\left(0.5+\frac{B_{tr}\tau }{B\tau _\phi }\right)-\mathrm{ln}\left(\frac{B_{tr}\tau }{B\tau _\phi }\right)\right),$$ (2)

where $`G_0=e^2/(2\pi ^2\hbar )`$, $`B_{tr}=\hbar c/(2el^2)`$, $`l`$ is the mean free path, $`\mathrm{\Psi }(x)`$ is a digamma function, and $`\tau `$ and $`\tau _\phi `$ stand for the elastic scattering and phase breaking time, respectively. The parameter $`a`$ is equal to unity for the non-interacting case. This expression was obtained in the diffusion approximation for isotropic scattering by randomly distributed scatterers with a short-range potential. Nevertheless it is universally used in the analysis of experimental data to extract the phase breaking time and its temperature dependence through the fitting of experimental curves. It should be pointed out that $`\tau _\phi `$ determined in this way is a fitting parameter rather than the phase breaking time, because some deviation of the experimental curves from Eq. (2) takes place almost without exception. This deviation may result from some correlations in the distribution of scatterers, scattering anisotropy, or long-range potential fluctuations in real 2D systems. This may be a possible reason for the saturation of $`\tau _\phi `$ with decreasing temperature seen in some experiments. In the previous paper we have developed a new approach to the analysis of anomalous magnetoresistance which makes it possible to obtain information about the statistics of closed paths from the magnetic field dependence of the magnetoresistance. This approach provides the basis for a new method of analysis of negative magnetoresistance due to weak localization suppression. In the present paper we demonstrate the potential of this method as applied to the interpretation of concrete experimental results obtained for GaAs/InGaAs/GaAs quantum wells.

## II Basis of method

The essence of the method is clear from Eq. (8) of the previous paper. One can see that the Fourier transform of the negative magnetoresistance is given by

$$\mathrm{\Phi }(S)=\frac{1}{\mathrm{\Phi }_0}\int _{-\infty }^{\infty }dB\,\delta \sigma (B)\,\mathrm{cos}\left(\frac{2\pi BS}{\mathrm{\Phi }_0}\right)=2\pi l^2G_0W(S)\,\mathrm{exp}\left(-\frac{\overline{L}(S,l_\phi )}{l_\phi }\right),$$ (4)

where $`\mathrm{\Phi }_0=2\pi c\hbar /e`$ is the elementary flux quantum, $`l_\phi =v_F\tau _\phi `$, and $`W(S)`$ and $`\overline{L}(S,l_\phi )`$ are the area distribution function of closed paths and the area dependence of the average length of closed paths, respectively, introduced in Section II of Ref. . Thus, it is clearly seen from Eq. (4) that $`\mathrm{\Phi }(S)`$ contains the information on the area distribution function of closed paths $`W(S)`$, and on the function $`\overline{L}(S,l_\phi )`$.
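The cosine transform in Eq. (4) is straightforward to evaluate numerically on measured data. The following is a minimal sketch (our own illustration; the array names and the trapezoidal quadrature are assumptions, and Gaussian units are used, with $`\mathrm{\Phi }_0=2\pi c\hbar /e\approx 4.14\times 10^{-7}`$ G cm<sup>2</sup>):

```python
import numpy as np

PHI0 = 4.14e-7  # elementary flux quantum 2*pi*c*hbar/e in G cm^2 (Gaussian units)

def fourier_transform_sigma(B, dsigma, S_values, phi0=PHI0):
    """Cosine transform of Eq. (4): Phi(S) = (1/Phi0) * integral dB
    delta-sigma(B) cos(2 pi B S / Phi0).  `B` holds non-negative fields
    and `dsigma` the (even) curve delta-sigma(B) = sigma(B) - sigma(inf);
    the factor 2 accounts for the integral over negative fields."""
    Phi = np.empty(len(S_values))
    for i, S in enumerate(S_values):
        integrand = dsigma * np.cos(2.0 * np.pi * B * S / phi0)
        Phi[i] = 2.0 * np.trapz(integrand, B) / phi0
    return Phi
```

Dividing the result by $`G_0`$ gives the combination $`2\pi l^2W(S)\mathrm{exp}(-\overline{L}/l_\phi )`$ analyzed in the following.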
If $`l_\phi `$ tends to infinity when $`T\to 0`$, the extrapolation of $`\mathrm{\Phi }(S,T)`$ to $`T=0`$ gives the value of $`2\pi l^2G_0W(S)`$. To determine the area dependence of $`\overline{L}`$ we assume that for actual areas $`\overline{L}`$ is a power function of area, $`\overline{L}(S,l_\phi )=S^\beta f(l_\phi )`$. The numerical calculations of the function $`\overline{L}(S,l_\phi )`$ (see Fig. 4 in Ref. ) have shown that this assumption is valid in a wide range of $`S`$, $`l_\phi `$. In the diffusion approximation (i.e. for $`\tau /\tau _\phi \ll 1`$) the value of $`\beta `$ is about $`0.67`$. It is clear that in this approximation the value of $`\beta `$ is independent of scattering anisotropy. Beyond the diffusion approximation the value of $`\beta `$ is lower and depends on the $`\tau /\tau _\phi `$ ratio. To extract the value of $`\beta `$ from experimental data one can measure $`\delta \sigma (B)`$ at two temperatures, i.e. at different $`l_\phi `$, then find the function

$$A(S)\equiv \mathrm{ln}\left[\frac{\mathrm{\Phi }(S,T_1)}{\mathrm{\Phi }(S,T_2)}\right]=S^\beta \left(f(l_\phi ^{T_1})-f(l_\phi ^{T_2})\right)$$ (5)

and finally determine $`\beta `$ from the $`A(S)`$ curve.

## III Experiment

We have measured the conductivity in heterostructures n-GaAs/In<sub>0.07</sub>Ga<sub>0.93</sub>As/n-GaAs of two types. The heterostructures with a 200 Å In<sub>0.07</sub>Ga<sub>0.93</sub>As quantum well, $`\delta `$-doped by Si in the centre, relate to the first type. The heterostructures with a 50 Å In<sub>0.07</sub>Ga<sub>0.93</sub>As well and doped barriers relate to the second type. The $`\delta `$-doped by Si layers are arranged in them on both sides of the well at a distance of 100 Å. In this paper we present the experimental results for two structures of different types, which are referred to as structure I and structure II, respectively. The measurements carried out in wide ranges of magnetic fields (up to 6 T) and temperatures (0.4-40 K) show that in the structures investigated only one size-quantized subband is occupied. The main contribution to the conductivity comes from the electrons in the quantum well of In<sub>0.07</sub>Ga<sub>0.93</sub>As. The electron density and mobility for structure I are $`n=1.2\times 10^{12}`$ cm<sup>-2</sup> and $`\mu =1.4\times 10^3`$ cm<sup>2</sup>/(V sec), respectively. For structure II they are the following: $`n=2.5\times 10^{11}`$ cm<sup>-2</sup> and $`\mu =1.1\times 10^4`$ cm<sup>2</sup>/(V sec). The magnetic field dependencies of the conductivity for both structures for low magnetic fields and different temperatures are shown in Fig. 1. The negative magnetoresistance is observed in the whole range of magnetic fields (up to 6 T). The main contribution to the negative magnetoresistance in the range $`B<0.5`$ T for structure I and $`B<0.2`$ T for structure II comes from the weak localization effect, while in the range $`B>1`$ T for structure I and $`B>0.4`$ T for structure II it results from the correction to the conductivity due to electron-electron interaction. Notice that a positive magnetoresistance due to the weak antilocalization effect was observed in analogous structures at low magnetic fields in Refs. . This effect is observed when the spin relaxation time $`\tau _s`$ is less than the phase relaxation time. The positive magnetoresistance is absent in both our structures (for example, see the inset in Fig. 1b, where the magnetoresistance of structure II for very low magnetic fields is shown). It should be mentioned that the conductivity of the structures studied in Ref. was an order of magnitude larger than in our case; therefore the phase relaxation time was longer, too. The phase relaxation time in our case lies in the range $`(0.2-1.5)\times 10^{-11}`$ sec (see below). Comparing this value with $`\tau _s=(3-4)\times 10^{-11}`$ sec determined in Ref. , we have $`\tau _s>\tau _\phi `$ for our structures. Another reason for the absence of positive magnetoresistance in our case is the fact that, in contrast to the structures investigated in Ref. , our structures are symmetric in the growth direction. Usually the expression (2) is used to analyze the negative magnetoresistance, taking $`a`$ and $`\tau _\phi `$ as fitting parameters. The solid curves in Fig. 1 have been obtained in this way and, at first glance, they are in good agreement with the experimental data in the range of magnetic field $`B<B_{tr}`$. For structure I this procedure gives $`a=1`$, $`\tau _\phi =1.25\times 10^{-11}`$ sec for $`T=1.5`$ K. However, a more detailed analysis reveals the difference between the theory and the experimental data (dashed curve in Fig. 1a). As a consequence, the parameters $`a`$ and $`\tau _\phi `$ vary in the ranges $`0.81-1.15`$ and $`(1.05-1.6)\times 10^{-11}`$ sec, respectively, when the fitting procedure is undertaken in different intervals of $`B`$ within the range $`0<B<0.5B_{tr}`$. Thus, the accuracy of determination of the $`a`$ and $`\tau _\phi `$ values is $`20-25`$%. The ratio $`\tau /\tau _\phi `$ for this structure is $`0.004-0.009`$ for the temperature range $`1.5-4.2`$ K. Analogous data treatment for structure II gives $`a=0.6-0.7`$, $`\tau _\phi =0.47\times 10^{-11}`$ sec for $`T=1.5`$ K and the ratio $`\tau /\tau _\phi =0.03-0.2`$ for the temperature range $`0.43-4.2`$ K. In the strict sense the expression (2) is not valid for this structure, because the scattering potential is smooth and the scattering is anisotropic. Nevertheless it provides a good agreement with the experimental data (Fig. 1b). It is commonly believed that a lower than unity value of $`a`$ results from the electron-electron interaction (Maki-Thompson term). However, below it is shown that such a value of $`a`$ in structure II is the result of the failure of the diffusion approximation due to the poor $`\tau /\tau _\phi `$ ratio. The temperature dependencies of $`\tau _\phi `$ are plotted in Fig. 2 for both structures, and as is seen $`\tau _\phi \propto T^{-p}`$ with $`p\approx 1`$. This means that the inelasticity of the electron-electron interaction is the main mechanism of the phase relaxation. Now we demonstrate the new possibilities of analysis of experimental data provided by the method described above. It is obvious that the method is applicable for low magnetic fields, $`B<B_c`$, where the interference contribution to the negative magnetoresistance is dominant. As is seen from Eq. (4), the information about the statistics of closed paths can be obtained from the Fourier transform of $`\delta \sigma (B)=\sigma (B)-\sigma (\infty )`$. But the value $`\mathrm{\Delta }\sigma (B)=\sigma (B)-\sigma (0)`$, not $`\delta \sigma (B)`$, is experimentally measured. It is easily shown that the Fourier transform of the experimental curve $`\delta \sigma ^{}(B)=\sigma (0)-\sigma (B_c)+\mathrm{\Delta }\sigma (B)`$, padded with zeros at $`B>B_c`$, is close to that of $`\delta \sigma (B)`$ at $`S>\mathrm{\Phi }_0/B_c`$. In Fig. 3 the Fourier transforms of $`\delta \sigma ^{}(B)`$ for different temperatures are presented. The area range where the Fourier transform $`\mathrm{\Phi }(S)`$ is shown is bounded.
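Concretely, the padded curve and the subsequent slope extraction of Eq. (5) can be sketched as follows (all names are hypothetical; note that $`\sigma (0)-\sigma (B_c)=-\mathrm{\Delta }\sigma (B_c)`$, so that $`\delta \sigma ^{}(B)=\mathrm{\Delta }\sigma (B)-\mathrm{\Delta }\sigma (B_c)`$ for $`B\le B_c`$ and zero beyond, and $`T_1<T_2`$ is assumed so that $`A(S)>0`$):

```python
import numpy as np

def padded_dsigma(B, delta_sigma, B_c, B_max, npts=2048):
    """Build delta-sigma'(B) = Delta-sigma(B) - Delta-sigma(B_c) for
    B <= B_c, padded with zeros up to B_max (B sorted ascending).
    The output can be fed to the cosine transform of Eq. (4)."""
    dsig_c = np.interp(B_c, B, delta_sigma)        # Delta-sigma(B_c)
    B_pad = np.linspace(0.0, B_max, npts)
    curve = np.where(B_pad <= B_c,
                     np.interp(B_pad, B, delta_sigma) - dsig_c,
                     0.0)                          # vanishes at B = B_c
    return B_pad, curve

def beta_from_two_temperatures(S, Phi_T1, Phi_T2):
    """Slope beta of Eq. (5): A(S) = ln[Phi(S,T1)/Phi(S,T2)] fitted by
    a straight line in double-logarithmic coordinates."""
    A = np.log(Phi_T1 / Phi_T2)
    slope, _ = np.polyfit(np.log(S), np.log(A), 1)
    return slope
```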
The maximum value of $`S`$ is determined by the signal-to-noise ratio in the experimental curves $`\mathrm{\Delta }\sigma (B)`$. The minimum value is determined by the range of magnetic field, $`B<B_c`$, where the weak localization effect is dominant. We assume that $`B_c`$ is about $`0.5`$ T for structure I and $`0.2`$ T for structure II. As is seen from Eq. (5), the function $`\mathrm{log}(A(S))`$ must be linear with respect to $`\mathrm{log}(S)`$ with the slope $`\beta `$. It is seen from Fig. 4 that the corresponding curves are really close to straight lines for both structures, but the slopes are different: $`\beta =0.70\pm 0.05`$ for structure I and $`\beta =0.52\pm 0.05`$ for structure II. Thus, for structure I with $`\tau /\tau _\phi <0.01`$ the value of $`\beta `$ is close to that obtained in the improved diffusion approximation, $`\beta =0.67`$, but somewhat larger than the value $`\beta =0.62`$ obtained in the numerical simulation. In structure II the ratio $`\tau /\tau _\phi `$ is significantly larger and the diffusion approximation fails. Besides, the fact that the impurities are arranged in the barriers in this structure leads to a smooth scattering potential and anisotropic scattering. To our knowledge, there are no theoretical results for this case. However, it is valid to say that the anisotropy of scattering and a smooth scattering potential do not change the statistics of closed trajectories with lengths significantly larger than the mean free path. The numerical calculations beyond the diffusion approximation for isotropic scattering show that $`\beta =0.55`$ at $`\gamma =0.1`$ (where $`\gamma \equiv \tau /\tau _\phi `$). This is close to the $`\beta `$ value for structure II with $`\gamma =0.03-0.2`$. Thus we believe that the main reason for the small value of $`\beta `$ in this structure is the failure of the diffusion approximation. The method put forward in this paper gives a possibility to determine the area distribution function $`W(S)`$. The temperature dependencies of $`\mathrm{\Phi }(S,T)`$ for several $`S`$ are plotted in Fig. 5a. The value of $`\mathrm{\Phi }`$ for a given $`S`$ increases when $`T\to 0`$ due to the increase of $`l_\phi `$. Thus the extrapolation of the curves $`\mathrm{\Phi }(S,T)/G_0`$ to $`T=0`$ (see Eq. (4)) gives the value of $`2\pi l^2W`$ for the corresponding value of $`S`$. The results of such a data treatment for both structures are shown in Fig. 5b. In Fig. 5b the area distribution functions obtained within the improved diffusion approximation with parameters corresponding to the structures investigated are presented too. One can see that at low $`S`$ the experimental area dependencies of $`2\pi l^2W`$ are close to the theoretical ones for both structures. For $`S>(4-5)\times 10^{-10}`$ cm<sup>2</sup> a more rapid decrease of the experimental curves is observed, and for $`S\approx 10^{-9}`$ cm<sup>2</sup> the experimental values of $`2\pi l^2W(S)`$ are $`3-5`$ times lower than the theoretical values. There are two reasons for such a discordance: (i) the number of closed trajectories with large areas in real samples is smaller than the theoretical one due to, for instance, long-range potential fluctuations; (ii) a saturation of the phase breaking length with decreasing temperature from $`T\approx 1`$ K would lead to underestimating the value of $`2\pi l^2W(S)`$ for large $`S`$ in the data processing described above. It should be noted that some evidence for $`l_\phi `$ saturation was obtained only for temperatures $`T<0.15`$ K. Our measurements were carried out at significantly higher temperatures, $`T>0.4`$ K.
So, we believe that the rapid decrease of $`2\pi l^2W(S)`$ for $`S>(4-5)\times 10^{-10}`$ cm<sup>2</sup> (Fig. 5b) results from the shortage of large trajectories, rather than from the saturation of $`l_\phi `$. Thus, one can see that the area distribution functions of closed paths practically coincide for both structures, but the area dependencies of the average length of closed trajectories differ. This difference results from the different $`\tau /\tau _\phi `$ ratios. Just this fact, rather than the electron-electron interaction, leads to the lower than unity value of the prefactor $`a`$ in structure II.

## IV Conclusion

A new method of analysis of negative magnetoresistance has been used. This method provides a possibility to obtain information on the statistics of closed paths. The experimental studies of negative magnetoresistance show that the area dependence of the average length of closed paths depends on the $`\tau /\tau _\phi `$ ratio: $`\overline{L}(S)\propto S^{0.7}`$ at $`\tau /\tau _\phi <10^{-2}`$; $`\overline{L}(S)\propto S^{0.5}`$ at $`\tau /\tau _\phi \approx 10^{-1}`$. It is this fact, rather than the contribution of the electron-electron interaction (Maki-Thompson term), that leads to the lower than unity value of the prefactor when one fits the experimental results to the Hikami expression. The experimental area distribution functions of closed paths are close to those obtained in the improved diffusion approximation at low areas, but differ at large ones. The shortage of large trajectories caused by long-range potential fluctuations is a possible reason for this difference. This work was supported in part by the RFBR through Grants 97-02-16168, 98-02-17286, the Russian Program Physics of Solid State Nanostructures through Grant 97-1091, and the Program University of Russia through Grant 420.
# Anomalous Tien–Gordon scaling in a 1d tunnel junction

## 1 Introduction

Time dependent quantum transport has attracted a lot of interest since the works of Tien and Gordon and Tucker ; more recently, theoretical findings and experiments on quantum dots and on superlattices renewed the interest in photon-assisted transport in semiconductor nanostructures. In particular, the possibility to investigate experimentally time-dependent transport through mesoscopic systems has opened the way to a deeper understanding of new effects strongly relying on the spatiotemporal coherence of electronic states. Moreover, most time-dependent experiments, like electron pumps , photon-assisted tunneling , and lasers , require an analysis going beyond linear response theory in the external frequency. Thus, many efforts have been devoted, in recent years, to the theoretical investigation of nonlinearities in semiconductor nanostructures , electronic correlations , and screening of ac fields . The Tien–Gordon formula, according to which the dc component of the photo-induced current is given by a superposition of static currents $`I_0`$ (the currents without the ac field) weighted by integer order Bessel functions, is represented by the following formula

$$I_{\text{dc}}=\sum _{n=-\infty }^{\infty }J_n^2\left(\frac{eV_1}{\hbar \mathrm{\Omega }}\right)I_0\left(V_0+n\hbar \mathrm{\Omega }/e\right);$$ (1)

the argument of the Bessel functions depends linearly on the ac voltage intensity $`V_1`$ and on the inverse of the driving frequency (or subharmonic) $`\mathrm{\Omega }`$. A selfconsistent theory, based on the scattering matrix approach, has shown that the side-band peaks depend on the screening properties of the system ; moreover, theoretical investigations for superlattice microstructures showed an $`\mathrm{\Omega }^{-2}`$ dependence of the transmission probability spectrum of the photonic sidebands (that is, of the argument of the Bessel functions), when a nonlocalized (finite range) ac driving was taken into account . In this paper, we investigate how 1d electron-electron interaction, in the framework of the Luttinger model , nonlinearities due to the presence of an impurity, and a finite range ac electric field affect the photo-induced current. We will show that the TG formula is still valid, but the argument of the Bessel functions no longer depends linearly on $`1/\mathrm{\Omega }`$. In the time dependent regime the nonlinearity of the system gives rise to frequency mixing and harmonic generation. Earlier treatments of ac transport considered voltages dropping only at the position of the barrier , and zero range interactions between the electrons. Here, both of these are generalized to the more realistic situation of a finite range of both the electron-electron interaction and the electric field. As a matter of fact, previous calculations showed clearly that the spatial shape of the electric field does influence ac transport.

## 2 Model

The Hamiltonian for a Luttinger liquid of length $`L`$ ($`L\to \infty `$) with an impurity and subject to a time-dependent electric field is $`H=H_0+H_{\mathrm{imp}}+H_{\text{ac}}`$, where

$$H_0=\sum _{k\ne 0}\hbar \omega (k)\,b_k^{\dagger }b_k.$$ (2)

The dispersion relation of the collective excitations, $`\omega _k=v_\mathrm{F}|k|\sqrt{1+\widehat{V}_{\mathrm{ee}}(k)/\hbar \pi v_\mathrm{F}}`$, depends on the Fourier transform of the finite range interaction potential .
We assume a 3d screened Coulomb potential of range $`\alpha ^{-1}`$ projected onto a quantum wire of diameter $`d\ll \alpha ^{-1}`$. The interaction decays exponentially and one gets $`V_{\mathrm{ee}}(x)=(V_\mathrm{L}\alpha /2)\mathrm{e}^{-\alpha |x|}`$, with interaction strength $`V_\mathrm{L}`$ . For $`\alpha \to \infty `$, one obtains a zero-range interaction. The tunneling barrier of height $`U_{\mathrm{imp}}`$ is localized at $`x=0`$ ,

$$H_{\mathrm{imp}}=U_{\mathrm{imp}}\mathrm{cos}\left(2\sqrt{\pi }\vartheta (x=0)\right),$$ (3)

with the phase variable of the Luttinger model

$$\vartheta (x)=\mathrm{i}\sum _{k\ne 0}\mathrm{sgn}(k)\sqrt{\frac{v_\mathrm{F}}{2L\omega (k)}}\mathrm{e}^{\mathrm{i}kx}\left(b_k^{\dagger }+b_{-k}\right).$$

The coupling to the external driving voltage yields $`H_{\text{ac}}=e\int _{-\infty }^{\infty }dx\,\varrho (x)V(x,t)`$. The electric field is related to the voltage drop by differentiation, $`E(x,t)=-\partial _xV(x,t)`$, and the charge density is $`\varrho (x)=k_\mathrm{F}/\pi +\partial _x\vartheta (x)/\sqrt{\pi }`$. The space-time dependent electric field, $`E(x,t)=E_{\text{dc}}(x)+E_a(x)\mathrm{cos}\left(\mathrm{\Omega }t\right)`$, such that $`E_a(x)=E_1\mathrm{e}^{-|x|/a}`$, gives a voltage drop $`V_1\equiv \int _{-\infty }^{\infty }dx\,E_a(x)=2E_1a`$. The spatial dependence of the dc part of the electric field does not need to be specified, as only the overall voltage drop, $`V_0\equiv \int _{-\infty }^{\infty }dx\,E_{\text{dc}}(x)`$, is of importance in dc transport .

## 3 Methods and Results

The current at the barrier is given by the expectation value $`I(x=0,t)=\langle j(x=0,t)\rangle `$, where the current operator is defined via the continuity equation, $`\partial _xj(x,t)=-e\partial _t\rho (x,t)`$. For a high barrier, the tunneling contribution to the current can be expressed in terms of forward and backward scattering rates which are proportional to the tunneling probability $`\mathrm{\Delta }^2`$. The latter may be obtained in terms of the barrier height $`U_\mathrm{t}`$ by using the instanton approximation .
The result can be written in terms of the one-electron propagator $`S+\mathrm{i}R`$ ,

$$I(x=0,t)=e\mathrm{\Delta }^2\int _0^{\infty }d\tau \,\mathrm{e}^{-S(\tau )}\mathrm{sin}R(\tau )\,\mathrm{sin}\left[\frac{e}{\hbar }\int _{t-\tau }^tdt^{}V_{\mathrm{eff}}(t^{})\right],$$ (4)

with

$$S(\tau )+\mathrm{i}R(\tau )=\frac{e^2}{\pi \hbar }\int _0^{\omega _{\mathrm{max}}}\frac{\mathrm{d}\omega }{\omega }\,\mathrm{Re}\left\{\sigma ^{-1}(x=0,\omega )\right\}\left[(1-\mathrm{cos}\,\omega \tau )\mathrm{coth}\frac{\beta \omega }{2}+\mathrm{i}\,\mathrm{sin}\,\omega \tau \right],$$

where $`\beta =1/k_\mathrm{B}T`$, $`\omega _{\mathrm{max}}`$ is the usual frequency cutoff that corresponds roughly to the Fermi energy , and the ac conductivity of the system without impurity is

$$\sigma (x,\omega )=\frac{\mathrm{i}v_\mathrm{F}e^2\omega }{\hbar \pi ^2}\int _0^{\infty }\mathrm{d}k\frac{\mathrm{cos}\,kx}{\omega ^2(k)-(\omega +\mathrm{i}0^+)^2}.$$ (5)

Furthermore, the effective driving voltage is related to the electric field by

$$V_{\mathrm{eff}}(t)=\int _{-\infty }^{\infty }dx\int _{-\infty }^tdt^{}\,E(x,t^{})\,r(x,t-t^{})=V_0+\frac{\hbar \mathrm{\Omega }}{e}|z|\mathrm{cos}\left(\mathrm{\Omega }t-\phi _z\right),$$ (6)

where $`r(x,\omega )=\sigma (x,\omega )/\sigma (x=0,\omega )`$, and $`|z|`$ and $`\phi _z`$ are, respectively, the modulus and argument of

$$z=\frac{e}{\hbar \mathrm{\Omega }}\int _{-\infty }^{\infty }dx\,E_a(x)\,r(x,\mathrm{\Omega }).$$ (7)

With the above assumptions about the shapes of the driving field and the interaction potential one obtains

$$|z|=\frac{eV_1}{\hbar \mathrm{\Omega }}\frac{1}{\sqrt{1+a^2k^2(\mathrm{\Omega })}}A\left(\frac{\mathrm{\Omega }}{v_\mathrm{F}\alpha },\frac{k(\mathrm{\Omega })}{\alpha },\alpha a\right),$$ (8)

where $`k(\mathrm{\Omega })`$ is the inverse of the dispersion relation and

$$A^2(u,v,w)=\frac{1}{1+u^2}\left[1+v^2\frac{(u+wv)^2}{(uw+v)^2}\right].$$ (9)

In the following, we concentrate on the results for the dc component of the current, which does not depend on $`x`$ and is directly given by the current at the barrier, for which we only need to know $`|z|`$,

$$I_{\mathrm{dc}}=\sum _{n=-\infty }^{\infty }J_n^2\left(|z|\right)I_0\left(V_0+n\frac{\hbar \mathrm{\Omega }}{e}\right).$$ (10)

The important point here is that the driven dc current is completely given in terms of $`I_0(V_0)`$, the nonlinear dc current-voltage characteristic of the tunnel barrier,

$$I_0\left(V_0\right)=e\mathrm{\Delta }^2\int _0^{\infty }d\tau \,\mathrm{e}^{-S(\tau )}\mathrm{sin}R(\tau )\,\mathrm{sin}\left(\frac{eV_0\tau }{\hbar }\right).$$ (11)

Eqs. (10), (11) generalize results which have been obtained earlier but without interaction between the tunneling objects, and also for the Luttinger model with a zero-range interaction, together with a $`\delta `$-function like driving electric field . For $`V_0`$ much smaller than some cutoff voltage $`V_\mathrm{c}`$ which is related to the inverse of the interaction range, $`I_0\propto V_0^{2/g-1}`$. This recovers the result obtained earlier for $`\delta `$-function interaction and a zero-range bias electric field . When $`V_0\gg V_\mathrm{c}`$, the current becomes linear .
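The structure of Eqs. (10) and (11) is easy to explore numerically. In the sketch below, `I0_powerlaw` is our own toy interpolation between the $`V_0^{2/g-1}`$ regime and the linear regime (it is not the actual integral of Eq. (11)), while `I_dc` implements the Bessel-weighted superposition of Eq. (10):

```python
import numpy as np
from scipy.special import jv

def I0_powerlaw(V0, g, Vc=1.0):
    """Toy static characteristic, assuming g < 1: behaves as
    sign(V0)*|V0/Vc|**(2/g-1) for |V0| << Vc and becomes linear
    for |V0| >> Vc, mimicking the cross-over described in the text."""
    x = np.abs(V0) / Vc
    return np.sign(V0) * x ** (2.0 / g - 1.0) / (1.0 + x ** (2.0 / g - 2.0))

def I_dc(V0, z_abs, g, n_max=50):
    """Eq. (10): driven dc current as a superposition of static currents
    shifted by integer multiples of the photon energy; V0 is measured in
    units of hbar*Omega/e and z_abs is the dimensionless |z|."""
    n = np.arange(-n_max, n_max + 1)
    weights = jv(n, z_abs) ** 2          # J_n^2(|z|), summing to ~1
    return sum(w * I0_powerlaw(V0 + nn, g) for w, nn in zip(weights, n))
```

Differentiating `I_dc` numerically with respect to $`V_0`$ reproduces, for $`g>2/3`$, the cusp-like minima at integer $`eV_0/\hbar \mathrm{\Omega }`$ discussed below.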
For intermediate values of $`V_0`$, $`I_0`$ exhibits a cross-over between the asymptotic regimes with a point of inflection near $`V_\mathrm{c}`$. For zero-range interaction, $`I_0\propto V_0^{2/g-1}`$ for any $`V_0`$. Figure 1 shows the currents $`I_0`$, $`I_{\mathrm{dc}}`$ and the differential conductance $`\mathrm{d}I_{\mathrm{dc}}/\mathrm{d}V_0`$ as functions of $`eV_0/\hbar \mathrm{\Omega }`$ for $`g=0.9`$ and $`g=0.5`$ for zero range of the driving electric field. For $`g=0.9`$ one observes sharp minima in the differential conductance at integer multiples of the driving frequency in certain regions of the driving voltage $`V_1`$. These can be understood as follows. When the strength of the interaction is not too large, the region where $`\mathrm{d}I_{\mathrm{dc}}/\mathrm{d}V_0`$ is much smaller than 1 is small compared with $`\hbar \mathrm{\Omega }`$; thus for $`eV_0\approx \hbar \mathrm{\Omega }`$, $`\mathrm{d}I_{\mathrm{dc}}/\mathrm{d}V_0\propto \left(2/g-1\right)|eV_0-\hbar \mathrm{\Omega }|^{2/g-2}`$. Then, Eq. (10) yields near $`eV_0=m\hbar \mathrm{\Omega }`$

$$\frac{\mathrm{d}I_{\mathrm{dc}}}{\mathrm{d}V}\approx 1-J_m^2(|z|)+\mathrm{const}\,J_m^2(|z|)\left|eV_0-m\hbar \mathrm{\Omega }\right|^{2/g-2}.$$ (12)

For $`g>2/3`$, this yields for integer $`m`$ the cusp-like structures observed in Fig. 1. For $`g<2/3`$, no cusps occur anymore. In addition, the current $`I_{\mathrm{dc}}`$ is depleted so strongly, and over such a large region of bias voltages, that the regime of almost vanishing $`\mathrm{d}I_{\mathrm{dc}}/\mathrm{d}V_0`$ becomes larger than $`\hbar \mathrm{\Omega }`$ and in general no minima near integer multiples of the frequency exist. As can be seen in the figure, the depths of the cusps depend on the driving voltage $`V_1`$ ($`|z|`$), which can also be understood from Eq. (12), which shows that the values of the differential conductance at the voltages $`eV_0=m\hbar \mathrm{\Omega }`$ are approximately $`1-J_m^2(|z|)`$. It is therefore instructive to look into the behavior of $`|z|`$ as a function of the frequency. Figure 2 shows the scaling exponent $`\nu `$, determined from

$$\nu =-\frac{\mathrm{d}\,\mathrm{log}|z|}{\mathrm{d}\,\mathrm{log}\,\mathrm{\Omega }},$$ (13)

as a function of $`\mathrm{\Omega }/v_\mathrm{F}\alpha `$. We observe a non-universal cross-over between $`|z|\propto \mathrm{\Omega }^{-1}`$, the case discussed by Tien and Gordon, which corresponds to a driving field of zero range ($`a\to 0`$), and $`|z|\propto \mathrm{\Omega }^{-2}`$, which is obtained for a homogeneous external field ($`a\to \infty `$) . Although the behavior of $`z`$ depends strongly on the parameters of the model in the cross-over regime, this does not influence qualitatively the occurrence of the cusps. Their existence depends crucially on the finite range of the interaction, and on the condition $`g>2/3`$. However, by varying $`|z|`$, the depths of the minima are changed due to the variation of $`J_m^2(|z|)`$. Finally, we have demonstrated that the result which has been obtained by Tien and Gordon for tunneling of non-interacting quantum objects in 1D driven by a mono-chromatic field localized at the tunnel barrier remains valid even in the presence of interactions of arbitrary range and shape, and for an arbitrary shape of the mono-chromatic driving field. The central point is that the frequency driven current is completely given by a linear superposition of the current-voltage characteristics at integer multiples of the driving frequency, weighted by Bessel functions.
The argument of the latter contains the amplitude of the driving voltage only linearly, but the dependence of the argument on the frequency and on the range of the driving field is determined by its spatial shape. However, one can easily identify regions where the dependence on the frequency becomes very simple. For a driving field which is localized near the tunnel barrier, the integral in Eq. (7) can be evaluated approximately by noting that $`r(x,\mathrm{\Omega })`$ varies only slowly with $`x`$ and can be taken out of the integral. Then, $`|z|=eV_1/\hbar \mathrm{\Omega }`$, which corresponds to the result of Tien and Gordon . In the other limit of an almost homogeneous electric field, $`E_1=V_1/a`$, one needs to calculate the spatial average of $`r(x,\mathrm{\Omega })`$ . This gives $`\sigma (k=0,\mathrm{\Omega })/\sigma (x=0,\mathrm{\Omega })\propto \mathrm{\Omega }^{-1}`$, since $`\sigma (x=0,\mathrm{\Omega })\approx \mathrm{const}`$. This implies $`|z|\propto \mathrm{\Omega }^{-2}`$. Such a frequency dependence has been discussed earlier for non-interacting particles . Here, we see that it is valid under quite general assumptions also for interacting particles. A possible method to detect this behavior experimentally is to investigate the real part of the first harmonic of the current through the tunnel contact and to determine the current responsivity, which is given by the ratio of the expansions of $`I_{\mathrm{dc}}`$ and of the first harmonic to second and first order in $`|z|`$, respectively . Given the above result for the driven dc current, the general behavior of the differential conductance as a function of $`eV_0/\hbar \mathrm{\Omega }`$ can be straightforwardly obtained. Of special interest is the occurrence of cusps at $`eV_0/\hbar \mathrm{\Omega }=m`$ ($`m`$ integer), which appear to be quite stable against changes in the model parameters. A similar result has been discussed earlier , but for a small potential barrier between fractional quantum Hall edge states, which implies zero-range interaction. In the case discussed here, the finite range of the interaction is crucial for obtaining the cusps, due to the absence of a linear contribution to the current for small voltage, which is characteristic of tunneling in 1D dominated by interaction. The cusps could be used to frequency-lock the dc part of the driving voltage. To summarize, we have shown how the electron correlation and the spatial distribution of a driving field determine the anomalous scaling of the photo-induced current and the mode-locking-like patterned structure of the nonlinear differential conductance. This work has been supported by EU via TMR (FMRX-CT96-0042, FMRX-CT98-0180), by INFM via PRA (QTMD97), and by the Italian ministry of university via MURST (SCQBD98).
## I Introduction

The infrared sector of strong interaction physics is characterized by nonperturbative phenomena. Most prominent among these are the confinement of quarks and gluons into color-singlet hadrons, and the spontaneous breaking of chiral symmetry, which, through the associated (quasi-) Goldstone bosons, i.e. the pions, dominates the low-lying hadronic spectrum. On the theoretical side, there is presently convincing evidence from numerical lattice calculations that Yang-Mills theory, i.e. QCD without dynamical quark degrees of freedom, indeed generates confinement. Concomitantly, diverse physical pictures of the QCD vacuum have been proposed which generate a confining potential between color sources, among others, the dual Meissner effect mechanism ,, random magnetic vortices -, the stochastic vacuum , the leading-log model of Adler , and dual QCD . Other models, for instance instanton models , have foregone a description of confinement and concentrated on generating spontaneous chiral symmetry breaking. Furthermore, there is compelling evidence from lattice Monte Carlo experiments that, as temperature is raised, one encounters a transition to a deconfined phase of Yang-Mills theory in which colored constituents can propagate over distances much larger than typical hadronic sizes. As the above listing already indicates, there presently exists a disparate collection of model explanations for different nonperturbative QCD phenomena, but not a consistent, comprehensive picture of the degrees of freedom dominating the infrared sector. Perspectives for bridging this gap have recently arisen in the framework of the magnetic vortex picture of the QCD vacuum. In this picture, initially explored in -, the Yang-Mills functional integral is assumed to be dominated by disordered vortex configurations. These vortices represent closed magnetic flux lines in three-dimensional space; correspondingly, their world sheets in four-dimensional space-time are two-dimensional. They should be contrasted with the electric flux degrees of freedom encoded by Wilson lines. In fact, they can be regarded as dual to the latter. Wilson loops are sensitive to the (quantized) magnetic flux carried by the vortices; conversely, one can think of closed vortices as measuring the electric flux carried by a Wilson line. On a space-time lattice, this relation is particularly manifest in the fact that magnetic fluxes are defined on the lattice which is dual to the one on which Wilson lines are defined (two lattices being dual to one another means that they have the same lattice spacing $`a`$, but are displaced from one another by the vector $`(a/2,a/2,a/2,a/2)`$). In continuum language, using magnetic degrees of freedom means switching from the usual canonical variables, namely vector potential and electric field, to magnetic field variables along with appropriate canonical conjugates. In the magnetic language, the constraint analogous to Gauß’ law is the Bianchi identity, which enforces continuity of magnetic flux. In order to emphasize the dual relation between vortices and electric flux, the above terminology, originating from the canonical framework, will be used in the following even when discussing the corresponding objects covariantly in 3+1 dimensions, i.e. including their (Euclidean) time evolution. The duality between magnetic and electric fluxes in particular provides a heuristic motivation for using vortices to describe the infrared regime of Yang-Mills theory.
Whereas electric degrees of freedom become strongly coupled in the infrared, leading to the nonperturbative effects highlighted above, it conversely seems plausible to expect magnetic degrees of freedom to be weakly coupled (for Yang-Mills theory, no explicit duality transformation which manifestly exchanges strong and weak coupling regimes is known; however, in related theories, e.g. supersymmetric extensions , such a transformation can be constructed). They can thus be hoped to furnish an adequate representation for the true infrared excitations of the theory. Early evidence that magnetic vortices may indeed form in the Yang-Mills vacuum came from the observation that a constant chromomagnetic field is unstable with respect to the formation of tubular domains. This led to the formulation of the Copenhagen “spaghetti” vacuum . Due to its technical complexity, this approach largely concentrated on local properties of vortices, as opposed to their global topological character, which will take on a decisive role in the present work. Also in the lattice formulation of Yang-Mills theory, different possibilities of defining vortices were explored ,. On this track, a number of encouraging developments have recently taken place; in particular, a practicable procedure for isolating and localizing vortices in lattice gauge configurations has been constructed - using an appropriate gauge (for a recent discussion on the meaning of this gauge-fixing procedure and possible alternatives and generalizations, see -; this discussion further underscores the need to independently explore the physics of vortices, e.g. via the model presented in this work, beyond the technical discussion of how they can be identified in lattice gauge configurations). This has made it possible to gather a wealth of information about these collective degrees of freedom, using lattice experiments, which was previously inaccessible. Vortices appear to dominate long-range gluonic physics not only at zero temperature , but also at finite temperatures , and are able to generate both the confined as well as the deconfined phases. Moreover, there are indications that also chiral symmetry breaking is induced by vortices ,, and that vortices are potentially suited to account for the topological susceptibility of the Yang-Mills ensemble ,,,. If, however, the vortex picture of infrared Yang-Mills dynamics is to attain practical value beyond a qualitative interpretation of the lattice results, it must be ultimately developed into a full-fledged calculational tool, with a simplified model dynamics of the vortices allowing to concentrate on the relevant infrared physics. The work presented here is intended as an initial step in this direction. On the one hand, the effective vortex model proposed below is shown to qualitatively reproduce the confinement aspects of Yang-Mills theory, including the finite-temperature transition to a deconfined phase; on the other hand, it is verified that the parameters of the model can be chosen such as to quantitatively replicate the relation between deconfinement temperature and zero-temperature string tension known from lattice Yang-Mills experiments. Thematically, the present report is restricted to the confinement characteristics of the theory; because of this, only one genuine prediction will be presented, namely of the behavior of the spatial string tension in the deconfined phase.
Another test of the predictive power of the model is discussed in a companion paper , which focuses on the topological susceptibility of the vortex surface ensemble proposed below. The model, which is formulated below for the case of SU(2) color, is open to many refinements in its details; however, in its present form it is entirely adequate to reproduce the vortex phenomenology hitherto extracted from lattice Yang-Mills simulations. In particular, the finite-temperature deconfinement transition can be understood in terms of a transition between two phases in which the vortices either percolate throughout (certain slices of) space-time or not.

## II Vortex model

The SU(2) vortex model under investigation in this work is defined by the following properties:

Vortices multiply any Wilson loop by a factor corresponding to a nontrivial center element of the gauge group whenever they pierce its minimal area. Center elements of a group are those elements which commute with all elements in the group; e.g. in the case of SU(2), one has the trivial phase $`1`$ and one nontrivial element, namely the phase $`-1`$. An SU(2) color vortex thus contributes a factor $`-1`$ to a Wilson loop when it pierces the minimal area of the latter; its flux is quantized. This can be viewed as the defining property of a vortex - beyond any specific model assumption about its dynamics such as presented below. It specifies how vortices couple to electric fluxes, in particular the fluxes implied by the world lines of quark sources. This property provided the original motivation for the proposal of the magnetic vortex picture of confinement -. If the vortices are distributed in space-time sufficiently randomly, then samples of the Wilson loop of value $`+1`$ (originating from loop areas pierced an even number of times by vortices) will strongly cancel against samples of the Wilson loop of value $`-1`$ (originating from loop areas pierced an odd number of times by vortices), generating an area law fall-off. The circumstances under which this heuristic argument is valid will be one of the foci of the present work. As a last remark, the following should be noted. Above, the value a Wilson loop takes in a given vortex configuration was defined via the number of times the minimal area spanned by the loop is pierced by vortices. One should not misinterpret this as an arbitrariness. Due to the closed nature of the vortices, any other choice of area results in the same value for the Wilson loop. If one wishes to formulate the influence of vortices on Wilson loops in a manner which does not refer to any spanning of the loop by an area, then it is the linking number between the vortices and the loops which determines the value of the latter. For practical purposes, however, the minimal area specification is very convenient. For the purpose of finite temperature studies, note that the above specification applies equally to the area spanned by two Polyakov loops.

Vortices are closed two-dimensional random surfaces in four-dimensional (Euclidean) space-time. Vortex surfaces will be modeled as consisting of plaquettes on a four-dimensional space-time lattice. In a given lattice configuration, a plaquette can be associated with two values, 0 or 1. The value 0 indicates that the plaquette in question is not part of a vortex, whereas the value 1 indicates that it is part of a vortex.
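This 0/1 bookkeeping suggests an obvious concrete representation; a minimal sketch (the data layout and names are our own choices, not prescribed by the model) stores one occupation array per plane orientation:

```python
import numpy as np

def init_plaquettes(L):
    """Empty vortex configuration on an L^4 lattice: one 0/1 array per
    unordered plane orientation {mu,nu} (mu < nu, directions 0..3),
    indexed by the base site n of the plaquette."""
    return {(mu, nu): np.zeros((L, L, L, L), dtype=np.int8)
            for mu in range(4) for nu in range(mu + 1, 4)}
```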
Note that this lattice is dual to the one on which “electric” degrees of freedom would be defined, such as Yang-Mills link variables, or the associated Wilson loops. How vortices emerge on the dual lattice in the framework of Yang-Mills theory via center gauge fixing and center projection is discussed in detail in -. The fact that vortices are closed will be implemented in the following way in numerical Monte Carlo experiments. Only Monte Carlo updates are allowed which simultaneously change the values of all the plaquettes making up the surface of an (arbitrary) three-dimensional elementary cube in the four-dimensional lattice. This can be interpreted as the creation of a vortex in the shape of the cube surface on the lattice. Such an algorithm generates only closed two-dimensional surface configurations (note that this construction also reflects that vortex surfaces can be viewed as boundaries of three-dimensional volumes in four-dimensional space-time ). Note that if a given plaquette which is being updated was already part of a vortex before the update, then it ceases to be part of a vortex after the update. One can think of this in terms of two vortices annihilating each other on a plaquette, if they both occupy that plaquette (for SU(2) color, there is only a single non-trivial center element $`Z=-1`$; as a consequence, two vortices annihilate, since $`Z^2=1`$; for higher SU(N) groups, where the center elements are defined by $`Z^N=1`$, superposition of two vortices will in general yield a residual vortex flux, introducing the possibility of vortex branchings). The net result is that the plaquette is not part of any vortex.

Vortex surfaces are associated with a physical transverse thickness. In order to represent regular, finite action, configurations in Yang-Mills theory, vortices must possess a physical transverse thickness in the directions perpendicular to the vortex surface. This thickness has furthermore been argued to be of crucial phenomenological importance, e.g. in generating the approximate Casimir scaling behavior of Wilson loops in the adjoint representation of the gauge group at intermediate distances ,. A finite vortex thickness in particular implies that it does not make sense to consider configurations in which two parallel vortex surfaces are closer to each other than the vortex thickness; i.e., vortices cannot be packed arbitrarily densely. This feature is implemented in the present vortex model simply via the lattice spacing, which forces parallel vortices to occur a certain distance from each other. This should not be misconstrued to mean that the vortices have a hard core. Rather, when two parallel vortices come too close to each other, their fluxes can be considered to annihilate in the sense that their superposition is considered equivalent to the vacuum. The reader is reminded that this is precisely the manner in which the effect of a Monte Carlo update on a vortex configuration was specified above. The algorithm thus reflects very closely the underlying physical picture. Note furthermore that the implementation of the vortex thickness via the lattice spacing implies that the latter is treated as a fixed physical scale. This will be commented upon in more detail further below. As a last remark, it should be noted that at this stage, no explicit transverse profile for the vortices has been introduced, as would be necessary e.g. for the purpose of correctly describing the behavior of adjoint representation Wilson loops ,.
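With the layout sketched above, the cube update can be written as follows (again a sketch with our own naming; periodic boundary conditions are assumed). Since the six faces of an elementary cube form a closed surface, the move manifestly preserves closedness, and the XOR implements the annihilation of doubly occupied plaquettes just described:

```python
def cube_update(p, n, rho, L):
    """Flip the six plaquettes bounding the elementary 3d cube based at
    site n (a 4-tuple) and spanned by the three directions != rho."""
    dirs = [mu for mu in range(4) if mu != rho]
    for i in range(3):
        mu, nu = sorted((dirs[i], dirs[(i + 1) % 3]))
        lam = dirs[(i + 2) % 3]           # direction normal to this face pair
        shifted = list(n)
        shifted[lam] = (shifted[lam] + 1) % L
        p[(mu, nu)][tuple(n)] ^= 1        # face at the base site
        p[(mu, nu)][tuple(shifted)] ^= 1  # opposite, parallel face
```

In a standard Metropolis scheme, such a trial move would e.g. be accepted with probability $`\mathrm{min}(1,\mathrm{e}^{-\mathrm{\Delta }S})`$, with $`\mathrm{\Delta }S`$ evaluated from the action defined in section IV below.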
The vortex thickness enters only via the minimal distance between parallel vortex surfaces, whereas the surfaces themselves are still treated as infinitely thin. The introduction of an explicit transverse profile is one of the possible refinements of the model presented here.

Vortices are associated with an action density per surface area. This is reflected in a corresponding explicit term in the action, cf. eq. (4) below.

Vortices are stiff. While an ultraviolet cutoff on the space-time fluctuations of the vortex surfaces is already implied by the lattice spacing, vortices will be endowed with a certain stiffness beyond this via an explicit term in the action, cf. eq. (5) below. Specifically, if two plaquettes which share a link but do not lie in the same two-dimensional plane are both part of a vortex, then this fact will be penalized with a certain action increment. Thus, vortices are stiffer than implied by the lattice spacing alone, and the stiffness can be regulated with an independent parameter.

## III Physical Interpretation of the Model

Before investigating the properties of the model vortices defined above, it is necessary to clarify more precisely which degrees of freedom these vortices are to represent. The infrared structure of typical Yang-Mills configurations is thought to be encoded in thick magnetic vortices. The term “thick” means that, whereas vortices on large scales form two-dimensional surfaces in four-dimensional space-time, they possess an extension, i.e. a regular profile function, in the directions perpendicular to the surface. This extension is one of the quantities determined in the framework of the Copenhagen vacuum and also has been probed by lattice experiments ,,. As already mentioned in the previous section, it is instrumental in explaining the approximate Casimir scaling behavior of Wilson loops in the adjoint representation of the gauge group at intermediate distances ,. By contrast, the center projection vortices which arise in the framework of the maximal center gauges - are (in the continuum limit ) infinitely thin surfaces which provide a rough localization of the thick vortices described above, as has also been ascertained empirically using lattice experiments ,. It is possible to define the center projection vortices on arbitrarily fine lattices. Thus, while their effective action after integrating out all other Yang-Mills degrees of freedom presumably does already contain the QCD scale $`\mathrm{\Lambda }_{QCD}`$, it does not yet describe an infrared effective theory; center projection vortices may still exhibit fluctuations of arbitrarily short wavelengths, within the profile of the thick vortices they represent. The QCD scale in the center projection vortex effective action merely controls on which scale this action becomes nonlocal. As one diminishes the lattice spacing, the nonlocality of the effective action presumably becomes more and more pronounced (note that the center projection vortex effective theory, which on the lattice is a $`Z(2)`$ gauge theory with non-standard action, must contrive to avoid the deconfining transition well-known to occur in the standard $`Z(2)`$ gauge theory with plaquette action as the coupling is diminished in approaching the continuum limit). Evidence for the aforementioned short wavelength fluctuations has been gathered e.g. in , where the binary correlations between center projection vortex intersection points with a given space-time plane were measured.
This correlation function, while exhibiting the renormalization group scaling expected of a physical quantity, appears to diverge at small distances. Such behavior can be understood from short wavelength fluctuations of the center projection vortices as follows: Consider a plane which cuts a thick vortex along a (smeared-out) line, and consider furthermore intersection points of the associated center projection vortex with this plane, cf. Fig. 1. Due to the transverse fluctuations of the projection vortices, one will find a strongly enhanced probability of detecting such intersection points close to one another (compared with the probability one would expect from the mean vortex density). The precise location of a center projection vortex within the profile of the associated thick vortex is gauge-dependent; it changes as different specific realizations of the maximal center gauge are adopted ,. The vortex model proposed in this work does not aim to reproduce these gauge-dependent fluctuations. It is intended as a true low-energy effective theory; the model vortex surfaces defined in the previous section, while formally thin, are meant to represent the center of the profile of a thick vortex, which is smooth on short length scales, without reproducing the short-wavelength fluctuations of the corresponding center projection vortex, cf. Figure 1. Alternatively, the model vortices can be interpreted as low-energy effective degrees of freedom obtained by integrating out, or smoothing over (a specific smoothing procedure applied to center projection vortices in Yang-Mills theory was investigated in ), all ultraviolet fluctuations in the abovementioned center projection vortex effective theory, down to some fixed physical scale (encoded in the vortex model via the lattice spacing). Note that such a procedure also eliminates the complicated nonlocal dynamics of the center projection vortices and leaves a vortex model action which should be well described by a few local terms, in the spirit of a gradient expansion . It is this conceptual framework which led to the vortex model postulated in the previous section. Note that this picture in particular implies that one should not expect the center projection vortex density measured in lattice Yang-Mills experiments, which does obey the proper renormalization group scaling law (note erratum in ) ,, to match the density of the model vortices. Rather, the latter should be substantially lower. In view of this framework, it becomes clear that the lattice spacing enters the vortex model as a fixed physical cutoff scale. In other words, it is not envisaged to take the continuum limit of the model by reducing the lattice spacing and accordingly renormalizing the coupling constants. If one wishes to formulate the model in the continuum, then the lattice spacing must be replaced by some other fixed physical cutoff scale related to the thickness of the vortices. Thus, the lattice model is a caricature of continuum vortex physics only insofar as it merely allows the vortices to run parallel to the space-time axes. One could in principle remedy this e.g. by representing the vortex surfaces as triangulations in space-time, presumably again with some lower bound on the areas of the elementary triangles. Accordingly, the lattice spacing introduces specific physical effects into the vortex model, as already mentioned in the previous section.
On the one hand, the spacing accounts for aspects of the finite vortex thickness by preventing parallel vortex surfaces from occurring too close to one another. On the other hand, it introduces an ultraviolet cutoff on the space-time fluctuations of the vortices, which however is superseded by an explicit curvature penalty in the action. It must be emphasized that these effects enter the model not as unwanted lattice artefacts, but as a specific realization of physical features also present in any corresponding continuum picture of vortices.

## IV Formal definition

The properties of the vortex model described above can be summarized formally as follows. The basic variables are plaquettes $`p_n^{\{\mu ,\nu \}}`$ on a four-dimensional space-time lattice, extending from a lattice site described by the four-vector $`n`$ into the (positive) $`\mu `$ and $`\nu `$ directions. Note that the superscripts $`\{\mu ,\nu \}`$, where always $`\mu \ne \nu `$, are unordered sets, i.e. there is no distinction between $`\{\mu ,\nu \}`$ and $`\{\nu ,\mu \}`$. The variables $`p_n^{\{\mu ,\nu \}}`$ can take the values 0 or 1. Furthermore, in the following, $`e_\mu `$ will denote the vector in $`\mu `$ direction of the length of one lattice spacing. The partition function reads

$$Z=\left(\prod _n\prod _{\mu <\nu }\sum _{p_n^{\{\mu ,\nu \}}=0}^1\right)\mathrm{\Delta }[p_n^{\{\mu ,\nu \}}]\,\mathrm{exp}\left(-S[p_n^{\{\mu ,\nu \}}]\right)$$ (1)

with the constraint

$$\mathrm{\Delta }[p_n^{\{\mu ,\nu \}}]=\prod _n\prod _\mu \delta _{L_n^\mu \,\mathrm{mod}\,2,\,0}$$ (2)

$$L_n^\mu =\sum _{\nu \ne \mu }\left(p_n^{\{\mu ,\nu \}}+p_{n-e_\nu }^{\{\mu ,\nu \}}\right)$$ (3)

enforcing closedness of the vortex surfaces by constraining the number $`L_n^\mu `$ of occupied plaquettes attached to the link extending from the lattice site $`n`$ in $`\mu `$ direction to be even, for any $`n`$ and $`\mu `$. How this constraint is conveniently enforced in Monte Carlo simulations was described in section II.
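On a stored configuration, the constraint of Eqs. (2)-(3) can also be verified directly; in the sketch below (same hypothetical data layout as before), `np.roll` supplies the shifted occupation $`p_{n-e_\nu }^{\{\mu ,\nu \}}`$ under periodic boundary conditions:

```python
import numpy as np

def is_closed(p, L):
    """Check that L_n^mu, Eq. (3), is even for every link (n, mu)."""
    for mu in range(4):
        total = np.zeros((L,) * 4, dtype=np.int64)
        for nu in range(4):
            if nu == mu:
                continue
            key = (min(mu, nu), max(mu, nu))
            total += p[key]                       # p_n^{mu,nu}
            total += np.roll(p[key], 1, axis=nu)  # p_{n-e_nu}^{mu,nu}
        if np.any(total % 2):
            return False
    return True
```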
The action consists of a surface area part and a curvature part, $`S=S_{area}+S_{curv}`$, which read

$$S_{area}=ϵ\sum _n\sum _{\mu <\nu }p_n^{\{\mu ,\nu \}}$$ (4)

$$S_{curv}=\frac{c}{2}\sum _n\sum _\mu \sum _{\nu ,\lambda \ne \mu ;\,\lambda \ne \nu }\left(p_n^{\{\mu ,\nu \}}p_n^{\{\mu ,\lambda \}}+p_n^{\{\mu ,\nu \}}p_{n-e_\lambda }^{\{\mu ,\lambda \}}+p_{n-e_\nu }^{\{\mu ,\nu \}}p_n^{\{\mu ,\lambda \}}+p_{n-e_\nu }^{\{\mu ,\nu \}}p_{n-e_\lambda }^{\{\mu ,\lambda \}}\right)$$ (5)

$$=\frac{c}{2}\sum _n\sum _\mu \left[\left(\sum _{\nu \ne \mu }\left(p_n^{\{\mu ,\nu \}}+p_{n-e_\nu }^{\{\mu ,\nu \}}\right)\right)^2-\sum _{\nu \ne \mu }\left(p_n^{\{\mu ,\nu \}}+p_{n-e_\nu }^{\{\mu ,\nu \}}\right)^2\right]$$ (6)

While the lower expression for $`S_{curv}`$ is more compact, the upper expression exhibits its construction more clearly: For every link extending from the lattice site $`n`$ in $`\mu `$ direction, all pairs of attached plaquettes whose two members do not lie in the same plane are considered and, if both members are part of a vortex, this is penalized with an action increment $`c`$. It should be noted that this type of random surface action has in recent times also attracted interest in connection with a rather different physical motivation than the one espoused in the present work ,. These investigations correspondingly focus on entirely different observables associated with the random surfaces. In particular, only what would be interpreted as the zero-temperature case in the vortex framework is considered, whereas the treatment below emphasizes also the generalization to finite temperatures and the resulting phenomena. Wilson loops are determined by the number of times vortices pierce their minimal area. As a generic example, consider a rectangular Wilson loop with corners defined as follows. Let $`n_0`$ be an arbitrary lattice site and $`m_0=n_0+(e_1+e_2+e_3+e_4)/2`$. Place the corners of the Wilson loop at $`\{m_0,m_0+Je_1,m_0+Ke_2,m_0+Je_1+Ke_2\}`$ with integer $`J,K`$; as already mentioned in section II, Wilson loops are defined on the lattice dual to the one the vortices are constructed on. Then, in any given vortex configuration, the number of times the minimal area spanned by this Wilson loop is pierced by vortices is

$$Q=\sum _{j=1}^J\sum _{k=1}^Kp_{n_0+je_1+ke_2}^{\{3,4\}}$$ (7)

and the Wilson loop consequently takes the value $`W=(-1)^Q`$.

## V Survey of the plane of coupling constants

Carrying out measurements of Wilson loops of different sizes on a symmetric ($`16^4`$) lattice, one finds that the plane of coupling constants $`ϵ,c`$ can be partitioned into a confining and a non-confining region, cf. Fig. 2. Since the symmetric lattice constitutes an approximation to the zero-temperature (i.e. infinite lattice) theory (the physical size of the lattice spacing will be determined further below), the confining sector is of course the one of interest.
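For concreteness, the action of Eqs. (4) and (6) and the piercing count of Eq. (7) can be evaluated as in the sketch below (same assumed data layout as above; lattice directions are 0-indexed, so the $`\{3,4\}`$ plane of Eq. (7) corresponds to the key `(2, 3)`, and the half-lattice-spacing displacement of the dual lattice is left implicit in the indexing convention):

```python
import numpy as np

def action(p, eps, c, L):
    """S = S_area + S_curv, using the compact form of Eq. (6)."""
    S_area = eps * sum(arr.sum() for arr in p.values())
    S_curv = 0.0
    for mu in range(4):
        stack = []
        for nu in range(4):
            if nu == mu:
                continue
            key = (min(mu, nu), max(mu, nu))
            stack.append(p[key] + np.roll(p[key], 1, axis=nu))
        s = sum(stack)
        S_curv += 0.5 * c * (s * s - sum(t * t for t in stack)).sum()
    return S_area + S_curv

def wilson_loop(p, n0, J, K):
    """W = (-1)^Q, Eq. (7): count vortex plaquettes in the {3,4} plane
    piercing the minimal area of a J x K loop in the 1-2 plane."""
    Q = 0
    L = p[(2, 3)].shape[0]
    for j in range(1, J + 1):
        for k in range(1, K + 1):
            site = ((n0[0] + j) % L, (n0[1] + k) % L, n0[2], n0[3])
            Q += int(p[(2, 3)][site])
    return (-1) ** Q
```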
Furthermore, for a large range of coupling constants $`ϵ,c`$ in the confining region, one finds a deconfinement phase transition when raising the temperature of the system by decreasing the number of lattice spacings $`N_t`$ making up the Euclidean time direction and evaluating the heavy quark potential using Polyakov loop correlators. Since the lattice only provides a discrete set of temperatures for a given lattice spacing, it is necessary to use an interpolation procedure to determine the deconfinement temperature. For fixed $`ϵ`$, the curvature coefficient $`c`$ was varied and the values of $`c`$ were recorded at which the inverse deconfinement temperature crosses the values $`a,2a`$ and $`3a`$ ($`a`$ denoting the lattice spacing). Interpolation then allows one to define the ratio $`T_c/\sqrt{\sigma _0}`$ for all $`c`$, where $`T_c`$ is the deconfinement temperature and $`\sigma _0`$ is the zero temperature string tension. For the purpose of modeling SU(2) Yang-Mills theory, it is desirable to reproduce the value $`T_c/\sqrt{\sigma _0}\approx 0.69`$, cf. . The line in the coupling constant plane for which this is achieved is displayed in Fig. 2. The dotted end of the line for $`ϵ>0.4`$ indicates that the $`T_c/\sqrt{\sigma _0}\approx 0.69`$ trajectory was not explored further into this direction, because string tension measurements became too noisy to allow its accurate determination; however, the authors have no evidence that the trajectory stops at any particular point before (presumably) ultimately reaching the non-confining region. On the other hand, in the region $`ϵ<-0.4`$, the system appears to become unstable; for $`ϵ=-0.6`$, unphysical oscillatory behavior of the potential between static sources can be clearly observed. This is not surprising, since for $`ϵ<0`$, the model action ceases to be manifestly positive. Only for a certain limited region of negative $`ϵ`$ can one expect the model to be stabilized by the cutoff on the vortex density implied by the lattice spacing. Setting the scale by positing a zero-temperature string tension of $`\sigma _0=(440\text{MeV})^2`$, measurements of $`\sigma _0a^2`$ allow one to extract the lattice spacing $`a`$. The results obtained on the $`T_c/\sqrt{\sigma _0}\approx 0.69`$ trajectory for different values of $`ϵ`$ are summarized in Table I. The lattice spacing only varies by about 10% on the aforementioned trajectory. This corroborates the interpretation of the lattice spacing as a fixed physical quantity discussed at length in section III. It must again be emphasized that the role of the lattice spacing in the vortex model is fundamentally different e.g. from its role in lattice gauge theory. There, the lattice spacing represents an unphysical regulator, and physical quantities must be extrapolated to the continuum limit, where a certain scaling behavior connecting the coupling constant and the lattice spacing arises due to the scale invariance of the classical theory. On the other hand, in the vortex model, the $`T_c/\sqrt{\sigma _0}\approx 0.69`$ trajectory also implies a type of scaling behavior, but one which connects the coupling constants $`ϵ`$ and $`c`$. The lattice spacing $`a`$ by contrast must be counted among the physical quantities, which are to be accorded a fixed value, just like the string tension or the deconfinement temperature. Indeed, as discussed in section III, the lattice spacing has a definite interpretation connected with the transverse thickness of the vortices. 
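The scale setting described here is simple arithmetic, which the following back-of-the-envelope check (ours) makes explicit; the input value $`\sigma _0a^2=0.755`$ is the zero-temperature measurement at $`ϵ=0`$, $`c=0.24`$ quoted in section VII:

```python
# Numeric check (ours) of the scale setting: with sigma_0 = (440 MeV)^2
# and the measured sigma_0 a^2, the lattice spacing a and the maximal
# representable momentum Lambda = pi/a follow.
import math

hbar_c     = 197.327          # MeV fm
sigma_a2   = 0.755            # sigma_0 a^2 at eps = 0, c = 0.24 (zero T)
sqrt_sigma = 440.0            # sqrt(sigma_0) in MeV

a_fm   = math.sqrt(sigma_a2) * hbar_c / sqrt_sigma      # lattice spacing, fm
Lambda = math.pi / a_fm * hbar_c                        # UV cutoff, MeV
print(f"a = {a_fm:.2f} fm, Lambda = {Lambda:.0f} MeV")  # ~0.39 fm, ~1600 MeV
```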
It is reassuring that the lattice spacing in fact does behave accordingly on the $`T_c/\sqrt{\sigma _0}\approx 0.69`$ trajectory by remaining approximately constant. Most of the measurements in the subsequent sections refer specifically to the case $`ϵ=0`$; by constraining the model to the $`T_c/\sqrt{\sigma _0}\approx 0.69`$ trajectory, this implies using the set of coupling constants $$ϵ=0,c=0.24.$$ (8) When other choices of coupling constants are used, this is explicitly stated. In particular, in the next section, a prediction of the spatial string tension in the deconfined phase will be presented; there, measurements at different points along the $`T_c/\sqrt{\sigma _0}\approx 0.69`$ trajectory will be taken to demonstrate the stability of the prediction. As a final remark, note that the lattice spacing $`a`$ also specifies the ultraviolet limit of validity of the effective vortex theory. In the case of the choice of coupling constants (8), one has $`a=0.39`$ fm, cf. Table I, which corresponds to a maximal momentum representable on the lattice of $`\mathrm{\Lambda }=\pi /a\approx 1600\text{MeV}`$. ## VI Confinement and percolation Fig. 3 displays measurements of the string tension between two static color sources, evaluated using Polyakov loop correlators, and also of the spatial string tension extracted from spatial Wilson loops, at different temperatures. These measurements quantitatively reproduce the behavior found in full Yang-Mills theory. While the correct relation between zero-temperature string tension and deconfinement temperature was fitted using the freedom in the choice of coupling constants, cf. the previous section, the behavior of the spatial string tension constitutes a first prediction of the model. As evidenced in Fig. 3, this prediction is stable to within 10% effects as one varies the point on the physical $`T_c/\sqrt{\sigma _0}\approx 0.69`$ trajectory at which the measurement is taken. For the choice (8) of coupling constants, the value $`\sigma _s(T=1.67T_c)=1.39\sigma _0`$ measured in the present vortex model agrees to within 1% with the value obtained in full Yang-Mills theory, as can be inferred by interpolating the values quoted in . A possible way to understand this surprisingly accurate correspondence, based on the specific space-time structure of the vortex configurations in the deconfined phase, will be discussed further below. In order to understand and interpret the confinement properties displayed in Fig. 3, it is useful to contrast them with the percolation properties of the vortices . To best exhibit the latter, it is necessary to consider three-dimensional slices of the lattice universe (where the slices are taken between two parallel hyperplanes of the lattice on which the vortices are defined). On such slices, vortices form closed lines made up of links. Both time slices as well as slices in which one of the spatial coordinates is fixed (in the following referred to as space slices) are of interest. By finding an initial vortex link, identifying all vortex links connected to it, and iterating this procedure with all new vortex links reached, one can discriminate between disjoint vortex clusters; the space-time extension of a cluster is then defined as the maximal Euclidean distance between two points on that cluster. 
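The cluster identification just described is a standard connected-component search. The sketch below (our own construction, with an arbitrary illustrative occupation of links) decomposes the occupied links of a periodic three-dimensional slice into clusters by breadth-first growth and measures each cluster's extension as the maximal minimum-image distance between its points:

```python
# A generic sketch (ours) of the cluster analysis described in the
# text: decompose occupied links on a periodic 3d slice into connected
# clusters and measure each cluster's spatial extension.
import itertools
import numpy as np
from collections import deque

dims = (4, 8, 8)                            # (N_t, N_s, N_s) space slice
rng = np.random.default_rng(2)
# a link = (site, direction); occupy a random sparse subset for the demo
links = {((t, x, y), d)
         for t, x, y in np.ndindex(dims) for d in range(3)
         if rng.random() < 0.1}

def endpoints(link):
    site, d = link
    other = list(site); other[d] = (other[d] + 1) % dims[d]
    return site, tuple(other)

# map each lattice site to the occupied links touching it
touch = {}
for lk in links:
    for s in endpoints(lk):
        touch.setdefault(s, []).append(lk)

def clusters():
    seen, out = set(), []
    for start in links:
        if start in seen:
            continue
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:                        # breadth-first growth
            lk = queue.popleft()
            comp.append(lk)
            for s in endpoints(lk):
                for nb in touch[s]:
                    if nb not in seen:
                        seen.add(nb)
                        queue.append(nb)
        out.append(comp)
    return out

def extension(comp):
    """Maximal pairwise distance, using the minimum image convention."""
    pts = {s for lk in comp for s in endpoints(lk)}
    best = 0.0
    for a, b in itertools.combinations(pts, 2):
        d = [min(abs(ai - bi), Li - abs(ai - bi))
             for ai, bi, Li in zip(a, b, dims)]
        best = max(best, sum(di * di for di in d) ** 0.5)
    return best

for comp in sorted(clusters(), key=len, reverse=True)[:3]:
    print(len(comp), "links, extension", round(extension(comp), 2))
```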
In a percolating ensemble, most vortex links will be organized into clusters of near maximal size, which corresponds to an extension of $`\sqrt{N_t^2+2N_s^2}/2`$ lattice spacings on a $`N_t\times N_s\times N_s`$ space slice due to the periodic boundary conditions (and analogously for time slices). On the other hand, in the absence of percolation, most vortex links will be organized into small clusters. Fig. 4 displays, for different temperatures and taking space slices of the lattice universe, the fraction of vortex links present in the ensemble which are part of a cluster of the extension specified on the horizontal axis. Clearly, in the confining phase, vortices in space slices percolate, whereas they cease to percolate in the deconfined phase. A different picture is obtained when considering time slices, cf. Fig. 5. There, vortex lines percolate in both phases. This in particular also implies that the two-dimensional vortex surfaces in four-dimensional space-time percolate in both phases. Only when considering space slices of the lattice universe does a percolation transition become visible. Measurements at fixed $`ϵ,c`$ (and therefore at fixed lattice spacing $`a`$) only provide a discrete set of temperatures; in the case of $`ϵ=0,c=0.24`$ treated above, $`N_t=2`$ happens to correspond to $`T=0.83T_c`$, whereas $`N_t=1`$ corresponds to $`T=1.67T_c`$ (the reader may feel uneasy at this point because $`N_t=1`$ seems rather special, and the observed phenomena in the deconfined phase may be tied to this particular case; this concern is addressed in more detail further below). Clearly, it is desirable to have a better temperature resolution especially in the region of the phase transition. Values of observables at intermediate temperatures can be defined via an interpolation procedure such as already employed in section V. This is how the values at $`T=1.1T_c`$ in Fig. 5 were obtained. The measurements in section V in particular made it possible to interpolate, for fixed $`ϵ=0`$, the value of $`T_ca`$ as a function of $`c`$. This, however, also permits finding the set of $`c_i`$ for which $`1.1=(Ta)/(T_ca)=1/(iT_ca)`$, with $`i`$ denoting an integer. Then, by construction, measurements of an observable on a lattice with $`N_t=i`$ at coupling $`c=c_i`$ all correspond to $`T=1.1T_c`$, which again by interpolation allows one to define the observable also for $`c=0.24`$ at $`T=1.1T_c`$. A more specific picture of the deconfined phase is obtained by analyzing the number of links contained in the clusters. For the choice of coupling constants $`ϵ=0,c=0.24`$, (only) the lattice universe with $`N_t=1`$ realizes the deconfined phase. On space slices of this lattice, clusters containing only one vortex link are necessarily clusters which wind around the lattice in the Euclidean time direction and are closed by virtue of the periodic boundary conditions. The smallest non-winding vortex cluster by contrast contains four links. Indeed, measuring the percentage of links belonging to clusters containing only one link yields a fraction of 95% (for $`N_t=1`$, i.e. $`T=1.67T_c`$). Thus, the small extension vortex clusters which dominate the deconfined phase can more specifically be characterized as winding vortex configurations. Note that this specific space-time structure of the vortex configurations in the deconfined phase, which is also found for the center projection vortices extracted from the SU(2) Yang-Mills ensemble, cf. 
, may explain the surprising accuracy of the prediction of the spatial string tension displayed in Fig. 3. Given that the Yang-Mills dynamics favors the formation of winding vortices extending predominantly into the Euclidean time direction, the vortex surfaces allowed in the present model can quite accurately represent the configurations relevant in the full theory, despite the large lattice spacing. Thus, in this particular setting, the model space does not imply a strong truncation of the full physics, even on the coarse lattice used. The reader may feel uneasy about the discussion in the previous paragraphs because the lattice with $`N_t=1`$ realizing the deconfined phase seems a rather special case, and one might worry that the loss of percolation, and more specifically the dominance of winding vortices, is a particular feature of the $`N_t=1`$ lattice. In order to show that, on the contrary, the correlation between deconfinement on the one hand and the dominance of winding vortices on the other hand is a generic feature of the random vortex surfaces, the authors have numerically investigated the (unrealistic) choice of parameters $`ϵ=0,c=0.46`$. In this case, lattices with $`N_t=1,2,3`$ realize the deconfined phase, whereas lattices with larger $`N_t`$ realize the confined phase (as determined by measurements of the string tension). Table II displays the resulting fraction $`p_i`$ of vortex links found in clusters containing a total of $`i`$ links. For $`N_t=1`$, the winding vortices containing one link dominate the ensemble to more than 99.5%. For $`N_t=2`$, at least 77% of links detected were part of a winding vortex containing a total of two links; the fraction $`p_4`$ contains additional winding vortices (with a transverse fluctuation), but the analysis is ambiguous, since there are also non-winding clusters containing four links. For $`N_t=3`$, this ambiguity does not occur, and the fractions $`p_3`$, $`p_5`$ and $`p_7`$ necessarily correspond to winding vortices, with transverse fluctuations in the cases of $`p_5`$ and $`p_7`$. Thus, winding vortices still encompass well more than half of the vortex links present in the ensemble even quite near the critical temperature (additional winding vortices are subsumed in the further odd fractions $`p_9,p_{11},\mathrm{}`$ not shown). Percolating clusters (embodied in $`p_{max}`$) on the other hand are virtually non-existent for the deconfined cases $`N_t=1,2,3`$. By contrast, for $`N_t\ge 4`$, which corresponds to the confining phase, the dominant proportion of vortex links is associated with percolating vortices (cf. $`p_{max}`$). Short winding vortices completely disappear, including ones with small transverse fluctuations (cf. $`p_5`$ and $`p_7`$ in the case $`N_t=5`$). As a last point, it is interesting to contrast the percolation phenomena exhibited above with the behavior of the vortices in the region of the phase diagram in which confinement is absent even at zero temperature (the shaded region in Fig. 2). In the cases studied further above, taken from the confining regime of the coupling constant plane, a percolation transition at the deconfinement temperature $`T_c`$ only became visible in space slices of the lattice universe; on the other hand, percolation of vortex lines in time slices, and consequently percolation of the complete vortex surfaces in four dimensions, persisted even above the deconfinement temperature $`T_c`$. 
In contradistinction to this, in the non-confining regime of the coupling constant plane, the two-dimensional vortex surfaces in four-dimensional space-time as a whole do not percolate. Note that this is equivalent to the statement that vortex lines do not percolate in any three-dimensional slice of space-time (at zero temperature, or any symmetric lattice approximation of it, there is of course no distinction between space and time slices). As a case in point, therefore, Fig. 6 displays the sliced vortex distribution analogous to Fig. 4, but evaluated using a symmetric ($`16^4`$) lattice for $`ϵ=0.17,c=0.4`$, which is in the non-confining region, cf. Fig. 2. ## VII Discussion It is not surprising that the high temperature, deconfined phase of the vortex model is associated with a lack of vortex percolation in space slices of the universe. For any Polyakov loop correlator, one can choose a space slice containing both of the Polyakov loops involved as well as the minimal area spanned by them. Consider now the consequence of a lack of vortex percolation in this space slice in the following simple heuristic picture. Absence of percolation implies the existence of an upper bound $`d`$ on the size of vortex clusters. Due to the closed nature of the vortex lines, this implies that, on the space-time plane containing the two Polyakov loops, any point at which a vortex pierces that plane comes paired with another such point at most a distance $`d`$ away, cf. Fig. 7. Consider the idealized case of such pairs being randomly distributed on the space-time plane in question. Then one can evaluate the behavior of the Polyakov loop correlator at inverse temperature $`\beta `$ on a universe of linear extension $`L`$ as follows. Only pairs whose midpoints lie within the two strips of width $`d`$ centered on the Polyakov loops can contribute a factor $`-1`$ to the Polyakov loop correlator. Denote by $`p`$ the probability that such a pair actually does contribute a factor $`-1`$. This probability is an appropriate average over the distances of the midpoints of the pairs from the Polyakov loops, their angular orientations and the distribution of separations between the points making up the pairs. The probability $`p`$, however, does not depend on the macroscopic extension of the Polyakov loop correlator. Now, a pair which is placed at random on the space-time plane has probability $`pA/\beta L`$ of contributing a factor $`-1`$ to the Polyakov loop correlator, where $`A=2\beta d`$ is the area of the two strips of width $`d`$ centered on the Polyakov loops, and $`\beta L`$ is the area of the entire plane. Placing $`N_{pair}`$ pairs on the plane at random, the probability that $`n`$ of them contribute a factor $`-1`$ to the Polyakov loop correlator is $$P_{N_{pair}}(n)=\left(\begin{array}{c}N_{pair}\\ n\end{array}\right)\left(\frac{2pd}{L}\right)^n\left(1-\frac{2pd}{L}\right)^{N_{pair}-n}$$ (9) and, consequently, the expectation value of the correlator for large universes is $$W=\underset{n=0}{\overset{N_{pair}}{\sum }}(-1)^nP_{N_{pair}}(n)=\left(1-\frac{4pd}{L}\right)^{N_{pair}}\stackrel{N_{pair}\to \mathrm{}}{\longrightarrow }e^{-2\beta pd\rho }$$ (10) where the planar density of points $`\rho =2N_{pair}/\beta L`$ is kept fixed as $`N_{pair}\to \mathrm{}`$. The Polyakov loop correlator is therefore independent of the separation between the Polyakov loops, negating confinement, if the extension of vortices or vortex networks in a space slice of the universe is bounded. 
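The limit taken in Eq. (10) is easy to verify numerically. In the sketch below (ours; the parameter values are purely illustrative), the alternating sum over the binomial distribution (9) is evaluated in closed form as $`(1-4pd/L)^{N_{pair}}`$ and compared with $`e^{-2\beta pd\rho }`$ at fixed $`\rho `$:

```python
# A quick numerical check (ours) of the limit in Eq. (10): the
# alternating binomial sum over Eq. (9) collapses to (1 - 4pd/L)^N_pair
# and approaches exp(-2 beta p d rho) at fixed planar density rho.
import math

p, d, beta, rho = 0.5, 1.0, 2.0, 0.4      # illustrative values only
for L in (50, 500, 5000):
    n_pair = int(rho * beta * L / 2)      # rho = 2 N_pair / (beta L)
    q = 2 * p * d / L                     # hit probability of one pair
    # sum_n (-1)^n C(N,n) q^n (1-q)^(N-n) = (1 - 2q)^N
    W = (1 - 2 * q) ** n_pair
    print(L, W, math.exp(-2 * beta * p * d * rho))
```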
Note that the persistence of percolation of the two-dimensional vortex surfaces as a whole in the deconfined phase does not influence this argument; what is important is the presence or absence of a pair correlation between vortex intersection points on the plane containing a Polyakov loop correlator. Conversely, percolation of vortices is therefore a necessary condition for confinement. Only then is it possible for the points at which vortices pierce a given space-time plane to be sufficiently randomly distributed as to generate an area law for a Wilson loop or Polyakov loop correlator embedded in that plane; the pair correlation crucial in the model visualization presented above is no longer operative. Indeed, if one assumes these piercing points to be randomly distributed, one obtains by an argument analogous to the one above an area law with a string tension equal to twice the planar density of intersection points $`\rho `$, cf. . More generally, if the points cannot be packed arbitrarily densely, but instead at most one per plaquette of a lattice (of spacing $`a`$) imposed on the plane can occur, the string tension obeys $$\sigma /\rho =-\frac{1}{\rho a^2}\mathrm{ln}(1-2\rho a^2)$$ (11) which reduces to the value $`\sigma /\rho =2`$ quoted above in the limit $`a\to 0`$. The relation between the planar vortex density $`\rho `$ and the string tension arising in the simple random picture (11) is obeyed to a good approximation by the values measured at zero temperature in the present vortex model (with $`ϵ=0,c=0.24`$). At zero temperature, one has $`\sigma a^2=0.755`$ and $`\rho a^2=0.27`$, which fulfills (11) up to a 3% deviation; the measured quantities are thus consistent with a random distribution of vortex intersection points on any given space-time plane. The same behavior is found for the center projection vortices of Yang-Mills theory after subjecting them to a smoothing procedure ; the density of unsmoothed center projection vortices is significantly higher . This is consistent with the physical interpretation discussed in section III. Quantitatively, the model vortex density quoted above differs from the center projection vortex density in SU(2) Yang-Mills theory by a factor two. The necessity of percolation for an area law behavior of the Wilson loop furthermore explains the persistence of percolation in time slices in the deconfined phase. Otherwise, spatial Wilson loops, which can be embedded in a time slice, could not continue to obey an area law above the deconfinement temperature. On the other hand, spatial Wilson loops can also be embedded in space slices, in which percolation ceases in the deconfined phase. The reason one can nevertheless still understand the spatial string tension in the space slice picture lies in the different topological setup: Vortices winding in the Euclidean time direction, which dominate the deconfined phase, pierce spatial Wilson loops at isolated points despite being of limited extension. The pair correlation between the piercing points which would preclude an area law does not arise due to the possibility of closing a short vortex line via the periodic boundary conditions. To sum up, in the vortex picture there is a strong connection between confinement and the percolation properties of vortices. Percolation is a necessary condition for confinement; the deconfinement transition is induced by a percolation transition to a phase which lacks percolating vortex clusters (when an appropriate slice of the configurations is considered). 
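The quoted 3% consistency check is reproduced by the following short computation (ours), which also exhibits the dilute limit $`\sigma /\rho \to 2`$:

```python
# Numerical check (ours) of Eq. (11) against the zero-temperature
# values quoted in the text, sigma a^2 = 0.755 and rho a^2 = 0.27.
import math

sigma_a2, rho_a2 = 0.755, 0.27
measured  = sigma_a2 / rho_a2
predicted = -math.log(1 - 2 * rho_a2) / rho_a2           # Eq. (11)
print(measured, predicted, abs(1 - measured / predicted))  # ~3% deviation
# dilute limit: -ln(1 - 2x)/x -> 2 as x -> 0
print(-math.log(1 - 2e-6) / 1e-6)
```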
Also in this respect, the behavior of the vortex model presented here closely parallels the behavior found for the center projection vortices of Yang-Mills theory . This connection between percolation and confinement moreover is one of the points at which the duality between the (magnetic) vortex picture and electric flux models becomes apparent. In electric flux models, the deconfinement transition also takes the guise of a percolation transition; however, it is the deconfined phase in which electric flux percolates. While this clarifies how vortex configurations generate the confined and deconfined phases, the simple structure of the vortex model investigated in this work also allows an intuitive understanding of the underlying dynamics, i.e. why the vortices behave as they do. Qualitatively, the parameters entering the vortex action have the following effects. The action per plaquette area $`ϵ`$ (cf. eq. (4)) acts as a chemical potential for the mean density of vortices, whereas the curvature penalty $`c`$ (cf. eq. (5)) imposes an ultraviolet cutoff on the space-time fluctuations of the vortex surfaces. To a certain extent, the two effects can be traded off with one another; striking evidence of this is provided by the approximately invariant physics found on the $`T_c/\sqrt{\sigma _0}\approx 0.69`$ trajectory depicted in Fig. 2. This can be understood as follows: If one generates two vortex configurations at random, then in all but exceptional cases, the configuration with the higher mean vortex density will also contain the higher amount of total curvature. Therefore, both coupling constants $`ϵ`$ and $`c`$ simultaneously curtail both the mean vortex density and the curvature. If one raises either $`ϵ`$ or $`c`$, the mean density of vortices falls (and, along with it, the zero-temperature string tension). This gradual decrease of the string tension with the density does not persist indefinitely; instead, at some point, the vortex density becomes so low that the vortex structures lose their connectivity and form isolated clusters instead of percolating throughout space-time, cf. Fig. 6. This implies a pair correlation between vortex intersection points on a space-time plane which leads to an immediate loss of confinement, as already discussed further above. Therefore, despite there remaining a finite vortex density, the string tension vanishes; in this way, the confining and non-confining regions in Fig. 2 are generated. Turning to the case of finite temperatures, the deconfining dynamics in the random vortex model can be understood in terms of an entropy competition. As one shortens the (lattice) universe in the Euclidean time direction, a new class of vortex clusters of small extension (viewed in a space slice) becomes available, namely the winding vortices which have been verified above to dominate the deconfined phase. Thus, the entropy balance between the class of percolating vortex configurations and the class of limited extension, non-percolating configurations shifts towards the latter. This interpretation of the deconfining dynamics is almost a tautology in view of the simple formal structure of the vortex model presented here. Evaluating the partition function (cf. eq. 
(1)) of the model by construction amounts precisely to enumerating all possible closed surface configurations, given a certain mean vortex density (enforced by the coupling constants $`ϵ`$ and, indirectly, $`c`$) and an ultraviolet cutoff on the fluctuations of the vortex surfaces (embodied in the lattice spacing $`a`$ and reinforced by the curvature penalty $`c`$). No other dynamical information enters the model, and therefore it can be nothing but the entropy associated with the different classes of random surfaces which determines which phase is realized. ## VIII Outlook In the previous sections, a model of infrared Yang-Mills dynamics was presented which allows an intuitive understanding of both the confinement phenomenon and the transition to a deconfined phase at finite temperatures. These properties are closely tied to the percolation characteristics of the vortex surfaces on which the model is based. The behavior of the random surface ensembles generating the two phases closely parallels the behavior found for center projection vortices in the Yang-Mills ensemble . It is possible to choose the coupling constants of the vortex model such that long-range static quark potentials and spatial string tensions measured in Yang-Mills theory are quantitatively reproduced at all temperatures up to the cutoff of the model. The correct description of the spatial string tension in the deconfined phase should be noted in particular; this nontrivial feature at no point entered either the construction of the model or the choice of coupling constants. The vortex model presented here was formulated to describe Yang-Mills theory with an SU(2) color group. The case of SU(3) color realized in nature will in some respects exhibit qualitatively different behavior; the center vortex model appropriate for this gauge group is currently under investigation. Since there are two nontrivial center elements in the SU(3) group, namely the phases $`e^{\pm i2\pi /3}`$ (multiplied by the $`3\times 3`$ unit matrix), one must allow for two distinct vortex fluxes. The main qualitative difference in the topology of vortex configurations is the presence of vortex branchings; a vortex carrying one type of vortex flux can split into two vortices carrying the other type of vortex flux, as long as flux conservation is respected. On the other hand, to provide a comprehensive picture of the infrared sector of QCD, the vortex model must also be investigated with a view to describing the topological susceptibility of the Yang-Mills ensemble and the spontaneous breaking of chiral symmetry. The manner in which vortex configurations generate a nontrivial Pontryagin index was recently clarified in ; the relevant properties are encoded in the (oriented) self-intersection number of the vortex surfaces. In a companion paper , those results are implemented on a space-time lattice, allowing a measurement of the topological susceptibility. Also for this quantity, the vortex model generates a realistic value, compatible with lattice measurements in full SU(2) Yang-Mills theory. In the same vein, it is necessary to construct efficient ways to evaluate the spectrum of the Dirac operator in a vortex background. This will make it possible to calculate the associated chiral condensate , which represents an order parameter for the spontaneous breaking of chiral symmetry. 
As a last remark, it is tempting to speculate that the phase diagram of electroweak theory can similarly be understood in terms of the percolation characteristics of electroweak vortices, in particular as far as the confinement properties are concerned. In the Higgs phase, the coupling to the Higgs condensate may penalize the vortex density to such an extent that the theory enters the non-percolating, non-confining regime (the shaded region in Fig. 2), thus allowing electroweak gauge bosons to be seen as asymptotic states. ## IX Acknowledgements M.E. thanks B. Petersson and G. Thorleifsson for an illuminating discussion on random surface models.
# Mixedness and teleportation ## Abstract We show that on exceeding a certain degree of mixedness (as quantified by the von Neumann entropy), entangled states become useless for teleportation. By increasing the dimension of the entangled systems, this entropy threshold can be made arbitrarily close to maximal. This entropy is found to exceed the entropy threshold sufficient to ensure the failure of dense coding. Shared bipartite entanglement has found a host of interesting applications in quantum communications . It is natural to expect that the efficiency of these applications would go down with the decrease of shared entanglement. However, apart from the degree of entanglement of a shared state, there is another physical factor, namely the mixedness of the state, which causes deterioration of the efficiency of the applications. Though for given classes of states (such as the Werner states ), the entanglement of the state may decrease with the mixedness of the state, the two are not necessarily related concepts. For example, a mixed state can have more entanglement than a completely pure (zero mixedness) disentangled state. Thus we are interested in how the mixedness of a given state, taken as an independent physical criterion, affects the efficiency of the entanglement applications. In particular, we will focus on teleportation . A good measure of the mixedness of a state $`\rho `$ is its von Neumann entropy $`S(\rho )=-\text{Tr}(\rho \mathrm{log}\rho )`$. We will first show that when the entropy of a given $`N\times N`$ state exceeds $`\mathrm{log}N+(1-\frac{1}{N})\mathrm{log}(N+1)`$, the state becomes useless for teleportation. To this end we will first need to prove a short theorem. For this theorem we need a quantity called the singlet fraction introduced by the Horodeckis . The singlet fraction $`F(\rho )`$ of an $`N\times N`$ state $`\rho `$ is defined as $`\text{max}\langle \mathrm{\Psi }|\rho |\mathrm{\Psi }\rangle `$, where the maximum is taken over all the $`N\times N`$ maximally entangled states. We now proceed to our theorem. Theorem: If the entropy $`S(\rho )`$ of a state $`\rho `$ of an $`N\times N`$ system exceeds $`\mathrm{log}N+(1-\frac{1}{N})\mathrm{log}(N+1)`$, then the singlet fraction $`F(\rho )<\frac{1}{N}`$. Proof: Let, for a certain state $`\rho `$, $`F(\rho )\ge \frac{1}{N}`$. This means that there exists at least one $`N\times N`$ maximally entangled state $`|\mathrm{\Psi }_{\text{Max}}\rangle `$, for which $`\langle \mathrm{\Psi }_{\text{Max}}|\rho |\mathrm{\Psi }_{\text{Max}}\rangle \ge \frac{1}{N}`$. Let us write the state $`\rho `$ as $$\rho =\underset{i=1,j=1}{\overset{N^2}{\sum }}c_{ij}|i\rangle \langle j|,$$ (1) where $`\{|i\rangle \}`$ is a basis formed from $`|\mathrm{\Psi }_{\text{Max}}\rangle `$ and $`N^2-1`$ other maximally entangled states. From the definition of the singlet fraction it follows that the largest of the elements $`c_{ii}`$ (say this is $`c_{11}`$) has a value greater than or equal to $`\frac{1}{N}`$. Now, we know that the von Neumann entropy $`S(\rho )`$ of the state $`\rho `$ is always less than or equal to its Shannon entropy in any particular basis. This implies $$S(\rho )\le -\underset{i=1}{\overset{N^2}{\sum }}c_{ii}\mathrm{log}c_{ii}.$$ (2) Subject to the constraint $`c_{11}\ge \frac{1}{N}`$, the expression $`-\sum _{i=1}^{N^2}c_{ii}\mathrm{log}c_{ii}`$ attains its highest value when $`c_{11}=\frac{1}{N}`$ and the rest of the $`N^2-1`$ elements $`c_{ii}`$ are all equal. 
Thus $`-{\displaystyle \underset{i=1}{\overset{N^2}{\sum }}}c_{ii}\mathrm{log}c_{ii}`$ $`\le `$ $`-{\displaystyle \frac{1}{N}}\mathrm{log}{\displaystyle \frac{1}{N}}`$ (3) $``$ $`-(1-{\displaystyle \frac{1}{N}})\mathrm{log}\{{\displaystyle \frac{1}{N^2-1}}(1-{\displaystyle \frac{1}{N}})\}`$ (4) $`=`$ $`\mathrm{log}N+(1-{\displaystyle \frac{1}{N}})\mathrm{log}(N+1).`$ (5) From Eqs. (2) and (5) it follows that $$S(\rho )\le \mathrm{log}N+(1-\frac{1}{N})\mathrm{log}(N+1).$$ (6) Thus we have $$F(\rho )\ge \frac{1}{N}\Rightarrow S(\rho )\le \mathrm{log}N+(1-\frac{1}{N})\mathrm{log}(N+1).$$ (7) The implication in the above equation is equivalent to $$S(\rho )>\mathrm{log}N+(1-\frac{1}{N})\mathrm{log}(N+1)\Rightarrow F(\rho )<\frac{1}{N}.$$ (8) In Ref. the Horodeckis have shown that a singlet fraction $`F(\rho )<\frac{1}{N}`$ implies that one cannot do teleportation with $`\rho `$ with better than classical fidelity. Thus when the entropy of a state exceeds $`\mathrm{log}N+(1-\frac{1}{N})\mathrm{log}(N+1)`$, then by virtue of the theorem proved above, the state becomes useless for teleportation. Here, the phrase ”useless for teleportation” means ”useless for teleportation with better than classical fidelity”. Note that this value of entropy is a minimum threshold. At values of entropy arbitrarily close to this but less, a state $`\rho `$ is not forbidden to allow better than classical teleportation. For example, consider the generalized Werner state $`W_N(ϵ)=ϵ|\mathrm{\Psi }_N\rangle \langle \mathrm{\Psi }_N|+(1-ϵ)\rho _\text{M}`$ of $`N\times N`$ dimensions, where $`\rho _\text{M}`$ is the corresponding maximally mixed state. When $`ϵ`$ is infinitesimally greater than $`\frac{1}{N}`$ (which automatically ensures that the singlet fraction is $`>\frac{1}{N}`$), the state will allow teleportation better than classical, but its entropy will only be slightly below $`\mathrm{log}N+(1-\frac{1}{N})\mathrm{log}(N+1)`$. An interesting consequence of our result is the fact that as the dimension $`N`$ of the systems is increased, the entropy threshold becomes closer and closer to the maximal possible entropy of the state. In fact as $`N\to \mathrm{}`$, we have $`\mathrm{log}N+(1-\frac{1}{N})\mathrm{log}(N+1)\to 2\mathrm{log}N`$. Thus for systems of very large dimensions, even an entropy extremely close to the maximal entropy is not sufficient to ensure the failure of teleportation. It is now interesting to compare the entropy sufficient to ensure the failure of teleportation with the entropy sufficient to ensure the failure of another application, namely, dense coding . Dense coding with mixed states has been studied before , but here our target is to identify a degree of mixedness above which dense coding is bound to fail. Here again, failure of dense coding will mean its capacity being less than or equal to the classical communication capacity of $`\mathrm{log}N`$ bits per qu-$`N`$-bit. An upper bound to the capacity for dense coding with mixed signal states $`W_i`$ occurring with probabilities $`p_i`$ is given by the Holevo bound $`H=S(\sum _ip_iW_i)-\sum _ip_iS(W_i)`$. The first expression $`S(\sum _ip_iW_i)`$ can attain at most a value of $`2\mathrm{log}N`$. Thus when the entropy $`S(W_i)`$ of every signal state exceeds $`\mathrm{log}N`$, we have $`H<\mathrm{log}N`$. Therefore an entangled state $`\rho `$ will fail to be useful for dense coding when $`S(\rho )>\mathrm{log}N`$. This is also a minimum threshold. 
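The theorem and the Werner-state example can be illustrated numerically. The sketch below (ours; logarithms are taken base 2, an assumption since the text leaves the base unspecified) evaluates $`S(W_N(ϵ))`$ from the spectrum of the generalized Werner state for $`ϵ`$ just above $`\frac{1}{N}`$, where the singlet fraction exceeds $`\frac{1}{N}`$, and compares it with the entropy threshold:

```python
# A numeric illustration (ours; logs base 2) of the theorem and the
# Werner-state example: for epsilon just above 1/N the singlet fraction
# exceeds 1/N, so the entropy must stay below the threshold of Eq. (6).
import numpy as np

def werner_entropy(N, eps):
    lam_big  = eps + (1 - eps) / N**2        # eigenvalue on |Psi_N>
    lam_rest = (1 - eps) / N**2              # the other N^2 - 1 eigenvalues
    lams = np.array([lam_big] + [lam_rest] * (N**2 - 1))
    return float(-(lams * np.log2(lams)).sum())

for N in (2, 3, 10):
    threshold = np.log2(N) + (1 - 1 / N) * np.log2(N + 1)
    eps = 1 / N + 1e-6                       # singlet fraction F > 1/N
    print(N, round(werner_entropy(N, eps), 4), round(threshold, 4))
```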
As an example of this dense coding threshold, for the state $`W_N(ϵ)`$ we have $`H=2\mathrm{log}N-S(W_N(ϵ))`$ for the standard Bennett and Wiesner scheme of dense coding, and this can exceed $`\mathrm{log}N`$ for $`S(W_N(ϵ))`$ slightly less than $`\mathrm{log}N`$. This threshold of $`\mathrm{log}N`$ is evidently much smaller than the threshold $`\mathrm{log}N+(1-\frac{1}{N})\mathrm{log}(N+1)`$ sufficient to ensure the failure of teleportation. In this paper we have shown that there is a degree of mixedness after which a state becomes useless for teleportation. We have quantified this mixedness with the von Neumann entropy, but we could as well use the linear entropy $`S_L=1-\text{Tr}\rho ^2`$. In that case the threshold for failure of teleportation will be $`1-\frac{2}{N(N+1)}`$. The fact that on increasing the mixedness of a state, dense coding fails before teleportation indicates that teleportation is ”more robust” to external noise. Of course, our entropic criterion is only a sufficient condition for the failure of teleportation. However, entropic criteria can never be necessary for the failure of any entanglement application, because these applications fail even for pure disentangled states, whose entropy is zero. It would be easier to calculate the entropy of a state than to calculate its singlet fraction, as no maximization is involved in the former calculation. Hence, mathematically, our entropic criterion ($`S>\mathrm{log}N+(1-\frac{1}{N})\mathrm{log}(N+1)`$) is more convenient than the corresponding singlet fraction condition ($`F<\frac{1}{N}`$). How about the relation between mixedness and entanglement itself? We know that for a Bell diagonal state $`\rho `$ with only two non-zero eigenvalues, the distillable entanglement is equal to $`1-S(\rho )`$ . Such a state would not be distillable if $`S(\rho )\ge 1`$. Is there such an entropy threshold sufficient to ensure the failure of entanglement distillation for an arbitrary $`N\times N`$ state? We leave that as an interesting open question.
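For orientation, the various thresholds discussed above can be tabulated as functions of the dimension (again taking logarithms base 2); this short script (ours) shows the teleportation threshold approaching the maximal entropy $`2\mathrm{log}N`$ while the dense coding threshold stays at $`\mathrm{log}N`$:

```python
# Comparison (ours; logs base 2) of the sufficient-failure thresholds
# discussed in the text: dense coding fails above log N, teleportation
# above log N + (1 - 1/N) log(N+1), both relative to the maximal
# entropy 2 log N; the linear-entropy analogue is 1 - 2/(N(N+1)).
import numpy as np

for N in (2, 3, 10, 100):
    t_dc  = np.log2(N)
    t_tel = np.log2(N) + (1 - 1 / N) * np.log2(N + 1)
    s_max = 2 * np.log2(N)
    t_lin = 1 - 2 / (N * (N + 1))
    print(f"N={N:3d}  dense={t_dc:6.3f}  telep={t_tel:6.3f}"
          f"  S_max={s_max:6.3f}  linear={t_lin:.4f}")
```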
# Foamlike Patterns in Cosmic Structure ## 1 Foamlike Patterns in Cosmic Structure Probing cosmic large scale structure on the basis of X-ray observations puts particular emphasis on the densest regions within the global matter distribution, the rich clusters of galaxies. Understanding the relationship between the cluster distribution and the underlying matter distribution is therefore a key element in any assessment of cosmic structure on the basis of samples of X-ray selected clusters. By now, the foamlike arrangement of matter and galaxies on Megaparsec scales has become a well-established feature of the cosmic matter distribution, consisting of an assembly of anisotropic elements, filaments and walls of various sizes, surrounding large underdense void regions and sprinkled with wholly or partially virialized dense clumps of matter, varying in size from rich clusters of galaxies down to small groups of a few galaxies. It is one of the successes of gravitational instability theories of cosmic structure formation to find that this pattern of walls, filaments and voids appears to be the generic outcome of these scenarios. A major obstacle in quantifying this quintessential aspect of cosmic structure is the lack of a systematic insight into the dynamical and statistical aspects of cellular geometries, as well as the absence of a readily available and well-established mathematical machinery to evaluate and compare observations and simulations. Stochastic geometry – the branch of mathematics concerned with nontrivial geometrical concepts involving stochastic behaviour of one or more of their characteristics – may be expected to contribute significantly to further such understanding. ## 2 Voronoi Tessellations The canonical example of a stochastic geometrical model for a cellular division of space is that of Voronoi tessellations. This space-filling network of convex polyhedra offers a surprisingly realistic and versatile representation of the characteristics and features of the foamlike or cellular spatial arrangement of matter in the Universe. In short, it is defined through a spatial distribution of nuclei. Each nucleus corresponds to one Voronoi cell, which comprises that part of space closer to this nucleus than to any of the others. The walls and edges forming the polyhedrons’ surfaces are identified with the wall-like and filamentary superclusters in the galaxy distribution, the vertices with the massive clusters of galaxies, while the interior of the Voronoi cells corresponds to the large void regions barren of galaxies. The morphology of artificial galaxy distributions set up within this network bears a striking resemblance to that observed in the more complicated circumstances of the real Cosmos or that of the artificial reality of computer simulations of structure formation. Figure 1 shows a conglomerate of several neighbouring Voronoi cells, with one of the cells shown as a shaded surface and its neighbouring cells in wire-frame representation. The solid lines are the edges (filaments) in the network, the dots (coloured red) are the vertices of the Voronoi tessellation. In principle a Voronoi tessellation could be regarded as a mere geometric toy model, a heuristic geometrical description of a substantially more complicated reality. However, a detailed assessment of the role of voids in structure formation provides physical grounds for such tessellations representing reality, whether the observed or the simulated one, in a more subtle way. 
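The construction just described is easily realized numerically. The following minimal sketch (ours, not part of the original study) builds a Voronoi tessellation from Poisson-distributed nuclei with scipy and extracts the vertex positions that serve as cluster analogues; boundary effects are crudely handled by discarding vertices near the box edges rather than by imposing periodicity:

```python
# A minimal sketch (ours) of the construction described above: Poisson
# nuclei in a unit box define a Voronoi tessellation whose vertices
# stand in for cluster positions.
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
nuclei = rng.uniform(0.0, 1.0, size=(500, 3))    # Poisson-distributed nuclei
vor = Voronoi(nuclei)

# discard vertices near the boundary, where cells are distorted
inside = np.all((vor.vertices > 0.1) & (vor.vertices < 0.9), axis=1)
vertices = vor.vertices[inside]
print(len(nuclei), "nuclei ->", len(vertices), "interior Voronoi vertices")
```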
This physical grounding is certainly reinforced by the striking resemblance of such tessellations to the observed or simulated foamlike appearance of the galaxy and matter distribution. ## 3 Voronoi Vertex Clustering Even in the limited setting of Figure 1 it is evident that the vertex distribution is not a random Poisson distribution. The full spatial distribution of Voronoi vertices in the full cubic volume (right-hand frame of Fig. 1) clearly involves a substantial degree of clustering. This impression of strong clustering, on scales smaller than or of the order of the cellsize $`\lambda _\mathrm{C}`$, is most evidently confirmed by their two-point correlation function $`\xi \left(r\right)`$. Not only can we discern a clear positive signal but – surprising at the time of its finding on the basis of similar computer experiments – out to a distance of at least $`r\approx \frac{1}{4}\lambda _\mathrm{C}`$ the correlation function appears to be an almost perfect power-law, $$\xi \left(r\right)=\left(r_\mathrm{o}/r\right)^\gamma ,$$ (1) with a slope $`\gamma \approx 1.9`$–$`2.0`$. Its amplitude, traditionally expressed in terms of the “clustering length” $`r_\mathrm{o}`$, at which $`\xi \left(r_\mathrm{o}\right)=1`$, has a value $`r_\mathrm{o}\approx 0.29\lambda _\mathrm{C}`$. Beyond this range, the power-law behaviour breaks down and, following a gradual decline, the correlation function rapidly falls off to a zero value once distances are of the order of the cellsize. However, rather than a characteristic geometric scale, $`r_\mathrm{o}`$ is more a measure for the “compactness” of the spatial clustering, set mainly by the small-scale clustering. A more significant scale within the context of the geometry of the spatial patterns in the density distribution is the “correlation length” $`r_\mathrm{a}`$, the scale at which $`\xi \left(r_\mathrm{a}\right)=0`$. As a genuine scale of coherence, it is more relevant to the morphology of the nontrivial spatial structures we seek to study. Beyond $`r_\mathrm{a}`$ the distribution of Voronoi vertices is practically uniform. If we interpret the clustering length $`r_\mathrm{o}\approx 20h^{-1}\text{Mpc}`$, usually found for samples of rich clusters of galaxies, within the context of a Voronoi tessellation, it would imply a cellsize of $`\lambda _\mathrm{C}\approx 70h^{-1}\text{Mpc}`$. Although the two-point cluster-cluster correlation function reproduced by the Voronoi vertices fits very well to the function obtained from the observations, the large cell size $`\lambda _\mathrm{C}`$ may be a complication. It is surely well in excess of the $`25h^{-1}`$–$`35h^{-1}\text{Mpc}`$ size of the voids in the galaxy distribution. Moreover, also within the Voronoi concept itself it would conflict with the clustering of objects dwelling in the walls and filaments of the same tessellation framework. Clustering analysis of such configurations reveals that the two-point correlation function of galaxies confined to the walls – as well as that of those confined to the edges – also displays distinct power-law behaviour at sub-cellular scales. The involved clustering length, however, is different from that of the vertices in the same framework. For the wall galaxies it is but half the value of that of the vertices, $`r_{w,\mathrm{o}}\approx 0.14\lambda _\mathrm{C}`$. If $`r_{\mathrm{w},\mathrm{o}}`$ is identified with the galaxy-galaxy clustering length of $`r_{\mathrm{g},\mathrm{o}}\approx 5h^{-1}\text{Mpc}`$, this would yield a cellsize of $`35h^{-1}\text{Mpc}`$. 
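A pair-count estimate of the vertex correlation function, in the spirit of Eq. (1), can be obtained along the following lines (our own sketch; the simple $`DD/RR-1`$ estimator and the binning are illustrative choices):

```python
# A simple pair-count estimate (ours) of the two-point correlation
# function xi(r) = DD/RR - 1 for the interior Voronoi vertices of a
# Poisson tessellation, to be compared with the power law of Eq. (1).
import numpy as np
from scipy.spatial import Voronoi, cKDTree

rng = np.random.default_rng(0)
nuclei = rng.uniform(0, 1, size=(500, 3))
vor = Voronoi(nuclei)
pts = vor.vertices[np.all((vor.vertices > 0.1) & (vor.vertices < 0.9), axis=1)]

lam_c = (1.0 / len(nuclei)) ** (1 / 3)           # rough cell size lambda_C
edges = np.linspace(0.02, 0.5, 13) * lam_c       # radial bins, units of lam_c

def pair_counts(x):
    tree = cKDTree(x)
    return np.diff(tree.count_neighbors(tree, edges))  # counts per bin

dd = pair_counts(pts)
rand = rng.uniform(0.1, 0.9, size=(len(pts), 3))  # random points, same volume
rr = pair_counts(rand)
xi = dd / np.maximum(rr, 1) - 1
for r, x in zip(0.5 * (edges[1:] + edges[:-1]) / lam_c, xi):
    print(f"r = {r:.2f} lambda_C   xi = {x:+.2f}")
```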
The latter is suggestively similar to the size of the actually observed voids, which may be a tantalizing hint of a profound relationship between the clustering length $`r_\mathrm{o}`$ and the typical cellular scale of the cosmic foam network. Addressing this apparent inconsistency between vertex and wall clustering, we first observe that the vertex correlation function of Eq. (1) concerns the whole sample of vertices, irrespective of any possible selection effects based on one or more relevant physical aspects. In reality, it will be almost inevitable to invoke some sort of biasing through the defining criteria of the catalogue of clusters. Interpreting the Voronoi model in its quality of asymptotic approximation to the galaxy distribution, its vertices will automatically comprise a range of “masses”. Neglecting the details of the temporal evolution, we may assign each Voronoi vertex a “mass” equal to the total amount of matter that ultimately will flow towards that vertex. Applying the “Voronoi streaming model” as a reasonable description of the clustering process, it is straightforward if cumbersome to calculate the “mass” or “richness” $`M_\mathrm{V}`$ of each Voronoi vertex by purely geometric means. It concerns the volume of a non-convex polyhedron centered on the Voronoi vertex, with the related Voronoi nuclei as polyhedral vertices. These nuclei are the ones that supply the Voronoi vertex with inflowing matter. Evidently, the “mass” is larger for vertices on the surface of large Voronoi cells. The vertex samples in our study consist of all Voronoi vertices with a richness equal to or higher than a limiting value, the “sample richness” $`M_\mathrm{S}`$. Subsequently, we determined the correlation function characteristics for each of the vertex samples. Assessing their behaviour as a function of the average distance $`\lambda _\mathrm{V}`$ between the sample vertices, for samples encompassing from 10% up to 100% of the total number of Voronoi vertices and thus for $`\lambda _\mathrm{V}\approx 0.5`$–$`1.5\times \lambda _\mathrm{C}`$, revealed a remarkable and tantalizing scaling of the clustering phenomenon. All subsamples of Voronoi vertices have a two-point correlation function that out to a certain range retains a power-law behaviour almost exactly similar to that of the full sample of vertices. No significant difference in power-law slope between the various subsamples can be discerned; all have $`\gamma \approx 1.8`$–$`2.0`$. On the other hand, both the clustering length $`r_\mathrm{o}`$ and the correlation length $`r_\mathrm{a}`$ do display a systematic dependence on sample richness. Both $`r_\mathrm{o}`$ and $`r_\mathrm{a}`$ increase proportionally to the sample richness, and hence the average vertex distance $`\lambda _\mathrm{V}`$ in the samples! The more massive the vertices are, the more strongly they cluster. In other words, in the line of the random field “peak biasing” scheme, we here find a purely geometrically based biasing scheme. It is interesting to observe in this respect that the increase of $`r_{\mathrm{V},\mathrm{o}}`$ is perfectly linearly proportional to the mutual vertex distance $`\lambda _\mathrm{V}`$ in the sample (Fig. 2, left frame). This suggests that Voronoi vertex clustering is a perfect realization of the clustering scaling description proposed by Szalay & Schramm. 
It also accords with the increasing level of clustering that selections of more massive clusters appear to display in large-scope N-body simulations, although there are telling differences in detailed behaviour. For our purpose, even more significant is the uncovered scaling of the correlation length $`r_{\mathrm{V},\mathrm{a}}`$, similar in character to that of $`r_{\mathrm{V},\mathrm{o}}`$. The increase of $`r_\mathrm{a}`$ also turns out to be almost exactly linearly proportional to the average vertex distance. The repercussions are manifold. It means that the selected vertices still have a strong positive correlation at scales where the poorer samples do not possess any clustering. Moreover, samples with more massive clusters are expected to have a clustering that extends out further than that of their poorer brethren. This may be a significant observation in the light of the finding that samples of rich galaxy clusters seem to have a positive correlation at scales of tens of Megaparsec, quite in excess of scales with appreciable galaxy clustering. Finally, the implied constant ratio between clustering and correlation length, $`r_\mathrm{a}/r_\mathrm{o}\approx 1.86`$ (Fig. 2, right frame), implies a perfect self-similar scaling of the Voronoi vertex distribution. The complete correlation function $`\xi _s\left(r\right)`$ of each selected subsample $`s`$, and not just the part in the power-law range, is a self-similar mapping of an elementary function $`\xi _{\mathrm{el}}`$ scaled by means of a characteristic lengthscale parameter $`L_s`$. ## 4 Bias and Cosmic Geometry: Conclusions The above results form a tantalizing indication of the existence of self-similar clustering behaviour in spatial patterns with a cellular or foamlike morphology. It might hint at an intriguing and intimate relationship between the cosmic foamlike geometry and a variety of aspects of the spatial distribution of galaxies and clusters. One important implication is that, with clusters residing at a subset of nodes in the cosmic cellular framework, a configuration certainly reminiscent of the observed reality, it would explain why the level of clustering of clusters of galaxies becomes stronger as it concerns samples of more massive clusters. In addition, it would successfully reproduce positive clustering of clusters over scales substantially exceeding the characteristic scale of voids and other elements of the cosmic foam. At these Megaparsec scales there is a close kinship between the measured galaxy-galaxy two-point correlation function and the foamlike morphology of the galaxy distribution. In other words, the cosmic geometry apparently implies a “geometrical biasing” effect, qualitatively different from the more conventional “peak biasing” picture.
# Origin of the high piezoelectric response in PbZr1-xTixO3 ## Abstract High resolution x-ray powder diffraction measurements on poled PbZr<sub>1-x</sub>Ti<sub>x</sub>O<sub>3</sub> (PZT) ceramic samples close to the rhombohedral-tetragonal phase boundary (the so-called morphotropic phase boundary, MPB) have shown that for both rhombohedral and tetragonal compositions, the piezoelectric elongation of the unit cell does not occur along the polar directions but along those directions associated with the monoclinic distortion. This work provides the first direct evidence for the origin of the very high piezoelectricity in PZT. The ferroelectric PbZr<sub>1-x</sub>Ti<sub>x</sub>O<sub>3</sub> (PZT) system has been extensively studied because of its interesting physical properties close to the morphotropic phase boundary (MPB), the nearly vertical phase boundary between the tetragonal and rhombohedral regions of the phase diagram close to x= 0.50, where the material exhibits outstanding electromechanical properties . The existence of directional behavior for the dielectric and piezoelectric response functions in the PZT system has been predicted by Du et al. from a phenomenological approach . These authors showed that for rhombohedral compositions the piezoelectric response should be larger for crystals oriented along the ⟨001⟩ direction than for those oriented along the polar ⟨111⟩ direction. Experimental confirmation of this prediction was obtained for the related ferroelectric relaxor system PbZn<sub>1/3</sub>Nb<sub>2/3</sub>O<sub>3</sub>-PbTiO<sub>3</sub> (PZN-PT), which has a rhombohedral-to-tetragonal MPB similar to that of PZT, but it has not been possible to verify similar behavior in PZT due to the lack of single crystals. Furthermore, ab initio calculations based on the assumption of tetragonal symmetry, which have been successful for calculating the piezoelectric properties of pure PbTiO<sub>3</sub> , were unable to account for the much larger piezoelectric response in PZT compositions close to the MPB. Thus, it is clear that the current theoretical models lack some ingredient which is crucial to understanding the striking piezoelectric behavior of PZT. The stable monoclinic phase recently discovered in the ferroelectric PbZr<sub>1-x</sub>Ti<sub>x</sub>O<sub>3</sub> system (PZT) close to the MPB provides a new perspective from which to view the rhombohedral-to-tetragonal phase transformation in PZT and in other systems with similar phase boundaries, such as PMN-PT and PZN-PT . This phase plays a key role in explaining the high piezoelectric response in PZT and, very likely, in other systems with similar MPBs. The polar axis of this monoclinic phase is contained in the (110) plane along a direction between that of the tetragonal and rhombohedral polar axes . An investigation of several compositions around the MPB has suggested a modification of the PZT phase diagram as shown in Fig. 1 (top right) . A local order different from the long-range order in the rhombohedral and tetragonal phases has been proposed from a detailed structural data analysis. Based on this, a model has been constructed in which the monoclinic distortion (Fig. 1, bottom-left) can be viewed as either a condensation along one of the ⟨110⟩ directions of the local displacements present in the tetragonal phase (Fig. 1, bottom-right), or as a condensation of the local displacements along one of the ⟨100⟩ directions present in the rhombohedral phase (Fig. 1, top-left). 
The monoclinic structure, therefore, represents a bridge between these two phases and provides a microscopic picture of the MPB region . In the present work, experimental evidence is presented of an enhanced elongation along \[001\] for rhombohedral PZT and along \[101\] for tetragonal PZT ceramic disks, revealed by high-resolution x-ray diffraction measurements during and after the application of an electric field. This experiment was originally designed to address the question of whether poling in the MPB region would simply change the domain population in the ferroelectric material, or whether it would induce a permanent change in the unit cell. As shown below, from measurements of selected peaks in the diffraction patterns, a series of changes in the peak profiles from the differently oriented grains is revealed which provides key information about the PZT problem. PbZr<sub>1-x</sub>Ti<sub>x</sub>O<sub>3</sub> ceramic samples with x= 0.42, 0.45 and 0.48 were prepared by conventional solid-state reaction techniques using high purity (better than 99.9%) lead carbonate, zirconium oxide and titanium oxide as starting compounds. Powders were calcined at 900<sup>o</sup>C for six hours and recalcined as appropriate. After milling, sieving, and the addition of the binder, the pellets were formed by uniaxial cold pressing. After burnout of the binder, the pellets were sintered at 1250<sup>o</sup>C in a covered crucible for 2 hours, and furnace-cooled. During sintering, PbZrO<sub>3</sub> was used as a lead source in the crucible to minimize volatilization of lead. The sintered ceramic samples of about 1 cm diameter were ground to give parallel plates of 1 mm thickness, and polished with 1 $`\mu `$m diamond paste to a smooth surface finish. To eliminate strains caused by grinding and polishing, samples were annealed in air at 550<sup>o</sup>C for five hours and then slow-cooled. Silver electrodes were applied to both surfaces of the annealed ceramic samples and air-dried. Disks of all compositions were poled under a DC field of 20 kV/cm at 125<sup>o</sup>C for 10 minutes and then field-cooled to near room temperature. The electrodes were then removed chemically from the x= 0.42 and 0.48 samples. For the x= 0.45 sample (which had been ground to a smaller thickness, about 0.3 mm), the electrodes were retained, so that diffraction measurements could be carried out under an electric field. Several sets of high-resolution synchrotron x-ray powder diffraction measurements were made at beam line X7A at the Brookhaven National Synchrotron Light Source. A Ge(111) double-crystal monochromator was used in combination with a Ge(220) analyser, with a wavelength of about 0.8 Å in each case. In this configuration, the instrumental resolution, $`\mathrm{\Delta }2\theta `$, is an order-of-magnitude better than that of a conventional laboratory instrument (better than 0.01<sup>o</sup> in the $`2\theta `$ region 0-30<sup>o</sup>). The poled and unpoled pellets were mounted in symmetric reflection geometry and scans made over selected peaks in the low-angle region of the pattern. It should be noted that since lead is strongly absorbing, the penetration depth below the surface of the pellet at $`2\theta =20^o`$ is only about 2 $`\mu `$m. In the case of the x= 0.45 sample, the diffraction measurements were carried out with an electric field applied in situ via the silver electrodes. 
Powder diffraction measurements on a flat plate in symmetric reflection, in which both the incident and the diffracted wave vectors are at the same angle, $`\theta `$, with the sample plate, ensure that the scattering vectors are perpendicular to the sample surface. Thus only crystallites with their scattering vector parallel to the applied electric field are sampled. Scans over selected regions of the diffractogram, containing the (111), (200) and (220) pseudo-cubic reflections, are plotted in Fig. 2 for poled and unpoled PZT samples with the compositions x= 0.48 (top) and x= 0.42 (bottom), which are in the tetragonal and rhombohedral region of the phase diagram, respectively. The diffraction profiles of the poled and unpoled samples show very distinctive features. For the tetragonal composition (top), the (200) pseudo-cubic reflection (center) shows a large increase in the tetragonal (002)/(200) intensity ratio after poling due to the change in the domain population, which is also reflected in the increased (202)/(220) intensity ratio in the right side of the figure. In the rhombohedral composition with x= 0.42 (bottom of Fig. 2), the expected change in the domain population can be observed from the change of the intensity ratios of the rhombohedral (111) and (11$`\overline{1}`$) reflections (left side) and the (220) and (2$`\overline{2}`$0) reflections (right side). In addition to the intensity changes, the diffraction patterns of the poled samples show explicit changes in the peak positions with respect to the unpoled samples, corresponding to specific alterations in the unit cell dimensions. In the rhombohedral case (x= 0.42), the electric field produces no shift in the (111) peak position (see bottom-left plot in Fig. 2), indicating the absence of any elongation along the polar directions after the application of the field. In contrast, the poling does produce a notable shift of the (00l) reflections (center plot), which corresponds to a very significant change of d-spacing, with $`\mathrm{\Delta }d/d=0.32`$%, $`\mathrm{\Delta }d/d`$ being defined as $`(d_p-d_u)/d_u`$, where $`d_p`$ and $`d_u`$ are the d-spacings of the poled and unpoled samples, respectively. This provides experimental confirmation of the behavior predicted by Du et al. for rhombohedral PZT, as mentioned above. The induced change in the dimensions of the unit cell is also reflected as a smaller shift in the (202) reflection (right side plot), corresponding to a $`\mathrm{\Delta }d/d`$ along \[101\] of 0.12%. In the tetragonal case for x= 0.48 (top of Fig. 2), there is no peak shift observed along the polar \[00l\] direction (center plot), but the (202) and the (111) reflections exhibit striking shifts (right and left sides, respectively). Furthermore, this composition, which at room temperature is just at the monoclinic-tetragonal phase boundary, shows, after poling, a clear tendency towards monoclinic symmetry, in that the (111) and (202) reflections, already noticeably broadened in the unpoled sample and indicative of an incipient monoclinicity, are split after poling. These data clearly demonstrate, therefore, that whereas the changes induced in the unit cell after the application of an electric field do not increase either the rhombohedral or the tetragonal strains, a definite elongation is induced along those directions associated with the monoclinic distortion. 
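The conversion from measured peak shifts to the strains quoted here follows directly from Bragg's law. The sketch below (ours) illustrates the arithmetic; the peak positions are hypothetical numbers chosen to reproduce the quoted $`\mathrm{\Delta }d/d=0.32`$%, and the final lines anticipate the effective $`d_{33}`$ derived from the in-situ data discussed below:

```python
# Back-of-the-envelope sketch (ours) of how the quoted strains follow
# from peak shifts: Bragg's law converts a 2-theta shift into Delta-d/d,
# and dividing the field-induced strain by E gives an effective d33.
# The peak positions below are illustrative, not the raw data.
import math

wavelength = 0.8  # angstrom, as quoted for beam line X7A

def d_spacing(two_theta_deg):
    """Bragg's law: lambda = 2 d sin(theta)."""
    return wavelength / (2 * math.sin(math.radians(two_theta_deg / 2)))

# hypothetical (00l) peak positions before/after poling, chosen so that
# Delta d/d reproduces the 0.32% quoted for x = 0.42
d_u = d_spacing(22.40)
d_p = d_spacing(22.40 - 0.072)
print(f"Delta d/d = {(d_p - d_u) / d_u:.4%}")

# piezoelectric coefficient from the in-situ strain of the x = 0.45 sample
strain, E = 0.0030, 59e3 / 1e-2              # Delta d/d = 0.30% at 59 kV/cm
print(f"d33 ~ {strain / E * 1e12:.0f} pm/V")  # ~500 pm/V
```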
In addition to the measurements on the poled and unpoled samples, diffraction measurements were performed in situ on the rhombohedral PZT sample with x= 0.45 as a function of applied electric field at room temperature. The results are shown in Fig.3, where the (111), (200) and (220) pseudo-cubic reflections are plotted with no field applied (top) and with an applied field of 59 kV/cm (bottom). The top part of the figure also shows data taken after removal of the field. As can be seen, measurements with the field applied show no shift along the polar direction but, in contrast, there is a substantial shift along \[001\], similar to that for the poled sample with x= 0.42 shown in Fig. 2, showing that the unit-cell elongation retained after the poling process has the same origin as the piezoelectric strain induced by the in-situ application of a field. Comparison of the two sets of data for x= 0.45 before and after the application of the field shows that the poling effect of the electric field at room temperature is partially retained after the field is removed, although the poling is not as pronounced as for the x= 0.42 sample in Fig. 2.

A quantification of the induced microstrain along the different directions has been made by measuring the peak shifts under fields of 31 and 59 kV/cm. In Fig. 4, $`\mathrm{\Delta }d/d`$ is plotted versus the applied field, $`E`$, for the (200) and (111) reflections. These data show an approximately linear increase in $`\mathrm{\Delta }d/d`$ for (200) with field, with $`\mathrm{\Delta }d/d=0.30\%`$ at 59 kV/cm, corresponding to a piezoelectric coefficient $`d_{33}`$ of about 500 pm/V, but essentially no change in the d-spacing for (111). It is interesting to compare in Fig.4 the results of dilatometric measurements of the macroscopic linear elongation ($`\mathrm{\Delta }l/l`$) on the same pellet, which, unlike the diffraction data, must also reflect the effects of domain reorientation. At higher fields, this contribution diminishes and one could expect the $`\mathrm{\Delta }l/l`$ vs. E curve to fall between those for the \[001\]- and \[111\]-oriented grains, typical of the strain behaviour of polycrystalline ceramics. Although such a trend is seen above 30 kV/cm, it is intriguing to note that below this value the macroscopic behavior is essentially the same as the microscopic behavior for the (200) reflection.

It is of interest to relate our observations to the more conventional description of piezoelectric effects in ceramics, in which the dielectric displacements would be attributed to tilts of the polar axis. What we actually observe in the diffraction experiment is an intrinsic monoclinic deformation of the unit cell as a consequence of the rotation of the polar axis in the monoclinic plane. However, large atomic displacements can only occur in compositions close to the MPB, and it is this feature which accounts for the sharp peak in the piezoelectric d constants for compositions close to 52:48 Zr/Ti. We therefore conclude that the piezoelectric strain in PZT close to the morphotropic phase boundary, which produces such striking electromechanical properties, is not along the polar directions but along those directions associated with the monoclinic distortion. This work supports a model based on the existence of local monoclinic shifts superimposed on the rhombohedral and tetragonal displacements in PZT which has been proposed from a detailed structural analysis of tetragonal and rhombohedral PZT samples.
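For the in-situ data of Fig. 4, the piezoelectric coefficient follows from the slope of $`\mathrm{\Delta }d/d`$ versus $`E`$. A hedged sketch of this extraction is given below; only the 0.30% strain at 59 kV/cm is quoted in the text, so the intermediate value at 31 kV/cm is an assumed placeholder consistent with the approximately linear behaviour described above.

```python
import numpy as np

# (E [kV/cm], Delta d/d) for the (200) reflection. Only the 0.30% value
# at 59 kV/cm is quoted above; the 31 kV/cm entry is an assumed
# placeholder consistent with the approximately linear behaviour.
field_kv_cm = np.array([0.0, 31.0, 59.0])
strain = np.array([0.0, 0.0016, 0.0030])

field_v_m = field_kv_cm * 1e5              # 1 kV/cm = 1e5 V/m
d33 = np.polyfit(field_v_m, strain, 1)[0]  # slope of the linear fit
print(f"d33 ~ {d33 * 1e12:.0f} pm/V")      # of order 500 pm/V
```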
Very recent first-principles calculations by L. Bellaiche et al. have been able not only to reproduce the monoclinic phase but also to explain the high piezoelectric coefficients by taking into account rotations in the monoclinic plane. As demonstrated above, these high-resolution powder data provide key information for understanding the piezoelectric effect in PZT. In particular, they allow an accurate determination of the elongation of the unit cell along the direction of the electric field, although they give no information about the dimensional changes occurring along the perpendicular directions, which would give a more complete characterization of the new structure induced by the electric field. It is interesting to note that in the case of the related ferroelectric system PZN-PT, the availability of single crystals has allowed Durbin et al. to carry out diffraction experiments along similar lines at a laboratory x-ray source. Synchrotron x-ray experiments by the present authors are currently being undertaken on PZN-PT single crystals with Ti contents of 4.5 and 8% under an electric field, and also on other ceramic PZT samples. Preliminary results on samples with x= 0.46 and 0.47, which are monoclinic at room temperature, have already been obtained. In these cases, the changes of the powder profiles induced by poling are so drastic that further work is needed in order to achieve a proper interpretation.

We thank L. Bellaiche, A. M. Glazer, J.A. Gonzalo and K. Uchino for their stimulating discussions, B. Jones and E. Alberta for assisting in the sample preparation, and A. L. Langhorn for his invaluable technical support. Financial support by the U.S. Department of Energy under contract No. DE-AC02-98CH10886, and by ONR under project MURI (N00014-96-1-1173) is also acknowledged.
no-problem/9912/astro-ph9912486.html
ar5iv
text
# Supernova rates in Abell galaxy clusters and implications for metallicity

## The project and main scientific objectives

We have used the Wise Observatory 1m telescope to monitor monthly a sample of 163 rich (richness class $`R>0`$) Abell galaxy clusters with medium redshift ($`0.06<z<0.2`$), northern declination $`(\delta >0)`$ and small angular size $`(r<20^{\prime })`$. We have also observed “blank” flanking fields for a sub-sample of the clusters. These will be used to study the cluster vs. field SN rates, and to estimate the luminosity contributed by the cluster in each field. We have used unfiltered (“clear”) observations to achieve maximum sensitivity, and have a characteristic limiting magnitude of $`R\approx 22`$. Variable objects are discovered by image subtraction. New subtraction methods have been developed for use in this project (see fig. 1).

Our main scientific goal is to derive from our data the SN rate as a function of various parameters, such as host galaxy type and cluster environment: position within the cluster, cluster richness, and cluster vs. field. SN rates can then be used to determine the current and past star formation rates in galaxy clusters \[mdp\]. Our measured SN rates can replace the assumed rates used so far in studies of metal abundances in the intracluster gas. We also intend to study the rate, distribution and properties of intergalactic SNe in galaxy clusters. A candidate intergalactic SN we have discovered, SN 1998fc (see fig. 3), will be discussed below. Our search is also sensitive to other optical transients, such as AGNs in the clusters and behind them, flares from the tidal disruption of stars by dormant massive black holes in galactic nuclei, and GRB afterglows. We may also detect the gravitational lensing effect of the clusters on background SNe \[kb\].

## First results

Our program has already discovered 11 spectroscopically confirmed SNe at $`z=0.1`$ to $`0.24`$ (see table 1 and fig. 2) and several unconfirmed SNe. We have also detected variable stellar objects (some of which are AGN) and dozens of asteroids.

## Intergalactic SNe and enhanced central metal abundances in clusters

The existence of a diffuse population of intergalactic stars is supported by a growing body of observational evidence, such as intergalactic planetary nebulae in the Fornax and Virgo clusters \[thw, arn, ciar, fre\] and intergalactic red giant stars in Virgo \[ftv\]. Recent imaging of the Coma cluster reveals low surface brightness emission from a diffuse population of stars \[gw\], the origin of which is attributed to galaxy disruption \[dmh, mor\]. Since type Ia SNe are known to occur in all environments, there is no obvious reason to assume that such events do not happen within the intergalactic stellar population. SN 1998fc may be such an event. The intergalactic stellar population is centrally distributed \[dub\]. Therefore, metals produced by intergalactic Ia SNe can provide an elegant explanation for the central enhancement of metal abundances with type Ia characteristics, recently detected in galaxy clusters \[dw\].

### SN 1998fc - An intergalactic SN candidate in Abell 403

SN 1998fc was detected near the cD galaxy of Abell 403 \[gm1\], and was spectroscopically confirmed as a type Ia SN at the cluster redshift \[gm2, flr\]. The most likely host for this SN, the cD galaxy, is very distant - at least 78 kpc away. This may be an intergalactic SN whose progenitor star was a member of the diffuse intergalactic stellar population. Alternatively, the host may be a faint dwarf galaxy.
The distribution of “hostless” SNe is expected to be different if the progenitors are members of the intergalactic population, centered near the cluster core \[dub\], or members of dwarf galaxies, which are more abundant in the outskirts of galaxy clusters \[phi\]. Therefore, the nature of such objects could be resolved with larger number statistics. In any event, the number of SNe with undetected hosts relative to the total number of cluster SNe can put an upper limit on the intergalactic stellar fraction.
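Although the derived rates themselves are deferred to a later analysis, the classical control-time estimator on which such determinations are usually based is simple to state. The sketch below is a minimal illustration, not necessarily the estimator finally adopted for this survey, and all numerical inputs (luminosities, control times) are placeholders; only the 11 confirmed SNe and 163 clusters are taken from the text.

```python
def snu_rate(n_sn, cluster_data):
    """Control-time rate estimate in SNu
    (SNe per 1e10 solar blue luminosities per 100 yr)."""
    exposure = sum(lum / 1e10 * ct / 100.0 for lum, ct in cluster_data)
    return n_sn / exposure

# Placeholder inputs only: 163 clusters with an assumed 2e12 L_sun of
# monitored blue luminosity and 1 yr of effective control time each,
# together with the 11 confirmed SNe mentioned above.
clusters = [(2e12, 1.0)] * 163
print(f"{snu_rate(11, clusters):.3f} SNu")
```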
no-problem/9912/nucl-ex9912006.html
ar5iv
text
# Measurement of $`\mathrm{pd}\to {}^{3}\mathrm{He}\,\eta `$ in the $`S_{11}`$ Resonance

## Abstract

We have measured the reaction $`\mathrm{pd}\to {}^{3}\mathrm{He}\,\eta `$ at a proton beam energy of 980 MeV, which is 88.5 MeV above threshold, using the new “germanium wall” detector system. A missing-mass resolution of the detector system of 2.6$`\%`$ was achieved. The angular distribution of the meson is forward peaked. We found a total cross section of (573 $`\pm `$ 83 (stat.) $`\pm `$ 69 (syst.)) nb. The excitation function for the present reaction is described by a Breit-Wigner form with parameters from photoproduction.

The production of $`\eta `$-mesons is interesting because it opens the possibility of studying the interaction between the lightest isoscalar particle and the nuclear environment. Haider and Liu were the first to show that even bound $`\eta `$-nucleus systems, i.e. $`\eta `$-mesic nuclei, could be possible. Based on the results of Bhalerao and Liu, they found an attractive $`\eta `$-N interaction which in their calculations leads to bound states for nuclei with mass number A $`\ge `$ 10. Rakityanski et al. even relaxed this condition to A $`\ge `$ 2. The widths of such states were predicted to be narrow enough to be observable for nuclei with A $`\ge `$ 4. Wycech et al. also predicted the formation of mesic nuclei in $`\mathrm{dd}\to {}^{4}\mathrm{He}\,\eta `$, but not in $`\mathrm{pd}\to {}^{3}\mathrm{He}\,\eta `$. In contrast, Abaev and Nefkens, as well as Wilkin, showed that the formation of quasi-bound $`\eta `$-$`{}_{}{}^{3}\mathrm{He}`$ states in the reaction $`\mathrm{pd}\to {}^{3}\mathrm{He}\,\eta `$ should indeed be possible. In addition, the reaction $`\mathrm{pd}\to {}^{3}\mathrm{He}\,\eta `$ is of interest due to its surprisingly large cross section close to threshold, making this reaction a prime candidate for the source of $`\eta `$-mesons in tagged $`\eta `$-facilities.

A detector system called the “germanium wall” was built at the COSY facility in Jülich (see Figure 1). In its complete setup, the germanium wall is a stack of four position-sensitive high-purity germanium detectors having a conical acceptance with an opening angle of $`\pm 287.5`$ mrad. In the centre of each detector is a hole with a size of $`\pm 28`$ mrad allowing the primary beam to pass through. Two types of detectors are used: one 1.3 mm thin diode (“quirl detector”) for determining the reaction vertices through its good position resolution, given by the crossing of two counterrotating spirals, and three 17 mm thick diodes for measuring the particle energies (“energy detectors”). For further details see Ref. . The setup used for the present measurement consisted of one quirl and two energy detectors (Quirl, E1 and E3, see Figure 1). First measurements with the germanium wall showed the good missing-mass resolution of the system.

The reaction $`\mathrm{pd}\to {}^{3}\mathrm{He}\,\eta `$ was studied at a proton beam energy of 980 MeV (88.5 MeV above threshold), leading to almost 4$`\pi `$ acceptance of the detector system for the produced $`{}_{}{}^{3}\mathrm{He}`$ particles. We performed two runs at different times. The target was a cell filled with liquid deuterium with 6 mm diameter and thicknesses of 2.4$`\pm `$0.2 mm (run A) and 4.4$`\pm `$0.2 mm (run B), respectively. The COSY extracted proton beam was focussed onto the target, yielding a spot with a radius $`\sigma =0.5`$ mm and a divergence of 6 mrad.
These parameters, together with the short distance between target and detector, yield a total angular uncertainty of 16 mrad, where the individual contributions are linearly added. This uncertainty is much larger than that resulting from the position resolution of the detector, which is of the order of 2 mrad. The beam had a momentum spread of $`\mathrm{\Delta }p/p=8\times 10^{-4}`$. The energy and direction of the emerging $`{}_{}{}^{3}\mathrm{He}`$ particles were measured by the germanium wall. Figure 2 shows a $`\mathrm{\Delta }`$E–E spectrum demonstrating the capability of the detector system for particle identification. Through the measurement of the energy and emission direction of the $`{}_{}{}^{3}\mathrm{He}`$ particles, the missing mass was calculated. A missing-mass spectrum for run B is shown in Figure 3. The $`\eta `$ peak is clearly visible, with a resolution of $`\sigma =(6.1\pm 0.5)\,MeV/c^2`$. Background is mainly caused by multi-pion production (e.g. $`\mathrm{pd}\to {}^{3}\mathrm{He}\,\pi ^+\pi ^-`$, $`\mathrm{pd}\to {}^{3}\mathrm{He}\,\pi ^0\pi ^0`$, etc.). Low-energy $`{}_{}{}^{3}\mathrm{He}`$ background events at small angles were not detected because of the minimum opening of the detector system. These events correspond to large relative energies between the pions and thus to large missing-mass values. Therefore, the spectrum is truncated at 600 MeV/$`\mathrm{c}^2`$. For run A a missing-mass resolution of $`\sigma =(8.2\pm 0.6)\,MeV/c^2`$ was obtained.

The whole body of data was divided into 5 and 6 angular bins for run A and run B, respectively. For each bin a Gaussian together with a background function was fitted to the corresponding missing-mass spectrum above 450 $`MeV/c^2`$. For run B, two different shapes for the background were assumed: a polynomial of third order and a function $`BG=\sqrt{a_0\left[1-\left(\frac{mm-a_1}{a_1}\right)^2\right]}\frac{mm^{a_2}}{a_3}`$, with $`mm`$ the missing mass and $`a_i`$ parameters to be fitted. Both functions lead to the same results. For run A, the background at large missing masses is not as clearly separated from the $`\eta `$ peak as it is in run B. Therefore, several functions were tested. Polynomials were fitted to the range below the $`\eta `$ peak and to only the first point above the peak. Alternatively, a step-like function (cumulative Lorentzian) was fitted to all data. The $`\eta `$ peak was always assumed to be a Gaussian. Finally, the number of $`\mathrm{pd}\to {}^{3}\mathrm{He}\,\eta `$ events was obtained by integrating the Gaussians, and weighted means were deduced. Further details of the data analysis procedure are given elsewhere.

Due to beam halo during the experiments, the intensity of the beam had to be reduced to a level of $`10^5`$ protons per second. Thus pile-up and detector damage were avoided, but the event statistics were strongly reduced. The measured angular distribution is shown in Figure 4. The error bars shown represent the statistical errors only. In addition, there are systematic uncertainties: target thickness $`10\%`$ and $`5\%`$ in the two runs, respectively, luminosity calibration $`7\%`$, and corrections due to trigger and detector inefficiencies caused by nuclear interactions in germanium $`5\%`$. The total systematic error of $`13\%`$ and $`10\%`$ (when added in quadrature) for the two runs, respectively, is smaller than the statistical error. The efficiency of the data analysis ($`80\%`$) was studied by Monte Carlo simulations. The simulated detector response was found to be in excellent agreement with the experiment.
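The missing-mass reconstruction used above is plain four-momentum conservation with the deuteron target at rest. A minimal sketch follows (masses in MeV/c²; the $`\eta `$ mass value is the standard PDG one, and interpreting the quoted 88.5 MeV as the beam kinetic energy above the threshold beam energy is made explicit by the check at the end):

```python
import numpy as np

M_P, M_D, M_HE3, M_ETA = 938.272, 1875.613, 2808.391, 547.45  # MeV/c^2

def missing_mass(t_p, t_he3, theta_he3):
    """Missing mass [MeV/c^2] in p d -> 3He X for a beam kinetic energy
    t_p [MeV], 3He kinetic energy t_he3 [MeV], 3He lab angle [rad]."""
    e_p = t_p + M_P
    p_p = np.sqrt(e_p**2 - M_P**2)
    e_he = t_he3 + M_HE3
    p_he = np.sqrt(e_he**2 - M_HE3**2)
    e_x = e_p + M_D - e_he                    # deuteron target at rest
    p_x2 = p_p**2 + p_he**2 - 2.0 * p_p * p_he * np.cos(theta_he3)
    return np.sqrt(e_x**2 - p_x2)

def t_beam_threshold(m_x):
    """Threshold beam kinetic energy for p d -> 3He X."""
    return ((M_HE3 + m_x)**2 - M_P**2 - M_D**2) / (2.0 * M_D) - M_P

t_thr = t_beam_threshold(M_ETA)
# prints ~891.4 MeV and ~88.6 MeV, close to the quoted 88.5 MeV
print(f"threshold: {t_thr:.1f} MeV, 980 MeV is {980 - t_thr:.1f} above")
```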
Before and after each run the detector was checked with radioactive sources. No significant deviations in the amplifier gains were found, as expected, since the electronic circuits were kept at a constant temperature. Since the two runs were performed under different experimental conditions with respect to beam halo, target thickness and distance between target and detector, different systematic errors lead to an enhancement of run A compared to run B which is slightly above the statistical error given above. The halo was 2.2 times more intense during run A than in run B, thus leading to a larger combinatorial background than in run B and hence to larger error bars of the integrated Gaussians for the $`\eta `$ peak.

The angular distribution is forward peaked. This is in contrast to the results of Mayer et al., who reported almost isotropic distributions close to threshold. The cross sections of Banaigs et al., measured through $`\mathrm{dp}\to {}^{3}\mathrm{He}\,\eta `$ at a slightly higher excitation energy (corresponding to an equivalent proton kinetic energy of 1047 MeV), agree with our data. When we fit Legendre polynomials to the present data, an unphysical negative value at $`\mathrm{cos}(\theta )=-1`$ is obtained. In order to overcome this deficiency we have added one data point at $`\mathrm{cos}(\theta )=-1`$ from Ref. , where an excitation function for this angle was measured. This point is also shown in Fig. 4. In order to take into account the fact that this point was measured at a slightly different energy (by $`0.5\%`$), we have doubled its statistical error. Another data point in the literature, taken at a somewhat higher beam energy (see Ref. ), is also shown in Fig. 4. Again, its error bar was doubled. A Legendre polynomial fit to all points of the angular distribution, i.e. the present ones from both runs as well as those from the references above, weighted by their total errors, yielded the parameters $`A_0=45.6\pm 5.9`$, $`A_1=47.3\pm 10.9`$, and $`A_2=5.8\pm 8.5`$, all given in nb/sr, with a $`\chi ^2/n_{\mathrm{free}}=0.3`$, from which a total cross section of (573 $`\pm `$ 74) nb follows. In addition to this statistical error, the systematic error is assumed to be 69 nb. The result is insensitive to the added point because the differential cross section is small for backward-angle emission. Including higher degrees in the fit procedure does not improve the fit; a lower degree gives an unphysical negative value at $`\mathrm{cos}(\theta )=-1`$.

Kingler has extended a model originally developed for pion production to include higher resonances. The original model was limited to pion exchange, $`\mathrm{\Delta }`$ resonance excitation, and non-resonant contributions. The extension treats other meson exchanges as well as nucleon resonances higher than the $`\mathrm{\Delta }`$. The vertex functions for the different baryon-baryon-meson couplings were calculated in a simple quark model and are momentum dependent. For the present reaction the largest contribution to the cross section comes from the $`NN\rho `$ and $`NN\omega `$ interactions, while the contribution due to the $`NN^{*}(1535)\pi `$ interaction is one order of magnitude smaller. The contributions of other resonances like $`N^{*}(1440)`$, $`N^{*}(1650)`$, and $`N^{*}(1710)`$ are even smaller. However, the form factor is calculated only for harmonic oscillator wave functions, with the frequency being a free parameter varied to fit the experimental data. The model predictions are shown in Fig. 4 as the dashed curve.
Obviously, the calculation shows structures not observed in the present data. The extracted total cross section, together with earlier data from Mayer et al., is shown in Figure 5. From the angular distributions given by Banaigs et al., Loireleux and Kirchner we extracted total cross sections by fitting Legendre polynomials. The results are also shown in Figure 5. Kirchner claimed that the data from Ref. suffer from serious electronic problems; no further details are given. The data from Mayer et al. indicate a strongly rising cross section close to threshold. Also shown in the figure is a normalized calculation within a two-step model developed by Kilian and Nann. Within this model a pion is produced in a first step through $`pp\to d\pi ^+`$. In a second step, this pion produces the $`\eta `$ in an interaction with the neutron. A kinematical velocity matching yields the maximum close to threshold. The present data do not support this model as the dominant reaction mechanism.

The energy region 900–1100 MeV corresponds to the centre of the $`N^{*}`$ $`S_{11}`$ resonance ($`\mathrm{\Gamma }\approx `$ 200 MeV), which is known to couple strongly to the $`\eta N`$ channel. One may therefore attempt to describe the cross section by an intermediate $`N^{*}`$(1535) resonance excitation. The cross section is calculated as
$$\sigma (E)=\frac{p_\eta }{p_p}|M(E)|^2$$ (1)
with $`E`$ the excitation energy and $`M`$ the matrix element. All momenta $`p`$ are in the centre-of-mass system. The matrix element is calculated as in photoproduction on the proton:
$$|M(E)|^2=\frac{A\mathrm{\Gamma }_R^2}{(E-m_R)^2+\mathrm{\Gamma }(E)^2}$$ (2)
with
$$\mathrm{\Gamma }(E)=\mathrm{\Gamma }_R\left(b_\eta \frac{p_\eta }{p_{\eta ,R}}+b_\pi \frac{p_\pi }{p_{\pi ,R}}+b_{\pi \pi }\right).$$ (3)
Similar to Ref. , we applied a width at the resonance of $`\mathrm{\Gamma }_R`$=200 MeV and a Breit-Wigner mass of $`m_R`$=1540 MeV/c<sup>2</sup>. The branching ratios were set to $`b_\eta `$=0.47 for the $`\eta `$ decay, $`b_\pi `$=0.48 for the pion decay and $`b_{\pi \pi }`$=0.05 for the two-pion decay. The momenta at the resonance position are indicated by the index $`R`$. The only free parameter is the strength $`A`$, taken to be 241 nb in order to fit the present data point. The calculation is shown in Figure 5 as a solid curve. The trend of the data is reproduced, which may be taken as an indication that production of the $`N^{*}`$(1535) resonance is the dominant reaction mechanism and that the product of kinematics and form factor changes only very little over the present energy range. The overall shape of the calculation slightly underestimates the data of Mayer et al. close to threshold. For a more detailed investigation of fine structure, additional data in this region are needed. An enhancement close to threshold was also seen in $`\eta `$ production in NN interactions and was attributed to a strong final-state interaction. The agreement between the excitation functions for the $`\mathrm{pd}\to {}^{3}\mathrm{He}\,\eta `$ and $`\gamma p\to p\eta `$ reactions excludes strong FSI between the nucleus and the $`\eta `$ except for the near-threshold region.

We gratefully acknowledge the COSY crew for their efforts providing us with a good beam. We are thankful for support by BMBF Germany (06 MS 882), Internationales Büro des BMBF, SCSR Poland (2P302 025 and 2P03B 88 08), NATO Scientific Affairs, and COSY Jülich.
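As a closing numerical illustration of Eqs. (1)-(3), the sketch below evaluates the Breit-Wigner excitation function. Two ingredients are assumptions of this sketch rather than statements of the analysis above: the mapping from beam energy to the resonance invariant mass (an inert spectator nucleon pair) and the normalisation of the strength to the present data point.

```python
import numpy as np

M_P, M_D, M_HE3 = 938.272, 1875.613, 2808.391         # MeV
M_N, M_PI, M_ETA = 938.92, 139.57, 547.45
M_R, GAMMA_R = 1540.0, 200.0                          # MeV
B_ETA, B_PI, B_2PI = 0.47, 0.48, 0.05

def p_cm(w, m1, m2):
    """Two-body c.m. momentum at invariant mass w (0 below threshold)."""
    val = (w * w - (m1 + m2)**2) * (w * w - (m1 - m2)**2)
    return np.sqrt(max(val, 0.0)) / (2.0 * w)

P_ETA_R, P_PI_R = p_cm(M_R, M_N, M_ETA), p_cm(M_R, M_N, M_PI)

def gamma_e(w):                                        # Eq. (3)
    return GAMMA_R * (B_ETA * p_cm(w, M_N, M_ETA) / P_ETA_R
                      + B_PI * p_cm(w, M_N, M_PI) / P_PI_R + B_2PI)

def shape(t_p):                                        # Eqs. (1)-(2), A = 1
    w_pd = np.sqrt(M_P**2 + M_D**2 + 2.0 * M_D * (t_p + M_P))
    flux = p_cm(w_pd, M_HE3, M_ETA) / p_cm(w_pd, M_P, M_D)
    w_res = w_pd - (M_HE3 - M_N)     # assumed inert-spectator mapping
    return flux * GAMMA_R**2 / ((w_res - M_R)**2 + gamma_e(w_res)**2)

A = 573.0 / shape(980.0)             # strength fixed by the present point
for t_p in (920.0, 980.0, 1050.0):
    print(t_p, A * shape(t_p))       # excitation function in nb
```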
no-problem/9912/cond-mat9912133.html
ar5iv
text
# Impurity Effects on the Flux Phase Quantum Critical Point Scenario

## Abstract

Impurity substitution of Zn in La-214 and (Y,Ca)-123 high-$`T_c`$ superconductors suppresses $`T_c`$ but does not appreciably affect the onset of the pseudogap phase in the underdoped region, the optimal doping, or the position of the inferred quantum critical point. Based on a $`1/N`$ expansion of the $`tJ`$ model, we explain these findings, as well as the similar dependence on a magnetic field, in terms of a quantum critical point scenario where a flux phase causes the pseudogap.

The quantum critical point scenario represents a popular framework for discussing the phase diagram of high-$`T_c`$ oxides. By suppressing superconductivity with strong magnetic fields, it has been found experimentally that there exists a critical hole doping $`\delta ^{QCP}`$ at zero temperature which separates a metallic state at larger dopings from an insulating state at lower dopings. Strong fluctuations of the order parameter related to the insulating phase are thought to suppress the density of states for $`\delta <\delta ^{QCP}`$, leading to the pseudogap features in the underdoped region, and to be instrumental for superconductivity around $`\delta _c`$ and, at higher temperatures, for the anomalous properties of the normal state in these systems. The microscopic nature of the order parameter of the insulating phase and of its fluctuations is presently not clear. One obvious choice is antiferromagnetism, which occurs at $`T=0`$ as a long-range ordered phase at zero and small dopings. The corresponding zero-temperature critical point, however, lies at a much smaller doping value than the observed one, $`\delta ^{QCP}\approx 0.17`$. A reasonably large $`\delta ^{QCP}`$ has been obtained in Ref. for a scenario with an incommensurate charge density wave (ICDW). In this approach the pseudogap features are not directly related to the ICDW order parameter but rather connected to strong $`d`$-wave superconducting fluctuations sustained by ICDW precursors. Related approaches include preformed Cooper pairs, where phase coherence is achieved below $`T_c`$, or RVB spinon pairing and the $`\pi `$-flux phase, where charge coherence is obtained by Bose condensation of holons. A different proposal has been made in Ref. . Based on a $`1/N`$ expansion for the $`tJ`$ model, the quantum critical point was identified there with a transition from the normal to a $`d`$-wave flux state occurring near the observed $`\delta ^{QCP}`$ for realistic parameters. In this approach optimal doping is determined by the onset of the flux phase, and the phase diagram in the underdoped region is characterized by the competition between the flux and superconducting order parameters, both having $`d`$-wave symmetry.

Recently, several experimental results have been published which may be able to confirm or to rule out some of the above approaches. Measurements of NMR spin-lattice relaxation rates in the presence of magnetic fields up to $`15`$ Tesla did not yield appreciable changes for the onset temperature $`T^{*}`$ of the pseudogap phase, whereas the superconducting $`T_c`$ was reduced by about $`8`$ K. A strong suppression of $`T_c`$ and, at the same time, no change in the pseudogap were previously reported in Zn-doped YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub>. High-resolution photoemission, electronic Raman spectroscopy, NMR and heat capacity data show that $`T^{*}`$ does not merge with $`T_c`$ in the overdoped regime, but vanishes near optimal doping.
These findings indicate that the pseudogap and the superconductivity are different phenomena and are not related to the same order parameter. Furthermore, it has been found experimentally that the lowering of the $`T_c`$ curves in Zn-doped Y<sub>0.8</sub>Ca<sub>0.2</sub>Ba<sub>2</sub>Cu<sub>3</sub>O<sub>7</sub> ((Y,Ca)-123) and La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> (La-214) is concentrated around optimal doping, and that the optimal doping itself is not shifted. It is the purpose of this Letter to investigate the influence of impurity scattering and magnetic fields on the phase diagram calculated in Ref. and to compare the results with the above experimental findings.

We consider a $`t`$-$`J`$-$`V`$ model with $`N`$ degrees of freedom per lattice site on a square lattice. Its Hamiltonian can be written in terms of Hubbard's $`X`$-operators as

$$H=-\frac{t}{N}\sum_{\langle ij\rangle ,\,p}X_i^{p0}X_j^{0p}+\frac{J}{4N}\sum_{\langle ij\rangle ,\,p,q}X_i^{pq}X_j^{qp}-\frac{J}{4N}\sum_{\langle ij\rangle ,\,p,q}X_i^{pp}X_j^{qq}+\sum_{\langle ij\rangle ,\,p,q}\frac{V_{ij}}{2N}X_i^{pp}X_j^{qq},$$ (1)

with $`p,q=1\mathrm{}N`$. The internal labels $`p`$, $`q`$, … consist of a spin label distinguishing spin-up and spin-down states and a flavor label counting $`N/2`$ identical copies of the original orbital. $`\langle ij\rangle `$ denotes pairs of nearest-neighbor sites. The first three terms represent the $`t`$-$`J`$ Hamiltonian, the last term a screened Coulomb interaction appropriate for two dimensions, taken from Ref. . In the following we express all energies in units of $`t`$. The strength of the Coulomb interaction will be characterized by its value between nearest-neighbor sites, $`V_{n.n.}`$.

In the limit of large $`N`$, the interactions become purely instantaneous and $`H`$ can be diagonalized analytically. In the absence of impurity scattering, the coexistence state of superconductivity and a staggered $`(\pi ,\pi )`$ flux phase can be obtained from a Nambu representation with 4 states, yielding four electronic bands with dispersion

$$\pm E_\pm (𝐤)=\pm \sqrt{[\xi (𝐤)\pm \stackrel{~}{\mu }]^2+\mathrm{\Delta }(𝐤)^2},$$ (2)

where

$$\xi (𝐤)=\sqrt{ϵ(𝐤)^2+\varphi (𝐤)^2}.$$ (3)

Here the momenta $`𝐤`$ are restricted to the new Brillouin zone, which is one half of the original one. $`\stackrel{~}{\mu }`$ is a renormalized chemical potential, $`\varphi (𝐤)`$ the flux order parameter, $`\mathrm{\Delta }(𝐤)`$ the superconducting gap, and $`ϵ(𝐤)`$ the one-particle energies in the normal state. Both order parameters have $`d`$-wave symmetry: $`\varphi (𝐤)=\varphi [\mathrm{cos}(k_x)-\mathrm{cos}(k_y)]`$, $`\mathrm{\Delta }(𝐤)=\mathrm{\Delta }[\mathrm{cos}(k_x)-\mathrm{cos}(k_y)]`$.
They are determined by the self-consistent set of equations:

$$\varphi (𝐤)=\frac{1}{2N_c}\sum_𝐩J(𝐤+𝐩)\eta _\varphi (𝐩),$$ (4)

$$\mathrm{\Delta }(𝐤)=\frac{1}{2N_c}\sum_𝐩[J(𝐤+𝐩)-V_{n.n.}(𝐤+𝐩)]\eta _\mathrm{\Delta }(𝐩),$$ (5)

where

$$\eta _\varphi (𝐤)=\frac{\varphi (𝐤)}{\xi (𝐤)}\left\{\frac{\xi (𝐤)+\stackrel{~}{\mu }}{2E_+(𝐤)}\mathrm{tanh}\left[\frac{E_+(𝐤)}{2T}\right]+\frac{\xi (𝐤)-\stackrel{~}{\mu }}{2E_-(𝐤)}\mathrm{tanh}\left[\frac{E_-(𝐤)}{2T}\right]\right\},$$ (6)

$$\eta _\mathrm{\Delta }(𝐤)=\frac{\mathrm{\Delta }(𝐤)}{2E_+(𝐤)}\mathrm{tanh}\left[\frac{E_+(𝐤)}{2T}\right]+\frac{\mathrm{\Delta }(𝐤)}{2E_-(𝐤)}\mathrm{tanh}\left[\frac{E_-(𝐤)}{2T}\right].$$ (7)

The resulting phase diagram, calculated using $`J=0.3`$ and $`V_{n.n.}=0.5J`$, is shown in Fig. 1. Disregarding superconductivity, the second-order transition line between the normal state and the flux phase ends in a quantum critical point, denoted by the black dot, at $`\delta ^{QCP}\approx 0.115`$. We find the maximum of $`T_c`$ at essentially the same doping because of the strong competition between the flux and superconducting phases, and also that the flux phase instability is only slightly shifted by superconductivity (dashed line).

Now we are going to investigate how the phase diagram in Fig. 1 is affected by impurity scattering. In the simplest approximation, the effects of impurities in the normal state can be taken into account by introducing a renormalized frequency

$$i\stackrel{~}{\omega }_n=i\omega _n+i\mathrm{\Gamma }\frac{\omega _n}{|\omega _n|},$$ (8)

where $`\mathrm{\Gamma }`$ is a scattering rate, here used as a free parameter proportional to the impurity concentration. Throughout the flux phase the self-energy due to impurity scattering remains diagonal in the 4×4 Nambu representation because the flux order parameter does not couple to the impurities. The constant $`\mathrm{\Gamma }`$ in Eq. (8) could be refined in a deeper analysis by considering effects due to the proximity of the Van Hove singularity. However, the interesting doping region for superconductivity is, in our model, not at all correlated with the Van Hove singularity. As a matter of fact, the chemical potential for $`\delta \approx 0.1`$ is quite far away from the Van Hove singularity, which is at $`\delta =0`$ in our model. In this situation the band can, to a good approximation, be assumed to be structureless and $`\mathrm{\Gamma }`$ to be constant, even in the case of strong potential scattering. Experimentally it is known that non-magnetic impurities such as Zn and Al induce local moments on neighboring Cu sites. Both in a $`d`$-wave flux phase and in a $`d`$-wave superconductor, random local magnetic moments lead only to renormalizations of the frequency. As a result they contribute additively to $`\mathrm{\Gamma }`$ in Eq. (8) and thus can be accounted for by a proper choice for $`\mathrm{\Gamma }`$. It has also been argued that strong potential scattering near the unitary limit is much more important for the reduction of $`T_c`$ than the scattering from induced magnetic moments.
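To make the structure of the self-consistency problem, Eqs. (4)-(7), concrete, the following sketch iterates the two coupled d-wave amplitudes on a discrete momentum grid. The separable couplings, the bare band $`ϵ(𝐤)`$, and the fixed renormalised chemical potential are assumptions of this illustration (the full calculation uses the momentum-dependent kernels and restricts the momenta to the reduced Brillouin zone), so the printed numbers illustrate the iteration rather than the phase diagram.

```python
import numpy as np

J = 0.3
G_PHI, G_DEL = 0.5 * J, 0.5 * J     # assumed separable d-wave couplings
MU, TEMP = -0.3, 0.02               # assumed mu-tilde and temperature

n = 64
k = np.linspace(-np.pi, np.pi, n, endpoint=False)
kx, ky = np.meshgrid(k, k)
gam = np.cos(kx) - np.cos(ky)                   # d-wave form factor
eps = -2.0 * (np.cos(kx) + np.cos(ky))          # assumed bare band (t = 1)

def iterate(phi, delta, n_iter=300):
    """Fixed-point iteration with the structure of Eqs. (4)-(7)."""
    for _ in range(n_iter):
        xi = np.sqrt(eps**2 + (phi * gam)**2)
        e_p = np.sqrt((xi + MU)**2 + (delta * gam)**2) + 1e-12
        e_m = np.sqrt((xi - MU)**2 + (delta * gam)**2) + 1e-12
        th_p, th_m = np.tanh(e_p / (2 * TEMP)), np.tanh(e_m / (2 * TEMP))
        eta_phi = phi * gam / np.maximum(xi, 1e-12) * (
            (xi + MU) / (2 * e_p) * th_p + (xi - MU) / (2 * e_m) * th_m)
        eta_del = delta * gam * (th_p / (2 * e_p) + th_m / (2 * e_m))
        phi = G_PHI * np.mean(gam * eta_phi)    # schematic Eq. (4)
        delta = G_DEL * np.mean(gam * eta_del)  # schematic Eq. (5)
    return phi, delta

# for subcritical couplings the iteration flows to the trivial fixed
# point (0, 0); the point of the sketch is the structure of the update
print(iterate(0.1, 0.1))
```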
The occupation number $`f_\mathrm{\Gamma }`$ of an electronic state with energy $`ϵ`$ in the presence of impurities reads

$$f_\mathrm{\Gamma }\left(\frac{ϵ}{T}\right)=T\sum_n\frac{e^{i\omega _n0^+}}{i\stackrel{~}{\omega }_n-ϵ}=\frac{1}{2}+\frac{1}{2\pi }\text{Im}\left[\psi \left(\frac{1}{2}-i\frac{ϵ+i\mathrm{\Gamma }}{2\pi T}\right)-\psi \left(\frac{1}{2}+i\frac{ϵ-i\mathrm{\Gamma }}{2\pi T}\right)\right],$$ (9)

where $`\psi `$ denotes the digamma function. In the limit of zero impurity concentration $`\mathrm{\Gamma }\to 0`$ and $`f_\mathrm{\Gamma }(ϵ/T)`$ reduces to the usual Fermi function $`f(x)=1/(e^x+1)`$. In a similar way, we also define a function $`\mathrm{tanh}_\mathrm{\Gamma }`$ by

$$\mathrm{tanh}_\mathrm{\Gamma }\left(\frac{ϵ}{2T}\right)=f_\mathrm{\Gamma }\left(-\frac{ϵ}{T}\right)-f_\mathrm{\Gamma }\left(\frac{ϵ}{T}\right),$$ (10)

which reduces to $`\mathrm{tanh}(ϵ/2T)`$ for $`\mathrm{\Gamma }\to 0`$. For the determination of the superconducting critical temperature $`T_c`$, the self-consistent set of gap equations can be linearized with respect to $`\mathrm{\Delta }(𝐤)`$. The resulting equations are again given by Eqs. (4-7) if the function $`\mathrm{tanh}`$ is everywhere replaced by the function $`\mathrm{tanh}_\mathrm{\Gamma }`$ defined in Eq. (10). The solid lines in Fig. 2 show numerical results for $`T_c`$ as a function of doping $`\delta `$ for different scattering rates $`\mathrm{\Gamma }`$, using $`J=0.3`$ and $`V_{n.n.}=0.5J`$. These curves illustrate the suppression of $`T_c`$ with increasing scattering rates $`\mathrm{\Gamma }=0`$, $`2\times 10^{-3}`$, $`4\times 10^{-3}`$, and $`6\times 10^{-3}`$. The corresponding changes in $`T^{*}`$, determining the phase boundary between the normal state and the flux state, are depicted in Fig. 2 by the grey region. The chosen values for $`\mathrm{\Gamma }`$ correspond roughly to $`\mathrm{\Gamma }\approx 1.0\,T_c`$ at optimal doping, and to $`\mathrm{\Gamma }\approx 1.5\,T_c`$ in the strongly underdoped region, interpolating between the weak- and the strong-coupling regimes. One important result of Fig. 2 is that the flux phase boundary $`\delta ^{FL}(T)`$ is only slightly shifted by impurities, in spite of the strong suppression of the superconducting critical temperature. In particular, the zero-temperature limit of $`\delta ^{FL}`$, $`\delta ^{FL}(0)`$, is almost completely independent of the impurity scattering rate. Since in our approach the maximum of $`T_c`$ as a function of doping is essentially determined by $`\delta ^{FL}(0)`$, this means that the $`T_c(\delta )`$ curves shrink towards $`\delta ^{FL}(0)`$ with increasing scattering rate, which is a characteristic feature of Fig. 2. Interpreting Fig. 2 in terms of a quantum critical point scenario means that the corresponding critical doping $`\delta ^{QCP}`$ is given by $`\delta ^{FL}(0)`$ and that $`\delta ^{QCP}`$ is almost completely independent of the impurity scattering rate. The curves in Fig. 2 are in excellent agreement with the corresponding experimental curves in Zn-doped (Y,Ca)-123 and La-214, given in Fig. 2 of Ref. . We can gain further insight into our results by the following analysis. We first notice that, at least for the above values of $`\mathrm{\Gamma }`$, impurity scattering effects lead to an additional smearing of the occupation number in Eq. (9).
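As a cross-check, Eqs. (9) and (10) are straightforward to evaluate with a complex digamma function. The sketch below uses mpmath; the parameter values are arbitrary test inputs, and the last line anticipates the effective-temperature behaviour discussed next.

```python
import numpy as np
from mpmath import digamma, im

def f_gamma(eps, temp, gam):
    """Eq. (9); the two digamma terms are combined into one using
    psi(conj(z)) = conj(psi(z))."""
    z = 0.5 + (gam - 1j * eps) / (2.0 * np.pi * temp)
    return 0.5 + float(im(digamma(z))) / np.pi

def tanh_gamma(eps, temp, gam):
    """Eq. (10); reduces to tanh(eps/2T) for gam -> 0."""
    return f_gamma(-eps, temp, gam) - f_gamma(eps, temp, gam)

# clean limit: the Fermi function and tanh are recovered
print(f_gamma(0.1, 0.05, 1e-9), 1.0 / (np.exp(2.0) + 1.0))
print(tanh_gamma(0.1, 0.05, 1e-9), np.tanh(1.0))
# a finite gamma acts much like an effective temperature T + gamma
print(f_gamma(0.1, 0.05, 0.02), 1.0 / (np.exp(0.1 / 0.07) + 1.0))
```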
This smearing can be simulated to a very good approximation by an effective temperature:

$$f_\mathrm{\Gamma }\left(\frac{ϵ}{T}\right)\approx f\left(\frac{ϵ}{T+\mathrm{\Gamma }}\right).$$ (11)

As a consequence, the flux phase instability in the presence of impurities is roughly determined by $`\delta _\mathrm{\Gamma }^{FL}(T)\approx \delta ^{FL}(T+\mathrm{\Gamma })`$. But $`\delta ^{FL}`$ is very weakly dependent on $`T`$ until $`T\sim \stackrel{~}{\mu }`$, so that the $`T=0`$ quantum critical point is not expected to be shifted as long as $`\mathrm{\Gamma }\stackrel{<}{\sim }\stackrel{~}{\mu }`$ holds. In order to understand better why $`\delta ^{FL}(T)`$ is almost independent of $`T`$ for $`\mathrm{\Gamma }\stackrel{<}{\sim }\stackrel{~}{\mu }`$, we consider the flux phase susceptibility $`\chi (T,\mathrm{\Gamma },\delta )`$. The instability line $`\delta ^{FL}(T)`$ is determined in general by the equation $`1=J\chi (T,\mathrm{\Gamma },\delta )`$, where the dependence of the susceptibility on temperature and impurities is given by the factor

$$F_\mathrm{\Gamma }(ϵ)=\frac{f_\mathrm{\Gamma }[(ϵ-\stackrel{~}{\mu })/T]-f_\mathrm{\Gamma }[(ϵ+\stackrel{~}{\mu })/T]}{ϵ}.$$ (12)

A sketch of the numerator and denominator of $`F_\mathrm{\Gamma }`$ is given in Fig. 3. Continuous and discontinuous solid lines represent the numerator for $`\mathrm{\Gamma }\ne 0`$ and $`\mathrm{\Gamma }=0`$, respectively, and the dashed line the denominator. Due to the weak variation of the denominator around $`\stackrel{~}{\mu }`$ and $`-\stackrel{~}{\mu }`$, it is clear that this factor is only slightly affected by a possible smearing due to finite temperatures or impurity concentrations, as long as $`T+\mathrm{\Gamma }\stackrel{<}{\sim }\stackrel{~}{\mu }`$ holds. Things are different in the case of the superconducting susceptibility. Here the divergence of the denominator coincides with the jump in the numerator, so that even a small smearing leads to a strong change in $`\chi `$ and a large suppression of $`T_c`$. We would like to mention that a charge-density-wave susceptibility would contain a similar factor as in Eq. (12), so that also in this case impurities would not substantially affect the function $`F_\mathrm{\Gamma }`$. However, the CDW order parameter has the symmetry of the underlying lattice, i.e., $`s`$-wave symmetry. Impurities couple in this case directly to the order parameter, and the self-energy due to impurity scattering also acquires non-diagonal elements in addition to the diagonal ones described by Eq. (8). As a result, one expects that the charge-density-wave state is sensitive to impurities, and the corresponding quantum critical point and optimal doping would be shifted by the impurities, in disagreement with Ref. .

We have considered throughout our analysis a $`(\pi ,\pi )`$ flux phase and its competition with superconductivity. In Ref. it was shown that there is a continuous transition line in the $`T`$–$`\delta `$ plane from a commensurate to an incommensurate flux state at low temperatures (the term “commensurate” is here used with respect to the lattice periodicity and not, as in Ref. , with respect to the electronic filling). Taking the incommensurability into account, the largest onset of the flux phase as a function of doping now occurs at $`T=0`$ with a critical doping $`\delta ^{QCP}\approx 0.135`$. However, the boundaries in the phase diagram and, in particular, the competition between flux and superconducting phases are not much changed by allowing for the incommensurability of the flux phase.
Disregarding superconductivity, we have also studied the influence of impurities on the boundary between an incommensurate flux and the normal phase. The resulting width in $`T^{*}`$ for the scattering rates used in Fig. 2 is very similar to that shown in this figure for the commensurate case. In particular, the change of the critical doping at $`T=0`$ was smaller than $`0.01`$ for all $`\mathrm{\Gamma }`$'s, showing that our Fig. 2, calculated for the commensurate case, is also valid in the incommensurate case to a very good approximation. Also the simplified arguments based on Fig. 3 for the robustness of the flux state, in contrast to the superconducting state, with respect to impurities still apply. The finiteness of the chemical potential $`\stackrel{~}{\mu }`$ reflects the fact that one-particle states which are not exactly degenerate in energy are involved in forming the flux state. This means that finite difference energies $`|ϵ(𝐤)-ϵ(𝐤+𝐐)|`$ ($`𝐐`$ is the wave vector of the flux phase) associated with a large phase space are important, which can be characterized by a typical energy $`2\stackrel{~}{\mu }`$. This explains why both the commensurate and the incommensurate flux phase behave in a very similar way with respect to impurities.

Our proposed scenario of a flux quantum critical point is also consistent with the NMR measurements of Ref. . In that paper, a magnetic field $`H=14.8`$ T was shown to yield a net reduction of the superconducting critical temperature of $`\mathrm{\Delta }T_c=7.8`$ K but no corresponding decrease of the pseudogap temperature $`T^{*}`$ within the experimental uncertainty of 2 $`\%`$. In our theory the pseudogap and the superconductivity arise from two different mechanisms, so that a significant reduction of $`T_c`$ is possible in the absence of a corresponding reduction of $`T^{*}`$. The predominant effect of a magnetic field on the superconducting phase is a reduction of $`T_c`$ in order to balance the free magnetic energy related to the Meissner effect. The decrease of $`T_c`$ is linear in $`H`$ for $`H\ll H_c`$ ($`H_c`$ being the critical magnetic field), so that the reduction in $`T_c`$ is quite effective even for small magnetic fields. On the other hand, the effect of a magnetic field on the flux phase is mainly due to the Zeeman splitting $`\mathrm{\Delta }E=g\mu _BH`$, where $`g`$ and $`\mu _B`$ are the $`g`$ factor and the Bohr magneton, respectively. This energy is about $`20`$ K for $`H=14.8`$ T, and thus much smaller than the width of the electronic band. If we generalize our order parameter $`\varphi `$ in the presence of a magnetic field via $`\varphi \to \overline{\varphi }=[\varphi (\stackrel{~}{\mu }-\mathrm{\Delta }E)+\varphi (\stackrel{~}{\mu }+\mathrm{\Delta }E)]/2`$, we obtain an effective susceptibility given by $`\chi =[\chi (\stackrel{~}{\mu }-\mathrm{\Delta }E)+\chi (\stackrel{~}{\mu }+\mathrm{\Delta }E)]/2`$. The Zeeman splitting $`\mathrm{\Delta }E=20`$ K is much smaller than the energy scale set by the bandwidth $`W\approx 0.5`$ eV, so that we expect a negligible effect on the flux instability. Moreover, the corresponding shift of the flux transition temperature will be only of order $`H^2`$ in the magnetic field. We have checked the above arguments by calculating explicitly the change in the instability from the normal to the flux state in the presence of a Zeeman splitting $`\mathrm{\Delta }E=20`$ K, corresponding to $`\mathrm{\Delta }E=6\times 10^{-3}\,t`$ with $`t=0.3`$ eV. We find a shift in doping of the $`T=0`$ quantum critical point of about 4 $`\%`$, which is quite small.
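The Zeeman scale invoked in this estimate is a one-line check; the only inputs are the field quoted above, $`t=0.3`$ eV, and the usual free-electron assumption $`g=2`$:

```python
MU_B = 5.7884e-5           # Bohr magneton [eV/T]
K_B = 8.6173e-5            # Boltzmann constant [eV/K]
G_FACTOR, H = 2.0, 14.8    # g = 2 assumed; field in tesla

delta_e = G_FACTOR * MU_B * H   # Zeeman splitting in eV
print(delta_e / K_B)            # ~19.9 K, the ~20 K quoted above
print(delta_e / 0.3)            # ~5.7e-3 in units of t = 0.3 eV
```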
This shift disappears with increasing temperature because of thermal smearing, which makes the Zeeman splitting ineffective. In conclusion, we have analyzed the effects of impurity scattering on the flux phase and on its interplay with superconductivity within the framework of a quantum critical point scenario. We have found that the transition between the flux and the normal phase is essentially unaffected by impurities and magnetic fields, in very good agreement with the experimental data. This is especially true at zero temperature, where we identify the quantum critical point with the transition between the normal and the (incommensurate) flux state. We also pointed out that a charge density wave as the origin of the pseudogap phase would couple directly to impurities, in contrast to the flux order parameter, and would thus be much more sensitive to impurities.

\* Present address: Dipartimento di Fisica, Università di Roma I “La Sapienza”, P.le Aldo Moro 2, 00184 Roma, Italy.
no-problem/9912/math-ph9912019.html
ar5iv
text
# Partitioning Composite Finite Systems

A.S. Botvina<sup>1,2,3</sup>, A.D. Jackson<sup>4</sup>, and I.N. Mishustin<sup>4,5,6</sup>

<sup>1</sup>GANIL (CEA-DSM/CNRS-IN2P3), B.P.5027, F-14076 Caen Cedex 5, France
<sup>2</sup>Dipartimento di Fisica and INFN, 40126 Bologna, Italy
<sup>3</sup>Institute for Nuclear Research, Russian Academy of Science, 117312 Moscow, Russia
<sup>4</sup>Niels Bohr Institute, DK-2100 Copenhagen Ø, Denmark
<sup>5</sup>Kurchatov Institute, Russian Research Center, 123182 Moscow, Russia
<sup>6</sup>Institute for Theoretical Physics, J.-W. Goethe University, D-60054 Frankfurt am Main, Germany

## Abstract

We compare different analytical and numerical methods for studying the partitions of a finite system into fragments. We propose a new numerical method of exploring the partition space by generating Markov chains of partitions based on the Metropolis algorithm. The advantages of the new method for problems where partitions are sampled with non-trivial weights are demonstrated.

PACS numbers: 25.70.Pq, 02.70.Lq

Many fields of physics deal with the common phenomenon that, under appropriate conditions, a compound system can disintegrate into constituents. Let us consider an isolated system composed of $`A_0`$ identical particles (we call them nucleons) which are kept together by some attractive forces. If sufficient energy is put into the system, it will disintegrate into fragments. These fragments can either be individual nucleons or bound clusters of several nucleons. Examples of such processes abound in condensed matter physics, nuclear physics, and astrophysics. In order to provide a microscopic description of such processes, one must sort out the possible partitions of the system and compare their probabilities. As a first step, it is necessary to develop methods of generating and sampling the partitions. The aim of this paper is to propose a new and efficient method of doing this.

The obvious way to proceed is simply to construct all partitions directly and calculate the characteristics of interest. Unfortunately, this approach is possible only for small $`A_0`$ because the total number of partitions, $`P(A_0)`$, grows rapidly with $`A_0`$. For instance, $`P(100)=190569292`$ while $`P(200)=3972999029388`$. Even if one needs to perform only a few non-trivial operations for each partition, this task becomes intractable for $`A_0>100`$. We shall, however, reserve this direct method for checking the more practical methods presented below.

First, we address an analytical approach to Euler's partitioning problem. It is based on the generating function (GF) formalism. This approach can be applied successfully for calculating average characteristics of partitions. We characterize each partition $`f`$ by the multiplicities $`\{N_A\}`$ of fragments with different nucleon numbers $`A`$, $`1\le A\le A_0`$. Then, the conservation of the total nucleon number for each $`f`$ is expressed as

$$\sum_{A=1}^{A_0}N_A^{(f)}A=A_0.$$ (1)

Evidently, the total fragment multiplicity $`M`$ in the channel $`f`$ is

$$M_f=\sum_{A=1}^{A_0}N_A^{(f)}.$$ (2)

Following a well-established method in the mathematical literature, we introduce an unconstrained generating function $`Z(x)`$:

$$Z(x)=\sum_{N_A=0}^{\infty }\prod_{A=1}^{\infty }\left(c_Ax^A\right)^{N_A}=\prod_{A=1}^{\infty }\frac{1}{1-c_Ax^A},$$ (3)

where the $`c_A`$ are arbitrary numbers which can later be taken as $`c_A`$=1.
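The two partition counts quoted above are easy to verify. A minimal Python sketch with the standard bounded-part recursion (nothing here is specific to this paper) reproduces them exactly:

```python
def partition_counts(n_max):
    """p(n) for n = 0..n_max, adding parts of size 1, 2, ... in turn."""
    p = [1] + [0] * n_max
    for part in range(1, n_max + 1):
        for n in range(part, n_max + 1):
            p[n] += p[n - part]
    return p

p = partition_counts(200)
print(p[100], p[200])   # 190569292  3972999029388
```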
Here $`x`$ can be considered as a Lagrange multiplier. Now we can calculate the total number of partitions, $`P(A_0)`$, by simply expanding eq. (3) and counting the coefficient of $`x^{A_0}`$. The results for large $`A_0`$ or $`x\to 1`$ are well approximated by the famous Hardy-Ramanujan formula:

$$P(A_0)=\frac{1}{\sqrt{48}A_0}\mathrm{exp}\left(\pi \sqrt{\frac{2A_0}{3}}\right)+O\left(\left[\mathrm{exp}\left(\pi \sqrt{\frac{2A_0}{3}}\right)\right]^{1/2}\right).$$ (4)

One can use this generating function to calculate approximately the average multiplicities of fragments $`\langle N_A\rangle `$ over all partitions. This is done by replacing the exact constraint of eq. (1) by an approximate one:

$$\sum_{A=1}^{\infty }\langle N_A\rangle A=A_0,$$ (5)

i.e. the constraint is fulfilled on average only. Then one obtains

$$A_0=x\frac{\partial \mathrm{ln}Z(x)}{\partial x}=\sum_{A=1}^{\infty }\frac{Ax^A}{1-x^A},$$ (6)

where we have set the $`c_A=1`$. This equation must be solved to determine $`x`$. A very good approximation to the solution at large $`A_0`$ is

$$x=\mathrm{exp}\left(-\pi \sqrt{\frac{1}{6A_0}}+\frac{1}{4A_0}\right).$$ (7)

Now, the mean multiplicities of fragments can be calculated as

$$\langle N_A\rangle =c_A\frac{\partial \mathrm{ln}Z(x)}{\partial c_A}=\frac{x^A}{1-x^A}.$$ (8)

The result is shown in fig. 1 (top panel) in comparison with the results of the direct method, in which all partitions are included in the calculation. It is seen that the agreement is good except for a slight discrepancy at large $`A`$, which indicates an expected finite-size effect. Indeed, eq. (8) gives small but finite $`\langle N_A\rangle `$ even for $`A>A_0`$, where the exact calculation gives strictly zero. The average multiplicity of all fragments can be calculated as $`\langle M\rangle =\sum_A\langle N_A\rangle `$ and is well approximated by the expression

$$\langle M\rangle =\frac{1}{\pi }\sqrt{\frac{3A_0}{2}}\mathrm{ln}\left(\frac{6A_0}{b\pi ^2}\right),$$ (9)

with $`b=`$0.315087. For example, for $`A_0`$=100 it gives $`\langle M\rangle =21.32`$, while the exact value obtained with the direct method is 21.75.

More generally, it is useful to consider the situation in which partitions are biased with certain weights. In statistical theory, for example, identical fragments are counted in a partition sum with a factorial weight $`1/N_A!`$. The weight of a partition is then $`W_f=1/\prod_AN_A!`$. In this case, the corresponding generating function can be written as:

$$Z(x)=\sum_{N_A=0}^{\infty }\prod_{A=1}^{\infty }\frac{\left(c_Ax^A\right)^{N_A}}{N_A!}=\prod_{A=1}^{\infty }\mathrm{exp}\left(c_Ax^A\right).$$ (10)

This form is similar to the grand canonical partition sum if one identifies $`x`$ with the fugacity and the $`c_A`$'s with the internal partition sums of individual fragments. Now instead of eqs. (6) and (8) one easily obtains (after substituting $`c_A`$=1):

$$A_0=\sum_{A=1}^{\infty }Ax^A,\langle N_A\rangle =x^A.$$ (11)

For $`A_0\to \infty `$ one finds the approximate expressions $`x=\mathrm{exp}(-1/\sqrt{A_0})`$ and $`\langle M\rangle =\sqrt{A_0}`$. These results are shown in fig. 1 (bottom panel). The mean multiplicity $`\langle M\rangle =10`$ for the case $`A_0=100`$ is in good agreement with the exact value of 9.77 obtained by direct calculation. For the two simple examples considered above one can also calculate the multiplicity distributions of individual fragments. It is clear from the structure of the generating functions, eqs. (3) and (10), that the distribution is exponential in the first case and Poissonian in the second case.
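The numbers quoted in this section can be checked directly. The sketch below solves the constraint equations for $`x`$ numerically rather than using the closed-form approximations; the truncation of the formally infinite sums at a large $`A_{max}`$ is the only approximation made.

```python
import math
from scipy.optimize import brentq

A0, A_MAX = 100, 500   # A_MAX truncates the formally infinite sums

# unweighted partitions, Eq. (6)
x1 = brentq(lambda x: sum(a * x**a / (1 - x**a)
                          for a in range(1, A_MAX)) - A0, 0.5, 0.99)
m1 = sum(x1**a / (1 - x1**a) for a in range(1, A_MAX))   # Eq. (8) summed

# factorial weights, Eq. (11): sum_A A x^A = x/(1-x)^2 = A0
x2 = brentq(lambda x: x / (1 - x)**2 - A0, 0.5, 0.99)
m2 = x2 / (1 - x2)

print(x1, math.exp(-math.pi / math.sqrt(6 * A0) + 1 / (4 * A0)))  # Eq. (7)
print(m1)   # ~21.3; the direct enumeration gives 21.75
print(m2)   # ~9.5, close to sqrt(A0) = 10; the direct value is 9.77
print(math.exp(math.pi * math.sqrt(2 * A0 / 3)) / (math.sqrt(48) * A0))
# the last line is the leading term of Eq. (4): ~1.99e8 vs P(100) above
```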
The normalized multiplicity distributions are, respectively,

$$P_1(N_A)=\frac{1}{1+\langle N_A\rangle }\left(\frac{\langle N_A\rangle }{1+\langle N_A\rangle }\right)^{N_A},P_2(N_A)=\mathrm{exp}(-\langle N_A\rangle )\frac{\langle N_A\rangle ^{N_A}}{N_A!}.$$ (12)

As seen in fig. 3, the exact results are reproduced by these distributions with high accuracy.

In practice, however, direct accounting for all partitions can only be done for $`A_0\stackrel{<}{\sim }100`$. If the weight factors are complicated, it can also be hard to find an analytical solution. Multiplicity distributions and correlations, which are of considerable physical interest, are particularly difficult to obtain<sup>1</sup><sup>1</sup>1In this respect an interesting development of an analytical method was recently made in ref. .. There is thus a need for another method, presumably based on the generation of individual partitions. Obviously, it must be efficient enough to permit computer simulation within a reasonable time. A first attempt to develop such a method was made in refs. by introducing a bias function $`b(A_0,M)=P(A_0,M)/P(A_0)`$, where $`P(A_0,M)`$ is the total number of partitions with exactly $`M`$ fragments. It can be calculated using the recursion relation

$$P(A_0,M)=P(A_0-M,M)+P(A_0-1,M-1).$$ (13)

As before, the total number of partitions is

$$P(A_0)=\sum_MP(A_0,M).$$ (14)

This bias function is used to generate a sample of partitions by the Monte Carlo method. First, $`M`$ is selected randomly with a probability given by the bias function, $`b(A_0,M)`$. Then, a random partition with the selected multiplicity is generated as described in ref. . We shall refer to this method as Biased Random Generation (BRG). Another Monte Carlo method of generating partition samples, using a bias function obtained with a Laplace transformation, is described in ref. .

Figs. 1 and 2 (top panel) show how well the BRG method works in the case when all partitions have equal weights. The results are presented for $`A_0`$=100 and summarize the outcome of 10<sup>5</sup> randomly generated partitions. By construction, this method is guaranteed to give the correct multiplicity distribution, as shown in fig. 2 (top panel). It is less trivial that it also correctly reproduces the mean multiplicities of individual fragments as well as other distributions. Unfortunately, the BRG method has a serious drawback: it produces correct results only in the case in which the weights of partitions are equal. This is not surprising, given that eq. (13) was obtained under this assumption. When we introduce nontrivial weight factors, for instance relative factorial weights $`W=1/\prod_AN_A!`$ for partitions with fixed $`M`$, the method fails. This is clearly seen in the bottom panel of fig. 1 for the mean fragment multiplicities. In the case of nontrivial partition weights, the calculation of a bias function might be even more difficult than the calculation of a corresponding generating function.
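The ingredients of the bias function are equally compact in code. The following sketch implements the recursion of Eq. (13) with memoisation (the use of lru_cache is our choice of implementation, not the reference's) and checks Eq. (14) against the exact value of $`P(100)`$:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p_nm(n, m):
    """Partitions of n into exactly m parts, the recursion of Eq. (13)."""
    if m == 0:
        return 1 if n == 0 else 0
    if n < m:
        return 0
    return p_nm(n - m, m) + p_nm(n - 1, m - 1)

total = sum(p_nm(100, m) for m in range(1, 101))       # Eq. (14)
bias = [p_nm(100, m) / total for m in range(1, 101)]   # b(100, M)
print(total)                                           # 190569292
print(1 + max(range(100), key=bias.__getitem__))       # most probable M
```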
Here we propose a new method of partition sampling which is designed especially for computer simulations. The idea is to generate a Markov chain by moving from one partition to another in minimal steps, i.e., by demanding that neighboring partitions differ by the state of one nucleon only. We shall refer to this method of generating partition samples as Markov Chain Generation (MCG). The procedure allows the following moves: (a) to transfer a nucleon from one fragment to another, (b) to make a nucleon free, or (c) to attach a free nucleon to a fragment. In addition, one must ensure that each new partition is different from the previous one, since fragments with the same $`A`$ are to be regarded as indistinguishable. As is well known, any sampling procedure of this kind must satisfy the detailed balance requirement. This can be achieved by applying the famous Metropolis algorithm, where a chain of partitions is generated by performing subsequent moves in the partition space biased by the partition probabilities $`W`$ (weight factors). As shown elsewhere (e.g., ref. ), this method provides a correct description of the complete partition space for any specified weight factors $`W`$. In the MCG the number of all possible moves is limited and easily countable for any partition. By generating a new partition we account for the probability of all possible moves, and thus we avoid the bias function problem. Detailed balance is guaranteed by application of the Metropolis algorithm. The numerical procedure is implemented in the following way (a code sketch is given after the list):

Step I: For a given partition with $`M`$ fragments of mass numbers $`A_i`$ ($`i=1,\mathrm{},M`$), enumerate all fragments in order of decreasing mass, so that $`A_1\ge A_2\ge \mathrm{}\ge A_M`$. This order is to be strictly maintained; any move violating this ordering is rejected. In this manner, we ensure that each move gives a genuinely new partition.

Step II: Select at random the fragment $`i`$ that loses a nucleon and the fragment $`j`$ ($`j=1,\mathrm{},M+1;\,j\ne i`$) that accepts it. (The case $`j=M+1`$ corresponds to making the nucleon free.) Check this move against the ordering requirement of Step I. If the order is violated, repeat the determination of $`i`$ and $`j`$.

Step III: Calculate the weight of the new partition, $`W_{\mathrm{new}}`$, and compare it with the weight of the previous one, $`W_{\mathrm{old}}`$. The new partition is added to the ensemble if $`W_{\mathrm{new}}\ge W_{\mathrm{old}}`$. If $`W_{\mathrm{new}}<W_{\mathrm{old}}`$, the new partition is added with probability $`W_{\mathrm{new}}/W_{\mathrm{old}}`$. Otherwise, the old partition is taken as the new one and a new move is undertaken.

Step IV: Calculate the characteristics of interest by taking all partitions from the chain. The chain is truncated when these characteristics are saturated.

We stress that, contrary to the GF and BRG methods discussed above, the MCG method is a purely numerical procedure which requires nothing more than random number generation. This provides a welcome degree of universality which is missing in the other methods. For example, similar to the direct calculation, our method can be applied for any partition weight, and it can easily be generalized to other partition spaces, e.g., when fragments are characterized by two numbers (such as mass $`A`$ and charge $`Z`$) instead of one.

The initialization problem, i.e., the question of which partition should be taken as a seed, does not appear to be important for the MCG method. The system with $`A_0`$=100 loses all memory of the initial partition after approximately $`10^4`$ moves. In order to obtain a representative partition sample, one should simply discard these initial partitions from the ensemble. This has been verified for several cases in which the partition weights vary smoothly with fragment mass and the number of fragments. In other cases, the number of initial moves may increase; this problem must be analyzed in each particular case. We have checked the MCG method in a number of ways.
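The code sketch announced above follows Steps I-IV with the factorial weights as an example. Two details are simplifications of this sketch rather than of the procedure itself: an order-violating proposal is counted as a rejected step instead of redrawing $`i`$ and $`j`$, and logarithmic weights are used for numerical convenience.

```python
import math
import random
from collections import Counter

def log_weight(part):
    """log W for W = 1/prod_A N_A!; replace by 0.0 for equal weights."""
    return -sum(math.lgamma(n + 1) for n in Counter(part).values())

def propose(part):
    """Steps I-II: move one nucleon from fragment i to fragment j
    (j == M creates a free nucleon); None if the descending order
    would be violated or the partition would be unchanged."""
    m = len(part)
    i, j = random.randrange(m), random.randrange(m + 1)
    if i == j:
        return None
    new = list(part)
    new[i] -= 1
    if j == m:
        new.append(1)
    else:
        new[j] += 1
    new = [a for a in new if a > 0]
    if new != sorted(new, reverse=True) or new == list(part):
        return None
    return tuple(new)

def mcg_chain(a0, n_steps, burn_in=10_000):
    part, lw, sample = (a0,), 0.0, []   # seed: one big fragment
    for step in range(n_steps):
        cand = propose(part)
        if cand is not None:
            lw_new = log_weight(cand)
            # Step III: accept with probability min(1, W_new/W_old)
            if lw_new >= lw or random.random() < math.exp(lw_new - lw):
                part, lw = cand, lw_new
        if step >= burn_in:             # Step IV: collect the chain
            sample.append(part)
    return sample

sample = mcg_chain(100, 200_000)
print(sum(map(len, sample)) / len(sample))  # <M> near the exact 9.77
```

Replacing log_weight by a constant targets the equal-weight ensemble discussed above instead.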
1-4 for two cases: first, when all partitions have equal weights (top panels) and second, when partitions with identical fragments are suppressed by the factorial weights $`W_f=1/\prod _AN_A!`$ (bottom panels). They show the mean fragment multiplicity as a function of $`A`$ (the mass distribution), the distribution of total fragment multiplicity, and a very specific characteristic, i.e., the distribution of multiplicities of particular fragments ($`A=1`$, $`A=4`$ and $`A\ge 10`$) taken over all partitions. The results of the exact direct method and of Markov chain generation are in remarkably good agreement. It should be stressed that for $`A_0=100`$ all $`1.9\times 10^8`$ partitions are included in the direct method, while only 10<sup>5</sup> partitions taken from the chain suffice to explore the entire partition space with the MCG method. Small discrepancies in the tails of the distributions are seemingly related to the limited sample size and numerical precision. However, they are unimportant in practice because of their very small relative weight in the chain. For smaller systems (e.g. $`A_0`$=20) the agreement is also good. For larger systems (e.g., $`A_0`$=1000), where the direct method is intractable, comparisons were made with the analytical GF method. As demonstrated in fig. 4, the agreement is quite good, apart from a small discrepancy in the tails. One should bear in mind, however, that the GF method slightly overestimates the exact result (see fig. 1). We emphasize that the same high-quality agreement between the direct and MCG methods is achieved in both considered cases, which differ significantly in their weight factors. Calculations have been made for partitioning with other weights, and similar agreement has been found. Therefore, we believe that the MCG method described here offers a simple and correct numerical solution to the partition sampling problem. In conclusion, we have analyzed several methods for calculating characteristics of the partition space of a finite composite system. We have developed a new numerical method, Markov Chain Generation, which is flexible and efficient in practical calculations with complicated partition weights. We see a variety of applications of this method in different fields dealing with finite-size objects, from atomic nuclei to molecular clusters and astrophysical objects. We believe that this method will be very useful for studying the thermodynamics of finite systems . The authors thank J.P. Bondorf for fruitful discussions. A.S.B. thanks the INFN, Italy (Bologna section), and I.N.M. thanks the Niels Bohr Institute, Copenhagen University, for the kind hospitality and financial support. This work was supported in part by the Humboldt Foundation, Germany. Figure captions Fig. 1. Average multiplicities $`\langle N_A\rangle `$ of fragments with mass number $`A`$ for the system with total mass $`A_0=100`$. Solid lines: direct calculation taking into account all partitions, dashed lines: numerical Markov chain generation of partitions, dot-dashed lines: analytical calculations by the generating function method, dotted lines: biased random generation. Top panel: for partitions with equal weights, bottom panel: for partitions with the factorial weights $`1/\prod _AN_A!`$. Fig. 2. Distribution of total fragment multiplicity $`M`$ for the system $`A_0=100`$. Notations are the same as in fig. 1. Fig. 3. Multiplicity distributions of fragments with $`A=1`$, $`A=4`$ and $`A\ge 10`$ for the system $`A_0=100`$. Notations are the same as in fig. 1. Fig. 4.
Comparison of fragment mass distributions for $`A_0`$=20 and 1000 calculated by the analytical and Markov chain generation methods. Top and bottom panels show calculations for two different weighting factors as above.
# A jet-disk symbiosis model for Gamma Ray Bursts: fluence distribution, CRs and 𝜈’s ### (To appear in Proceedings of the 10th Annual October Astrophysics Conference in Maryland: Cosmic Explosions !) ## Introduction Gamma-Ray Bursts are short bursts that peak in the soft $`\gamma `$-ray band, between 100 keV and a few MeV. The duration of their emission goes from $`10\times 10^{-3}`$ s to $`10^3`$ s, and they show variability of the order of $`\mathrm{ms}`$. They also show persistent emission in the X-ray, optical, infrared and radio bands (the afterglow), a spatially isotropic distribution, and a nonthermal spectrum. It is believed that GRBs are associated with relativistic shocks caused by a relativistic fireball in a pre-existing gas, such as the interstellar medium or a stellar wind/jet, producing and accelerating electrons/positrons to very high energies, which produce the gamma emission and the various afterglows observed paczy86 ; rees93 . More than 30 years after their discovery, thanks to the Burst and Transient Source Experiment (BATSE) and the Italian-Dutch satellite BeppoSAX, the scientific community knows that Gamma Ray Bursts (GRBs) are isotropically distributed in the sky and that at least some of them are at cosmological distances. But the data presently available on redshifts and host-galaxy localizations are still too few to give us good statistics to study the evolution of GRBs and their redshift distribution. Because of this lack of information, it is still necessary to assume that GRBs follow the statistical distribution of some other well known objects to obtain the GRB fluence or flux distribution itself feni95 ; cohe95 . ## GRB jet model: key points In our model pugl99 , GRBs develop in a pre-existing jet. We consider a binary system formed by a neutron star and an O/B/WR companion in which the energy of the GRB is due to the accretion-induced collapse of the neutron star to a black hole. To fix the jet parameters we use the basic ideas of the jet-disk symbiosis model by Falcke $`\&`$ Biermann falc95 . In this model, accretion disk, jet, and compact object are considered as an entire system. Mass and energy conservation are applied and the total jet power $`Q_{\mathrm{jet}}`$ is found to be a substantial fraction of the disk luminosity $`L_{\mathrm{disk}}`$. We assume that the collapse of a neutron star to a black hole in a binary system induces a highly anisotropic energy release along the existing jet: a violent twist and jerk of the magnetic field. It initiates a relativistic shock wave, with an initial bulk Lorentz factor of about $`10^4`$. Baryonic mass is known to be low in jets. The bulk Lorentz factor evolution derives from the sweep-up of the jet material. Magnetic field and particle number density evolution are obtained from the jump conditions in the ultrarelativistic shock. We consider a power-law electron energy distribution with a low-energy cut-off. Pre-existing energetic electrons/positrons are further accelerated in the shock. The afterglow emission is due to synchrotron and inverse Compton processes from the shock region. The fluence of the initial burst is determined by shock, dissipation, and $`\gamma `$-$`\gamma `$ optical depth effects. The emission region is optically thin very early on and always in the fast cooling regime. There are only two parameters for the explosion: the energy in bulk flow along the jet, $`E_{51}10^{51}\mathrm{erg}`$, and the fraction $`\delta `$ of shock energy in relativistic particles.
The parameters from the binary-system jet are: the mass flow $`\dot{M}10^{-5}M_{}/\mathrm{yr}`$, the speed of the unperturbed jet $`0.3v_{0.3}`$, as well as the minimum electron Lorentz factor $`100\gamma _{\mathrm{m},2}`$. With these parameters and a distance $`D_{28.5}10^{28.5}\mathrm{cm}`$, a time $`t_510^5\mathrm{s}`$, and a frequency $`\nu _{14}10^{14}\mathrm{Hz}`$, we obtain the correct flux level of the afterglow: $$F_\nu ^{(\mathrm{ob})}(t)\simeq 7.45\times 10^{-28}\delta (E_{51}^{5/4}\dot{M}_{-5\mathrm{j}}^{1/4}v_{0.3}^{1/4})\gamma _{\mathrm{m},2}D_{28.5}^{-2}t_5^{-5/4}\nu _{14}^{-1}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{Hz}^{-1}$$ (1) ## Contribution to cosmic ray and neutrino flux We calculated the GRB rate and compared the corresponding cumulative fluence distribution with the data. We used the SFR as a function of redshift presented by Madau mada96 , with a flatter SFR at high redshift, to obtain the corresponding fluence distribution of GRBs with redshift, and used their rate to study the possible contribution of GRBs to the cosmic ray distribution, both in our Galaxy and in the extragalactic region. We checked whether, in our jet model, GRBs are standard candles. The corrected data for the 4B BATSE catalogue fluence distribution petr99 require the adoption of a luminosity function with a power law $`\varphi (f)\propto f^{-1.55}`$. The result of our calculations is shown in Fig. 1(left), in which the theoretical fluence distribution curve is compared with the 4B corrected data. Considering the total number of GRBs in the BATSE catalogue, an observing time of 8 years, a volume scale of $`h^{-3}10^{10.8}\mathrm{Mpc}^3`$, with $`H_0=h(100\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1})`$ the Hubble constant, and a beaming factor $`\frac{4\pi }{2\pi \theta ^2}=200\theta _{-1\mathrm{j}}^{-2}`$, with $`\theta `$ the jet opening angle, the rate of GRBs is: $`10^{-5.4}(h^3\theta _{-1\mathrm{j}}^{-2})\mathrm{GRBs}\mathrm{per}\mathrm{year}\mathrm{per}\mathrm{\hspace{0.33em}100}\mathrm{Mpc}^3`$ (2) We used the GRB rate obtained with the SFR from Madau and two different approaches to calculate the contribution from GRBs to the cosmic ray and neutrino spectra. First we considered that each GRB gives the same contribution, equal to $`10\%`$ of the initial energy, here $`10^{51}\mathrm{ergs}`$. Secondly we assumed that each GRB contributes proportionally to its own fluence; the fluence distribution adopted follows a power law. In Fig. 1(right) we compared the all-particle energy spectrum as measured by different ground-based experiments with the spectrum from GRBs in the case that each of them gives the same contribution (dashed line) and with the one in which the contribution is proportional to the fluence (solid line) for the extragalactic case. In the jet-disk symbiosis model for GRBs any extragalactic origin of high-energy cosmic rays is ruled out, considering that for energies greater than $`10^{18}`$ eV (dotted line) the interactions with the microwave background are relevant and decrease the curve substantially. A corresponding analysis for the cosmic ray contribution from GRBs inside our Galaxy leads to the same result: Near $`10^{18}`$ eV the arrival directions of CRs are observed to be isotropic to an excellent approximation, and yet their diffusion time out of the Galaxy is much shorter than the time scale between GRBs in our Galaxy. Therefore the time for isotropization is not available, ruling out any contribution from GRBs.
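As a numerical check of the scaling in eq. (1), the flux can be evaluated at the fiducial parameter values; the helper below is our own sketch, and the signs of the exponents on the jet terms follow our reading of the (partly garbled) formula rather than an independent derivation:

```python
# Exponent signs on the jet terms (Mdot, v) are our reconstruction of the
# source and should be checked against the original paper.
def flux_nu(delta, E51=1.0, Mdot_m5j=1.0, v03=1.0, gamma_m2=1.0,
            D285=1.0, t5=1.0, nu14=1.0):
    """Observed afterglow flux of eq. (1), in erg cm^-2 s^-1 Hz^-1."""
    return (7.45e-28 * delta
            * E51**1.25 * Mdot_m5j**0.25 * v03**0.25 * gamma_m2
            * D285**-2 * t5**-1.25 * nu14**-1)

print(flux_nu(delta=0.1))   # ~7.5e-29 with 10% of the shock energy in particles
```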
## Conclusions To summarize, our model can explain the initial gamma ray burst, the spectrum and temporal behaviour of the afterglows, the low baryon load, and an optical rise, and does all this with a modest energy budget. Moreover, this GRB model is developed within an existing framework for galactic jet sources, using a set of observationally well determined parameters. Using a relatively small set of parameters, the jet-disk symbiosis model applied to GRBs, a tested SFR, and the fundamental physics of the photohadronic interactions, we arrive at the conclusion that GRBs are unlikely to give any contribution to the high energy cosmic ray spectrum, both inside and outside our Galaxy, or to the neutrino spectrum.
# Relativistic Gravity and Binary Radio Pulsars ## Introduction Not long after Einstein proposed his General Theory of Relativity, a variety of experimental tests to be done with solar system objects was suggested. These included the measurement of the perihelion advances of planets, the bending of light rays by the Sun, and radar echo delays from planets. However, such tests were limited by the fact that the effects to be measured were tiny perturbations on a classical description. They only verified the theory in the “weak-field” limit, akin to studying a function by only considering its Taylor expansion about zero. The “strong-field” regime, in which GR effects are more than a perturbation and a classical description is grossly violated, probably at first appeared inaccessible to Earth-bound observers. The discovery of the first binary pulsar, PSR B1913+16, by Hulse & Taylor (1975) radically changed this situation. This binary system, consisting of two neutron stars in an eccentric 8 hr binary orbit, has permitted precise tests of GR predictions for the first time in the strong-field regime Taylor & Weisberg (1982, 1989). Thus far, GR has passed all tests with flying colours. In this review, after an introduction to pulsars and pulsar timing, we present the most recent results of observations of PSR B1913+16, as well as of PSR B1534+12, the second discovered binary pulsar system suitable for sensitive GR studies. We also describe a search for new pulsars that is currently underway, and which promises to find more such objects. For previous excellent reviews of relativistic binary pulsars and their experimental constraints on strong-field relativistic gravity see Taylor et al. (1992) and Damour & Taylor (1992). ## Radio Pulsars: Some Background Pulsars are rotating, magnetized neutron stars. They exhibit beams of radio emission that can be observed, by a fortuitously located astronomer, as pulsations, once per rotation period. In the published literature there are 708 pulsars known (but see section “Parkes Multibeam Survey” below), all but a handful of which are in the Milky Way, the remainder being in the Magellanic Clouds. Known pulse periods range from a few seconds down to 1.5 ms. These pulse periods are observed to increase steadily, indicative of spin-down due to magnetic dipole radiation. From the observed pulse period and rate of spin-down of a pulsar, the magnitude of the dipole component of the stellar magnetic field, as well as an age estimate, can be deduced. See Lyne & Smith (1998) for a complete review of the properties of radio pulsars. For our purposes here, we need highlight only two properties of radio pulsars: the stabilities of the radio pulse profile and the stellar rotation. By “pulse profile” we mean the result of the addition of many (typically thousands) of individual pulses, by folding the sampled radio telescope power output modulo the apparent pulse period. Two examples of such pulse profiles are shown in Figure 1. Average profiles are observed to be stable in that the summation of any few thousand consecutive pulses always results in the same pulse profile for a given radio pulsar at a given observing frequency, even though individual pulse morphologies vary greatly. Currently there is no theory to explain this observation; in radio pulsar timing it is simply accepted as fact. Less surprising perhaps is the observed rotational stability. 
In a reference frame not accelerating with respect to the pulsar, the observed times of pulsations (or TOAs, for times-of-arrival) are generally predictable with high precision, given only the pulse period and spin-down rate. This, we argue, is less surprising than the profile stability because of the large stellar moment of inertia and absence of external torques, in strong contrast to accreting neutron stars whose rotation is much less stable \[e.g. Bildsten et al. 1997\]. ## Pulsar Timing The combination of pulse profile and rotational stability makes a radio pulsar useful as an extremely precise clock; in some cases the stability of the pulsar-clock is comparable to those of the world’s best atomic time standards \[e.g. Kaspi, Taylor & Ryba 1994\]. However, the realization of this stability can come only after effects extrinsic to the pulsar are accounted for. In particular, TOAs measured at an Earth-bound radio telescope must be transformed to a reference frame that is not accelerating with respect to the pulsar. For this purpose, the solar system barycentre reference frame is generally used. Standard pulsar timing thus consists of observing a pulsar at a radio telescope continuously over many cycles. The start time of the observations is recorded with high precision, and the sampled telescope power output is folded at the topocentric (i.e. apparent) pulse period. The resulting average pulse profile is cross-correlated with a high signal-to-noise template (e.g. Fig. 1) in order to determine the arrival time of the average pulse. That time is then transformed to the solar system barycentre. This transformation can be summarized by the expression $$t_{\mathrm{SSB}}=t_\mathrm{O}+\mathrm{\Delta }t_\mathrm{C}+\mathrm{\Delta }t_\mathrm{R}+\mathrm{\Delta }t_\mathrm{E}+\mathrm{\Delta }t_\mathrm{S}+\mathrm{\Delta }t_\mathrm{D},$$ (1) where $`t_{\mathrm{SSB}}`$ is the pulse arrival time at the solar system barycentre (typically in Barycentric Dynamical Time), $`t_\mathrm{O}`$ is the arrival time as observed at an Earth-bound radio telescope, $`\mathrm{\Delta }t_\mathrm{C}`$ is the difference between the observatory clock and a suitably stable atomic time standard (such as Terrestrial Dynamical Time), $`\mathrm{\Delta }t_\mathrm{R}`$ is the Roemer delay, or the difference in arrival time of a pulse at the solar system barycentre and at the observatory due to the geometric path length difference, $`\mathrm{\Delta }t_\mathrm{E}`$ is the Einstein delay due to (weak-field) GR effects in the solar system, and $`\mathrm{\Delta }t_\mathrm{S}`$ is the so-called “Shapiro delay,” which depends logarithmically on the impact parameter between the Earth-pulsar and Earth-Sun lines of sight. Note that $`\mathrm{\Delta }t_\mathrm{R}`$, $`\mathrm{\Delta }t_\mathrm{E}`$ and $`\mathrm{\Delta }t_\mathrm{S}`$ require precise knowledge of the sky coordinates of the pulsar; this is turned around so that if observations of the source are available over at least one year, the known motion of the Earth in its orbit permits the measurement of the pulsar’s coordinates with high precision. The last term, $`\mathrm{\Delta }t_\mathrm{D}`$, is an observing frequency-dependent term that accounts for the dispersion of radio waves in the ionized interstellar medium according to the cold plasma dispersion law. The delay term is proportional to DM$`/f^2`$, where DM is the dispersion measure, or integrated electron density along the line of sight, and $`f`$ is the observing frequency.
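The $`\mathrm{\Delta }t_\mathrm{D}`$ term is easy to evaluate explicitly; a small sketch using the standard cold-plasma dispersion constant (the constant is textbook pulsar astronomy, not a number taken from this review, and the band edges assumed in the example are illustrative):

```python
K_DM = 4.149e3   # s MHz^2 / (pc cm^-3), the usual cold-plasma dispersion constant

def dispersion_delay(dm, f_mhz):
    """Dispersive arrival delay (s) at frequency f_mhz for DM in pc cm^-3."""
    return K_DM * dm / f_mhz**2

# A DM = 300 pc cm^-3 pulsar observed near 1400 MHz (band edges illustrative):
print(dispersion_delay(300.0, 1374.0))                                    # ~0.66 s
print(dispersion_delay(300.0, 1230.0) - dispersion_delay(300.0, 1518.0))  # ~0.28 s in-band smearing
```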
The measured DM, together with a model for the distribution of free electrons in the Galaxy \[e.g. Taylor & Cordes 1993\], provides an estimate of the distance to a pulsar. Details of all the above terms can be found in various references \[e.g. Manchester & Taylor 1977\]. The above procedure for timing a pulsar of interest is repeated typically on a bi-weekly or monthly basis, so that the spin and astrometric parameters are improved in an iterative fashion: the squares of the residual differences between the initial model-predicted TOAs and the observed TOAs are minimized by varying, and hence improving, the model parameters. The transformation and subsequent determination of the five optimal spin and astrometric parameters (the period $`P`$, its rate of change $`\dot{P}`$, two sky coordinates and DM) are done using a publicly available software package, tempo, which consists of several thousand lines of Fortran code<sup>1</sup><sup>1</sup>1http://pulsar.princeton.edu/tempo/index.html. Note that by using TOAs, as opposed to measuring the pulse period at each observing epoch, the timing analysis is coherent in the sense that every rotation of the neutron star is accounted for. ## Timing Binary Pulsars If the pulsar is in a binary system, its motion about the binary centre of mass will cause regular delays and advances in observed TOAs just as the Earth’s motion around the Sun does.<sup>2</sup><sup>2</sup>2 Although most non-degenerate stars are in binary systems, most pulsars are isolated because supernova explosions usually disrupt binaries. See Bhattacharya & van den Heuvel (1991) for a review of the circumstances under which binary pulsars form. Classically, five additional parameters are required to describe and predict pulse arrival times for binary pulsars, in addition to the five spin and astrometric parameters. Conventionally the five Keplerian parameters are the orbital period $`P_b`$, the projected semi-major axis $`a\mathrm{sin}i`$, where $`i`$ is the inclination angle of the orbit, the orbital eccentricity $`e`$, the longitude of periastron $`\omega `$ measured from the line defined by the intersection of the plane of the orbit and the plane of the sky, and an epoch of periastron $`T_0`$. Only the projected semi-major axis is measurable, as pulsar timing is only sensitive to the radial component of the pulsar’s motion. Therefore, the component masses cannot be uniquely determined. Note that under certain circumstances, even in a classical system, the five Keplerian parameters may be insufficient to fully describe the orbit; for example, in the binary pulsar PSR J0045−7319, classical spin-orbit coupling induces post-Keplerian dynamical effects, a result of the quadrupole moment of the pulsar’s rapidly rotating B-star companion Lai, Bildsten, & Kaspi (1995); Kaspi et al. (1996). In some binary systems, particularly double neutron star binaries, relativistic effects must also be taken into account in order to model the binary orbit and hence observed TOAs properly. A list of the known double neutron star binaries is given in Table 1. The only non-classical post-Keplerian (PK) effects to have been measured in a binary pulsar system thus far are: the rate of periastron advance $`\dot{\omega }`$, the combined effects of relativistic Doppler shift and time dilation $`\gamma `$ (equivalent to the solar system Einstein delay – see Eq.
1), the rate of orbital decay $`\dot{P_b}`$, and $`r`$ and $`s`$, the two parameters describing the Shapiro delay, or the observed pulse time delay due to the bending of space-time near the pulsar companion, important for highly inclined orbits (equivalent to $`\mathrm{\Delta }t_\mathrm{S}`$ in Eq. 1). The relativistic post-Keplerian parameters measured in each of the known double neutron star binaries are given in Table 1. The systems for which tests of theories of relativistic gravity are possible are indicated by bold type: these are binaries for which $`N`$ post-Keplerian parameters are measurable, where $`N>2`$. These systems permit $`N-2`$ tests of gravity, as the first two parameters determine the masses of the two components. Overall, the suitability of a binary pulsar system for tests of GR or other theories of gravity is determined by a number of factors, including orbital period, orbital eccentricity, orbital inclination angle, the morphology of the pulse profile (narrower pulses permit higher measurement precision) and, of course, the pulsar’s radio flux. For example, PSR B2127+11C, though in a binary system that is superb for testing GR Prince et al. (1991), is faint (it was discovered in a deep search of the globular cluster M15) and has thus far not permitted any tests of GR. ## PSR B1913+16 The results of long-term timing observations of the relativistic binary pulsar PSR B1913+16 are well-known; indeed they have been distinguished with the 1993 Nobel Prize in Physics awarded to the discoverers Joseph Taylor and Russell Hulse. Detailed descriptions and reviews of the results and implications of those timing observations can be found in a variety of references Hulse & Taylor (1975); Taylor et al. (1976); Taylor & Weisberg (1982); Taylor (1987); Taylor & Weisberg (1989); Damour & Taylor (1991); Taylor et al. (1992); Damour & Taylor (1992); Taylor (1992, 1993). Here we briefly summarize the status of those observations, and discuss the recently reported evidence for geodetic precession in this system. ### Status of Timing Observations of PSR B1913+16 As reported by Taylor (1993), timing observations of the 59 ms PSR B1913+16 made at the 305 m radio telescope at Arecibo, Puerto Rico through 1993 (the Arecibo telescope became inoperable not long afterward in preparation for a major upgrade, which is nearly complete) have resulted in the determination of three post-Keplerian parameters: the rate of periastron advance $`\dot{\omega }=4.226621^{\circ }\pm 0.000011^{\circ }`$ per year, the combined time dilation and gravitational redshift $`\gamma =4.295\pm 0.002`$ ms, and the observed orbital period derivative $`\dot{P_b}=(-2.4225\pm 0.0066)\times 10^{-12}`$. The first two of these parameters determine the component masses to be $`1.4411\pm 0.0007`$ $`M_{}`$ and $`1.3874\pm 0.0007`$ $`M_{}`$. The third post-Keplerian parameter, $`\dot{P_b}`$, in principle allows for one test of GR (or other theory of gravity). However, the observed value of $`\dot{P_b}`$ must first be corrected for the effect of acceleration in the Galactic potential. This correction follows from the simple first-order Doppler effect, where $`P_b^{obs}/P_b^{int}=1+v_R/c`$, where $`P_b^{obs}`$ and $`P_b^{int}`$ are the observed and intrinsic values, and $`v_R`$ is the radial velocity of the pulsar relative to the solar system barycentre.
A changing $`v_R`$ leads to a Galactic term $$\left(\frac{\dot{P_b}}{P_b}\right)=\frac{a_R}{c}+\frac{v_T^2}{cd},$$ (2) where $`a_R`$ is the radial component of the acceleration, $`v_T`$ is the transverse velocity, and $`d`$ is the distance to the pulsar. The second term in this equation is the familiar transverse Doppler or “train-whistle” effect. The best-estimate correction factor for PSR B1913+16, given its only approximately known location in the Galaxy, is $`(-0.0124\pm 0.0064)\times 10^{-12}`$ Damour & Taylor (1991); Taylor (1992). With this correction applied to $`\dot{P_b^{obs}}`$, the comparison with the GR prediction can be made; the result Taylor (1992) is that $$\frac{\dot{P_b^{obs}}}{\dot{P_b}^{GR}}=1.0032\pm 0.0035.$$ (3) Note that the uncertainty in this expression is dominated by the uncertainty in the Galactic acceleration term. Since $`a_R`$ and $`d`$ are unlikely to be known with much greater precision than is currently available, this particular test of GR will probably not improve much in the near future. Additional tests of GR may still be possible with the PSR B1913+16 system if the parameters $`r`$ and $`s`$ can be measured. This may be possible given the recent major upgrade to the Arecibo telescope, as higher timing precision should now be available. ### PSR B1913+16 and Geodetic Precession Relativistic geodetic precession, the gravitational analogue of Thomas precession (the origin of fine structure in atomic spectra), is predicted to result in a changing orientation of the pulsar spin axis. As the pulsar precesses, our line of sight should intersect different parts of the radio emission beam. Thus, the average pulse profile could vary significantly over time. The first evidence for this in the PSR B1913+16 system was presented by Weisberg, Romani & Taylor (1989) \[but see also Cordes, Wasserman & Blaskiewicz 1990\]. They reported a gradual, secular evolution in the ratio of the amplitudes of the two pulse peaks (see Fig. 1). Recently, Kramer (1998) has clearly demonstrated that this trend continues. Figure 2 shows the ratio of the amplitudes of the two pulse components as a function of time; the variation is striking. If the emission results from a cone of radiation, then a secular change in the separation of the two peaks ought to be observed as well; strong evidence for this is also now seen Kramer (1998). Quantitative modeling of this variation depends on the unknown beam morphology. Under the assumption of a hollow, circular emission beam, if GR is correct, Kramer shows that the pulsar, sadly, will no longer grace the skies of our Earth after the year 2025. Happily however, it should reappear around the year 2220. The exact dates of disappearance, together with the form of the secular variation in average pulse morphology, will permit the first direct observation and study of the morphology of a radio pulsar emission beam. ## PSR B1534+12 The binary pulsar PSR B1534+12 was discovered by Wolszczan (1991) using the Arecibo telescope. This 38 ms pulsar is in a 10 hr eccentric orbit with a second neutron star (see Table 1). PSR B1534+12 offers the hope of additional and more precise tests of GR for a number of reasons: first, the narrower pulse profile of PSR B1534+12 (Fig. 1) means higher timing precision. Second, the orbital plane of this system is more inclined than that of PSR B1913+16, which facilitates the measurements of two additional relativistic parameters $`r`$ and $`s`$.
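For reference, the GR prediction $`\dot{P_b}^{GR}`$ entering the eq. (3) test above can be computed directly from the measured masses and Keplerian elements; the sketch below uses the standard quadrupole formula (textbook GR, Peters 1964), not an excerpt from the work discussed here:

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m s^-1
M_SUN = 1.989e30     # kg

def pbdot_gr(pb_s, e, m1_sun, m2_sun):
    """GR quadrupole prediction for the orbital period derivative (dimensionless)."""
    m1, m2 = m1_sun * M_SUN, m2_sun * M_SUN
    enh = 1 + (73/24)*e**2 + (37/96)*e**4        # eccentricity enhancement factor
    return (-(192*math.pi/5) * (G**(5/3)/c**5)
            * (pb_s/(2*math.pi))**(-5/3) * (1 - e**2)**(-7/2)
            * enh * m1*m2 * (m1 + m2)**(-1/3))

# PSR B1913+16: P_b ~ 7.75 hr, e ~ 0.617, masses from omega-dot and gamma;
# returns ~ -2.40e-12, the size of the corrected observed value via eq. (3)
print(pbdot_gr(27907.0, 0.617, 1.4411, 1.3874))
```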
Thus, in principle, five relativistic post-Keplerian parameters are measurable with high precision for PSR B1534+12, which allows two new additional tests of GR that have not been done for PSR B1913+16. This is particularly important for testing alternative theories of gravity, as it permits the separation of the radiative and strong-field components of the theory. This cannot be accomplished in the simple $`\dot{\omega }`$-$`\gamma `$-$`\dot{P_b}`$ test, as it mixes radiative and non-radiative effects \[see Damour & Taylor 1992 for details\]. Stairs et al. (1998) report on seven years of timing observations of PSR B1534+12 made at Arecibo, at the 43 m dish at Green Bank, as well as at the 76 m Lovell radio telescope at Jodrell Bank. As expected, they measure the five post-Keplerian relativistic parameters $`\dot{\omega },\gamma ,\dot{P_b},r`$ and $`s`$. The results are nicely summarized in Figure 3 (Fig. 4 in Stairs et al. 1998), where the component masses are plotted on the axes. As each of the five post-Keplerian parameters has a different dependence on the masses, each parameter defines a curve in this plane. If GR holds, then the five curves, as calculated in GR, should meet at a single point. As can be seen in Figure 3, the curves for $`\dot{\omega },\gamma `$ and $`s`$ agree to better than 1% (though that for $`r`$ is not yet precise enough to be very constraining). Surprisingly, their intersection implies that the pulsar and companion have exactly equal masses within uncertainties, $`1.339\pm 0.003`$ $`M_{}`$. As is clear in Figure 3, the curve for $`\dot{P_b}`$ just misses this intersection point. Note, however, that the value of $`\dot{P_b}`$ used to produce the curve in Figure 3 included a correction for Galactic acceleration (Eq. 2) that assumed a distance of 0.7 kpc to the pulsar, from its observed DM and the best model for the free electron distribution Taylor & Cordes (1993). The model is known to be only approximate, with uncertainties on the inferred distance for any one source optimistically 25%, and realistically considerably larger. Stairs et al. therefore argue that the discrepancy seen in Figure 3 can be removed by simply invoking a larger distance to the pulsar, 1.1 kpc. Put differently, by assuming GR is correct, the distance to this relativistic binary pulsar can be determined with greater precision than is otherwise available Bell & Bailes (1996). This demonstrates that the measurement of an improved $`\dot{P_b}`$ for PSR B1534+12 is unlikely to offer a useful test of GR unless the distance to the source can be determined independently (for example, via a timing or interferometric parallax measurement). However, the expected improved determination of the $`r`$ parameter, following the Arecibo upgrade, could yield a useful test in addition to that from $`\dot{\omega }`$-$`\gamma `$-$`s`$. The improved distance determination to PSR B1534+12 made by Stairs et al. (1998) has implications for estimates of the coalescence rate of double neutron star binaries. A larger distance implies a more intrinsically luminous pulsar, which in turn implies that there are fewer in the Galaxy, as otherwise more would be detected. Stairs et al. suggest that the expected rate must be reduced relative to previous estimates Phinney (1991); Curran & Lorimer (1995); van den Heuvel & Lorimer (1996) by factors of 2.5–20. This rate is of considerable interest to the builders of gravitational wave detectors like LIGO (see paper by P. Saulson, this volume).
Of course, rates that vary greatly depending on the estimated distance to a single object should be regarded as crude estimates only. ## Finding More Relativistic Binaries: The Parkes Multibeam Survey A major survey of the Galactic Plane for radio pulsars is currently underway. This survey offers the hope of finding new examples of relativistic binary pulsars suitable for studying GR effects. The observations are being done using the Parkes 64 m radio telescope in Australia Lyne et al. (1999). The survey is planned to cover the inner Galactic Plane, in the Galactic longitude range $`260^{\circ }<l<50^{\circ }`$ and Galactic latitude range $`|b|<5^{\circ }`$. The search is being carried out at radio frequencies near 1400 MHz and has roughly seven times the sensitivity of previous 1400 MHz surveys of the Galactic Plane Clifton & Lyne (1986); Johnston et al. (1992), owing mainly to the longer integration time permitted by the use of the new multibeam receiver at Parkes. This new instrument consists of 13 independent, non-overlapping receivers in the telescope focal plane. This allows the Galaxy to be surveyed to much greater depth than was previously possible, without using a prohibitive amount of telescope time. Each beam pointing consists of a 35 min integration, with a total of 288 MHz bandwidth 1-bit sampled every 250 $`\mu `$s. Some 35,000 beams will be observed, and the data for each will be subject to a Fast Fourier Transform of $`2^{23}`$ points. The project is thus computer-resource intensive. With approximately half of the survey complete, 405 previously unknown radio pulsars have been discovered, making this by far the most successful pulsar survey ever. Among the first sources found in the survey is the very likely double neutron star binary PSR J1811−1736 (see Table 1) Lyne et al. (1999). Although this system is unlikely to be useful for tests of GR, its early discovery in the survey suggests there are many more such systems to be found. Indeed, not long after the conference for which these proceedings are a record, a third relativistic binary pulsar suitable for tests of GR was discovered among the new Parkes Multibeam sources. Detailed observations of this exciting source are just getting underway as this paper is being written. ## Conclusions The now famous technique of timing relativistic binary pulsars has yielded confirmation that GR is the correct theory of gravity at better than the 1% level. Future additional tests of GR, using the only two known sources well-suited to such tests, PSR B1534+12 and PSR B1913+16, are possible, from improved measurements of the Shapiro delay $`r`$ and $`s`$ parameters. The precision in the $`\dot{\omega }`$-$`\gamma `$-$`\dot{P_b}`$ test is limited by the uncertainty in our estimates for the Galactic acceleration of these objects. However, under the now justified assumption that GR is correct, observations of relativistic binary pulsars can yield unique astrophysical measurements that have never before been possible, including precise determination of neutron star masses, distances to these sources, LIGO source rates, and morphological studies of the pulsar radio emission beam. The ongoing Parkes Multibeam survey of the Galactic Plane promises (and indeed has already begun) to discover new examples of these fascinating objects. VMK is an Alfred P. Sloan Research Fellow.
She thanks Michael Kramer and Ingrid Stairs for sharing their figures, and the organizers of the 8th Canadian Conference on General Relativity and Relativistic Astrophysics for their hospitality and patience.
## I INTRODUCTION The relaxation and the transport properties of molecular liquids depend on both their translational and rotational motion. Since their mutual interplay cannot be neglected, both dynamical aspects must be jointly considered. If the liquids are supercooled or supercompressed, the overwhelming difficulties of molecular rearrangement are expected to enhance the role of the rotational degrees of freedom and, more particularly, of the rotational-translational coupling, as noticed by experiments , theory and numerical studies and discussed in a recent topical meeting . Molecular-dynamics ( MD ) numerical studies provided considerable insight in supercooled liquids during the last years . However, the question of rotational dynamics seems to have been partially overlooked, since most studies dealt with atomic systems where rotational dynamics is missing. Notable exceptions addressed the issue in model systems of disordered dipolar lattices , diatomic molecules and well studied glassformers, e.g. CKN , OTP and methanol . The case of supercooled water was investigated in detail . Studies of plastic crystals and orientational glasses ( i.e. with no translation allowed ) are also known . We have recently presented numerical results on the translational motion of a supercooled molecular model liquid ( hereafter referred to as I ). The present paper wishes to complement I by extending the analysis to rotational degrees of freedom. As in I, one issue is the detection and characterization of jump dynamics. It is found that rotational jumps are fairly more frequent than translational ones in the present system. This makes their study easier. The occurrence of jumps poses the question of the coupling of the molecular reorientation with the shear viscous flow. This is the second issue addressed in the paper. Jump dynamics may take place in the absence of any shear flow. Nonetheless, shear motion may favour jumps over energy barriers . The question is of relevance in that the experimental situation is rather controversial. For macroscopic bodies hydrodynamics predicts that the reorientation is strongly coupled to the viscosity $`\eta `$ according to the Debye-Stokes-Einstein law ( DSE ), $`\tau ,D_r^{-1}\propto \eta `$, where $`\tau `$ and $`D_r`$ are the rotational correlation time and diffusion coefficient, respectively . DSE is quite robust. In fact, the coupling of the reorientation to the viscosity is usually found even at a molecular level if the viscosity is smaller than about $`1-10\mathrm{Poise}`$. At higher values DSE overestimates the correlation times of tracers in supercooled liquids according to time-resolved fluorescence and Electron Spin Resonance ( ESR ) studies . On the other hand, photobleaching and NMR studies found only small deviations from DSE even close to $`T_g`$. Interestingly, in the region where the tracer reorientation decouples from the viscosity, an ESR study evidenced that it occurs by jump motion . The paper is organized as follows. In section II details are given on the model and the simulations. In Sec. III and IV the results are discussed and the conclusions are summarized, respectively. ## II MODEL AND DETAILS OF SIMULATION The system under study is a model molecular liquid of rigid dumbbells . The atoms A and B of each molecule have mass $`m`$ and are spaced by $`d`$.
Atoms on different molecules interact via the Lennard-Jones potential: $$V_{\alpha \beta }(r)=4ϵ_{\alpha \beta }\left[(\sigma _{\alpha \beta }/r)^{12}-(\sigma _{\alpha \beta }/r)^6\right],\alpha ,\beta \in \{A,B\}$$ (1) The potential was cut off and shifted at $`r_{cutoff}=2.49\sigma _{AA}`$. Henceforth, reduced units will be used. Lengths are in units of $`\sigma _{AA}`$, energies in units of $`ϵ_{AA}`$ and masses in units of $`m`$. The time unit is $`\left(\frac{m\sigma _{AA}^2}{ϵ_{AA}}\right)^{1/2}`$, corresponding to about $`2ps`$ for the Argon atom. The pressure $`P`$, temperature $`T`$ and shear viscosity $`\eta `$ are in units of $`ϵ_{AA}/\sigma _{AA}^3`$ , $`ϵ_{AA}/k_B`$ and $`\sqrt{mϵ_{AA}}/\sigma _{AA}^2`$, respectively. The model parameters in reduced units are: $`\sigma _{AA}=\sigma _{AB}=1.0`$, $`\sigma _{BB}=0.95`$, $`ϵ_{AA}=ϵ_{AB}=1.0`$, $`ϵ_{BB}=0.95`$, $`d=0.5`$, $`m_A=m_B=m=1.0`$. The $`\sigma _{AA}`$ and $`\sigma _{BB}`$ values were chosen to avoid crystallization. The sample has $`N=N_{at}/2=1000`$ molecules which are accommodated in a cubic box with periodic boundary conditions. Further details on the simulations may be found in I. We examined the isobar at $`P=1.5`$ by equilibrating the sample under isothermal-isobaric conditions and then collecting the data by a production run in microcanonical conditions. The temperatures we investigated are $`T=6,5,3,2,1.4,1.1,0.85,0.70,0.632,0.588,0.549,0.52,0.5`$. ## III RESULTS AND DISCUSSION This section will discuss the results of the study. We characterize the correlation losses of the system by investigating several rotational correlation functions. Then, the related correlation times and transport coefficients will be presented. The presence of rotational jumps will be evidenced and their waiting-time distribution will be discussed. Finally, the decoupling of the transport and the relaxation from the viscous flow will be presented. ### A Correlation Functions The rotational correlation loss is conveniently presented by suitable correlation functions. We study the dynamics of both the orientation and the angular velocity of the dumbbell. #### 1 Orientation The rotational correlation functions are defined as: $$C_l(t)=\frac{1}{N}\underset{i=1}{\overset{N}{\sum }}\langle P_l(𝐮_i(t)\cdot 𝐮_i(0))\rangle $$ (2) $`𝐮_i(t)`$ is the unit vector parallel to the axis of the molecule $`i`$ at time $`t`$ and $`P_l(x)`$ the Legendre polynomial of order $`l`$. It is worth noting that $`C_1`$ and $`C_2`$ are accessible to several experimental techniques, e.g. dielectric spectroscopy, NMR, ESR, light and neutron scattering. Fig. 1 shows $`C_1`$ and $`C_2`$. At high temperature and short times damped oscillations are present. They are typical features of free rotators in gas-like systems . At lower temperatures and intermediate times $`C_1`$ and $`C_2`$ exhibit a wide plateau which evidences the increased angular trapping. At longer times the decay is fairly well described by the stretched exponential $`\mathrm{exp}[-(t/\tau )^\beta ]`$ with $`\tau =62.9`$, $`\beta =0.70`$ for $`l=1`$ and $`\tau =96.1`$, $`\beta =0.60`$ for $`l=2`$ at $`T=0.5`$. Even if $`C_1`$ and $`C_2`$ are quite similar, two differences must be noted. First, the plateau is lower at $`l=2`$ than at $`l=1`$. Second, at lower temperatures $`C_2`$ vanishes at longer times than $`C_1`$. The first feature is understood by noting that the oscillatory character of $`P_l(𝐮_i(t)\cdot 𝐮_i(0))`$ with respect to the angle between $`𝐮_i(t)`$ and $`𝐮_i(0)`$ increases with $`l`$.
Then, by increasing $`l`$, even random angular changes with small amplitude occurring at short times affect the decay of $`C_l`$. The second feature is due to the fact that, as it will be shown later, molecules undergo frequent $`180^{\circ }`$ flips at lower temperatures. Due to the nearly head-tail symmetry, the flips reverse the sign of $`P_l(𝐮_i(t)\cdot 𝐮_i(0))`$ if $`l`$ is odd whereas no change takes place if $`l`$ is even. Then, they mainly affect the decay of $`C_l`$ with odd $`l`$ values. Fig. 2 plots the functions $`C_3`$ and $`C_4`$. The discussion is similar to the case $`l=1,2`$. We note that, as expected, the plateau decreases by increasing $`l`$. On the basis of the above discussion $`C_3`$ should vanish before $`C_4`$. However, on increasing $`l`$, the effect of the head-tail symmetry is partially masked by the increased oscillatory character of the Legendre polynomials, which yields a larger sensitivity to small-angle reorientations. Similarly to $`C_{1,2}`$, at longer times the decay is fairly well described by the stretched exponential $`\mathrm{exp}[-(t/\tau )^\beta ]`$ with $`\tau =32.4`$, $`\beta =0.60`$ for $`l=3`$ and $`\tau =27.9`$, $`\beta =0.47`$ for $`l=4`$ at $`T=0.5`$. We notice that the stretching parameter decreases with increasing $`l`$. Fig. 3 compares the four correlation functions $`C_l`$ with $`l=1-4`$ at $`T=0.5`$. Both the larger correlation loss at short times at larger $`l`$ values and the odd-even effect on the long-time decay of the correlations are evidenced. #### 2 Angular velocity For a linear molecule the angular velocity is: $$\omega =𝐮\times \dot{𝐮}$$ (3) A set of correlation functions is defined as: $$\mathrm{\Psi }_l(t)=\frac{1}{N}\underset{i=1}{\overset{N}{\sum }}\langle P_l(\mathrm{cos}\alpha _i(t))\rangle ,l\ge 1,$$ (4) $`\alpha _i(t)`$ is the angle between $`\omega _i(t)`$ and $`\omega _i(0)`$. In particular, for $`l=1`$ one has: $$\mathrm{\Psi }_1=\frac{\langle \omega (0)\cdot \omega (t)\rangle }{\langle |\omega |^2\rangle }$$ (5) which is the usual correlation function of the angular velocity. In fig. 4 $`\mathrm{\Psi }_1`$ and $`\mathrm{\Psi }_2`$ are drawn for all the temperatures investigated. $`\mathrm{\Psi }_1`$ decays fast, and increasing $`T`$ slows down the decay ( note the difference with the orientation case ). In particular, in the free-rotator limit $`\mathrm{\Psi }_1`$ is a constant. At lower temperatures $`\mathrm{\Psi }_1`$ shows a negative part at short times which evidences a change of sign of $`\omega `$. This must be ascribed to the collisions with the rigid cage trapping the molecule ( see figs. 1 and 2 ). An analogous effect was also noted for the linear velocity correlation function in I. More insight on the rotational trapping may be gained by inspecting $`\mathrm{\Psi }_2`$ in fig.4. No significant differences between $`\mathrm{\Psi }_2`$ and $`\mathrm{\Psi }_1`$ are seen at high temperature. At lower temperatures, after a similar ballistic initial decay of $`\mathrm{\Psi }_1`$ and $`\mathrm{\Psi }_2`$, the latter slows down when $`\mathrm{\Psi }_2\simeq 0.25`$. The long-living tail which shows up is interpreted by noting that at lower temperatures, after the ballistic regime, the angular velocity is approximately trapped in a circle ( see eq.3 ). An elementary calculation shows that $`\mathrm{\Psi }_1(t)=0`$ whereas $`\mathrm{\Psi }_2(t)=0.25`$ as long as the trapping is effective . When the molecular rearrangement allows the orientation relaxation, the angular velocity tends to be distributed over a sphere and $`\mathrm{\Psi }_2`$ vanishes approximately as $`C_2`$.
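In practice $`C_l(t)`$ ( and, with the angular velocities in place of the axes, $`\mathrm{\Psi }_l(t)`$ ) can be estimated directly from stored configurations; the following is a sketch of eq. 2 with an invented array layout and a toy trajectory, not the simulation code actually used here:

```python
import numpy as np

def legendre(l, x):
    """P_l for l = 0..4, enough for C_1..C_4."""
    p = [np.ones_like(x), x, (3*x**2 - 1)/2,
         (5*x**3 - 3*x)/2, (35*x**4 - 30*x**2 + 3)/8]
    return p[l]

def C_l(u, l, dt):
    """Eq. (2): average of P_l(u_i(t0+dt) . u_i(t0)) over molecules and time origins.
    Assumed layout: u[t, i, :] is the unit axis vector of molecule i at frame t."""
    if dt == 0:
        return 1.0
    cos_theta = np.sum(u[dt:] * u[:-dt], axis=-1)   # dot products for all (t0, i)
    return legendre(l, cos_theta).mean()

# toy trajectory: 200 frames of 50 unit vectors doing small random rotations
rng = np.random.default_rng(0)
u0 = rng.normal(size=(50, 3))
frames = [u0 / np.linalg.norm(u0, axis=-1, keepdims=True)]
for _ in range(199):
    step = frames[-1] + 0.05 * rng.normal(size=(50, 3))
    frames.append(step / np.linalg.norm(step, axis=-1, keepdims=True))
u = np.array(frames)

print([round(C_l(u, l, 50), 3) for l in (1, 2, 3, 4)])   # decay is faster for larger l
```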
### B Diffusion coefficient and relaxation times The rotational diffusion coefficient of a linear molecule may be defined by a suitable Green-Kubo formula in close analogy to the translational counterpart as : $$D_r=\frac{1}{2}\int _0^{\mathrm{\infty }}\langle \omega (0)\cdot \omega (t)\rangle dt$$ (6) From a computational point of view the evaluation of the above integral is delicate and it is more convenient to evaluate $`D_r`$ via the Einstein relation $$D_r=\underset{t\to \mathrm{\infty }}{lim}\frac{R_r}{4t}$$ (7) $`R_r`$ is the mean squared angular displacement : $$R_r(t)=\frac{1}{N}\underset{i=1}{\overset{N}{\sum }}\langle |\varphi _i(t+t_0)-\varphi _i(t_0)|^2\rangle $$ (8) where $`\varphi _i(t)`$ is : $$\varphi _i(t)-\varphi _i(0)=\mathrm{\Delta }\varphi _i(t)=\int _0^t\omega _i(t^{\prime })dt^{\prime }$$ (9) In Fig. 5 $`R_r(t)`$ is shown. The plots are qualitatively similar to those of the mean squared translational displacement ( see I ). At short time the motion is ballistic. At intermediate times and lower temperatures a plateau shows up. It signals the increasing trapping of the molecular orientation due to the severe constraints on the structure relaxation. At longer times the reorientation is diffusive according to eq.7. By comparing $`R_r(t)`$ with the translational mean square displacement ( see fig.3 of I ) it is seen that the angular trapping is weaker than the one affecting the center-of-mass motion, since the subdiffusive intermediate regime is less pronounced and extends less on the time scale. The rotational correlation times are defined as : $$\tau _l=\int _0^{\mathrm{\infty }}C_l(t)dt$$ (10) Fig. 6 presents the T-dependence of $`\tau _l`$, $`l=1-4`$, and $`D_r`$. It is seen that a wide region exists where the above quantities exhibit approximately the same Arrhenius behavior ( about $`0.7<T<2`$ ). At lower temperatures the apparent activation energies of the rotational correlation times increase. In particular, as noted in Sec.III A 1, $`\tau _1`$ becomes shorter than $`\tau _2`$ and a similar crossover is anticipated between $`\tau _3`$ and $`\tau _4`$ at temperatures just below $`0.5`$. Differently, the rotational diffusion coefficient $`D_r`$ exhibits the same activated behavior over a region which was shown to extend also below the critical temperature $`T_c`$ predicted by the mode-coupling theory ( MCT ) . The decoupling of $`D_r`$ with respect to $`\tau _l`$ may be anticipated by noting that the former is related to the area below $`\mathrm{\Psi }_1(t)`$ ( eq.6 ) and the latter to the area below $`C_l(t)`$ ( eq.10 ). At lower temperatures $`\mathrm{\Psi }_1(t)`$ vanishes faster, so probing the fast dynamics of the supercooled liquid, whereas the decay of $`C_l(t)`$ slows down more and more ( see Sec. III A ). Alternatively, it has to be noted that even in highly-constrained liquids small angular motions which are unable to relax the orientation lead to a finite value of $`D_r`$ in view of eq. 7 . Such librational motions were detected in an MD study of OTP . Fig. 6 shows also the MCT analysis of the T-dependence of the correlation times and the rotational diffusion . According to MCT, both $`\tau _l`$ and $`D_r`$ should scale as $$\tau _l,D_r^{-1}\propto (T-T_c)^{-\gamma }$$ (11) The underlying expectation on the scaling 11 is that it should work with the same $`T_c`$ value for any transport coefficient and relaxation time. Otherwise, the physical meaning of $`T_c`$ would be weakened.
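The scaling analysis of eq. 11 amounts to a three-parameter fit; a hypothetical illustration on synthetic data ( the temperatures are those simulated here, while the relaxation times are generated from invented parameters, purely for the demonstration ):

```python
import numpy as np
from scipy.optimize import curve_fit

def mct_law(T, A, Tc, gamma):
    """Eq. (11): tau (or 1/D_r) ~ A * (T - Tc)^(-gamma)."""
    return A * (T - Tc) ** (-gamma)

T = np.array([0.50, 0.52, 0.549, 0.588, 0.632, 0.70, 0.85])
rng = np.random.default_rng(1)
tau = mct_law(T, 0.2, 0.458, 1.47) * rng.lognormal(0.0, 0.02, T.size)  # fake data

popt, _ = curve_fit(mct_law, T, tau, p0=(0.1, 0.40, 1.5),
                    bounds=([0.0, 0.3, 0.5], [10.0, 0.499, 3.0]))  # keep Tc < min(T)
print("A = %.2f  Tc = %.3f  gamma = %.2f" % tuple(popt))  # recovers ~0.2, 0.458, 1.47
```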
In I it was shown that eq.11 fits the divergence of the translational diffusion coefficient $`D`$ over four orders of magnitude with $`T_c=0.458\pm 0.002`$ and $`\gamma _D=1.93\pm 0.02`$ ( the data are partially shown in fig. 6 ) and that at lower temperatures the primary relaxation time $`\tau _\alpha \propto D^{-1}`$. Fig. 6 shows that the scaling 11 is also effective for $`\tau _1`$ with $`\gamma =1.47\pm 0.01`$ ( see also fig.7 ). However, meaningful deviations are apparent for $`l\ge 2`$ and $`D_r`$. The deviations increase as $`l`$ increases. Since the rotational correlation functions $`C_l`$ are more and more sensitive to small-amplitude reorientations and thus to small molecular displacements, the poorer scaling may be a consequence of the difficulties which mode-coupling theories meet at short distances . To characterize the reorientation process, we studied the quantity $`l(l+1)D_r\tau _l`$ and the ratio $`l(l+1)\tau _l/2\tau _1`$. If the molecule rotates by small angular jumps, or equivalently if the waiting time in a single angular site is fairly shorter than the correlation time $`\tau _l`$, the motion is said to be diffusive and both quantities are equal to $`1`$ for any $`l`$ value . The results are shown in fig.8. Three regions may be broadly defined. For $`T>2`$ the properties are gas-like and the rotational correlation times become fairly long. For $`0.7<T<2`$, $`l(l+1)D_r\tau _l`$ and $`l(l+1)\tau _l/2\tau _1`$ do not change appreciably and are in the range $`1-2`$. For $`T<0.7`$ the above quantities diverge abruptly. The rapid increase in the deeply supercooled regime demonstrates the failure of the diffusion model, which in fact is expected to work only in liquids with moderate viscosity or if the reorienting molecule is quite large. If the assumption of small angular jumps is relaxed and proper account of finite jumps with a single average waiting time in each angular site is made, the so-called jump-rotation model is derived . The main conclusion is that $`\tau _l`$ is roughly independent of $`l`$. In fact, for $`l=2`$ it is found that the quantity $`l(l+1)\tau _l/2\tau _1\simeq 3.5`$ at $`T=0.5`$ ( see fig.8 ) and the jump-rotation model predicts a value of about $`3`$. However, at higher $`l`$ values the comparison becomes much less favourable. For $`l=3`$, $`l(l+1)\tau _l/2\tau _1\simeq 1.9`$ at $`T=0.5`$, whereas the prediction is about $`6`$. For $`l=4`$, $`l(l+1)\tau _l/2\tau _1\simeq 2.75`$, to be compared to the predicted value of about $`10`$. The failure of the usual simple rotational models is not unexpected. Their basic assumptions are rather questionable in supercooled liquids, e.g. the inherent homogeneity of the liquid and the presence of a single time scale, both leading to the simple exponential decay of the rotational correlation functions. ### C Jump rotation The inadequate description provided by the diffusion and the jump models calls for a further characterization of the rotational motion. To this aim we consider the self-part of the angular Van Hove function: $$G_s^\theta (\theta ,t)=\frac{2}{N\mathrm{sin}\theta }\underset{i=1}{\overset{N}{\sum }}\langle \delta (\theta -\theta _i(t))\rangle $$ (12) $`\theta _i(t)`$ is the angle between the molecular axis of the i-th molecule at the initial time and at time $`t`$ . $`1/2G_s^\theta (\theta ,t)\mathrm{sin}\theta d\theta `$ is the probability to have the axis of a molecule at an angle between $`\theta `$ and $`\theta +d\theta `$ at time $`t`$ with respect to the initial orientation. At long times $`G_s^\theta (\theta ,t)\to 1`$ since all the orientations are equiprobable. In Fig.
9 the function $`G_s^\theta `$ is plotted for different temperatures and several times. At higher temperatures, as time goes by, the molecule explores more and more angular sites in a continuous way. Instead, at lower temperatures $`G_s^\theta `$ exhibits a peak at $`\theta \simeq 180^{\circ }`$ at intermediate times, signaling that the reorientation has a meaningful probability to occur by jumps. The indications provided by the Van Hove function concerning the presence of rotational jumps are confirmed by directly inspecting the single-particle trajectories ( fig. 10 ). Similar findings were reported also in other studies on dumbbells and CKN glassformers . With respect to the translational counterparts, it must be pointed out that rotational jumps are quite faster ( see I ) and more frequent ( by about one order of magnitude ). The higher number of rotational jumps is also anticipated by noting that, differently from the translational Van Hove function, the rotational one does exhibit explicit signatures of jump motion ( see I ). To characterize the jumps we studied the distribution $`\psi _{rot}(t)`$ of the waiting time, namely the residence time in one angular site of the unit vector $`𝐮_i`$ parallel to the axis of the i-th molecule. A jump of the i-th molecule is detected at $`t_0`$ if the angle between $`𝐮_i(t_0)`$ and $`𝐮_i(t_0+\mathrm{\Delta }t^{\prime })`$ is larger than $`100^{\circ }`$, with $`\mathrm{\Delta }t^{\prime }=24`$. To prevent multiple countings of the same jump, the molecule which jumped at time $`t`$ is forgotten for a lapse of time $`\mathrm{\Delta }t^{\prime }`$. To minimize possible contributions due to fast rattling motion, each angular displacement is averaged with the previous and the next ones, these being spaced typically by $`6-8`$ time units, depending on the temperature. The jump-search procedure was validated by inspecting several single-molecule trajectories. The above definition of rotational jump fits well their general features, i.e. they are rather fast and exhibit no meaningful distribution of either the amplitude or the time needed to complete a jump ( see fig.10 ). It is worth noting that in I it was found that the time needed to complete the translational jumps exhibits a distribution. The absence of a similar distribution for the rotational jumps points to a larger freedom of the latter. Fig.11 shows $`\psi _{rot}(t)`$ at different temperatures. At $`T=0.632`$ it is virtually exponential. At lower temperatures deviations become apparent, which are analyzed for $`T=0.5`$ in fig. 12. In I it was noted that the translational waiting-time distribution may be fitted nicely by the truncated power law: $$\psi (t)=\left[\mathrm{\Gamma }(\xi )\tau ^\xi \right]^{-1}t^{\xi -1}e^{-t/\tau },0<\xi \le 1$$ (13) The best fit provided by eq.13 is compared to the fits obtained by using the stretched ( $`\mathrm{exp}[-(t/\tau )^\beta ]`$ ) and the usual exponential functions in fig. 12. The better agreement of eq.13 at short times may be appreciated by looking at the residuals. The fractal behavior of both the translational and the rotational waiting-time distributions is an indication that the molecular motion at short times exhibits intermittent behavior. The issue in the framework of glasses has been addressed by several authors . The exponent $`\xi `$ of eq.13 has a simple interpretation. If a dot on the time axis marks a jump, the fractal dimension of the set of dots is $`\xi `$. For $`\xi <1`$, it follows that $`\psi (t)\propto t^{\xi -1}`$ at short times, in agreement with our results.
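It may be noted that eq.13 is precisely a Gamma density of shape $`\xi `$ and scale $`\tau `$, so standard maximum-likelihood routines can be used to check such a fit; a sketch on synthetic waiting times ( the $`\xi `$ value is the rotational $`T=0.5`$ estimate quoted in the conclusions, while $`\tau `$ is an invented scale for the demonstration ):

```python
import numpy as np
from scipy import stats

xi, tau = 0.34, 50.0                 # xi from the T = 0.5 result; tau invented
waits = stats.gamma.rvs(a=xi, scale=tau, size=20_000,
                        random_state=np.random.default_rng(2))

xi_fit, _, tau_fit = stats.gamma.fit(waits, floc=0.0)   # ML fit, location pinned at 0
print(f"xi = {xi_fit:.3f}  tau = {tau_fit:.1f}")        # recovers ~0.34 and ~50
```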
If $`\xi =1`$, the distribution of dots is uniform and $`\psi (t)`$ recovers the exponential form. This is expected beyond a time scale $`\tau `$, and the exponential decay of $`\psi _{rot}(t)`$ at long times signals the crossover to the usual Poisson regime. It must be noted that the translational waiting-time distribution at the lowest temperature ( $`T=0.5`$ ) shows a weak tendency to vanish faster than eq.13. A similar feature is not observed in $`\psi _{rot}(t)`$. ### D Breakdown of the Debye-Stokes-Einstein Law For large Brownian particles the reorientation in a liquid occurs via a series of small angular steps, i.e. it is diffusive. Hydrodynamics predicts that the diffusion manifests a strong coupling to the viscosity $`\eta `$ which is accounted for by the Debye-Stokes-Einstein law ( DSE ). For biaxial ellipsoids it takes the form $$D_i=\frac{kT}{\mu _i\eta },i=x,y,z$$ (14) $`D_{x,y,z}`$ are the principal values of the diffusion tensor, $`k`$ is the Boltzmann constant. The coefficients $`\mu _i`$ depend on the geometry and the boundary conditions ( BC ). For a sphere with stick BC $`\mu _{x,y,z}=6v`$, $`v`$ being the volume of the sphere. For a uniaxial ellipsoid one considers $`D_{\parallel }=D_z`$ and $`D_{\perp }=D_x=D_y`$. The case of stick BC can be worked out analytically . For slip BC numerical results for $`D_{\perp }`$ are known ( note that in this case the fluid does not exert torques parallel to the symmetry axis ). Eq.14 is sometimes rewritten in an alternative form in terms of proper rotational correlation times, e.g. for uniaxial molecules the equality $`\tau _l=1/l(l+1)D_{\perp }`$ holds. The new form is more suitable for comparison with the experiments since they do not usually provide direct access to the rotational diffusion coefficients. Irrespective of the heavy hydrodynamic assumptions, DSE works nicely even at a molecular level if the viscosity is not high ( $`\eta <1Poise`$ ). Deviations are observed at higher viscosities for tracers in supercooled liquids by time-resolved fluorescence and Electron Spin Resonance ( ESR ) studies . On the other hand, photobleaching and NMR studies found only small deviations from DSE even close to $`T_g`$. In all the cases known, DSE is found to overestimate the rotational correlation times, since on cooling their increase is less than the one exhibited by the viscosity. In this decoupling region ESR evidenced that the tracer under investigation rotates by jump motion . Shear motion facilitates molecular jumps over energy barriers . On the other hand, guest molecules may jump in frozen hosts in the absence of viscous flow. Since a meaningful fraction of the molecules in the system reorientates by finite angular steps with intermittent behavior, not quite expected in a liquid, it is of interest to investigate to what extent the reorientation is coupled to the viscous shear flow. The results are shown in fig. 13 by plotting the quantity $`\eta /XkT`$ with $`X=D_r^{-1},l(l+1)\tau _l`$ and $`l=1-4`$. According to DSE it should be constant. The viscosity data were taken from I. At high temperatures the quantity approaches the value expected for stick BC. For $`T>5`$ a tendency of $`\eta /XkT`$ with $`X=D_r^{-1},\tau _1`$ to increase is noted. However, at such temperatures the system manifests gas-like features ( see figs.1,2,4 ). On cooling $`\eta /XkT`$ increases.
For intermediate temperatures the liquid properties are well developed, the system is diffusive ($`l(l+1)\tau _lD_r\simeq 1`$, see fig. 8) and $`\eta /XkT`$ has a value close to the DSE expectation with slip BC. Notably, $`D_r\eta /kT`$ remains close to this value in the wide interval $`2<T<6`$. At lower temperatures $`\eta /XkT`$ diverges. The stronger deviations are exhibited by $`D_r`$ and $`\tau _1`$, the weaker ones by $`\tau _2`$. $`\tau _3`$ and $`\tau _4`$ track the behavior of $`\tau _1`$ and $`\tau _2`$, respectively, the pair $`\tau _{1,3}`$ being affected by the jump motion much more than the pair $`\tau _{2,4}`$ (see sec. III A 1). $`\eta `$ increases by a factor of about $`400`$ between $`T=1.4`$ and $`0.5`$. The corresponding changes of $`D_r\eta /kT`$, $`\eta /\tau _1kT`$ and $`\eta /\tau _2kT`$ are $`41`$, $`11`$ and $`3.6`$, respectively. The discussion supports the conclusion that the correlators affected by rotational jumps, e.g. $`C_{1,3}(t)`$, yield correlation times markedly more decoupled from the viscosity. If one compares the changes of $`\eta /\tau _2kT`$ to the ones drawn by ESR and fluorescence experiments in the region $`T/T_c\simeq 1.1`$–$`1.5`$, a broad agreement is found . These experiments found even larger values of $`\eta /\tau _2kT`$ on approaching $`T_g`$. Instead, photobleaching studies detected changes of less than one order of magnitude on changing $`\eta `$ over about $`12`$ orders of magnitude, far smaller than the present ones . Inspection of fig. 13 suggests a way to reconcile, at least partially, the ESR , the photobleaching and NMR studies on the glass former o-terphenyl (OTP). Both photobleaching and NMR measure $`\tau _2`$ in a direct way. Instead, in the glass transition region the ESR lineshape depends in principle on several $`\tau _l`$, and a model is needed to relate them to each other and lead to $`\tau _2`$ . The model is adjusted by fitting the theoretical prediction to the highly structured ESR lineshape. Fig. 13 shows that the decoupling, as expressed by $`\eta /\tau _lkT`$, increases with $`l`$. Since the weights of the $`\tau _l`$ in setting the ESR lineshape are roughly comparable, one may anticipate that $`\tau _2^{ESR}`$ may be underestimated to some degree around the glass transition. On the other hand, it must be pointed out that the rotational decoupling is observed up to $`1.4T_g`$ in OTP, where ESR yields $`\tau _2`$ in a model-independent way . Furthermore, the decoupling is evidenced also by fluorescence experiments which provide $`\tau _2`$ in a model-independent way . ## IV CONCLUSIONS The present paper investigated the rotational dynamics of a supercooled molecular system. The study addressed several general features and focussed on the characterization of the jump dynamics and the degree of coupling with the viscosity. The ensemble consists of rigid A-B dumbbells interacting via a L-J potential. All the properties were studied along the isobar $`P=1.5`$. The time orientation correlation functions $`C_l(t)`$ exhibit gas-like features at high temperatures. In the supercooled regime, after a first initial decay, a plateau is observed which signals the trapping of the molecule due to the increased difficulty of the surroundings to rearrange themselves. The plateau level decreases on increasing the rank $`l`$ of the correlation function, due to the larger sensitivity to small-angle librations. The long-time decay of $`C_l(t)`$ is fairly well described by a stretched exponential. 
The stretching increases from $`l=1`$ ($`\beta =0.7`$) to $`l=4`$ ($`\beta =0.47`$) at $`T=0.5`$. The influence of the partial head-tail symmetry of the dumbbells on the decay time of $`C_l`$ (and then on $`\tau _l`$) was noted. A set of angular-velocity correlation functions $`\mathrm{\Psi }_l`$ was defined. $`\mathrm{\Psi }_1`$ decays faster and faster on decreasing $`T`$. In contrast, $`\mathrm{\Psi }_2`$ develops a long-lasting tail which vanishes on the time scale of $`C_2`$. The temperature dependence of the rotational correlation times $`\tau _l`$ with $`l=2`$–$`4`$ and of the rotational diffusion coefficient $`D_r`$ manifests deviations from the power-law scaling in $`TT_c`$, $`T_c`$ being the MCT critical temperature. These were ascribed to the difficulties which mode-coupling theories meet at short length scales . Remarkably, the scaling works nicely for $`\tau _1`$ over more than three orders of magnitude, with an exponent $`\gamma =1.47\pm 0.01`$. This parallels the scaling which was noted for the translational diffusion coefficient and the primary relaxation time $`\tau _\alpha `$ in I. For $`0.7<T<2`$ the quantities $`l(l+1)D_r\tau _l`$ and $`l(l+1)\tau _l/2\tau _1`$ do not change appreciably and are in the range $`1`$–$`2`$, in good agreement with the diffusion model, which predicts that both of them equal $`1`$ independently of the temperature. For $`T<0.7`$ the above quantities increase abruptly. The increase of the quantity $`l(l+1)\tau _l/2\tau _1`$ is reasonably accounted for by the jump-rotation model for $`l=2`$ . For higher $`l`$ values the agreement becomes quite poor. Since $`\tau _1`$ and $`\tau _2`$ are the quantities measured by most experiments, this finding may account for the attention that the jump model has attracted during the last years . The analysis of the angular Van Hove function evidences that in this region a meaningful fraction of the sample reorientates by jumps of about $`180^{\circ }`$. The flips are rather fast and exhibit no meaningful distribution of either the amplitude or the time needed to complete a jump. Differently, it was noted in I that translational jumps require different times to be performed. The absence of a similar effect for the reorientations indicates a larger angular freedom. This is also apparent from the larger number of rotational jumps which are detected with respect to the translational ones. It is worth noting that the ease of jumping does not lead to trivial relaxation properties, as signaled by the stretched decay of $`C_{1,2,3,4}`$. We characterized the distribution $`\psi _{rot}`$ of the waiting times in the angular sites. It vanishes exponentially at long times, whereas at lower temperatures it decays at short times as $`t^{\xi -1}`$ with $`\xi =0.34\pm 0.04`$ at $`T=0.5`$. Interestingly, the translational waiting-time distribution exhibits the same behavior (see I). The exponent for the translational case is $`\xi =0.49`$. We ascribe the power law to the intermittent features of the motion in glassy systems . The intermittent jump reorientation is fairly different from the motion in a liquid. A decoupling from the viscous flow and the consequent breakdown of the Debye-Stokes-Einstein law is then anticipated. Our study confirms the breakdown and shows that the quantity $`\eta /XkT`$ with $`X=D_r^{-1},l(l+1)\tau _l`$ and $`l=1`$–$`4`$ diverges below $`T=1`$. In particular, the correlators affected by rotational jumps, e.g. $`C_{1,3}(t)`$, yield correlation times markedly more decoupled from the viscosity. 
A rather similar decoupling was found in I for the product $`D\eta `$, $`D`$ being the translational diffusion coefficient. The decoupling of the molecular reorientation from the viscosity could also be anticipated from the observed ease of performing rotational jumps. The reduced tendency of the rotational degrees of freedom to freeze was pointed out by MD and theoretical studies, the former investigating the residual rotational relaxation in a random lattice with quenched translations, and the latter predicting a hierarchy for the glassy freezing, i.e. the rotational dynamics can never freeze before the translational dynamics. The decoupling of the rotational motion of guest molecules from the viscous flow has been seen experimentally by time-resolved fluorescence and Electron Spin Resonance , while photobleaching and NMR studies reported small deviations from DSE even close to $`T_g`$ . It is worth noting that the decoupling of the translational diffusion from the viscosity, and related phenomena such as the so-called rotation-translation paradox, have been ascribed to a spatial distribution of mobility and relaxation properties, the so-called dynamical heterogeneities . Their role will be addressed in a forthcoming study. ###### Acknowledgements. The authors warmly thank Walter Kob for having suggested the investigation of the present model system and for the careful reading of the manuscript. Umberto Balucani, Claudio Donati and Francesco Sciortino are thanked for many helpful discussions, and Jack Douglas for a preprint of ref. .
# The Distribution of Burst Energy and Shock Parameters for Gamma-ray Bursts ## 1 Introduction The improvement in the determination of the angular position of gamma-ray bursts (GRBs) by the Dutch-Italian satellite Beppo/SAX has led to the discovery of extended emission in lower energy photons lasting for days to months, which has revolutionized our understanding of GRBs (cf. Costa et al. 1997, van Paradijs et al. 1997, Bond 1997, Frail et al. 1997). The afterglow emission was predicted prior to its actual discovery by a number of authors (Paczyński & Rhoads, 1993; Meszaros & Rees, 1993; Katz, 1994; Meszaros & Rees 1997) based on the calculation of synchrotron emission in a relativistic external shock. The afterglow observations have been found to be in good agreement with these theoretical predictions (cf. Sari, 1997; Vietri, 1997a; Waxman, 1997; Wijers et al. 1997). The medium surrounding the exploding object offers some clue as to the nature of the explosion. Vietri (1997b), and Chevalier and Li (1999a-b) in two very nice recent papers, have offered evidence that some GRB afterglow light curves are best explained by a stratified circumstellar medium, which suggests the death of a massive object as the underlying mechanism for gamma-ray burst explosions, as was suggested by Paczyński (1998) and Woosley (1993). Possible further evidence in support of such a model has come from the flattening and reddening of afterglow emission, a few days after the burst, in optical wavelength bands (eg. Bloom et al. 1999, Castro-Tirado & Gorosabel 1999, Reichart 1999, Galama et al. 1999). The goal of this paper is to explore the afterglow flux in different models, uniform ISM as well as stratified medium, and compare it with observations in a statistical sense, as opposed to comparison with individual GRBs as carried out by Chevalier and Li (1999b). We will use this statistical comparison to constrain various physical parameters that determine the afterglow luminosity, such as the energy $`E`$, the fractional energy in electrons ($`ϵ_e`$) and magnetic field ($`ϵ_B`$), the electron energy index ($`p`$), and the circumstellar density $`n`$. In the next section we discuss the afterglow flux and its distribution and their comparison with observations. ## 2 Afterglow flux and its distribution Consider an explosion which releases an isotropic-equivalent energy $`E`$ in a medium where the density varies as $`Ar^{-s}`$; $`r`$ is the distance from the center of the explosion, and $`A`$ is a constant. The deceleration radius, $`R_{da}`$, where the shell starts to slow down as a result of sweeping up the circumstellar material, and the deceleration time, $`T_{da}`$, in the observer frame are given by $$R_{da}=\left[\frac{(17-4s)E}{2\pi c^2A\mathrm{\Gamma }_0^2}\right]^{1/(3-s)},T_{da}=\frac{R_{da}}{4\beta c(\mathrm{\Gamma }_0/2)^2},$$ (1) where $`\mathrm{\Gamma }_0`$ is the initial Lorentz factor of the ejecta, and $`\beta \simeq 1`$ is a constant. The time dependence of the shell radius and the Lorentz factor can be obtained from the self-similar relativistic shock solution given in Blandford & McKee (1976) $$\frac{R(t_{obs})}{R_{da}}\equiv X=t_1^{1/(4-s)},\mathrm{\Gamma }(t_{obs})=\frac{\mathrm{\Gamma }_0}{2}t_1^{-(3-s)/(8-2s)},\mathrm{where}t_1\equiv \frac{t_{obs}}{(1+z)T_{da}},$$ (2) and $`t_{obs}`$ is the time in the observer's frame at redshift $`z`$. 
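For orientation, a minimal numerical sketch of eqs. (1) and (2) follows (Python, cgs units). The burst energy, ambient density, initial Lorentz factor and redshift in the example are arbitrary illustrative values, not fits to any particular burst.

```python
import numpy as np

c = 2.998e10  # speed of light, cm/s

def deceleration(E, A, s, Gamma0, beta=1.0):
    """Deceleration radius and observer-frame time, eq. (1), for rho = A * r**(-s)."""
    R_da = ((17.0 - 4.0*s) * E / (2.0*np.pi * c**2 * A * Gamma0**2))**(1.0/(3.0 - s))
    T_da = R_da / (4.0 * beta * c * (Gamma0/2.0)**2)
    return R_da, T_da

def evolve(t_obs, E, A, s, Gamma0, z):
    """Shell radius and Lorentz factor versus observer time, eq. (2)."""
    R_da, T_da = deceleration(E, A, s, Gamma0)
    t1 = t_obs / ((1.0 + z) * T_da)
    return R_da * t1**(1.0/(4.0 - s)), (Gamma0/2.0) * t1**(-(3.0 - s)/(8.0 - 2.0*s))

m_p = 1.673e-24  # illustrative: E = 1e52 erg, n = 1 cm^-3 (A = n*m_p), uniform ISM (s = 0)
R, G = evolve(t_obs=3600.0, E=1e52, A=1.0*m_p, s=0, Gamma0=300.0, z=1.0)
print(f"R = {R:.2e} cm,  Gamma = {G:.1f}")
```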
The magnetic field and the electron thermal Lorentz factor behind the forward shock vary as, $$B=B_{da}ϵ_B^{1/2}t_1^{-3/(8-2s)},\gamma _e=ϵ_e\left(\frac{m_p}{m_e}\right)\frac{\mathrm{\Gamma }}{2^{1/2}},\mathrm{where}B_{da}=\left[\frac{2(17-4s)E}{\mathrm{\Gamma }_0^2R_{da}^3}\right]^{1/2},$$ (3) and $`ϵ_B`$ and $`ϵ_e`$ are the fractional energies in the magnetic field and the electrons, respectively. Using these results we find that the peak of the synchrotron frequency ($`\nu _m`$) and the cooling frequency ($`\nu _c`$), in the observer frame, are $$\nu _m=\nu _{m,da}ϵ_e^2ϵ_B^{1/2}t_1^{-3/2},\nu _c=\nu _{c,da}ϵ_B^{-3/2}t_1^{(3s-4)/(8-2s)},$$ (4) where $$\nu _{m,da}=\frac{1}{32\sqrt{2}\pi }\frac{qB_{da}m_p^2}{cm_e^3},\nu _{c,da}=\frac{9\pi }{4\sqrt{2}}\frac{m_eqc^3\mathrm{\Gamma }_0^3}{\sigma _T^2B_{da}^3R_{da}^2}.$$ (5) The synchrotron self-absorption frequency (in the observer frame) is given by $$\nu _A=\left[\frac{27^{1/2}m_ec^2\sigma _TAR_{da}^{1-s}B_{da}\mathrm{\Gamma }_0}{64\pi qm_p^2}\right]^{3/5}ϵ_e^{-3/5}ϵ_B^{3/10}t_1^{-0.3(4+s)/(4-s)}\left[\mathrm{min}(\nu _m,\nu _c)\right]^{-1/5}.$$ (6) The energy flux at the peak of the spectrum is given by $$f_{\nu _p}=\frac{27^{1/2}}{32\pi }\left[\frac{m_e\sigma _Tc^2}{qm_pd_L^2}\right]A\mathrm{\Gamma }_0B_{da}R_{da}^{3-s}ϵ_B^{1/2}(1+z)t_1^{-s/(8-2s)},$$ (7) where $`\nu _p=\mathrm{min}\{\nu _m,\nu _c\}`$ i.e. for $`\nu _m>\nu _c`$ the peak occurs at $`\nu _c`$ instead of at $`\nu _m`$. The equations for $`s=2`$ are as in Chevalier and Li (1999a-b) and are given here for easy reference. The flux at an arbitrary observed frequency $`\nu `$ can be calculated following Sari, Piran and Narayan (1998) in terms of $`f_{\nu _p}`$, $`\nu _m`$, $`\nu _c`$ and $`\nu _A`$. For the particularly important case of $`\nu `$ greater than $`\nu _m`$ and $`\nu _c`$ the observed flux is: $$f_\nu =f_{\nu _p}\nu _c^{1/2}\nu _m^{(p-1)/2}\nu ^{-p/2}=\frac{3^{2.5}c^5}{\nu ^{p/2}d_L^2}\left(\frac{qm_p^2}{m_e^3}\right)^{(p-2)/2}\frac{ϵ_e^{p-1}}{ϵ_B^{(2-p)/4}}\frac{(1+z)^{(3p+2)/4}}{t_{obs}^{(3p-2)/4}}\left[\frac{(17-4s)E}{2^{10}\pi ^2c^5}\right]^{(p+2)/4},$$ (8) Note that the flux does not depend on the circumstellar density parameters $`A`$ and $`s`$ when $`\nu >\nu _c`$, except through an unimportant multiplicative factor $`(17-4s)^{(p+2)/4}`$. The frequencies $`\nu _m/(ϵ_e^2ϵ_B^{1/2})`$, $`\nu _cϵ_B^{3/2}`$ and $`\nu _A`$ are shown in figure 1. The afterglow fluxes for the uniform ISM and the wind models differ only when $`\nu <\nu _c`$. Since the cooling frequency decreases with time for the uniform ISM model and increases with time for the wind model, one of the best ways to distinguish between these models is by observing the behavior of the light curve when $`\nu _c`$ crosses the observed frequency band, as has been pointed out by Chevalier & Li (1999). The predictions and comparison of individual GRB light curves for the two models will be discussed in some detail in a separate paper. Here we turn our attention to the statistical properties of the afterglow light curve in the two models. ### 2.1 Afterglow flux distribution function The distribution function for GRB afterglow flux, $`P_t(L_\nu )`$, at a frequency $`\nu `$ and time $`t_{obs,g}`$ is the probability that the afterglow luminosity (isotropic) is $`L_\nu `$ at time $`t_{obs,g}`$ after the explosion; $`\nu `$, $`t_{obs,g}`$ and $`L_\nu `$ are measured in the rest frame of the host galaxy. 
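Eq. (8) is simple to evaluate; the sketch below transcribes it directly (cgs units). All the parameter values, including the luminosity distance, are illustrative assumptions.

```python
import numpy as np

c, q, m_e, m_p = 2.998e10, 4.803e-10, 9.109e-28, 1.673e-24  # cgs constants

def f_nu_high(nu, t_obs, E, p, eps_e, eps_B, z, d_L, s=0):
    """Observed flux for nu > max(nu_m, nu_c), a direct transcription of eq. (8);
    the circumstellar density enters only through the (17 - 4s) factor."""
    return (3.0**2.5 * c**5 / (nu**(p/2.0) * d_L**2)
            * (q * m_p**2 / m_e**3)**((p - 2.0)/2.0)
            * eps_e**(p - 1.0) / eps_B**((2.0 - p)/4.0)
            * (1.0 + z)**((3.0*p + 2.0)/4.0) / t_obs**((3.0*p - 2.0)/4.0)
            * ((17.0 - 4.0*s) * E / (2.0**10 * np.pi**2 * c**5))**((p + 2.0)/4.0))

nu_x = 5e3 * 1.602e-12 / 6.626e-27   # ~5 keV in Hz
flux = f_nu_high(nu_x, t_obs=5*3600.0, E=1e52, p=2.5,
                 eps_e=0.1, eps_B=0.01, z=1.0, d_L=2.2e28)
print(f"f_nu = {flux:.2e} erg cm^-2 s^-1 Hz^-1")
```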
The width of $`P_t(L_\nu )`$ is a function of the widths of the distribution functions for $`E`$, $`ϵ_e`$, $`ϵ_B`$, $`A`$ and $`p`$. Assuming that all these variables are independent Gaussian random variables, the standard deviation (SD) of $`\mathrm{log}(L_\nu )`$, $`\sigma _{L_\nu }`$, can be obtained from equation (8), when $`\nu >\nu _c`$ & $`\nu _m`$, and is given by $$\sigma _{L_\nu }^2=\left(\frac{p+2}{4}\right)^2\sigma _E^2+(p-1)^2\sigma _{ϵ_e}^2+\eta \sigma _p^2+\left(\frac{p-2}{4}\right)^2\sigma _{ϵ_B}^2,$$ (9) where $$\eta =\frac{1}{16}\left[2\mathrm{log}\frac{qm_p^2}{m_e^3}+\mathrm{log}\left(\frac{17\overline{ϵ}_B\overline{ϵ}_e^4\overline{E}}{2^{10}\pi ^2c^5\nu ^2t_{obs,g}^3}\right)\right]^2,$$ (10) $`\overline{ϵ}`$ and $`\overline{E}`$ are the mean values of $`ϵ`$ and $`E`$, and $`\sigma _E`$, $`\sigma _{ϵ_e}`$, $`\sigma _{ϵ_B}`$ and $`\sigma _p`$ are the standard deviations of $`\mathrm{log}E`$, $`\mathrm{log}ϵ_e`$, $`\mathrm{log}ϵ_B`$ and $`p`$ respectively; $`\eta `$ for x-ray (10 kev) and optical (2.5 ev) photons is shown in figure 2. The standard deviation of the flux in the 2-10 kev band at 5 hours after the burst ($`\sigma _{L_\nu }`$) is approximately 0.58 (Kumar & Piran, 1999). Fig. 1 shows that this energy band is above $`\nu _c`$ & $`\nu _m`$ so long as $`ϵ_B>10^{-4}`$ and the density of the surrounding medium is not too small. Moreover, $`\eta =5`$ for this energy band (see fig. 2), from which we obtain an upper limit on $`\sigma _p`$ of 0.26 and a full-width at half-maximum of the distribution for $`p`$ of less than about 0.6. We note that the electron energy index $`p`$ lies between 2 and 3 for supernova remnants (cf. Chevalier 1990, Weiler et al. 1986), and Chevalier and Li (1999b) point out that the range in $`p`$ for GRB afterglows is at least 2.1–2.5. We can use the variation of $`\eta `$ with time or $`\nu `$ (see figs. 2 & 3) to obtain $`\sigma _p`$ from the observed variation of the width $`\sigma _{L_\nu }`$. This is of course equivalent to the determination of $`p`$ from the slope of the light curve or the spectral slope. The currently available data, consisting of 7 bursts with known redshifts, do not provide a good constraint on the temporal variation of $`\sigma _{L_\nu }`$; however, HETE II and Swift would increase the GRB afterglow data base by more than an order of magnitude and provide a much better constraint on $`\sigma _p`$. Unless $`ϵ_B`$ varies by many orders of magnitude from one burst to another, the last term in equation (9) is very small and can be neglected. Equating the first two terms individually to $`\sigma _{L_\nu }=0.58`$ we obtain $`\sigma _E<0.51`$ and $`\sigma _{ϵ_e}<0.39`$ (for $`p=2.5`$). For comparison, if the first three terms in equation (9) were to contribute equally to $`\sigma _{L_\nu }`$ then we would obtain $`\sigma _E=0.29`$, $`\sigma _{ϵ_e}=0.22`$, and $`\sigma _p=0.15`$. The mean values of $`E`$ and $`ϵ_e`$ are not well determined by this procedure (but see the discussion below); however $`\overline{E}\overline{ϵ}_e^{4(p-1)/(p+2)}`$ can be accurately obtained from the observed distribution and is $`10^{52}`$ erg. So far we have discussed a nearly model independent procedure for determining $`\sigma _p`$ and a linear combination of $`\sigma _E^2`$ and $`\sigma _{ϵ_e}^2`$ that relies on making observations in a frequency band that lies above $`\nu _m`$ & $`\nu _c`$. 
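The limits just quoted follow from eq. (9) by simple arithmetic; a sketch is given below. Base-10 logarithms are assumed in eq. (10), and the fiducial values of $`\overline{E}`$, $`\overline{ϵ}_e`$ and $`\overline{ϵ}_B`$ used in the cross-check are assumptions, not fitted quantities.

```python
import numpy as np

sigma_L, p, eta = 0.58, 2.5, 5.0        # observed width, fiducial p, eta from fig. 2
print(f"sigma_E     < {sigma_L/((p + 2.0)/4.0):.2f}")   # ~0.51
print(f"sigma_eps_e < {sigma_L/(p - 1.0):.2f}")         # ~0.39
print(f"sigma_p     < {sigma_L/np.sqrt(eta):.2f}")      # ~0.26

def eta_coeff(nu, t_g, Ebar=1e52, eps_e=0.1, eps_B=0.01, s=0):
    """Coefficient of sigma_p^2 in eq. (9), from eq. (10); cgs units, log base 10."""
    c, q, m_e, m_p = 2.998e10, 4.803e-10, 9.109e-28, 1.673e-24
    arg = (17.0 - 4.0*s)*eps_B*eps_e**4*Ebar / (2.0**10*np.pi**2*c**5*nu**2*t_g**3)
    return (2.0*np.log10(q*m_p**2/m_e**3) + np.log10(arg))**2 / 16.0

nu_x = 5e3 * 1.602e-12 / 6.626e-27
print(f"eta(5 keV, 5 hr) ~ {eta_coeff(nu_x, 5*3600.0):.1f}")   # of order 5
```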
In order to determine $`\sigma _E`$ and $`\sigma _{ϵ_e}`$ separately we need to have some knowledge of $`\nu _c`$ and $`\nu _m`$, and therefore the result is model dependent and less certain. For instance, if the cooling and the peak synchrotron frequencies are known at some time, even if only approximately, then $`\sigma _E`$ can be determined from the distribution of the observed flux at a frequency $`\nu `$ such that $`\nu _c<\nu <\nu _m`$. The SD of $`f_1\equiv L_\nu t_{obs,g}^{1/4}\nu ^{1/2}`$ at such an intermediate frequency is independent of $`s`$ and is given by: $$\sigma _{f_1}^2=\frac{9}{16}\sigma _E^2+\frac{1}{16}\sigma _{ϵ_B}^2\approx \frac{9}{16}\sigma _E^2.$$ (11) Once $`\sigma _E`$ is known, equation (9) can be used to determine $`\sigma _{ϵ_e}`$. Observations at low frequencies, i.e. $`\nu <\nu _m,\nu _c`$, can be used to constrain $`\sigma _{ϵ_B}`$ and $`\sigma _A`$, which when combined with the flux at the peak of the spectrum could be used to determine $`\sigma _{ϵ_B}`$ & $`\sigma _A`$ separately with the use of the following equations, $$\sigma _{f_2}^2=\frac{4}{(4-s)^2}\sigma _A^2+\frac{4}{9}\left(\sigma _{ϵ_e}^2+\sigma _{ϵ_B}^2\right)+\left(\frac{14-5s}{12-3s}\right)^2\sigma _E^2,\mathrm{for}\nu <\nu _m<\nu _c,$$ (12) $$\sigma _{f_3}^2=\frac{4}{9(4-s)^2}\sigma _A^2+\sigma _{ϵ_B}^2+\left(\frac{14-6s}{12-3s}\right)^2\sigma _E^2,\mathrm{for}\nu <\nu _c<\nu _m,$$ (13) and $$\sigma _{f_4}^2=\frac{1}{4}\sigma _{ϵ_B}^2+\frac{4}{(4-s)^2}\sigma _A^2+\left(\frac{8-3s}{8-2s}\right)^2\sigma _E^2,$$ (14) where $`f_2\equiv \nu ^{-1/3}L_\nu t_{obs,g}^{(s-2)/(4-s)}`$, $`f_3\equiv \nu ^{-1/3}L_\nu t_{obs,g}^{(3s-2)/(12-3s)}`$, $`f_4\equiv L_{\nu _p}t_{obs,g}^{s/(8-2s)}`$, and $`L_{\nu _p}`$ is the isotropic luminosity at the peak of the spectrum. Another approach to determining the burst and shock parameters is to compare the observed flux distributions at several different frequencies and times with the theoretically calculated distributions. The latter can be easily calculated by varying $`E`$, $`ϵ_e`$, $`ϵ_B`$, $`A`$ and $`p`$ randomly and solving for the flux using the equations given in the last section. Fig. 3 shows a few cases of flux distribution functions for several different $`\nu `$ and $`t_{obs}`$. The advantage of this procedure is that it does not require observational determination of the various characteristic frequencies, i.e. $`\nu _m`$, $`\nu _c`$, and $`\nu _A`$, which is difficult to do unless we have good spectral and temporal coverage over many orders of magnitude. Moreover, this procedure can be used even without the knowledge of burst redshifts; the redshift distribution of a representative sub-sample of bursts can be used to calculate the expected theoretical distribution of observed afterglow flux, which can be directly compared with the observed flux distribution to yield the burst and shock parameters. It should be noted that if $`\nu _m`$, $`\nu _c`$, $`\nu _A`$ and $`L_{\nu _p}`$ can be determined accurately then $`E`$, $`ϵ_e`$, $`ϵ_B`$, and $`A`$ can be obtained for individual bursts as described in e.g. Chevalier & Li (1999b), and Wijers & Galama (1999), and there is no need to resort to the statistical treatment discussed above. 
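A Monte Carlo sketch of this construction is shown below. For brevity it propagates only Gaussian deviations through the linearized $`\nu >\nu _m,\nu _c`$ scaling, i.e. with the coefficients of eq. (9) frozen at the mean $`p`$; the widths are the equal-contribution values quoted above, and $`\sigma _{ϵ_B}`$ is set to zero as in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_dlogL(n, p=2.5, eta=5.0, sig_E=0.29, sig_ee=0.22, sig_p=0.15, sig_eB=0.0):
    """Scatter of log L_nu about its mean for nu > nu_m, nu_c, using the
    linearized coefficients of eq. (9)."""
    dE, dee, dp, deB = (rng.normal(0.0, s, n) for s in (sig_E, sig_ee, sig_p, sig_eB))
    return ((p + 2.0)/4.0*dE + (p - 1.0)*dee
            + np.sqrt(eta)*dp + (p - 2.0)/4.0*deB)

print(f"MC sd of log L = {sample_dlogL(200000).std():.3f}")
analytic = np.sqrt(((2.5 + 2.0)/4.0*0.29)**2 + (1.5*0.22)**2 + 5.0*0.15**2)
print(f"eq. (9)  sd    = {analytic:.3f}")   # both ~0.57, close to the observed 0.58
```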
## 3 Conclusion We have described how the distribution function for GRB afterglow flux can be used to determine the width of the distribution function for the energy in the explosion and the shock parameters such as $`ϵ_e`$, $`ϵ_B`$ (the fractional energy in electrons and the magnetic field), $`p`$ (the power law index for electron energy), and $`A`$ (the interstellar density parameter). The afterglow flux at a frequency above the cooling and the synchrotron peak frequencies is independent of interstellar density and scales as $`E^{(p+2)/4}ϵ_e^{p-1}ϵ_B^{(p-2)/4}`$ for a uniform ISM as well as for energy deposited in a stellar wind with power-law density stratification. Using the flux in the 2-10 kev band for 7 bursts with known redshifts, 5 hours after the burst, which meet the above criteria, we find that the full width at half maximum of the distribution for Log$`E`$ is less than 1.2, for Log$`ϵ_e`$ is less than 0.9, and for $`p`$ is less than 0.6. The width of the distribution for $`p`$ is consistent with the range in $`p`$ deduced from afterglow emissions (cf. Chevalier & Li, 1999b). The width of the distribution for $`p`$ can be more accurately determined from the time variation, or frequency dependence, of the width of the afterglow flux distribution. For a more accurate determination of the distribution of the other parameters we need to determine the cooling and the synchrotron peak frequencies (at least approximately), or otherwise compare the theoretical and the observed distributions for the flux at several different frequencies covering the range above and below the cooling and the synchrotron frequencies. The HETE II and Swift missions are expected to significantly increase the number of GRBs with observed afterglow emission and should therefore provide a more accurate determination of the afterglow luminosity function and the distributions of burst energy and shock parameters. Acknowledgment: I am indebted to Roger Chevalier for many useful discussions and for clarifying several points. I thank Alin Panaitescu, Tsvi Piran and Bohdan Paczyński for numerous exciting discussions about gamma-ray bursts, and Eliot Quataert for comments on the paper. I thank E. Waxman for sending me his recent preprint which bears some technical similarity with this work, although the results and conclusions are different.
# Large Lorentz Scalar and Vector Potentials in Nuclei ## I Prolog Large Lorentz scalar and four-vector nucleon self-energies or optical potentials, each several hundred MeV in the interior of heavy nuclei, are key but controversial ingredients of successful relativistic phenomenology<sup>*</sup><sup>*</sup>*In this paper we use “relativistic phenomenology” to refer only to field-theory based models, and not to the relativistic hamiltonian models discussed in Ref. . . The controversy has persisted because there can be no direct experimental verification (or refutation) of such large nuclear potentials. Here we revisit this issue from the modern perspectives of effective field theory (EFT) and density functional theory (DFT) . We argue that the large potentials in the covariant representation used in relativistic phenomenology are manifestations of the underlying mass scales of low-energy quantum chromodynamics (QCD), which are largely hidden in nonrelativistic treatments. The connection between low-energy QCD scales and nuclear phenomenology can be made by applying Georgi and Manohar’s Naive Dimensional Analysis (NDA) and naturalness . These principles prescribe how to count powers of the pion decay constant $`f_\pi \approx 94`$ MeV and a larger mass scale $`\mathrm{\Lambda }`$ in effective lagrangians or energy functionals. The mass scale $`\mathrm{\Lambda }`$ is associated with the new physics beyond the pions: the non-Goldstone boson masses or the nucleon mass. The signature of these low-energy QCD scales in the coefficients of a relativistic point-coupling model was first pointed out by Friar, Lynn, and Madland . Subsequent analyses have extended and supplemented this idea, testing it in nonrelativistic mean-field models as well as in different types of relativistic models . Estimates of contributions to the energy functional from individual terms, based on NDA power counting, are quantitatively consistent with direct, high-quality fits to bulk nuclear observables . Naturalness based on NDA scales has proved to be a very robust concept: nuclei know about these scales! The EFT perspective, with the freedom to redefine and transform fields, implies that there are infinitely many representations of low-energy QCD physics. However, not all are equally efficient or physically transparent. One of the possible choices is between relativistic and nonrelativistic formulations. (In the context of EFT, these can be related by the heavy-baryon expansion.) We suggest that the relativistic formulation offers greater insight. Relativistic phenomenology for nuclei has often been motivated by the need for relativistic kinematics when extrapolating to extreme conditions of density, temperature, or momentum transfer. However, this obscures the issue of relativistic vs. nonrelativistic approaches for nuclei under ordinary conditions. The important aspect of relativity in ordinary nuclear systems is not that a nucleon’s momentum is comparable to its rest mass, but that maintaining covariance allows scalars to be distinguished from the time components of four vectors. Despite a long history of criticisms of relativistic approaches , the use of a relativistic formulation should not itself be a point of contention. The EFT/DFT perspective has largely abrogated the objections, as we discuss more fully in Ref. . Furthermore, recent developments in baryon chiral perturbation theory support the consistency (and usefulness) of a covariant EFT, with Dirac nucleon fields in a Lorentz invariant effective lagrangian . 
A similar framework underlies relativistic approaches to nuclei. In the nuclear medium, a covariant treatment implies distinct scalar and four-vector nucleon self-energies. The relevant question is: what are their mean values? Relativistic phenomenology suggests several hundred MeV in the center of a heavy nucleus. Historically, the successes of nonrelativistic nuclear phenomenology have been cited to cast doubt on the relevance of large scalar and vector potentials. But in a nonrelativistic treatment of nuclei, the distinction between a potential that transforms like a scalar and one that transforms like the time component of a four-vector is lost. Because the leading-order contributions of these two types are opposite in sign, an underlying large scale characterizing individual covariant potentials would be hidden in the nonrelativistic central potential. Furthermore, the EFT expansion implies that even potentials as large as 300 to 400 MeV are sufficiently smaller than the nucleon mass that a nonrelativistic expansion should converge, if not necessarily optimally. Thus the success of nonrelativistic nuclear phenomenology provides little direct evidence about covariant potentials. If there were an approximate symmetry that enforced the cancellation between scalar and vector contributions, then it would be desirable to build the cancellation into any EFT lagrangian or energy functional. (Chiral symmetry alone does not lead to scalar-vector fine tuning.) However, if the cancellation is accidental or of unknown origin , then hiding the underlying scales may be counterproductive. We argue that nuclei fall into the second category, with the relevant scales set not by the nonrelativistic binding energy and central potential (tens of MeV), but by the large covariant potentials (hundreds of MeV). The signals of large underlying scales would be patterns in the data that are simply and efficiently explained by large potentials, but which require more complicated explanations in a nonrelativistic treatment. Scattered through the literature over many years is evidence to support our contention that a representation with large fields, which is achieved only with a covariant formulation, is natural. We believe that in light of the EFT and DFT reinterpretation of relativistic phenomenology, it is appropriate at this time to compile and update the arguments to highlight their strengths and weaknesses. In the following section, we give a concise list of empirical and theoretical evidence that large scales are natural for nuclei, with short descriptions and pointers to more detailed discussions. We also include with each item a brief discussion (which we will call a “loophole”) of how large fields could be avoided, even in a covariant formulation. ## II QCD Scales in Nuclei 1. Covariant density functionals fit to nuclei. Conventional density functional theory (DFT) is based on energy functionals of the ground-state density of a many-body system, whose extremization yields a variety of ground-state properties . In a covariant generalization of DFT applied to nuclei, these become functionals of the ground-state scalar density $`\rho _\mathrm{s}`$ as well as the baryon current $`B_\mu `$. Relativistic mean-field models are analogs of the Kohn–Sham formalism of DFT , with local scalar and vector fields $`\mathrm{\Phi }(𝐱)`$ and $`W(𝐱)`$ appearing in the role of relativistic Kohn–Sham potentials . 
The mean-field models approximate the exact functional, which includes all higher-order correlations, using powers of auxiliary meson fields or nucleon densities. The scalar and vector potentials are determined by extremizing the energy functional, which gives rise to a Dirac single-particle hamiltonian. The isoscalar part (for spherical nuclei) is $$h_0=-i𝜶\cdot \mathbf{\nabla }+\beta \left(M-\mathrm{\Phi }(r)\right)+W(r),$$ (1) where $`M`$ is the nucleon mass and we define $`M^{\ast }\equiv M-\mathrm{\Phi }`$. It is not necessary to assume that $`\mathrm{\Phi }`$ is simply proportional to a scalar meson field $`\varphi `$. In fact, $`\mathrm{\Phi }`$ could be proportional to $`\varphi `$ (as in conventional quantum hadrodynamic models ), or could be expressed as a sum of scalar and vector densities (as in relativistic point-coupling models ), or could be a nonlinear function of $`\varphi `$ (e.g., see Refs. ). The parameters of the density functional for generalized models have been determined by detailed fits to a set of nuclear properties that should be accurately reproduced according to DFT . Except when a large isoscalar tensor coupling is included, the scalar and vector potentials $`\mathrm{\Phi }`$ and $`W`$ are always larger than 300 MeV in the interior of a nucleus. These potentials produce a hierarchy of energy contributions that follow the NDA predictions, as illustrated by the large unfilled symbols in Fig. 1. This agreement persists when correlation corrections are included explicitly , and provides strong evidence that nuclear observables demand naturally sized parameters. Loophole: The addition of an isoscalar tensor coupling in the energy functional allows excellent fits to nuclear properties while reducing the size of the scalar and vector potentials slightly, to roughly 250 MeV . It should also be noted that relativistic formulations of DFT at present lack the rigor of conventional DFT ; a re-examination from the EFT perspective may improve the situation. 2. Natural size of leading contribution to binding energy per nucleon. Coefficients in successful relativistic mean-field models are consistent with naive dimensional analysis (NDA) and naturalness, as expected in low-energy effective field theories of QCD . If one applies naturalness arguments to the terms in a relativistic energy functional for nuclear matter and nuclei, the leading scalar and vector terms at equilibrium density $`\rho _\mathrm{B}^0`$ are each predicted to be of order $`\rho _\mathrm{B}^0/f_\pi ^2\approx 150`$ MeV , independent of $`\mathrm{\Lambda }`$. (The $`n=2`$ energy estimate in Fig. 1 is lower because it uses an average density in <sup>208</sup>Pb rather than the peak density in the interior.) The scalar and vector potentials $`\mathrm{\Phi }`$ and $`W`$ in Eq. (1) are each twice the corresponding energy contributions , so they are predicted to be roughly 300 MeV in the center of a nucleus. Loophole: Naturalness may give only order-of-magnitude estimates and there are numerical factors (e.g., combinatoric factors) that may not be correctly accounted for. On the other hand, the estimates of contributions to the binding energy, recently extended to all terms in the energy functional, appear to be quite robust (see Fig. 1). 3. One-boson-exchange potentials. The nucleon-nucleon (NN) scattering matrix can be calculated by unitarizing a kernel for the NN force. 
The Lorentz structure of the kernel follows from covariance, without mentioning degrees of freedom, but it can be efficiently characterized in terms of boson exchanges in different physical channels. This is a very natural procedure from the point of view of dispersion theory . Each channel is characterized by strength and range parameters and a cutoff. A physical interpretation of each is not needed since the states are virtual in NN scattering. The parameters are directly related to prominent resonances in some channels (vector), but not in others (scalar). Every accurate fit of the parameters to NN observables using the most general (covariant) kernel has led to an interaction with large, isoscalar, scalar and vector contributions of comparable magnitude, but opposite in sign . In the nuclear medium, these scalar and vector NN amplitudes translate into strong single-particle potentials, of order several hundred MeV at equilibrium density. These strong potentials persist when short-range correlations are included explicitly . Loophole: There may be alternative (covariant) decompositions of the kernel that do not result in large scalar and vector components (but we are unaware of any!). 4. Nuclear saturation and observed spin-orbit splittings. If one adopts a covariant formulation of the energy functional, the Lorentz transformation properties of a scalar component induce a velocity dependence in the interaction. When the functional is fit to nuclear saturation in nuclear matter, one automatically produces a spin-orbit force and its observable consequences in a finite system (e.g., nuclear shell structure) . Furthermore, the strength of the spin-orbit interaction with natural-sized scalar and vector potentials agrees with the empirical strength (see Fig. 2 with $`M_0^{\ast }/M\approx 0.60`$). In contrast, the spin-orbit contribution in nonrelativistic energy functionals must be adjusted by hand . To our knowledge, there are no simple alternative explanations for the origin of the full spin-orbit strength. Negele and Vautherin tried to take Brueckner calculations of light nuclei and extract the spin-orbit force from the splittings, but found only a fraction of the empirical magnitude, equal to the result obtained by applying Thomas precession to the nonrelativistic central potential. The most sophisticated modern calculations get only half of the empirical spin-orbit splittings in light nuclei without including three-nucleon forces, and only two-thirds of the splittings using current three-nucleon-interaction models . Loophole: An isoscalar tensor term can be used to partially reduce the scalar and vector potentials while maintaining a large spin-orbit splitting (the filled symbols in Fig. 2) . 5. Proton–nucleus scattering spin observables. In impulse approximation calculations of medium-energy proton–nucleus scattering spin observables, relativistic treatments with large scalar and vector optical potentials accurately reproduce the data, while nonrelativistic treatments are deficient . To get agreement at a similar level in a nonrelativistic formulation, one has to go beyond the simplest impulse approximation to include a full-folding treatment and medium effects (e.g., with a $`G`$-matrix interaction). 
Furthermore, while the radial shapes of the scalar and vector potentials simply look like the nuclear (baryon) densities with an energy-dependent overall scale, the geometries of the nonrelativistic optical potentials are much less intuitive and change qualitatively with different incident energies (see Fig. 3). Thus the treatment is clean when natural scales are manifest but becomes more complicated when they are hidden. Loophole: Large scalar and vector optical potentials can be transformed away in favor of smaller potentials of different Lorentz structure . However, the transformed potentials no longer have simple radial shapes (see Fig. 3) . 6. Energy dependence of the nucleon-nucleus optical potential. The real part of the empirical optical potential for nucleon–nucleus scattering up to 100 MeV incident kinetic energy $`ϵ`$ has a well-determined, nearly linear energy dependence of $`0.3ϵ`$ . This energy dependence is directly predicted in a relativistic mean-field formulation to be $`(W/M)ϵ`$ , which is quantitatively correct for a vector potential of natural size. In contrast, the energy dependence in conventional nonrelativistic treatments arises from the non-locality of the exchange corrections in a Hartree–Fock or Brueckner–Hartree–Fock approximation to the mean-field part of the optical potential. Explicit studies of relativistic calculations at different approximation levels show that the energy dependence is dominated by the direct contribution and that exchange corrections are small . Loophole: A direct connection between energy dependence from the Lorentz structure of the interaction in relativistic formulations and from exchange corrections in nonrelativistic formulations has not been demonstrated. 7. Pseudo-spin symmetry. There is an observed near-degeneracy among sets of energy levels in medium and heavy nuclei, which have been called pseudo-spin doublets . This degeneracy relies on having a specific relationship between the nonrelativistic central and spin-orbit potentials. Ginocchio has shown that this relationship follows from an $`SU(2)`$ symmetry of a covariant single-particle hamiltonian if the nucleon scalar and vector potentials are equal in magnitude . Such a hamiltonian with covariant Kohn–Sham potentials $`\mathrm{\Phi }`$ and $`W`$ results from the extremization of relativistic energy functionals. In the exact symmetry limit, with $`\mathrm{\Phi }=W`$, there are no bound positive-energy states, so nuclei do not exist. However, an approximate pseudo-spin symmetry leading to approximate pseudo-spin doublets exists for $`\mathrm{\Phi }\approx W`$ . Each potential must be individually large, since their (near) cancellation must leave a sufficient residual central potential for nuclear binding. Loophole: The symmetry is significantly broken for empirical relativistic potentials and the consequences of this breaking are not understood, so the evidence for pseudo-spin symmetry is not entirely convincing. The observed doublets could be accidental or have an unrelated origin . 8. Correlated two-pion exchange, chiral symmetry, and scalar strength. The scalar-isoscalar part of the NN kernel below 1 GeV can be studied in an essentially model-independent way in terms of $`\pi `$-$`\pi `$ scattering in this channel . Chiral symmetry and unitarity constrain the threshold behavior. The natural strength of the $`\pi `$-$`\pi `$ interaction implies that the amplitude increases from zero as fast as the unitarity bound. 
The predicted integrated strength is consistent with a large scalar potential . Thus we can understand the origin of the large scale in the scalar channel from QCD symmetry constraints, unitarity, and naturalness. Loophole: We do not know of a loophole here. 9. Cancellations and fine-tuning of nuclear matter. The small binding energy of nuclear matter would appear to be a counter argument to the claim that the natural scale for nuclei is several hundred MeV. It would be valid, however, only if nuclear matter were an ordinary, nonrelativistic Fermi system. The existence of a minimum in the energy per particle suggests that different orders in the expansion of the energy in powers of the Fermi momentum $`k_\mathrm{F}`$ must be comparable. A logical conclusion is that this should occur only near the underlying mass scale, where all terms contribute roughly equally, and that the binding energy should be roughly of this scale. Yet the empirical equilibrium conditions are not consistent with this conclusion. In fact, nuclear matter appears to be an exceptionally fine-tuned system, with an equilibrium density far lower than expected for an ordinary Fermi liquid . Covariant formulations offer a compelling explanation: there is an interplay of different orders in the energy expansion, but it is highly restricted. In particular, repulsion from the $`k_\mathrm{F}^5`$ and $`k_\mathrm{F}^6`$ terms becomes important compared to the attraction from the $`k_\mathrm{F}^3`$ piece well below an underlying scale of order several hundred MeV. Furthermore, this happens because the coefficient of the $`k_\mathrm{F}^3`$ term is “unnaturally” small, roughly half the size one would expect from NDA estimates. In covariant models, this is a direct result of cancellations between Lorentz scalar and vector contributions that are each of natural size. The cancellations leading to a small $`k_\mathrm{F}^3`$ term do not recur in higher orders. This scenario can be tested using $`k_\mathrm{F}`$ expansions for the energy from phenomenologically successful nonrelativistic and relativistic mean-field models. If the coefficient of the $`k_\mathrm{F}^3`$ term in one of these expansions is doubled or tripled in size, equilibrium does occur at much higher density and the system is bound by 120 to 300 MeV or more (see Fig. 4). No other term in the expansion exhibits such a sensitivity. In nonrelativistic models, the cancellation at $`k_\mathrm{F}^3`$ has no direct explanation. If imposing the cancellation were desirable, one would expect cancellations to occur at higher orders in the expansion. However, an analysis of nonrelativistic Skyrme energy functionals finds energy contributions that are consistent with NDA counting and a hidden scale at leading order only (comparable to the filled symbols at $`n=2`$ in Fig. 1) . Furthermore, we know of no alternative dimensional analysis based on binding-energy scales that can account for the size of these energy contributions. Loophole: The cancellation in the leading term is only at the 50% level, which could still be considered natural, without resorting to explanations based on scalar–vector fine tuning. In any case, based on the arguments of Jackson , the extremely low empirical equilibrium density of nuclear matter is not consistent with a typical, nonrelativistic, velocity-independent NN potential. 10. Ambiguity in nuclear matter saturation. 
If different nonrelativistic potentials, each fit to NN scattering, are used to calculate the equilibrium binding energy of nuclear matter, the results fall along a line (the “Coester line”) with a spread of 15 MeV or more. This spread, which has usually been attributed to “off-shell” variations in the NN potentials, actually arises because the nuclear matter calculation requires an interaction that is also calibrated to three-body (and, in principle, many-body) on-shell amplitudes. This has not been done in calculations leading to the Coester line, since only two-body data was used as input in these nonrelativistic calculations. The magnitude of the variation in equilibrium binding energies would be difficult to understand if the underlying scale of the two-body interaction were only 50 MeV. Large covariant two-body potentials in a relativistic formulation, however, imply sizable three-body contributions in the corresponding nonrelativistic calculation. These contributions are consistent with the spread of the Coester line. One would expect a smaller spread in relativistic-model predictions of the equilibrium point, and this is observed in practice . Moreover, two-hole-line Dirac–Brueckner–Hartree–Fock calculations using potentials with natural scales can reproduce both NN scattering observables and the nuclear matter equilibrium point simultaneously . Loophole: There have been fewer systematic studies of relativistic predictions compared to nonrelativistic predictions, so the relativistic spread may be underestimated. 11. QCD sum rules for nucleons at finite density. The QCD sum-rule method relates ground-state matrix elements of QCD operators, such as the quark condensate, to spectral properties of hadrons (e.g., masses) . Adapted to finite density, it relates the density dependence of condensates to relative residues at the nucleon quasiparticle poles, which can be used to predict on-shell scalar and vector self-energies . The mass scales of QCD are directly incorporated into the analysis through the condensates. The key feature of the finite-density sum rule analysis is the covariant decomposition of a correlator of nucleon interpolating fields. The quasiparticle pole position is unchanged within the coarse resolution of the sum rule approach, but the self-energies extracted from the correlator residues are sizable. This is consistent with weak binding but large covariant potentials. A detailed sum-rule analysis predicts in-medium scalar and vector self-energies of close to 300 MeV (albeit with large error bars) . Loophole: Many provisional assumptions must be made to carry out the QCD sum-rule analysis . ## III Epilog In summary, we have argued that a covariant formulation of nuclear physics has the advantage of manifesting the underlying scales of QCD. The common signatures of these scales are large nucleon scalar and vector self-energies. This connection is shown through both theoretical and empirical considerations of naturalness in covariant analyses of NN scattering and nuclear properties. The manifestation of scales translates in many instances into simpler, more efficient, or more natural explanations of nuclear phenomena than in nonrelativistic formulations. Examples include the spin-orbit force, the nucleon–nucleon potential, the energy dependence of the proton–nucleus optical potential, pseudo-spin doublets, and the cancellations observed in energy functionals of nuclear matter. 
The pieces of evidence supporting a representation with large nucleon scalar and vector potentials, while not definitive when considered individually, collectively comprise a compelling positive argument. The evidence shows that the natural scales are not introduced “artificially” in a covariant formalism, and that the small binding energy (2%) of nuclear matter arises because it is a finely tuned fermionic system. Of course, the argument would be moot if there were direct experimental evidence that rules out the possibility of large potentials. We are unaware of any such evidence. At one time it was thought that predictions in relativistic models for isoscalar magnetic moments of odd-$`A`$ nuclei are strongly enhanced compared to the data, which are close to the Schmidt predictions . Naively, the baryon current of a nucleus with a valence nucleon of momentum $`p`$ outside a closed shell is $`p/M^{\ast }`$, compared to the Schmidt current $`p/M`$. However, if the calculation is forced to respect Lorentz covariance and the first law of thermodynamics, the nuclear current is constrained to be $`p/\mu `$, where $`\mu \approx M`$ is the chemical potential . Thus there is no enhancement in a consistent relativistic framework. The situation for currents at low $`q>0`$ is still an open problem, and should be re-examined in the context of modern EFT-inspired models. We have emphasized throughout that relativistic and nonrelativistic formulations are not mutually exclusive alternatives: both should work, although possibly in very different ways. Parallel EFT model calculations of the phenomena discussed here, such as the energy dependence of the optical potential, would more firmly establish the connections between relativistic and nonrelativistic explanations. While we have focused on the naturalness of covariant models, there is also the pragmatic question of the convergence of relativistic and nonrelativistic EFT expansions. Large nucleon fields can mean that a Foldy–Wouthuysen reduction may converge slowly, even though $`p/M`$ is small, because this is also an expansion in the ratio of potential strength to the nucleon mass. The relative convergence rates, particularly for spin properties, merit further examination. ###### Acknowledgements. We thank H. Mueller and N. Tirfessa for useful comments. One of us (B.D.S.) thanks the Ohio State University physics department for its hospitality and financial support during the course of this research. This work was supported in part by the National Science Foundation under Grant No. PHY–9800964 and by the U.S. Department of Energy under Contract No. DE-FG02-87ER40365.
# Metastability and nucleation in the dilute fluid phase of a simple model of globular proteins ## 1 Introduction Metastability is the persistence for a long time of a phase which is not the equilibrium phase. It can be both a blessing and a curse. In protein solutions it is a curse. Protein crystals are required for X-ray crystallography to determine their full structure, but protein solutions at concentrations well in excess of the solubility of the crystalline phase are often stable essentially indefinitely; the rate of nucleation of the crystalline phase is essentially zero. Here, we consider a crude model of a globular protein and we find that, depending on the parameters of the model, the dilute fluid phase may be stable indefinitely with respect to crystallisation. If the solution is cooled at some low density, it is stable with respect to crystallisation down to temperatures at which the solution undergoes a fluid-fluid transition. This transition has been observed in protein solutions . The agreement in the phase and nucleation behaviour between the simple model and experiment is encouraging. It is clear what underlies the behaviour of the model, and we may hope that similar physics underlies the behaviour of globular proteins. The fluid is metastable for a wide range of parameters and temperatures because, the attractions being directional and short-ranged, the crystal is only stable when these attractions are strong relative to the thermal energy $`kT`$. These strong attractions mean that the interfacial tension between the dilute fluid and crystalline phases is high, and it is this that inhibits crystallisation. This suggests that to increase the nucleation rate the attractions should be modified to become more like those in argon or other simple atoms and molecules, i.e., to become less anisotropic and longer ranged. The phase diagram will correspondingly become more like that of simple atomic fluids. The interactions between globular proteins are rather poorly understood, but it seems clear that many of the attractive interactions are directional and quite short-ranged ; two protein molecules must not only be close to each other to attract each other but they must also be correctly oriented. An example is the attraction between hydrophobic patches on the surfaces of globular proteins; only if the proteins are oriented so that these parts of their surfaces face each other is there an attraction. So, our model, specified in section 2, contains directional attractions; in fact for simplicity it contains only directional attractions. The model was introduced by us and its bulk phase behaviour calculated in Ref. . Here, we carefully define metastability and derive an approximate theory to tell us when the dilute fluid is metastable and when nucleation occurs. We then present and discuss results, and finish with a conclusion. ## 2 Model Our model is exactly the same as in Ref. . The potential is a pair potential $`\varphi `$ which is a sum of two parts: a hard-sphere repulsion, $`\varphi _{hs}`$, and a set of sites which mediate short-range, directional attractions. There are $`n_s`$ sites, where $`n_s`$ is an even integer. In order to keep the model as simple as possible there are no isotropic attractions and all the directional attractions are of the same strength. The sites come in pairs: a site on one particle binds only to the other site of the pair on another particle. 
The two sites of a pair are numbered consecutively so that an odd-numbered site, $`i`$, binds only to the even-numbered site, $`i+1`$. This is the only interaction between the sites: an odd-numbered site, $`i`$, does not interact at all with sites other than the $`(i+1)`$th site. The orientation of site number $`i`$ is specified by means of a unit vector $`𝐮_i`$. We can write the interaction potential between a pair of particles as $$\varphi (r_{12},\mathrm{\Omega }_1,\mathrm{\Omega }_2)=\varphi _{hs}(r_{12})+\sum _i^{}\left[\varphi _{ii+1}(r_{12},\mathrm{\Omega }_1,\mathrm{\Omega }_2)+\varphi _{ii+1}(r_{12},\mathrm{\Omega }_2,\mathrm{\Omega }_1)\right],$$ (1) where the dash on the sum denotes that it is restricted to odd values of $`i`$. The interactions between the sites on the two particles are $`\varphi _{ii+1}(r_{12},\mathrm{\Omega }_1,\mathrm{\Omega }_2)`$, which is the interaction between site $`i`$ on particle 1 and site $`(i+1)`$ on particle 2, and $`\varphi _{ii+1}(r_{12},\mathrm{\Omega }_2,\mathrm{\Omega }_1)`$, which is the interaction between site $`i`$ on particle 2 and site $`(i+1)`$ on particle 1. These are functions of $`r_{12}`$, $`\mathrm{\Omega }_1`$ and $`\mathrm{\Omega }_2`$, which are the scalar distance between the centres of particles 1 and 2, the orientation of particle 1 and the orientation of particle 2, respectively. The particle is rigid, but not axially symmetric, so its position is completely specified by the position of its centre and its orientation $`\mathrm{\Omega }`$, which may be expressed in terms of the three Euler angles. The hard-sphere potential, $`\varphi _{hs}`$, is given by $$\varphi _{hs}(r)=\{\begin{array}{cc}\mathrm{}\hfill & r\le \sigma \hfill \\ 0\hfill & r>\sigma \hfill \end{array},$$ (2) where $`\sigma `$ is the hard-sphere diameter. The conical-site interaction potential $`\varphi _{ii+1}`$ is given by $$\varphi _{ii+1}(r_{12},\mathrm{\Omega }_1,\mathrm{\Omega }_2)=\{\begin{array}{cc}-ϵ\hfill & r_{12}\le r_c\text{ and }\theta _{1i}\le \theta _c\text{ and }\theta _{2i+1}\le \theta _c\hfill \\ 0\hfill & \text{otherwise}\hfill \end{array},$$ (3) where $`\theta _{1i}`$ is the angle between a line joining the centres of the two particles and the unit vector $`𝐮_i`$ of particle 1, and $`\theta _{2i+1}`$ is the angle between a line joining the centres of the two particles and the unit vector $`𝐮_{i+1}`$ of particle 2. The conical-site potential depends on two parameters: the range, $`r_c`$, and the maximum angle at which a bond is formed, $`\theta _c`$. Of course, as the attractions are directional, $`\theta _c`$ will be small, no more than about $`30^{\circ }`$. The attractions are also short-ranged, $`r_c`$ being no more than 10% larger than $`\sigma `$. The angles between the site orientations, the vectors $`𝐮_i`$, will determine which crystal lattice is formed. For simplicity, we will take the sites to be arranged such that they are compatible with a simple cubic lattice. Then, if we express the unit vectors $`𝐮_i`$ in Cartesian coordinates, $`(x,y,z)`$, when we have four sites, $`n_s=4`$, the set of vectors $`𝐮_1=(1,0,0)`$, $`𝐮_2=(-1,0,0)`$, $`𝐮_3=(0,1,0)`$ and $`𝐮_4=(0,-1,0)`$ would describe our model. For six sites we add two additional sites at orientations $`𝐮_5=(0,0,1)`$ and $`𝐮_6=(0,0,-1)`$. Later on, when we discuss the metastable fluid, we will discuss rates. In order to do this we let $`\tau `$ be the characteristic time for the dynamics in the dilute fluid. 
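To fix the geometry, here is a sketch of eqs. (1)–(3) in Python for the four-site model just defined. The values $`r_c=1.05\sigma `$ and $`\theta _c=27^{\circ }`$ are illustrative choices within the stated bounds, not parameters taken from the paper.

```python
import numpy as np

SIGMA, EPS = 1.0, 1.0
RC, THETA_C = 1.05*SIGMA, np.deg2rad(27.0)      # illustrative range and cone angle

# four-site model: rows are u_1..u_4; odd site i bonds only to even site i+1
U = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0]], dtype=float)

def pair_energy(r12_vec, R1, R2):
    """Pair potential of eqs. (1)-(3); R1, R2 are rotation matrices giving the
    particle orientations and r12_vec points from particle 1 to particle 2."""
    r12 = np.linalg.norm(r12_vec)
    if r12 <= SIGMA:
        return np.inf                            # hard-sphere overlap, eq. (2)
    if r12 > RC:
        return 0.0
    rhat = r12_vec / r12
    u1, u2 = R1 @ U.T, R2 @ U.T                  # site vectors in the lab frame
    e = 0.0
    for i in range(0, U.shape[0], 2):            # odd-numbered sites (0-based even)
        # site i on particle 1 with site i+1 on particle 2 ...
        if (np.arccos(np.clip(u1[:, i] @ rhat, -1, 1)) <= THETA_C and
                np.arccos(np.clip(-(u2[:, i+1] @ rhat), -1, 1)) <= THETA_C):
            e -= EPS
        # ... and site i on particle 2 with site i+1 on particle 1
        if (np.arccos(np.clip(-(u2[:, i] @ rhat), -1, 1)) <= THETA_C and
                np.arccos(np.clip(u1[:, i+1] @ rhat, -1, 1)) <= THETA_C):
            e -= EPS
    return e

# two aligned particles 1.02*sigma apart bond through sites 1 and 2
print(pair_energy(np.array([1.02, 0.0, 0.0]), np.eye(3), np.eye(3)))   # -> -1.0
```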
## 3 Theory for the bulk phases

The free energies of the fluid and crystalline phases were derived in our previous paper, Ref. , so we only outline their derivations here. See Ref. for details.

### 3.1 Theory for the fluid phase

The theory for the fluid phase of particles interacting via a hard core and directional attractions mediated by sites is well established . Our theory is based on the generalisation of Chapman, Jackson and Gubbins of Wertheim’s perturbation theory . The perturbation theory gives for the Helmholtz free energy per particle of the fluid phase, $`a_f`$, $$\beta a_f(\eta ,T)=\beta a_{hs}(\eta )+n_s\beta \mathrm{\Delta }a(\eta ,T),$$ (4) where $`a_{hs}`$ is the Helmholtz free energy per particle of a fluid of hard spheres, and $`\mathrm{\Delta }a`$ is the change in free energy per bonding site due to bonding, $$\beta \mathrm{\Delta }a=\mathrm{ln}X+\frac{1}{2}(1-X).$$ (5) We use an accurate expression for $`a_{hs}`$ derived from the Carnahan–Starling equation for the pressure . The volume fraction $`\eta =(N/V)(\pi /6)\sigma ^3`$ is a reduced density: it is the fraction of the solution’s volume occupied by the molecules. $`N`$ and $`V`$ are the number of molecules and the volume, respectively. $`\beta =1/kT`$, where $`k`$ is Boltzmann’s constant and $`T`$ is the temperature. $`X`$ is the fraction of sites which are not bonded to another site. As all site-site interactions are equivalent the fraction of each type of site which is not bonded is the same. The fraction of sites which are bonded and the fraction which are not bonded must, of course, add up to one. Thus we can simply write down a mass-action equation for $`X`$, $$1=X+\rho X^2Kg_{hs}^c(\eta )\mathrm{exp}(\beta ϵ),$$ (6) where $`g_{hs}^c`$ is the contact value of the pair distribution function of a fluid of hard spheres, and $`\rho =(N/V)\sigma ^3`$. The volume of phase space (both translational and orientational coordinates) over which a bond exists is $`K`$ , $$K=\pi \sigma ^2(r_c-\sigma )(1-\mathrm{cos}\theta _c)^2.$$ (7) The mass-action equation, Eq. (6), is quadratic in $`X`$ and can be solved analytically. Inserting this solution in Eq. (5) and then the result into Eq. (4) yields the Helmholtz free energy as a function of density and temperature. The state of our single-component fluid is specified by the ratio of the site energy to the thermal energy, $`\beta ϵ`$, and the volume fraction, $`\eta `$. Note that Eq. (6) is not quite the same as the equivalent equations in Refs. . In those references $`\mathrm{exp}(\beta ϵ)`$ is replaced by $`\mathrm{exp}(\beta ϵ)-1`$. As $`\beta ϵ`$ is quite large, five or more, the difference between the two is very small. Also, our $`K`$ is $`4\pi `$ times the $`K_{AB}`$ of Ref. . The second virial coefficient $`B_2`$ was obtained in Ref. . It is $$B_2=B_2^{hs}-\frac{n_s}{2}K\mathrm{exp}(\beta ϵ),$$ (8) where $`B_2^{hs}=(2\pi /3)\sigma ^3`$ is the second virial coefficient of hard spheres.
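The whole fluid theory, Eqs. (4)–(7), reduces to a few lines of code. The sketch below works in reduced units ($`\sigma =1`$, energies in units of $`kT`$), uses the standard Carnahan–Starling forms for the hard-sphere free energy and contact value, and sets the thermal volume in the ideal-gas term to $`\sigma ^3`$, an arbitrary constant that drops out when the chemical potentials of coexisting phases are equated; the state point in the example call is ours, not one from the paper's figures.

```python
import numpy as np

def fluid_free_energy(eta, beta_eps, n_s=6, r_c=1.1, theta_c=np.radians(30.0)):
    """beta*a_f of Eqs. (4)-(7) in reduced units (sigma = 1, thermal volume = sigma^3).

    eta is the volume fraction, beta_eps = epsilon/kT. Returns (beta*a_f, X)."""
    rho = eta * 6.0 / np.pi                                   # from eta = (pi/6) rho sigma^3
    beta_a_id = np.log(rho) - 1.0                             # ideal part, thermal volume = 1
    beta_a_cs = eta * (4.0 - 3.0 * eta) / (1.0 - eta) ** 2    # Carnahan-Starling excess
    g_c = (1.0 - 0.5 * eta) / (1.0 - eta) ** 3                # CS contact value g_hs^c
    K = np.pi * (r_c - 1.0) * (1.0 - np.cos(theta_c)) ** 2    # Eq. (7), sigma = 1
    # Eq. (6) rearranged: A X^2 + X - 1 = 0 with A = rho K g exp(beta_eps)
    A = rho * K * g_c * np.exp(beta_eps)
    X = (-1.0 + np.sqrt(1.0 + 4.0 * A)) / (2.0 * A)           # physical root, 0 < X <= 1
    beta_da = np.log(X) + 0.5 * (1.0 - X)                     # Eq. (5)
    return beta_a_id + beta_a_cs + n_s * beta_da, X           # Eq. (4)

a_f, X = fluid_free_energy(eta=0.05, beta_eps=6.0)
X_p = 1.0 - 1.0 / (6 - 1)   # classical percolation threshold, Eq. (27) below
print(f"beta*a_f = {a_f:.2f}, unbonded fraction X = {X:.3f} (percolation at X_p = {X_p:.1f})")
```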
### 3.2 Theory for the crystalline phase

At low temperature, crystallisation is driven by the attractive interactions, not packing effects as it is with hard spheres. In Ref. we used a cell theory to describe the free energy of the crystalline phase of our model . The theory is a low-temperature theory and we will use it only at low temperatures. Vega and Monson used a cell theory to describe the solid phase of a very similar model, a simple model of water. They avoid a couple of the approximations used here at the cost of not having an analytical free energy. Within a cell theory for a solid phase, the Helmholtz free energy per particle, $`a_s`$, is given by $$\beta a_s(\eta ,T)=-\mathrm{ln}q_P$$ (9) where $`q_P`$ is the partition function of a single particle trapped in a cage formed by the requirements that all its $`n_s`$ sites bond to neighbouring particles, and that its hard core not overlap with any of these neighbours. If the lattice constant is $`b`$, then the particle can move a distance $`b-\sigma `$ in the direction of any of its neighbours without overlapping with that neighbour. In order for the bonds not to be broken the particle must always be within $`r_c`$ of the surrounding particles. This fixes the lattice constant, $`b`$, at a little less than $`r_c`$. It is a little less because when the particle moves off the lattice site it will be moving towards some of its neighbours and away from others. Thus it can explore regions where it is further than $`b`$ from some of its neighbours. The exact value of the maximum lattice constant for which the particle can move about, constrained by the hard-sphere interactions, without breaking any bond, is difficult to estimate; as is the volume available to the centre of mass of the particle . Therefore, we approximate the lattice constant $`b`$ by $`r_c`$ and the volume to which the particle is restricted by $`(r_c-\sigma )^3`$. The requirement that no bonds be broken also severely restricts the orientations of the particle. When a non-axially symmetric particle is free to rotate it explores an angular phase space of $`8\pi ^2`$. However, in the crystal its rotations will be restricted to those which are small enough not to violate the requirement that the orientations of its site vectors are within $`\theta _c`$ of the lines joining the centre of the particle with those of the neighbouring particles. Again the exact angular space available to the particle is complicated, and it also depends on the position of the particle. We approximate this angular space by assuming that each of the three angular degrees of freedom can vary independently over a range of $`2\theta _c`$. The normalised angular space available to a particle in the solid phase is then $`(2\theta _c)^3/8\pi ^2=\theta _c^3/\pi ^2`$. The energy per particle is, of course, $`-(n_s/2)ϵ`$, and so the partition function, $`q_P`$, is then just the volume available to the centre of mass of the particle times the angular space available times $`\mathrm{\Lambda }^{-1}\mathrm{exp}[(n_s/2)\beta ϵ]`$, where $`\mathrm{\Lambda }^{-1}`$ is the integral over the momentum degrees of freedom. Thus, we have for $`q_P`$, $$q_P=v_P\mathrm{\Lambda }^{-1}\mathrm{exp}\left(\frac{n_s}{2}\beta ϵ\right),$$ (10) where $$v_P=(r_c-\sigma )^3\left(\frac{\theta _c^3}{\pi ^2}\right).$$ (11) Inserting Eq. (10) for $`q_P`$ into Eq. (9), $$\beta a_s=-\mathrm{ln}\left(v_P/\mathrm{\Lambda }\right)-\frac{n_s}{2}\beta ϵ=\beta \mu _s.$$ (12) This is the free energy at a lattice constant of $`r_c`$. The maximum possible density of a simple-cubic lattice is reached when the lattice constant $`b=\sigma `$; then the density is $`\sigma ^{-3}`$. This density corresponds to a volume fraction $`\eta =\pi /6`$. When the lattice constant is $`r_c`$, the density is $`r_c^{-3}`$ and the volume fraction is $`(\pi /6)(\sigma /r_c)^3`$.
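In the same reduced units the cell theory is a one-liner; a minimal sketch of Eqs. (11) and (12), again with $`\mathrm{\Lambda }=\sigma ^3`$ so that the result can be compared directly with the fluid chemical potential from the sketch above:

```python
import numpy as np

def solid_chemical_potential(beta_eps, n_s=6, r_c=1.1, theta_c=np.radians(30.0)):
    """beta*mu_s of Eq. (12) in reduced units (sigma = 1, Lambda = sigma^3)."""
    v_P = (r_c - 1.0) ** 3 * theta_c ** 3 / np.pi ** 2   # Eq. (11)
    return -np.log(v_P) - 0.5 * n_s * beta_eps           # Eq. (12)

# The coexisting solid sits near its minimum density r_c**-3, i.e. a volume
# fraction (pi/6)*(1/r_c)**3; the fluid side follows by equating mu_f = mu_s.
print(solid_chemical_potential(beta_eps=6.0))
```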
We are interested in finding coexistence between the crystal phase and the fluid phase at low temperature, when our assumption that no bonds are broken in the solid phase will be accurate. Then the pressure at coexistence will be low and the solid will be near its minimum possible density, $`r_c^{-3}`$. The chemical potential $`\mu _s=a_s+p_s/\rho `$ where $`p_s`$ is the pressure and $`\rho `$ is the density. At low pressure $`p_s/\rho `$ contributes a negligible amount to the chemical potential, which enables us to equate $`a_s`$ and $`\mu _s`$ as we have done in Eq. (12). The coexisting fluid density at the fluid-solid transition is then found by equating the chemical potentials in the two phases. The density of the coexisting solid phase, when the temperature is low enough that solidification is driven by the attractive interactions not packing effects, is assumed constant at $`r_c^{-3}`$. See Ref. for details.

## 4 Crystalline clusters

We derive a simple but rather crude approximation for the equilibrium density of crystalline clusters in a dilute fluid. The approximations used are similar in spirit to our calculation of the interfacial tension between the crystal and dilute fluid phases of spheres with a short-range isotropic attraction . We will assume that the interface between the cluster and the surrounding dilute fluid is sharp and that the interaction between the crystalline cluster and the surrounding fluid is weak. Both these assumptions are reasonable if the fluid is dilute but not if it is dense or near a fluid-fluid critical point . Thus we will only be able to predict the densities of crystalline clusters, and therefore the nucleation rate of the crystalline phase, in the dilute fluid. We require the density of crystalline clusters of $`n`$ particles, $`\rho _c(n)`$, in a dilute gas. To find this we start from the $`n`$-particle distribution function, $`\rho ^{(n)}(1\ldots n)`$, in the grand-canonical ensemble $$\rho ^{(n)}(1\ldots n)=\frac{\sum _N\frac{z^N}{(N-n)!}\int \mathrm{d}(n+1)\ldots \mathrm{d}(N)\mathrm{exp}\left(-\beta U\right)}{\sum _N\frac{z^N}{N!}\int \mathrm{d}(1)\ldots \mathrm{d}(N)\mathrm{exp}\left(-\beta U\right)},$$ (13) where $`(i)`$ is a compact form for the positional, $`𝐫_i`$, and orientational, $`\mathrm{\Omega }_i`$, coordinates of molecule $`i`$, and $`(1\ldots n)`$ indicates that $`\rho ^{(n)}`$ is a function of the set of $`n`$ coordinates of the $`n`$ molecules. $`U`$ is the total energy of the fluid and depends on all $`N`$ coordinates. $$z=\mathrm{\Lambda }^{-1}\mathrm{exp}(\beta \mu )$$ (14) is the activity. Equation (13) gives the density of an $`n`$-tuple of particles with coordinates $`(1\ldots n)`$ in the fluid. We want the density of an $`n`$-tuple of molecules which are in a configuration which is compatible with the $`n`$ molecules being part of a single compact crystalline cluster. Therefore we integrate over all the positions of the $`n`$ particles which are consistent with the $`n`$ particles forming a crystalline cluster, and over no other positions.
Integration over all $`n`$ coordinates will give us the total number of crystalline clusters; to obtain the number density $`\rho _c(n)`$ (here $`(n)`$ indicates the dependence of $`\rho _c`$ on $`n`$, the number of molecules in the cluster, not that $`\rho _c`$ depends on the coordinates of the $`n`$th molecule) we divide by the volume, $$\rho _c(n)=\frac{1}{n!V}{\int }^{\prime }\mathrm{d}(1)\ldots \mathrm{d}(n)\rho ^{(n)}(1\ldots n),$$ (15) where the dash on the integration sign indicates that the integration is restricted to those configurations of the $`n`$ particles which are consistent with them forming a cluster. The factor of $`1/n!`$ is present because the particles are indistinguishable and so the integral integrates over configurations which differ only by the exchange of indistinguishable particles. As we are assuming that the cluster is in an ideal gas, Eq. (13) simplifies: we set the energy of interaction to zero except for the energy of interaction between the $`n`$ particles in the cluster. Then the integral in the denominator of Eq. (13) is simply $`V^N`$ and that in the numerator is $`V^{N-n}\mathrm{exp}(-\beta u(1\ldots n))`$, where $`u(1\ldots n)`$ is the energy of interaction of $`n`$ molecules. So Eq. (13) becomes $$\rho ^{(n)}(1\ldots n)=\frac{\sum _N\frac{z^NV^{N-n}}{(N-n)!}\mathrm{exp}\left(-\beta u(1\ldots n)\right)}{\sum _N\frac{z^NV^N}{N!}}.$$ (16) Substituting this in Eq. (15), $$\rho _c(n)=\frac{\sum _N\frac{z^NV^{N-n}}{(N-n)!}{\int }^{\prime }\mathrm{d}(1)\ldots \mathrm{d}(n)\mathrm{exp}\left(-\beta u(1\ldots n)\right)}{n!V\sum _N\frac{z^NV^N}{N!}}.$$ (17) We can take $`z^n`$ times the integral out of the sum in the numerator, leaving the sum in the numerator identical to that in the denominator. They cancel, leaving $$\rho _c(n)=\frac{z^n}{Vn!}{\int }^{\prime }\mathrm{d}(1)\ldots \mathrm{d}(n)\mathrm{exp}\left(-\beta u(1\ldots n)\right).$$ (18) The density of crystalline clusters of $`n`$ molecules in an ideal gas is simply $`z^n/Vn!`$ times the configurational integral of $`n`$ molecules in a cluster. As in the cell theory for a bulk crystal we factorise the integration of Eq. (18) into a product of $`n`$ integrals and delete the factor of $`1/n!`$, as once the molecules are restricted to lie in cells they are distinguishable. Now, one of the $`n`$ integrations is over the whole volume $`V`$ of the fluid; the other $`(n-1)`$ are just over the rattling motion as in the bulk and they each give a factor of $`v_P`$. The energy is taken to be the ground state energy as in the bulk and so is $`-nn_sϵ/2+u_s(n)`$, where $`u_s`$ is the increase in energy due to broken bonds at the surface of the cluster. So, we have that Eq. (18) becomes $$\rho _c(n)=z^nv_P^{n-1}\mathrm{exp}\left[\frac{nn_s}{2}\beta ϵ-\beta u_s\right].$$ (19) The spheres at the faces of the cluster do not interact with the full $`n_s`$ other spheres and this increases the energy of a cluster. If we assume that the cluster of $`n`$ molecules is cubic then it has 6 faces, each of area $`n^{2/3}\sigma ^2`$, i.e., with $`n^{2/3}`$ molecules in each face. For $`n_s=6`$ there are sites pointing in all 6 directions and a sphere at any of the 6 faces, but not at an edge or corner, has one bond broken. So assuming that the cluster is cubic, neglecting the fact that some spheres are at edges and some at corners and therefore have 2 or 3 bonds, not 1 bond, broken, and treating $`n`$ as a continuous variable, results in the approximation that there are $`6n^{2/3}`$ bonds broken on the surface of the cluster.
Each broken bond costs an energy $`ϵ/2`$ — the energy of a bond is $`-ϵ`$, with $`-ϵ/2`$ assigned to each of the two particles forming the bond. Thus, for $`n_s=6`$, the increase in energy is $`u_s=3n^{2/3}ϵ`$. With this expression for $`u_s`$ Eq. (19) becomes $$\rho _c(n)=z^nv_P^{n-1}\mathrm{exp}\left[\frac{n_sn}{2}\beta ϵ-3n^{2/3}\beta ϵ\right]\qquad n_s=6.$$ (20) The approximation $`u_s=3n^{2/3}ϵ`$ becomes worse as $`n`$ decreases but it is never seriously wrong. Indeed for the smallest cluster we consider, that of 8 spheres, there is a cancellation of errors and there are exactly $`6\times 8^{2/3}=24`$ bonds broken. For $`n=9`$ we predict 26.0 bonds broken when in fact there are 28 broken bonds, but this is not a large error. For $`n_s=4`$ only 4 of the 6 faces involve broken bonds, because there are no attraction sites on 2 faces. So instead the energy cost is only two thirds that for 6 sites and the increase in energy is $`2n^{2/3}ϵ`$. For $`n_s=4`$ or 6 the increase in energy is given by $`(n_s/2)n^{2/3}ϵ`$. So far we have assumed that the cluster does not interact with any of the surrounding spheres. This is reasonable for a very dilute fluid. However for a fluid which is not very dilute and is at low temperature, spheres in the surrounding fluid will tend to bond to the spheres in the faces of the cluster. We can take this into account approximately by treating the sites on the faces of a crystalline cluster as if they were sites in the fluid; then for each site there is a free energy change given by Eq. (5) — which reflects the fact that it can bond to one of the surrounding spheres. The change to the configurational integral is then of course $`\mathrm{exp}(-\beta \mathrm{\Delta }a)`$ per surface site. The cluster density is then $$\rho _c(n)=z^nv_P^{n-1}\mathrm{exp}\left[\frac{n_sn}{2}\beta ϵ-n_sn^{2/3}\left(\frac{\beta ϵ}{2}+\beta \mathrm{\Delta }a\right)\right],$$ (21) where $`X`$ in Eq. (5) for $`\mathrm{\Delta }a`$ is the same as in the surrounding fluid.
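Since these cluster densities can be astronomically small (values like $`𝒪(10^{-21}\sigma ^{-3})`$ appear below), any numerical evaluation of Eq. (21) is best done in logarithms. A minimal sketch, continuing the reduced-unit conventions of the earlier snippets:

```python
import numpy as np

def log_cluster_density(n, beta_mu, beta_eps, beta_da, n_s=6,
                        r_c=1.1, theta_c=np.radians(30.0)):
    """ln[rho_c(n) * sigma^3] from Eq. (21), in reduced units (sigma = 1,
    Lambda = sigma^3 as in the sketches above, so ln z = beta*mu by Eq. (14)).

    beta_da is beta*Delta_a of Eq. (5), evaluated in the surrounding fluid."""
    v_P = (r_c - 1.0) ** 3 * theta_c ** 3 / np.pi ** 2       # Eq. (11)
    return (n * beta_mu + (n - 1.0) * np.log(v_P)
            + 0.5 * n_s * n * beta_eps
            - n_s * n ** (2.0 / 3.0) * (0.5 * beta_eps + beta_da))
```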
## 5 Metastability and nucleation

Consider the density of clusters $`\rho _c(n)`$ of Eq. (21). For large $`n`$, $`\rho _c`$ is dominated by the part $`(zv_P\mathrm{exp}[(n_s/2)\beta ϵ])^n`$ as the other parts vary only as the $`n^{2/3}`$ power or are constants. Using Eqs. (12) and (14), we obtain $$zv_P\mathrm{exp}[(n_s/2)\beta ϵ]=\mathrm{exp}(\beta \mu )\mathrm{exp}(-\beta \mu _s).$$ (22) But as we are within the fluid-crystal coexistence region the chemical potential of the crystal, $`\mu _s`$, is less than that of the fluid phase, $`\mu `$. So, the quantity in Eq. (22) is greater than 1 and hence $`\rho _c(n)`$ diverges as $`n\to \mathrm{\infty }`$. This is actually an automatic consequence of the fact that the crystal is more stable than the fluid. So, our Eq. (21) predicts that in the fluid there are high densities of large crystalline clusters. This is of course not what is observed in a metastable fluid. This is because our calculation of Eq. (21) assumed that the densities of all clusters were at equilibrium, whereas in a metastable fluid the system is by definition not at equilibrium. In order to describe a metastable fluid, a fluid which is out of true equilibrium, we must apply a constraint; see Refs. for definitions and discussions of the application of constraints to study metastable fluids. This constraint must eliminate the large crystalline clusters to leave us with a fluid. We choose the constraint which eliminates all clusters above a size $`n_{min}`$: $$\rho _c(n)=0\qquad n>n_{min},$$ (23) where $`n_{min}`$ is defined by $$\rho _c(n_{min})=\underset{n}{\mathrm{min}}\left\{\rho _c(n)\right\},$$ (24) i.e., $`n_{min}`$ is the number of molecules in the cluster with the lowest density, as predicted by Eq. (21). So, our constrained distribution of cluster densities is $$\rho _c(n)=\{\begin{array}{cc}z^nv_P^{n-1}\mathrm{exp}\left[\frac{n_sn}{2}\beta ϵ-n_sn^{2/3}\left(\frac{\beta ϵ}{2}+\beta \mathrm{\Delta }a\right)\right],\hfill & n\le n_{min}\hfill \\ 0\hfill & n>n_{min}\hfill \end{array}.$$ (25) We set the constraint so as to eliminate all clusters above the size $`n_{min}`$ because this constraint is, in a specific sense, the least restrictive. It is the least restrictive because if we start with the constrained equilibrium distribution of clusters, which is given by Eq. (25), and then remove the constraint, i.e., allow clusters with $`n>n_{min}`$ to form, then the initial rate at which these clusters with $`n>n_{min}`$ form is minimised. This assumes that clusters only grow one molecule at a time; that a cluster with $`(n+1)`$ molecules is formed by a cluster of $`n`$ molecules adsorbing an additional molecule. This is a reasonable assumption in a dilute fluid in which the density of single molecules is much larger than the density of clusters of 2 or more molecules. With this assumption of growth one molecule at a time, the initial rate at which clusters with $`n>n_{min}`$ appear is just equal to the rate at which clusters of $`n_{min}`$ molecules acquire an additional molecule to become clusters of $`(n_{min}+1)`$ molecules, which is approximately $$\text{rate}\approx \rho _c(n_{min})\tau ^{-1}.$$ (26) Therefore, with our choice of constraint the initial rate at which the distribution of clusters changes when the constraint is removed is minimised. This is what we meant by the constraint being least restrictive. When the constraint is removed the distribution will tend towards the equilibrium one with its crystalline-cluster densities which diverge in the $`n\to \mathrm{\infty }`$ limit, i.e., the fluid will crystallise. If we neglect the fact that not all the clusters with $`n_{min}`$ molecules which gain an extra molecule will grow all the way into a crystallite, then the rate of nucleation of the crystalline phase is given by Eq. (26). In view of the highly approximate nature of our theory this neglect is reasonable, so Eq. (26) is our approximation for the nucleation rate. If $`\rho _c(n_{min})`$ is very small then, if the constraint is removed, the distribution of cluster densities will change only very slowly. Therefore the unconstrained fluid will persist for a long time, much longer than $`\tau `$, and so the unconstrained fluid phase is observable: it is metastable. However, if $`\rho _c(n_{min})`$ is not very small then as soon as the constraint is removed the unconstrained fluid starts to crystallise. The unconstrained fluid does not last long enough to be observable: it is unstable. What constitutes a very small density is of course rather arbitrary but we will try to quantify it when we discuss our results in the next section.
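Locating $`n_{min}`$ and evaluating Eq. (26) is then a one-dimensional minimisation over cluster size; a sketch using `log_cluster_density` from above (the lower limit $`n=8`$ is the smallest cluster considered here, and the scan is only meaningful when the fluid is supersaturated, so that the exponent of Eq. (21) really has a minimum):

```python
import numpy as np

def nucleation_rate(beta_mu, beta_eps, beta_da, n_s=6, n_max=10**6):
    """n_min of Eq. (24) and the rate of Eq. (26) in units of 1/(tau sigma^3),
    using log_cluster_density() from the sketch above.

    n_max is a purely numerical cutoff; the minimum must lie below it."""
    n = np.arange(8, n_max, dtype=float)   # n = 8 is the smallest cluster used
    ln_rho = log_cluster_density(n, beta_mu, beta_eps, beta_da, n_s=n_s)
    i = int(np.argmin(ln_rho))
    return int(n[i]), np.exp(ln_rho[i])    # (n_min, rate ~ rho_c(n_min)/tau)
```

The inputs $`\beta \mu `$ and $`\beta \mathrm{\Delta }a`$ for a given state point would come from the fluid-phase sketch of section 3.1.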
## 6 Results

Experiments on globular proteins have found metastable fluid–fluid transitions , i.e., a fluid-fluid transition which lies within the fluid-solid coexistence region. The crystallisation of proteins is often slow, taking several days, which allows the protein solution to be cooled into a region of the phase diagram where the fluid phase separates into two fluid phases of differing densities. Therefore, we show phase diagrams, in Figs. 2 and 3, in which the fluid-fluid transition lies within the fluid-solid coexistence region. For other values of the parameters of the model, $`n_s`$, $`\theta _c`$ and $`r_c`$, there is a stable fluid-fluid transition . Fig. 2 is the phase diagram of a model protein with 4 sites and Fig. 3 is the phase diagram for a model with 6 sites and a larger value of $`\theta _c`$. These two models were chosen as their phase diagrams were calculated and discussed in Ref. , and they differ markedly in how deep the fluid-fluid transition is into the fluid-solid coexistence region. In Fig. 2, the fluid-fluid critical point is at a volume fraction $`\eta =0.090`$ and at reciprocal temperature $`ϵ/kT=10.24`$. As a measure of how deep the fluid-fluid transition lies within the fluid-solid coexistence region we can use the ratio of the temperature at the critical point to that of a fluid of the same density which coexists with the solid. For the model of Fig. 2 the fluid at a volume fraction $`\eta =0.090`$ coexists with the solid at $`ϵ/kT=8.37`$. The ratio of the temperatures is then 0.82. For the model of Fig. 3 the critical point is at $`\eta =0.154`$ and $`ϵ/kT=7.18`$. A fluid with this density coexists with the solid phase at $`ϵ/kT=4.54`$. The ratio of the temperatures is now 0.63. Note that our temperature is a reduced temperature, a dimensionless ratio $`kT/ϵ`$. We have plotted our phase diagrams as a function of $`kT/ϵ`$ but this scale is not directly related to the real temperature of a protein solution, as the protein-protein interactions (which determine $`ϵ`$) vary with the temperature of the experiment. In Figs. 2 and 3 we have shown as a dot-dashed curve an estimate of where percolation occurs in the fluid. At percolation the association of the molecules is sufficiently strong that an infinite cluster appears , that is to say that there are an infinite number of molecules which are joined to each other via pathways of bonds. The percolation curve gives us an indication of when the density is too high or the interactions too strong for our approximation, that the crystalline clusters interact weakly with the surrounding fluid, to be valid. We will not use our approximation for the cluster densities, Eq. (25), beyond (i.e., to the right of) the percolation curve. See Ref. for an introduction to percolation. If we neglect loops of bonds we obtain what is called the classical theory of percolation, which predicts that percolation occurs at a fraction of bonds $`(1-X_p)`$ given by $$1-X_p=\frac{1}{n_s-1}\qquad \text{or}\qquad X_p=1-\frac{1}{n_s-1},$$ (27) where $`X_p`$ is the fraction of sites not bonded when percolation occurs. Now we will use Eq. (25) to calculate the cluster densities within the dilute fluid part of the fluid-solid coexistence region of the phase diagrams in Figs. 2 and 3. For the phase diagram of Fig. 2, the 4-site model, we have calculated cluster densities in the region of the phase diagram bounded at the right by the curve where percolation occurs, from below by the curve describing the density of the fluid phase which coexists with the crystal, and from above by the density of the dilute fluid phase which coexists with the dense fluid phase.
The approximations we used to calculate the cluster densities, $`\rho _c(n)`$, are only reasonable at low densities and away from a critical point. The region is bounded from above by the fluid-fluid coexistence curve as we expect the fluid to become unstable with respect to condensation a little inside the coexistence curve, and so our calculated cluster densities are meaningless there. We expect condensation to occur only a little into the fluid-fluid coexistence region as we expect that the interfacial tension between the two fluid phases will be small and therefore that nucleation of the dense fluid phase will be rapid except very near the coexistence curve. Throughout this region the densities of crystalline clusters of all sizes $`n=8`$ and up are tiny. For example, at $`\eta =0.1`$ and $`\beta ϵ=9`$ the density of crystalline clusters of 8 spheres is $`\rho _c(8)=𝒪(10^{-21}\sigma ^{-3})`$ and as $`n`$ increases the density rapidly decreases. So, the density of even small crystalline clusters is negligible. The nucleation rate, Eq. (26), is effectively zero and the dilute fluid phase will be stable with respect to crystallisation effectively indefinitely: it is metastable. This finding that the crystal cannot nucleate from a dilute fluid is interesting, as experiments on solutions of many proteins find it difficult or impossible to achieve crystallisation. The nucleation rate is so low because the nuclei, the crystalline clusters, have extremely low densities. This can be traced to the interfacial term in our expression for $`\rho _c`$, Eq. (21). This is the second term in the exponential, which varies as the number of molecules at the surface, as $`n^{2/3}`$. It is large because under the conditions at which the crystal coexists with a dilute fluid the ratio between the attraction energy and the thermal energy, $`ϵ/kT`$, is large. At the surface of the cluster bonds are broken and each broken bond decreases the density of a nucleus by a factor of $`\mathrm{exp}(\beta ϵ/2)`$, which is rather large. In the language of classical nucleation theory the barrier to nucleation is high because the surface tension is high. The surface tension $`\gamma `$ here comes from the energy of the broken bonds, $`\gamma \approx (1/2)ϵ\sigma ^{-2}+\mathrm{\Delta }a\sigma ^{-2}`$, where $`\mathrm{\Delta }a`$ is small, $`𝒪(0.1kT)`$. In view of the extremely small numbers we have not plotted cluster densities for the model parameters of Fig. 2. However, the fluid-fluid transition is deeper in the fluid-solid coexistence region in Fig. 3, so larger cluster densities are achievable. Plots of $`\rho _c(n)`$ against $`n`$ for three points in the dilute phase of Fig. 3 are shown in Fig. 4. The three points are chosen to be at roughly the highest densities at which the theory is reliable and the fluid is outside the fluid-fluid coexistence region; for the solid, dashed and dotted curves the supersaturations $`\beta (\mu -\mu _s)`$ are 3.71, 4.77 and 5.52, respectively. An approximation to the nucleation rate is given by Eq. (26), which is proportional to the densities at the minima of the curves in Fig. 4. We can get an estimate of what the numbers mean for a protein solution. Protein molecules are a few nm in diameter, so in a sample 1 mm across there are of order $`10^{16}`$ protein molecules. At $`\beta ϵ=7.5`$, $`\eta =0.05`$, $`\rho _c(n_{min})=𝒪(10^{-16}\sigma ^{-3})`$, so in a sample 1 mm across we have $`𝒪(1)`$ crystallites nucleating in the sample per time $`\tau `$.
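A quick order-of-magnitude check of this estimate (the diameter, volume fraction and cluster density are the values just quoted; the arithmetic is ours):

```python
import numpy as np

sigma = 3e-9                     # protein diameter, a few nm (assumed value)
eta, rho_c_nmin = 0.05, 1e-16    # state point and rho_c(n_min) [sigma^-3] from the text
V = (1e-3 / sigma) ** 3          # (1 mm)^3 sample volume in units of sigma^3
N = eta * 6.0 / np.pi * V        # number of protein molecules in the sample
events_per_tau = rho_c_nmin * V  # expected nucleation events per time tau
print(f"N ~ {N:.0e} molecules, {events_per_tau:.0f} nucleation events per tau")
# -> N of order 10^16 and O(1)-O(10) events per tau, as quoted in the text.
```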
Muschol and Rosenberger estimate diffusivities for lysozyme (a well studied globular protein) of order $`10^{-10}`$m<sup>2</sup>s<sup>-1</sup>. The characteristic time of the dynamics $`\tau `$ should be of order the time a protein takes to diffuse its own diameter; this time is the square of the diameter, $`10^{-17}`$m<sup>2</sup>, divided by the diffusion constant, $`10^{-10}`$m<sup>2</sup>s<sup>-1</sup>, so we have $`\tau =𝒪(10^{-7}\text{s})`$. So we end up with the rough estimate of $`10^7`$ crystallites nucleating in the sample per second. Nucleation is therefore rapid. In common with classical nucleation theory, our approximation for the nucleation rate is the calculation of a very small number and so the errors are typically large, easily several orders of magnitude . Bearing this in mind, our theory can only tell us that the model parameters of Fig. 3 lie close to the dividing line between parameter values for which nucleation of the crystalline phase from a dilute fluid phase is not achievable on experimental time scales and parameter values for which it is. George and Wilson determined the second virial coefficients of a number of globular proteins under the conditions for which they crystallised. They found that the values of the second virial coefficients lay within a small range, which they called the ‘crystallisation slot’. Using Eq. (8) for the second virial coefficient, $`B_2`$, we can determine the values of $`B_2`$ at the 3 temperatures for which we plotted the cluster densities in Fig. 4. They are $`B_2=0.21\sigma ^3`$, $`-3.03\sigma ^3`$ and $`-6.35\sigma ^3`$ for $`\beta ϵ=6`$, 7 and 7.5, respectively. So, although at all 3 temperatures we have (at different volume fractions) similar densities of the minimum-density cluster, the second virial coefficient varies over a large range; it even changes sign. Thus, our results for the nucleation rate do not offer an explanation of George and Wilson’s finding.

## 7 Conclusion

We have studied a simple model of a globular protein molecule in solution. The phase diagram and the densities of crystalline clusters in the dilute fluid phase have been calculated. The phase diagram predicted by our bulk free energy includes fluid-fluid coexistence within the fluid-crystal coexistence region. When this fluid-fluid coexistence region is not too deep into the fluid-crystal coexistence region, as in Fig. 2, we find that the dilute fluid phase outside of the fluid-fluid coexistence region is metastable, i.e., the rate of nucleation of the crystalline phase is negligible. It is not possible to produce a crystal directly from the dilute fluid for this model. When the fluid-fluid coexistence region is deeper into the fluid-crystal coexistence region, as in Fig. 3, the nucleation rate becomes large enough to be observable within the dilute fluid. Essentially, we defined the dilute fluid as being the fluid at densities below the percolation threshold. This means that the fluid-fluid critical point is not included in our definition of the dilute fluid. Ten Wolde and Frenkel have shown that near a fluid-fluid critical point the interface between the crystalline nucleus and the surrounding fluid is diffuse and that this enhances the nucleation rate dramatically. The diffuse interface is very different from the sharp interface we had to assume to obtain approximations for the cluster densities and hence the nucleation rate. If we consider the (highly inaccurate) predictions of our theory near the critical points of Figs.
2 and 3, we find that $`\rho _c(n_{min})=𝒪(10^{-135}\sigma ^{-3})`$ and $`𝒪(10^{-14}\sigma ^{-3})`$, respectively. So, nucleation is certainly rapid near the critical point of Fig. 3. However, the density $`\rho _c(n_{min})`$ is predicted to be so low near the critical point of Fig. 2 that, even taking into account the very large errors in our theory, we would not expect nucleation. Although the nucleation rate is enhanced by the nature of the fluid near its critical point, as the model parameters are varied to move the critical point toward the fluid-crystal coexistence curve the rate will tend to zero. In the limit that the critical point touches the fluid-crystal coexistence curve, i.e., at the point where the fluid-fluid transition goes from being metastable to being stable, the supersaturation at the critical point tends to zero, reducing the nucleation rate to zero. As this work has been motivated by the difficult and important problem of crystallising globular proteins, it is interesting to speculate on how the model of Fig. 2 could be crystallised. The nucleation rate is far too low in the dilute fluid, so in order to increase the rate the fluid must either be made more dense or the attractions strengthened. Both of these may result in equilibrium being difficult to reach, with the result that the fluid could become gel-like. Also, if the fluid undergoes a fluid-fluid transition its density and hence its nucleation rate jump . There is an optimum nucleation rate to obtain good, i.e., large with few defects, crystals. Now, if there were no fluid-fluid transition then the crystalline cluster densities and hence the nucleation rate would vary continuously, but at condensation the densities will jump, so there is a risk that the nucleation rate will jump over the optimum one, making good crystals hard to obtain. Crystallisation would be facilitated if the free energy cost of the surface of the cluster, the second term in the exponential of Eq. (21), were less. If the interactions were less directional then the crystal would be stable at higher temperatures, i.e., smaller values of $`\beta ϵ`$, where the surface would have a lower free energy.
# Phase-matched second-harmonic generation in a ferroelectric liquid crystal waveguide

Abstract: True phase-matched second-harmonic generation in a waveguide of crosslinkable ferroelectric liquid crystals is demonstrated. These materials allow the formation of macroscopically polar structures whose order can be frozen by photopolymerization. Homeotropic alignment was chosen, which offers decisive advantages compared to other geometries. All parameters contributing to the conversion efficiency are maximized by deliberately controlling the supramolecular arrangement. The system has the potential to achieve practical levels of performance as a frequency doubler for low power laser diodes.

PACS number(s): 61.30.Gd (orientational order of liquid crystals; electric and magnetic field effects on order) 42.65.Tg (optical solitons; nonlinear waveguides) 42.79.Nv (optical frequency converters)

The classical domain of nonlinear optical (NLO) devices based on second-order effects ($`\chi ^{(2)}`$-effects) is frequency doubling, which is important for extending the frequency range of laser light sources. The major goal of devices based on third-order nonlinear optical effects ($`\chi ^{(3)}`$-effects) is the realization of optical switches, the decisive hurdle on the way to all-optical data processing . The design concepts exploit the intensity dependent refractive index due to $`\chi ^{(3)}`$-interactions in Mach-Zehnder type interferometers . Recently it has been demonstrated that an intensity dependent refractive index can also be obtained by a cascading of second order nonlinear processes. This route is far more efficient than the one using $`\chi ^{(3)}`$-effects with currently available materials. As a result, switching occurs at lower intensity levels. The figure of merit of both frequency doublers and optical switches is given by the combination of the susceptibility $`\chi ^{(2)}`$ and the refractive index $`n`$ as $`[\chi ^{(2)}]^2/n^3`$. Organic materials possess refractive indices $`n\approx 1.5`$ and thus have an edge over most inorganic materials with refractive index $`n\approx 2.2`$. Furthermore, organic molecules can be tailored according to the demands and different desired functionalities can be incorporated within a single molecule . The inherent potential was recognized early and meanwhile there is a sound knowledge of the correlation between molecular structure and the corresponding hyperpolarizability, $`\beta `$ . Organic chromophores possess a remarkably high hyperpolarizability and the major obstacle towards efficient devices is not the availability of suitable chromophores, but the fabrication of proper macroscopic structures. A high conversion efficiency requires the simultaneous maximization of many parameters and quite often there is a trade-off between some properties. A crucial quantity is $`\chi ^{(2)}`$, which is, subject to certain simplifying assumptions, proportional to the number density of the NLO chromophores and to the orientational average of the hyperpolarizabilities . Hence, a chromophore with high hyperpolarizability should be arranged in a noncentrosymmetric fashion with high number density and a high degree of orientational order. To achieve this, mainly two concepts have been pursued so far: Langmuir-Blodgett (LB) films and poled polymers . However, due to intrinsic peculiarities of both techniques, the chromophore is rather diluted and furthermore the films possess limited thermal and mechanical stability.
In this study we pursued a different strategy based on ferroelectric liquid crystals (FLCs). Liquid crystals (LCs) in general form highly ordered phases which possess an intrinsic quadrupolar order but not a dipolar one . Hence for $`\chi ^{(2)}`$ applications conventional LCs are not of any use. However, the picture changed with the advent of FLCs, whose molecular symmetry allows a local dipole perpendicular to the director . The arrangement can be manipulated by an electric field and huge single domains can be formed. At this stage the orientational order within the monomeric system is still fragile and also sensitive to slight changes in temperature. To overcome these problems the FLCs are further functionalized with photoreactive groups. Subsequent photopolymerization leads to the formation of stable polymer networks where the polar order is frozen (pyroelectric polymer, PP). Various aspects of the preparation process as well as some nonlinear optical properties are described in a recently submitted publication . In this contribution we focus on the problem of phase-matching in waveguide geometry and demonstrate that true phase-matching can be achieved. The chemical structures of the FLC monomers are shown in Fig. 1. A mixture of 60 % A1b and 40 % A2c is used, which adopts a chiral smectic C phase at room temperature. This mixture is filled into the cell depicted in Figure 2(a) . The bottom plate is equipped with parallel ITO electrode stripes to achieve a quasi-homeotropic alignment: the smectic layers are aligned parallel to the glass plates, the molecular dipole moments are oriented by the electric field, and the helical structure of the chiral phase is unwound. The mixture of A1b and A2c balances the trade-off between a high polarization on the one hand and the field strength required for a manipulation of the helix on the other. Even a moderate electric field strength is sufficient to obtain a highly ordered structure and no aligning layers are required. The achieved polar order of the monomeric FLC system is then permanently fixed by photopolymerization, leading to a mechanically and thermally stable PP network. Without any additional preparation step this arrangement is also a channel waveguide for TM modes. A linear and nonlinear optical characterization is presented in Ref. . According to the prevailing symmetry, second-harmonic generation (SHG) can only occur for $`\text{TE}^\omega `$–$`\text{TE}^{2\omega }`$ and $`\text{TM}^\omega `$–$`\text{TE}^{2\omega }`$ modes. The measured nonlinear optical constants are remarkably large (up to 1.26 pm V<sup>-1</sup>). Phase-matching can be achieved by taking advantage of the modal dispersion of the waveguide . The effective refractive index $`n_{\text{eff}}`$ of a mode is a function of waveguide thickness and polarization. Thus, phase-matching requires the fabrication of a waveguide of a precisely defined thickness given by the linear optical constants. The tolerances are quite tight and already minor deviations within the nanometer range change the characteristics of a device. Also, due to the dispersion of the refractive index, phase-matching is only possible between modes of different order.
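To illustrate the thickness condition, the sketch below solves the standard TE-mode dispersion relation of an asymmetric slab waveguide and searches for the thickness at which $`n_{\text{eff}}`$ of the TE<sub>0</sub> mode at the fundamental equals that of the TE<sub>1</sub> mode at the second harmonic. All refractive indices in it are invented placeholders, not the measured constants of this FLC polymer, and the scalar, isotropic treatment ignores the birefringence of the real film; it is a schematic of the principle only.

```python
import numpy as np
from scipy.optimize import brentq

def n_eff_TE(m, wavelength, d, n_film, n_sub, n_cov):
    """Effective index of the TE_m mode of an asymmetric slab of thickness d,
    from the standard dispersion relation
    kappa*d = m*pi + atan(gamma_s/kappa) + atan(gamma_c/kappa).
    Assumes the mode is guided (brentq fails below cutoff)."""
    k0 = 2.0 * np.pi / wavelength

    def residual(N):
        kappa = k0 * np.sqrt(n_film**2 - N**2)
        g_s = k0 * np.sqrt(N**2 - n_sub**2)
        g_c = k0 * np.sqrt(N**2 - n_cov**2)
        return kappa * d - m * np.pi - np.arctan(g_s / kappa) - np.arctan(g_c / kappa)

    lo = max(n_sub, n_cov) + 1e-9
    return brentq(residual, lo, n_film - 1e-9)

def mismatch(d):
    # Hypothetical dispersion: film index 1.55 at the fundamental (1310 nm)
    # and 1.60 at the harmonic (655 nm); glass substrate, air cover.
    return (n_eff_TE(0, 1.310e-6, d, 1.55, 1.46, 1.00)
            - n_eff_TE(1, 0.655e-6, d, 1.60, 1.46, 1.00))

# Search above the TE_1 cutoff for the thickness where the mismatch vanishes:
d_pm = brentq(mismatch, 0.7e-6, 3.0e-6)
print(f"TE0(w)-TE1(2w) phase matching at d ~ {d_pm * 1e9:.0f} nm")
```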
However, even if this is achieved, the resulting efficiency may still be rather low due to the small value of the overlap integral $`I`$ of the electric field distributions of the interacting modes across the cross-sectional area, $$I=\int _0^{\mathrm{\infty }}\frac{\chi _{ijk}^{(2)}}{\chi _{\text{eff}}^{(2)}}E_i^{(m^{\prime },\omega )}(z)E_j^{(m^{\prime },\omega )}(z)E_k^{(m,2\omega )}(z)dz,$$ where $`\chi _{ijk}^{(2)}`$ is the second-order susceptibility tensor, $`\chi _{\text{eff}}^{(2)}`$ is the effective second-order susceptibility, and $`E_i^{(m^{\prime },\omega )}(z)`$ is the electric field distribution of the $`m^{\prime }`$-th mode of frequency $`\omega `$ across the waveguide thickness. Field distributions of modes of different order yield a nearly vanishing overlap integral and a poor conversion efficiency . A way out of this dilemma is to influence the susceptibility tensor . A reversal of the sign of $`\chi ^{(2)}`$ at the nodal plane of the electric field distribution of the first-order mode maximizes the value of the overlap integral and thus enables the phase-matching schemes $`\text{TM}_0^\omega `$–$`\text{TE}_1^{2\omega }`$ and $`\text{TE}_0^\omega `$–$`\text{TE}_1^{2\omega }`$. The sign of $`\chi ^{(2)}`$ can be reversed by reversing the polar order of the chromophores. The desired inverted waveguide structure can be fabricated using the sandwich geometry shown in Figure 2(a). The top plate of a 540 nm thick cell was removed. No damage occurred in this preparation process (the mean roughness is of the order of a few nanometers, as confirmed by atomic force microscopy). The bottom plate with the polymer network was cut in two pieces of equal size ($`\approx `$ 4 mm) and the parts were glued onto each other with inverse polarities in the channel region, as illustrated in Figure 2(b). Waveguide modes were excited by end-fire coupling. The second-harmonic (SH) light was collected at the end of the guide and measured as a function of the fundamental light wavelength with a photomultiplier. A quadratic dependence of the SH light intensity on the fundamental one was established to ensure the true nature of the observed signal. The linear constants and the thickness of the waveguide were measured prior to the experiment and used to predict the wavelengths at which phase-matching occurs. According to these data and with a total cell thickness of 2 $`\times `$ 540 nm, TE–TE phase-matching should occur at 958 nm and TM–TE phase-matching at 1311 nm. Indeed, the experiment confirms these predictions: TE–TE phase-matching was observed at 955 nm and TM–TE at 1337 nm, as shown in Figure 3(a). The width of the peaks in Figure 3(b) and (c) depends on the known dispersion of the refractive indices and on the interaction length $`L`$ over which fundamental and second harmonic light are in phase. The interaction length can be determined by a fit of the experimental data to the function $$I_{2\omega }\propto \text{sinc}^2\left(\frac{L\mathrm{\Delta }k}{2}\right),$$ where $`\mathrm{\Delta }k=4\pi [n_{\text{eff}}(2\omega )-n_{\text{eff}}(\omega )]/\lambda _\omega `$, with $`L`$ as the only unknown parameter. Figures 3(b) and (c) present the experimental data together with the corresponding fits. The interaction lengths are listed in Table 1.
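Numerically, such a fit can be set up as below; the dispersion functions `n_eff_w` and `n_eff_2w` stand in for the independently measured effective-index curves, and the overall amplitude `A` is an extra free scale we add for convenience.

```python
import numpy as np
from scipy.optimize import curve_fit

def delta_k(lam, n_eff_w, n_eff_2w):
    """Phase mismatch 4*pi*[n_eff(2w) - n_eff(w)] / lambda_w."""
    return 4.0 * np.pi * (n_eff_2w(lam) - n_eff_w(lam)) / lam

def fit_interaction_length(lam_data, I_data, n_eff_w, n_eff_2w, L0=1e-3):
    """Fit I_2w = A * sinc^2(L*dk/2) to a measured tuning curve; L is the
    interaction length and A an overall amplitude."""
    def model(lam, L, A):
        x = 0.5 * L * delta_k(lam, n_eff_w, n_eff_2w)
        return A * np.sinc(x / np.pi) ** 2   # np.sinc(u) = sin(pi u)/(pi u)
    popt, _ = curve_fit(model, lam_data, I_data, p0=(L0, I_data.max()))
    return popt[0]                           # best-fit interaction length L
```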
In a sample without $`\chi ^{(2)}`$-inversion the SH signal was about 1000 times smaller than that of an inverted sample at the phase-matching condition, which demonstrated the superior performance due to the optimization of the overlap integral in our geometry. The conversion efficiency $$\eta =\frac{P_{2\omega }}{P_\omega ^2L^2}$$ of the two phase-matching schemes is also given in Table 1. The values are among the largest ones in organic materials. Also, the three-dimensional confinement of TM modes in the waveguide yields a larger conversion efficiency for TM–TE than for TE–TE phase-matching. The demonstration of true phase matching in a waveguide format using FLCs is a major step towards a more general use of these materials for NLO devices. FLCs maximize the possible number density of active chromophores and this, together with a high degree of orientation, leads to remarkably high values of the off-resonant nonlinear susceptibilities. Phase-matching was achieved between modes of different order using the modal dispersion of the waveguide. The concept of an inverted structure maximizes the overlap integral and thus enables high efficiency in the desired phase-matching scheme. We have successfully manufactured a macroscopic inverted waveguide and demonstrated phase-matching. The quasi-homeotropic alignment avoids the use of aligning layers and leads to an inherent channel waveguide for TM modes without any additional preparation steps, yielding a very high conversion efficiency for the TM–TE phase-matching scheme. Another major feature is that the order of the monomeric FLC is made permanent by photopolymerization. The photopolymerization does not lead to any degradation of the quality of the waveguide, as is for instance observed in LB films. Apparently the intrinsic fluidity of FLCs heals all distortions caused by the formation of new bonds. The polar network is thermally and mechanically stable and all samples kept their NLO properties over the monitored period of several months. Thus, the system has the potential to achieve practical levels of performance. The authors are grateful to Dr. S. Schrader for helpful discussions and to Prof. H. Möhwald for generous support and encouraging discussions. V. S. U. Fazio and S. T. Lagerwall are grateful to the TMR European Programme (contract number ERBFMNICT983023) and to the Swedish Foundation for Strategic Research for financial support. P. Busson acknowledges the financial support from the Swedish Research Council for Engineering Science (TFR, grant 95-807).
# Possible signatures for strange stars in stellar X–ray binaries

Partially supported by MURST and by EC TMR program ERBFMRX-CT97-0122.

## 1 Introduction

The discovery of kHz QPOs in the flux from certain X–ray burst sources has prompted a substantial amount of work in connection with accretion physics and the structural properties of the central accretors in such systems. In particular, these oscillations have been used to derive estimates of the mass of the neutron star in X–ray binaries (Kaaret, Ford & Chen 1997; Zhang, Strohmayer & Swank 1997; Kluźniak 1998). All these estimates, based on the beat frequency model, tacitly assume that the highest QPO frequency of 1.22 kHz observed so far (in the source 4U 1636–53; Zhang et al. 1997) can be identified with the Keplerian orbital frequency corresponding to the marginally stable orbit associated with the neutron star. Beat frequency models require that the difference in frequencies between the twin QPO peaks be the spin frequency of the neutron star and that this remain constant. However, further observations have shown that there exist microsecond lags in the QPO difference frequencies in many sources, implying that an exact beat frequency mechanism may not be at work. Recently, Osherovich & Titarchuk (1999a), Titarchuk & Osherovich (1999) and Osherovich & Titarchuk (1999b) have developed alternative models unifying the mechanism for the production of low frequency QPOs and that for high frequency QPOs. This model requires the lower frequency QPO to be due to Keplerian circulation of matter in the disk and the higher frequency one to be a hybrid between the lower frequency and the rotational frequency of the stellar magnetosphere. Li et al. (1999b) have suggested that if recourse is taken to such a model, then the compact star in the source 4U 1728–34 may possibly be a strange star. The possible existence of a new sequence of degenerate compact stellar objects, made up of light u, d and s quarks, has been suggested (Witten 1984; Haensel, Zdunik & Schaeffer 1986; Alcock, Farhi & Olinto 1986) for quite some time now, based on ideas from particle physics which indicate that a more fundamental description of hadronic degrees of freedom at high matter densities must be in terms of their quark constituents. For energetic reasons, two-component (u,d) quark matter is believed to convert to three-component (u,d,s) quark matter in beta equilibrium. As suggested by Witten (1984), the latter form of matter could be the absolute ground state of strongly interacting matter rather than $`{}^{56}\text{Fe}`$. Because of the important role played by the confinement forces of quantum chromodynamics (QCD) in describing the quark interactions, the mass–radius relationship for stable strange stars differs in an essential manner from that of neutron stars (Haensel, Zdunik & Schaeffer 1986; Alcock, Farhi & Olinto 1986). Recent work (Cheng et al. 1998; Li et al. 1999a; Li et al. 1999b) seems to suggest that a consistent explanation of the observed features of the hard X–ray burster GRO J 1744–28, the transient X–ray burst source SAX J 1808.4–3658 and the source 4U 1728–34 is possible only in terms of an accreting strange star binary system. A new class of low–mass X–ray binaries, with a strange star as the central compact object (SSXBs), is thus an interesting astrophysical possibility that merits study. Some consequences of the SSXB hypothesis for the properties of bulk strange matter have been discussed recently by Bulik, Gondek-Rosińska and Kluźniak (1999) (see also Schaab & Weigel 1999).
The compact nature of the sources makes general relativity important in describing these systems. Furthermore, their existence in binary systems implies that they may possess rapid rotation rates (Bhattacharya & van den Heuvel 1991 and references therein). These two properties make the incorporation of general relativistic effects of rotation imperative for a satisfactory treatment of the problem. General relativity predicts the existence of marginally stable orbits around compact stars. For material particles within the radius of such orbits, no Keplerian orbit is possible and the particles will undergo free fall under gravity. This radius ($`r_{ms}`$) can be calculated for equilibrium sequences of rapidly rotating strange stars in a general relativistic space–time in the same way as for neutron stars (Datta, Thampan & Bombaci 1998). In this letter, we calculate the Keplerian frequency of matter revolving around rapidly rotating strange stars. The present results, together with those obtained assuming a neutron star as the central accretor (Thampan et al. 1999), demonstrate that QPO frequencies in the range (1.9–3.1) kHz can be interpreted in terms of a non-magnetized SSXB rather than a NSXB. Future discovery of such high frequency QPOs from X–ray burst sources will constitute a new astrophysical diagnostic for SSXBs. In section (2) we very briefly discuss the formalism used to construct rapidly rotating strange star sequences and to compute the Kepler frequencies around such objects. Section (3) provides a brief outline of the equation of state (EOS) models used by us. In section (4) we discuss the results and conclusions.

## 2 Calculations

We use the methodology described in detail in Datta, Thampan & Bombaci (1998) to calculate the structure of rapidly rotating strange stars. For completeness, we briefly describe the method here. For a general axisymmetric and stationary space–time, assuming a perfect fluid configuration, the Einstein field equations reduce to ordinary integrals (using a Green’s function approach). These integrals may be solved self-consistently (numerically and iteratively) to yield the values of the metric coefficients in all space. Using these metric coefficients, one may then compute the structure parameters, moment of inertia and angular momentum corresponding to an initially assumed central density and polar-to-equatorial radius ratio. The values of the structure parameters and the metric coefficients, so computed, may then be used (as described in Thampan & Datta 1998) to calculate parameters connected with stable circular orbits (like the innermost stable orbit and the Keplerian angular velocities) around the configuration in question.

## 3 Strange star equations of state

For the purpose of this letter, we have calculated the relevant quantities corresponding to three different equation of state (EOS) models for strange stars. Two of these equations of state are based on the MIT bag model (Chodos et al. 1974) with the following values for the bag pressure ($`B`$), the strange quark mass ($`m_s`$) and the QCD structure constant ($`\alpha _c`$): (i) $`B=90`$ MeV fm<sup>-3</sup>, $`m_s=0`$ MeV and $`\alpha _c=0`$; (ii) $`B=56`$ MeV fm<sup>-3</sup>, $`m_s=150`$ MeV, with the short range quark–quark interaction incorporated perturbatively to second order in $`\alpha _c`$ according to Freedman & McLerran (1978) and Goyal & Anand (1990). Next we considered a phenomenological model by Dey et al.
(1998) (model (iii)) that has the basic features of QCD (namely, quark confinement and asymptotic freedom), but employs a potential description for the interaction. These models for the EOS are quite divergent in their approach, so that the conclusions presented here using these will be of sufficient generality.

## 4 Results and Conclusions

For the EOS models described in the previous section, we calculate the Keplerian frequencies corresponding to the innermost ‘allowed’ orbits (as given by general relativity) for rotating strange stars, and obtain their relationship with QPO frequencies in the kHz range, assuming the SSXB scenario. The inner edge of the accretion disk may not always be coincident with $`r_{ms}`$, but there can be instabilities in the disk that can relocate it outside of $`r_{ms}`$. If the radius ($`R`$) of the strange star is larger than $`r_{ms}`$, the innermost possible orbit will be at the surface of the strange star. It must be mentioned here that rotation of the central accretor is an important consideration because the accretion-driven angular momentum transfer over dynamical timescales can be quite large (Bhattacharya & van den Heuvel 1991). Because the values of $`r_{ms}`$ and the mass of the spinning strange star will depend on two independent parameters, namely, the central density ($`\rho _c`$) of the star and its spin frequency ($`\nu _s`$), a range of values of ($`\rho _c`$, $`\nu _s`$) will exist that will allow solutions for a Keplerian frequency corresponding to any specified value of the QPO frequency. The variation of the Keplerian frequency ($`\nu _K`$) of the innermost ‘allowed’ orbit with respect to the gravitational mass (M) of the spinning strange star is shown in Fig. 1. For purposes of illustration, we have chosen three values of $`\nu _s`$: 0 (the static limit), 200 Hz and 580 Hz (the last rotation rate inferred for the X–ray source 4U 1636–53, as given by Zhang et al. 1997 using the beat frequency model). It can be noted from Fig. 1 that all the curves have a cusp. For any curve, the nearly flat part (to the left of the cusp) corresponds to the case $`R\ge r_{ms}`$, and the descending part (to the right of the cusp) corresponds to the case $`R\le r_{ms}`$. These are the only possibilities for the location of $`r_{ms}`$ with respect to the stellar surface. The highest kHz QPO frequency observed so far is 1.22 kHz, exhibited by the source 4U 1636–53. Fig. 1 shows that only the maximum mass end of the curve for the non–rotating configuration described by EOS model (ii) attains the value $`\nu _K=1.22`$ kHz. A simple analysis relating the minimum value of $`\nu _K`$ to the bag constant (see Fig. 1 for EOS (i)), in the case of non-rotating strange stars within the MIT bag model EOS for massless non-interacting quarks, gives $`\nu _K(r_{ms},M_{max})=1.081(B/56)^{1/2}`$ kHz, where $`B`$ is in MeV fm<sup>-3</sup>. The lowest possible value for $`B`$, which is compatible with Witten’s hypothesis (Witten 1984), is $`56`$ MeV fm<sup>-3</sup>. Finite values of $`m_s`$, $`\alpha _c`$, and $`\nu _s`$ increase the value of $`\nu _K(r_{ms},M_{max})`$ with respect to the previous case. This implies that, if one adheres to the restrictive assumption that $`\nu _{\mathrm{QPO}}=1.22`$ kHz in the X–ray source 4U 1636–53 is generated at the marginally stable orbit of the central compact star (with $`r_{ms}>R`$), then the latter being a strange star is an admissible solution only for low values of the bag constant and for very slowly rotating configurations of the star.
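For orientation, the scale of $`\nu _K(r_{ms})`$ can be checked against the non-rotating (Schwarzschild) case, for which the orbital frequency measured at infinity takes the Newtonian form $`\nu _K=(1/2\pi )(GM/r^3)^{1/2}`$; stellar rotation, which the full calculation of Fig. 1 includes, is neglected in this sketch.

```python
import numpy as np

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30    # SI units

def nu_K_ms(M):
    """Keplerian frequency at r_ms = 6GM/c^2 for a non-rotating star of mass M."""
    r_ms = 6.0 * G * M / c**2
    return np.sqrt(G * M / r_ms**3) / (2.0 * np.pi)

def r_orbit(M, nu):
    """Radius (in units of r_g = 2GM/c^2) of the circular orbit with frequency nu."""
    r = (G * M / (2.0 * np.pi * nu) ** 2) ** (1.0 / 3.0)
    return r / (2.0 * G * M / c**2)

print(f"nu_K(r_ms) = {nu_K_ms(M_sun) / 1e3:.2f} kHz for M = 1 M_sun")    # ~2.20 kHz
print(f"nu_K = 1.22 kHz at r = {r_orbit(1.4 * M_sun, 1.22e3):.2f} r_g "
      f"for M = 1.4 M_sun")    # ~3.55 r_g, the non-rotating value quoted below
```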
Next we investigate the possibility that the kHz QPO frequency is generated at locations outside the marginally stable orbit. Since $`\nu _K(r)`$ is a decreasing function of $`r`$, $`\nu _K=1.22`$ kHz in SSXBs will occur at $`r>r_{ms}`$, that is, somewhere in the accretion disk and not at the disk inner edge. In Fig. 2 we show the plot of the Keplerian frequency profiles $`\nu _K(r)`$ of test particles around a (rotating) strange star of one solar mass (for the same values of the rotation rates as before). This figure shows that the radial location in the disk, where a solution $`\nu _K=1.22`$ kHz occurs in a SSXB, is about $`4.5r_g`$, where $`r_g=2GM/c^2`$ is the Schwarzschild radius of the strange star. A similar analysis for $`M=1.4`$ $`M_{\odot }`$ yields $`r_{1.22}`$ (the radius at which $`\nu _{\mathrm{QPO}}=1.22`$ kHz is produced) in the range ($`3.53`$, $`3.55`$) $`r_g`$, the higher value being that for the non–rotating configuration and the lower for $`\nu _s=580`$ Hz. It is interesting to ask what range of $`\nu _K`$ obtains for a specified value of the strange star mass. From Fig. 1, it can be seen that the values of $`\nu _K`$ for a SSXB, for a one solar mass strange star, lie in the range (2.2–2.3) kHz for EOS model (i), (1.8–1.9) kHz for EOS model (ii) and (2–2.6) kHz for EOS model (iii). The first two ranges of kHz QPOs occur at $`r=R`$, while the third occurs at $`r=r_{ms}`$. For $`M=1.4`$ $`M_{\odot }`$, these ranges are: (1.57–1.84), (1.57–1.87) and (1.57–1.79), respectively for EOS models (i), (ii) and (iii). The similarity in these ranges is due to $`r_{ms}>R`$ for all these configurations. It also follows from Fig. 1 that EOS model (iii) gives the maximum value of $`\nu _K`$, namely, 3 kHz. The most interesting result ensues if a comparison is made of Fig. 1 with its counterpart for the case of a NSXB. A detailed calculation of the latter was reported recently by Thampan, Bhattacharya & Datta (1999), using realistic EOS models. This calculation showed that the maximum theoretically expected value of $`\nu _{\mathrm{QPO}}`$ for NSXBs is 1.84 kHz. Therefore, values of $`\nu _{\mathrm{QPO}}`$ in excess of $`1.84`$ kHz, if observed, cannot be understood in terms of a NSXB. The SSXB scenario is a more likely one for these events (assuming that generation of X–ray bursts is possible on strange star surfaces); this will constitute a new astrophysical diagnostic for the existence of strange stars in our galaxy.
# Photospheres, Comptonization and X-ray Lines in Gamma Ray Bursts

## I Photospheres, Shocks and Pairs

A significant fraction of bursts appear to have low energy spectral slopes steeper than 1/3 in energy preece+98 ; crider+97 . This has motivated consideration of a thermal or nonthermal liang+97 ; liang+99 comptonization mechanism, while leaving the astrophysical model largely unspecified. There is also evidence that the apparent clustering of the break energy of GRB spectra in the 50-500 keV range may not be due to observational selection preece+98 ; brainerd+98peak ; dermer+99apjl . Models using Compton attenuation brainerd+98apj require reprocessing by an external medium whose column density adjusts itself to a few g cm<sup>-2</sup>. More recently a preferred break has been attributed to a blackbody peak at the comoving pair recombination temperature in the fireball photosphere eichlerlev99 . For such photospheres to occur at the pair recombination temperature in the accelerating regime, an extremely low baryon load is required. For very large baryon loads, a different explanation has been invoked tho94 , involving scattering of photospheric photons off MHD waves in the photosphere, which upscatters the adiabatically cooled photons up to the observed break energy. Motivated by the above observations, these ideas have been synthesized mr99b into a generic scenario in which the presence of a photospheric component, as well as shocks subject to pair breakdown, can produce steep low energy spectra and preferred breaks (see Figure 1). In some of our previous work mlr93 ; rm94 considering photospheres and pair formation, their thermal character, the uncompensated photospheric redshift in the coasting phase, and the requirement of a power law extending to GeV energies were arguments in favor of a synchrotron and inverse Compton mechanism in shocks. The latter should, indeed, play a significant role in any model. However, a photosphere is always present, even if not always dominant. If the photosphere occurs in the accelerating regime where $`\mathrm{\Gamma }\propto r`$, its energy is comparable to that of shocks which may occur further out, and the energy at which the blackbody peak (T) is observed is in the “magic” range near 0.5 MeV, for $`\eta \gtrsim \eta _{*}`$, where $`\eta =L/\dot{M}c^2=\mathrm{\Gamma }_f`$ is the terminal bulk Lorentz factor and $`\eta _{*}=(L\sigma _T/4\pi m_pc^3r_o)^{1/4}\sim 10^3(L_{52}r_7^{-1})^{1/4}`$. Both its peak energy and its total energy are lower if the photosphere occurs in the coasting phase ($`\eta \lesssim \eta _{*}`$). A steep low energy spectral slope is provided by the Rayleigh-Jeans part of the photosphere, and a low-energy excess or terrace by its Wien part. A high energy power law extending above this up to GeV energies requires, however, a separate explanation. One possibility is up-scattering of photospheric photons in the $`\tau _T\gtrsim 1`$ region by Alfvén waves, whose energy may be a fraction of the bulk kinetic energy tho94 . This leads to a comptonized broken power law spectrum (PHC) in $`xF_x`$ ($`x=h\nu /m_ec^2`$) of slope 1 up to the “magic” break energy $`x\lesssim 1`$, and slope 0 up to $`x\lesssim \eta `$ above that (Fig. 1). The energy in this PHC wave-comptonized component can be substantial relative to the photosphere, and equals the ratio of wave to bulk kinetic energy. Above the photosphere, internal shocks are expected to occur rm94 , which would lead to a nonthermal synchrotron/IC spectrum (S) in addition to the above.
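The normalization of $`\eta _{*}`$ quoted above is easy to verify numerically. The following sketch (ours, cgs constants) reproduces the $`\sim 10^3`$ scaling for $`L=10^{52}`$ erg s<sup>-1</sup> and $`r_o=10^7`$ cm:

```python
import math

sigma_T = 6.652e-25   # Thomson cross section, cm^2
m_p     = 1.673e-24   # proton mass, g
c       = 2.998e10    # speed of light, cm/s

def eta_star(L, r0):
    """Critical baryon-loading parameter (L sigma_T / 4 pi m_p c^3 r0)^(1/4);
    L in erg/s, r0 in cm."""
    return (L * sigma_T / (4.0 * math.pi * m_p * c**3 * r0)) ** 0.25

print(eta_star(1e52, 1e7))   # ~1.0e3, matching eta_* ~ 10^3 (L_52 / r_7)^(1/4)
```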
However, if the compactness parameter $`\ell ^{\prime }`$ (or comoving luminosity) is high, pair formation occurs, which could produce a self-regulated low pair (comoving) temperature $`\mathrm{\Theta }_p^{\prime }=kT^{\prime }/m_ec^2\sim 10^{-1}`$, favoring comptonization ghiscel99apjl . In this $`\ell ^{\prime }\gg 1`$ case, thermal comptonization on the subrelativistic electrons leads to another comptonized component (C) of slope 1 up to an observer-frame energy $`x\sim \mathrm{\Theta }_p^{\prime }\eta \sim 10^{-1}\eta `$. Above this, if scattering off waves also occurs in the shocks, a second component of slope 0 would extend up to $`x\lesssim \eta `$.

## II X-ray and UV Line Spectra of GRB

The environment in which a GRB occurs may also lead, in the afterglow phase, to specific spectral signatures from the external medium imprinted on the continuum, such as atomic edges and lines bkt97 ; pl98 ; mr98b . These may be used both to diagnose the chemical abundances and the ionization state (or local separation from the burst), and to serve as potential alternative redshift indicators. (In addition, the outflowing ejecta itself may contribute blueshifted edge and line features, especially if metal-rich blobs or filaments are entrained in the flow from the disrupted progenitor debris mr98a , which could serve as a diagnostic of the progenitor composition and outflow Lorentz factor). An interesting prediction mr98b is that an Fe K-$`\alpha `$ X-ray emission line could be a diagnostic of a hypernova, since in this case one may expect a massive envelope at a radius comparable to a light-day where $`\tau _T\lesssim 1`$, capable of reprocessing the X-ray continuum by recombination and fluorescence (see also ghi98 ; bot98 ). Detailed radiative transfer calculations have been performed to simulate the time-dependent X/UV line spectra of massive progenitor (hypernova) remnants weth+99 , see Figure 2. Two types of hypernova environment geometries were considered, which are illuminated by a typical time-dependent broken power law afterglow continuum spectrum. One model consists of a dense shell, such as a supernova remnant, which could be the product of an inhomogeneous wind of variable velocity. This is essentially a transmission model, and produces initially an absorption X-ray line spectrum, turning later into an emission spectrum, in which for Fe abundances 10 or 100 times solar the Fe line luminosities are $`\lesssim 10^{42}`$–$`10^{43}`$ erg s<sup>-1</sup>. The other model assumes a funnel geometry and is essentially a reflection model, with an empty or low density region along an axis, such as would arise in a rotating stellar envelope or a wind. The fireball and the afterglow propagate inside this funnel, which acts as a channel that collimates and reflects the continuum. This results in an emission line spectrum (Fig. 2), where for 10 or 100 times solar abundances the Fe K-$`\alpha `$ line luminosity reaches $`L_{Fe}\lesssim 10^{44}`$ erg s<sup>-1</sup>, with line and edge equivalent widths $`EW\lesssim 1`$ keV. This is comparable to the $`3\sigma `$ Fe features reported by two groups piro98b ; yosh98 in GRB 970508 and GRB 970828. It is interesting that the Fe K-edge is significant in a funnel model such as that shown in Fig. 2. While the energy of the 6.7 keV Fe line feature in GRB 970508 agrees with its previously known redshift $`z=0.835`$, the line feature of GRB 970828 would be in agreement with the 9.28 keV Fe K-edge energy at this object’s newly reported djorg+00 redshift of $`z=0.958`$.
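The redshift bookkeeping behind these identifications is simply $`E_{obs}=E_{source}/(1+z)`$; a trivial sketch (ours):

```python
def observed_energy(E_rest_keV, z):
    """Observed energy (keV) of a source-frame line or edge at redshift z."""
    return E_rest_keV / (1.0 + z)

print(observed_energy(6.70, 0.835))   # Fe line of GRB 970508 -> ~3.65 keV observed
print(observed_energy(9.28, 0.958))   # Fe K-edge of GRB 970828 -> ~4.74 keV observed
```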
The line features in the 30-40 eV source-frame range seen in Figure 2 would be redshifted into the optical for $`z\gtrsim 5`$, but are likely to be blanketed by the Ly-$`\alpha `$ forest of intervening high redshift galaxies. However, it may be possible to detect the soft X-ray metallic lines which become prominent soon after the Fe features, as the continuum softens and the gas cools, e.g. S and Si in the 2-3 keV source-frame range, or 1–1.5 keV at $`z\sim 1`$. I am grateful to M.J. Rees, C. Weth and T. Kallman for stimulating collaborations, and to NASA NAG-5 2857, the Guggenheim Foundation and the Division of Physics, Math & Astronomy, the Astronomy Visitor and the Merle Kingsley funds at Caltech for support.
# More redshifts of powerful equatorial radio sources from the BRL sample

## 1 Introduction

Radio sources have many important roles to play in astrophysical and cosmological studies (e.g. see McCarthy 1993 for a review). In order to provide a large, spectroscopically complete sample of luminous radio sources accessible both to northern radio interferometers such as the Very Large Array (VLA) and to large southern telescope facilities, such as the Very Large Telescope, Gemini South, and the Atacama Large Millimetre Array (ALMA), Best et al. recently defined a new sample of very powerful equatorial radio sources from the Molonglo Reference Catalogue (MRC; Large et al. 1981), according to the criteria (see Best et al. for details): $`S_{408\mathrm{MHz}}\geq 5`$ Jy, $`-30^{\circ }\leq \delta \leq +10^{\circ }`$, $`|b|\geq 10^{\circ }`$. This sample (hereafter the BRL sample) consists of 178 objects and, following radio imaging, optical imaging, and spectroscopic observations, spectroscopic redshifts were provided for 174 of these in the original paper. The host galaxies of the remaining four sources were all optically identified, but no spectroscopic redshifts were obtained. In this paper, spectroscopic redshifts are derived from new observations of three of these remaining four objects: 1413-215, 1859-235 and 1953-077 (3C404). In Section 2, details of the observations and data reduction are provided. The reduced spectra are presented and discussed in Section 3. The reader is referred to Best et al. for a complete description of the sample and its properties.

## 2 Observations and Data Reduction

Long-slit spectra of 1859-235 and 1953-077 were taken using the dual-beam ISIS spectrograph on the William Herschel Telescope (WHT) in photometric conditions during service time on the night of 1999 July 5 (see Table 1 for details). The observations were made using the 5700Å dichroic and the R158B and R158R gratings in the blue and red arms of the spectrograph. In the blue arm this provided a spatial scale of 0.19 arcsec per pixel and a spectral resolution of about 19Å, and in the red arm a spatial scale of 0.36 arcsec per pixel and a spectral resolution of about 12Å. The data were reduced using standard packages within the IRAF NOAO reduction software. After subtraction of the bias level, the spectroscopic data were flat-fielded using observations of internal calibration lamps, and the sky background was removed. The two exposures of each galaxy were combined, removing cosmic ray events, and one-dimensional spectra were extracted from an angular extent of 2.9 arcsec along the slit. The extracted spectra were wavelength calibrated using observations of CuNe and CuAr arc lamps, and flux calibration was achieved using observations of the spectrophotometric standard star Kopff 27. The determined fluxes were corrected for the atmospheric extinction arising from the non-unity airmass of the observations. 1413-215 was observed at the Keck II telescope in photometric conditions during evening twilight on 1999 July 11 (see Table 1). The observations were made using the Low-Resolution Imaging Spectrograph (LRIS; Oke et al. 1995) with the 150 line / mm grating (7500Å blaze), providing a spatial pixel scale of 0.21 arcsec and a spectral resolution of about 25Å. The galaxy was shifted 10<sup>′′</sup> along the slit between two separate observations to reduce fringing effects.
Data reduction followed essentially the same procedure as outlined for the WHT observations, except that the spectrum was extracted from an angular extent of 2.1<sup>′′</sup> along the slit (due to the smaller spatial extent of the object). Feige 110 and HZ44 were used for flux calibration.

## 3 Results and Discussion

The extracted spectra of the three galaxies are provided in Figures 1, 2 and 3, and details of the emission line properties are provided in Table 2. Emission lines are detected for all three objects, confirming the identifications proposed in the paper by Best et al. . For 1859-235 and 1413-215, several emission lines are detected, providing unambiguous redshift measurements. For 1953-077 only a single strong emission line is detected, at 8714Å. This emission line is assumed to be \[OII\] 3727 for a number of reasons: (i) were this \[OIII\] 5007 or H$`\alpha `$ (or any other weaker line), then the lack of any other strong emission lines between 3500 and 9000Å would be very surprising; (ii) weak continuum emission is detected in the red-arm observations down to about 6000Å, ruling out the possibility that the line is Ly$`\alpha `$; (iii) if the line is \[OII\] 3727 then, given the $`R`$ magnitude of the source ($`R=22.90`$), the derived redshift places it in the middle of the $`R`$–$`z`$ diagram of the other radio galaxies (cf. Figure 51 of Best et al. 1999). It appears fairly secure, therefore, that this emission line is \[OII\] 3727. Following these results, the BRL sample is now 99.5% complete. The only object without a spectroscopic redshift in the sample is 1059-010 (3C249), whose very faint $`R`$ magnitude ($`R=24.20`$; Best et al. 1999) suggests a minimum redshift of 1.5. Spinrad, Stern and Dey (private communication) have attempted, without success, to obtain a redshift for this object using the Keck Telescope, and Rawlings (private communication) has carried out near-infrared spectroscopy with UKIRT in the J-band and between 1.6 and 2.2 microns, detecting the continuum but no lines. Obtaining the final redshift in the sample may prove difficult.

## Acknowledgements

This work was supported in part by the Formation and Evolution of Galaxies network set up by the European Commission under contract ERB FMRX–CT96–086 of its TMR programme. The WHT is operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias. The W. M. Keck Observatory is operated as a scientific partnership among the University of California, the California Institute of Technology, and NASA. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. The authors are grateful to the WHT service observer, John Telting, to Wil van Breugel, Carlos De Breuck, Dan Stern and Adam Stanford for kindly observing 1413-215 at the Keck Telescope, and to the referee, Steve Rawlings, for helpful comments.
# On the Time Evolution of Gamma-Ray Burst Pulses: A Self-Consistent Description

## 1 Introduction

The mechanisms giving rise to the observed gamma-ray burst (GRB) emission may reveal themselves in correlations describing the continuum spectral evolution. A few such correlations between observable quantities have been found. One of these is that between the instantaneous hardness of the spectrum and the instantaneous flux within individual pulses, especially during their decay phases. Somewhat more than every second pulse decay exhibits such a correlation (e.g., Kargatis et al. 1995). Furthermore, a correlation between the instantaneous hardness of the spectrum and the time-integrated flux, the fluence, has been established in a majority of the pulses where it has been searched for (e.g., Crider et al. 1999). In this work, we demonstrate that decay phases of the light curve for which both these correlations are valid must follow a specific decay law. This is shown analytically in §2 by combining the two, previously well-studied, empirical relations into a new compact description of the temporal behavior of the decay phase. In §3, we study a sample of pulse decays observed by the Burst and Transient Source Experiment (BATSE) on the Compton Gamma-Ray Observatory (CGRO) and give a few illustrative examples. A discussion is given in §4.

## 2 Descriptions of the Time Evolution

The instantaneous photon spectrum, $`N_\mathrm{E}(E,t)`$ \[photons cm<sup>-2</sup> s<sup>-1</sup> keV<sup>-1</sup>\], having approximately the shape of a broken power law, is characterized mainly by two entities: the total instantaneous photon flux, $`N(t)`$ \[photons cm<sup>-2</sup> s<sup>-1</sup>\], and a measure of the “hardness”, e.g., the instantaneous peak energy, $`E_{\mathrm{pk}}(t)`$, of the $`E^2N_\mathrm{E}(E,t)`$ spectrum. The time evolution of, for instance, a pulse decay phase in a GRB light curve can then be described by a vector function $`𝐆(t^{}-t_0;\mathrm{parameters})=(N(t^{}-t_0),E_{\mathrm{pk}}(t^{}-t_0))`$, where $`t^{}`$ is the time parameter and $`t_0`$ is the starting time of the pulse decay. Apart from the running time parameter there are a number of parameters specific to the pulse decay. The initial value at $`t\equiv t^{}-t_0=0`$ is $`𝐆(0)\equiv (N_0,E_{\mathrm{pk},0})`$. The relation between the evolution of the instantaneous spectral characteristics, e.g., $`E_{\mathrm{pk}}(t)`$, and the corresponding intensity of the light curve, e.g., $`N(t)`$, has been widely studied, leading to empirical relations describing the observed behavior within a GRB and even within individual pulses. The most common trend is the hard-to-soft evolution, in which the peak energy of the spectrum decreases monotonically over the entire pulse (Norris et al. 1986). A less common trend, where the hardness and the intensity track each other, was found by Golenetskii et al. (1983). Similar results were also reported by Kargatis et al. (1994) and Bhat et al. (1994). Especially for the decay phase of pulses, such a hardness-intensity correlation (HIC) is common. Kargatis et al. (1995) found a strong correlation for 28 pulse decay phases in a sample of 26 GRBs with pulse pairs. For the decay phase, the HIC can be expressed as $$E_{\mathrm{pk}}(t)=E_{\mathrm{pk},0}\left[\frac{N(t)}{N_0}\right]^\delta ,$$ (1) where $`\delta `$ is the correlation index.
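In practice $`\delta `$ is obtained by fitting equation (1) to time-resolved measurements; since the HIC is a power law, it is linear in log-log space. A schematic illustration with synthetic data (the numbers below are ours and purely illustrative, not from the actual BATSE fits):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic decay-phase data obeying the HIC of Eq. (1):
# E_pk = E_pk0 * (N / N0)^delta, with 10% log-normal scatter added.
N0, Epk0, delta_true = 10.0, 300.0, 0.6
N   = N0 * np.linspace(1.0, 0.2, 25)
Epk = Epk0 * (N / N0)**delta_true * rng.lognormal(0.0, 0.1, N.size)

# The HIC is linear in log-log space, so delta is simply the slope.
delta_fit, log_Epk0_fit = np.polyfit(np.log(N / N0), np.log(Epk), 1)
print(delta_fit)   # ~0.6
```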
A second empirical relation was found by Liang & Kargatis (1996), who pointed out a correlation between the peak energy and the time-integrated photon flux, $$E_{\mathrm{pk}}(t)=E_{\mathrm{pk},0}e^{-\mathrm{\Phi }(t)/\mathrm{\Phi }_0},$$ (2) where $`\mathrm{\Phi }(t)`$ is the photon fluence $`\int _0^tN(t^{\prime \prime })dt^{\prime \prime }`$ \[cm<sup>-2</sup>\], integrated from the time of $`E_{\mathrm{pk},0}`$, and $`\mathrm{\Phi }_0`$ is the exponential decay constant. In their discovery paper, Liang & Kargatis (1996) showed the correlation to be valid for several long, smooth, single pulses and especially for their decays. They found that 35 out of 37 pulses were consistent or marginally consistent with the correlation. This work was followed by a series of papers by Crider and coworkers (1997, 1998a, b, 1999). In Crider et al. (1999), a sample of 41 pulses in 26 GRBs was studied in greater detail, confirming the original discovery. They, however, preferred a slight modification of the description of the decay and studied the peak energy versus the energy fluence instead. The two approaches are very similar and do not describe fundamentally different trends of the decay. This empirical relation was also studied in Ryde & Svensson (1999), who showed that it can be used to derive the shape of the time-integrated spectrum, as this is the result of integrating the evolving time-resolved spectra. The two relations given by equations (1) and (2) fully describe the evolution. If these two relations are fulfilled, one can show (e.g., in §4 below) that the function $`𝐆(t)=(N(t),E_{\mathrm{pk}}(t))`$ is given by $`N(t)`$ $`=`$ $`{\displaystyle \frac{N_0}{(1+t/\tau )}},`$ (3) $`E_{\mathrm{pk}}(t)`$ $`=`$ $`{\displaystyle \frac{E_{\mathrm{pk},0}}{(1+t/\tau )^\delta }},`$ (4) i.e., the instantaneous photon flux is a reciprocal function of time with the time constant $`\tau `$. The peak energy has a similar dependence, differing only by the HIC index $`\delta `$. The number of parameters in the present formulation is limited to two, $`\tau `$ and $`\delta `$. From equations (3) and (4), all the empirical results discussed above follow. Eliminating the explicit time dependence by combining the two equations, the hardness-intensity correlation described by equation (1) is recovered: $`E_{\mathrm{pk}}=E_{\mathrm{pk},0}(N/N_0)^\delta `$. The function $`𝐆(t;\tau ,\delta )`$ describes this relation as a path in the $`N`$–$`E_{\mathrm{pk}}`$ plane, and as $`N(t)`$ and $`E_{\mathrm{pk}}(t)`$ evolve in the same manner, except for the exponent $`\delta `$, the relations (3) and (4) give rise to the Golenetskii power law (Eq. 1). Furthermore, the photon fluence is found by integrating equation (3): $$\mathrm{\Phi }(t)=N_0\tau \mathrm{ln}(1+t/\tau ),$$ (5) which, when used to eliminate the $`(1+t/\tau )`$-dependence in equation (4), gives the hardness-fluence relation: $$E_{\mathrm{pk}}(t)=E_{\mathrm{pk},0}e^{-\delta \mathrm{\Phi }(t)/N_0\tau }.$$ (6) Identifying this equation with equation (2), one finds the exponential decay constant to be given by $`\mathrm{\Phi }_0\equiv N_0\tau /\delta `$, and thus that the time constant $$\tau =\delta \mathrm{\Phi }_0/N_0.$$ (7) This is the crucial relation that connects the pulse timescale with the properties of relations (1) and (2).

## 3 BATSE Observations

To verify and illustrate the results above, we searched for this specific spectral-temporal behavior in GRBs observed by BATSE. We studied the high energy resolution Large Area Detector (LAD) observations of GRBs.
We selected bursts from the BATSE catalog<sup>1</sup><sup>1</sup>1 The BATSE GRB catalog is available online at: http://gammaray.msfc.nasa.gov/batse/grb/data/catalog/ up to GRB 990126 (with trigger number 7353), with a peak flux (50-300 keV, on a 256 ms timescale) greater than $`5`$ photons s<sup>-1</sup> cm<sup>-2</sup>, giving in total 155 bursts having useful LAD high energy resolution burst (HERB) data. Out of these, we found a sample of 59 GRBs with a total of 83 pulse decays strong enough to allow spectral fitting of the time-resolved spectra in at least four time bins with a signal-to-noise (S/N) ratio of $`\gtrsim 30`$ in the 25–1900 keV band. The purpose of the fitting is to determine the hardness parameter, $`E_{\mathrm{pk}}(t)`$, and to allow the deconvolution of the count spectrum in order to obtain $`N(t)`$. For this, a larger S/N ratio is not needed. Commonly, the LAD 4 energy-channel DISCSC data are used, and light curves are studied in narrow spectral ranges, typically 50–300 keV, in units of count rates. For our analysis we instead use the LAD HERB data, since they have the necessary higher energy resolution, with 128 energy channels, and cover the maximal possible energy range, 25 keV – 1.9 MeV. Furthermore, we use light curves in terms of photon flux rather than count rates. The spectral fitting was done using the Band et al. (1993) function, with all its parameters free. Out of the 83 pulse decays in our sample, we found 38 ($`\sim 45\%`$) to be consistent with the reciprocal decay law in Equation (3). We start by presenting detailed results of the analysis of two of these pulses, namely GRB 921207 (# 2083) and GRB 950624 (# 3648); see Table 1. A fit of the function $`N=N(t)`$ (Eq. 3) to the data gives the photon flux at $`t=0`$, $`N_0`$, and the time constant, $`\tau `$. Freezing $`\tau `$ to this value, the fit of $`E_{\mathrm{pk}}(t)`$ (Eq. 4) gives the initial value of the peak energy, $`E_{\mathrm{pk},0}`$, and the HIC index $`\delta `$. The results of these fits are given in the upper half of Table 1. The same information can be found from fits of the empirical relations described by equations (1) and (2). A fit of the hardness-fluence correlation (Eq. 2) allows the determination of $`E_{\mathrm{pk},0}`$ and the exponential decay constant, $`\mathrm{\Phi }_0`$. Freezing $`E_{\mathrm{pk},0}`$ to the value obtained, the fit of the hardness-intensity correlation (Eq. 1) gives values for $`N_0`$ and $`\delta `$. These fitted values, as well as the computed values of $`\tau =\delta \mathrm{\Phi }_0/N_0`$, are given in the lower half of Table 1. Note the excellent consistency between the two sets of fitted parameters. In the data for the first pulse of GRB 921207, especially regarding the dependence of $`N(t)`$ on $`t`$ and on $`E_{\mathrm{pk}}`$, we do not find any strong indication of it being a double pulse, as suggested by Crider et al. (1998a) (see Fig. 1). The results of the analysis of a few further examples are given in Table 2. The fitted decay phases of these are presented in Figure 1, where the linear $`1/N(t)`$ functions are displayed. The four parameters describing the decay phases, $`N_0`$, $`E_{\mathrm{pk},0}`$, $`\tau `$, $`\delta `$, are found from the fits of $`𝐆(t;\tau ,\delta )`$ given by equations (3) and (4), while $`\mathrm{\Phi }_0`$ is found from fits of the empirical relation (2). Equation (7) gives consistent $`\tau `$-values within the errors.
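The internal consistency of Eqs. (1)–(7) that these fits exploit can be checked in a few lines. The parameter values in the sketch below are illustrative only (not the Table 1 values):

```python
import numpy as np

# Illustrative decay parameters (ours, for demonstration only)
N0, tau, delta, Epk0 = 8.0, 3.0, 0.6, 400.0
t = np.linspace(0.0, 20.0, 2001)

N    = N0 / (1.0 + t / tau)                 # Eq. (3)
Epk  = Epk0 / (1.0 + t / tau)**delta        # Eq. (4)
Phi  = N0 * tau * np.log(1.0 + t / tau)     # Eq. (5)
Phi0 = N0 * tau / delta                     # Eq. (7), rearranged

# Consistency checks: the HIC, Eq. (1), and the hardness-fluence
# relation, Eq. (2), both follow from Eqs. (3)-(5).
assert np.allclose(Epk, Epk0 * (N / N0)**delta)
assert np.allclose(Epk, Epk0 * np.exp(-Phi / Phi0))
```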
For our purposes, we need as high a time resolution as possible, while still permitting proper spectral fitting. We therefore chose S/N $`=30`$. When possible, we redid the analysis with S/N $`=45`$, checking that we arrived at the same results, now, however, with lower temporal resolution. The panel furthest to the right in the lower row of Figure 1 illustrates a case, GRB 960807 (# 5567), for which the reciprocal law is not valid. In this specific case, the hardness-fluence correlation (Eq. 2) is valid while the HIC is not a power law. A detailed discussion of such cases is given in Ryde et al. (1999).

## 4 Discussion and Conclusions

Equation (3) is an important result, as it describes how the intensity declines with time during the decay phase of a GRB pulse, for the subgroup of pulses for which both of the empirical relations, equations (1) and (2), are valid. The reciprocal of the instantaneous intensity is a linear function of time, $`1/N(t)=\left(1+t/\tau \right)/N_0`$. This should be compared to the generally discussed exponential decay, e.g., in terms of a FRED (fast rise, exponential or stretched exponential decay), often used to characterize single pulses within GRBs (e.g. Norris et al., 1996). Our result is an analytical one, following from the two empirical relations. We note that $`N(t)`$ approaches $`(1-t/\tau )N_0`$ for $`t\ll \tau `$, which is the same time behavior as that of the stretched exponential $`N(t)=N_0\mathrm{exp}\left[-(t/\tau _{*})^\nu \right]`$ in the same limit. When $`t\lesssim \nu \tau _{*}`$, it is difficult to distinguish between the two behaviors. The hardness, represented by the peak energy, $`E_{\mathrm{pk}}(t)`$, also declines reciprocally with time, but stretched by the $`\delta `$-power (see Eq. 4). A comparison between the intensity decline and the $`E_{\mathrm{pk}}`$-decline is shown, e.g., in Ford et al. (1995). As shown by equation (5), the fluence increases logarithmically with time. This divergent behavior must eventually change, when the emission of radiation changes behavior, terminates, or shifts out of the observed spectral range. The differential equations governing the time evolution of $`𝐆(t;\tau ,\delta )`$ are readily found. Differentiating equation (2) gives $$\frac{dE_{\mathrm{pk}}(t)}{dt}=-\frac{\delta }{\tau N_0}N(t)E_{\mathrm{pk}}(t),$$ (8) which, combined with equation (1), gives the equation for $`E_{\mathrm{pk}}(t)`$ as $$\frac{dE_{\mathrm{pk}}(t)}{dt}=-\frac{\delta }{\tau (E_{\mathrm{pk},0})^{1/\delta }}E_{\mathrm{pk}}^{1+1/\delta }(t).$$ (9) Furthermore, combining equations (1) and (2) gives $$N(t)=N_0e^{-\mathrm{\Phi }(t)/N_0\tau },$$ (10) which, after differentiation, gives the equation for $`N(t)`$ as $$\frac{dN(t)}{dt}=-\frac{1}{N_0\tau }N^2(t).$$ (11) Integrating equations (9) and (11) then gives equations (4) and (3). The description is complete. A different $`N(t)`$-shape of the decay phase will of necessity lead to either the hardness-intensity correlation or the hardness-fluence correlation, or both, having a shape different from the well-observed empirical ones (Eqs. 1 and 2). Details are discussed in Ryde et al. (1999). The description does, unfortunately, not uniquely point to a specific radiation process. For instance, equation (8) could be due to any thermal process. Consequently, it is consistent with saturated Comptonization without extra heating, which, though, is in general difficult to achieve. Such a scenario has, however, been discussed by Liang & Kargatis (1996) and Liang (1997).
In their theoretical, saturated Compton cooling model, Liang et al. (1997) arrive at the equation $`N(t)=d\mathrm{\Phi }/dt=kt^s\mathrm{exp}(-n\mathrm{\Phi }/\mathrm{\Phi }_0)(1+E_{\mathrm{pk}}(t))^n/b(t)`$ (obtained by rewriting their Eq. 5), where $`k`$ is a constant and $`b(t)`$ = (the soft photon injection rate)/(BATSE photon flux). In order to simplify this equation, they set, in a rather ad hoc manner (guided by Monte Carlo simulations), $`b(t)=(1+E_{\mathrm{pk}}(t))^n`$, so that the last two factors cancel. The resulting equation for $`\mathrm{\Phi }`$ then becomes identical to our equation (10) for the case $`s=0`$ (i.e., no Thomson thinning) and taking $`n=1/\delta `$. The resulting solutions are, of course, identical. We strongly emphasize the difference: we obtain our results from two empirical relations, while Liang et al. obtain theirs by employing a rather ad hoc simplification within a theoretical model. In conclusion, we find a subgroup ($`\sim 45\%`$) of GRB pulse decays which behave in a similar way, with the decay phases being reciprocal functions, Eqs. (3) and (4). This should thus represent a signature of the underlying physical processes giving rise to these pulses. We have illustrated the results both analytically and by studying BATSE pulses. The time evolution is fully defined by the two initial conditions at the start of the decay, $`N_0`$ and $`E_{\mathrm{pk},0}`$, and by two parameters specific to the GRB pulse decay, $`\tau `$ and $`\delta `$, or, equivalently, $`\mathrm{\Phi }_0`$ and $`\delta `$. This research made use of data obtained through the HEASARC Online Service provided by NASA/GSFC. We are also grateful to the GROSSC for support. We thank S. Larsson, L. Borgonovo, R. Preece, A. Beloborodov, and J. Poutanen for useful discussions. We acknowledge support from the Swedish Natural Science Research Council (NFR), the Anna-Greta and Holger Crafoord Fund, a NORDITA grant, and NSF Grant No. PHY94-07194.
# Probing the Geometry of Supernovae with Spectropolarimetry

## Introduction

Are supernovae (SNe) round? This simple question belies a menacing observational challenge, since all extragalactic SNe remain unresolvable point sources throughout the crucial early phases of their evolution. Since a hot young supernova (SN) atmosphere is dominated by electron scattering, which, by its nature, is highly polarizing, a powerful tool for investigating SN geometry is spectropolarimetry of the expanding fireball shortly after the explosion (Fig. 1). Typical polarizations of $`\sim 1\%`$ are expected for moderate ($`\sim 20\%`$) SN asphericity hoflich91 . Detecting such low polarization requires a very high signal-to-noise ratio, which has limited previous detailed spectropolarimetric studies to only the two brightest recent events, SN 1987A jeffery91 and SN 1993J trammell93 ; tran97 . We thus began a program to obtain spectropolarimetry of nearby SNe using the 10-m Keck telescopes. A complication in the interpretation of all polarization measurements is disentangling the polarization intrinsic to the object from the interstellar polarization (ISP) produced by dust along the line of sight. Fortunately, the ISP is constant in time and a smoothly varying function of wavelength. Therefore, we consider distinct spectral polarization features, temporal changes in the overall polarization level, or continuum polarization characteristics differing from the known form produced by interstellar dust as evidence for intrinsic SN polarization.

## Results and Discussion

Single-epoch polarization data for six SNe of various types are shown in Fig. 2. Since determining the intrinsic SN polarization level requires knowledge of the (unknown) ISP, we focus instead on the sharp changes seen in the polarization at the location of strong features in the total flux spectra; these features remain, regardless of the ISP contribution. Since all the objects studied possess spectropolarimetric line features, we conclude that all types of SNe show evidence for intrinsic polarization at early times, suggesting that asphericity may be a ubiquitous SN characteristic. The fact that the strongest spectropolarimetric features are often seen in the troughs of strong P-Cygni lines is not surprising. A simple explanation may be that P-Cygni absorption selectively blocks photons coming from the central, more forward-scattered (and thus less polarized) regions, thereby enhancing the relative contribution of the more highly polarized photons from the limb regions (cf. Fig. 1). Unfortunately, since different (allowable) choices for the ISP can make inferred intrinsic polarization dips become peaks and vice versa leonard99a , we cannot say for certain whether the changes seen here in P-Cygni troughs represent increases or decreases in the intrinsic polarization level. We do note, however, that trough polarization increases are seen in the ISP-corrected data of both SN 1987A jeffery91 and SN 1993J tran97 . A total flux spectrum dominated by strong line emission without P-Cygni absorption is a distinguishing characteristic of SNe IIn schlegel90 , likely resulting from an intense interaction between the SN and a dense circumstellar medium (CSM). SN 1997eg (Fig. 2) shows sharp polarization changes across its strong, multi-component emission lines, suggesting distinct scattering origins for the intermediate (full width at half maximum (FWHM) $`\sim 2000`$ km/s) and broad (FWHM $`\sim 15000`$ km/s) components.
Two additional spectropolarimetric epochs (not shown) revealed a change in continuum polarization level of $`\sim 1\%`$ over 78 days, further confirming the presence of intrinsic polarization. A detailed analysis combining spectropolarimetry and total flux spectra of another IIn event, SN 1998S, also found evidence for a highly aspherical ($`\sim 45\%`$) continuum scattering region, with the CSM likely distributed in a disk-like or ring-like morphology, quite similar to what is seen directly in SN 1987A crotts99 , except much closer to the progenitor in the case of SN 1998S leonard99a .

## Conclusion

The number of SNe studied spectropolarimetrically is still very small, but early indications are that all types reveal intrinsic polarization if examined in sufficient detail. In addition to its implications for the core-collapse mechanism, the mass-loss history of evolved stars, and the spatial distribution of SN ejecta, this work has direct consequences for the use of SNe as cosmological distance indicators. Although the empirically based standard-candle technique used to measure SN Ia distances does not rely on spherical symmetry, distances derived to SNe II-P through the “expanding photosphere method” kirshner74 would need to be corrected for directionally dependent flux if asphericity is found to be common in this SN class. We note that SN 1999em, a type II-P event discovered shortly after this conference, showed no evidence for intrinsic polarization when it was observed less than two weeks after the explosion leonard99b . It will be interesting to see if it remains unpolarized at an age comparable to that of the II-P observation (SN 1997ds, observed $`\sim 50`$ days after explosion) presented here.

## Acknowledgments

We thank Aaron Barth for useful discussions and assistance with the observations and data reduction. Supernova research is supported at UC Berkeley through NSF grant AST-9417213 and NASA grant GO-7434.
# COMPTEL Time-Averaged All-Sky Point Source Analysis

## Introduction

The imaging COMPTEL experiment aboard CGRO is the pioneering satellite experiment of the MeV sky ($`\sim 1`$–30 MeV). For a detailed description of COMPTEL see Schonfelder93 . One of COMPTEL’s prime goals is the generation of all-sky maps, which provide a summary of the MeV sky as a whole. This goal has been achieved by e.g. Strong97 , Bloemen99 , who generated maximum-entropy all-sky images, and by Blom97 , who generated the first COMPTEL all-sky maximum-likelihood maps, which – compared to maximum-entropy ones – have the advantage of providing quantitative results such as significances and fluxes of source features. Here we present all-sky maximum-likelihood maps from which models of the diffuse emission have been removed. Our emphasis is on AGN. For a discussion of the method see Bloemen00 in these proceedings. The main analysis goals are 1) to derive a summary of known COMPTEL point sources, 2) to search for further point sources, 3) to derive time-averaged quantitative (‘first order’) parameters of our brightest point sources, i.e., significances, fluxes, MeV spectra, and possible time variability, and 4) to further investigate our data and analysis methods.

## Data and Analysis Method

Using all data from the beginning of the CGRO mission (April ’91) to the end of CGRO Cycle VI (Nov. ’97), we generated a consistent database of relevant COMPTEL data sets (events, exposure, geometry) for individual CGRO viewing periods (VPs) in the 4 standard energy bands (0.75-1, 1-3, 3-10, 10-30 MeV) in galactic coordinates, applying consistent data selections. This database was supplemented by relevant data sets containing models describing the galactic diffuse $`\gamma `$-ray emission (HI, CO, and inverse-Compton components) and the isotropic extragalactic $`\gamma `$-ray background emission. To check for time variability of $`\gamma `$-ray sources, these data sets were combined for different time periods: the six individual CGRO Phases/Cycles, the sum of all data (CGRO Phases I-VI; April ’91 - Nov. ’97), as well as the first (CGRO Phases I-III; April ’91 - Oct. ’94) and the second half (CGRO Phases IV-VI; Oct. ’94 - Nov. ’97). Each set of all-sky data is analysed by our standard maximum-likelihood method, which simultaneously ‘handles’ individual VPs, generates, iteratively, a background model (see Bloemen94 ), and finally generates significance and flux maps and/or significances and fluxes for individual sources. Because we are interested in point sources, the diffuse emission is always removed in the fitting procedure (e.g. Figure 1). For the derivation of the source fluxes (see Figure 2 for an example), the point sources of interest (e.g. 3C 273, Cyg X-1) have additionally been included in the fitting procedure. We would like to mention, however, that the results derived by such all-sky fits should be considered correct to first order only. To derive final/optimal results for a particular source, a dedicated analysis has to be carried out, which e.g. makes several cross-checks by applying different background models and takes into account the presence of other source features in the region of interest. Also, along the galactic plane the results depend on the ‘goodness’ of the applied diffuse emission models for the MeV band.

## Results

The significance maps in Figure 1, which contain all data of the first 6.5 years of the COMPTEL mission, are the first COMPTEL all-sky point source maps in the continuum bands.
They provide a summary of the on-average brightest and most significant MeV sources. Similar maps focussing on the Galactic plane only are given elsewhere in these proceedings (). The Crab – for display reasons removed from all maps of Figure 1 – is by far the most significant COMPTEL point source. In the 1-3 MeV band, for example, it reaches a significance of $`\sim 110\sigma `$ (i.e. a likelihood ratio of $`\sim 12000`$) for the CGRO Phase I-VI period. With significances of $`\sim 11\sigma `$, $`\sim 10\sigma `$, and $`\sim 6\sigma `$ in the 1-3, 3-10, and 10-30 MeV bands, the quasar 3C 273 is found to be on average the second most significant point source. Its fluxes in these bands are between 10% and 15% of the Crab flux. Several other extragalactic (e.g. 3C 279, PKS 0528+134, Cen A) and galactic (e.g. Cyg X-1, PSR 1509-58, a known but unidentified source at l;b: 18<sup>o</sup>;0<sup>o</sup>) sources are visible as well. In addition there are indications of previously unknown source features, e.g. at l;b: 75<sup>o</sup>;+65<sup>o</sup> in the 1-3 MeV map and at l;b: 85<sup>o</sup>;-65<sup>o</sup> in the 3-10 MeV map. Such spots are promising candidates for further dedicated analyses. This time-averaged approach suppresses sources which flare up only during short time periods. Therefore the maps show fewer sources than are listed in the COMPTEL source catalog (see Schonfelder99 ). For all bright and significant COMPTEL sources we have derived fluxes in our 4 standard energy bands for the different time periods mentioned above, and have combined them into MeV light curves and spectra. Some results for 3C 273 are shown as an example in Figure 2. In the 1-10 MeV energy band 3C 273 is detected in each CGRO Phase/Cycle, i.e. in time periods of typically 1 year. The flux turns out to be rather stable, varying only within a factor of $`\sim 2`$ in the 1-3 MeV and within a factor of $`\sim 4`$ in the 3-10 MeV energy band. The spectra show the same trend. Whereas the flux below 3 MeV turns out to be the same for both halves, there is an indication that at the upper COMPTEL energies ($`>`$3 MeV) the source was brighter during the second half. All three spectra clearly show the spectral turnover occurring at MeV energies. However, we emphasize that for final conclusions a dedicated source analysis has to be carried out.

## Summary

We have applied the maximum-likelihood method to COMPTEL all-sky data of different time periods. By simultaneously fitting models for the different diffuse emission components, this analysis method provides quantitative all-sky results – significances and fluxes – on point sources. An all-sky summary of their time-averaged fluxes and significances is thereby provided. After the Crab – pulsar plus nebula – the quasar 3C 273 was found to be the most significant COMPTEL MeV source, having time-averaged fluxes of the order of 10% to 15% of the Crab. Additional evidence for previously unknown source features has been found as well. ACKNOWLEDGMENTS: The COMPTEL project is supported by the German government through DARA grant 50 QV 9096 8, by NASA under contract NAS5-26645, and by the Netherlands Organisation for Scientific Research (NWO).
# Resolved CO(1→0) Nuclei in IRAS 14348-1447: Evidence for Massive Bulge Progenitors to Ultraluminous Infrared Galaxies

## 1. Introduction

Galaxy tidal interactions and mergers are responsible for the most luminous galaxy phenomena in the universe, be they starbursts or active galactic nuclei (AGN). The energy sources are fueled by molecular gas, which is subject to gravitational torques, dynamical friction and dissipation during the interaction. As a result, the distribution of young stars in the merger and the efficiency at which an AGN can be built or fueled depend not only on the amount of fuel available, but also on the Hubble types of the progenitors (e.g., Mihos & Hernquist 1996; Mihos 1999). The most energetic examples of mergers known are the ultraluminous infrared galaxies (ULIGs: $`L_{\mathrm{IR}}[8-1000\mu \mathrm{m}]\geq 10^{12}`$ $`L_{\odot }`$). ULIGs have been of particular interest in the last 15 years, both because of their extreme starburst environment (e.g. Joseph & Wright 1985) and because of their possible evolutionary connection with QSOs and radio galaxies (Sanders et al. 1988a,b; Mirabel, Sanders, & Kazes 1989). In order to determine the properties of the progenitors of ULIGs and the mechanisms at work as the progenitors come under the gravitational influence of each other, a program has been initiated to map the distribution of CO($`1\rightarrow 0`$) in several ULIGs for which the nuclei of the progenitors have yet to coalesce. In this Letter, high-resolution CO($`1\rightarrow 0`$) observations of the ULIG IRAS 14348-1447, obtained with the Owens Valley Millimeter Array (OVRO), are presented. With an infrared luminosity $`L_{\mathrm{IR}}=1.8\times 10^{12}`$ $`L_{\odot }`$ and a molecular gas mass of $`4.2\times 10^{10}`$ $`M_{\odot }`$ (Sanders, Scoville, & Soifer 1991), IRAS 14348-1447 is one of the most luminous and molecular gas rich ULIGs that show no definitive evidence of AGN activity (i.e., broad line emission or strong high ionization lines; see Veilleux, Sanders, & Kim 1997). The nuclei have a projected separation of 3.5<sup>′′</sup> (4.8 kpc: Carico et al. 1990; Surace, Sanders & Evans 1999; Scoville et al. 1999), making it ideally suited to the resolution of OVRO. The data presented here show evidence contrary to observations which have led to suggestions that molecular gas in luminous mergers as a class is stripped and collects between the merging stellar nuclei, but support recent models showing that the molecular disks in mergers of galaxies with massive bulges are gravitationally stabilized against stripping (Mihos & Hernquist 1996; Mihos 1999). An $`H_0=75`$ km s<sup>-1</sup> Mpc<sup>-1</sup> and $`q_0=0.5`$ are assumed throughout, such that 1<sup>′′</sup> subtends $`\sim 1.4`$ kpc at the redshift of the galaxy ($`z=0.0825`$).

### 1.1. Interferometric Observations

Aperture synthesis CO($`1\rightarrow 0`$) maps of IRAS 14348-1447 were made with the Owens Valley Radio Observatory (OVRO) Millimeter Array during two observing periods from 1999 February to 1999 April. The array consists of six 10.4 m telescopes, and the longest observed baseline was 242 m. Each telescope was configured with $`120\times 4`$ MHz digital correlators. During the observations, the nearby quasar \[HB89\] 1334-127 (5.89 Jy at 107 GHz; B1950.0 coordinates 13:34:59.81 -12:42:09.9) was observed every 25 minutes to monitor phase and gain variations, and 3C 273 and 3C 345 were observed to determine the passband structure. Finally, flux calibration observations of Neptune were obtained.
The OVRO data were reduced and calibrated using the standard Owens Valley data reduction package MMA (Scoville et al. 1992). The data were then exported to the mapping program DIFMAP (Shepherd, Pearson, & Taylor 1995), and the NRAO software package AIPS was used to extract spectra.

## 2. Results

Figure 1a shows a 0.8 $`\mu `$m archival image of IRAS 14348-1447 taken with the Hubble Space Telescope (HST) WFPC2 instrument. The merger consists of two nearly face-on colliding spirals with a projected nuclear separation of 3.4<sup>′′</sup> (4.8 kpc). A prominent tidal tail is visible extending northward from the northeastern galaxy (hereafter IRAS 14348-1447NE), and a much fainter countertail extends southward from the southwestern galaxy (hereafter IRAS 14348-1447SW). Knots, or super star clusters, are visible around both nuclei, as well as along the tidal tail of IRAS 14348-1447NE. Figure 1b shows the CO($`1\rightarrow 0`$) emission of IRAS 14348-1447 in contours superposed on a three-color HST NICMOS (Near-Infrared Camera and MultiObject Spectrometer) image of the ULIG. The CO emission consists of two unresolved components (FWHM = 2.8<sup>′′</sup> $`\times `$ 1.9<sup>′′</sup> = 3.9 kpc $`\times `$ 2.7 kpc) which are centered on the respective stellar nuclei of the progenitors. The upper limit on the CO extent of each galaxy is significantly less than the average effective diameter<sup>1</sup><sup>1</sup>1Young et al. (1995) define the effective diameter to be the diameter that encloses 70% of the total CO emission. of 11.5$`\pm 7.5`$ kpc determined from a sample of $`\sim 140`$ nearby spiral galaxies (Young et al. 1995). The total CO flux density of IRAS 14348-1447 is 40.8 Jy km s<sup>-1</sup> and the CO luminosity is $`L_{\mathrm{CO}}^{\prime }=8.0\times 10^9`$ K km s<sup>-1</sup> pc<sup>2</sup>, with 42% and 68% of the emission emanating from the NE and SW components, respectively. The total CO flux density is 20% less than the flux density derived from the single-dish measurement of IRAS 14348-1447 with the NRAO 12m Telescope (half power beam width, HPBW = 55<sup>′′</sup>: Sanders, Scoville, & Soifer 1991). Such a discrepancy is likely due to flux calibration uncertainties associated with the individual observations, each of which can be as large as 15%. Assuming a standard ratio ($`\alpha `$) of CO luminosity to H<sub>2</sub> mass of 4 $`M_{\odot }`$ (K km s<sup>-1</sup> pc<sup>2</sup>)<sup>-1</sup>, which is similar to the value determined for the bulk of the molecular gas in the Milky Way, the total molecular gas mass of the pair is calculated to be $`3.2\times 10^{10}`$ \[$`\alpha /4`$\] $`M_{\odot }`$, or 14 times the molecular gas mass of the Milky Way.<sup>2</sup><sup>2</sup>2Radford, Solomon, & Downes (1991) have used theoretical models to determine that $`\alpha `$ ranges from 2–5 $`M_{\odot }`$ (K km s<sup>-1</sup> pc<sup>2</sup>)<sup>-1</sup> for a reasonable range of temperatures and densities. Downes & Solomon (1998) have modeled interferometric CO data of a sample of infrared luminous galaxies to derive $`\alpha =0.8`$. Thus, the molecular gas mass of IRAS 14348-1447 may be as low as $`6.4\times 10^9`$ $`M_{\odot }`$.
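The \[$`\alpha /4`$\] scaling used above is linear, so the quoted masses follow immediately; a one-line sketch (ours) of the conversion:

```python
def h2_mass(L_co, alpha=4.0):
    """Molecular gas mass (Msun) from CO luminosity;
    L_co in K km/s pc^2, alpha in Msun (K km/s pc^2)^-1."""
    return alpha * L_co

L_co = 8.0e9                      # value quoted above for IRAS 14348-1447
print(h2_mass(L_co, alpha=4.0))   # 3.2e10 Msun, Galactic-like alpha
print(h2_mass(L_co, alpha=0.8))   # 6.4e9  Msun, Downes & Solomon (1998) alpha
```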
However, note that the dynamical mass derived using $`\mathrm{\Delta }v_{\mathrm{FWHM}}=350`$ and $`300`$ km s<sup>-1</sup>, $`r=108`$ and $`84`$ pc (see §3), and assuming reasonable inclination angles of the disk axes relative to the line of sight of $`29^\mathrm{o}`$ and $`18^\mathrm{o}`$ for the IRAS 14348-1447NE and 14348-1447SW disks, respectively, yields masses consistent with the molecular gas mass calculated with $`\alpha =4`$ $`M_{\odot }`$ (K km s<sup>-1</sup> pc<sup>2</sup>)<sup>-1</sup>. Figure 1b also shows the CO($`1\rightarrow 0`$) emission-line spectra extracted from both progenitors. The line profile of IRAS 14348-1447NE appears asymmetric and blueshifted, consistent with the H$`\alpha `$ line profile determined from Fabry-Perot observations (Mihos & Bothun 1998), and has a full width at half maximum intensity of $`\mathrm{\Delta }v_{\mathrm{FWHM}}\sim 350`$ km s<sup>-1</sup>. IRAS 14348-1447SW has a more symmetric CO line profile with $`\mathrm{\Delta }v_{\mathrm{FWHM}}\sim 300`$ km s<sup>-1</sup>. The relative velocity offset of the CO line centroids is approximately 120 km s<sup>-1</sup>, consistent with the offsets measured from the optical emission lines (Veilleux et al. 1995; Mihos & Bothun 1998). Table 1 summarizes the properties discussed above.

## 3 Discussion

Figure 1 clearly shows that IRAS 14348-1447 consists of two gas-rich spiral galaxies that have undergone a gravitational response to at least one initial close approach. It is also clear that the molecular gas is associated with the individual progenitor nuclei at this stage of the merger. Further, the LINER (Low Ionization Nuclear Emission-line Regions) emission-line spectra of both nuclei (Veilleux et al. 1995) indicate that shocks, from either a low ionization AGN or supernovae resulting from massive starbursts with a total energy output of 1.8$`\times 10^{12}`$ $`L_{\odot }`$, have commenced. All of these properties can be understood in terms of a merger in which the molecular disk associated with each progenitor has been stabilized against stripping by a dense, massive stellar bulge. Given the upper limit on the gas distribution of IRAS 14348-1447 relative to local spiral galaxies, the interaction has likely already driven the molecular gas inwards, resulting in enhanced nuclear activity (see simulations by Mihos & Hernquist 1996 and Mihos 1999). Disk stabilization against stripping may also apply to other intermediate-stage ULIGs. Recent observations of Arp 220 have shown that the major CO concentrations in this merger are on the individual stellar nuclei, which have a projected separation of 0.95<sup>′′</sup> ($`\sim 350`$ pc; Sakamoto et al. 1999). Likewise, PKS 1345+12, a ULIG with very warm, Seyfert-like infrared colors relative to Arp 220 and IRAS 14348-1447, shows CO emission concentrated only on the active radio nucleus, indicating that gas infall and AGN activity have been triggered by interactions with the companion galaxy (Evans et al. 1999). The companion galaxy has colors consistent with an elliptical galaxy (Surace et al. 1998), which explains its lack of detectable molecular gas. In contrast, the morphologies of all of the lower luminosity luminous infrared galaxies (LIGs: $`L_{\mathrm{IR}}=10^{11.011.99}`$ $`L_{\odot }`$) with small ($`\lesssim `$ 5 kpc) projected nuclear separations observed to date are markedly different; Arp 244, VV 114, NGC 6240, and NGC 6090 have molecular gas predominantly in one component between the two nuclei (Stanford et al. 1991; Yun et al. 1994; Tacconi et al. 1999; Bryant & Scoville 1999).
Tacconi et al. (1999) have speculated that the molecular gas in NGC 6240 has been ram-pressure stripped, and that the nuclei may later sweep up gas as the galaxy evolves into ULIGs such as Arp 220. An alternative explanation may be that the processes affecting the gas in LIGs differ from those in ULIGs because the progenitors of each luminosity class differ. Specifically, ULIGs may primarily be examples of equal-mass mergers of galaxies with dense stellar bulges; thus the gas is driven into the nuclear region of the respective progenitor as the merger advances. In contrast, LIGs may be predominantly collisions of galaxies of different masses and relatively low-density bulges (e.g., note the differences in the near-infrared stellar morphologies of the progenitor galaxies of NGC 6090 and VV 114: Dinshaw et al. 1999; Scoville et al. 1999); thus the gas is stripped from the progenitors with extreme efficiency. As a result, many LIGs may never evolve into ULIGs. A larger survey of double nuclei ULIGs, consisting of those with obvious AGN signatures and those without, is underway to investigate the ubiquity of these results. Such a survey will also benefit from kinematic determinations of the stellar bulge masses of the progenitors. A comparison of the molecular gas and radio fluxes and morphologies of the progenitors of IRAS 14348-1447 can be used to derive the gas densities of the nuclei. The measured IRAS 14348-1447NE to IRAS 14348-1447SW flux density ratios of CO and the 8.44 GHz and 1.49 GHz radio emission (i.e., Condon et al. 1990; 1991) yield values of $`f(NE)/f(SW)`$ $`\sim `$ 0.61, 0.62, and 0.58, respectively. The implication of this result is that the molecular gas mass of each component is related to the source of the radio emission. This can be understood if the nuclear radio emission is due to synchrotron emission from supernovae and if both galaxies have similar initial mass functions. Thus, the CO and radio flux density ratios of the progenitors are similar because the same fraction of massive stars is produced per unit of star-forming molecular gas. Therefore, if the likely assumption is made that the extent of the radio emission of each nucleus (FWHM(NE) $`\sim 0.16`$<sup>′′</sup> \[220 pc\] and FWHM(SW) $`\sim 0.12`$<sup>′′</sup> \[170 pc\]) is similar to the true extent of the molecular gas (i.e., the supernovae fill the same volume as the gas from which they formed), then the northeastern and southwestern nuclei have gas densities of $`2.4\times 10^3`$ \[$`\alpha /4`$\] and $`7.7\times 10^3`$ \[$`\alpha /4`$\] $`M_{\odot }`$ pc<sup>-3</sup>, respectively. Given the uncertainty in the value of $`\alpha `$, the density and the velocity dispersion of the gas in each nucleus are comparable to the stellar densities and velocity dispersions of elliptical galaxy cores (Faber et al. 1997), supporting the likely connection between ULIGs and the formation of elliptical galaxy bulges (Kormendy & Sanders 1992). Molecular gas-rich ULIGs in the local universe such as IRAS 14348-1447 may provide insights into the nature of massive galaxy formation in the universe. Figure 2 shows a plot of the logarithm of the CO($`1\rightarrow 0`$) luminosity of LIGs and ULIGs versus their redshift. In this plot, IRAS 14348-1447 is shown as an asterisk enclosed in a circle. The rise in $`L_{\mathrm{CO}}^{\prime }`$ at $`z<0.04`$ is simply due to the space density of the flux-limited infrared luminous galaxy sample.
However, the leveling off of $`L_{\mathrm{CO}}^{\prime }`$ beyond $`z>0.07`$ is a possible indication that, due to self-regulating processes, galaxies do not contain molecular gas masses in excess of $`\sim 4\times 10^{10}`$ \[$`\alpha /4`$\] $`M_{\odot }`$ (Evans et al. 1996; Frayer et al. 1999). The observed flatness of $`L_{\mathrm{CO}}^{\prime }`$ beyond $`z\sim 0.07`$ extends out to redshifts of 4.7 (Frayer et al. 1999). Thus, in terms of the richness of their interstellar medium, galaxies such as IRAS 14348-1447 appear to be the low-redshift counterparts of the molecular gas-rich, high-redshift galaxies detected over the last decade (e.g. see Frayer et al. 1999 for a summary). If a substantial fraction of these systems have nuclear gas densities comparable to IRAS 14348-1447, then they as a class are the likely progenitors of massive elliptical galaxies. We thank the staff and postdoctoral scholars of the Owens Valley Millimeter Array for their support both during and after the observations were obtained. ASE thanks D. Frayer and N. Trentham for useful discussion and assistance. We also thank the referee for many useful suggestions. ASE was supported by RF9736D and by NASA grant NAG 5-3042. J.M.M. was supported by the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA. The Owens Valley Millimeter Array is a radio telescope facility operated by the California Institute of Technology and is supported by NSF grants AST 93-14079 and AST 96-13717. The NASA/ESA Hubble Space Telescope is operated by the Space Telescope Science Institute, managed by the Association of Universities for Research in Astronomy Inc. under NASA contract NAS5-26555.

Figure Captions

Figure 1. a) Hubble Space Telescope WFPC2 image of IRAS 14348-1447. b) CO($`1\rightarrow 0`$) contours of the merger superimposed on a three-color composite NICMOS image (blue = 1.1 $`\mu `$m, green = 1.6 $`\mu `$m, red = 2.2 $`\mu `$m). The CO contours are plotted at 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90%, and 99% of the peak flux of 0.0283 Jy/beam. The CO emission from each progenitor is unresolved, with a beam FWHM of 2.8″$`\times `$1.9″ at a position angle of -21.6<sup>o</sup>. Extracted spectra of the SW and NE progenitors are also shown. In the images, north is up and east is to the left.

Figure 2. The plot of log($`L_{\mathrm{CO}}^{\prime }`$) versus redshift ($`z`$) for a flux-limited sample ($`f_{60\mu \mathrm{m}}>5.24`$ Jy) of infrared luminous galaxies and a sample of ultraluminous infrared galaxies. The data have been obtained from Sanders, Scoville, & Soifer (1991) and Solomon et al. (1997). The data point representing IRAS 14348-1447 is encircled.
## 1 Introduction

The explicit computation of loop amplitudes in string theory is notoriously difficult. Even for the technically most simple theory – the bosonic string – the level of mathematical complexity is impressive if one tries to go beyond one loop. Adding fermions and supersymmetry on the world-sheet does not improve the situation. On the contrary, the calculations become still more intricate and only a few explicit results exist. It seems that even the general formalism has not yet been fully worked out .

Fortunately, explicit computations can sometimes be replaced by more indirect methods, often related to symmetry arguments. It is thus not surprising that for the $`N=2`$ string (i.e. the theory based on extended supersymmetry on the world-sheet; see for a general review and for a discussion of loop amplitudes) Berkovits and Vafa succeeded in avoiding the evaluation of the path integral and obtained powerful results for loop amplitudes by embedding the theory into an $`N=4`$ topological theory . In fact, they found that all amplitudes with more than three external legs vanish to all orders in the loop expansion.

The purpose of this letter is to give an alternative derivation of this result. Our approach has the advantage that conceptually it is very clear what is going on, since the equations used to derive the vanishing of the amplitudes can nicely be interpreted as Ward identities of an infinite set of unbroken symmetries in target space. Another interesting point is that, from a technical point of view, our analysis rests on the picture dependence of the BRST cohomology of the $`N=2`$ string at zero momentum and demonstrates what kind of information may be stored in the still somewhat obscure picture phenomenon . Maybe this lesson can also be useful in some way for the $`N=1`$ string.

The letter is organised as follows: In the next section we recall some facts about the BRST cohomology of the $`N=2`$ string. These results will be used in section three to derive an infinite set of target space Ward identities, which will then be explicitly evaluated so that the vanishing of the loop amplitudes directly follows. We conclude with some further remarks and a brief discussion of the reliability of our arguments.

## 2 Symmetries and ground ring of the $`N=2`$ string

One of the attractive features of the BRST approach to closed string theory is that it provides an efficient means to analyse symmetries in target space. More precisely, unbroken target space symmetries generally<sup>1</sup><sup>1</sup>1There are exceptions, see section five of . lead to the existence of ghost number one cohomology classes (in conventions where physical states have ghost number two). A detailed explanation of this fact is given in (see also ) where in addition an elegant method to derive the corresponding Ward identities – briefly reviewed below – is described. Due to the fact that the closed string Fock space factorises into right- and left-moving parts, ghost number one cohomology classes can be further characterised: they are most conveniently constructed as a product of a holomorphic piece of ghost number zero and an antiholomorphic piece of ghost number one. The latter is usually the right-moving part of a physical vertex operator, taken at some discrete value of the momentum, whereas the former very often is just the unit operator.
If, however, the chiral (= left-moving) cohomology at ghost number zero contains further elements besides the unit operator, more closed string operators of ghost number one can be constructed, resulting in a much richer symmetry structure. An example is the bosonic string in two dimensions . Moreover, interesting algebraic structures emerge. The BRST cohomology possesses a natural multiplication rule, additive in ghost number. The ghost number zero cohomology therefore forms a ring under this multiplication (the so-called ground ring). As has been emphasised in , the structure constants of this ground ring encode much information about the symmetry of the theory<sup>2</sup><sup>2</sup>2There exist two further operations – the Gerstenhaber bracket and the $`\mathrm{\Delta }`$ operation – which, together with the ring multiplication, give the BRST cohomology the structure of a BV-algebra ..

The $`N=2`$ string has been studied along these lines in . Based on the fact that so many of its scattering amplitudes are known or conjectured to vanish, and on comparison with the field theory that reproduces tree-level scattering, it seemed very plausible that in this theory a large symmetry group is realized. In fact, a ground ring of the $`N=2`$ string has recently been found in and will now briefly be reviewed. The construction looks somewhat unconventional because it does not restrict to operators of a single picture only, but takes into account the full picture degeneracy of the Fock space<sup>3</sup><sup>3</sup>3This construction is non-trivial due to the picture dependence of the BRST cohomology of the $`N=2`$ string at zero momentum .. However, starting from this ground ring one may derive powerful Ward identities, as has been shown for tree amplitudes in and will be demonstrated in this letter for loop amplitudes.

At zero ghost number chiral cohomology classes occur only for vanishing momentum. For low-lying picture numbers and ghost number zero the cohomology problem is rather straightforward to solve<sup>4</sup><sup>4</sup>4Poincaré duality provides an isomorphism between the cohomologies for pictures $`(\pi ^+,\pi ^{-})`$ and $`(-\pi ^{-}-2,-\pi ^+-2)`$ . Moreover, the cohomologies for pictures $`(\pi ^++\rho ,\pi ^{-}-\rho )`$ with $`\rho \in \frac{1}{2}\mathbb{Z}`$ coincide due to spectral flow. It is therefore sufficient to consider the case $`\pi ^\pm \ge -1`$ only.:

* The cohomology is empty for pictures $`(\pi ^+,\pi ^{-})\in \{(-1,-1),(-1,0),(0,-1)\}`$ and consists of the unit operator in the $`(0,0)`$ picture.

* In the pictures $`(1,-1)`$ and $`(-1,1)`$ the cohomology consists of the spectral flow operators

$$A(z)=(1-cb^{\prime })J^{--}e^{\phi ^+}e^{-\phi ^{-}}(z)$$ (1)

and

$$A^{-1}(z)=(1+cb^{\prime })J^{++}e^{-\phi ^+}e^{\phi ^{-}}(z)$$

with $`J^{++}=\frac{1}{4}ϵ_{ab}\psi ^{+a}\psi ^{+b}`$ and $`J^{--}=\frac{1}{4}ϵ_{\overline{a}\overline{b}}\psi ^{-\overline{a}}\psi ^{-\overline{b}}`$ (see for conventions and a description of the $`N=2`$ string ghost system). One may check that $`A`$ and $`A^{-1}`$ are inverse to each other with respect to ring multiplication<sup>5</sup><sup>5</sup>5Multiplication of two operators, denoted by a dot in the following, means to take the regular term in their operator product expansion ..

* In the $`(1,0)`$ picture the cohomology consists of the picture changing operator

$$X^+(z)=\{Q,\xi ^+(z)\}$$

and the operator

$$A\cdot X^{-}\quad \mathrm{with}\quad X^{-}(z)=\{Q,\xi ^{-}(z)\}.$$

It should be emphasised that $`A\cdot X^{-}`$ is BRST inequivalent to $`X^+`$. Analogously, the $`(0,1)`$ cohomology consists of the operators $`X^{-}`$ and $`A^{-1}\cdot X^+`$.
We see that the size of the cohomology grows as the picture increases. To obtain cohomology classes with higher integral picture numbers one may simply consider polynomials of the operators $`A`$, $`A^{-1}`$ and $`X^\pm `$,

$$\left(X^+\right)^k\left(X^{-}\right)^{\ell }A^n,\qquad k,\ell ,n\in \mathbb{Z}.$$

Note that $`k`$ and $`\ell `$ must not be negative since, contrary to $`N=1`$ strings, there do not exist local inverse picture changing operators for the $`N=2`$ string (the cohomology at vanishing momentum and ghost number is empty for picture numbers $`(-1,0)`$ and $`(0,-1)`$). It has been shown in that all these operators are BRST inequivalent! For a given picture $`(\pi ^+,\pi ^{-})`$ we thus have constructed $`\pi ^++\pi ^{-}+1`$ operators,

$$𝒪_{\pi ^+,\pi ^{-},n}=(X^+)^{\pi ^++n}(X^{-})^{\pi ^{-}-n}A^n,\qquad n=-\pi ^+,\dots ,\pi ^{-}.$$ (2)

To obtain ghost number one cohomology classes of the closed string connected to the symmetries of the theory, the operators in (2) have to be combined with right-moving cohomology classes of zero momentum and ghost number one. These operators can be found in a similar way: In it has been shown that the relevant cohomology in the $`(0,0)`$ picture is spanned by the four elements

$$i𝒫^a=c\partial Z^a-2\gamma ^{-}\psi ^{+a},\qquad i\overline{𝒫}^{\overline{a}}=c\partial \overline{Z}^{\overline{a}}-2\gamma ^+\psi ^{-\overline{a}}.$$ (3)

Here the target space Lorentz indices $`a`$ and $`\overline{a}`$ range from $`0`$ to $`1`$. Multiplication with $`𝒪_{\pi ^+,\pi ^{-},n}`$ gives similar operators in higher pictures:

$$𝒫_{\pi ^+,\pi ^{-},n}^a=𝒪_{\pi ^+,\pi ^{-},n}\cdot 𝒫^a,\qquad \overline{𝒫}_{\pi ^+,\pi ^{-},n}^{\overline{a}}=𝒪_{\pi ^+,\pi ^{-},n}\cdot \overline{𝒫}^{\overline{a}}.$$ (4)

We are now ready to write down the sought-for closed string cohomology classes of ghost number one:

$$\Sigma _{\pi ^+,\pi ^{-},m,n}^a=𝒪_{\pi ^+,\pi ^{-},m}(z)\tilde{𝒫}_{\pi ^+,\pi ^{-},n}^a(\overline{z}),\qquad m,n=-\pi ^+,\dots ,\pi ^{-}.$$ (5)

To save space, the analogous operators $`\Sigma ^{\overline{a}}`$ will not be explicitly mentioned in the following. Using the descent equations one may now construct an infinite set of symmetry charges and work out the transformation laws of the physical state. This has been done in .

We conclude this section with one further remark. So far, we have only considered the relative cohomology of states that are annihilated by the zero modes of all fermionic antighosts. It would, however, be more appropriate also to take into account states that are not annihilated by $`b_0+\tilde{b}_0`$, which defines the so-called semi-relative cohomology (one way to see that this is the right space to consider is to write down a kinetic term in a string field formalism). Allowing for more states generally changes the cohomology. But fortunately, one can show that the operators (5) are still non-trivial in the semi-relative cohomology. One may also wonder whether new cohomology classes turn up, as happens for the bosonic string in two dimensions . We do not know the general answer to this question, but explicit calculations for low-lying pictures indicate that this is not the case.
This will be justified in section four. For reasons of space the general formalism will not be reviewed in detail here. Instead, we refer to for more extensive explanations. The basic object involved in the computation of scattering amplitudes is the vertex operator of the single degree of freedom in the theory. As usual, it splits into holomorphic and antiholomorphic parts:

$$V(z,\overline{z},k)=V^{left}(z,k)\tilde{V}^{right}(\overline{z},k)$$

The left-moving operator is

$$V_{(-1,-1)}^{left}(z,k)=cc^{\prime }e^{-\phi ^+}e^{-\phi ^{-}}e^{ikZ^{left}}$$

in the $`(-1,-1)`$ picture and

$$V_{(\pi ^+,\pi ^{-})}(z,k)=(X^+)^{\pi ^++1}(X^{-})^{\pi ^{-}+1}V_{(-1,-1)}(z,k)$$

in higher pictures (the right-moving piece $`\tilde{V}^{right}`$ looks similar)<sup>6</sup><sup>6</sup>6Application of spectral flow only leads to vertex operators proportional to those above.. Counting both metric and $`U(1)`$ but not supersymmetry ghost number, vertex operators in closed $`N=2`$ string theory therefore have ghost number four (in our conventions picture changing operators have ghost number zero, see ). Moreover, they are not annihilated by the zero modes $`b_0^{\prime }`$ and $`\tilde{b}_0^{\prime }`$ of the $`U(1)`$ antighosts. On the other hand, the ghost number one operators constructed in the previous section are all elements of the relative cohomology, i.e. they are all killed by the zero modes of all fermionic antighosts. It is, however, not too difficult to relate relative cohomology classes to operators of higher ghost number, essentially by multiplying with the relevant ghosts. In this way we can construct from the ghost number one operators in equation (5) new cohomology classes of ghost number three:

$$\Sigma _{\pi ^+,\pi ^{-},m,n}^a\longrightarrow \Omega _{\pi ^+,\pi ^{-},m,n}^a=c^{\prime }\tilde{c}^{\prime }\Sigma _{\pi ^+,\pi ^{-},m,n}^a+\dots $$

Here the dots refer to further terms that might be necessary to achieve BRST invariance but are unimportant otherwise. We are now ready to derive a Ward identity involving a genus $`g`$ scattering amplitude of $`N`$ external states with momenta $`k_1,\dots ,k_N`$ (denoted $`A_N^g(k_1,\dots ,k_N)`$ in the following). One starts with the correlator<sup>7</sup><sup>7</sup>7For simplicity we only consider closed string operators whose left- and right-moving picture numbers coincide.

$$\left\langle \Omega _{\pi ^+,\pi ^{-},m,n}^a(z,\overline{z})\prod _{i=1}^NV_{\pi _i^+,\pi _i^{-}}^{cl}(z_i,\overline{z_i},k_i)\prod _l(\mu _l,B)(\tilde{\mu }_l,\tilde{B})\right\rangle _g$$ (6)

where $`(\mu _l,B)`$ and $`(\tilde{\mu }_l,\tilde{B})`$ are the appropriate Beltrami differentials integrated with the corresponding antighosts, and the index $`g`$ indicates that the correlator is meant to be evaluated with respect to the conformal field theory living on a Riemann surface of genus $`g`$. The antighosts can be applied to the vertex operators and the integrations can be pulled out of the brackets. Let us denote the remaining integrand by $`\Theta `$. If the operator $`\Omega `$ in (6) were replaced by an ordinary physical vertex operator $`V`$, one could integrate $`\Theta `$ over the moduli space of a genus $`g`$ surface with $`N+1`$ punctures. From counting dimensions and ghost numbers it follows, however, that $`\Theta `$ as defined by (6) can be integrated only over the boundary of moduli space. In fact, it can be considered as a differential form on moduli space of codimension one.
Since $`\Theta `$ can also be shown to be a closed form, Stokes' theorem leads to the desired Ward identity

$$\int _{\partial \mathcal{M}^{g,N+1}}\Theta =\int _{\mathcal{M}^{g,N+1}}d\Theta =0.$$ (7)

The next step is to have a closer look at the $`N=2`$ string moduli space $`\mathcal{M}^{g,N+1}`$, i.e. the moduli space of $`N=2`$ super Riemann surfaces with genus $`g`$ and $`N+1`$ punctures (and vanishing Chern number in our case). In addition to the usual metric and super moduli, we also have to consider the so-called $`U(1)`$ moduli describing a continuum of possible monodromy phases for the world-sheet fermions arising from their transport along non-trivial homology cycles. However, the $`U(1)`$ moduli space is compact (it always has the topology of a torus) and therefore does not contribute to the boundary of moduli space. As a result, in our Ward identity (7) only the familiar boundary components of the metric moduli space appear.

The metric moduli are of two different types. One corresponds to the shape of the underlying Riemann surface whereas the other describes punctures, i.e. the locations of the vertex operators. If we move to the boundary of moduli space the Riemann surface degenerates in some way. In the following it is convenient to distinguish four different cases: First of all, the underlying surface may pinch either along a trivial or a non-trivial homology cycle. If a genus $`g`$ surface pinches along a non-trivial cycle it becomes a surface of genus $`g-1`$ with two points coinciding. If it pinches along a trivial cycle the result is a connected pair of Riemann surfaces with genera $`g_1`$ and $`g_2`$ such that $`g_1+g_2=g`$. For a $`g=2`$ surface with four punctures these two cases are illustrated in the top row of the figure below. It may also happen that a number of punctures approach each other. This is conformally equivalent to a situation where a sphere containing the relevant punctures splits off of the rest of the surface. This is illustrated in the bottom line of the figure, where we also distinguished whether two vertex operators $`V`$ approach each other or one $`V`$ approaches the ghost number three operator $`\Omega `$.

To see how a pinch (denoted by $`P`$ in the figure) can properly be included in the computation, let us recall that it can equivalently be described by an infinitely long cylinder. This cylinder can be taken into account by inserting a complete set of physical states. In this formulation the twist angle of the cylinder is one of the moduli, leading to an insertion of the metric antighost combination $`b(z)\tilde{b}(\overline{z})`$. So the pinch can be represented by the sum

$$\sum _i|\widehat{O}^i\rangle \langle O_i|$$ (8)

where $`i`$ labels a basis of the absolute BRST cohomology and

$$\langle O_j|O^i\rangle =\delta _j^i,\qquad |\widehat{O}^i\rangle =(b_0\tilde{b}_0)|O^i\rangle .$$ (9)

What about the fermionic and $`U(1)`$ moduli? The former are correctly taken into account by obeying the right selection rules for picture numbers . Moreover, a pinch contributes one complex $`U(1)`$ modulus. This corresponds to the fact that the complete set of states (8) carries two units of $`U(1)`$ ghost number – just enough to compensate the antighost insertion due to the $`U(1)`$ modulus of the pinch.

Let us now become more explicit: We assume $`N\ge 3`$, i.e. the presence of at least three vertex operators, and genus $`g>0`$, since tree-level amplitudes have been discussed in .
It will also be sufficient and technically simpler to consider only operators $`\Sigma `$ (and the corresponding $`\Omega `$) of the form

$$\Sigma _n^a(z,\overline{z}):=\Sigma _{-n,n,n,n}^a(z,\overline{z})=A^n(z)\tilde{A}^n\tilde{𝒫}^a(\overline{z}),$$

which have picture numbers $`(-n,n)`$. The four cases mentioned above will now be discussed in turn.

### 3.1 Case 1: A non-trivial homology cycle pinches

Besides the $`N`$ physical vertex operators already present, the pinching leads to an insertion of two further vertex operators $`O_i`$ and $`\widehat{O}^i`$, as explained above. So we have to evaluate the expression

$$\sum _i\langle \langle \Omega _n^aV_1\dots V_N\widehat{O}^iO_i\rangle \rangle _{g-1}.$$ (10)

Here, the notation for the vertex operators has been simplified in a hopefully obvious way. The double bracket as usual denotes evaluation of the full amplitude including integration over moduli space. To further evaluate the expression (10) let us note that it contains at least six operators (since we assumed $`N\ge 3`$ in the beginning). Regardless of the value of $`g`$, integration over moduli space leads for this number of operators to insertions of metric antighosts that transform cohomology classes into integrated vertex operators. Since this effect will be crucial in the following, we briefly review some details: Assume that an operator $`𝒲(z)\tilde{𝒲}(\overline{z})`$ represents a closed string cohomology class. From the explicit form of the BRST operator it follows that

$$𝒲^{(-1)}(z)=\oint _z\frac{dw}{2\pi i}\,b(w)\,𝒲(z)$$

satisfies the relation

$$[Q,𝒲^{(-1)}]=\partial 𝒲,$$

$`Q`$ being the left-moving part of the BRST operator. Since this argument goes through for the right-moving half as well, a $`b`$-ghost insertion leads to the integrated operator

$$\int d^2z\,𝒲^{(-1)}(z)\tilde{𝒲}^{(-1)}(\overline{z})$$

which is BRST invariant since the integrand transforms into a total derivative. In practice, going over from a cohomology class to an integrated vertex operator simply amounts to getting rid of the undifferentiated $`c`$- and $`\tilde{c}`$-ghosts. If some cohomology class does not contain both these ghost fields (as for example the unit operator), its integrated form is zero. We are always free to choose where to locate the $`b`$-ghost insertions<sup>8</sup><sup>8</sup>8Since we are dealing with vertex operators of non-standard ghost number, this is not completely obvious in the path integral formulation. In the operator formalism , however, one may explicitly check that the location of the $`b`$-ghost insertion is immaterial., i.e. which cohomology class to convert into an integrated operator. In the present case we can pick $`\Omega `$. From the explicit form of $`A`$ in equation (1) one sees that stripping off a $`c`$-ghost necessarily leads to the presence of a $`b^{\prime }`$-ghost, for example $`A^{(-1)}=b^{\prime }J^{--}e^{\phi ^+}e^{-\phi ^{-}}`$. However, there is no corresponding $`c^{\prime }`$-ghost in sight to compensate $`b^{\prime }`$ in a correlation function. So we learn from simple $`U(1)`$ ghost number counting that the amplitude (10) vanishes! In other words, the kind of degeneration considered in this subsection does not contribute to the Ward identity.

### 3.2 Case 2: A trivial homology cycle pinches

The contribution to the Ward identity of this component of the boundary is

$$\sum _{i,\alpha }\langle \langle V_{u_1}\dots V_{u_p}\Omega _n^a\widehat{O}^i\rangle \rangle _{g_1}\langle \langle O_iV_{u_{p+1}}\dots V_{u_N}\rangle \rangle _{g_2}$$ (11)

with $`g_1+g_2=g`$ and $`g_1,g_2>0`$.
The sum over $`\alpha `$ runs over all possible ways to divide the set of $`N`$ physical vertex operators into a subset $`\{V_{u_1}\dots V_{u_p}\}`$ on the genus $`g_1`$ surface and the remainder $`\{V_{u_{p+1}}\dots V_{u_N}\}`$ located on the other surface. Since $`g_1`$ is strictly positive and the correlation function involving $`\Omega `$ contains at least one further operator, the expression (11) can again be evaluated by transforming $`\Omega `$ to its integrated form. As in the previous subsection, the vanishing of (11) then follows from $`U(1)`$ ghost number counting.

### 3.3 Case 3: A sphere not including $`\Omega `$ splits off

In this case we have to evaluate the expression

$$\sum _{i,\alpha }\langle \langle V_{u_1}\dots V_{u_p}\Omega _n^a\widehat{O}^i\rangle \rangle _g\langle \langle O_iV_{u_{p+1}}\dots V_{u_N}\rangle \rangle _{g=0}.$$ (12)

Since $`g>0`$ by assumption, the correlator involving $`\Omega `$ vanishes by the same argument as above.

### 3.4 Case 4: A sphere including $`\Omega `$ splits off

In this final case the contribution to the Ward identity reads

$$\sum _{i,\alpha }\langle \langle V_{u_1}\dots V_{u_p}\Omega _n^a\widehat{O}^i\rangle \rangle _{g=0}\langle \langle O_iV_{u_{p+1}}\dots V_{u_N}\rangle \rangle _g.$$ (13)

The ghost number three operator $`\Omega `$ now appears in a tree-level amplitude whose evaluation involves metric antighost insertions as soon as more than three operators are present. Correspondingly, terms in the $`\alpha `$-sum vanish by the standard argument whenever the $`g=0`$ correlator involves more than one operator $`V`$ besides $`\Omega `$ and $`\widehat{O}^i`$. What remains are those degenerations where $`\Omega `$ splits off with precisely one vertex operator $`V`$. These are the only contributions to the Ward identity:

$$\sum _{u=1}^N\sum _i\langle \langle V_u\Omega _n^a\widehat{O}^i\rangle \rangle _{g=0}\langle \langle O_iV_1\dots V_{u-1}V_{u+1}\dots V_N\rangle \rangle _g=0.$$ (14)

Obviously, the only non-vanishing term in the above sum over $`i`$ occurs when $`O_i`$ coincides with the vertex operator $`V_u`$. In each term of the $`u`$-sum the second correlator therefore is just the genus $`g`$ amplitude of $`N`$ physical states $`A_N^g`$. Reinserting the momenta $`k_u`$ allows us to rewrite the Ward identity as

$$A_N^g(k_1,\dots ,k_N)\sum _{u=1}^N\langle \langle V(k_u)\Omega _n^a\widehat{V}(-k_u)\rangle \rangle _{g=0}=0.$$ (15)

These identities have already been derived in for tree amplitudes. Equations (15) tell us that they do not get modified for higher genera. The remaining correlator can be evaluated as

$$\langle \langle V(k)\Omega _n^a\widehat{V}(-k)\rangle \rangle _{g=0}=\left(\frac{\overline{k}^0}{k^1}\right)^nk^a\equiv h(k)^nk^a.$$ (16)

The final identities for the genus $`g`$ amplitude thus read

$$A_N^g(k_1,\dots ,k_N)\sum _{i=1}^Nh(k_i)^nk_i^a=0\qquad \text{for any }n$$ (17)

and imply the vanishing of all amplitudes with $`N\ge 4`$ . The three point function, however, is generally non-zero. One may for example check that the tree-level amplitude

$$A_{N=3}^{g=0}(k_1,k_2,k_3)=\left(\overline{k}_1k_2-\overline{k}_2k_1\right)^2$$

satisfies all identities without being zero. On dimensional grounds it seems very plausible that for higher genus the three point function is just a power of the tree-level result:

$$A_{N=3}^g(k_1,k_2,k_3)=\alpha _g\left(\overline{k}_1k_2-\overline{k}_2k_1\right)^{4g+2}$$

Here the pre-factor $`\alpha _g`$ depends on the genus but not on the momenta. Explicit computations at one loop show that $`\alpha _{g=1}`$ is divergent .
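As a concrete sanity check of (17) at tree level, the sketch below evaluates the three-point amplitude together with the Ward sums for two hand-picked null momentum configurations. The kinematic conventions used here – complex two-component momenta with $`|k^0|=|k^1|`$ on shell, the particular contraction entering $`A_3`$, and $`h(k)=\overline{k}^0/k^1`$ – are our own assumptions for illustration; what matters is the qualitative pattern that whenever the Ward sum fails to vanish for some $`n`$, the amplitude factor does.

```python
import numpy as np

# Hand-picked null momentum triples in complexified (2,2) kinematics.
# Conventions assumed for illustration: k = (k^0, k^1) with complex entries,
# |k^0| = |k^1| on shell, h(k) = conj(k^0)/k^1, and sum_i k_i = 0.
w = np.exp(2j * np.pi / 3)          # primitive cube root of unity

configs = {
    "A3 nonzero": [np.array([1, 1]), np.array([w, w**2]), np.array([w**2, w])],
    "A3 zero":    [np.array([1, w]), np.array([w, w**2]), np.array([w**2, 1])],
}

def h(k):
    return np.conj(k[0]) / k[1]

def A3(k1, k2, k3):
    # tree-level three-point function (kbar_1.k_2 - kbar_2.k_1)^2, with the
    # (assumed) contraction kbar.q = conj(k^0) q^0 - conj(k^1) q^1
    dot = lambda p, q: np.conj(p[0]) * q[0] - np.conj(p[1]) * q[1]
    return (dot(k1, k2) - dot(k2, k1)) ** 2

for name, ks in configs.items():
    assert np.allclose(sum(ks), 0)                  # momentum conservation
    amp = A3(*ks)
    for n in range(-3, 4):                          # a sample of integers n
        ward = sum(h(k) ** n * k for k in ks)       # sum_i h(k_i)^n k_i^a
        assert np.allclose(amp * ward, 0), (name, n)
    print(f"{name}: A3 = {complex(amp):.2f}, identity (17) holds for all sampled n")
```

In the first configuration all three momenta share the same value of $`h`$, so every Ward sum reduces to overall momentum conservation; in the second the $`h`$-values differ, but there the amplitude itself vanishes.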
This concludes our discussion of the scattering amplitudes of the $`N=2`$ string.

## 4 Some remarks

So far we have ignored the possibility of non-vanishing Chern number $`c`$, corresponding to topologically non-trivial configurations of the $`U(1)`$ gauge field on the world-sheet. A careful evaluation of the path integral shows that a non-zero Chern number can be simulated by inserting (a power of) the spectral flow operator $`A`$ into the $`c=0`$ correlation function and simultaneously adjusting the picture numbers of the vertex operators . Since the derivative of the spectral flow operator is BRST trivial, each $`A`$ (or $`A^{-1}`$) can be moved towards one of the vertex operators and simply pulls out a momentum factor $`h(k)`$ (or its inverse; see eq. (16) for a definition of $`h`$). Therefore, amplitudes with different Chern number are proportional to one another. Hence, it is sufficient to prove the vanishing of a scattering amplitude for one fixed value of $`c`$.

Secondly, we have ignored that, as a Riemann surface with $`c=0`$ degenerates and splits into two, the resulting surfaces may have non-vanishing Chern numbers $`c`$ and $`-c`$. So we actually should include in our Ward identity a summation over all such splittings<sup>9</sup><sup>9</sup>9This sum is finite since supersymmetry ghost zero modes kill correlators when $`|c|`$ exceeds a certain value.. However, we have just explained that this only leads to additional factors $`h(k)^c`$ and $`h(k)^{-c}`$ which cancel each other ($`k`$ is the momentum flowing through the pinch). This justifies our treatment where we completely neglected sectors with non-zero Chern number.

A further point that deserves to be mentioned is the question of non-linear contributions to the symmetries. One of the remarkable features of the $`N=2`$ string that make it such an interesting toy model is the fact that we know a simple field theory that reproduces the tree-level amplitudes to all orders in $`\alpha ^{\prime }`$. This field theory is well known to possess a highly non-linear symmetry structure. In , the linearised version of the unbroken symmetries on the field theory side was compared to the transformation rules of the $`N=2`$ string vertex operators under the symmetries that lead to the above Ward identities. They were found to coincide. In fact, the Hilbert space in our formulation of the theory consists only of single string states. So it seems at first sight correct to restrict a comparison between symmetries in field theory and string theory to the linear level. However, it has been explained in (section 6) that non-linear symmetry structures can make their appearance in a first quantised string theory at the level of Ward identities. More precisely, a non-linear contribution to a Ward identity corresponds to a situation where the ghost number one (three for $`N=2`$ strings) operator $`\Omega `$ splits off with more than one further vertex operator. In this case only the overlap between the charge acting on a single vertex operator with a multi-string state is sent through the pinch. In other words, a symmetry is realized non-linearly precisely when the tree-level amplitude

$$\langle \langle \Omega V(k_1)\dots V(k_{n-1})\widehat{V}(k_n)\rangle \rangle _{g=0}$$

is non-vanishing for $`n\ge 3`$. A model where this indeed happens is the bosonic string in two dimensions . Yet it has been argued in section three that in our case of the $`N=2`$ string the relevant correlation functions vanish. As a consequence, the Ward identity (17) is linear.
This indicates a clear discrepancy with the field theory and suggests that the behaviour of the $`N=2`$ string is not fully captured by its tree-level effective field theory.

Last but not least, we should give our opinion on the reliability of our arguments. In fact, we must admit that the analysis of the boundary of the $`N=2`$ string moduli space has been somewhat heuristic. It is mainly based on counting of dimensions and ghost numbers. Hidden subtleties might be detected by a more careful investigation. For example, it is conceivable that the $`U(1)`$ moduli space behaves in some discontinuous way as the Riemann surface degenerates. Whether or not this is the case can only be answered by studying the relevant index theorem. Other potential difficulties are related to the fermionic moduli, which we have treated in a rather straightforward way, ignoring possible ambiguities due to the location of picture changing operators. In any case it would be helpful to have an explicit computation of the one-loop four point function. If that turns out to be non-vanishing, it will be extremely interesting to see by which mechanism the derivation of the Ward identities must be modified.
## 1 Introduction

Boson expansion theory (BET) has played a significant role over the past decades in our understanding of the nuclear many-body problem. Starting with the pioneering work of Marumori and co-workers , and of Belyaev and Zelevinsky , the interest in this subject culminated in the eighties with the formulation of the interacting boson model . Of particular interest, not only for the many-body problem but also, as it has become apparent, for quantum-field theory (QFT), is the perturbative boson expansion (PBE) approach. Extensive use of it has been made in nuclear physics in order to extract anharmonicities beyond the Random-Phase Approximation (RPA) (see ref. for reviews). Up until very recently its application to QFT has not attracted much attention and, therefore, has not been fully developed so far. The Holstein-Primakoff mapping for boson pairs, first introduced in , was recently applied, however, to the $`O(N)`$ vector model . It was demonstrated that the mapping is able to systematically classify the dynamics according to the $`1/N`$-expansion, rendering a promising and efficient alternative to the well-known functional methods. Furthermore, considering the model in the phase of spontaneously broken symmetry, the powerful machinery of the PBE approach as developed for deformed nuclear systems could be transcribed to QFT. As a consequence, the Goldstone theorem as well as the whole hierarchy of Ward identities were exactly satisfied .

However, the PBE in general, and the Holstein-Primakoff mapping (HPM) in particular, rely on the bosonisation of pairs of particles. Thereby, images for particle pairs are generated in an ideal Fock space, while single-particle images are absent after the mapping. This problem has been appreciated for the fermionic case since the early days of boson expansion theory. Marshalek has proposed an extension of the HPM for fermions in order to allow for a perturbative boson expansion for both even and odd nuclei . In the present letter, we point out the occurrence of the same problem in the case of the PBE for purely bosonic models. The need for an extended bosonic HPM that includes single bosons clearly revealed itself in , where the lack of ideal single-boson states was an obstacle to defining unambiguously the two-point function for the Goldstone mode. While to leading order in the $`1/N`$-expansion this problem was circumvented in , a next-to-leading order calculation makes an extended version of the HPM mandatory. Finite-temperature applications of the PBE approach are another issue where an extended HPM including single bosons is definitely called for.

In the following, we sketch a derivation of an extended version of the HPM for bosons. We will also discuss an application to the $`O(N+1)`$ anharmonic oscillator, where it will be demonstrated explicitly that this new method is capable of including single-boson images with the correct asymptotic energy.

## 2 Extended Holstein-Primakoff Mapping

As a starting point let us consider a system with two types of bosonic creation and annihilation operators: $`a^+`$, $`a`$, and $`b^+`$, $`b`$. Pairing these in all possible ways leads to the ten group generators of the non-compact $`Sp(4)`$ group. The pairs $`a^+a`$, $`aa`$, $`a^+a^+`$ and analogously the pairs of $`b`$-operators form two commuting $`Sp(2)`$ subgroups. The number-conserving bilinears $`a^+a`$, $`b^+b`$, $`a^+b`$ and $`ab^+`$ span a closed $`U(2)`$ algebra.
There remain the bilinears $`a^+b^+`$ and $`ab`$, which do not belong to any non-trivial subgroup of $`Sp(4)`$. Our goal will be to first set up the boson images of the ten group generators, replacing in the end the $`b`$-operators by c-numbers (the condensate). This will lead us to the boson image of the semidirect product group $`Sp(2)\ltimes N(1)`$ made up of the elements $`a^+a`$, $`aa`$, $`a^+a^+`$, $`a`$, $`a^+`$, and $`1_d`$, respectively. The latter is the desired system because it involves even and odd numbers of boson operators.

We will follow earlier work on interacting fermions by Evans and Kraus , and by Klein, Cohen, Li, Rafelski, and Rafelski , in which a mapping for the ten generators of the $`SO(5)`$ group was derived. In particular, use is made of the work of the latter group of authors to derive this time the mapping of the ten generators of the non-compact $`Sp(4)`$ group mentioned above. Since there is no room to go into details (which will be presented elsewhere), we will essentially only give the result here. One first realizes that the six generators of the two commuting $`Sp(2)`$ algebras can be mapped via the usual HPM. The difficult task lies in finding an adequate mapping for the generators $`a^+b^+`$ and $`ab`$ which allows one to close the full $`Sp(4)`$ algebra. The reader is invited to consult reference for a similar derivation. Introducing a set of three new bosonic operators $`\alpha `$, $`A_1`$, and $`A_2`$, one can show that the net result for the complete mapping reads

$`(a^+a^+)_I`$ $`=`$ $`A_1^+\sqrt{2+4(n_1+m)},\qquad (aa)_I=\left((a^+a^+)_I\right)^+,\qquad (a^+a)_I=2n_1+m,`$

$`(b^+b^+)_I`$ $`=`$ $`A_2^+\sqrt{2+4(n_2+m)},\qquad (bb)_I=\left((b^+b^+)_I\right)^+,\qquad (b^+b)_I=2n_2+m,`$

$`(a^+b^+)_I`$ $`=`$ $`\alpha ^+\sqrt{2+4(n_1+m)}\sqrt{2+4(n_2+m)}\mathrm{\Phi }(m)+4\mathrm{\Phi }(m)A_2^+A_1^+\alpha ,`$

$`(ab)_I`$ $`=`$ $`\left((a^+b^+)_I\right)^+,`$

$`(a^+b)_I`$ $`=`$ $`\frac{1}{2}[(bb)_I,(a^+b^+)_I],\qquad (b^+a)_I=\left((a^+b)_I\right)^+,`$ (1)

where $`n_1`$, $`n_2`$, and $`m`$ are occupation number operators defined by

$$m=\alpha ^+\alpha ,\qquad n_i=A_i^+A_i\quad (i=1,2).$$ (2)

The $`+`$-sign in the Holstein-Primakoff square root indicates the non-compact character of the group at hand. Finally, the function $`\mathrm{\Phi }`$ is given by

$$\mathrm{\Phi }(m)=\left[\frac{r+m^2}{4(m+1)(2m+1)(2m-1)}\right]^{\frac{1}{2}},$$ (3)

where $`r`$ is a constant which is fixed using physical conditions, as will be discussed in the next section.

These results constitute only an intermediate step towards our final goal. As stated in the introduction, one wishes to extend the usual HPM for boson pairs in such a way as to allow the mapping of single bosons as well. In other words, and following the original Belyaev-Zelevinsky approach, one needs to achieve a realization of the following algebra:

$`[aa,a^+a^+]`$ $`=`$ $`2+4a^+a,`$

$`[aa,a^+a]`$ $`=`$ $`2aa,`$

$`[a,a^+a^+]`$ $`=`$ $`2a^+,`$

$`[a,a^+a]`$ $`=`$ $`a,`$ (4)

where all other possible commutators are assumed but not explicitly shown here. This is nothing but the algebra of the semidirect product group $`Sp(2)\ltimes N(1)`$. The first two commutation relations in Eq.
(4) remind us of the $`Sp(2)`$ algebra, and as such, one can propose the bosonic HPM as a realization for them. Here again, the difficulty lies in finding an adequate mapping for the single bosons so as to close the algebra above. A way out is to notice that, by considering the limit in which the operators $`b`$ and $`b^+`$ are transformed into the identity operator, one can ultimately contract the whole $`Sp(4)`$ group to the non-isomorphic semidirect product group $`Sp(2)\ltimes N(1)`$. This singular transformation, which can be thought of as a contraction à la Inönü-Wigner or Saletan , gives a clear hint on how to proceed with the desired extension. Indeed, the single bosons can be deduced from the contraction of the generators $`a^+b^+`$, $`ab`$, $`a^+b`$, and $`ab^+`$ down to the generators $`a`$ and $`a^+`$. With this intuitive picture in mind, one can show that the following mapping for the five relevant generators constitutes a realization of the algebra in Eq. (4):

$`(aa)_I`$ $`=`$ $`\sqrt{2+4(n_1+m)}A_1,\qquad (a^+a^+)_I=(aa)_I^+,\qquad (a^+a)_I=2n_1+m,`$

$`(a)_I`$ $`=`$ $`\sqrt{2+4(n_1+m)}\mathrm{\Gamma }_1(m)\alpha +2\alpha ^+A_1\mathrm{\Gamma }_1(m),\qquad (a^+)_I=(a)_I^+,`$ (5)

where the occupation number operators are, as before, given by $`n_1=A_1^+A_1`$, $`m=\alpha ^+\alpha `$, while the function $`\mathrm{\Gamma }_1`$ reads

$$\mathrm{\Gamma }_1(m)=\left[\frac{z_1+m^2}{2(m+1)(2m+1)(2m-1)}\right]^{\frac{1}{2}}.$$ (6)

Here too, $`z_1`$ is a constant which will be fixed by using physical conditions, as will be explained in the next section. It is straightforward to verify, through a direct evaluation of the commutators in Eq. (4), that this is indeed a proper realization. This completes our considerations concerning the mapping. In the next section, the formalism will be applied to the interesting case of $`N`$ oscillators and used to develop the $`1/N`$-expansion.

## 3 The $`O(N+1)`$ Anharmonic Oscillator

As an application, let us consider the anharmonic oscillator with an $`O(N+1)`$ symmetry broken down to $`O(N)`$. The properly scaled Hamiltonian of the system is given by

$$H=\frac{\vec{P}_\pi ^2}{2}+\frac{P_\sigma ^2}{2}+\frac{\omega ^2}{2}\left[\vec{X}_\pi ^2+X_\sigma ^2\right]+\frac{g}{N}\left[\vec{X}_\pi ^2+X_\sigma ^2\right]^2-\sqrt{N}\eta X_\sigma .$$ (7)

Here we have considered an explicit $`(\eta \ne 0)`$ and a spontaneous $`(\langle X_\sigma \rangle \ne 0)`$ symmetry breaking along the $`X_\sigma `$ mode. The variables $`\vec{X}_\pi `$, $`X_\sigma `$ and their conjugate momenta $`\vec{P}_\pi `$, $`P_\sigma `$ are expressed in second quantization as

$`\vec{X}_\pi `$ $`=`$ $`\frac{1}{\sqrt{2\omega }}(\vec{a}+\vec{a}^+),\qquad \vec{P}_\pi =i\sqrt{\frac{\omega }{2}}(\vec{a}^+-\vec{a}),`$

$`X_\sigma `$ $`=`$ $`\frac{1}{\sqrt{2\mathcal{E}_\sigma }}(b+b^+),\qquad P_\sigma =i\sqrt{\frac{\mathcal{E}_\sigma }{2}}(b^+-b).`$ (8)

The frequency $`\mathcal{E}_\sigma `$ of the mode $`X_\sigma `$ will be fixed later. The subscripts $`\pi `$ and $`\sigma `$ are used in analogy with the linear $`\sigma `$-model in QFT, where these modes represent the pion and sigma fields, respectively. To sort out the dynamics according to the $`1/N`$-expansion, one needs to adapt the mapping derived in the previous section to the situation of $`N`$ oscillators. This can be done in a straightforward way.
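Before carrying out that adaptation, it is instructive to verify the closure of the algebra (4) numerically. The sketch below is a minimal check, not part of the original derivation: it represents the images (5) on a truncated two-boson Fock space and evaluates the four commutators on states far from the truncation edge. The value $`z_1=-1`$ anticipates the condition $`z_N=N-2`$ derived below; the closure itself turns out not to depend on it.

```python
import numpy as np

D = 12                      # Fock truncation for each of the two ideal bosons
z1 = -1.0                   # anticipates z_N = N - 2 at N = 1 (fixed below)

ad = np.diag(np.sqrt(np.arange(1.0, D)), -1)     # truncated creation operator
a = ad.T
I = np.eye(D)

A1, A1d = np.kron(a, I), np.kron(ad, I)          # ideal boson A_1
al, ald = np.kron(I, a), np.kron(I, ad)          # ideal boson alpha

n1, m = A1d @ A1, ald @ al                       # occupation number operators

def diag_fn(num_op, f):
    """Apply a scalar function to a diagonal (number) operator."""
    return np.diag(f(np.diag(num_op)))

S = diag_fn(n1 + m, lambda x: np.sqrt(2.0 + 4.0 * x))
G = diag_fn(m, lambda x: np.sqrt(
    (z1 + x**2) / (2.0 * (x + 1.0) * (2.0 * x + 1.0) * (2.0 * x - 1.0))))

aa = S @ A1                                      # (aa)_I
ada = 2.0 * n1 + m                               # (a^+ a)_I
a1 = S @ G @ al + 2.0 * ald @ A1 @ G             # (a)_I
adad, ad1 = aa.T, a1.T                           # conjugates (real matrices)

comm = lambda X, Y: X @ Y - Y @ X
keep = (np.diag(n1) <= D - 3) & (np.diag(m) <= D - 3)  # avoid truncation edge
sub = lambda M: M[np.ix_(keep, keep)]

for lhs, rhs, label in [
    (comm(aa, adad), 2.0 * np.eye(D * D) + 4.0 * ada, "[aa,a+a+] = 2+4a+a"),
    (comm(aa, ada), 2.0 * aa, "[aa,a+a] = 2aa"),
    (comm(a1, adad), 2.0 * ad1, "[a,a+a+] = 2a+"),
    (comm(a1, ada), a1, "[a,a+a] = a"),
]:
    print(label, "-> max deviation:", np.abs(sub(lhs) - sub(rhs)).max())
```

All four deviations come out at machine precision. In fact, in this sketch the displayed commutators close for any regular choice of $`\mathrm{\Gamma }_1`$; it is the physical residue condition discussed below that singles out $`z_1`$. With this check in hand, we return to the $`N`$-oscillator adaptation.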
It can be shown that the mapping in this case takes the form

$`(\vec{a}\vec{a})_I`$ $`=`$ $`\sqrt{2N+4(n_1+m)}A_1,\qquad (\vec{a}^+\vec{a})_I=2n_1+m,\qquad (\vec{a}^+\vec{a}^+)_I=(\vec{a}\vec{a})_I^+,`$

$`(a_i)_I`$ $`=`$ $`\sqrt{2N+4(n_1+m)}\mathrm{\Gamma }_N(m)\alpha _i+2\alpha _i^+A_1\mathrm{\Gamma }_N(m),\qquad (a_i^+)_I=(a_i)_I^+,`$ (9)

where $`N`$ is an integer, $`n_1=A_1^+A_1`$, and $`m=\sum _i\alpha _i^+\alpha _i`$, while $`\mathrm{\Gamma }_N`$ is a generalization of the function $`\mathrm{\Gamma }_1`$ of the last section to the case of $`N`$ oscillators. It reads

$$\mathrm{\Gamma }_N(m)=\left[\frac{z_N+m^2+m(N-1)}{2(m+1)(2m+N)(2m+N-2)}\right]^{\frac{1}{2}}.$$ (10)

The constant $`z_N`$ will be fixed below. One can also easily verify that this mapping leads to a realization of the following algebra:

$`[(\vec{a}\vec{a}),(\vec{a}^+\vec{a}^+)]`$ $`=`$ $`2N+4(\vec{a}^+\vec{a}),`$

$`[(\vec{a}\vec{a}),(\vec{a}^+\vec{a})]`$ $`=`$ $`2(\vec{a}\vec{a}),`$

$`[a_i,(\vec{a}^+\vec{a}^+)]`$ $`=`$ $`2a_i^+,`$

$`[a_i,(\vec{a}^+\vec{a})]`$ $`=`$ $`a_i.`$ (11)

For a finite $`N`$, the $`O(N+1)`$ anharmonic oscillator is purely quantum mechanical. For an infinite number of degrees of freedom, $`N\to \mathrm{\infty }`$, on the other hand, it can be used to mimic the quantum-field situation of the breaking and restoration of a continuous symmetry.

Using the mapping in Eq. (9), one can expand the Hamiltonian of the system in powers of the operators $`A`$, $`\alpha `$, $`b`$ and their hermitian conjugates. One then arrives at an expansion of the form $`H=H^{(0)}+H^{(1)}+H^{(2)}+H^{(3)}+H^{(4)}+\dots `$, where the superscripts indicate powers of operators without normal ordering. This expansion is in fact not unique, and therefore the preservation of the symmetries is not necessarily guaranteed. A more useful approach consists in organizing the expansion in powers of the parameter $`N`$, such that $`H=NH_0+\sqrt{N}H_1+H_2+\frac{1}{\sqrt{N}}H_3+\frac{1}{N}H_4+\dots `$ This is possible if one chooses a coherent state as the variational ground state for the model:

$$|\psi \rangle =\mathrm{exp}\left[\langle A_1\rangle A_1^++\langle b\rangle b^+\right]|0\rangle .$$ (12)

This trial vacuum state must accommodate two condensates, respectively for the $`X_\sigma `$ mode and the newly introduced boson $`A_1`$ (see ref. for details). The mode $`\alpha `$, on the other hand, is not allowed to condense. The ground-state energy, $`NH_0=\frac{\langle \psi |H|\psi \rangle }{\langle \psi |\psi \rangle }`$, calculated on the coherent state, takes the following form:

$$H_0=\frac{\omega }{2}\left(2d^2+1\right)+\frac{gs^2}{\omega }\left(d+\sqrt{1+d^2}\right)^2+\frac{g}{4\omega ^2}\left(d+\sqrt{1+d^2}\right)^4+\frac{\omega ^2s^2}{2}+gs^4-\eta s,$$ (13)

where we have introduced for convenience the rescaled condensates $`s=\frac{1}{\sqrt{N}}\langle X_\sigma \rangle `$, $`d=\sqrt{\frac{2}{N}}\langle A_1\rangle `$. The coherent ground state is fully determined by requiring that the values taken by the two condensates above lead to the minimum of $`H_0`$.
The minimization procedure with respect to $`s`$ and $`d`$ gives the following two coupled BCS equations:

$`\omega ^2+4gs^2+\frac{2g}{\omega }\left(d+\sqrt{1+d^2}\right)^2`$ $`=`$ $`\frac{\eta }{s},`$

$`2\omega d\sqrt{1+d^2}+\left(d+\sqrt{1+d^2}\right)^2\mathrm{\Delta }`$ $`=`$ $`0,`$ (14)

where $`\mathrm{\Delta }=\frac{2gs^2}{\omega }+\frac{g}{\omega ^2}\left(d+\sqrt{1+d^2}\right)^2`$ is the gap parameter . To gather the full dynamics of the leading order in the $`1/N`$-expansion one needs to generate the terms $`H_1`$ and $`H_2`$ of the Hamiltonian. This can be done by using parameter differentiation techniques (see ref. for details). The net result for both $`H_1`$ and $`H_2`$ then reads:

$`H_1`$ $`=`$ $`\frac{1}{\sqrt{2}}\left[2\omega d+\frac{\left(d+\sqrt{1+d^2}\right)^2}{\sqrt{1+d^2}}\mathrm{\Delta }\right]\left(\tilde{A}_1+\tilde{A}_1^+\right)+\left[\frac{2gs}{\omega }\left(d+\sqrt{1+d^2}\right)^2+\omega ^2s+4gs^3-\eta \right]\frac{(\beta ^++\beta )}{\sqrt{2\mathcal{E}_\sigma }}`$

$`H_2`$ $`=`$ $`\mathcal{E}_0+\mathcal{E}_\sigma \beta ^+\beta +\left[\omega +\mathrm{\Delta }+\frac{\mathrm{\Delta }d}{\sqrt{1+d^2}}\right]m+\left[2\omega +2\mathrm{\Delta }+\frac{\mathrm{\Delta }d}{\sqrt{1+d^2}}\right]\tilde{n}_1`$

$`+`$ $`\left(\tilde{A}_1+\tilde{A}_1^+\right)^2\left[\frac{\mathrm{\Delta }d\left(2+d^2\right)}{4\sqrt{\left(1+d^2\right)^3}}+\frac{g}{2\omega ^2}\frac{\left(d+\sqrt{1+d^2}\right)^4}{1+d^2}\right]+2gs\frac{(\beta ^++\beta )(\tilde{A}_1+\tilde{A}_1^+)}{\omega \sqrt{\mathcal{E}_\sigma }}\frac{\left(d+\sqrt{1+d^2}\right)^2}{\sqrt{1+d^2}}.`$ (15)

Here, $`\mathcal{E}_0`$ is a constant. Since one is not particularly interested in the ground-state energy, the latter will not be further specified. The shifted operators $`\tilde{A}_1=A_1-\langle A_1\rangle `$, $`\beta =b-\langle b\rangle `$, and $`\tilde{n}_1=\tilde{A}_1^+\tilde{A}_1`$ annihilate the coherent state $`|\mathrm{\Psi }\rangle `$. The frequency $`\mathcal{E}_\sigma `$ of the $`X_\sigma `$ mode is fixed such that the bilinear part of $`H_2`$ in the $`\beta `$ operators is diagonal. It is purely of perturbative character, and the frequency is explicitly given by

$$\mathcal{E}_\sigma ^2=\omega ^2+12gs^2+\frac{2g}{\omega }\left(d+\sqrt{1+d^2}\right)^2.$$ (16)

Using the gap equations (14) and the following easily verifiable identities,

$$\mathrm{\Delta }=-2d\sqrt{1+d^2}\left[\omega \left(d-\sqrt{1+d^2}\right)^2\right],\qquad \omega +\mathrm{\Delta }=(1+2d^2)\left[\omega \left(d-\sqrt{1+d^2}\right)^2\right],$$ (17)

one can establish that $`H_1`$ vanishes at the minimum. From $`H_2`$, and more precisely from the coefficient of $`m=\sum _{i=1}^N\alpha _i^+\alpha _i`$, one can deduce the existence of $`N`$ uncoupled modes. The common frequency of these modes is denoted by $`\mathcal{E}_\pi `$ and is given by

$$\mathcal{E}_\pi ^2=\omega ^2\left(d-\sqrt{1+d^2}\right)^4=\omega ^2+4gs^2+\frac{2g}{\mathcal{E}_\pi }=\frac{\eta }{s}.$$ (18)

These $`N`$ modes are our first asymptotic states. Furthermore, it can easily be verified that they have Goldstone character. In other words, their frequency vanishes in the exact symmetry limit $`(\eta =0)`$ and for a finite condensate $`(s\ne 0)`$. It is evident from the ansatz above that the model suffers from infrared divergences. However, since it is used for demonstration purposes only, we choose to disregard this difficulty here.
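As a worked illustration of Eqs. (14) and (18), the following sketch solves the two coupled BCS equations for an arbitrarily chosen (assumed) parameter set and confirms the consistency relation $`\mathcal{E}_\pi ^2=\eta /s`$ at the minimum.

```python
import numpy as np
from scipy.optimize import fsolve

omega, g, eta = 1.0, 0.1, 0.5        # illustrative parameters, not from the text

def gap_equations(x):
    s, d = x
    root = np.sqrt(1.0 + d * d)
    r2 = (d + root) ** 2                             # (d + sqrt(1+d^2))^2
    Delta = 2.0 * g * s**2 / omega + g * r2 / omega**2
    eq1 = omega**2 + 4.0 * g * s**2 + 2.0 * g * r2 / omega - eta / s
    eq2 = 2.0 * omega * d * root + Delta * r2
    return [eq1, eq2]

s, d = fsolve(gap_equations, x0=[0.5, -0.1])        # d < 0 in the broken phase
E_pi2 = omega**2 * (d - np.sqrt(1.0 + d * d)) ** 4  # Eq. (18)
print(f"s = {s:.6f}, d = {d:.6f}")
print(f"E_pi^2 = {E_pi2:.6f}   vs   eta/s = {eta / s:.6f}")
```

The two printed numbers coincide, which is precisely the last equality of Eq. (18) evaluated on a solution of (14); note that $`d`$ comes out negative, as required by the second gap equation for positive $`\mathrm{\Delta }`$.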
Clearly, the new and important result that has been obtained shows up in the fact that the proposed mapping provides asymptotic states in the ideal Fock space which correspond to the images of the single bosons. It should be stressed that this is a non-trivial finding which, as shown above, is a direct consequence of the extended HPM. It reproduces the result anticipated in the introduction, in clear departure from the HPM for boson pairs and in accordance with the Goldstone theorem.

So far, only the mapping of the bilinears in Eq. (9) was involved in expanding the Hamiltonian. The single-boson part of the mapping, on the other hand, was not directly used. The latter enters, however, in the definition of the two-point function $`\langle \mathrm{\Psi }|TX_{\pi ,i}(t)X_{\pi ,i}(t^{\prime })|\mathrm{\Psi }\rangle `$, where $`|\mathrm{\Psi }\rangle `$ is the coherent ground state. To leading order in $`1/N`$ and after a Fourier transform one obtains

$$D_{\pi ,ij}(s)=\int dt\,e^{i\sqrt{s}(t-t^{\prime })}\langle \mathrm{\Psi }|TX_{\pi ,i}(t)X_{\pi ,j}(t^{\prime })|\mathrm{\Psi }\rangle =\delta _{ij}\frac{2N\mathrm{\Gamma }_N^2(0)}{s-\mathcal{E}_\pi ^2+i\eta }.$$ (19)

The fact that the residue at the pole has to be $`2N\mathrm{\Gamma }_N^2(0)=1`$ leads to $`z_N=N-2`$.

Besides the Goldstone modes there also exist other excitations. They can be made explicit by diagonalizing the remaining part of $`H_2`$. This is a straightforward procedure which can be found in . In short, since the non-diagonal part of $`H_2`$ is at most bilinear in the operators $`\tilde{A}_1,\tilde{A}_1^+,\beta ,\beta ^+`$, a generalized Bogoliubov rotation of the type

$$Q_\nu ^+=X_\nu \beta ^+-Y_\nu \beta +U_\nu \tilde{A}_1^+-V_\nu \tilde{A}_1$$ (20)

can be performed and leads to uncoupled modes at the minimum of the action. The diagonalization is done by recalling the usual Rowe equations of motion

$$\langle RPA|[\delta Q_\nu ,[H_2,Q_\nu ^+]]|RPA\rangle =\mathrm{\Omega }_\nu \langle RPA|[\delta Q_\nu ,Q_\nu ^+]|RPA\rangle ,$$ (21)

where $`|RPA\rangle `$, the full ground state of the theory at this order, is a random-phase approximation (RPA) ground state, defined by $`Q_\nu |RPA\rangle =0`$. The Hamiltonian can then be written in the RPA phonon basis, $`|\nu \rangle =Q_\nu ^+|RPA\rangle `$, as follows:

$$H=NH_0+E_{RPA}+\mathcal{E}_\pi \sum _{i=1}^N\alpha _i^+\alpha _i+\sum _{\nu =\pm 1,\pm 2}\mathrm{\Omega }_\nu Q_\nu ^+Q_\nu +𝒪(N^{-\frac{1}{2}}).$$ (22)

It contains three terms of order $`(\sqrt{N})^2,(\sqrt{N})^1,(\sqrt{N})^0`$, respectively. The coefficient of the $`\sqrt{N}`$ term vanishes. The contribution $`E_{RPA}=\langle RPA|H_2|RPA\rangle `$ is the RPA correction to the ground-state energy and will not be given explicitly here. The frequencies $`\mathrm{\Omega }_\nu `$ are solutions of the characteristic equation of the RPA eigenvalue problem and are given by

$$\mathrm{\Omega }_\nu ^2=\frac{\eta }{s}+\frac{8gs^2}{1-\frac{4g}{\mathcal{E}_\pi }\frac{1}{\mathrm{\Omega }_\nu ^2-4\mathcal{E}_\pi ^2}}.$$ (23)

In the exact symmetry limit $`(\eta =0)`$, there exists a pair of zero-energy solutions among the four RPA eigenvalues which correspond to two uncorrelated Goldstone modes<sup>1</sup><sup>1</sup>1Here again, we disregard the infrared problem since the model is only used for demonstration purposes. The reader is referred to for a thorough study of these questions in four space-time dimensions.. This point is not the main purpose of the present note and therefore will not be discussed further. The reader may consider looking into ref. for a complete treatment of this question. We therefore see that the Hamiltonian in Eq.
(22) is the same as in , however augmented by the 'single-pion' term $`\mathcal{E}_\pi \sum _{i=1}^N\alpha _i^+\alpha _i`$. This extra term arises necessarily in our approach, where single bosons and pairs of bosons are treated on the same footing. In , the single-boson state has been treated on a heuristic level by neglecting exchange contributions to the self-energy. So, implicitly, this amounts to the same as using (22) at the order considered. The present systematic scheme puts the treatment of ref. on a firm theoretical ground.

## 4 Conclusion

In this paper we have extended previous work on the Holstein-Primakoff boson expansion for boson pairs applied to a relativistic field theory of interacting bosons . The aim was to treat single bosons and pairs of bosons simultaneously, which is necessary to unambiguously define the two-point function for the Goldstone mode and to extend the formalism to finite temperature. The mapping was applied to the anharmonic oscillator with broken $`O(N+1)`$ symmetry. It was explicitly shown that the extension to accommodate single bosons indeed renders, to leading order of the $`1/N`$-expansion, $`N`$ uncoupled Goldstone modes as well as RPA phonon modes. This result is novel and inaccessible to the bosonic Holstein-Primakoff mapping for boson pairs. The latter is only able to provide RPA phonon modes, as previously shown in ref. . The full power of the formalism will reveal itself in working out the next-to-leading order of the $`1/N`$-expansion by providing an unambiguous computation of all $`n`$-point functions. It also allows for a natural and straightforward extension to finite temperature. These two points will be discussed in a forthcoming publication.

Acknowledgments: I would like to thank G. Chanfray, P. Schuck and J. Wambach for the fruitful collaboration and for their continuous support. I also would like to thank P. Schuck and J. Wambach for discussions, for their interest in this work, and for their comments on the manuscript. Finally, I would like to thank the Gesellschaft für Schwerionenforschung (GSI) Darmstadt for the financial support.
## 1 Introduction

The motivation for our theoretical study of the problem of causality comes from three sources. The first is physical interest: what is the cause of all? The second is a story that happened around non-Euclidean geometry. And the last is our review of the four basic concepts: time, space, matter and motion.

#### 1.0.1 Cause of all

If the World is in unification, then it must be unified by connections of causality, and its unification is to be understood only in that sense. According to that spirit, contingency, if there really is something that happens by chance, is only a product of necessity. Since the World is united in connections of causality, and nothing of the World exists outside them, we can divide the World into two systems: $`A`$ comprises all of what are called causes, and $`B`$ all of what are called effects. Eliminating from the two systems all common elements, we thus have the following possibilities:

1. Both $`A`$ and $`B`$ are empty, i.e. there are neither pure causes nor pure effects. In other words, the World has no beginning and no end.

2. $`A`$ is not empty, but $`B`$ is. Thus, there exists a pure cause. The World has a beginning but no end.

3. $`A`$ is empty, but $`B`$ is not. There is no pure cause but there is a pure effect. The World has no beginning but an end.

4. Both $`A`$ and $`B`$ are not empty. The World has both a beginning and an end.

Only one of the four above possibilities corresponds to reality. Which possibility is it, and what does the answer depend on?

If the World is assumed to be a unified system comprising causes and effects, any effect must be a direct result of the causes which have generated it, and these causes had themselves been effects, direct results of other causes before, etc. – there is no effect without a cause. A mysterious motivation always urges man to search for the causes of every phenomenon and every thing. Idealistic ideology believes that an absolute ideation, a supreme spirit, or a Creator, a God,… is the supreme cause, the cause of all. Materialistic ideology thinks that matter is the origin of all, the first of all. This situation is contradictory.

If a supreme cause honestly exists, then it must be the difference! Indeed, if there were no existence of difference, there would be no existence of anything, including idealistic ideology with its ideation and spirit, and materialistic ideology with its material facilities. Briefly, if there were no difference, this World would not exist. But if the difference is the supreme cause, namely the cause of all causes, then it must be the cause of itself, or in other words, it must also be the effect of itself. We have recognized the existence of difference, which means that we have tacitly recognized its relative conservation: indeed, you could not be an idealist if you now are a materialist; anything, as long as it still is itself, cannot be anything else!

### 1.1 A story that happened in geometry

Let us return to an old story: the matter of argument about the axiomatics of Euclid's geometry. Driven by the same mysterious motivation, people always thirst to search out "the supreme cause". The goal here is humbler, being restricted to geometry, and the first to realize it was Euclid. Euclid showed in his Elements how geometry could be deduced from a few definitions, axioms, and postulates. These assumptions for the most part dealt with the most fundamental properties of points, lines, and figures.
His first four assumptions have been easy to accept since they seem self-evident, but the fifth, the so-called Euclidean postulate, incited everybody to suspect its essence: "this postulate is complicated and less evident". For twenty centuries geometers tried to purify Euclid's system by proving that the fifth postulate is a logical consequence of his other assumptions. Today we know that this is impossible. Euclid was right: there is no logical inconsistency in a geometry without the fifth postulate, and if we want it we will have to put it in at the beginning rather than prove it at the end. And the struggle to prove the fifth postulate as a theorem ultimately gave birth to a new geometry – non-Euclidean geometry. Without exception, their efforts only succeeded in replacing the fifth postulate with some other equivalent postulate, which might or might not seem more self-evident, but which in any case could not be proved from Euclid's other postulates either. In that way they affirmed that the problem had been solved: Euclid's postulate was just an axiom, because the opposite supposition led to a non-Euclidean geometry without immanent contradiction.

But… was such a conclusion satisfactory? While everybody was joyful because it seemed that everything had been arranged all right and the proposed goal had been carried out – the number of geometric axioms minimized and the axioms purified – whimsically, a new axiom had been smuggled in: Lobachevski's axiom – and this axiom and Euclid's fifth exclude each other! Nobody got to understand clearly and profoundly what this contradiction meant. But contradiction is still contradiction; it brought about many arguments and violent oppositions, even grudges. Afterwards, since Beltrami had proved the correctness of Lobachevski's geometry on the pseudosphere – an infinite two-space of constant negative curvature in which all of Euclid's assumptions are satisfied except the fifth postulate – the situation was made less tense.

If non-Euclidean geometers, from the outset, when setting out to build their geometry, had declared to readers that the objects of the new geometry were not the Euclidean plane surface but the pseudosphere, not the Euclidean straight line but a line of the pseudosphere, maybe nobody would have doubted and opposed at all! What a pity! Or was it not a pity that no such thing happened? An actual regret was this: the whole of the problem was not what was brought out and solved on stage but what – its consequence – happened backstage. For even if non-Euclidean geometry were absolutely right everywhere, it would mean that among the very same objects of geometry – the Euclidean plane surface and straight line – there might nevertheless coexist two forms of mutually exclusive relationships, conveyed in Euclid's axioms and in Lobachevski's axioms. It was possible to allege one thing or another as a reason for forcing everybody to accept this disagreeableness, but that would not be faithful. Here, causal single-valuedness was broken; here, the relative conservation of difference confused white with black; there was a danger that one thing was another and vice versa.

The usual way to "prove" that a system of mathematical postulates is self-consistent is to construct a model that satisfies the postulates out of some other system whose consistency is unquestioned.
The axiomatic method, used broadly in mathematics, clearly brings many conveniences, but it is only good when causal single-valuedness is ensured — when one always takes care not to strip real, physical sense from the subjects under consideration. If the forms of the relationships among objects are brought out as axioms while the objects themselves — the real owners of those relationships — are defied, it is quite possible that at the most unexpected moment causal single-valuedness breaks and contradiction develops. For what we must hold together is this: the objects come first, and their relationships are corollaries formed by their coexistence — not the other way round. If we have a system of objects and we wish to find all possible relationships among them by logical argument, then first, and at the very least, we have to know the intrinsic relationships of the objects. Intrinsic relationships control the nature of objects; in turn, the nature of objects directs the possible relationships among them — and, assuredly, among those there may be no coexistence of mutually exclusive relationships. Intrinsic relationship, in the philosophers’ way of speaking, is the spontaneity of things. Science today searches for the spontaneity of things in two directions: the more extensive and the more elementary.

Now return to the story. As we have already stated, the very same objects of Euclid’s geometry admitted two forms of mutually exclusive relationships — how is this to be understood? The only possibility is that Euclid’s axiomatics is not yet complete, in the sense that the comprehension of geometrical objects is not yet perfected. Euclid himself had put forward definitions of his geometrical objects, but modern mathematicians have criticized them as “puzzling” and “heavily intuitive”. According to them, the primary objects of geometry are indefinable and are merely called points, lines, surfaces, etc. for historical reasons only. But geometrical objects have other names: “zero”-, “one”-, “two”-, and “three”-dimensional spaces (the “zero”-dimensional space, that is the point, is added by the author to complete the set). We may ask: could these objects self-exist independently? If they could, why would they relate to one another? Following the logical course of the facts, we realize that conceptions of objects develop from experience, which is gained through mankind’s practical activities in nature — they are not innate, not available by themselves in our heads. (Therefore we should not consider them apart from intuition, should not dispossess ourselves of the ability to imagine them; how unreasonable that would be!) At a deeper level, we can perceive that not all geometrical objects may exist independently: any $`n`$-dimensional space is the intersection of two other spaces of dimension higher by one ($`n+1`$). Thus we seem to have definitions: a point is the intersection of two lines; a line is the intersection of two surfaces; a surface is the intersection of two volumes — and a volume is the intersection of what? However, in a geometry built by human imaginative capability they are evidently independent objects, and for convenience we call them spatial entities. The simplest geometrical objects are homogeneous entities. These are elements in which, speaking simply, as one moves with respect to all their possible degrees of freedom, it is quite impossible to find any inner difference. The objects of Euclid’s geometry are a part of a system of homogeneous entities. If we build an axiomatics only for this part, it is clear that this axiomatics will not be general.
An axiomatics suited to homogeneous spaces is precisely an axiomatics for the spherical surface<sup>2</sup><sup>2</sup>2The surface of a sphere is a two-dimensional space of constant positive curvature.. Euclid’s geometry is only a limiting case of this generalized geometry. For a spherical surface — that is, for a homogeneous surface in general — the following postulate holds: any two non-coincident “straight” lines (a “straight” line being a homogeneous line dividing the surface that contains it into two equal halves) always intersect at two points, and these two points divide each line into two halves. It can also be stated thus: any two points on a homogeneous surface belong to one and only one “straight” line on that surface, provided they do not divide this line into two equal halves. Applying this postulate to the Euclidean plane as a limiting case, we realize immediately that it is just the purport of the first Euclidean axiom: through two given points only one straight line can be drawn. Indeed, any two points in an investigated region of the Euclidean plane belong to one and only one “straight” line, since they do not divide the line containing them into two equal halves. So we can say that the mode of stating the fifth Euclidean postulate was inaccurate from the outset, because any two “straight” lines on a given homogeneous surface always intersect at two points and divide each other into two halves. In a sufficiently small region of the surface one would find either only one of their intersection points, the other lying at infinity, or no point at all — both lying at infinity. In such a case the two “straight” lines appear to be parallel to each other. An equivalent statement of the fifth postulate, corrected in the sense of the above comment, can quite well be proved as a theorem.

There is a very important property of spatial entities: any spatial entity can be contained only in another spatial entity of the same dimension and the same curvature, or of higher dimension but no higher curvature. This seems perfectly evident: two circles of different curvatures cannot be contained in each other; a spherical surface of any curvature cannot contain a circle of lower curvature… Similarly, two spaces of different curvatures cannot be contained in each other. Curvature, here, corresponds to any quantity characterizing the inner relationships of the object under investigation.

### 1.2 Contradiction generated by difference is the dynamic power of all

In essence, the Nature is a system of positive actions and negative actions. On what, then, does the Nature act positively, and on what negatively? These secrets are explored and discovered by science more and more; and in that search — leaving aside its dynamic source — logical argument plays a great role. But is what we call logic not itself a string of positive actions and negative actions in all their orders? Since thought is only a phenomenon of the Nature, the law of positive and negative actions of thought is also the law of positive and negative actions of the Nature. In other words, the law of actions of the Nature is reflected and presented in the law of actions of thought. This law is: whatever is without immanent contradiction is in positive action by itself; whatever carries an immanent contradiction is in negative action by itself.
Positive action (looking forward along the process) and negative action (looking back upon it) both have an ultimate target: coming to, and closing with, a new action. Let us take a class of similar concepts: having, existence, conservation, positive action. In opposition to them stands another class: nothing, non-existence, non-conservation, negative action. They belong among the most general and basic concepts, for in every phenomenon of the Nature — sensation, thinking, motion, variation, and so on — they are always present. But it turns out that the powers of the two classes of concepts are not equivalent (and that is really a lucky thing!). Let us now establish the following action, called the $`A`$ action: “Having all, existing all, conserving all, and acting positively on all.” And another, called the $`B`$ action, with the opposite purport: “Nothing at all, non-existing all, non-conserving all, and acting negatively on all.” To act positively on the $`A`$ action is to act negatively on the $`B`$ action, and vice versa. The $`B`$ action says:

* ‘Nothing at all’, i.e. the $`B`$ action itself is not had.
* ‘Non-existing all’, i.e. the $`B`$ action itself does not exist.
* ‘Non-conserving all’, thus the $`B`$ action itself is not conserved.
* ‘Acting negatively on all’, that is, acting negatively on the $`B`$ action itself.

Briefly, the $`B`$ action contains an immanent contradiction: it acts negatively on itself. By negating itself, the $`B`$ action automatically acts positively on the $`A`$ action. This means that there is no existence of absolute nihility or absolute emptiness — and therefore the World was born! The $`A`$ action acts on all, including itself and the $`B`$ action; but the $`B`$ action negates itself, so the $`A`$ action has no immanent contradiction. Thus, in the sphere of the $`A`$ action, whatever does not act negatively on itself acts positively on itself.

### 1.3 What is the most elementary?

There are four very important concepts of knowledge: time, space, matter, and motion. They are different from each other, but is it true that they are equal to each other and can co-exist independently? Let us start from time. Is it an entity? Could it exist independently, apart from space, matter, and motion? Evidently not! If time were isolated from motion, the very conception of it would be lost — time would be dead. And the conception of motion has a higher independence than that of time. So time is not the first: it could not self-exist; it is only a consequence of the others. Motion is not the first either: it could not self-exist apart from matter and space. In fact, motion is only a manifestation of the relationship between matter and space. Then, of the two that remain — matter and space — which is the more elementary? Which is the former? Or are they equal, both born of something still more elementary? Perhaps posing such a question is unnecessary, because just like time and motion, matter could not exist apart from space. Take any concrete manifestation of matter: it exists not only because of itself but also because of the simultaneous existence of the space which surrounds it (and contains it), so that it can still be itself. Clearly, matter also falls within the spatial category — and what else could it be, if not a space with inner relationships different from those of the usual space that we know?!
But now, according to the property of spatial entities raised in the previous subsection, this fact is a contradiction: two spaces of the same dimension but of different curvatures (inner relationships) cannot be contained in each other! Thus either we are wrong — yet it is evident that two circles of different radii cannot be placed coincidentally in each other — or the Nature is wrong: different spaces can be placed in each other, defying contradiction. The contradiction generated by this is the power of motion: motion to escape from contradiction. Thus we may say that matter is a spatial entity of some curvature. But where were spatial entities born from, and how can they exist? Are they products of higher-dimensional spaces — and then what about the higher-dimensional spaces themselves?

Let us imagine that everything vanishes: matter, space, … and, as a whole, all possible differences. Then what is left? Nothing at all! But that is a unique remainder! Clearly, this unique remainder is limitless and homogeneous “everywhere”; otherwise it would violate our requirement. Now require further that even the unique remainder vanish too. What remains then? Without difficulty, we identify immediately that the substitute which replaces it is just itself! Therefore we call it absolute space. The absolute space can vanish into itself; in other words, by acting negatively on itself it acts positively on itself. That means the absolute space can self-exist, depending on nothing else. It is the former element. It is the “supreme cause”, too — because, contrary to anyone’s will, it still contains a difference. Indeed, in the unique there is not anything, but there still is the Nothing! Nothing is contained in Having; Nothing creates Having. Having — but Nothing at all! Here negative action is also positive action; Nothing is also Having, and vice versa. The immanent contradiction of this state is infinitely great. Expressed mathematically, the absolute space has zero curvature, yet in this space there exist points of infinite curvature. This difference is infinitely great and, therefore, the contradiction generated is infinitely great too. The Nature did not want to exist in such a contradictory state. It looked for a way out, and the consequence was that the Nature was born. Thus the familiar, vague truth appears anew: “matter is not born naturally (from nihility), nor does it vanish naturally (into nihility); it is always in motion and in transformation from one form to another.” Nowadays it is necessary to affirm again: matter is indeed created from nothing, but not without a motive. The force that makes it generate is also the power that makes it exist, move, and transform.

## 2 Representation of contradiction in a quantitative formula: the equation of causality

Any contradiction originates in the coexistence of two mutually rejecting actions. This is represented as follows:

$$M=\left\{\begin{array}{cc}A\ne A&\text{Action }K_1\\ A=A&\text{Action }K_2\end{array}\right.$$

Clearly, the contradiction $`M`$ becomes more severe as the power of mutual rejection between the two actions $`K_1`$ and $`K_2`$ grows, and the power of mutual rejection of two actions is estimated only from the degree of difference between them. For a contradiction to be solved means that the difference between the two actions diminishes to zero. In this process the two actions $`K_1`$ and $`K_2`$ both vary so as to reach, and end at, a new common action $`K_3`$. On what, then, do the differences $`[K_1-K_3]`$ and $`[K_2-K_3]`$ depend? Obviously, these differences depend on the conservation capacities of the actions $`K_1`$ and $`K_2`$.
The higher the conservation capacity of an action, the lower the difference between it and the final action. On what, in turn, does the conservation capacity of an action depend? There are two elements. First, it depends on the immanent contradiction of the action: the greater its immanent contradiction, the lower its conservation capacity. Second, it depends on the new contradiction generated by the variation of the action: the greater this contradiction, the more the variation of the action is resisted and, therefore, the higher its conservation capacity.

Variation — and one kind of it, motion — is generated by contradiction. More exactly, motion is a manifestation of the solution to contradiction. The more severe a contradiction becomes, the more urgent the need for its solution, and hence the more violent the motion, i.e. the variation of the state, of the contradiction. Calling the violence, or quickness, of the variation of contradiction $`Q`$ and the contradiction state $`M`$, the above principle can be represented as follows:

$$Q=K_{(M)}M.$$

We call this the equation of causality, where $`K_{(M)}`$ is the means of solution to the contradiction. At the simplest level, $`K_{(M)}`$ can be a function of the contradiction state; in effect, it represents the ease with which the state escapes from its contradiction. If the contradiction is characterized by quantities $`x,y,z,\mathrm{\dots }`$, these quantities themselves are the transport facilities of the contradiction — the degrees of freedom over which the contradiction is solved. Hence the ease is valued as the derivative of the contradiction with respect to its degrees of freedom: the greater the derivative of the contradiction with respect to a given degree of freedom, the higher the way-out “scenting” capability of the state in that direction, and the larger the “amount of contradiction” that escapes through that degree of freedom. Thus

$$K_{(M)}\propto \left|M^{\prime }(x,y,z,\mathrm{\dots })\right|,$$

and we have

$$Q=a\left|M^{\prime }(x,y,z,\mathrm{\dots })\right|M(x,y,z,\mathrm{\dots }),$$

where the coefficient $`a`$ arises only from the choice of the system of units of the quantities.

We said that difference is the origin of all, but difference in itself has no meaning. The so-called meaning is generated in direct relationship, in direct comparison: the Nature cannot feel difference across a “distance”. A state which has any immanent contradiction must vary until it reaches a new state having no intrinsic contradiction — or, more exactly, an infinitesimal one. That process is one-way, passing continuously through all values of the contradiction, from the initial value to the final one.

We have thus endeavored to argue that motion (variation) necessarily has its cause, and that the character of motion obeys the equation of causality. Must invariance, i.e. conservation, then be self-evident, without any cause? One may say that any state has only two possibilities — either it is conserved or it varies. More exactly, everything is conserved; but if that conservation causes a contradiction, then variation must take place to escape the contradiction, and this variation obeys the equation of causality. If this theoretical point is true, our task is only this: to learn the manner of comprehension, to estimate exactly and completely the contradiction of a state, and to describe it in the equation of causality — at that point we shall have every law of variation.
But is this enough for our ultimate perception of the Nature — and of people themselves, with their own power of thought — to explain the wonder which always surprises the generations: why can the Nature perceive itself, through its product, people?!

## 3 Using the causal principle in some concrete and simplest phenomena

Introduce a quantity $`T`$, the inverse of $`Q`$, called the stagnancy of the solution to contradiction. Thus

$$T=\frac{1}{a\left|M^{\prime }\right|M}.$$

The sum of the stagnancies over the process of solution to the contradiction, from $`M_0`$ to $`M_0-\mathrm{\Delta }M`$, is called the time generated by this variation, $`\mathrm{\Delta }t`$. From the above definition and Figure 1 we identify that

$$\mathrm{\Delta }t\approx \frac{T+(T+\mathrm{\Delta }T)}{2}\mathrm{\Delta }M.$$

Thus,

$$\frac{\mathrm{\Delta }M}{\mathrm{\Delta }t}\approx \frac{2}{2T+\mathrm{\Delta }T}.$$

We have

$$\underset{\mathrm{\Delta }T\to 0,\mathrm{\Delta }M\to 0,\mathrm{\Delta }t\to 0}{lim}\frac{\mathrm{\Delta }M}{\mathrm{\Delta }t}=\frac{1}{T}=a\left|M^{\prime }\right|M.$$

Therefrom, since the contradiction decreases by $`\mathrm{\Delta }M`$ during $`\mathrm{\Delta }t`$, we obtain a new form of the equation of causality,

$$\frac{dM}{dt}=-a\left|M^{\prime }(x,y,z,\mathrm{\dots })\right|M(x,y,z,\mathrm{\dots }).$$

Thus, if we consent to treat the time as an independent quantity and the contradiction as a time-dependent one, the speed of escape from contradiction with respect to time is proportional to the magnitude of the contradiction and to the means of its solution. In the case where the contradiction is characterized by itself, namely $`M=M_{(M)}`$, we have

$$M=M_0e^{-a(t-t_0)},$$

where $`M_0`$ is the contradiction at the time $`t=t_0`$.
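This limiting case is easy to check numerically. The short sketch below is our illustration, not part of the original argument: it integrates $`dM/dt=-aM`$ (the case $`\left|M^{\prime }_{(M)}\right|=1`$) with an explicit Euler step and compares the result against the closed-form solution $`M_0e^{-a(t-t_0)}`$; the values of $`a`$, $`M_0`$ and the step size are arbitrary choices.

```python
import numpy as np

# Euler integration of dM/dt = -a*M (the case |M'_(M)| = 1 of the
# equation of causality), compared with M(t) = M0 * exp(-a*(t - t0)).
a, M0, t0 = 0.5, 10.0, 0.0     # arbitrary illustrative values
dt, steps = 1e-3, 10000

M, t = M0, t0
for _ in range(steps):
    M += -a * M * dt           # contradiction decays at rate a*|M'|*M
    t += dt

exact = M0 * np.exp(-a * (t - t0))
print(f"numerical M(t={t:.1f}) = {M:.6f}, closed form = {exact:.6f}")
```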
### 3.1 Thermotransfer principle

Suppose that over some interval of a one-dimensional space we have a distribution of some quantity $`L`$. If the distribution has an immanent difference, i.e. an immanent contradiction, it will vary by itself to reach a new state with the lowest immanent contradiction. That variation obeys the equation of causality,

$$\frac{dM}{dt}=-a\left|M^{\prime }\right|M.$$

For convenience, we spread this distribution out along the $`x`$ axis and take some point as the origin of coordinates. Because the distribution is that of a single quantity $`L`$, all its values at points of the space of distribution have the same dimension (homogeneity), and the immanent difference of the distribution is just a difference of degree. At two points $`x_1`$ and $`x_2`$ the quantity $`L`$ takes two values, $`L_1`$ and $`L_2`$, respectively. For a difference of degree there is only one way of estimation: taking the difference $`(L_2-L_1)`$. But the two points $`x_1`$ and $`x_2`$ only ‘feel’ the difference from each other in direct connection, and a contradiction may appear only in that direct connection: at the boundary of two neighbouring points $`x_1`$ and $`x_2`$, the quantity $`L`$ takes both values $`L_1`$ and $`L_2`$ simultaneously; these two actions act negatively on each other, and the magnitude of the contradiction depends on the difference $`(L_1-L_2)`$. Therefore, in order for the difference $`(L_1-L_2)`$ to be the yield of the direct connection between the two points, we must let, for example, $`x_2`$ tend infinitely close to $`x_1`$ (without coinciding with it). Thereupon the immanent contradiction in the infinitesimal neighbourhood of $`x_1`$ is valued as the limit of the ratio

$$\frac{L_1-L_2}{x_1-x_2}$$

as $`x_2\to x_1`$, i.e. the derivative of $`L`$ over the space of distribution at $`x_1`$. From the above, we have

$$M=\frac{\partial L}{\partial x}.$$

Substituting this value of $`M`$ into the equation of causality:

$$\frac{\partial }{\partial t}\frac{\partial L}{\partial x}=-a\frac{\partial L}{\partial x}.$$ (1)

The immanent contradiction at each point is solved according to Eq. (1). That makes the distribution vary; we now seek the law of this variation. The immanent contradiction in the neighbourhood of $`x`$ is

$$M_{x,t}=\frac{\partial L}{\partial x}\bigg|_{x,t}.$$

After a time interval $`\mathrm{\Delta }t`$ this contradiction has decreased to the value

$$M_{x,t+\mathrm{\Delta }t}=\frac{\partial L}{\partial x}\bigg|_{x,t+\mathrm{\Delta }t}.$$

Thus the variation has, as it were, compressed some amount of the values of $`L`$ from higher-valued points to lower-valued ones, making ‘a flowing current’ of values of $`L`$ through $`x`$ (Figure 2). Clearly, the magnitude of ‘the flowing current’ — the amount of the values of $`L`$ that flows through $`x`$ in the time interval $`\mathrm{\Delta }t`$ — is

$$\mathrm{\ell }_x=\frac{\partial }{\partial t}\frac{\partial L}{\partial x}\bigg|_x\mathrm{\Delta }t=-a\frac{\partial L}{\partial x}\bigg|_x\mathrm{\Delta }t.$$

Similarly, at the point $`x+\mathrm{\Delta }x`$ we have

$$\mathrm{\ell }_{x+\mathrm{\Delta }x}=-a\frac{\partial L}{\partial x}\bigg|_{x+\mathrm{\Delta }x}\mathrm{\Delta }t.$$

In this example the current $`\mathrm{\ell }_x`$ makes the values of $`L`$ at points in the interval $`\mathrm{\Delta }x`$ increase, and $`\mathrm{\ell }_{x+\mathrm{\Delta }x}`$ makes them decrease. The consequence is that the increment $`\mathrm{\Delta }L`$ obtained by the interval $`\mathrm{\Delta }x`$ is

$$\mathrm{\Delta }L|_{\mathrm{\Delta }t}=a\mathrm{\Delta }t\left(\frac{\partial L}{\partial x}\bigg|_{x+\mathrm{\Delta }x}-\frac{\partial L}{\partial x}\bigg|_x\right)=a\mathrm{\Delta }t\frac{\partial ^2L}{\partial x^2}\bigg|_{x\le \xi \le x+\mathrm{\Delta }x}\mathrm{\Delta }x.$$

The average density of the increment at each point of the interval $`\mathrm{\Delta }x`$ is then

$$\overline{\mathrm{\Delta }L}|_{\mathrm{\Delta }t}\approx \frac{a\mathrm{\Delta }t\frac{\partial ^2L}{\partial x^2}|_\xi \mathrm{\Delta }x}{\mathrm{\Delta }x}.$$

The exact value is reached in the limit $`\mathrm{\Delta }x\to 0`$,

$$\mathrm{\Delta }L|_{x,\mathrm{\Delta }t}=\underset{\mathrm{\Delta }x\to 0}{lim}\overline{\mathrm{\Delta }L}|_{\mathrm{\Delta }t}=a\mathrm{\Delta }t\frac{\partial ^2L}{\partial x^2}\bigg|_x.$$

Thus

$$\underset{\mathrm{\Delta }t\to 0}{lim}\frac{\mathrm{\Delta }L}{\mathrm{\Delta }t}\bigg|_x=a\frac{\partial ^2L}{\partial x^2},$$

or

$$\frac{\partial L}{\partial t}=a\frac{\partial ^2L}{\partial x^2}.$$ (2)

The speed of the time variation of $`L`$ in the neighbourhood of any point of the distribution is proportional to the second derivative, over the space of distribution, of this quantity right at that point. As is well known, Eq. (2) is just the diffusion equation (heat-transfer equation), which had previously been found on an experimental basis. On the other hand, the corollary of the above reasoning announces the conservation of the values of the quantity $`L`$ over the whole distribution: although the values at each separate point may vary, whenever the value at any point decreases by some amount, the value at a neighbouring point increases by just the same amount. If the space of distribution is limitless, then with increasing time the mean value of the distribution decreases gradually to zero.
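The two conclusions of this subsection — relaxation according to Eq. (2) and conservation of the summed values of $`L`$ — can be illustrated with a minimal finite-difference sketch. It is ours, not the author’s; the initial profile, the value of $`a`$ and the grid are arbitrary choices, and the zero-flux boundaries mimic a distribution from which no current escapes.

```python
import numpy as np

# Explicit finite-difference integration of dL/dt = a * d^2L/dx^2
# on a closed (zero-flux) domain; arbitrary units.
a, dx, dt = 1.0, 0.1, 0.004          # dt < dx^2/(2a) for stability
x = np.arange(0.0, 10.0, dx)
L = np.exp(-(x - 5.0)**2)            # assumed initial distribution

total0 = L.sum()
for _ in range(2000):
    lap = np.zeros_like(L)
    lap[1:-1] = (L[2:] - 2.0*L[1:-1] + L[:-2]) / dx**2
    lap[0] = (L[1] - L[0]) / dx**2      # zero-flux boundaries:
    lap[-1] = (L[-2] - L[-1]) / dx**2   # no current leaves the domain
    L += a * dt * lap

print(f"peak value fell from 1.0 to {L.max():.4f}")
print(f"sum of L conserved: {total0:.6f} -> {L.sum():.6f}")
```

With these boundary terms the discrete Laplacian sums to zero exactly, so the total of $`L`$ is conserved to machine precision while the profile flattens.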
### 3.2 Gyroscope

The conservation of angular momentum vectors may be regarded as the conservation of two components: direction and magnitude. If in a system the conservation of direction is not violated but the conservation of magnitude is, the system must vary in some way so that the whole system comes to have a single angular momentum vector. In the case where the conservation of both magnitude and direction is violated, the solution to the contradiction of state depends on the form of articulation. We now consider the case in which the gravitational and centrifugal components may be neglected (Figure 3). For simplicity, we admit that a motor maintains a constant angular velocity $`\omega `$ of the system; thus we are interested only in the contradiction generated by the violation of the conservation of the direction of $`k\vec{\omega }_0`$.

The action $`K_1`$ — the conservation of $`k\vec{\omega }_0`$ — says that the speed of variation of the direction of the vector $`k\vec{\omega }_0`$ equals zero. But the action $`K_2`$ — the conservation of $`\vec{\omega }`$ — says that the direction of $`k\vec{\omega }_0`$ must vary with the angular velocity $`\omega \mathrm{cos}\alpha `$. Thus, macroscopically, the difference $`[K_1-K_2]=\omega \mathrm{cos}\alpha `$ is the origin of the contradiction, and the contradiction is proportional to this difference:

$$M\propto \omega \mathrm{cos}\alpha ,$$
$$M=|k\vec{\omega }_0\times \vec{\omega }|=k\omega _0\omega \mathrm{cos}\alpha .$$

The proportionality factor $`k\omega _0`$ is taken (still in macroanalysis) on the basis of the following argument: if $`\omega _0`$ were zero, the vector direction $`k\vec{\omega }_0`$ would certainly not exist, and the problem of the contradiction generated by its directional conservation would not arise. Inserting this value of $`M`$ into the equation of causality, we obtain

$$\frac{\partial M}{\partial t}=-ak^2\omega _0^2\omega ^2\mathrm{sin}\alpha \mathrm{cos}\alpha .$$

Here we have calculated $`M^{\prime }=M_\alpha ^{\prime }`$. From this equation we see that if $`\alpha =0`$, the escaping speed of the contradiction state equals zero. Writing out the time derivative of the contradiction,

$$\frac{\partial }{\partial t}(k\omega _0\omega \mathrm{cos}\alpha )=-ak^2\omega _0^2\omega ^2\mathrm{sin}\alpha \mathrm{cos}\alpha ,$$

or

$$\frac{\partial \alpha }{\partial t}=ak\omega _0\omega \mathrm{cos}\alpha ,\alpha \ne 0.$$ (3)

The variation of $`\alpha `$ causes a new contradiction, proportional to the value of $`\partial \alpha /\partial t`$; therefore there is no motional conservation over the component $`\alpha `$, and the escaping speed in Eq. (3) is also just the instantaneous velocity of the axis of the rotation plane over $`\alpha `$. The time for the angle between the axis of the rotation plane (i.e. the direction of the vector $`k\vec{\omega }_0`$) and the horizontal direction to vary from the value $`+0`$ to $`\alpha `$ is

$$t=\frac{1}{2ak\omega _0\omega }\mathrm{ln}\frac{1+\mathrm{sin}\alpha }{1-\mathrm{sin}\alpha }\bigg|_{+0}^\alpha .$$
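Equation (3) can also be checked numerically. The sketch below is our illustration; the value of the product $`ak\omega _0\omega `$ and the target angle are arbitrary. It integrates $`\partial \alpha /\partial t=ak\omega _0\omega \mathrm{cos}\alpha `$ and compares the elapsed time with the closed-form expression above.

```python
import numpy as np

# Integrate d(alpha)/dt = c * cos(alpha), with c = a*k*omega0*omega,
# and compare with t = (1/2c) * ln[(1+sin a)/(1-sin a)] |_{+0}^{alpha}.
c = 2.0                        # arbitrary value for a*k*omega0*omega
alpha, alpha0 = 1e-6, 1e-6     # start essentially from alpha = +0
dt, t, target = 1e-5, 0.0, 1.0

while alpha < target:
    alpha += c * np.cos(alpha) * dt
    t += dt

def closed_form(a1, a0):
    F = lambda s: 0.5 * np.log((1 + np.sin(s)) / (1 - np.sin(s)))
    return (F(a1) - F(a0)) / c

print(f"numerical t = {t:.5f}, closed form = {closed_form(target, alpha0):.5f}")
```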
### 3.3 Buffer zone of finite space

Suppose there is a finite space $`[A]`$ whose intrinsic structure satisfies the invariance of the principle of causality, and that this space lies in the absolute space $`[O]`$. At the boundary of the two spaces there exists a contradiction caused by the difference between them. Because both spaces conserve themselves, the contradiction can only be solved by the formation of a buffer zone (i.e. a field), owing to which the difference becomes smaller and more harmonic. The structure of the buffer zone must take a form such that the level of harmonicity reaches its greatest value, i.e. the immanent contradiction at each point of the field has the lowest possible value. Clearly, the farther from the center of the space $`[A]`$, the more the property of $`[A]`$ diminishes. In other words, the buffer zone (field) surrounding $`[A]`$ also has the property of $`[A]`$, and this property is a function of $`r`$ — the distance from the center of the space $`[A]`$ to the considered point of the field (Figure 4). From the problems presented, and denoting the buffer zone by $`T`$, we have

$$T_{[A]}=g(r)\frac{[A]}{r},$$

where $`g(r)`$ is an unknown function of $`r`$ alone, characterizing the intrinsic harmonicity of the field.

If in the field zone $`T_{[A]}`$ there is a space $`[B]`$, and this space does not disturb the field $`T_{[A]}`$ considerably, then the difference between $`[B]`$ and $`T_{[A]}`$ forces $`[B]`$ to move in the field $`T_{[A]}`$ so as to approach the position where the difference between $`[B]`$ and $`T_{[A]}`$ has its lowest value (here we have admitted that the space $`[B]`$ also has a self-conservation capability). This contradiction of state is proportional to the difference $`([B]-T_{[A]})`$. If we detect a factor $`c`$ which ‘translates the language’ of the property of $`[B]`$ into the property of $`[A]`$, the contradiction may be expressed as follows:

$$M=f\left(c[B]-g(r)\frac{[A]}{r}\right),$$

where $`f`$ is a proportionality factor. The law of motion of the space $`[B]`$ in the field $`T_{[A]}`$ is sought through the equation of causality,

$$\frac{\partial M}{\partial t}=-a|M^{\prime }|M=-af^2[A]\left|\frac{g(r)-rg^{\prime }(r)}{r^2}\right|\left(c[B]-g(r)\frac{[A]}{r}\right).$$

Here the transfer quantity (degree of freedom) of the contradiction is $`r`$. Because the motion of the space $`[B]`$ must happen simultaneously over all directions with centripetal components, the resultant escaping velocity of the state — i.e. the resultant velocity of the space $`[B]`$ in the field $`T_{[A]}`$ — must be estimated as the integral of the escaping speed over all directions having centripetal components:

$$\frac{\partial M}{\partial t}=-a[A]4\pi f^2\int _0^{\pi /2}\left|\frac{g(r)-rg^{\prime }(r)}{r^2}\right|\left(c[B]-g(r)\frac{[A]}{r}\right)\mathrm{cos}^2\phi d\phi =-a\pi ^2f^2[A]\left|\frac{g(r)-rg^{\prime }(r)}{r^2}\right|\left(c[B]-g(r)\frac{[A]}{r}\right).$$

Expanding the left-hand side, we obtain

$$-f[A]\left(\frac{g(r)}{r}\right)^{\prime }\frac{\partial r}{\partial t}=-af^2\pi ^2[A]\left|\frac{g(r)-rg^{\prime }(r)}{r^2}\right|\left(c[B]-g(r)\frac{[A]}{r}\right),$$

or

$$\frac{\partial r}{\partial t}=-af\pi ^2\frac{g(r)-rg^{\prime }(r)}{r^2}\left(c[B]-g(r)\frac{[A]}{r}\right).$$

Notice here that $`g(r)`$ is a function of $`r`$ alone. If it can be shown that the variation of $`r`$, as well as the conservation of $`\partial r/\partial t`$, causes a new contradiction proportional to $`\partial r/\partial t`$ itself, then the escaping speed obtained is just the instantaneous velocity of $`[B]`$ in the field $`T_{[A]}`$.
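To close this section, here is a purely illustrative numerical sketch of the radial escape law just derived. Everything in it is an assumption of ours: the harmonicity function is taken constant, $`g(r)=g_0`$ (so that $`g(r)-rg^{\prime }(r)=g_0`$), and the values of $`a`$, $`f`$, $`c[B]`$ and $`[A]`$ are invented; the point is only that $`[B]`$ drifts to the radius where $`c[B]=g(r)[A]/r`$ and the contradiction vanishes.

```python
import numpy as np

# Radial drift of [B] in the buffer zone of [A]:
#   dr/dt = -a*f*pi^2 * (g(r) - r*g'(r))/r^2 * (c[B] - g(r)*[A]/r)
# with an assumed constant harmonicity g(r) = g0, so g - r*g' = g0.
a, f, g0, cB, A = 1.0, 1.0, 1.0, 0.5, 1.0   # arbitrary illustrative values

def drdt(r):
    return -a * f * np.pi**2 * (g0 / r**2) * (cB - g0 * A / r)

r, dt = 5.0, 5e-3
for _ in range(20000):                      # integrate up to t = 100
    r += drdt(r) * dt

print(f"[B] drifts from r = 5.0 to r = {r:.3f}; "
      f"contradiction vanishes at r = {g0 * A / cB:.1f}")
```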
# Vacuum discharge as a possible source of gamma-ray bursts

## Abstract

We propose that spontaneous particle–anti-particle pair creation from the discharged vacuum, caused by the strong interactions in dense matter, is a major source of $`\gamma `$-ray bursts. Collisions of two neutron stars, or black hole-neutron star mergers at cosmological distances, could produce a compact object with a density exceeding the critical density for pair creation. The emitted anti-particles annihilate with the corresponding particles of the ambient medium, which releases a large amount of energy. We discuss spontaneous $`p\overline{p}`$ pair creation in two neutron star collisions and estimate the energy exploded through the $`p\overline{p}`$ annihilation processes. The total energy could be around $`10^{51}-10^{53}`$ erg, depending on the impact parameter of the colliding neutron stars. This value fits well into the range of the initial energy of the most energetic $`\gamma `$-ray bursts.

PACS number(s): 98.70.Rz; 52.80.Vp; 26.60.+c

Gamma-ray bursts (GRBs) were discovered accidentally in the late 1960s by the Vela satellites; the discovery was announced in 1973 . Since then, they have remained one of the greatest mysteries of high-energy astrophysics for almost 30 years. The situation improved dramatically in 1997, when the BeppoSAX satellite discovered X-ray afterglows , which enabled accurate position determinations and the discovery of optical and radio afterglows and host galaxies . The distance scale to GRBs was finally and unambiguously determined: their sources are at cosmological distances . In spite of all this recent progress, we still do not know what produces GRBs! The nature of the underlying physical mechanism that powers these sources remains unclear. The optical identification and the measurement of redshifts for GRBs allow us to determine their distances and the amount of energy that would be radiated in an isotropic explosion. In three recent observations (GRB971214 , 980703 and 990123 ), the total isotropic energy radiated was estimated to be in excess of $`10^{53}`$ erg. For GRB990123, the inferred isotropic energy release is up to $`3.4\times 10^{54}`$ erg, or $`1.9`$ $`M_{\odot }`$ (where $`M_{\odot }`$ is the solar mass), which is larger than the rest mass of most neutron stars. It has been suggested that the explosion of GRB990123 was not isotropic, which reduces the energy released in $`\gamma `$-rays alone to $`6\times 10^{52}`$ erg owing to the finite beaming angle. However, if one adopts the picture of the fireball internal shock model, in which random internal collisions among shells produce the highly variable $`\gamma `$-ray burst emission, the required initial energy is raised by a factor of about 100, since it is argued that only 1% of the energy of the initial explosion can be converted into the observed radiation . Therefore, it appears that the total exploded energy for the most energetic bursts is close to, or possibly greater than, $`10^{54}`$ erg. It seems difficult to imagine a source that could provide so much energy. The first and foremost open question concerning GRBs is: what are the inner engines that power them ? On the other hand, the GRB spectrum is nonthermal. In most cases there is a strong power-law high-energy tail extending to a few GeV; a particular high-energy tail up to 18 GeV has been reported for GRB940217 . This nonthermal spectrum provides an important clue to the nature of GRBs. Various GRB models have been suggested in the literature, see e.g. Refs. .
Among them, the neutron star merger seems to be the most promising candidate. Three-dimensional hydrodynamical simulations of the coalescence of binary neutron stars (NS-NS) , of the direct collision of two neutron stars , as well as of the black hole-neutron star (BH-NS) merger have been performed by several authors. The largest energy deposition by $`\nu \overline{\nu }`$ annihilation, $`10^{51}`$ erg, was obtained in the black hole-neutron star merger (for a NS-NS collision the total energy is around $`10^{50}`$ erg ). This may account for certain low-energy GRBs, but it is still far from the energetic ones mentioned above. It should be pointed out, however, that in those macroscopic simulations (and in almost all GRB fireball models) the effects of the strong interactions — e.g., the modification of hadron properties in dense matter, many-body effects, vacuum correlations, etc. — have been largely neglected, except insofar as a nuclear equation of state is applied. In this Letter we propose an alternative scenario for the source of the most energetic $`\gamma `$-ray bursts. It is well known that the density is fairly high at the center of neutron stars: the central density can be several times the nuclear saturation density . Furthermore, superdense matter could be formed in NS-NS/BH-NS mergers and in direct NS-NS collisions. Three-dimensional hydrodynamical simulations showed that when two neutron stars collide with free-fall velocity, the maximum density of the compressed core can be 1.4 (off-center collision with impact parameter $`b=R`$, i.e., one neutron star radius) to 1.9 (head-on collision) times the central density of a single neutron star . At such high densities, not only will the properties of baryons be modified drastically, according to investigations within relativistic mean-field theory (RMF) and the relativistic Hartree approach (RHA) , but the vacuum, i.e., the lower Dirac sea, might also be distorted substantially, since the meson fields which describe the strong interactions between baryons are very large. Above certain densities, when the threshold energy of the “negative-energy sea” nucleons (i.e., the nucleons in the Dirac sea) becomes larger than the free nucleon mass, nucleon–anti-nucleon pairs can be created spontaneously from the vacuum . A schematic picture of this phenomenon is depicted in Fig. 1. The situation is quite similar to electron-positron pair creation in QED with strong electromagnetic fields . The produced anti-nucleons then annihilate with the nucleons of the ambient medium through the $`N\overline{N}\to \gamma \gamma `$ reaction. This yields a large amount of energy and photons. This process may happen in addition to the $`\nu \overline{\nu }`$ annihilation process. The sequential process $`\gamma \gamma \to e^+e^{}`$ inevitably leads to the creation of a fireball, whose dynamical expansion will radiate the observed $`\gamma `$-rays through nonthermal processes in shocks . In the following, we estimate whether enough energy is available within this scenario to meet the requirements of a source of energetic GRBs.
We start from the Lagrangian density for nucleons interacting through the exchange of mesons,

$$\mathcal{L}=\overline{\psi }[i\gamma _\mu \partial ^\mu -M_N]\psi +\frac{1}{2}\partial _\mu \sigma \partial ^\mu \sigma -\frac{1}{2}m_\sigma ^2\sigma ^2-\frac{1}{4}\omega _{\mu \nu }\omega ^{\mu \nu }+\frac{1}{2}m_\omega ^2\omega _\mu \omega ^\mu -\frac{1}{4}𝐑_{\mu \nu }𝐑^{\mu \nu }+\frac{1}{2}m_\rho ^2𝐑_\mu 𝐑^\mu +\mathrm{g}_\sigma \overline{\psi }\psi \sigma -\mathrm{g}_\omega \overline{\psi }\gamma _\mu \psi \omega ^\mu -\frac{1}{2}\mathrm{g}_\rho \overline{\psi }\gamma _\mu 𝝉\psi 𝐑^\mu ,$$ (1)

where the usual notation is used as given in the literature . Based on this Lagrangian, we have developed a relativistic Hartree approach including vacuum contributions which describes the properties of nucleons and anti-nucleons in nuclear matter and finite nuclei quite successfully . The parameters of the model are fitted to the ground-state properties of spherical nuclei. The RHA0 set of parameters gives $`\mathrm{g}_\sigma ^2(\mathrm{M}_N/m_\sigma )^2=229.67`$, $`\mathrm{g}_\omega ^2(\mathrm{M}_N/m_\omega )^2=146.31`$, $`\mathrm{g}_\rho ^2(\mathrm{M}_N/m_\rho )^2=151.90`$. It leads to the nuclear matter saturation density $`\rho _0=0.1513`$ $`fm^{-3}`$ (0.1484 – 0.1854 $`fm^{-3}`$) with a binding energy $`E_{bind}=17.39`$ MeV ($`16\pm 1`$ MeV) and a bulk symmetry energy $`a_{sym}=40.4`$ MeV (33.2 MeV). The corresponding empirical values are given in parentheses. The model can be further applied to the neutron-proton-electron ($`n`$-$`p`$-$`e`$) system under the beta-equilibrium and charge-neutrality conditions, which is in particular important for the neutron star. The positive energy of the nucleons in the Fermi sea, $`E_+`$, and the negative energy of the nucleons in the Dirac sea, $`E_{}`$, can be written as

$$E_+=\left\{\left[k^2+\left(M_N-\mathrm{g}_\sigma \sigma \right)^2\right]^{1/2}+\mathrm{g}_\omega \omega _0+\frac{1}{2}\mathrm{g}_\rho \tau _0R_{0,0}\right\},$$ (2)

$$E_{}=-\left\{\left[k^2+\left(M_N-\mathrm{g}_\sigma \sigma \right)^2\right]^{1/2}-\mathrm{g}_\omega \omega _0+\frac{1}{2}\mathrm{g}_\rho \tau _0R_{0,0}\right\}.$$ (3)

Here $`\sigma `$, $`\omega _0`$ and $`R_{0,0}`$ are the mean values of the scalar field, the time-like component of the vector field, and the time-like isospin 3-component of the vector-isovector field in neutron star matter, respectively. They are obtained by solving the non-linear equations of the meson fields, including vacuum contributions, under the constraints of charge neutrality and general equilibrium. The energy of the anti-nucleons, $`\overline{E}_+`$, is just the negative of $`E_{}`$, i.e., $`\overline{E}_+=-E_{}`$ . By setting $`k=0`$ in Eqs. (2) and (3), one gets the energies of nucleons and anti-nucleons at zero momentum. The critical density $`\rho _C`$ for nucleon–anti-nucleon pair creation is reached when $`E_{}=M_N`$. The results are given in Fig. 2, where the single-particle energies of the positive-energy nucleon and the negative-energy nucleon are plotted as a function of density. Due to the effects of the $`\rho `$-meson field, $`\rho _C=6.1`$ $`\rho _0`$ for $`p\overline{p}`$ pair creation and $`7.5`$ $`\rho _0`$ for $`n\overline{n}`$ pair creation. At the same time, we have calculated the equation of state (EOS) of neutron star matter.
The structure and properties of neutron stars can be obtained by applying this equation of state to solve the Oppenheimer-Volkoff equation . The maximum mass of the stars turns out to be $`M_{max}=2.44`$ $`M_{\odot }`$, with a corresponding radius $`R=12.75`$ km and central density $`\rho _{cen}=5.0`$ $`\rho _0`$. This $`\rho _{cen}`$ is smaller than the critical density $`\rho _C`$, which means that spontaneous $`N\overline{N}`$ pair creation does not happen for a single neutron star within the model employed. We consider the following case of neutron star collision: two identical neutron stars with $`\rho _{cen}=4.5`$ $`\rho _0`$ (with the current EOS, this corresponds to $`M=2.43`$ $`M_{\odot }`$ and $`R=13.0`$ km) collide with each other at free-fall velocity. The impact parameter $`b`$ lies between $`0`$ and $`R`$, which determines the factor of density enhancement. We assume that a compact object of average density $`7.2`$ $`\rho _0`$ is created in the reaction zone. The radius of the compact object is taken to be $`r=1`$ km (case A) or $`r=3`$ km (case B), depending on the value of $`b`$. Since for a single neutron star with $`\rho _{cen}=4.5`$ $`\rho _0`$ the density at $`r=1`$ km is $`4.46`$ $`\rho _0`$ and at $`r=3`$ km is $`4.18`$ $`\rho _0`$, in case A the density is enhanced during the collision by a factor of about $`1.6`$, and in case B by about $`1.7`$. In both cases $`p\overline{p}`$ pair creation will happen, while the contribution of $`n\overline{n}`$ pair creation is negligible (it sets in at higher density but does not affect our discussion). We define a Dirac momentum $`k_D`$ which describes the negative-energy nucleons occupying the eigenstates of the Dirac sea from the uppermost level (the lowest-energy antiparticle level) down to the negative continuum (see Fig. 1), i.e., $`E_{}=M_N`$ in Eq. (3). At the critical density for $`p\overline{p}`$ pair creation, $`\rho _C^{p\overline{p}}=6.1`$ $`\rho _0`$, the Dirac momentum is $`k_D^C=11.28`$ $`fm^{-1}`$; at $`\rho =7.2`$ $`\rho _0`$, $`k_D=12.45`$ $`fm^{-1}`$. We further define a momentum $`p_{max}`$ at $`E_{}=M_N`$, which turns out to be

$$p_{max}=\sqrt{\left(\mathrm{g}_\omega \omega _0-\frac{1}{2}\mathrm{g}_\rho \tau _0R_{0,0}+\mathrm{g}_\sigma \sigma -2M_N\right)\left(\mathrm{g}_\omega \omega _0-\frac{1}{2}\mathrm{g}_\rho \tau _0R_{0,0}-\mathrm{g}_\sigma \sigma \right)}.$$ (4)

Based on the semi-classical phase-space assumption, we then estimate the number of $`p\overline{p}`$ pairs whose energies are larger than the free nucleon mass at $`\rho =7.2`$ $`\rho _0`$ as

$$N_{pair}=\frac{4}{3}\pi r^3\times \frac{p_{max}^3}{3\pi ^2}=2.147r^3\times 10^{54},$$ (5)

with $`r`$ in km. Before expansion, this compact object remains at high density. Let us check whether most of the $`p\overline{p}`$ pairs can indeed be created spontaneously. The rate of $`N\overline{N}`$ pair production per unit surface area and unit time, $`dN_{pair}/dSdt`$, has been calculated in Ref. for compressed matter. In the case of $`\rho =7`$ $`\rho _0`$ and tunnel distance $`d=1`$ $`fm`$, the rate turns out to be $`2.68\times 10^{-2}`$ $`fm^{-3}`$. For case B with $`r=3`$ $`km`$, the time needed to emit the available $`p\overline{p}`$ pairs is $`t=1.9\times 10^{19}`$ $`fm=6.3\times 10^{-5}`$ $`s`$, which is smaller than the typical dynamical scale of a NS-NS collision, $`\tau \sim 10^{-3}`$ $`s`$. Thus there is enough time to produce the proton–anti-proton pairs spontaneously.
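As an arithmetic cross-check of these estimates (our own sketch, using only quantities stated in the text: Eq. (5) with $`r`$ in km, and the quoted production rate), one can recompute the pair numbers and the emission timescale for case B:

```python
import numpy as np

# Cross-check of N_pair = (4/3)*pi*r^3 * p_max^3/(3*pi^2), Eq. (5),
# and of the pair-emission timescale, using the quoted rate
# dN_pair/dSdt = 2.68e-2 fm^-3 (per unit surface area and unit time).
km_in_fm = 1.0e18
c_fm_per_s = 2.998e23                       # speed of light in fm/s

# p_max implied by the coefficient 2.147e54 of Eq. (5) (r in km)
p_max = (2.147e54 * 9.0 * np.pi / 4.0) ** (1.0 / 3.0) / km_in_fm
print(f"implied p_max at rho = 7.2 rho_0: {p_max:.2f} fm^-1")

for r_km, label in [(1.0, "case A"), (3.0, "case B")]:
    print(f"{label}: N_pair = {2.147e54 * r_km**3:.2e}")

r = 3.0 * km_in_fm                          # case B radius in fm
rate = 2.68e-2                              # fm^-3
t_fm = 2.147e54 * 27.0 / (rate * 4.0 * np.pi * r**2)
print(f"case B emission time: {t_fm:.1e} fm = {t_fm / c_fm_per_s:.1e} s")
```

The script reproduces the quoted $`t=1.9\times 10^{19}`$ $`fm\approx 6.3\times 10^{-5}`$ $`s`$.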
The produced protons stay in the atmosphere due to the gravitational force. The holes (anti-protons), however, are at that time still in bound states owing to the potentials they feel (a small fraction may be transported into the negative continuum). The above process may happen before the compact object expands (we are discussing a microscopic procedure within a macroscopic phenomenon). Then the compact object expands and the potentials in the Dirac sea fall; the anti-particles (holes) in bound states are pushed into the lower continuum and thus escape. They annihilate with the protons in the atmosphere or in the surrounding objects and release a large amount of energy. If one assumes that 80% of the produced anti-protons annihilate with protons in the surrounding medium and that the released energy is $`2`$ GeV per event (at the moment it is not very clear how many anti-protons of the Dirac sea can escape through the lower continuum; this is a problem which should be investigated more closely), the total exploded energy $`E_{tot}`$ turns out to be $`5.5\times 10^{51}`$ erg and $`1.5\times 10^{53}`$ erg for cases A and B, respectively. As mentioned before, the efficiency of transferring the initial energy to the observed radiation is only 1% . It seems necessary, therefore, to adopt the picture of a beamed explosion for the most energetic $`\gamma `$-ray bursts.

Some discussion is now appropriate. Neutron star collisions have repeatedly been suggested in the literature as possible sources of $`\gamma `$-ray bursts , powered either by $`\nu \overline{\nu }`$ annihilation or by highly relativistic shocks. In Ref. , Ruffert and Janka claimed that a $`\gamma `$-ray burst powered by neutrino emission from colliding neutron stars is ruled out. Here we propose a new scenario caused by the strong interactions in dense matter. A large number of anti-particles may be created from the vacuum when the density exceeds the critical density for spontaneous particle–anti-particle pair creation. Such high densities can be reached during NS-NS collisions, BH-NS mergers, or even NS-NS mergers when the merged binary neutron stars have large maximum densities. Some of the produced anti-particles can be ejected from the reaction zone by the violent dynamics; they may be a novel source of the low-energy cosmic-ray anti-particles which are currently an exciting topic in modern astrophysics . Most of them, however, will annihilate with the corresponding particles of the ambient medium and thus release a large amount of energy. As a first step we have discussed $`p\overline{p}`$ pair creation in two neutron star collision scenarios, because its critical density is lower than that of other baryons. Our calculations show that the exploded energy satisfies the requirement for the initial energy of the energetic GRBs observed up to now. The variation of the released energies of different GRBs can be attributed to different impact parameters of the colliding neutron stars. The anti-protons, although produced spontaneously, annihilate during the dynamical evolution with random probability in collisions with protons. Furthermore, the anti-protons annihilating later might be accelerated by the photons produced by the nearby $`p\overline{p}`$ annihilations taking place earlier. This leads to high-energy anti-protons and, consequently, high-energy photons. Some of these may escape from the fireball before being distorted by the medium; such escaping high-energy photons may constitute the observed high-energy tail of $`\gamma `$-ray bursts. This has to be pursued further theoretically.
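The quoted values of $`E_{tot}`$ follow from elementary arithmetic; the snippet below is our cross-check, assuming — as in the text — an 80% annihilation fraction and $`2`$ GeV released per annihilation event.

```python
# Energy release from p-pbar annihilation: 80% of the pairs of Eq. (5)
# annihilate, each event releasing 2 GeV (assumptions stated in the text).
GEV_IN_ERG = 1.602e-3

for r_km, label in [(1.0, "case A"), (3.0, "case B")]:
    n_pair = 2.147e54 * r_km**3      # Eq. (5), r in km
    e_tot = 0.8 * n_pair * 2.0 * GEV_IN_ERG
    print(f"{label}: E_tot = {e_tot:.1e} erg")
```

This reproduces $`5.5\times 10^{51}`$ erg for case A and $`1.5\times 10^{53}`$ erg for case B.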
In summary, we have proposed a new scenario of vacuum discharge due to strong interactions in dense matter as a possible source of $`\gamma `$-ray bursts. Based on the meson field theoretical model, we have estimated the exploded energy within two neutron star collisions to be $`E_{tot}\sim 10^{51}-10^{53}`$ erg, which fits well into the range of the initial energy necessary for the most energetic $`\gamma `$-ray bursts. For a more quantitative study, one needs to introduce hyperon degrees of freedom, and even quark degrees of freedom if one assumes that the center of the neutron star is in a quark phase. Here we have mainly discussed NS-NS collisions. In fact, the proposed scenario may happen more frequently in BH-NS mergers, since the production rate of BH-NS binaries is $`\sim 10^{-4}`$ per yr per galaxy, which is much larger than the rate of direct NS-NS collisions (for an estimate of the collision rate in a dense cluster of neutron stars, see Ref. ). In this case one might obtain an even higher explosion energy, reaching the value of $`10^{54}`$ erg. A relativistic dynamical model, such as relativistic fluid dynamics incorporating meson fields, is highly desirable to simulate NS-NS collisions and NS-NS/BH-NS mergers. Finally, we would like to mention that a similar process may happen in nucleus-nucleus collisions, as discussed in the introduction of Ref. , where a dynamical production of anti-matter clusters due to the variation of the time-dependent meson fields has been suggested. We propose to study the photon and anti-proton spectra in ultra-relativistic heavy-ion collisions, which may provide us with information on the structure of the discharged vacuum. Works on these aspects are presently underway.

Acknowledgements: The authors thank N.K. Glendenning for fruitful comments on the preliminary version of the manuscript. G. Mao acknowledges the STA foundation for financial support and the members of the Research Group for Hadron Science at the Japan Atomic Energy Research Institute for their hospitality.
# Cosmological measurement of neutrino mass in the presence of leptonic asymmetry

## I Introduction

One of the most intriguing questions in cosmology is the possibility of an asymmetry between the numbers of leptons and antileptons in the Universe. This asymmetry is restricted to be in the form of neutrinos by the requirement of universal electric neutrality. A large neutrino asymmetry is not excluded by current observational data on the primordial abundances of light elements, cosmic microwave background (CMB) anisotropies and the large scale structure of the Universe. If a relic asymmetry exists, the corresponding neutrinos, called degenerate, are characterized by the dimensionless degeneracy parameter $`\xi \equiv \mu /T_\nu `$, where $`\mu `$ is their chemical potential and $`T_\nu `$ their temperature. The energy density of degenerate neutrinos is much larger than that of standard neutrinos, and is a function of $`\xi `$. From a theoretical point of view, in most particle physics models the leptonic asymmetry is naturally of the same order as the baryonic one, i.e. one part in $`10^9-10^{10}`$, as required by big bang nucleosynthesis (BBN) . However, there are some specific scenarios in which the leptonic asymmetry can grow to large values in the early universe , while the baryonic one remains small. Examples include lepton asymmetries created by an Affleck-Dine mechanism or by active-sterile neutrino oscillations , which allow the neutrino asymmetry to reach order one before neutrinos decouple from the rest of the plasma. In general, these generating mechanisms create a different asymmetry for each neutrino flavor. From an observational point of view, BBN constrains the neutrino degeneracy to be at most of order one for the electron neutrino ($`-0.06<\xi _{\nu _e}<1.1`$) , but is compatible with larger degeneracies for $`\nu _\mu `$ or $`\nu _\tau `$. Interestingly, the current CMB anisotropy data are compatible with a large neutrino asymmetry $`\xi \lesssim 3.5`$ in the framework of the standard cold dark matter (CDM) cosmological scenario, as shown in . More recently, two of us have shown in a systematic analysis that this conclusion also applies to flat models with a cosmological constant ($`\mathrm{\Lambda }`$CDM), even when CMB data are combined with constraints on the matter power spectrum (for an earlier discussion see ). That analysis was based on the data available when Ref. was submitted; we have checked that more recent data, such as TOCO and BOOMERANG 97 , are still compatible with our previous upper bound, $`\xi <3.5`$.

In the case of massless degenerate neutrinos, the only relevant effect of $`\xi `$ is to increase the total density of radiation and to postpone the time of equality between radiation and matter. This modification has large observable effects: it boosts the first CMB peak amplitude, shifts all peaks to smaller scales, and suppresses matter fluctuations on small scales . However, it can be described simply by introducing an effective number of massless neutrino families,

$$N(\xi )\equiv 3+\frac{30}{7}\left(\frac{\xi }{\pi }\right)^2+\frac{15}{7}\left(\frac{\xi }{\pi }\right)^4$$ (1)

which is as large as $`5`$ for $`\xi \simeq 2`$. This excess in the effective number of neutrinos would wash out the small corrections that arise in the standard model (slight heating of neutrinos by $`e^+e^{}`$ annihilations and finite-temperature QED effects), whose effects on the CMB were considered in .
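Equation (1) is trivial to evaluate; the snippet below (our illustration, not part of the original analysis) tabulates $`N(\xi )`$ for a few values of the degeneracy parameter and confirms that $`N(\xi )\simeq 5`$ for $`\xi \simeq 2`$.

```python
import numpy as np

# Effective number of massless neutrino families, Eq. (1):
#   N(xi) = 3 + (30/7)*(xi/pi)^2 + (15/7)*(xi/pi)^4
def n_eff(xi):
    x = xi / np.pi
    return 3.0 + (30.0 / 7.0) * x**2 + (15.0 / 7.0) * x**4

for xi in (0.0, 1.0, 2.0, 2.75, 3.5):
    print(f"xi = {xi:4.2f}  ->  N(xi) = {n_eff(xi):.2f}")
```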
The analysis in also included the case of massive degenerate neutrinos, for which we adapted the Boltzmann code cmbfast by Seljak and Zaldarriaga , which calculates the radiation and matter power spectra. It appeared that combining the asymmetry with a small mass for one family of neutrinos has some subtle effects that cannot be parametrized simply by $`N(\xi )`$. For instance, the suppression of small-scale matter fluctuations caused by the free streaming of neutrinos (when they become non-relativistic) is more efficient in the presence of an asymmetry, due to the enhanced average momentum of the degenerate neutrinos. Also, by combining a mass and a degeneracy for the same neutrino family, one reaches a larger neutrino density today than by introducing these parameters separately, or for different families. This point has very interesting phenomenological consequences. The evidence for neutrino oscillations from Super–Kamiokande , if explained by standard $`\nu _\mu \nu _\tau `$ oscillations (for recent reviews see, for instance, ), requires a difference of squared masses of the order $`\mathrm{\Delta }m^2\simeq (1-8)\times 10^{-3}`$ eV<sup>2</sup>. This sets a lower limit on the value of the neutrino mass, $`m\simeq 0.03-0.09`$ eV. This bound is saturated when there is a hierarchy in the neutrino mass pattern, i.e. when the two neutrino masses are very different. In the present work we consider as a typical value $`m=m_{SK}=0.07`$ eV. Such very light neutrinos, however, make only a small contribution to the present energy density of the Universe, of order $`\mathrm{\Omega }_\nu =0.00075h^{-2}`$ for a dimensionless Hubble parameter $`h=H/(100\text{km s}^{-1}\text{Mpc}^{-1})`$, while at the same time they have no visible effect on the power spectra of matter and CMB anisotropies. One therefore concludes that a $`0.07`$ eV neutrino is of little relevance for cosmology. But this conclusion is modified when one considers the combined effects of mass and degeneracy. For instance, the present energy density of neutrinos with $`m_{SK}`$ and $`\xi =3`$ is of the same order of magnitude as that of baryons . In such a case the light degenerate neutrino plays the role of a significant hot dark matter component. The main motivation of this paper is to address the following question: are the future CMB experiments sensitive enough to detect a $`0.07`$ eV neutrino mass in the presence of a relic neutrino asymmetry? Such evidence would be of tremendous importance for our understanding of neutrino models, since it would probe the absolute value of the neutrino mass, while neutrino oscillations are sensitive only to differences of squared masses. The sensitivity of the future satellite missions Microwave Anisotropy Probe (MAP) and Planck to heavier neutrinos was considered in . Since $`\xi `$ enhances the effect of the mass but, on the other hand, introduces a new degree of freedom into the model, it is not obvious whether the scheduled experiments have the required sensitivity to detect a degenerate neutrino mass as small as $`m_{SK}`$. From our calculations we conclude that Planck will be able to detect it, provided there exists a large relic neutrino asymmetry, typically $`\xi >2-3`$. This value is close to the one suggested to explain the production of ultra-high energy cosmic rays beyond the Greisen-Zatsepin-Kuzmin cutoff . In a previous work , Kinney and Riotto already calculated the precision with which the MAP and Planck satellites could measure a large degeneracy parameter $`\xi \sim 𝒪(1)`$.
The effect that we consider in this work is so tiny that we will skip MAP and focus on the capabilities of Planck, as well as on those of the future Sloan Digital Sky Survey (SDSS), which will probe the shape of the matter power spectrum. It is well known that for ordinary massive neutrinos, combining CMB and Large Scale Structure (LSS) data is crucial for the mass extraction ; in our case it is even more important, due to the enhanced free-streaming effect on small-scale matter fluctuations. For completeness, we also consider the case of a slightly heavier neutrino, with $`m=1`$ eV. This mass is of the order of magnitude that could explain the results of the Los Alamos Liquid Scintillation Neutrino Detector (LSND) experiment through neutrino oscillations, which require $`\mathrm{\Delta }m^2\simeq 0.1-1`$ eV<sup>2</sup>. An eV neutrino mass is also required in cold + hot dark matter (CHDM) models, because it produces $`\mathrm{\Omega }_\nu >0.01h^{-2}`$. It has already been shown that such a mass could be extracted with $`\sim 20`$% precision by Planck + SDSS . We calculate how much this result improves in the presence of a large asymmetry.

## II The Fisher matrix

Since the sensitivities of Planck and of the SDSS are already known, it is possible to assume a “fiducial” model, i.e., a cosmological model that would yield the best fit to the future data, and to forecast the error with which each parameter would be extracted. Starting with a set of parameters $`\theta _i`$ describing the fiducial model, one can compute the power spectra of CMB temperature and polarization anisotropies. Since the anisotropy data consist of two-dimensional maps of the sky, these power spectra are usually expanded in multipoles $`C_l^X`$, where $`l`$ is the multipole number and $`X`$ is one of the temperature or polarization modes $`T,E,TE,B`$ . Simultaneously, one can derive the linear power spectrum of matter fluctuations $`P(k)`$, expanded in Fourier space. Although CMB experiments measure the $`C_l^X`$’s directly, redshift surveys such as the SDSS probe the linear power spectrum only on the largest scales, and modulo a biasing factor $`b^2`$. For a given survey, the biasing reflects the discrepancy between the total matter fluctuations in the Universe and those actually seen by the instruments; it is usually assumed to be independent of $`k`$. The error $`\delta \theta _i`$ on each parameter can be calculated from the reduced Fisher matrix $`F_{ij}`$, which has two terms. The first term accounts for Planck and is computed according to Ref. , while the second term accounts for the SDSS and is calculated following Tegmark :

$$F_{ij}=\underset{l=2}{\overset{+\mathrm{\infty }}{\sum }}\underset{X,Y}{\sum }\frac{\partial C_l^X}{\partial \mathrm{ln}\theta _i}\mathrm{Cov}^{-1}(C_l^X,C_l^Y)\frac{\partial C_l^Y}{\partial \mathrm{ln}\theta _j}+2\pi \int _0^{k_{max}}\frac{\partial \mathrm{ln}P_{obs}(k)}{\partial \mathrm{ln}\theta _i}\frac{\partial \mathrm{ln}P_{obs}(k)}{\partial \mathrm{ln}\theta _j}w(k)d\mathrm{ln}k.$$ (3)

Here $`\mathrm{Cov}(C_l^X,C_l^Y)`$ is the covariance matrix of the estimators of the CMB spectra for Planck, and $`w(k)`$ is the weight function for the bright red galaxies sample of the SDSS, taken from Tegmark . We defined $`P_{obs}(k)\equiv b^2P(k)`$, and $`k_{max}`$ is the maximal wave number at which linear predictions are reliable.
Following , we will use either the conservative value $`k_{max}=0.1h`$ Mpc<sup>-1</sup>, or the optimistic but still reasonable value $`k_{max}=0.2h`$ Mpc<sup>-1</sup>. Inverting $`F_{ij}`$, one obtains the 1-$`\sigma `$ error on each parameter, assuming that all other parameters are unknown:
$$\frac{\delta \theta _i}{\theta _i}=(F^{-1})_{ii}^{1/2}.$$ (4)
It is also useful to compute the eigenvectors of the reduced Fisher matrix (i.e., the axes of the likelihood ellipsoid in the space of relative errors). The error on each eigenvector is given by the inverse square root of the corresponding eigenvalue. The eigenvectors with large errors indicate directions of parameter degeneracy; those with the smallest errors are the best constrained combinations of parameters. We assume that the best fit to the future Planck and SDSS data (our “fiducial” model) is a $`\mathrm{\Lambda }`$CDM model with ten parameters: (1) a neutrino mass $`m=0`$, $`0.07`$, or $`1`$ eV, (2) a neutrino degeneracy $`\xi `$, (3) a Hubble parameter $`h=0.65`$, (4) a baryon density $`\mathrm{\Omega }_b=0.015h^{-2}`$, (5) a cold dark matter density $`\mathrm{\Omega }_{CDM}=0.3`$, (6) a primordial spectrum tilt $`n=0.98`$, (7) a primordial spectrum normalization, fixed in cmbfast by fitting to COBE, (8) an optical depth to reionization $`\tau =0.05`$, (9) a quadrupole tensor-to-scalar ratio $`T/S=0.14`$, and (10) an arbitrary SDSS bias $`b`$. These parameters were chosen in such a way that for $`m_{SK}`$ and $`0\le \xi \le 3.5`$, the fiducial models pass the observational tests of . These tests are independent of the bias, and so is the Fisher matrix, as can be seen from Eq. (3).

## III Measuring the degeneracy parameter

We first consider a fiducial model with $`m=0`$ and a degeneracy parameter $`\xi `$. Our results are shown in Fig. 1, where we plot $`\delta \xi /\xi `$ as a function of $`\xi `$. For a very large degeneracy $`\xi =3.5`$, we find for Planck alone, without polarization, $`\delta \xi /\xi =2.5\%`$. Such a small error is explained by the large effect of the degeneracy on the amplitude and shape of the acoustic peaks . It is limited by a small parameter degeneracy between $`\xi `$ and $`\mathrm{\Omega }_{CDM}`$. Computing the eigenvectors of the Fisher matrix, we find that the combination $`\mathrm{\Omega }_{CDM}^{0.8}/\xi ^{0.6}`$ is measured with $`2.9\%`$ uncertainty. This is equivalent to a degeneracy between $`\xi `$ and the cosmological constant, since for $`m=0`$ one has $`\mathrm{\Omega }_{CDM}+\mathrm{\Omega }_\mathrm{\Lambda }=1-\mathrm{\Omega }_b`$. There is a simple physical explanation: the only effect of $`\xi `$ is to change the time of equality, and this is also one of the main effects of $`\mathrm{\Omega }_\mathrm{\Lambda }`$. Since this explanation holds not only for the temperature spectrum, but also for the polarization and matter spectra, we do not expect to remove this degeneracy by including the information from polarization and the SDSS: indeed, $`\delta \xi /\xi `$ is reduced only from $`2.5\%`$ to $`2.3\%`$. So only direct precise measurements of $`\mathrm{\Omega }_{CDM}`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ (using gravitational lensing or supernovae) could improve this already good result. Our results are in good agreement with Kinney and Riotto . Interestingly, they are found to be almost independent of the mass of the degenerate neutrino.
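A sketch of this eigenvector analysis is given below. It assumes the Fisher matrix from the previous sketch, and prints each eigen-direction as a power-law combination of parameters: because $`F`$ is defined with respect to $`\mathrm{ln}\theta _i`$, components $`(0.8,-0.6)`$ on the $`(\mathrm{ln}\mathrm{\Omega }_{CDM},\mathrm{ln}\xi )`$ axes correspond to the combination $`\mathrm{\Omega }_{CDM}^{0.8}/\xi ^{0.6}`$:

```python
# Sketch: degeneracy directions of a Fisher matrix in ln-parameter space.
# Each eigenvector v maps to the combination prod_i theta_i^(v_i), measured
# with relative error 1/sqrt(eigenvalue).
import numpy as np

def degeneracy_directions(F, names):
    vals, vecs = np.linalg.eigh(F)            # F is symmetric
    for idx in np.argsort(vals):              # smallest eigenvalue = most degenerate
        err = 1.0 / np.sqrt(vals[idx])
        combo = " * ".join(f"{n}^{v:+.2f}" for n, v in zip(names, vecs[:, idx]))
        print(f"{err:8.1%}  {combo}")
```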
With a significant value of the mass, one may expect to lose precision on $`\xi `$, due to possible degeneracies between $`\xi `$ and $`m`$: both parameters boost the acoustic peaks and suppress power on small scales. However, the two effects should remain separable, because $`\xi `$ changes the radiation density at all times, while $`m`$ affects it only once the neutrinos become non-relativistic. Indeed, for $`m=1`$ eV, we check by diagonalizing the Fisher matrix that no degeneracy appears between $`\xi `$ and $`m`$. By varying $`m`$ from 0 to $`1`$ eV, we find that $`\delta \xi /\xi `$ increases by only $`10\%`$.

## IV Measuring the degenerate neutrino mass

We introduce one family of massive degenerate neutrinos with $`m_{SK}`$. In this case, the results for $`\delta m/m`$ as a function of $`\xi `$ are shown as the upper curves in Fig. 2. For instance, when $`\xi =2.75`$, the temperature anisotropy measurement by Planck can bring evidence for $`m_{SK}`$, but only marginally: $`\delta m/m=102\%`$. Here the precision is limited by the parameter degeneracy direction $`(T/S)^{0.7}/m^{0.7}`$, with an error of $`130\%`$. Since the polarization measurement is able to constrain $`T/S`$ better, when including it we find $`\delta m/m=93\%`$. Finally, significant progress is made with the SDSS, and the final error depends on $`k_{max}`$, since the SDSS is sensitive on small scales to the free streaming produced by degenerate neutrinos. For $`k_{max}=0.1h`$ Mpc<sup>-1</sup> (respectively $`0.2`$) we obtain $`\delta m/m=84\%`$ (respectively $`59\%`$), which amounts to a clear detection, especially if we recall that $`\partial C_l/\partial m`$ is a strongly non-linear function of $`m`$ when $`m\rightarrow 0`$, so that the above errors, when large, are overestimated, as pointed out in . When the neutrino degeneracy is as large as $`\xi =3.5`$, we get $`\delta m/m=37\%`$ for Planck + SDSS. We also plot in Fig. 2 the results for $`m=1`$ eV (lower curves). When $`\xi \rightarrow 0`$, the estimated errors are consistent with . In the presence of a relic neutrino asymmetry, they are lowered from $`\delta m/m=15-20\%`$ (for $`\xi =0`$) to $`3-4\%`$ (for $`\xi =3`$), depending on $`k_{max}`$.

## V Conclusions

We have shown that Planck will be able to detect a neutrino mass of the order of $`m_{SK}`$, provided that the relic neutrinos are strongly degenerate, with a degeneracy parameter $`\xi >3`$. The combination of Planck with the SDSS improves the results and allows a detection of $`m_{SK}`$ if $`\xi >2.5`$. Therefore the neutrino mass suggested by Super–Kamiokande could be relevant for cosmological models of structure formation and CMB anisotropies. We have also confirmed that in the massless neutrino case, the degeneracy parameter $`\xi `$ can be extracted from the CMB data by Planck with the precision found by Kinney and Riotto . For this parameter, we find that the inclusion of the SDSS data or the addition of a neutrino mass would not significantly change the results. Last but not least, if the mass of the neutrinos is of the order of $`1`$ eV, then even in the absence of an asymmetry it can be extracted with Planck and SDSS with the precision found by , and a relic neutrino asymmetry allows an even more accurate detection. The possibility of detecting a neutrino mass and/or a relic neutrino asymmetry with future cosmological experiments is an example of the fascinating connection between large-scale cosmology and particle physics.

## Acknowledgments

J. Lesgourgues and S. Pastor are supported by the European Commission under the TMR contract ERBFMRXCT960090.
# A High Signal-to-Noise UV Spectrum of NGC 7469: New Support for Reprocessing of Continuum Radiation

## 1. Introduction

For the last year of operations of the International Ultraviolet Explorer (IUE), the International AGN Watch successfully carried out an intensive, continuous monitoring campaign on the bright Seyfert 1 galaxy NGC 7469 (Wanders et al. (1997)). During the course of these observations we obtained a single high signal-to-noise UV spectrum covering 1150–3300 Å using the Faint Object Spectrograph (FOS) on the Hubble Space Telescope (HST). Simultaneous high-energy X-ray observations were obtained using the Rossi X-ray Timing Explorer (RXTE) (Nandra et al. (1998)), and a network of ground-based facilities obtained optical spectra (Collier et al. (1998)). These data sets have been used to study the structure of the continuum and line-emitting regions in NGC 7469 using reverberation mapping. Previous IUE and ground-based campaigns that have applied reverberation mapping techniques to the study of AGN broad-line regions (BLR) have greatly illuminated our understanding of their structure (see the review by Peterson (1993)). The reverberation mapping method (Blandford & McKee (1982)) uses the light-travel-time delayed response of the emission-line clouds to variations in the continuum to unravel the spatial and kinematic structure of the BLR. Campaigns on NGC 5548 and NGC 3783 using IUE (Clavel et al. (1991); Reichert et al. (1994)), and again on NGC 5548 using IUE and HST (Korista et al. (1995)), have determined that the BLR is smaller than single-zone photoionization models had suggested, and that it is highly stratified: inner and outer radii differ by an order of magnitude, and higher-ionization lines are characteristically formed in the innermost regions. Analysis of the IUE data for the NGC 7469 campaign (Wanders et al. (1997)) has led to similar results for its broad emission lines. The most remarkable result, however, is the apparent detection of a time delay in the response of different UV continuum windows. The fluxes in bands centered at 1485 Å, 1740 Å, and 1825 Å have cross-correlation centroids with time delays of 0.21, 0.35, and 0.28 days with respect to the flux at 1315 Å. Monte Carlo simulations indicate probable errors of $`\sim 0.07`$ days in measuring the delays. Even longer delays ($`\sim 1`$ day) are found for the optical continuum relative to the UV (Collier et al. (1998)). A variety of explanations may lead to the observed effects. The most interesting in terms of the overall structure of AGN is that the delays are due to a continuum reprocessing zone near the central continuum source. A more mundane possibility is that the delay is the result of contamination of the flux in the continuum bands by a very broad emission feature such as blended Fe ii emission or Balmer continuum emission. While this can explain some portion of the UV-continuum delays, as we show later, it is difficult to ascribe the lag of the optical continuum to emission-line contamination. The higher spectral resolution and higher S/N of the FOS spectrum of NGC 7469 allow a better assessment of the possible contaminants in the chosen IUE continuum intervals. By using the FOS spectrum as a template, a model of the line and continuum emission features can be fitted to the series of IUE spectra. Similar techniques were successfully used in the earlier NGC 3783 (Reichert et al. (1994)) and NGC 5548 (Korista et al. (1995)) campaigns.
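To make the reverberation picture concrete, the sketch below convolves a mock continuum light curve with a transfer function $`\mathrm{\Psi }(\tau )`$; the top-hat response and all numbers are purely illustrative assumptions, not values from the campaign.

```python
# Toy reverberation-mapping relation: L(t) = integral Psi(tau) C(t - tau) dtau.
# The top-hat transfer function below is an assumption for illustration only.
import numpy as np

def line_light_curve(t, cont, tau, psi):
    """Convolve continuum cont(t) with transfer function psi(tau) on grid t."""
    dtau = tau[1] - tau[0]
    shifted = [np.interp(t - tk, t, cont, left=cont[0]) for tk in tau]
    return np.tensordot(psi, shifted, axes=1) * dtau

t = np.arange(0.0, 50.0, 0.1)                      # days
cont = 1.0 + 0.3 * np.sin(2.0 * np.pi * t / 15.0)  # mock continuum "events"
tau = np.arange(0.0, 4.0, 0.1)                     # lags out to 4 light-days
psi = np.ones_like(tau) / 4.0                      # normalized top-hat response
line = line_light_curve(t, cont, tau, psi)         # lags the continuum by ~2 days
```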
This paper describes the FOS data for NGC 7469 and presents new line and continuum flux measurements extracted from the IUE data. In §2 we present the FOS observations and the analysis of that spectrum. In §3 we describe how the template based on the FOS data was fit to the time series of IUE spectra and present a new analysis of the line and continuum variability based on these measurements. In §4 we discuss the UV- and X-ray-absorbing material in NGC 7469. We discuss our results in §5, and give a summary of our conclusions in §6.

## 2. FOS Observations

We observed NGC 7469 on 1996 June 18 (UT) using gratings G130H, G190H, and G270H on the blue side of the FOS. These three spectra cover the wavelength range 1150–3300 Å with a resolution of $`220\mathrm{km}\mathrm{s}^{-1}`$. The start times and integration times of the observations are given in Table 1. To ensure high S/N, good photometry, and accurate flat-fielding, we observed through the 0.86<sup>′′</sup> aperture and acquired the target using a precision peak-up sequence. The last 5$`\times `$5 peak-up was done using the 0.26<sup>′′</sup> aperture on 0.052<sup>′′</sup> centers. Centering in the $`0.86^{\prime \prime }`$ circular aperture was better than 0.04<sup>′′</sup>. The peak flux seen through the 0.26<sup>′′</sup> aperture at the last peak-up position has a ratio to that seen through the 0.86<sup>′′</sup> aperture used for the observation consistent with that of a point source. This should alleviate any concern that the spectrum might be contaminated by starlight from a nuclear starburst region. The standard pipeline calibration applied to the data gives good results. The two G130H observations agree with each other to within 0.5%. A weighted average of these two spectra was taken to produce a mean G130H spectrum. The overlap regions between the G130H, G190H, and G270H spectra agree to better than 1%. In the 14 separate groups read out for the G130H observation, there is a variation of 3.8% peak to peak. It is smooth and non-random, but could be an instrumental artifact such as thermal variations around the orbit. No renormalizations of the flux scale were applied to any of the spectra. We used the low-ionization Galactic absorption lines to correct the wavelength scale of each observation, assuming that these features are at zero velocity. G130H required a 0.3 Å shift to the blue, as did G270H. The G190H spectrum required no adjustment, but only the Al ii $`\lambda `$1670 line is strong enough to measure reliably. We estimate our wavelengths are accurate to $`\sim 50`$ $`\mathrm{km}\mathrm{s}^{-1}`$. The merged, flux-calibrated spectrum from the four separate observations is shown in Fig. 1. The S/N per pixel (0.25–0.50 Å) is greater than 10 at all wavelengths longward of 1200 Å; per 1 Å it exceeds 20. In addition to the usual broad emission lines and blue continuum, note the pronounced dip in the spectrum at 2200 Å indicative of Galactic extinction, and the broad blends of Fe ii emission that become apparent longward of 2000 Å. NGC 7469 also shows high-ionization, intrinsic absorption lines. These are shown in the C iv region in Fig. 2, and they are also present in N v and in Ly$`\alpha `$. To model the lines and continuum in our spectrum, we used the IRAF<sup>1</sup><sup>1</sup>1 The Image Reduction and Analysis Facility (IRAF) is distributed by the National Optical Astronomy Observatories, which is operated by the Association of Universities for Research in Astronomy, Inc.
(AURA) under cooperative agreement with the National Science Foundation. task specfit (Kriss (1994)) to fit a model comprised of a reddened power law in $`F_\lambda `$,
$$F_\lambda =F_0\left(\frac{\lambda }{1000}\right)^{-\alpha },$$
with extinction following the form given by Cardelli, Clayton, & Mathis (1989) with $`\mathrm{R}_\mathrm{V}=3.1`$; multiple Gaussians for the emission lines; single Gaussians for the Galactic and intrinsic absorption lines; and a damped Lorentzian profile for the strong Galactic Ly$`\alpha `$ absorption. We used the minimum number of Gaussian components necessary to fit each emission line acceptably. For the brightest emission lines, three Gaussians (narrow, broad, and very broad) were typical. An exception is the Si iv+O iv\] $`\lambda `$1400 complex. Here we allowed single Gaussians for each line in each multiplet set, with all relative wavelengths linked in proportion to their vacuum values. The widths of the two Si iv lines were linked to be identical, and their flux ratio was fixed at 2:1. The widths of the five O iv\] lines were also linked to be identical, but independent of the Si iv lines. Their flux ratios were fixed in the proportion 0.1:0.2:1.0:0.4:0.1 as given by Osterbrock (1963), and the total O iv\] flux varied independently of the Si iv flux. Our fit covered the wavelength range 1170–3280 Å, excluding an 8 Å window centered on geocoronal Ly$`\alpha `$ emission. The best-fit $`\chi ^2`$ is 7058 for 5479 points and 183 freely varying parameters. We compute our error bars from the error matrix of the fit assuming $`\mathrm{\Delta }\chi ^2=1`$ for a single interesting parameter (Avni (1976)). The best-fit continuum has a normalization of $`F_\lambda (1000\mathrm{\AA })=1.04\pm 0.01\times 10^{-13}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{\AA }^{-1}`$ and a power-law index $`\alpha =0.977\pm 0.003`$. The best-fit extinction is $`E(B-V)=0.12\pm 0.003`$, and the column density of the damped Ly$`\alpha `$ absorption is $`3.5\pm 0.2\times 10^{20}\mathrm{cm}^{-2}`$. (Note that these errors are purely formal, statistical ones. Systematic errors due to our assumption of a continuum shape and due to the exclusion of a large portion of the damped Ly$`\alpha `$ profile will be larger.) Our measurements are in reasonable agreement with the properties of our own Galaxy along this line of sight. The Elvis, Lockman, & Wilkes (1989) H i survey of AGN sight lines reports an H i column of $`4.82\pm 0.17\times 10^{20}\mathrm{cm}^{-2}`$. Using a gas-to-dust ratio of $`N_{HI}/E(B-V)=5.2\times 10^{21}\mathrm{cm}^{-2}\mathrm{mag}^{-1}`$ (Shull & Van Steenberg (1985)) predicts $`E(B-V)=0.09`$.

Fig. 2.— The blueshifted intrinsic C iv absorption lines of NGC 7469 are visible in this plot of the C iv emission-line region of the spectrum. The 1-$`\sigma `$ statistical errors are the thin line under the data.

The individual components for the emission lines are listed in Table 2. Parameters of the blueshifted intrinsic absorption features visible in Ly$`\alpha `$, N v, and C iv are given in Table 3. Galactic absorption features are listed in Table 4. The tabulated line widths are not corrected for the instrumental resolution of $`220\mathrm{km}\mathrm{s}^{-1}`$.
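A minimal sketch of such a composite model is given below. This is not the actual specfit setup: the CCM extinction curve is taken from the third-party Python extinction package (assumed available), and the single C iv component used as an example is drawn from Table 2.

```python
# Schematic of the Section 2 model: reddened power law plus Gaussian emission
# lines. Assumptions: the `extinction` package provides ccm89(wave, a_v, r_v)
# and apply(); the line list and wavelength grid are illustrative, not a fit.
import numpy as np
from extinction import ccm89, apply

def power_law(wave, f0, alpha):
    """F_lambda = F0 (lambda / 1000 A)^-alpha."""
    return f0 * (wave / 1000.0) ** (-alpha)

def gaussian_line(wave, flux, center, fwhm_kms):
    sigma = center * (fwhm_kms / 2.99792458e5) / 2.3548   # FWHM -> sigma (A)
    return flux * np.exp(-0.5 * ((wave - center) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def model(wave, f0=1.04e-13, alpha=0.977, ebv=0.12, r_v=3.1, lines=()):
    spec = power_law(wave, f0, alpha)
    for flux, center, fwhm in lines:
        spec = spec + gaussian_line(wave, flux, center, fwhm)
    a_lam = ccm89(wave.astype(np.float64), r_v * ebv, r_v)  # A(lambda) in mag
    return apply(a_lam, spec)                               # spec * 10^(-0.4 A)

wave = np.linspace(1250.0, 3280.0, 4000)       # CCM curve defined redward of ~1250 A
civ = [(66.2e-14, 1549.05 * 1.0164, 1598.0)]   # narrow C IV component of Table 2, redshifted
observed = model(wave, lines=civ)
```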
TABLE 2 Emission Line Fluxes in NGC 7469

| Line | $`\lambda _{\mathrm{vac}}`$ (Å) | Flux<sup>a</sup> | Velocity<sup>b</sup> $`(\mathrm{km}\mathrm{s}^{-1})`$ | FWHM $`(\mathrm{km}\mathrm{s}^{-1})`$ |
| --- | --- | --- | --- | --- |
| Ly$`\alpha `$ | 1215.67 | $`258.0\pm 16.8`$ | $`515\pm 21`$ | $`967\pm 44`$ |
| Ly$`\alpha `$ | 1215.67 | $`197.0\pm 8.1`$ | $`268\pm 33`$ | $`2932\pm 141`$ |
| Ly$`\alpha `$ | 1215.67 | $`242.0\pm 19.3`$ | $`268\pm 33`$ | $`10965\pm 560`$ |
| Ly$`\alpha `$ total | 1215.67 | $`697.0\pm 26.8`$ | ⋯ | ⋯ |
| N V | 1240.15 | $`10.4\pm 1.5`$ | $`389\pm 96`$ | $`1598\pm 63`$ |
| N V | 1240.15 | $`17.8\pm 2.1`$ | $`389\pm 96`$ | $`4949\pm 122`$ |
| N V | 1240.15 | $`48.2\pm 5.7`$ | $`389\pm 96`$ | $`12042\pm 575`$ |
| N V total | 1240.15 | $`76.4\pm 2.5`$ | ⋯ | ⋯ |
| Si II | 1260.45 | $`6.2\pm 1.0`$ | $`939\pm 132`$ | $`2028\pm 346`$ |
| O I | 1304.35 | $`21.0\pm 1.6`$ | $`631\pm 169`$ | $`4618\pm 373`$ |
| C II | 1335.30 | $`21.0\pm 1.4`$ | $`165\pm 116`$ | $`3800\pm 245`$ |
| Si IV | 1393.76 | $`45.2\pm 4.5`$ | $`524\pm 62`$ | $`11665\pm 498`$ |
| Si IV | 1402.77 | $`22.6\pm 2.3`$ | $`524\pm 62`$ | $`11665\pm 498`$ |
| Si IV total | 1396.76 | $`67.8\pm 6.7`$ | ⋯ | ⋯ |
| O IV\] total | 1402.06 | $`38.0\pm 2.5`$ | $`524\pm 62`$ | $`4002\pm 300`$ |
| N IV\] | 1486.50 | $`2.9\pm 0.5`$ | $`84\pm 181`$ | $`1420\pm 307`$ |
| C IV | 1549.05 | $`66.2\pm 4.5`$ | $`28\pm 14`$ | $`1598\pm 63`$ |
| C IV | 1549.05 | $`166.0\pm 2.0`$ | $`101\pm 23`$ | $`4949\pm 122`$ |
| C IV | 1549.05 | $`160.0\pm 6.2`$ | $`101\pm 23`$ | $`12042\pm 575`$ |
| C IV total | 1549.05 | $`392.2\pm 7.9`$ | ⋯ | ⋯ |
| Fe II | 1608.45 | $`17.7\pm 1.7`$ | $`391\pm 190`$ | $`5498\pm 586`$ |
| He II | 1640.50 | $`4.5\pm 1.6`$ | $`81\pm 200`$ | $`887\pm 443`$ |
| He II | 1640.50 | $`18.5\pm 0.7`$ | $`81\pm 200`$ | $`4949\pm 122`$ |
| He II | 1640.50 | $`32.9\pm 1.2`$ | $`81\pm 200`$ | $`12042\pm 575`$ |
| He II total | 1640.50 | $`55.8\pm 1.8`$ | ⋯ | ⋯ |
| O III\] | 1663.48 | $`2.5\pm 0.2`$ | $`391\pm 97`$ | $`790\pm 210`$ |
| N III\] | 1750.51 | $`27.2\pm 2.8`$ | $`210\pm 399`$ | $`8222\pm 719`$ |
| Al III | 1857.40 | $`18.5\pm 1.6`$ | $`412\pm 157`$ | $`4461\pm 553`$ |
| Si III\] | 1892.03 | $`19.1\pm 3.9`$ | $`7\pm 138`$ | $`2470\pm 160`$ |
| C III\] | 1908.73 | $`4.3\pm 0.9`$ | $`77\pm 29`$ | $`547\pm 99`$ |
| C III\] | 1908.73 | $`42.9\pm 4.9`$ | $`142\pm 136`$ | $`3160\pm 204`$ |
| C III\] | 1908.73 | $`75.0\pm 4.1`$ | $`142\pm 136`$ | $`17050\pm 1264`$ |
| C III\] total | 1908.73 | $`122.2\pm 6.5`$ | ⋯ | ⋯ |
| Mg II | 2798.74 | $`9.2\pm 1.8`$ | $`56\pm 15`$ | $`1195\pm 136`$ |
| Mg II | 2798.74 | $`90.0\pm 1.8`$ | $`56\pm 15`$ | $`3426\pm 72`$ |
| Mg II | 2798.74 | $`59.5\pm 2.9`$ | $`56\pm 15`$ | $`21393\pm 662`$ |
| Mg II total | 2798.74 | $`158.7\pm 3.9`$ | ⋯ | ⋯ |

<sup>a</sup>Observed flux in units of $`10^{-14}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$. <sup>b</sup>Velocity is relative to a systemic redshift of $`cz=4916\mathrm{km}\mathrm{s}^{-1}`$ (de Vaucouleurs et al. (1991)).

TABLE 3 Intrinsic Absorption Lines in NGC 7469

| Line | $`\lambda _{\mathrm{vac}}`$ (Å) | EW (Å) | Velocity<sup>a</sup> $`(\mathrm{km}\mathrm{s}^{-1})`$ | FWHM $`(\mathrm{km}\mathrm{s}^{-1})`$ |
| --- | --- | --- | --- | --- |
| Ly$`\alpha `$ | 1215.67 | $`0.41\pm 0.08`$ | $`-1870\pm 17`$ | $`280\pm 58`$ |
| Ly$`\alpha `$ | 1215.67 | $`5.04\pm 0.30`$ | $`-656\pm 24`$ | $`1439\pm 61`$ |
| N V | 1238.82 | $`0.48\pm 0.08`$ | $`-1834\pm 20`$ | $`309\pm 57`$ |
| N V | 1242.80 | $`0.24\pm 0.07`$ | $`-1834\pm 20`$ | $`309\pm 57`$ |
| C IV | 1548.19 | $`0.45\pm 0.05`$ | $`-1819\pm 11`$ | $`275\pm 27`$ |
| C IV | 1550.77 | $`0.35\pm 0.04`$ | $`-1819\pm 11`$ | $`275\pm 27`$ |

<sup>a</sup>Velocity is relative to a systemic redshift of $`cz=4916\mathrm{km}\mathrm{s}^{-1}`$ (de Vaucouleurs et al. (1991)).

## 3. IUE Spectra

### 3.1. Measuring Continuum and Emission-line Fluxes
Analysis of the IUE spectra of NGC 7469 during the summer 1996 monitoring campaign (Wanders et al. (1997)) suggests a time delay in the responses of the longer-wavelength UV continuum bands relative to the shortest-wavelength bin, centered at 1315 Å. One possible explanation for these delays is that the IUE measurements are not of pure continuum, and that light from broadly distributed line emission in the spectrum might be contaminating the data. Our fits to the FOS data allow us to examine the degree of contamination. Fig. 3 shows the FOS spectrum of NGC 7469 scaled to provide a good view of the continuum. The bands used for the IUE measurements are indicated, and the best-fit reddened power law is also shown. Note that only in the 1485 Å band does the fitted continuum pass through the actual data points.

Fig. 3.— The best-fit power-law continuum for NGC 7469 is shown as the thin solid line. The dip at 2200 Å and the downturn at the short-wavelength end reflect the extinction of $`E\left(B-V\right)=0.12`$. At most points in the spectrum, the blended wings of the broad emission lines and Fe ii emission contribute a substantial amount of overlying flux. The four “continuum” windows used for measuring the fluxes in the IUE spectra are shown as heavy solid bars.

TABLE 4 Galactic Absorption Lines in NGC 7469

| Line | $`\lambda _{\mathrm{vac}}`$ (Å) | EW (Å) | Velocity $`(\mathrm{km}\mathrm{s}^{-1})`$ | FWHM $`(\mathrm{km}\mathrm{s}^{-1})`$ |
| --- | --- | --- | --- | --- |
| Si II | 1190.42 | $`0.16\pm 0.11`$ | $`20\pm 54`$ | $`211\pm 150`$ |
| Si II | 1193.14 | $`0.12\pm 0.14`$ | $`80\pm 93`$ | $`211\pm 150`$ |
| N I | 1200.16 | $`0.39\pm 0.17`$ | $`117\pm 60`$ | $`303\pm 119`$ |
| Si III | 1206.50 | $`1.07\pm 0.22`$ | $`186\pm 45`$ | $`593\pm 112`$ |
| S II | 1250.58 | $`0.23\pm 0.07`$ | $`141\pm 0`$ | $`303\pm 41`$ |
| S II | 1253.00 | $`0.21\pm 0.06`$ | $`60\pm 53`$ | $`303\pm 41`$ |
| S II | 1259.52 | $`0.26\pm 0.07`$ | $`26\pm 50`$ | $`303\pm 41`$ |
| Si II | 1260.42 | $`0.58\pm 0.09`$ | $`7\pm 20`$ | $`303\pm 41`$ |
| O I | 1302.17 | $`0.41\pm 0.07`$ | $`2\pm 28`$ | $`303\pm 41`$ |
| Si II | 1304.37 | $`0.36\pm 0.07`$ | $`60\pm 31`$ | $`303\pm 41`$ |
| C II | 1334.53 | $`0.32\pm 0.16`$ | $`290\pm 47`$ | $`433\pm 112`$ |
| C II | 1335.69 | $`0.68\pm 0.19`$ | $`290\pm 47`$ | $`433\pm 112`$ |
| Si IV | 1393.76 | $`0.97\pm 0.16`$ | $`97\pm 58`$ | $`950\pm 140`$ |
| Si IV | 1402.77 | $`0.62\pm 0.15`$ | $`68\pm 91`$ | $`950\pm 140`$ |
| Si II | 1527.17 | $`0.45\pm 0.06`$ | $`55\pm 21`$ | $`314\pm 91`$ |
| C IV | 1548.19 | $`0.38\pm 0.07`$ | $`325\pm 44`$ | $`390\pm 57`$ |
| C IV | 1550.77 | $`0.43\pm 0.07`$ | $`325\pm 44`$ | $`390\pm 57`$ |
| Fe II | 1608.45 | $`0.27\pm 0.07`$ | $`62\pm 45`$ | $`314\pm 91`$ |
| Al II | 1670.79 | $`0.52\pm 0.31`$ | $`57\pm 17`$ | $`314\pm 91`$ |
| Fe II | 2344.21 | $`0.32\pm 0.10`$ | $`3\pm 89`$ | $`518\pm 58`$ |
| Fe II | 2374.46 | $`1.05\pm 0.16`$ | $`20\pm 39`$ | $`518\pm 58`$ |
| Fe II | 2382.77 | $`1.11\pm 0.18`$ | $`113\pm 36`$ | $`518\pm 58`$ |
| Fe II | 2586.65 | $`0.78\pm 0.11`$ | $`104\pm 44`$ | $`518\pm 58`$ |
| Fe II | 2600.17 | $`0.60\pm 0.10`$ | $`48\pm 46`$ | $`518\pm 58`$ |
| Mg II | 2796.35 | $`0.57\pm 0.07`$ | $`1\pm 11`$ | $`230\pm 27`$ |

Fig. 4 illustrates the degree of contamination in more detail with magnified views of each of the continuum bands along with the fitted continuum. The shortest-wavelength bin, centered on 1315 Å, is contaminated by O i $`\lambda `$1304 emission: only 78% of the flux in the IUE measurement bin is from the fitted continuum. The two longest-wavelength bins, 1740 Å and 1825 Å, are each contaminated by Fe ii emission; within these IUE measurement bins the percentages of flux due to the fitted continuum are 81% and 85%, respectively.
The bin at 1485 Å is relatively clean: 99% of the flux in the IUE measurement bin is due to the fitted continuum. Our new analysis of the IUE spectra from the 1996 campaign aims to obtain clean measurements of the continuum and deblended measurements of the emission lines using the FOS spectrum as a template. For the fits described below we used only the TOMSIPS-extracted spectra described by Wanders et al. (1997). As noted by Wanders et al., these spectra appear slightly smoother to the eye and have slightly smaller error bars. In the initial analysis by Wanders et al., both the TOMSIPS and NEWSIPS data gave similar results. Table 1 of Wanders et al. (1997) logs 219 spectra obtained using IUE. We have restricted our analysis to the 207 spectra unaffected by target-centering problems or short exposure times, i.e., we have eliminated all spectra with notes 1–4 in Table 1 of Wanders et al. The mean of these spectra is shown as Figure 1 of Wanders et al. Using the model fit to the FOS data in §2, we developed a template for fitting the series of IUE spectra by first fitting the mean IUE spectrum. The best-fit continuum has a normalization of $`F_\lambda (1000\mathrm{\AA })=1.37\times 10^{-13}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{\AA }^{-1}`$ and a power-law index $`\alpha =0.913\pm 0.003`$, close to the shape and intensity of the FOS snapshot. Emission-line fluxes, wavelengths, and widths are listed in Table 5, and absorption-line parameters are given in Table 6. The resulting best-fit model is shown overlaid on the mean IUE spectrum in Fig. 5; the residuals shown in the lower panel have an rms of a few percent of the spectral intensity. Due to the lower resolution and lower S/N of the individual IUE spectra, numerous constraints were imposed on the use of this template for the fits to the individual spectra. For example, the wavelengths of weak emission lines were tied to that of C iv $`\lambda 1549`$ by the ratios of their laboratory values; the widths of weak lines were fixed at the values obtained in a fit to the mean IUE spectrum; the wavelengths of multiple components of strong lines were all fixed at the same wavelength; and the parameters of all absorption features were fixed at the values obtained in the fit to the mean IUE spectrum. This left 44 free parameters for the fit to each spectrum: the power-law normalization and exponent, the fluxes of the individual emission lines, and the wavelengths and widths of the bright emission lines. Each spectrum was then fit using specfit. To provide initial parameters for each fit, we used the best fit to the mean spectrum as a starting point. The continuum normalization was then scaled by the ratio of the 1485 Å continuum flux to the same continuum flux in the mean spectrum; line fluxes were scaled by the ratio of the integrated net C iv flux to the same flux measured in the mean spectrum; and line wavelengths were shifted by the location of the peak of the C iv line relative to its location in the mean spectrum.
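The continuum fractions quoted above follow directly from the fit components. A minimal sketch, assuming the fitted model arrays are at hand (the window edges below are illustrative, not the exact IUE band definitions):

```python
# Sketch: fraction of the flux in a "continuum" window contributed by the
# fitted power law, as in the 78%-99% figures quoted for the IUE bands.
import numpy as np

def continuum_fraction(wave, total_model, continuum_model, lo, hi):
    sel = (wave >= lo) & (wave <= hi)
    return np.trapz(continuum_model[sel], wave[sel]) / np.trapz(total_model[sel], wave[sel])

# e.g., a 30 A window around 1485 A:
# frac = continuum_fraction(wave, total_model, continuum_model, 1470.0, 1500.0)
```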
TABLE 5 Emission Line Fluxes in the Mean IUE Spectrum of NGC 7469

| Line | $`\lambda _{\mathrm{vac}}`$ (Å) | Flux<sup>a</sup> | Velocity<sup>b</sup> $`(\mathrm{km}\mathrm{s}^{-1})`$ | FWHM $`(\mathrm{km}\mathrm{s}^{-1})`$ |
| --- | --- | --- | --- | --- |
| Ly$`\alpha `$ | 1215.67 | 86.4 | -622 | 1122 |
| Ly$`\alpha `$ | 1215.67 | 156.0 | -622 | 2144 |
| Ly$`\alpha `$ | 1215.67 | 150.0 | -622 | 8134 |
| Ly$`\alpha `$ total | 1215.67 | 392.4 | ⋯ | ⋯ |
| N V | 1240.15 | 25.9 | -513 | 2405 |
| N V | 1240.15 | 22.7 | -513 | 6292 |
| N V | 1240.15 | 61.3 | -513 | 14764 |
| N V total | 1240.15 | 109.9 | ⋯ | ⋯ |
| Si II | 1260.45 | 3.0 | 747 | 3000 |
| O I | 1304.35 | 13.6 | -823 | 4700 |
| C II | 1335.30 | 13.8 | -322 | 3850 |
| Si IV | 1393.76 | 39.9 | -881 | 11521 |
| Si IV | 1402.77 | 20.0 | -871 | 11521 |
| Si IV total | 1396.76 | 59.9 | ⋯ | ⋯ |
| O IV\] | 1402.06 | 20.8 | -303 | 3510 |
| N IV\] | 1486.50 | 1.1 | -384 | 1000 |
| C IV | 1549.05 | 79.0 | -103 | 2405 |
| C IV | 1549.05 | 137.0 | -103 | 6292 |
| C IV | 1549.05 | 126.0 | -103 | 14764 |
| C IV total | 1549.05 | 342.0 | ⋯ | ⋯ |
| He II | 1640.50 | 8.0 | -153 | 1716 |
| He II | 1640.50 | 22.0 | -153 | 6292 |
| He II | 1640.50 | 39.2 | -153 | 14764 |
| He II total | 1640.50 | 69.2 | ⋯ | ⋯ |
| O III\] | 1663.48 | 0.5 | 412 | 1795 |
| N III\] | 1750.51 | 22.4 | -783 | 7821 |
| Al III | 1857.40 | 9.4 | -192 | 4387 |
| Si III\] | 1892.03 | 14.4 | -635 | 2557 |
| C III\] | 1908.73 | 3.0 | -460 | 1063 |
| C III\] | 1908.73 | 35.2 | -460 | 3143 |
| C III\] | 1908.73 | 57.7 | -460 | 17552 |
| C III\] total | 1908.73 | 95.9 | ⋯ | ⋯ |

<sup>a</sup>Observed flux in units of $`10^{-14}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$. <sup>b</sup>Velocity is relative to a systemic redshift of $`cz=4916\mathrm{km}\mathrm{s}^{-1}`$ (de Vaucouleurs et al. (1991)).

Using the best-fit parameter values for each spectrum, we derived fluxes for the quantities of interest. Initial error bars were assigned based on the statistical 1-$`\sigma `$ values obtained from specfit. Final error bars were calculated using a procedure common to our previous work in International AGN Watch campaigns. We conservatively assume that there is no variation in flux between two data points with a time separation $`<0.25`$ d. (The mean separation between observations is 0.23 d.) We then scale the initial error bars so that their mean fractional uncertainty is equal to the root-mean-square (rms) of the distribution of flux ratios for all data pairs in the time series with $`\mathrm{\Delta }t<0.25`$ d (Rodríguez-Pascual et al. (1997); Wanders et al. (1997)). Note that the resulting error bars are an upper limit if there is any residual intrinsic variability on timescales shorter than successive observations in the time series. The derived fluxes and errors are shown as the light curves described in the next section. The actual data points and error bars can be obtained from the International AGN Watch web site at the URL http://www.astronomy.ohio-state.edu/~agnwatch/#dat.
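The error-bar rescaling described above is a simple, self-contained procedure; the sketch below implements one reading of it (exactly how the rms of the ratio distribution is defined is our assumption):

```python
# Sketch: rescale formal fit errors so that the mean fractional uncertainty
# matches the rms scatter of flux ratios for observation pairs separated by
# less than 0.25 d, which are assumed to differ by measurement error only.
import numpy as np

def rescale_errors(t, f, err, dt_max=0.25):
    ratios = [f[j] / f[i]
              for i in range(len(t)) for j in range(i + 1, len(t))
              if t[j] - t[i] < dt_max]
    rms = np.sqrt(np.mean((np.asarray(ratios) - 1.0) ** 2))
    return err * (rms / np.mean(err / f))   # force mean fractional error = rms
```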
TABLE 6 Absorption Lines in the Mean IUE Spectrum of NGC 7469

| Line | $`\lambda _{\mathrm{vac}}`$ (Å) | EW (Å) | Velocity $`(\mathrm{km}\mathrm{s}^{-1})`$ | FWHM $`(\mathrm{km}\mathrm{s}^{-1})`$ |
| --- | --- | --- | --- | --- |
| Ly$`\alpha `$ | 1215.67 | 0.39 | -1898<sup>a</sup> | 1500 |
| Ly$`\alpha `$ | 1215.67 | 5.06 | -661<sup>a</sup> | 2000 |
| N V | 1238.82 | 1.17 | -2215<sup>a</sup> | 1455 |
| N V | 1242.80 | 1.72 | -1595<sup>a</sup> | 1455 |
| C IV | 1548.19 | 0.46 | -1851<sup>a</sup> | 990 |
| C IV | 1550.77 | 0.36 | -1841<sup>a</sup> | 990 |
| S II | 1250.58 | 0.18 | -141<sup>b</sup> | 1455 |
| S II | 1253.00 | 0.20 | 36<sup>b</sup> | 1455 |
| S II | 1259.52 | 0.25 | -33<sup>b</sup> | 1380 |
| Si II | 1260.42 | 0.56 | 2<sup>b</sup> | 1380 |
| O I | 1302.17 | 0.47 | -840<sup>b</sup> | 1255 |
| Si II | 1304.37 | 0.47 | -899<sup>b</sup> | 1255 |
| C II | 1334.53 | 0.25 | -892<sup>b</sup> | 1170 |
| C II | 1335.69 | 0.50 | -884<sup>b</sup> | 1170 |
| Si IV | 1393.76 | 0.51 | -391<sup>b</sup> | 1638 |
| Si IV | 1402.77 | 0.32 | -391<sup>b</sup> | 1638 |
| Si II | 1527.17 | 0.41 | -585<sup>b</sup> | 990 |
| C IV | 1548.19 | 0.34 | -380<sup>b</sup> | 990 |
| C IV | 1550.77 | 0.54 | -369<sup>b</sup> | 990 |
| Fe II | 1608.45 | 0.23 | 58<sup>b</sup> | 995 |
| Al II | 1670.79 | 0.39 | -20<sup>b</sup> | 1000 |

<sup>a</sup>Intrinsic absorption feature. Velocity is relative to a systemic redshift of $`cz=4916\mathrm{km}\mathrm{s}^{-1}`$ (de Vaucouleurs et al. (1991)). <sup>b</sup>Galactic feature. Velocity is heliocentric.

We note that our use of a global power-law model for the underlying continuum means that not all of the continuum flux measurements we tabulate are statistically independent. The power-law model contains only two free parameters, its normalization and its spectral index; thus, in effect, only two of the continuum fluxes suffice to describe the data set at a single point in time.

### 3.2. Continuum and Emission-line Light Curves

The newly derived continuum light curves are shown in Fig. 6. These are quite similar to the original data presented by Wanders et al. All curves show the 10–15 day “events” superposed on a gradual decrease in flux from the start to the end of the campaign. There are subtle differences, however, that are only apparent in a ratio between the new measurements and the originals. Light curves of these ratios are shown in Fig. 7. All four light curves show slight differences from the originals throughout the “event” centered on day 280. The most apparent differences are in the light curve for F(1825 Å), which shows departures from the original surrounding all peaks in the light curve. The sense of the difference is that when the source is brighter, more of the 1825 Å flux is due to continuum light. The emission-line light curves are also quite similar to those of Wanders et al. These are shown in Figures 8–10. Note that our deblending process has recovered more signal in the N v and He ii light curves. None of the weaker lines, however, shows any strong correlation with the events in the continuum light curves.

### 3.3. Variability Characteristics
To quantify the characteristics of the variability in our new measurements, we use the standard parameters adopted by the International AGN Watch. We summarize these for all our measured fluxes in Table 7. The mean flux, $`\overline{F}`$, and the sample standard deviation (or root-mean-square flux), $`\sigma _F`$, have their usual statistical definitions. The third parameter, $`F_{var}`$, is the fractional variation of the flux corrected for measurement errors:
$$F_{var}=\frac{\sqrt{\sigma _F^2-\mathrm{\Delta }^2}}{\overline{F}},$$ (1)
where $`\mathrm{\Delta }^2`$ is the mean square value of the individual measurement errors. The fourth parameter, $`R_{max}`$, is the ratio of the maximum flux to the minimum flux. Note that both of these latter quantities are not very useful for the weaker line fluxes, where the measurement uncertainty is much larger than any intrinsic variation.

TABLE 7 Variability Parameters

| Feature | $`N_{data}`$ | $`\overline{F}`$<sup>a</sup> | $`\sigma _F`$<sup>a</sup> | $`F_{var}`$ | $`R_{max}`$<sup>b</sup> |
| --- | --- | --- | --- | --- | --- |
| $`F_\lambda `$(1315 Å) | 207 | 3.80 | 0.62 | 0.16 | 2.15 |
| $`F_\lambda `$(1485 Å) | 207 | 3.85 | 0.56 | 0.14 | 1.95 |
| $`F_\lambda `$(1740 Å) | 207 | 3.52 | 0.45 | 0.12 | 1.82 |
| $`F_\lambda `$(1825 Å) | 207 | 3.34 | 0.41 | 0.11 | 1.83 |
| Ly$`\alpha `$ | 207 | 396.77 | 57.09 | 0.12 | 2.13 |
| Ly$`\alpha `$+N V | 207 | 504.18 | 65.50 | 0.12 | 1.95 |
| N V | 207 | 107.42 | 32.33 | 0.23 | (7.83) |
| Si IV | 207 | 82.96 | 22.12 | 0.21 | 4.45 |
| C IV | 207 | 343.10 | 34.44 | 0.07 | 1.80 |
| He II | 207 | 69.33 | 19.42 | 0.20 | (7.19) |
| C III\] | 207 | 110.42 | 42.51 | 0.22 | (6.36) |
| Si II | 207 | 3.38 | 2.84 | 0.62 | (857) |
| O I | 207 | 13.00 | 4.27 | 0.27 | (10.9) |
| C II | 207 | 13.17 | 4.26 | 0.24 | (52.4) |
| N IV\] | 207 | 1.54 | 1.50 | 0.68 | (467) |
| O III\] | 207 | 10.87 | 4.68 | 0.36 | (23.0) |
| N III\] | 207 | 22.64 | 8.77 | 0.29 | (21.7) |
| Si III\] | 207 | 14.01 | 4.73 | 0.23 | (13.8) |
| Si III\]+C III\] | 207 | 124.43 | 42.04 | 0.19 | (4.74) |

<sup>a</sup>Units are $`10^{-14}\mathrm{ergs}\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{\AA }^{-1}`$ for continuum fluxes and $`10^{-14}\mathrm{ergs}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ for line fluxes. <sup>b</sup>Uncertain values enclosed in parentheses are dominated by noise.

For the continuum measurements listed in Table 7, our fitted fluxes show fractional variations and ratios of maximum to minimum flux that are slightly greater than or equal to those seen in the original data, showing that we have probably eliminated some small amount of less-variable contamination in our fitting process. In contrast, the fractional variability in the strong emission lines has either stayed the same or decreased. This is likely due to the broad wings we have included in our line-flux measurements. As one can see in the rms spectrum shown in Figure 1 of Wanders et al., the most variable portion of each emission line is the line core. The contrast of this core is lower in the fits we have done using the FOS spectral template.

### 3.4. Cross-correlation Analysis

To re-examine the question of whether there are genuine time delays between the continuum variations at different wavelengths, we have performed a cross-correlation analysis of our newly extracted fluxes. We have used both the interpolation cross-correlation function (ICCF) (Gaskell & Sparke (1986); Gaskell & Peterson (1987)) and the discrete cross-correlation function (DCF) (Edelson & Krolik (1988)). Both algorithms use code as implemented by White & Peterson (1994). We show the derived cross-correlation functions for the continuum fluxes and bright emission lines in Fig. 11; the CCFs for the weak emission lines are shown in Fig. 12.
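A schematic of Eq. (1) and of an interpolation cross-correlation measurement is sketched below; this is for orientation only and is not the White & Peterson (1994) code.

```python
# Sketch: variability statistics of Eq. (1) and a bare-bones ICCF with the
# 80%-of-peak centroid used in Table 8. Linear interpolation stands in for
# the full Gaskell & Peterson / White & Peterson implementation.
import numpy as np

def variability(f, err):
    fbar, sig = np.mean(f), np.std(f, ddof=1)
    fvar = np.sqrt(sig**2 - np.mean(err**2)) / fbar   # Eq. (1); nan if noise-dominated
    return fbar, sig, fvar, f.max() / f.min()         # last entry is R_max

def iccf(t1, f1, t2, f2, lags):
    """Correlation of series 2 with series 1 evaluated on a grid of lags."""
    return np.array([np.corrcoef(np.interp(t2 - lag, t1, f1), f2)[0, 1]
                     for lag in lags])

def centroid_lag(lags, r, frac=0.8):
    sel = r >= frac * r.max()                         # CCF values above 80% of peak
    return np.sum(lags[sel] * r[sel]) / np.sum(r[sel])
```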
We have made several measurements to quantify the characteristics of the cross-correlation functions for each measured feature. In Table 8 we list the time delay of the centroid of the peak in the CCF, $`\tau _{cent}`$; the time delay at which the peak occurs, $`\tau _{peak}`$; the peak amplitude, $`r_{max}`$, of each CCF; and the full width at half maximum (FWHM) of the peak. We calculate the centroids using only CCF values exceeding 80% of the peak amplitude. As the results measured from the ICCF and DCF curves are nearly identical, the tabulated numbers are based on the ICCF results. The error bars for $`\tau _{cent}`$ and $`\tau _{peak}`$ are based on model-independent Monte Carlo simulations using randomized fluxes and a random subset selection method, as described by Peterson et al. (1998). Random noise contributions are added to each flux measurement in a light curve, and a random subset of data pairs is selected for analysis. This process is repeated many times in a procedure analogous to “bootstrapping”. Analysis of the resulting distributions from the simulations leads to the error bars quoted in Table 8 for $`\tau _{cent}`$ and $`\tau _{peak}`$. We note that the smallest of these errors are a factor of $`\sim 2`$ smaller than the average sampling interval of $`\sim 0.2`$ days, and they are only valid if there is little variability on timescales shorter than this interval. High-time-resolution observations of NGC 7469 obtained by Welsh et al. (1998) show that this assumption is valid. Compared to the results of Wanders et al., the amplitudes of the continuum CCFs are generally slightly higher, the amplitudes of the emission-line CCFs are generally lower, and the time delays measured from our CCFs are a bit shorter. The difference in amplitudes reflects our previous results on the difference in variability amplitudes: the continuum measurements are indeed cleaner, free of low-variability contaminants, and the emission-line measurements have a greater contribution from the low-variability broad wings. The apparently cleaner continuum measurements now permit a critical re-examination of the question of time delay as a function of wavelength. Our measured time delays differ from those of Wanders et al., but the lag at long wavelengths relative to short wavelengths is still there, at roughly the same level. Relative to the flux at 1315 Å, the fluxes at 1485 Å, 1740 Å, and 1825 Å have time delays of 0.09, 0.28, and 0.36 days, respectively, compared to the values of 0.19-0.22, 0.32-0.38, and 0.22-0.35 days found by Wanders et al. As noted in §3.2, effectively only two of the four continuum flux measurements are statistically independent, due to the global power-law fit we have used to describe the continuum. Therefore, although Table 8 shows a monotonically increasing time delay with wavelength, the monotonic behavior is largely a consequence of the global constraints we have imposed on the continuum shape. To assess the influence such a global constraint has on our measured cross-correlation functions, we have performed another Monte Carlo experiment using the time series of the measured power-law normalizations and spectral indices. Starting with the power-law fit parameters builds in the global linkages between the four wavelength intervals. As in the random subset selection method described by Peterson et al. (1998), we chose a random subset from the series of flux normalization points and indices, preserving the time order of the points.
At the selected time points in a given subset, the normalizations and indices were assigned random values from a Gaussian distribution with a mean equal to the measured value at that time and a dispersion equal to the $`1\sigma `$ error bar. From these simulated values of normalization and spectral index, we generated flux points at 1315 Å, 1485 Å, 1740 Å, and 1825 Å. We used the ICCF technique to measure the time delay of the centroid of the CCF peak in these simulated light curves. For a total of 700 Monte Carlo realizations, relative to the flux at 1315 Å, we find median time delays of $`0.09\pm 0.03`$, $`0.26\pm 0.07`$, and $`0.32\pm 0.08`$ days for the fluxes at 1485 Å, 1740 Å, and 1825 Å, respectively, where the error bars represent the $`1\sigma `$ confidence intervals. Thus, from our fits to the IUE data, we can conclude with confidence that the flux at longer wavelengths lags the flux at shorter wavelengths, but we cannot conclude that the lag increases as a function of wavelength. This requires the use of the optical continuum measurements, as discussed in §5.1.

## 4. ASCA Observations of NGC 7469

Guainazzi et al. (1994) observed NGC 7469 using ASCA between 1993 November 24 and 1993 November 26 for a total exposure time of $`\sim 40`$ ks. Their analysis of these data notes the Fe K$`\alpha `$ emission feature and a soft excess, but finds no evidence for a warm absorber. Subsequent analyses of these same data, benefiting from improved calibration, by Reynolds (1997) and George et al. (1998), however, do clearly detect absorption edges of O vii and O viii indicative of ionized absorbing gas. Reynolds (1997) finds optical depths in the edges of $`\tau _{O7}=0.17`$ and $`\tau _{O8}=0.03`$. To examine whether the UV absorption noted in our FOS spectrum of NGC 7469 could be interpreted in the context of a combined X-ray and UV absorber (e.g., Mathur, Wilkes, & Elvis (1995)), we have retrieved the ASCA data discussed by Guainazzi et al. from the High Energy Astrophysics Science Archive Research Center. These data have been reprocessed with the “Revision 1” software and calibration, and we have used the screened event files produced by this process. The acceptable SIS data produced by this filtering include all data obtained outside of the South Atlantic Anomaly, above a limb angle of $`10^{\mathrm{°}}`$ from the dark earth and $`20^{\mathrm{°}}`$ from the bright earth, and in regions of geomagnetic rigidity exceeding 6 $`\mathrm{GeV}\mathrm{c}^{-1}`$. In addition, we eliminated all data intervals with anomalously high count rates; the mean rates were 1.5 $`\mathrm{cts}\mathrm{s}^{-1}`$ and 1.1 $`\mathrm{cts}\mathrm{s}^{-1}`$ in the SIS0 and SIS1 detectors, respectively, and we excluded data with rates $`>3.0\mathrm{cts}\mathrm{s}^{-1}`$. So that Gaussian statistics would be applicable in our spectral analysis, we grouped the extracted spectra for the SIS0 and SIS1 detectors so that each energy bin contained a minimum of 25 counts. To avoid the worst uncertainties in the detector response, we restricted the spectral fits described below to bins with energies in the range $`0.6\mathrm{keV}<\mathrm{E}<10.0\mathrm{keV}`$. Before fitting these data with our warm absorber models, we first verified that our methods produce empirical results compatible with previous analyses. We use v10.0 of the X-ray spectral fitting program XSPEC (Arnaud (1996)) for our fits. We note that a simple power law with absorption by cold gas gives an unacceptable fit: $`\chi ^2=661.4`$ for 424 data bins and 3 free parameters.
Adding a narrow (unresolved) Fe K$`\alpha `$ line markedly improves the fit: $`\chi ^2=636.4`$ for 424 points and 5 free parameters. A broad Fe K$`\alpha `$ line gives further significant improvement, with $`\chi ^2=565.3`$ for 424 points and 8 parameters. Our best empirical fit to the data is for a power-law continuum, absorption by cold Galactic gas, broad and narrow Fe K$`\alpha `$ emission from the source, and two absorption edges representing intrinsic ionized-oxygen absorption. This best fit has $`\chi ^2=482.1`$ for 424 points and 12 free parameters. The best-fit values for the free parameters are summarized in Table 9. Our model differs from that of Reynolds (1997) in that we have omitted any intrinsic cold-gas absorption, added separate narrow and broad Fe K$`\alpha `$ emission lines, permitted the edge absorption energies to vary freely, and binned our data differently, but our results are comparable. Our spectral index of $`2.14\pm 0.04`$ agrees well with his value of 2.11, and our edge depth of $`0.21\pm 0.03`$ for O vii is in good agreement with his value of 0.17. Our optical depth of $`0.13\pm 0.03`$ for the O viii edge, however, is larger than Reynolds's value of 0.03. The main reason for this difference is that we have let the edge energies vary freely, while Reynolds fixed them at their redshifted vacuum energies. We now describe photoionization models for the ionized absorbing gas that can be used to evaluate whether the same absorbing medium is responsible for both the X-ray and the UV absorption. These models are constructed in the same way as those discussed by Krolik & Kriss (1995) and Kriss et al. (1996b). For our ionizing spectrum, we use a spectral shape for NGC 7469 based on the UV and X-ray data discussed here and the RXTE data presented by Nandra et al. (1998). The fit to the mean of the IUE data has a spectral index (for $`F_\nu \propto \nu ^{-\alpha }`$) of 1.087. The RXTE data have a mean 2–10 keV flux of $`3.4\times 10^{-11}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$. We renormalize the ASCA spectrum above (which has $`F(2-10)=3.5\times 10^{-11}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$) to this value. Since $`\alpha _{ox}`$ (the effective spectral index between 2500 Å and 2 keV) is 1.34, we note that the UV and X-ray spectra, when extrapolated, do not meet at any intermediate energy: the ionizing spectrum must steepen between the UV and X-ray bandpasses. Although the lack of simultaneity between the UV and X-ray observations may play some role in this mismatch, this is a common feature of AGN spectra, and composite QSO spectra suggest that the break occurs around the Lyman limit (Zheng et al. 1997). We therefore extrapolate the UV spectrum to the Lyman limit, then introduce a spectral break to an index of 1.40, which we follow to an energy of 0.5 keV, where we flatten to the X-ray energy index of 1.14. Since this spectrum does not diverge at higher energy, we simply extrapolate it to 500 keV for our photoionization calculations. As in Kriss et al. (1996b), we compute our models in thermal equilibrium, assume constant-density clouds ($`n_H=10^9\mathrm{cm}^{-3}`$), and use the ionization parameter $`U=n_{ion}/n_H`$, where $`n_{ion}`$ is the number density of ionizing photons between 13.6 eV and 13.6 keV illuminating the cloud and $`n_H`$ is the density of hydrogen atoms. We assume that the absorbing medium covers 25% of the solid angle around the source.
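Before turning to the transmission calculation, the empirical parametrization of Table 9 can be sketched as follows. This is a schematic only, not the XSPEC implementation: the cold-gas cross-section and the line normalization are placeholders, and the edges are placed at the laboratory threshold energies (the actual fit lets them vary).

```python
# Schematic of the empirical X-ray model: absorbed power law, two oxygen
# edges, and a Gaussian Fe K-alpha line. Illustrative only; placeholder sigma.
import numpy as np

def edge(E, E_c, tau):
    """Photoelectric edge: exp(-tau (E/E_c)^-3) above the threshold E_c."""
    return np.where(E >= E_c, np.exp(-tau * (E / E_c) ** -3.0), 1.0)

def gauss_line(E, norm, E_0, sigma):
    return norm * np.exp(-0.5 * ((E - E_0) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def empirical_model(E, K=1.36e-2, gamma=2.14, nh=4.4e20, tau7=0.21, tau8=0.13, z=0.0164):
    sigma_cold = 2.0e-22 * E ** -2.6          # crude stand-in for a wabs/phabs table
    m = K * E ** -gamma * np.exp(-nh * sigma_cold)
    m *= edge(E, 0.739 / (1.0 + z), tau7)     # O VII edge, redshifted to observed frame
    m *= edge(E, 0.871 / (1.0 + z), tau8)     # O VIII edge
    m += gauss_line(E, 5.0e-5, 6.345 / (1.0 + z), 0.01)  # narrow Fe K-alpha (toy norm)
    return m
```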
The transmission of each model is computed so that resonant line scattering and electron scattering are fully accounted for (Krolik & Kriss (1995)). In computing the widths of the resonance lines, we assume that all ions have turbulent velocities equal to the sound speed in the medium. The transmission is fully described by two parameters: the total column density, $`N_{tot}`$, and the ionization parameter, $`U`$. To fit these models to the ASCA spectra, we assemble our grid of models into a FITS table to be read into XSPEC, and we replace the photoionization edges in our empirical model with the total column density, ionization parameter, and redshift of our warm-absorber model grid. This gives a result comparable in quality to our best empirical fit: $`\chi ^2=484.2`$ for 424 points and 11 free parameters. Best-fit values for the parameters are given in Table 10, and the best-fit spectra are illustrated in Fig. 13.

## 5. Discussion

### 5.1. Time Delays and the Case for Continuum Reprocessing

Our newly extracted continuum fluxes for the IUE observations of NGC 7469 in 1996 strengthen the case for wavelength-dependent time delays in the continuum flux from this active galaxy. In tests performed on the original data set, Wanders et al. (1997) found that contamination of 10% of a continuum flux interval by a spectral component with a 2-day lag could produce a time delay of $`\sim 0.2`$ days in the lag measured for the continuum flux. As we note in §3.1, our model of the FOS spectrum indicates that contamination at the 15–22% level by weak lines and line wings could be present in the continuum fluxes for the bands centered at 1315 Å, 1740 Å, and 1825 Å, implying that the originally measured lags could be affected by these other spectral features. Our new measurements of the IUE spectra greatly reduce the potential level of contamination in the continuum flux points. The new time delays we measure are slightly lower (perhaps reflecting some previous contamination), but the delays are still present, and they increase with increasing wavelength. As discussed by Collier et al. (1998), a simple model of radiative reprocessing by a steady-state accretion disk with a radial temperature profile determined by viscous heat dissipation predicts that the time delay between different continuum bands should depend on wavelength as $`\tau \propto \lambda ^{4/3}`$, reflecting the $`T\propto R^{-3/4}`$ temperature profile and the $`\tau =R/c`$ dependence of the time delay. Fig. 14 shows the measured time delays of the UV and optical continuum points compared to a $`\tau \propto \lambda ^{4/3}`$ curve. Our new measurements are more consistent with this dependence. While the UV and optical continuum time delays seem indicative of radiative reprocessing, the puzzle remains: what radiation is being reprocessed? As Nandra et al. (1998) show, producing the UV and optical continuum in NGC 7469 via reprocessing of the X-ray radiation is not energetically feasible, nor does it have the requisite time dependence. Simultaneous EUVE, ASCA, and RXTE observations of NGC 5548 by Chiang et al. (1999) show that the X-ray variations lag the EUV variations, and that therefore the EUV cannot be produced via reprocessing of the hard X-rays. Nandra et al.'s detailed comparison of the X-ray and UV continuum light curves in the NGC 7469 campaign shows fairly complex behavior. The main positive correlation is a 4-day lag in which the UV leads the X-ray continuum. This is largely due to the peaks in the UV light curve leading the X-ray peaks.
In contrast, the light-curve minima are nearly simultaneous. Nandra et al. (1998) suggest that the longer-timescale X-ray variability is due to upscattering of UV seed photons from a variety of sources at different distances, leading to multiple lags. At high flux levels, the source of the UV seed photons lies at a distance of $`\sim 4`$ lt-days. In the flux minima, the seed photons arise closer to the X-ray production region. The most rapid X-ray variations are due to variations in the particle distribution of the scattering medium. In addition, they suggest that some portion of the EUV continuum is produced by X-ray reprocessing, and that this is what drives the line radiation. Such a scenario poses severe problems for the relative geometry of the continuum production zone and the broad-line cloud region (BLR), however. It is also at odds with the simultaneous EUVE, ASCA, and RXTE observations of NGC 5548 (Chiang et al. (1999)), which show that X-ray variations lag the EUV variations, so that the EUV cannot be produced via reprocessing of the hard X-rays. In NGC 7469, all the broad lines have measured lags $`<4`$ lt-days. If the X-ray radiation is produced closest to the black hole, the scenario proposed by Nandra et al. would imply that the EUV production zone and the BLR lie between the X-ray and UV production zones. Another problem is then one of scale: 4 lt-days from a $`10^7`$ $`M_{}`$ black hole corresponds to 7000 gravitational radii $`(GM/c^2)`$. This is a factor of more than 100 larger than the radius at which viscous dissipation in an accretion disk produces UV and optical continuum radiation. Producing the majority of the UV and optical radiation at such large radii requires a new, highly efficient dissipation mechanism.

TABLE 8 Cross-correlation Results

| Feature | $`\tau _{cent}`$ (days) | $`\tau _{peak}`$ (days) | $`r_{max}`$ | FWHM (days) |
| --- | --- | --- | --- | --- |
| $`F_\lambda `$(1315 Å) | $`0.00_{-0.09}^{+0.09}`$ | $`0.00_{-0.04}^{+0.05}`$ | 1.00 | 5.11 |
| $`F_\lambda `$(1485 Å) | $`0.09_{-0.08}^{+0.11}`$ | $`0.02_{-0.06}^{+0.04}`$ | 0.99 | 5.12 |
| $`F_\lambda `$(1740 Å) | $`0.28_{-0.13}^{+0.12}`$ | $`0.06_{-0.02}^{+0.14}`$ | 0.95 | 5.10 |
| $`F_\lambda `$(1825 Å) | $`0.36_{-0.17}^{+0.11}`$ | $`0.08_{-0.02}^{+0.20}`$ | 0.93 | 5.12 |
| $`F_\lambda `$(4945 Å) | $`1.17_{-0.33}^{+0.55}`$ | $`1.33_{-0.58}^{+0.35}`$ | 0.89 | 5.60 |
| $`F_\lambda `$(6962 Å) | $`1.68_{-0.82}^{+1.12}`$ | $`1.43_{-0.53}^{+1.67}`$ | 0.71 | 6.09 |
| Ly$`\alpha `$ | $`1.30_{-0.50}^{+0.61}`$ | $`1.76_{-1.06}^{+0.39}`$ | 0.56 | 6.61 |
| Ly$`\alpha `$+N v | $`1.48_{-0.38}^{+0.28}`$ | $`1.71_{-0.96}^{+0.04}`$ | 0.74 | 6.37 |
| N v | $`1.24_{-0.63}^{+0.41}`$ | $`0.58_{-0.13}^{+1.07}`$ | 0.59 | 5.99 |
| Si iv | $`1.50_{-0.71}^{+0.37}`$ | $`1.24_{-0.79}^{+0.56}`$ | 0.65 | 6.30 |
| C iv | $`2.81_{-0.87}^{+0.62}`$ | $`2.24_{-0.39}^{+1.26}`$ | 0.59 | 6.38 |
| He ii | $`0.67_{-0.64}^{+0.27}`$ | $`0.31_{-0.51}^{+0.64}`$ | 0.48 | 4.38 |
| C iii\] | ⋯ | ⋯ | 0.18 | ⋯ |
| Si ii | ⋯ | ⋯ | 0.13 | ⋯ |
| O i | ⋯ | ⋯ | 0.23 | ⋯ |
| C ii | $`1.33_{-1.86}^{+0.93}`$ | $`1.06_{-1.56}^{+1.29}`$ | 0.42 | 7.27 |
| N iv\] | ⋯ | ⋯ | 0.14 | ⋯ |
| O iii\] | ⋯ | ⋯ | 0.25 | ⋯ |
| N iii\] | ⋯ | ⋯ | 0.19 | ⋯ |
| Si iii\] | ⋯ | ⋯ | 0.24 | ⋯ |
| Si iii\]+C iii\] | ⋯ | ⋯ | 0.20 | ⋯ |

TABLE 9 Empirical Fit to NGC 7469 X-ray Spectrum

| Parameter | Best-fit Value |
| --- | --- |
| Photon index, $`\alpha `$ | $`2.14\pm 0.04`$ |
| Power law normalization, $`F_{1keV}`$ | $`\left(1.36\pm 0.04\right)\times 10^{-2}`$ $`\mathrm{phot}\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{keV}^{-1}`$ |
| $`N_{HI}`$ | $`\left(4.4\pm 1.0\right)\times 10^{20}\mathrm{cm}^{-2}`$ |
| Edge Energy, $`E_{O7}`$<sup>a</sup> | $`0.685\pm 0.021`$ keV |
| Optical Depth, $`\tau _{O7}`$ | $`0.21\pm 0.04`$ |
| Edge Energy, $`E_{O8}`$<sup>a</sup> | $`0.848\pm 0.021`$ keV |
| Optical Depth, $`\tau _{O8}`$ | $`0.13\pm 0.03`$ |
| Narrow Fe Energy<sup>a</sup> | $`6.345\pm 0.031`$ keV |
| Narrow Fe EW | $`47\pm 18`$ eV |
| Narrow Fe width, $`\sigma `$ | Fixed at 0.0 |
| Broad Fe Energy<sup>a</sup> | $`7.03\pm 0.29`$ keV |
| Broad Fe EW | $`3.14\pm 0.82`$ keV |
| Broad Fe width, $`\sigma `$ | $`2.24\pm 0.48`$ keV |
| $`\chi ^2`$/dof | 482.11/412 |

<sup>a</sup>Energy in the rest frame of NGC 7469, $`z=0.0164`$.

### 5.2. UV and X-ray Absorption in NGC 7469

The far-UV spectrum of NGC 7469 as seen with the FOS is typical of other Seyfert 1s and low-redshift AGN. The intrinsic absorption lines, while hinted at in earlier IUE spectra, show up clearly. As in most other Seyfert 1s in which these features are seen, their equivalent widths (EWs) of 1 Å or less are difficult to detect in the lower-resolution, lower-S/N IUE spectra. With the FOS (and the GHRS), they are now seen to be a common feature of Seyfert 1s (Crenshaw et al. (1999)), as common as the “warm absorbers” seen in ROSAT and ASCA X-ray spectra (e.g., Turner et al. (1993); Mathur et al. (1994); Nandra & Pounds (1994); Fabian et al. (1994); Reynolds (1997); George et al. (1998)). Mathur et al. (1994, 1995) have suggested a link between the two phenomena, in which the UV absorption lines are produced by the minority ions in the photoionized gas that produces the X-ray absorption.

Fig. 13.— Upper Panel: The solid lines are the best-fit warm absorber model for NGC 7469 folded through the ASCA SIS0 and SIS1 detector responses. The data points are crosses with 1$`\sigma `$ error bars. The model includes a power law with photon index 2.25, absorption by neutral gas with an equivalent neutral hydrogen column of $`\mathrm{N}_\mathrm{H}=5.8\times 10^{20}\mathrm{cm}^{-2}`$, absorption by ionized gas with a total column density log $`N_{tot}=21.6\mathrm{cm}^{-2}`$ and an ionization parameter of $`U=2.0`$, an unresolved iron K$`\alpha `$ line at 6.24 keV with an equivalent width of 46 eV, and a broad (FWHM = 5.9 keV) iron K$`\alpha `$ line at 6.78 keV with an equivalent width of 4 keV. Lower Panel: The contributions to $`\chi ^2`$ of each spectral bin are shown. The solid line is for SIS0 and the dotted line for SIS1.

To test whether this is consistent with the strength of the UV absorption lines seen in NGC 7469, we can calculate the column densities of the UV ion species using our warm-absorber model fit to the X-ray spectrum. If this single-zone model can simultaneously account for both the X-ray and the UV absorbers, then the observed EWs of the UV lines should fall on a single curve of growth consistent with the model. In Fig. 15 we plot the observed EWs of the Ly$`\alpha `$, N v, and C iv absorption lines at the column densities predicted by the best-fit X-ray warm-absorber model. One can see that this is not a self-consistent description of both the X-ray- and UV-absorbing gas. In particular, the strength of the C iv absorption lines is much higher than would be predicted for the residual column in gas ionized sufficiently to produce the observed X-ray absorption. The observed UV absorption is more consistent with lower-column-density gas at a lower ionization parameter. Fig. 16 shows curves of growth for a photoionization model that provides the best match to the observed UV absorption lines. With a total column density of log $`N_{tot}=19.2\mathrm{cm}^{-2}`$ and an ionization parameter of $`U=0.04`$, the observed EWs of Ly$`\alpha `$, N v, and C iv are nearly all consistent with gas having a Doppler parameter of $`\sim 25\mathrm{km}\mathrm{s}^{-1}`$.
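The curve-of-growth construction behind Figs. 15 and 16 can be sketched for a pure Doppler (Gaussian) profile as follows; damping wings are neglected, and the column density in the usage note is illustrative rather than a fitted value.

```python
# Sketch: equivalent width of an absorption line on a Doppler curve of growth.
# Uses the standard line-centre optical depth tau_0 = 1.497e-15 N f lambda / b
# (N in cm^-2, lambda in Angstroms, b in km/s); Voigt damping wings ignored.
import numpy as np

def equivalent_width(N, f, lam, b):
    tau0 = 1.497e-15 * N * f * lam / b
    v = np.linspace(-10.0 * b, 10.0 * b, 4001)               # velocity grid (km/s)
    tau = tau0 * np.exp(-((v / b) ** 2))
    return np.trapz(1.0 - np.exp(-tau), v) * lam / 2.998e5   # EW in Angstroms

# e.g., C IV 1548 (oscillator strength f ~ 0.19) for an assumed column:
# equivalent_width(N=10**14.5, f=0.19, lam=1548.19, b=25.0)
```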
The total column of this UV-absorbing component is low enough that it would have negligible effect on the appearance of the X-ray spectrum. Similarly, as shown by a comparison of Figures 15 and 16, the X-ray absorbing gas makes little contribution to the UV absorption lines.

TABLE 10
Warm Absorber Fit to NGC 7469 X-ray Spectrum

| Parameter | Best-fit Value |
| --- | --- |
| Photon index, $`\alpha `$ | $`2.25\pm 0.06`$ |
| Power law normalization, $`F_{1keV}`$ | $`\left(1.56\pm 0.08\right)\times 10^{-2}`$ $`\mathrm{phot}\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{keV}^{-1}`$ |
| $`N_H`$ | $`\left(5.8\pm 1.2\right)\times 10^{20}\mathrm{cm}^{-2}`$ |
| Total Column Density, log $`N_{tot}`$ | $`21.6\pm 0.08`$ $`\mathrm{cm}^{-2}`$ |
| Ionization Parameter, $`U`$ | $`2.0\pm 0.4`$ |
| Redshift, $`z`$ | $`0.058\pm 0.0012`$ |
| Narrow Fe Energy | $`6.239\pm 0.036`$ keV |
| Narrow Fe EW | $`46\pm 34`$ eV |
| Narrow Fe width, $`\sigma `$ | Fixed at 0.0 |
| Broad Fe Energy | $`6.78\pm 0.33`$ keV |
| Broad Fe EW | $`4.1\pm 1.2`$ keV |
| Broad Fe width, $`\sigma `$ | $`2.5\pm 0.6`$ keV |
| $`\chi ^2`$/dof | 484.20/413 |

Fig. 14.— Time delays vs. wavelength for the IUE continuum bands and the optical bands presented by Collier et al. (1998). The gray open circles represent the original measurements of the IUE data from Wanders et al. (1997) and the optical data from Collier et al. (1998). The filled black circles are the new measurements from this paper. The solid line shows the $`\lambda ^{4/3}`$ dependence via the function $`\tau =3.0\left(\left(\lambda /10^4\mathrm{\AA }\right)^{4/3}-\left(1315\mathrm{\AA }/10^4\mathrm{\AA }\right)^{4/3}\right)`$.

Thus NGC 7469 is yet another instance of a Seyfert galaxy possessing a complex assortment of absorbing regions. This was previously shown to be true for NGC 4151 (Kriss et al. 1995) and for NGC 3516 (Kriss et al. 1996a). The case of NGC 3516 is particularly illuminating since high resolution UV spectra show that multiple kinematic components are present, an additional indication that multiple regions are contributing to the absorption (Crenshaw, Maran, & Mushotzky 1998). The Seyferts NGC 4151 (Weymann et al. 1997) and NGC 5548 (Crenshaw et al. 1999; Mathur, Elvis, & Wilkes 1999) also appear kinematically complex when observed at high spectral resolution. In fact, in NGC 5548, the prototypical example of a combined “XUV” absorber (Mathur, Wilkes, & Elvis 1995), Mathur et al. (1999) now acknowledge that at most one of the six different kinematic components visible in the high-resolution UV spectrum actually arises in the X-ray absorbing zone. Crenshaw and Kraemer (1999) have identified the kinematic component with the highest blueshift as the one associated with the X-ray absorber in NGC 5548.

Fig. 15.— The observed EWs of the UV absorption lines in NGC 7469 are plotted on curves of growth using column densities predicted by the single warm absorber fit to the ASCA X-ray spectrum. This model has $`U=2.0`$ and a total column density of log $`N_{tot}=21.6`$ $`\mathrm{cm}^{-2}`$. Points are plotted at a horizontal position determined by the column density for the given ion in the model with a vertical coordinate determined by the observed EW for the corresponding absorption line. The vertical error bars are from Table 2, and the horizontal error bars are the range in column density allowed by the uncertainty in the fit to the ASCA spectrum. The thin solid lines show predicted EWs as a function of column density for Voigt profiles with Doppler parameters of $`b=10`$, 20, 30, 40, 50, 100, 200, and $`500\mathrm{km}\mathrm{s}^{-1}`$. A model that fits the data would have all points lying on one of these curves. This model cannot simultaneously match both the UV and the X-ray absorption.
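Separately, the continuum-lag relation quoted in the Fig. 14 caption can be evaluated directly against the measured centroids from Table 8. The following sketch (illustrative only; delays in days) does the arithmetic:

```python
# Sketch: evaluate the lambda^(4/3) reprocessing relation from the Fig. 14
# caption and compare with the measured continuum delays from Table 8.
def tau_days(lam_A):
    """Time delay (days) relative to 1315 A, per the fitted relation."""
    return 3.0 * ((lam_A / 1e4) ** (4.0 / 3.0) - (1315.0 / 1e4) ** (4.0 / 3.0))

measured = {1485: 0.09, 1740: 0.28, 1825: 0.36}   # tau_cent (days), Table 8
for lam, obs in measured.items():
    print(f"{lam} A: model {tau_days(lam):.2f} d, observed {obs:.2f} d")
print(f"4945 A: model {tau_days(4945):.2f} d (observed 1.17 d)")
```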
Fig. 16.— As in Fig. 15, but for a photoionization model with $`U=0.04`$ and log $`N_{tot}=19.2`$ $`\mathrm{cm}^{-2}`$. A curve of growth with $`b\approx 25\mathrm{km}\mathrm{s}^{-1}`$ can match nearly all the observed EWs.

While most Seyferts with UV and X-ray absorption appear to have a complex assortment of absorbing regions with a broad range of physical conditions, this does not mean that these physically distinct regions are unrelated. Since UV and X-ray absorption (or the lack of both) appears to be linked in most Seyferts (Crenshaw et al. 1999), it is likely that a common mechanism is responsible for both. Possibilities for this mechanism include outflows of material ablated from the obscuring torus (Weymann et al. 1991; Kriss et al. 1995), or a wind from the accretion disk (Königl & Kartje 1994; Murray et al. 1995). A natural origin for the separate UV and X-ray absorbing clouds would be to have higher density clumps embedded in a more tenuous wind. The smaller, higher density clumps would have lower total column densities and lower ionization parameters, a requirement for the UV absorbers. The tenuous surrounding wind (which may well have a range of physical conditions itself, e.g., Kriss et al. 1996b) could be the source for the X-ray warm absorber. In such a scenario one might also expect to see correlated variability in the total column density of the X-ray and UV absorbers related to “events” in which new material was ablated from the torus or accretion disk into the outflowing wind.

## 6. Summary

We have used a high S/N FOS spectrum of NGC 7469 to produce a model template for extracting deblended emission line and continuum fluxes from the series of IUE spectra obtained in the 1996 monitoring campaign. The FOS spectrum shows that “continuum” windows at 1315 Å, 1740 Å and 1825 Å used by Wanders et al. (1997) in the original analysis have significant contaminating contributions from the wings of the broad emission lines and other low-level features such as O i $`\lambda 1304`$ and Fe ii emission lines. Our new extractions for the most part eliminate these contaminating components from the measured fluxes. Using these cleaner data, we still find a time delay in the response of the continuum flux at longer wavelengths relative to shorter wavelengths. We find time delays of 0.09, 0.28, and 0.36 days for the fluxes at 1485 Å, 1740 Å and 1825 Å, respectively, relative to F(1315 Å). When combined with the delays measured for the optical continuum by Collier et al. (1998), we find that the wavelength dependence of the time-delay follows a $`\lambda ^{4/3}`$ relation that is consistent with the simplest models of radiative reprocessing. The FOS spectrum of NGC 7469 reveals associated absorption in the high-ionization lines N v, C iv and Ly$`\alpha `$, a common feature of Seyfert galaxies (Crenshaw et al. 1999). The X-ray spectrum of NGC 7469 also shows evidence for an ionized absorber (Reynolds 1997; George et al. 1998), and we have analyzed the UV and X-ray absorbers in the context of a single UV/X-ray absorber (Mathur, Wilkes, & Elvis 1995). We find, however, that such a unified description is untenable. The predicted column densities of UV-absorbing ions in the best-fitting warm absorber model for the X-ray spectrum imply line strengths well below those observed.
The UV absorption requires gas with a lower ionization parameter and lower column density. Even though the X-ray and UV absorption in this Seyfert and in many others requires a complex assortment of kinematic components with different physical conditions, the fact that associated UV absorption and X-ray warm absorbers are often found in the same objects (Crenshaw et al. (1999)) suggests that the material for each of these absorbers has a common origin. Support for this work was provided by NASA through grant number GO-06747.01-95A from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. G. Kriss and B. Peterson acknowledge additional support from NASA Long Term Space Astrophysics grants NAGW-4443 to the Johns Hopkins University and NAG5-8397 to the Ohio State University, respectively.
# Experimental improvement of chaotic synchronization due to multiplicative time–correlated Gaussian noise
# Fragilities of Liquids Predicted from the Random First Order Transition Theory of Glasses ## Abstract A microscopically motivated theory of glassy dynamics based on an underlying random first order transition is developed to explain the magnitude of free energy barriers for glassy relaxation. A variety of empirical correlations embodied in the concept of liquid “fragility” are shown to be quantitatively explained by such a model. The near universality of a Lindemann ratio characterizing the maximal amplitude of thermal vibrations within an amorphous minimum explains the variation of fragility with a liquid’s configurational heat capacity density. Furthermore the numerical prefactor of this correlation is well approximated by the microscopic calculation. The size of heterogeneous reconfiguring regions in a viscous liquid is inferred and the correlation of nonexponentiality of relaxation with fragility is qualitatively explained. Thus the wide variety of kinetic behavior in liquids of quite disparate chemical nature reflects quantitative rather than qualitative differences in their energy landscapes. It is believed all classical fluids could form glasses if cooled sufficiently fast so as to avoid crystallization. Central to glass formation is a dramatic slowing of molecular motions on cooling the liquid. The existence of a universal description of glass transitions is suggested by empirical observations connecting deviations from the Arrhenius law for the slowing of rates, nonexponential relaxations in the super cooled liquid state and the behavior of thermodynamic properties on cooling(1). Quantitative differences in behavior of different substances sometimes obscure this universality. This has led to a classification of liquids into “fragile” ones like o-terphenyl, having the most dramatic deviations from the Arrhenius law, and into “strong” ones like pure SiO<sub>2</sub> where the Arrhenius equation works well(1). In this paper, we show how the fragile versus strong behavior of liquids can be understood within a microscopically motivated theory based on the idea that glassy dynamics is caused by an underlying thermodynamic, ideal “random first order” transition(2-7). The notion that a random first order transition lies at the heart of glass formation received its early theoretical support from the remarkable confluence of approximate microscopic theories of the liquid glass transition(8-10) and the behavior of a large class of exactly solvable statistical mechanical models of spin glasses with quenched disorder(11). Two closely connected theories of the liquid glass transition suggest features similar to first order transitions. One of these, the so-called mode-mode coupling theory(8,12), focuses on the feedback between the slow fluctuations of fluid density in a molecule’s environment on the motion of that molecule. This theory predicts a sharp transition in the dynamics as well as a characteristic behavior of the time correlation functions near the predicted transition. Mössbauer effect(13) and neutron scattering(14) are roughly consistent with these precursor phenomena. At temperatures below the transition, mode coupling theory predicts the freezing of the liquid’s configuration near to a given random configuration; i.e., there is broken ergodicity. 
Another approach to the glass transition directly addresses broken ergodicity by investigating the stability of a frozen density wave using either self-consistent phonon theory(9) or the density functional theory of liquids, applying them to aperiodic structures (10,15). The mode coupling, self-consistent phonon and density functional approaches all predict that there is a Lindemann criterion for the stability of an aperiodic density wave: just as for a periodic crystalline solid, thermal vibrations cannot yield a root mean square displacement of particles from their fiducial location exceeding roughly one tenth of the interparticle spacing. The precise value of the Lindemann ratio only weakly depends on the detailed intermolecular forces. The predicted Lindemann ratio corresponds well to the experimentally measured magnitude of the intermediate time plateau in the structure function measured by neutrons(14). A finite Lindemann ratio would be consistent with a first order phase transition, but glass transitions in the laboratory do not show a latent heat as ordinary first order transitions do. This lack of latent heat is explained by the existence of the large number of aperiodic structures that may be frozen in at a glass transition, in contrast to the unique periodic structure formed in ordinary crystallization. Many exactly solvable models of disordered magnetic systems have been shown to exhibit freezing into many structures(11,16,17). The major class of these also shows a first order jump in a locally defined order parameter without any latent heat. This defines what has been called a “random first order” transition. Unlike Ising spin glasses, these models possess no symmetry between local states but have long range, quenched random interactions. Such systems include Potts spin glasses(11), $`p`$-spin glasses(17), and the elegantly solved Random Energy Model(16). There are further parallels between these systems and the phenomenology of glass forming liquids; most notably, both glass forming liquids and these models exhibit a Kauzmann entropy crisis, i.e., the configurational entropy vanishes at a finite temperature above absolute zero(18). This crisis would define an underlying ideal glass transition. Whether the crisis for liquids would be avoided in some way at temperatures lower than those where measurements have been made is controversial, and is of limited relevance to describing the observed behavior using the analogy. In the exactly solvable statistical mechanical models, a dynamic transition occurs at a high temperature T<sub>A</sub>, coincident with mode coupling and stability analyses, but the thermodynamic transition does not occur until a lower temperature, T<sub>K</sub>, at which the configurational entropy of different frozen solutions vanishes(4). The idea then is that the glassy dynamics in the measured temperature range is governed by the approach to an ideal glass transition of the kind exhibited in the exactly solved models. There are two seeming differences between the exactly solved models and the situation for the liquid-glass transition. First, in liquids there is no quenched randomness; it must be self-generated. Second, while the models have infinite range forces, interactions in liquids are of finite range. The absence of quenched randomness has been addressed by exhibiting several mean field models without quenched randomness that do generate randomness internally(19-21).
Also the formal statistical mechanical tools used for quenched random Hamiltonians, e.g., the replica technique, have been shown to be applicable to atomic fluid systems with self-generated randomness(7). Furthermore, computer simulations of fluid glass transitions show replica symmetry breaking like a random first order transition(22). The consequences of finite range interactions are more important. The finite range causes the dynamic transition at T<sub>A</sub>, like a spinodal of an ordinary first order transition, to be smeared out. It becomes a crossover to activated dynamics. Below T<sub>A</sub>, motions in the finite range system can still occur that involve the rearrangement of large regions of the liquid. The transition to such collective activated events in liquids has been confirmed in simulations(23,24). The events are driven by the configurational entropy. For finite range systems approaching a random first order transition, an “entropic droplet” scaling argument for the activation barriers naturally explains the non-Arrhenius transport behavior and leads to the Vogel-Fulcher law(4,5). The idea that configurational entropy is needed for motions in glasses predates the random first order transition theory and was described by Adam and Gibbs(25). The older argument is really quite different from the random first order transition theory, since it provides no explanation for how a rearranging unit’s activation energy is related to the microscopic forces. Here we show how the near universality of the Lindemann ratio explains the connection between barrier heights and thermodynamics for liquids of varying fragility. The naive density functional approach used to obtain the Lindemann criterion for vitrification allows an estimate for the free energy of dynamic rearrangements. The density functional(10,26) assesses the cost of forming any density wave by breaking the free energy into an entropic localization penalty and an interaction term.

$$F=\int f(\rho (𝐫))d^3𝐫=k_BT\int d^3𝐫\rho (𝐫)\left[\mathrm{ln}\rho (𝐫)-1\right]+\int d^3𝐫\int d^3𝐫^{\prime }(\rho (𝐫)-\rho _0)c(𝐫-𝐫^{\prime })(\rho (𝐫^{\prime })-\rho _0),$$ (1)

where $`\rho _0`$ is the mean density. The localization cost is the same as for a perfect gas while the interaction term involves the direct correlation function of the liquid, a renormalized form of the bare interaction potential. The direct correlation function is determined by the condition that the functional gives small fluctuations in density reproducing the static liquid structure factor. Higher order terms in the density can also be included. In the frozen aperiodic state the density wave is decomposed into a sum of Gaussians centered around random lattice sites, $`\rho (𝐫)=\sum _i(\frac{\alpha }{\pi })^{3/2}\mathrm{exp}(-\alpha (𝐫-𝐫_i)^2)`$, where $`\alpha `$ represents the effective local spring constant that determines the rms displacement from the fiducial lattice site. The localization sites are $`\{𝐫_i\}`$. For large $`\alpha `$, the densities around different sites overlap weakly giving

$$\frac{F}{N}=k_BT\left[\frac{3}{2}\mathrm{ln}\left(\frac{\alpha r_0^2}{\pi }\right)-\frac{5}{2}\right]+\frac{1}{N}\int d^3𝐫\int d^3𝐫^{\prime }(\rho (𝐫)-\rho _0)c(𝐫-𝐫^{\prime })(\rho (𝐫^{\prime })-\rho _0),$$ (2)

where $`N`$ is the total number of particles and $`r_0`$ is the mean lattice spacing. We can take $`\rho _0r_0^3=1`$.
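To give a feel for the magnitudes in Eq. (2), the entropic localization penalty can be evaluated at the degree of localization that turns out to be relevant below ($`\alpha _Lr_0^2\approx 100`$); the following is an illustrative sketch only:

```python
import math

# Sketch: entropic localization cost per particle from Eq. (2),
# (3/2) ln(alpha r0^2 / pi) - 5/2, in units of k_B T, evaluated at the
# Lindemann-like localization alpha r0^2 = 100 (an assumed value).
alpha_r0sq = 100.0
f_loc = 1.5 * math.log(alpha_r0sq / math.pi) - 2.5
print(f"localization penalty ~ {f_loc:.2f} k_B T per particle")  # ~2.7 k_B T
```

This is the free energy that the interaction term must recover for the aperiodic (glassy) minimum to be competitive with the uniform liquid.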
A similar free energy expression is obtained from self-consistent phonon theory where the direct correlation function is replaced by the Mayer function $`f=e^{-\beta u(r)}-1`$ for hard potentials(10) or by the potential itself(7). The free energy varies with the particular arrangement of sites $`\{𝐫_i\}`$, but assuming $`\alpha `$ is constant, the mean free energy of aperiodic structures is plotted in Figure 1(a). The lowest value of $`\alpha `$ for which a secondary minimum occurs is given by the Lindemann value $`\alpha _L`$. This minimum representing the frozen wave is higher in free energy than the $`\alpha =0`$ fluid phase. For the exactly solvable random first order transitions the excess free energy of the frozen solution is known to equal the configurational entropy of possible mean field solutions, $`TS_c`$. For the fluid system in addition to the $`\alpha =0`$ and $`\alpha \approx \alpha _L`$ uniform stationary solutions of the variational equation $`\delta F=0`$, there are saddle points representing droplet configurations in which a region of low $`\alpha \approx 0`$ forms in the midst of a given large $`\alpha `$ solution (see Figure 1(b)). This saddle point is a transition state for reconfiguring the frozen density wave. Within the melted region there is a multiplicity of states corresponding to other aperiodic arrangements of the atoms. Much below T<sub>A</sub>, the interface should be quite sharp, that is, in a single atomic layer $`\alpha `$ changes from a value near $`\alpha _L`$ to near zero. Close to T<sub>A</sub> the transition should be smoother with $`\alpha `$ slowly varying over many atomic layers. In both cases, there will arise a surface tension $`\sigma `$ reflecting the deviation of $`\alpha `$ in the layers with $`\alpha `$ different from the bulk free energy minima values. The density functional expression for the droplet free energy then is given as a function of the radius of the droplet much as in conventional nucleation,

$$F(r)=-\frac{4}{3}\pi Ts_cr^3+4\pi \sigma r^2.$$ (3)

Here $`s_c`$ is the configurational entropy density. The maximum of $`F(r)`$ gives a reconfiguration barrier $`\mathrm{\Delta }F^{\ddagger }=\frac{16}{3}\pi \sigma ^3/(Ts_c)^2`$. A detailed calculation of this barrier for a specific glassy system, the random heteropolymer, has been given by Takada and Wolynes(27). This naive droplet result(4,28) differs from the Adam Gibbs suggestion $`\mathrm{\Delta }F^{\ddagger }=s_c^{*}\mathrm{\Delta }\mu /s_c`$(25), where $`\mathrm{\Delta }\mu `$ is a bulk “activation energy” per particle and $`s_c^{*}`$ is the “critical configurational entropy”, usually taken to be $`k_B\mathrm{ln}2`$. The AG formula is not the result of a self-contained microscopic calculation but assumes the free energy cost of dynamically reconfiguring a region is independently given from the free energies that determine the low energy structures themselves. There is no apparent reason to assume $`\mathrm{\Delta }\mu `$ a constant for different substances. On the other hand, the modern random first order transition theory does suggest universality for $`\sigma `$ based on the universality of the Lindemann ratio $`\alpha _L^{-1/2}/r_0`$. We see this in the following way: assuming a sharp interface between the localized and delocalized regions, the energy associated with the interface should be one-half of the interaction part of the free energy in the bulk stable phase. Therefore, $`\sigma =\frac{T}{2}r_0[\frac{3}{2}nk_B\mathrm{ln}(\frac{\alpha r_0^2}{\pi e})-s_c(T)]`$, where $`n`$ is the density of particles.
Since the localization part of the free energy depends only logarithmically on $`\alpha `$, we can replace $`\alpha `$ by its minimum value $`\alpha _L`$ which it achieves at T<sub>A</sub>. Near T<sub>K</sub>, on the other hand, we can neglect the configurational entropy part of the expression. For temperatures between T<sub>A</sub> and T<sub>K</sub>, the errors of making these two approximations largely cancel. This gives $`\sigma =\frac{3}{4}nr_0k_BT\mathrm{ln}(\frac{\alpha _Lr_0^2}{\pi e})=\sigma _0`$ as an approximation for temperatures much below T<sub>A</sub>. The universality of the Lindemann ratio $`\alpha _L^{-1/2}/r_0`$ means $`\sigma /nr_0k_BT`$ is nearly universal and therefore that $`\mathrm{\Delta }F^{\ddagger }`$ increases more rapidly with cooling for substances with a large configurational heat capacity. This explains the empirical correlation that strong liquids with nearly Arrhenius rate slowing have small excess heat capacities contrasting with fragile liquids having large excess heat capacity with dramatically non-Arrhenius slowing. Near T<sub>A</sub> the interface broadens and the sharp interface approximation breaks down. A gradient expansion of the free energy as a function of $`\alpha `$ yields a surface energy vanishing near T<sub>A</sub>. The universal value of $`\sigma _0`$ is only approximate. For a given substance, the remaining temperature dependence of $`\sigma `$ from the broadening of the interface implies that the apparent fragility of liquids measured at high temperature should be larger than that measured at low temperature, as noted by Angell in his detailed survey of viscosity data(29). Similarly we note that $`\sigma `$ depends on the density and therefore the pressure. Thus although a kinetic glass transition defined by a specific numerical barrier height or fiducial relaxation time will be largely a function of the configurational entropy density there will be another explicit but weak thermodynamic dependence on pressure too. Consistent with Nieuwenhuizen’s recent analysis of the dynamic effects on glass transitions caused by pressure and temperature change(30), this could explain the mild deviation of the Prigogine-deFay ratio from 1. While the simple density functional calculation explains qualitatively the fragility/heat capacity density correlation, viscosity data are more consistent with an $`s_c^{-1}`$ scaling for the free energy of activation (like that suggested by Adam and Gibbs(25)) rather than the $`s_c^{-2}`$ behavior predicted from the simple density functional theory. The scaling theory of the entropic droplet formulation already accounts for this observation(5). The modification comes from the complexity of the interface between aperiodic crystalline minima(5). Correct scaling near T<sub>K</sub> is restored by the wetting of droplets corresponding to one particular density wave by a surface coating corresponding to a different aperiodic arrangement. This acts to lower the surface energy much like what happens in the random field Ising model(31). Wetting for a random system gives a surface tension that depends on the radius of the drop. This $`r`$ dependent energy yields $`s_c^{-1}`$ scaling when the thermodynamic critical exponents for the random first order transition are used. We now reprise this argument based on a similar one for the random field Ising magnet(31) in Figure 1 (b).
The wetting argument leads to a differential renormalization group equation for $`\sigma (r)`$,

$$\sigma ^{1/3}d\sigma =-\left(4^{-1/3}-4^{-4/3}\right)\left(T\sqrt{k_B\mathrm{\Delta }\stackrel{~}{c_p}}\right)^{4/3}r^{-5/3}dr,$$ (4)

where $`\mathrm{\Delta }\stackrel{~}{c_p}`$ is the heat capacity jump per unit volume. This renormalization group equation is integrated outward from $`r_0`$ where the short range value is set by the naive density functional theory without wetting discussed earlier, $`\sigma _0`$. Between T<sub>K</sub> and T<sub>A</sub>, $`\sigma (r)`$ vanishes at large distance and is only finite below T<sub>K</sub>. Using this boundary condition, the solution of the renormalization equation for $`\sigma (r)`$ at T<sub>K</sub> is then

$$\sigma (r)=\sigma _0\left(\frac{r_0}{r}\right)^{1/2}.$$ (5)

When this is substituted into the expression for $`F(r)`$, one finds that the maximum gives a barrier, $`\mathrm{\Delta }F^{\ddagger }`$, which now varies inversely to the first power of the configurational entropy density; i.e., the Vogel-Fulcher scaling. We find a simple expression for the activation barrier:

$$\mathrm{\Delta }F^{\ddagger }=\frac{3\pi \sigma _0^2r_0}{Ts_c}=\frac{3\pi \sigma _0^2r_0}{T\mathrm{\Delta }\stackrel{~}{c_p}}\frac{T_K}{T-T_K}=k_BTD\frac{T_K}{T-T_K}.$$ (6)

The coefficient $`D`$ in this expression has been called the liquid’s fragility, which has the expression

$$D=\frac{27}{16}\pi \frac{nk_B}{\mathrm{\Delta }\stackrel{~}{c_p}}\mathrm{ln}^2\frac{\alpha _Lr_0^2}{\pi e}.$$ (7)

Based on the Lindemann ratio universality, the root mean square displacement, $`\alpha _L^{-1/2}`$, is taken as $`0.1r_0`$, the hard sphere value, so that $`D`$ can be expressed in terms of the heat capacity jump per mole, $`\mathrm{\Delta }c_p`$,

$$D=32R/\mathrm{\Delta }c_p,$$ (8)

where $`R=8.31`$ J mole<sup>-1</sup> K<sup>-1</sup>. The value of $`D`$ depends on the heat capacity jump per mole which varies greatly from substance to substance and is far from being universal. In Figure 2 we plot the $`D`$ predicted from this theory versus the inverse of the configurational heat capacity for several glass forming liquids. The straight line is given by Equation (8). Superimposed on the plot are the experimental values of the $`D`$. The agreement is excellent. We see that the magnitude of the activation barriers for rearrangement of the viscous liquid depends on the difference in temperature from T<sub>K</sub>, on universal microscopic parameters connected with the Lindemann ratio and on the excess heat capacity connected with configurational excitations. A hallmark of the random first order transition theory of glass dynamics is the dynamic heterogeneity required to explain the growing barriers upon cooling. After combining Equations (3) and (5) with our expression for $`\sigma _0`$, a little algebra shows the characteristic size of a rearranging region is $`\frac{\xi }{r_0}=2(\frac{2}{3\pi \mathrm{ln}\frac{\alpha _Lr_0^2}{\pi e}})^{2/3}(\frac{DT_K}{T-T_K})^{2/3}`$. This can be expressed as a universal function of the relaxation time $`\frac{\xi }{r_0}=2(\frac{2}{3\pi \mathrm{ln}\frac{\alpha _Lr_0^2}{\pi e}})^{2/3}(\mathrm{ln}\frac{\tau }{\tau _0})^{2/3}`$, since $`\tau =\tau _0\mathrm{exp}(\frac{DT_K}{T-T_K})`$ according to the Vogel-Fulcher law. This is plotted in Figure 3. The kinetic laboratory glass transition occurs when molecular slowing gives relaxations in the hours range, i.e., $`\frac{\tau }{\tau _0}=10^{17}`$. Thus at T<sub>g</sub>, $`(\frac{\xi }{r_0})\approx 4.5`$, a rather modest size.
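Equations (7) and (8) and the mosaic-size formula can be checked numerically; the short sketch below (illustrative only, taking $`\alpha _Lr_0^2=100`$ from the Lindemann ratio of 0.1) reproduces the quoted prefactor of 32 and the value $`\xi /r_0\approx 4.5`$ at the laboratory glass transition:

```python
import math

# Sketch: numerical check of Eqs. (7)-(8) and of the mosaic size at T_g.
R = 8.31                                         # J mol^-1 K^-1
ln_term = math.log(100.0 / (math.pi * math.e))   # ln(alpha_L r0^2 / (pi e))

prefactor = (27.0 / 16.0) * math.pi * ln_term**2
print(f"D * dCp / R = {prefactor:.1f}")          # ~32, i.e. D = 32 R / dCp

# xi/r0 as a function of relaxation time, evaluated at tau/tau0 = 10^17:
xi_over_r0 = 2.0 * (2.0 / (3.0 * math.pi * ln_term)) ** (2.0 / 3.0) \
             * math.log(1e17) ** (2.0 / 3.0)
print(f"xi/r0 at the laboratory T_g: {xi_over_r0:.1f}")   # ~4.5
```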
We also note the universality of $`\sigma _0`$ suggests that $`s_c`$ is nearly the same for all substances at the laboratory glass transition. Roughly 90 molecules are involved in a rearranging unit according to the random first order transition theory at the conventionally defined glass transition temperature. The rearranging unit according to the Adam-Gibbs argument is a region just capable of having two states, therefore $`(\frac{\xi _{AG}}{r_0})=(\frac{R\mathrm{ln}2}{s_c})^{1/3}`$. $`\xi _{AG}`$ grows slowly as T<sub>K</sub> is approached in contrast to the random first order transition theory. The Adam-Gibbs argument gives rearranging units with at most 10 molecules near T<sub>g</sub> for the most fragile substances. It is clearly very ambiguous to have such small “cooperative” units. Recent observations of structural heterogeneities are inconsistent with units of the small size predicted by Adam-Gibbs, but are in harmony with the estimates of the random first order transition entropic droplet picture(32). A single size does not characterize the viscous liquid completely. The random first order transition entropic droplet picture actually leads to a “mosaic” structure of the supercooled liquid(33) with cooperative regions only somewhat larger than the critical droplet size $`\xi `$. These regions fluctuate in size and therefore have different flipping rates because of the configurational entropy fluctuations whose magnitude depends on the configurational heat capacity density jump $`\mathrm{\Delta }\stackrel{~}{c_p}`$ and the volume of the rearranging region, $`\mathrm{\Delta }S_c=\sqrt{k_B\mathrm{\Delta }\stackrel{~}{c_p}\xi ^3}`$. At $`T_g`$ both strong and fragile liquids have roughly the same absolute scale for their mosaic structures, i.e., $`\xi `$ is nearly universal at the laboratory glass temperature. It follows that the range of activation barriers is smaller for strong than for fragile liquids because of their smaller $`\mathrm{\Delta }\stackrel{~}{c_p}`$. This is in accord with the observed correlation between growing nonexponentiality of relaxation with growing fragility(34). We conclude that the wide variety of kinetic behavior seen in liquids reflects quantitative rather than qualitative differences in their energy landscape. Furthermore the random first order transition approach coupled with microscopic considerations about the stability of aperiodic structures can account semi-quantitatively for these differences. Acknowledgment. P.G.W. gratefully acknowledges stimulating discussions with Shoji Takada. This work was supported by NSF grant CHE-9530680.

Figure 1 a, Free energy as a function of order parameter $`\alpha `$. Right below $`T_A`$, a second minimum emerges around $`\alpha \approx \alpha _L`$, which corresponds to a glassy state. The free energy difference between the liquid and glass state is $`TS_c(T)`$, which approaches zero at the Kauzmann temperature $`T_K`$. b, An illustration of a liquid-like (multiconfiguration) droplet inside a glassy region corresponding to a single mean field minimum free energy configuration. The interface is wetted by suitable configurations to lower the surface energy. One considers an inhomogeneous situation with a single minimum given by the density functional theory abutting another minimum as in a naive droplet solution with a radius of curvature $`r`$. Upon this surface, one erects a smaller droplet of one of the other solutions of the density functional theory as shown in the figure.
The free energy of interpolating this wetting phase is given by $`\delta F=\sigma (r)r^{d-1}(\frac{\zeta }{r})^2-\delta s_cr^{d/2}(\frac{\zeta }{r})^{1/2}`$. This additional free energy cost depends on the surface tension at the scale $`r`$, $`\sigma (r)`$, and on the fluctuations in driving force for forming this smaller wetting droplet. In the Ising model, the fluctuations in driving force for these droplets arise from the random part of the magnetic field. For a random first order transition, the field fluctuation’s role in the disordered magnet is played by the fluctuations of configurational entropy density. The magnitude of these fluctuations should be given by the usual Landau expression $`\mathrm{\Delta }S_c^2=k_B\mathrm{\Delta }C_p`$ where $`\mathrm{\Delta }C_p`$ is the configurational heat capacity of a region. The contribution to the free energy from the interpolating wetting droplet yields a change with size of the surface tension at size $`r`$, $`d\sigma `$ (Equation 4).

Figure 2 The fragility parameter $`D`$ as a function of the inverse heat capacity jump per mole. The glass formers chosen are those shown in Angell’s review article (a. Angell, C. A. Formation of glasses from liquids and biopolymers. Science 267, 1924–1935 (1995)). The solid line in the graph is calculated with the random first order transition model (Equations (7) and (8)) based on the universality of the Lindemann ratio and the points are from experiments. Data for the fragility parameter $`D`$ are found in a, b. Korus, J., Hempel, E., Beiner, M., Kahle, S. & Donth, E. Temperature dependence of $`\alpha `$ glass transition cooperativity. Acta Polymer 48, 369–378 (1997), c. Richert, R. & Angell, C. A. Dynamics of glass forming liquids V. On the link between molecular dynamics and configurational entropy. J. Chem. Phys. 109, 9016–9026 (1998) and references therein; data for the specific heat jump are found in d. Angell, C. A. & Smith, D. L. Test of the entropy basis of the Vogel-Tammann-Fulcher equation: Dielectric relaxation of polyalcohols near T<sub>g</sub>. J. Phys. Chem. 86, 3845–3852 (1982), e. Angell, C. A. & Torell, L. M. Short time structural relaxation process in liquids: Comparison of experimental and computer simulation glass transitions on picosecond time scales. J. Chem. Phys. 78, 937–945 (1983), and f. Torell, L. M., Ziegler, D. C. & Angell, C. A. Short time relaxation processes in liquids from viscosity and light scattering studies in molten KCl·2BiCl<sub>3</sub>. J. Chem. Phys. 81, 5053–5058 (1984). The heat capacity jump is given as per mole of “beads” (d and g. Wunderlich, B. Study of the change in specific heat of monomeric and polymeric glasses during the glass transition. J. Phys. Chem. 64, 1052–1056 (1960)) or “mobile units” (h. Schulz, M. Energy landscape, minimum points, and non-Arrhenius behavior of supercooled liquids. Phys. Rev. B 57, 11319–11333 (1998)). Generally speaking, “beads” are “rearrangeable elements in a relaxing liquid” (d). We have used the bead count of previous workers (g). The number of beads for GeO<sub>2</sub> and ZnCl<sub>2</sub> is 3 (d), 6 for glycerol (d), 10 for KCl·2BiCl<sub>3</sub> (f), 12 for 3KNO<sub>3</sub>·2Ca(NO<sub>3</sub>)<sub>2</sub> (e), 2 for m-fluorotoluene (g and i. Chang, S. S. & Bestul, A. B. Heat capacity and thermodynamic properties of o-terphenyl crystal, glass, and liquid. J. Chem. Phys. 56, 503–516 (1972)), and 3 for o-terphenyl (i).
The “bead” count is a crude way of accounting for the internal flexibility of the molecules (since the free energy functional is essentially that for a monatomic fluid). To illustrate the robustness of the correlation we indicate also the values for o-terphenyl, as shown by a star ($`\star `$), if its internal flexibility is ignored and it is assigned a bead count of one.

Figure 3 The correlation length $`\xi `$ (in units of the lattice spacing $`r_0`$) is shown as a function of relaxation time. The solid line is that predicted by random first order transition theory while the dashed line is the result of the Adam-Gibbs theory(25) assuming $`\mathrm{\Delta }c_p=51.8`$ J mole<sup>-1</sup> K<sup>-1</sup>, the value for PVAC (polyvinyl acetate)(25). The Adam-Gibbs result weakly depends on fragility. The point (and its error bar) gives the only “direct” measurement by Spiess et al. on PVAC(32). Results of many “indirect” measurements on different glass formers around the glass transition temperature also fall in the range of the Spiess experiment(32).
# Binary Black Hole Mergers from Planet-like Migrations

## 1 Introduction

Supermassive black holes (BH) are nearly ubiquitous in nearby galaxy nuclei (e.g. Ho 1999). These BHs formed very early, probably during the epoch of quasars, $`z\sim 2`$, and are now largely dormant remnants of quasars. In the hierarchical picture of structure formation, present day galaxies are the product of successive mergers (e.g. White 1996), and indeed there is evidence for many mergers in the high-$`z`$ universe (Abraham et al. 1996). Hence, it appears almost inevitable that modern galaxies should harbor, or at least should have once harbored, multiple BHs that were collected during their merger history (Kauffmann & Haehnelt 1999). BHs of mass $`M\gtrsim 10^7M_{\odot }`$ will quickly find their way to the center of a merger remnant by dynamical friction. Logically, there are only three possibilities. First, BH pairs could merge to form a single, larger BH. Second, the pairs of BHs could form binaries that would remain at galaxy centers to this day. Finally, a third BH could also fall in, leading to a three-body interaction violent enough to expel any number of the three BHs from the galaxy (Begelman, Blandford, & Rees 1980). While in principle this means that all three holes could be ejected, in practice such a violent ejection event is unlikely unless the binary’s internal velocity is much higher than the escape velocity from the galaxy ($`\sim 2000\mathrm{km}\mathrm{s}^{-1}`$); in this case, the binary would be in the late stages of merging anyway (see § 2). Since the broad lines of quasars are not often observed to be displaced from the narrow lines by such high velocities, the fraction of binaries with such high internal velocities cannot be large, and therefore triple ejection cannot be common. Hence, mergers generically produce BH binaries, and these binaries either merge on timescales short compared to a Hubble time, or they are present in galaxies today. Observationally, there is evidence only for a few massive BH binaries (e.g., Lehto & Valtonen 1996) and in none of these cases is the evidence absolutely compelling. Theoretically, it has proven difficult to construct viable merger scenarios for these BH binaries. Here we first review this difficulty of driving the merger by the stellar-dynamical means that are discussed in the literature. We then propose a gas-dynamical alternative.

## 2 Near Impossibility of Stellar Dynamics-Driven Mergers

If a BH binary could (somehow) be driven to a sufficiently small orbit, then gravitational radiation would increasingly sap energy from the system and so engender a merger. For a circular orbit with an initial velocity $`v_{\mathrm{gr}}`$, the time $`T`$ to a merger due to gravitational radiation is given by

$$v_{\mathrm{gr}}=c\left(\frac{5}{256}\frac{GM_{\mathrm{tot}}^2}{\mu Tc^3}\right)^{1/8}=3400\mathrm{km}\mathrm{s}^{-1}\left(\frac{M_{\mathrm{tot}}^2/\mu }{8\times 10^8M_{\odot }}\right)^{1/8}\left(\frac{T}{10\mathrm{Gyr}}\right)^{-1/8}$$ (1)

where $`M_{\mathrm{tot}}=M_1+M_2`$ is the total mass, $`\mu =M_1M_2/M_{\mathrm{tot}}`$ is the reduced mass, and where we have normalized to the case $`M_1=M_2=10^8M_{\odot }`$. Note that for fixed total mass, the equal-mass case gives a lower limit on this required velocity, and that the result depends only very weakly on the total mass. However, as we now show it is almost impossible to achieve this velocity by any conceivable stellar-dynamical process.
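For orientation, Eq. (1) can be evaluated directly; the following sketch (illustrative only, using standard SI values of $`G`$, $`c`$, and $`M_{\odot }`$) reproduces the quoted $`3400\mathrm{km}\mathrm{s}^{-1}`$:

```python
# Sketch: evaluate Eq. (1) for two 10^8 solar-mass holes and T = 10 Gyr.
G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m s^-1
M_sun = 1.989e30       # kg
yr = 3.156e7           # s

M1 = M2 = 1e8 * M_sun
M_tot, mu = M1 + M2, M1 * M2 / (M1 + M2)
T = 10e9 * yr

v_gr = c * (5.0 / 256.0 * G * M_tot**2 / (mu * T * c**3)) ** 0.125
print(f"v_gr = {v_gr / 1e3:.0f} km/s")   # ~3400 km/s, as quoted in the text
```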
The basic problem is that when the orbital velocity $`v_{\mathrm{orb}}`$ is about equal to the stellar velocity dispersion $`\sigma \sim 200\mathrm{km}\mathrm{s}^{-1}`$, the total mass in stars within a volume circumscribed by the BH orbital radius $`(a\sim 5\mathrm{pc}\,M_{\mathrm{tot}}/10^8M_{\odot })`$ is about $`M_{\mathrm{tot}}`$. If all of these stars were expelled from the BH binary at speed $`v_{\mathrm{orb}}(M_2/M)^{1/2}`$ (Rajagopal & Romani 1995 and references therein) the binding energy of the binary would increase by only a factor $`e`$. However, to get from a virial velocity of $`200\mathrm{km}\mathrm{s}^{-1}`$ to $`v_{\mathrm{gr}}`$ (eq. 1) would require $`N_e\approx 6`$ $`e`$-foldings in binding energy. Hence, the binary will clear out a hole in the stellar distribution, and dynamical friction will be shut down (Quinlan 1996; Quinlan & Hernquist 1997). The most efficient conceivable process to rejuvenate the orbital decay would be to equip the binary with an intelligent “captain”. Like a fisherman working in over-fished waters, whenever the captain saw that the binary was running out of stars to expel, she would steer the binary to the densest unexploited region of the galaxy. To effect the merger, this would mean systematically moving through and expelling all the stars within a region containing about $`N_eM_{\mathrm{tot}}`$ in stars. For a galaxy with an $`r^{-2}`$ density profile, this implies expelling all the stars within a radius $`r=N_eGM_{\mathrm{tot}}/2\sigma ^2\approx 60`$ pc, where we have made the evaluation for $`M_{\mathrm{tot}}=2\times 10^8M_{\odot }`$ and $`\sigma =200\mathrm{km}\mathrm{s}^{-1}`$. The real difficulty of the captain’s work is best understood by considering the last $`e`$-folding before gravitational radiation can take over. For $`v_{\mathrm{orb}}\gg \sigma `$, the cross section for hard interactions (including gravitational focusing) is $`\pi a^2v_{\mathrm{orb}}/\sigma `$. If each incident particle is expelled with speed $`v_{\mathrm{orb}}(M_2/M)^{1/2}`$ (Rajagopal & Romani 1995), then the binding energy $`E_b`$ decays at $`d\mathrm{ln}E_b/dt\sim 2\pi a^2v_{\mathrm{orb}}\rho /M=G\rho P`$, where $`P`$ is the period, and $`\rho `$ is the local density. The last $`e`$-folding alone would require a time $`t\sim [G\rho (r)P]^{-1}=2\pi (r/\sigma )^2/P\approx 2`$ Gyr, where we have assumed $`r\approx 30`$ pc and our other canonical parameters. Moreover, comparing this decay rate with the standard formula for the decay of translational energy $`E_t`$ (Binney & Tremaine 1987) yields,

$$\frac{d\mathrm{ln}E_b}{dt}\approx 0.1\left(\frac{\sigma }{v_{\mathrm{orb}}}\right)^3\frac{d\mathrm{ln}E_t}{dt}.$$ (2)

That is, $`d\mathrm{ln}E_b/d\mathrm{ln}E_t\approx 10^{-4}`$, so that the binary would be driven by dynamical friction back to the center of the Galaxy before it had completed $`10^{-4}`$ of an $`e`$-folding of energy loss. Hence, the captain would have to initiate $`10^4`$ “course changes” in the last $`e`$-folding alone. Since the “captain” must in fact be some random process, the only source of such “course changes” is Brownian motion due to continuous interaction with other compact objects. However, for stars of mass $`m`$ in an $`r^{-2}`$ profile, the range of such Brownian motion is $`\mathrm{\Delta }\mathrm{ln}r\sim m/M_{\mathrm{tot}}`$, i.e., too small by several orders of magnitude. In contrast to ordinary Brownian motion, the present system has an “external” energy source, the binary’s binding energy.
However, it follows from equation (2) that even if all of this donated energy were acquired by the binary’s transverse motion, the Brownian motion would be only slightly augmented. In any event, most of the donated energy goes to the stars, not the binary. Infall of globular clusters might well give the binary an occasional jolt, but these would be far too infrequent to drive the merger. In brief, any sort of mechanism to drive a merger by ordinary dynamical friction, no matter how contrived, is virtually ruled out. The only loophole to this argument is that we have assumed circular binary orbits. If an instability existed that systematically drove the BH binaries toward eccentricity $`e\approx 1`$ orbits, then either the binaries would suffer enhanced gravitational radiation (for a fixed semi-major axis) or could even merge in a head-on collision. Fukushige, Ebisuzaki, & Makino (1992) first suggested such an instability based on the following qualitative argument: dynamical friction is more effective at low speeds than high speeds and hence, in the regime where the ambient particles interact with the binary mainly by encounters with its individual members $`(v_{\mathrm{orb}}\gg \sigma )`$, the binary would suffer more drag at apocenter than pericenter, tending to make the orbit more eccentric. Fukushige et al. (1992) presented numerical simulations that gave initial support to this conjecture. There are, however, two reasons for believing that this effect cannot drive mergers. First, several groups have conducted more sophisticated simulations, and these do not show any strong tendency for $`e\rightarrow 1`$ (Makino et al. 1994; Rajagopal & Romani 1995; Quinlan & Hernquist 1997). Second, once the binary entered the regime $`v_{\mathrm{orb}}\gg \sigma `$, the ambient particles would interact with the binary as a whole, and so there is no reason to expect any drive toward high eccentricities. Hence, while this loophole is not definitively closed, neither does it look particularly promising.

## 3 Gas Dynamical Solution

Begelman, Blandford & Rees (1980) were the first to suggest that gas infall may “lead to some orbital evolution”. But, at the time it was not clear that all other mechanisms to overcome the BH hangup would most likely fail. To resolve the above dilemma, we suggest that gas dynamics play the decisive role in orbital decay, forcing the secondary BH to “migrate” in toward the primary in a manner analogous to the migration of planets. Such migration has been proposed to account for the discovery of jovian-mass and superjovian-mass planets at $`\sim 1`$ AU from solar-type stars, while it is generally believed that such massive planets can only be created several AU from the stars (Trilling et al. 1998). Artymowicz & Lubow (1994, 1996) simulated interactions between moderately unequal-mass binaries and accretion disks, which is more directly relevant to the present case than extreme-ratio (planetary) systems. They did not follow the orbital evolution as has been done in more recent work on planets, but only evaluated the instantaneous effect of the torques. They found a migration to higher eccentricities was a larger effect than migration to smaller orbits.
Regardless of which effect dominates, one would expect the final merger to be from circular rather than radial orbits: if the binary is driven toward radial orbits, its emission of gravitational radiation near pericenter will eventually pull in the apocenter of the orbit, decoupling the binary from the disk and allowing the gravitational radiation to circularize the orbit before final coalescence. For migration to work, the galaxy merger that creates the BH binary must eventually dump at least $`M_2`$ worth of gas into the inner $`\sim 5`$ pc of the merger remnant where the binary coalescence has gotten “hung up”. Whether this happens on timescales short compared to a dynamical time at 5 pc ($`\sim 10^7`$ yr), leading to tremendous gas densities and ensuing rapid star formation (Taniguchi & Wada 1996), or whether the gas accumulates over a longer timescale and so does not trigger a starburst, the basic scenario will be the same. There is every reason to expect mergers to effect such a gas accumulation. First, quasars must gorge themselves on gas to reach their present size. Hence, regardless of whether our picture of binary mergers is correct, this much gas must find its way to central BHs. Second, there is substantial evidence that many quasars are in either recent merger remnants or at least significantly disturbed galaxies (Kirhakos et al. 1999 and references therein). Hence, it seems likely that mergers are the most efficient means to drive gas to the center. Third, many spiral bulges and ellipticals have cuspy profiles populated by metal rich stars whose total mass is comparable to that of their massive BHs (van der Marel 1999). Thus, it must be possible to funnel huge amounts of gas to the centers of galaxies. In planet migration, the migration timescale is similar to the accretion timescale for growing the planet because the two processes are governed by the same phenomena, gravitational torques and dissipation (Trilling et al. 1998; A. Nelson 1998, private communication). We expect the same to be true of migration of BH binaries. Thus, there should be a grand accretion disk around the primary with a “gap” opened up by the secondary. Material should be transported across this gap to a second, smaller accretion disk surrounding the secondary BH. The total energy liberated by this smaller accretion disk should be $`ϵM_2c^2\approx 2\times 10^{61}(M_2/10^8M_{\odot })`$ ergs, where we have taken the efficiency to be $`ϵ=0.1`$, producing a quasar-like appearance during this phase.

## 4 Discussion

While our suggestion, driven by the lack of alternatives, makes few unambiguous predictions, it does open several lines of investigation that could help test and flesh out our picture. First, merging binaries would appear very much like quasars, since our picture of the migrating secondary is essentially identical to the standard picture of a quasar. The one difference is that the jet from a migrating BH could precess if the orbit of the secondary were substantially misaligned relative to the accretion disk. At present, however, we have no method of estimating how often significant misalignment should occur. Second, the redshift of the broad lines from a migrating BH’s accretion disk should be offset from the redshift of the host galaxy (as traced perhaps by the narrow lines). Since the migration probably accelerates with time, most migrating quasars should have $`v_{\mathrm{orb}}\sim \sigma `$.
Nevertheless, some should have substantially higher offsets, and measuring the distribution of these offsets would allow one to trace the migration process. However, if no offsets were observed, this would not in itself rule out our hypothesis. It could be, for example, that migrating binaries in merger remnants are preferentially buried in a larger, roughly spherical cloud of dusty gas. In this case, they would have more similarity to ultra-luminous infrared galaxies (ULIRGs) than to quasars, and the line centers of their emission would be at the galaxy velocity, not that of the secondary. Third, it is at least possible that one would see two broad-line systems, one from the primary and one from the secondary. Since broad lines are by definition broad ($`\gtrsim 3000\mathrm{km}\mathrm{s}^{-1}`$), the existence of two systems would not easily be recognized for $`v_{\mathrm{orb}}\sim \sigma `$. However, distinct peaks might be discernible when the BHs were closer to merger. On the other hand, it may be that the major supply of gas lies outside the orbit of the secondary, and hence the primary does not generate a significant broad-line region. Fourth, it will be important to carry out simulations to determine whether the gas dissipation timescale is short enough for the accreting material to follow the binary inward. This is certainly the case for the simulations that have been done for extreme mass-ratio (planetary) systems, but needs to be checked for the less extreme case also. Finally, we suggest that migrating BH binaries may simply be the quasars, or at least most of them. They have the same integrated energy output as quasars, they have the same accretion-disk fuel source as quasars, and like quasars, they turn on in the wake of mergers. It may be easier to move gas inward from $`\sim 5`$ pc scales for a binary BH than for a single BH because the binary would excite spiral density waves in the grand accretion disk and so augment viscous drag. Accretion in the inner disk around the secondary might also be easier than for an isolated BH because of the tidal effects of the primary. If this hypothesis is correct, then quasars should generically show offsets between the centers of their broad and narrow lines with a root mean square of $`(2/3)^{1/2}\sigma `$.

Acknowledgements: We thank Andy Nelson for valuable discussions. A.G. thanks the Max-Planck-Institut für Astronomie for its hospitality during a visit when most of the work of this Letter was completed. His work was supported in part by grant AST 97-27520 from the NSF.
# ON THE RELATION BETWEEN THE SLOPES OF DIFFRACTION CONE IN SINGLE DIFFRACTION DISSOCIATION AND ELASTIC SCATTERING

## Abstract

The fundamental relation between the slopes of diffraction cone in single diffraction dissociation and elastic scattering has been derived.

PACS numbers: 11.80.-m, 13.85.-t, 21.30.+y Keywords: diffraction dissociation, three-body forces, elastic scattering, total cross-sections, slope of diffraction cone, interpretation of experiments.

Not long ago we observed that the slope $`b_{SD}`$ of diffraction cone in single diffraction dissociation $`NN\rightarrow NX`$ was related to the effective interaction radius $`R_0`$ for the three-body (three-nucleon) forces

$$b_{SD}(s,M_X^2)=\frac{1}{2}R_0^2(\overline{s},s_0^{\prime }),$$ (1)

$$\overline{s}=2(s+M_N^2)-M_X^2,\quad s_0^{\prime }=2s_0,$$

where $`s_0`$ is a scale defining the unitarity saturation asymptotics in hadron-hadron interactions. At the same time it was established that the quantity $`R_0^2`$ was related to the structure of hadronic total cross section in a physically clear and transparent form

$$\sigma ^{tot}(s)=2\pi \left[B^{el}(s)+R_0^2(s)\right]\left(1+\chi (s)\right),$$ (2)

where $`B^{el}`$ is the slope of diffraction cone in elastic $`NN`$ scattering and

$$\chi (s)=O\left(\frac{1}{\sqrt{s}\mathrm{ln}^3s}\right),\quad s\rightarrow \mathrm{\infty }.$$

This circumstance gives rise to the nontrivial consequences which are discussed in this note. Let us define the slope $`B^{sd}`$ of diffraction cone in a single diffraction dissociation at the fixed point over the missing mass

$$B^{sd}(s)=b_{SD}(s,M_X^2)|_{M_X^2=2M_N^2}.$$ (3)

Now taking into account Eq. (1), where the effective interaction radius for three-body forces can be extracted from

$$R_0^2(2s,2s_0)=R_0^2(s,s_0)=\frac{1}{2\pi }\sigma ^{tot}(s)-B^{el}(s),$$ (4)

and the equation

$$\sigma ^{el}(s)=\frac{\sigma ^{tot}(s)^2}{16\pi B^{el}(s)},\quad (\rho =0),$$ (5)

we come to the fundamental relation between the slopes in the single diffraction dissociation and elastic scattering

$$\boxed{B^{sd}(s)=B^{el}(s)\left(4X-\frac{1}{2}\right)},$$ (6)

where

$$X\equiv \frac{\sigma ^{el}(s)}{\sigma ^{tot}(s)}.$$ (7)

The quantity $`X`$ has a clear physical meaning, it has been introduced in the papers of C.N. Yang and his collaborators. At $`\sqrt{s}=1800`$ GeV one has $`X=0.25`$. Hence in that case we have $`B^{sd}=B^{el}/2`$ which is confirmed not so badly in the experimental measurements. In the limit of black disk $`(X=1/2)`$ we obtain

$$B^{sd}=\frac{3}{2}B^{el},$$ (8)

and

$$B^{sd}=B^{el},\quad \mathrm{at}\;X=\frac{3}{8}=0.375.$$ (9)

So, we find that there is quite a nontrivial dynamics in the slopes of diffraction cone in the single diffraction dissociation and elastic scattering processes. In particular, we can study an intriguing question on the black disk limit not only in the measurements of total hadronic cross sections compared with elastic ones but in the measurements of the slopes in single diffraction dissociation processes together with elastic scattering ones. There is a more general formula which can be derived with account of the real parts for the amplitudes. This formula looks like

$$\boxed{B^{sd}(s)=B^{el}(s)\left(4X\frac{1+\rho _{el}(s)\rho _0(s)}{1+\rho _{el}^2(s)}-\frac{1}{2}\right)},$$ (10)

where $`\rho _0`$ is defined in terms of the three-body forces scattering amplitude similarly to $`\rho _{el}`$. If $`\rho _{el}=0`$ or $`\rho _0=\rho _{el}`$ then we come to Eq. (6). In the case when $`\rho _{el}\ne 0`$, we can rewrite Eq.
(10) in the form $$\rho _0=\frac{1}{\rho _{el}}\left[1-\frac{1+\rho _{el}^2}{8X}\left(1+\frac{2B^{sd}}{B^{el}}\right)\right].$$ (11) Eq. (11) can be used for the calculation of the new quantity $`\rho _0`$. In any case, it seems that measurements of the real parts of the amplitudes will play an important role in future high energy hadronic physics. Equations (6) and (10) can be rewritten in a unified form $$\frac{Y}{Y^{\prime }}+\frac{1}{2}=\alpha _\varrho X,$$ (12) where the quantity $`Y`$ has also been introduced in the above mentioned papers of C.N. Yang and his collaborators $$Y=\frac{\sigma ^{tot}}{16\pi B^{el}},$$ (13) $`\alpha _\varrho `$ is a known function of the $`\rho `$’s (see Eq. (10)), and we introduce a new dimensionless quantity $`Y^{\prime }`$ $$Y^{\prime }=\frac{\sigma ^{tot}}{16\pi B^{sd}}.$$ (14) It is obvious that $$\frac{Y}{Y^{\prime }}=\frac{B^{sd}}{B^{el}},$$ (15) and we have $`\alpha _\varrho =4`$ if $`\rho _{el}=0`$ or $`\rho _0=-\rho _{el}`$. Equation (12) represents a fundamental constraint on the three dimensionless quantities $`Y`$, $`Y^{\prime }`$ and $`X`$. It would be very desirable to study this constraint experimentally.
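As a quick numerical cross-check of the signs reconstructed in Eqs. (6) and (10), the following sketch (ours, not part of the paper; the function name is illustrative) reproduces the three special cases quoted above:

```python
# Sketch: check of Eqs. (6) and (10) relating the slopes B^sd and B^el.

def slope_ratio(X, rho_el=0.0, rho_0=0.0):
    """Return B^sd / B^el from Eq. (10); reduces to Eq. (6) for rho_el = 0."""
    return 4.0 * X * (1.0 - rho_el * rho_0) / (1.0 + rho_el ** 2) - 0.5

assert abs(slope_ratio(0.25) - 0.5) < 1e-12       # X = 1/4  -> B^sd = B^el / 2
assert abs(slope_ratio(3.0 / 8.0) - 1.0) < 1e-12  # X = 3/8  -> B^sd = B^el
assert abs(slope_ratio(0.5) - 1.5) < 1e-12        # black disk -> B^sd = (3/2) B^el
print("all three checkpoints of Eq. (6) reproduced")
```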
no-problem/9912/astro-ph9912015.html
ar5iv
text
# 1 INTRODUCTION High spatial resolution observations in recent years, mainly with HST, show that many proto-planetary nebulae (PNs) and young PNs possess complex structures in their inner regions. Proto-PNs appear in the optical as reflection nebulae, with different optical depths along different directions from the central star to different parts of the nebula (e.g., the Egg nebula \[CRL 2688\], Sahai et al. 1998a, b; more examples in Ueta, Meixner & Bobrowsky 2000; hereafter UMB). In the most extreme cases, termed DUPLEX (DUst-Prominent Longitudinally-EXtended) by UMB, the light from the central star is completely, or almost completely, blocked in the equatorial plane. Many young PNs have filaments, dents, blobs and other inhomogeneities in their inner regions (e.g., Sahai & Trauger 1998). In the present paper I examine some effects of aspherical mass loss on the propagation of radiation during the proto-PN and young PN stages, which affect the structures and appearances of the nebulae during these stages. The aspherical mass loss can take the form of an axisymmetrical mass loss and/or local inhomogeneities in the mass loss process. Such local inhomogeneities may result from instabilities during the proto-PN stage (e.g., Dwarkadas & Balick 1998), or from the mass loss process itself, as expected from the model of dust formation above magnetic cool spots (Soker 2000 and references therein). In that model, which was proposed to account for the formation of most elliptical PNs (Soker 2000 and references therein), the magnetic cool spots on the AGB surface are formed by a weak magnetic activity, and they facilitate the formation of dust. The magnetic field has no dynamic role. A weak dynamo activity forms more magnetic cool spots near the equatorial plane than closer to the poles, thus leading to axisymmetrical mass loss and the formation of elliptical PNs. If the dust formation occurs during the last AGB phase, when the mass loss rate is high, the dust shields the region above it from the stellar radiation (Soker 2000). This leads both to further dust formation in the shaded region and, due to the lower temperature and pressure there, to the convergence of the stream toward the shaded region, forming a flow with a higher density than its surroundings. In that model the very large optical depth of the dusty envelope during the AGB superwind phase increases substantially the departure from spherical mass loss geometry, both locally and globally. This model for axisymmetrical mass loss, which is based on a weak dynamo activity that leads to the formation of magnetic cool spots, can explain other features in addition to the axisymmetrical mass loss itself (Soker & Clayton 1999). Most important, it can operate in very slowly rotating AGB stars, which can gain the required rotation from a planet companion, or else were fast rotators on the main sequence. Since the departure from spherical mass loss occurs only when the opacity near the AGB surface is large, which means a high mass loss rate, the model explains the higher ellipticity of the inner regions of many elliptical PNs. In extreme cases the inner regions are elliptical while the halos are spherical (e.g., NGC 6891, Guerrero et al. 2000). It can also account for local inhomogeneities, such as clumps and filaments, and it can account for the change in the direction of the symmetry axis, which is later observed as a point symmetric elliptical PN.
Changes in the direction of the magnetic axis are seen in the Sun (over a period of $`\sim 11\mathrm{yrs}`$) and on Earth (over periods of hundreds of thousands of years). The present paper examines the role of the density inhomogeneities on radiative transfer at later stages: their role in the optical appearance of the reflection nebulae during the proto-PN phase ($`\mathrm{\S }2`$), and their role in the propagation of the ionization front in young PNs ($`\mathrm{\S }3`$). It is well established that the ionization front has a substantial influence on the structures of PNs (Breitschwerdt & Kahn 1990; Mellema 1995; Mellema & Frank 1995; Chevalier 1997; Schönberner & Steffen 2000; Soker 1998). Here ($`\mathrm{\S }3`$) I derive the condition for the density inhomogeneities to be amplified by the ionization front, leading to an ionization front instability. In $`\mathrm{\S }4`$ I summarize the main results. ## 2 POST AGB STARS This section examines the optical radiation from the central star prior to ionization as it propagates through the dusty nebula. For the purpose of this paper it is adequate to take a simple form for the obscuring AGB wind. The mass loss rate per unit solid angle along a specific direction is $`\dot{m}\left(\theta \right)=\dot{M}\left(\theta \right)/\left(4\pi \right)`$, and the wind moves radially at a constant velocity $`v_w`$. In general $`v_w`$ may depend on the direction, but here I simply assume that it is uniform. I also assume that most of the opacity is due to the intensive mass loss episode at the termination of the AGB, i.e., the superwind, lasting for a time $`t_{sw}`$. The opacity due to the low density inner regions of the post-AGB circumstellar matter (e.g., Schönberner & Steffen 2000) is neglected. At a time $`t`$ after the superwind ends the density distribution is given by $`\rho (\theta ,r)=\left\{\begin{array}{ccc}& \dot{M}\left(\theta \right)/\left(4\pi v_wr^2\right)& r_{\mathrm{in}}\le r\le r_{\mathrm{out}}\\ & \mathrm{very}\mathrm{low}& r<r_{\mathrm{in}}\mathrm{or}r>r_{\mathrm{out}},\end{array}\right\}`$ (1) where the boundaries of the superwind are $`r_{\mathrm{in}}=v_wt,\mathrm{and}r_{\mathrm{out}}=r_{\mathrm{in}}+v_wt_{sw}.`$ (2) The optical depth of the dusty wind is given by $$\tau _a=\int _{r_{\mathrm{in}}}^{r_{\mathrm{out}}}\kappa \left(r\right)\rho \left(r\right)dr=\frac{\dot{M}\left(\theta \right)\kappa }{4\pi v_w}\left(\frac{1}{r_{\mathrm{in}}}-\frac{1}{r_{\mathrm{out}}}\right)=\frac{r_\tau }{r_{\mathrm{in}}}\left(1-\frac{r_{\mathrm{in}}}{r_{\mathrm{out}}}\right)=\frac{t_\tau }{t}\left(1-\frac{t}{t+t_{sw}}\right)$$ (3) where the integration was performed with equation (1) for the density, and the length scale $`r_\tau `$ and time scale $`t_\tau `$ are defined as follows, $$r_\tau \equiv \frac{\dot{M}\left(\theta \right)\kappa }{4\pi v_w}=5\times 10^3\left(\frac{\dot{M}\left(\theta \right)}{10^{-4}M_{\odot }\mathrm{yr}^{-1}}\right)\left(\frac{\kappa }{150\mathrm{cm}^2\mathrm{g}^{-1}}\right)\left(\frac{v_w}{10\mathrm{km}\mathrm{s}^{-1}}\right)^{-1}\mathrm{AU},$$ (4) and $$t_\tau \equiv \frac{r_\tau }{v_w}=2.4\times 10^3\left(\frac{\dot{M}\left(\theta \right)}{10^{-4}M_{\odot }\mathrm{yr}^{-1}}\right)\left(\frac{\kappa }{150\mathrm{cm}^2\mathrm{g}^{-1}}\right)\left(\frac{v_w}{10\mathrm{km}\mathrm{s}^{-1}}\right)^{-2}\mathrm{yrs}.$$ (5) The opacity of dust and gas around AGB stars is $`\kappa \simeq 20\mathrm{cm}^2\mathrm{g}^{-1}`$ at $`2\mu `$m (e.g., Jura 1986), and I scaled it to $`150\mathrm{cm}^2\mathrm{g}^{-1}`$ in the optical, as for the ISM.
At early stages, $`r_{\mathrm{in}}\ll r_\tau `$, or $`t\ll t_\tau `$, the opacity is very large, and no light escapes at all, as is well known for many upper AGB stars. At very late stages $`r_{\mathrm{in}}\gg r_\tau `$, or $`t\gg t_\tau `$, and the dusty nebula is transparent. The interesting time is when $`t\sim t_\tau `$. As an example let the mass loss rate (when extended to a complete sphere) along the equatorial and polar directions be $`\dot{M}\left(\pi /2\right)=5\times 10^{-5}M_{\odot }\mathrm{yr}^{-1}`$ and $`\dot{M}\left(0\right)=10^{-5}M_{\odot }\mathrm{yr}^{-1}`$, respectively. The velocity will be taken to be $`10\mathrm{km}\mathrm{s}^{-1}`$ in all directions, and the superwind duration $`1000\mathrm{yrs}`$. At time $`t=200\mathrm{yrs}`$ the optical depth of the superwind along the equatorial plane is 5, while it is only $`\sim 1`$ along the polar direction. The central star is substantially attenuated along the equatorial plane, but only moderately so along the polar directions. Taking the equatorial flow to be slower will increase the effect further. The mass loss rate at the end of the superwind phase declines by more than two orders of magnitude within $`\sim 100\mathrm{yrs}`$ (Blöcker 1995; Schönberner & Steffen 2000), so any effect on a shorter time scale cannot be accurately treated with the assumptions made here. From equation (5) we learn that for the optical depth to be $`\gtrsim 2`$, so that the central star is attenuated by a factor of $`\sim 10`$, for more than $`200\mathrm{yrs}`$ the mass loss should be $`\dot{M}\left(\theta \right)\gtrsim 2\times 10^{-5}M_{\odot }\mathrm{yr}^{-1}`$. This result is interesting since the maximum mass loss rate possible from radiation momentum transfer is $`\dot{M}_{\mathrm{max}}=n_sL/\left(cv_w\right)`$, where $`L`$ is the stellar luminosity, $`c`$ is the speed of light, $`v_w`$ the terminal wind velocity and $`n_s`$ is the average number of times a photon is scattered within the outflowing material. For most cases $`n_s\simeq 1`$ (Knapp 1986). For $`L=5000L_{\odot }`$, $`v_w=10\mathrm{km}\mathrm{s}^{-1}`$, and $`n_s=1`$ we find $`\dot{M}_{\mathrm{max}}=10^{-5}M_{\odot }\mathrm{yr}^{-1}`$. The conclusion is that the dust opacity will obscure almost completely the central star, at least in some directions, for systems where one or more of the following occurs: (1) the luminosity is very high, $`L\gtrsim 5000L_{\odot }`$; (2) the expansion velocity, $`v_w`$, (probably in the equatorial plane) is very low; (3) the mass loss rate in the equatorial plane is enhanced by dynamic effects of a binary companion. A very luminous central star requires a massive progenitor, which, if it has a non-spherical mass loss, will probably lead to the formation of a dense equatorial flow, but not necessarily to the formation of a bipolar PN. In any case, a very luminous central star is required, and this is rare. Processes (2) and (3) are more likely to occur. We note that slowly expanding equatorial flows are found around several binary systems having orbital separations of $`\sim 1\mathrm{AU}`$ (Van Winckel 1999; Van Winckel et al. 1998; Jura & Kahane 1999). These systems seem to form bipolar PNs (e.g., the Red Rectangle, Waters et al. 1998). It seems that a slowly expanding equatorial flow requires binary interaction. A high mass loss rate in the equatorial plane due to dynamic effects can result both from close companions outside the envelope (e.g., Mastrodemos & Morris 1999), or from a common envelope evolution, as is evident from the structure of most of the 16 PNs known to have close-binary nuclei (orbital periods from a few hours to 16 days; Bond 2000).
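The numbers in this example are easy to reproduce with the following short sketch (ours, not part of the paper; names and default arguments are illustrative), which implements equations (3)–(5):

```python
# Sketch: optical depth of the superwind, equations (3)-(5) of the text.

def tau(t_yr, mdot_msun_yr, t_sw_yr=1000.0, kappa_cm2_g=150.0, v_w_kms=10.0):
    """Optical depth tau_a at a time t after the superwind ends, eq. (3)."""
    # t_tau = 2.4e3 yr (Mdot/1e-4 Msun/yr)(kappa/150 cm^2/g)(v_w/10 km/s)^-2, eq. (5)
    t_tau = 2.4e3 * (mdot_msun_yr / 1e-4) * (kappa_cm2_g / 150.0) * (v_w_kms / 10.0) ** -2
    return (t_tau / t_yr) * (1.0 - t_yr / (t_yr + t_sw_yr))

print(tau(200.0, 5e-5))  # equatorial direction: 5.0
print(tau(200.0, 1e-5))  # polar direction:      1.0
```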
However, in most cases the common envelope evolution will not lead to the formation of a bipolar PN but to an elliptical PN with a high equatorial to polar density ratio, e.g., ring-like structures but without any lobes (Soker 1997). (Bipolar PNs are defined as axially symmetric PNs having two lobes with an ‘equatorial’ waist between them.) The conclusion from this section is that a high optical depth, $`\tau \gtrsim 2`$, will be found in proto-PNs with high equatorial mass loss rates, which require their AGB progenitors to interact with stellar companions. The effect of this high optical depth is that light from the central star penetrates large distances along and near the polar directions, but not along and near the equatorial plane. This leads to the appearance of an elongated reflection nebula (e.g., UMB). However, not all of these will turn into bipolar PNs. Some will form extreme elliptical PNs, e.g., rings without lobes on the two sides of the equatorial plane. UMB observe 21 reflection proto-PNs with the Hubble Space Telescope, and classify them into two groups. Proto-PNs with highly or completely obscured central stars were termed DUPLEX (DUst-Prominent Longitudinally-EXtended) reflection nebulae. Those with almost no obscuration are termed SOLE (Star-Obvious Low-level-Elongated) reflection nebulae. The more elongated DUPLEX reflection nebulae are mostly a result of obscuration in the equatorial plane. No obscuration occurs in the less elongated SOLE reflection nebulae. I find that out of the 10 DUPLEX proto-PNs presented by UMB some do not show any signature of lobes, and therefore, I argue, they will later turn into extreme elliptical PNs rather than bipolar PNs. I further speculate that these systems may have close binary nuclei, like the 16 systems listed by Bond (2000). These are IRAS 19374+2359, and IRAS 23321+6545, and possibly IRAS 16342-3814, and IRAS 20028+3910. ## 3 THE EARLY IONIZATION PHASE In this section I derive the condition for the enhancement of density inhomogeneities during the early PN phase, when the ionization front moves outward. In order to obtain an analytic expression, I neglect the shock preceding the ionization front (a D-type ionization front), and the motion of the inner boundary of the superwind inward relative to the rest of the wind, after it is heated by the ionization front (e.g., Breitschwerdt & Kahn 1990; Mellema 1995). I also neglect the compression of the inner region by the newly blown fast wind (Breitschwerdt & Kahn 1990; Mellema 1995; Schönberner & Steffen 2000), and any dependence on longitudes, and consider only axisymmetrical density profiles. I will therefore use the density profile as given in equation (1). However, the results are applicable to any dependence on the direction of the mass loss rate from the progenitor AGB star. I make some other simplifying assumptions as indicated during the derivation of the condition below. The increase in the density inhomogeneities occurs because the ionization front proceeds with different speeds along directions having different density profiles. Let us assume that along a direction $`\theta `$ the density is somewhat higher than along a direction $`\theta +\mathrm{\Delta }\theta `$, because $`\dot{M}\left(\theta \right)>\dot{M}\left(\theta +\mathrm{\Delta }\theta \right)`$, where the wind velocity is taken to be constant and equal along the two directions (eq. 1).
Because of the lower density the ionization front will reach a radius $`r`$ along the $`\theta +\mathrm{\Delta }\theta `$ direction earlier than along the $`\theta `$ direction. Let this time difference be $`\mathrm{\Delta }t_i\left(r\right)`$. The ionized region along the $`\theta +\mathrm{\Delta }\theta `$ direction will be much hotter than the still neutral material along the $`\theta `$ direction ($`\sim 10^4\mathrm{K}`$ compared with $`<10^3\mathrm{K}`$), and its thermal pressure much higher (assuming that the density ratio is $`<10`$), so it will compress the cooler region along the $`\theta `$ direction. The compression proceeds in the azimuthal direction, from $`\theta +\mathrm{\Delta }\theta `$ to $`\theta `$, with a velocity $`\sim c_s`$, where $`c_s`$ is the adiabatic sound speed. The compression continues as long as the ionization front does not reach the same radius $`r`$ along the $`\theta `$ direction, and it increases the density in the denser and cooler region, and decreases the density in the already low density region along the $`\theta +\mathrm{\Delta }\theta `$ direction. I neglect the ionization of the denser region along the $`\theta `$ direction by the recombination radiation from the matter along the $`\theta +\mathrm{\Delta }\theta `$ direction. The changes in the densities in the two directions will be significant only if the time for the compression wave to cross the distance between the two regions is shorter than the time difference between the ionization times of the two regions $`\mathrm{\Delta }t_i\left(r\right)`$. The compression time is $`r\mathrm{\Delta }\theta /c_s`$. The condition for significant enhancement of the inhomogeneity is therefore $`\mathrm{\Delta }t_i\left(r\right)>r\mathrm{\Delta }\theta /c_s`$. Taking the limit of small angles and rearranging the equation, this condition reads $`t_i^{\prime }\equiv {\displaystyle \frac{dt_i}{d\theta }}>{\displaystyle \frac{r}{c_s}}.`$ (6) Several characteristics of this instability should be noticed. First, this is not the type of instability considered by Breitschwerdt & Kahn (1990), who simply considered the Rayleigh-Taylor instability at the interface of the ionized and neutral matter. Second, if the solid angle spanned by the high density region is smaller than that of the low density region, as in a dense clump, the instability will result in a dense region extending radially behind the dense clump. This case was studied in a previous paper for a few specific cases relevant to the PN IC 4593 (Soker 1998). If it is the low density region that is narrower, it will be ionized first, and its density will drop further as it expands. This will lead to the formation of a faint region within the nebular shell. Third, the instability, when it starts, has a positive feedback (Soker 1998), due to the increase (decrease) of density of the already denser (tenuous) region; hence the ionization front will move even slower (faster) along this direction. Fourth, after the denser region is ionized its pressure exceeds that of its surroundings, and it expands and its density drops. In the present study I only examine the general conditions for the instability to start, and I do not follow its evolution. I now turn to express condition (6) in terms of the density profile of the nebula. At very early stages the ionizing photon emission rate $`\dot{N}_{*}`$, in photons per second, can be approximated by a simple linear rise with time.
Using the post-AGB results presented by Blöcker (1995; see also Schönberner & Steffen 2000) and an earlier approximation (Breitschwerdt & Kahn 1990; eq. 1 of Soker 1998), I take the ionization to start a time $`t_1`$ after the superwind ends, with a time dependence according to $`\dot{N}_{*}\left(t\right)=\dot{N}_0\left({\displaystyle \frac{t-t_1}{t_2}}\right),t>t_1.`$ (7) For the standard case I find from fig. 1 of Breitschwerdt & Kahn (1990) $`\dot{N}_0=1\times 10^{47}\mathrm{s}^{-1}`$, and $`t_2=700\mathrm{yrs}`$, and I take the ionization to start at $`t_1=1000\mathrm{yrs}`$. These numbers are very sensitive to the mass of the central star (e.g., Blöcker 1995), but sufficient to illustrate, and derive the condition for, the enhancement of initial density inhomogeneities. For the density profile I use the same assumption and density profile as in the previous section (eq. 1). At the early stages the ionizing flux is low and the ionization front moves through a dense medium. At this stage most ionizing photons go to ionized recombining atoms, and only a small fraction of the ionizing flux reaches the ionization front and ionizes new material. I therefore approximate the location of the ionization front along a specific direction by equating the ionizing flux with the recombination rate per unit solid angle $`{\displaystyle \frac{\dot{N}_{*}}{4\pi }}={\displaystyle \int _{r_{\mathrm{in}}}^{r_f}}\alpha n_en_ir^2dr,`$ (8) where $`\alpha `$ is the recombination coefficient, and $`n_e`$ and $`n_i`$ are the electron and ion number densities, respectively. To derive the dependence of the ionization front $`r_f\left(\theta \right)`$ on time, I substitute for the density from equation (1) and for $`\dot{N}_{*}`$ from equation (7), and then integrate from $`r_{\mathrm{in}}=v_wt_i`$ to $`r_f(\theta ,t_i)`$. This gives $`{\displaystyle \frac{r_f(\theta ,t_i)}{r_{\mathrm{in}}}}=\left(1-{\displaystyle \frac{t_i-t_1}{t_2}}{\displaystyle \frac{t_i}{t_F\left(\theta \right)}}\right)^{-1},`$ (9) where I defined the time scale (marked F in eq. 6 of Soker 1998) $`t_F\equiv {\displaystyle \frac{\dot{M}^2\left(\theta \right)}{4\pi v_w^3}}{\displaystyle \frac{n_en_i}{\rho ^2}}{\displaystyle \frac{\alpha }{\dot{N}_0}}\simeq 700\left({\displaystyle \frac{\dot{M}\left(\theta \right)}{10^{-5}M_{\odot }\mathrm{yr}^{-1}}}\right)^2\left({\displaystyle \frac{v_w}{10\mathrm{km}\mathrm{s}^{-1}}}\right)^{-3}\left({\displaystyle \frac{\dot{N}_0}{10^{47}\mathrm{s}^{-1}}}\right)^{-1}\mathrm{yrs}.`$ (10) More (less) massive cores have a higher (lower) mass loss rate, but the ionizing photon emission rate is higher (lower) as well, and $`t_2`$ shorter (longer). These effects may cancel, more or less, their mutual influence on the product $`t_2t_F`$ in equation (9). We also note that the expression for the inner boundary of the superwind $`r_{\mathrm{in}}=v_wt_i`$ is not accurate, since as this region is ionized its pressure increases, and the hot material flows inward relative to the rest of the wind (e.g., Breitschwerdt & Kahn 1990; Mellema 1995). Therefore, the effective velocity in the relation $`r_{\mathrm{in}}=v_wt_i`$ may be $`<v_w`$.
Taking $`r_f`$ from equation (9) for $`r`$ in the instability condition (6) and taking $`r_{\mathrm{in}}=v_wt_i`$ gives for the instability condition $`{\displaystyle \frac{t_i^{\prime }}{t_i}}>{\displaystyle \frac{v_w}{c_s}}\left(1-{\displaystyle \frac{t_i-t_1}{t_2}}{\displaystyle \frac{t_i}{t_F\left(\theta \right)}}\right)^{-1}.`$ (11) The next step is to take the derivative of equation (9) with respect to the angle $`\theta `$, at a constant value of $`r_f`$. Rearranging equation (9), taking $`r_{\mathrm{in}}=v_wt_i`$, and then taking the derivative gives $`-r_f{\displaystyle \frac{d}{d\theta }}\left({\displaystyle \frac{t_i-t_1}{t_2}}{\displaystyle \frac{t_i}{t_F\left(\theta \right)}}\right)=v_wt_i^{\prime }.`$ (12) In performing the derivation we note that $`t_1`$ and $`t_2`$ are constants, while $`{\displaystyle \frac{dt_F\left(\theta \right)}{d\theta }}={\displaystyle \frac{2}{\dot{M}\left(\theta \right)}}{\displaystyle \frac{d\dot{M}\left(\theta \right)}{d\theta }}t_F,`$ (13) so that equation (12) becomes $`r_f\left({\displaystyle \frac{t_i-t_1}{t_2}}{\displaystyle \frac{t_i}{t_F\left(\theta \right)}}{\displaystyle \frac{2\dot{M}^{\prime }}{\dot{M}\left(\theta \right)}}-{\displaystyle \frac{t_i^{\prime }}{t_2}}{\displaystyle \frac{t_i}{t_F\left(\theta \right)}}-{\displaystyle \frac{t_i-t_1}{t_2}}{\displaystyle \frac{t_i^{\prime }}{t_F\left(\theta \right)}}\right)=v_wt_i^{\prime },`$ (14) where $`\dot{M}^{\prime }\equiv d\dot{M}/d\theta `$. Substituting $`r_f`$ from equation (9), and $`r_{\mathrm{in}}=v_wt_i`$ in equation (14) gives for $`t_i^{\prime }`$ $`{\displaystyle \frac{t_i^{\prime }}{t_i}}={\displaystyle \frac{2\dot{M}^{\prime }}{\dot{M}\left(\theta \right)}}{\displaystyle \frac{\left(t_i-t_1\right)t_i}{t_2t_F}}\left(1+{\displaystyle \frac{t_i^2}{t_2t_F}}\right)^{-1}.`$ (15) Elimination of $`t_i^{\prime }`$ from equations (11) and (15) gives the instability condition on the mass loss inhomogeneity $`{\displaystyle \frac{\dot{M}^{\prime }\left(\theta \right)}{\dot{M}\left(\theta \right)}}>{\displaystyle \frac{v_w}{c_s}}\left(1+{\displaystyle \frac{t_i^2}{t_2t_F}}\right){\displaystyle \frac{1}{2Z\left(1-Z\right)}},\mathrm{for}t_i>t_1\mathrm{and}Z>0,`$ (16) where $`Z(\theta ,t_i)\equiv {\displaystyle \frac{\left(t_i-t_1\right)t_i}{t_2t_F}}.`$ (17) Let us examine the different factors in equation (16). The sound speed of the ionized region is $`12-14\mathrm{km}\mathrm{s}^{-1}`$, depending on its temperature, while the expansion velocity is $`10-15\mathrm{km}\mathrm{s}^{-1}`$. However, due to the slower motion of the inner boundary of the shell at early stages (e.g., Breitschwerdt & Kahn 1990; Mellema 1995), its effective value can be somewhat smaller. We therefore can safely take $`v_w/c_s\simeq 0.5-1`$. The term $`\left[2Z\left(1-Z\right)\right]^{-1}`$ will reach its minimum value of $`2`$ when $`Z=0.5`$. For $`t_1=1000\mathrm{yrs}`$, $`t_2=700\mathrm{yrs}`$, and $`t_F=700\mathrm{yrs}`$, this occurs at $`t_i\simeq 1200\mathrm{yrs}`$. By that time the second term on the right hand side of equation (16) is $`1+t_i^2/t_2t_F=3.94`$, and the instability condition becomes $`\dot{M}^{\prime }/\dot{M}\gtrsim 8`$. The exact minimum value of the r.h.s. of equation (16) (for $`v_w=c_s`$) is $`7.82`$ and it occurs at $`t_i=1184\mathrm{yrs}`$ and $`Z=0.445`$. Taking the mass loss rate to be two (three) times as high as in equation (10), with all other parameters being equal, so that $`t_F=2800\mathrm{yrs}`$ ($`6300\mathrm{yrs}`$), the minimum value of the r.h.s. of equation (16) (again, for $`v_w=c_s`$) is $`4.52`$ ($`3.82`$) and it occurs at $`t_i=1534\mathrm{yrs}`$ and $`Z=0.418`$ ($`1937\mathrm{yrs}`$ and $`Z=0.412`$).
For $`t_F\gg t_1`$, the time of maximum instability occurs at $`t_i\gg t_1`$. Neglecting $`t_1`$, the r.h.s. of equation (16) reads $`\left(v_w/c_s\right)\left(1+Z\right)/\left[2Z\left(1-Z\right)\right]`$. The minimum value of the r.h.s. is $`2.914\left(v_w/c_s\right)`$, and it occurs at $`Z=2^{1/2}-1=0.414`$. The third term on the r.h.s., $`\left[2Z\left(1-Z\right)\right]^{-1}`$, will always reach a value as low as $`2`$ when $`Z=0.5`$. However, when $`t_F\ll t_1`$ the second term ($`1+t_i^2/t_2t_F`$) will be very large when $`Z\simeq 0.5`$, and the condition on the density inhomogeneity will be hard to meet. For example, when the mass loss rate is $`0.5`$ ($`0.75`$) of that in equation (10), so that $`t_F=200\mathrm{yrs}`$ ($`400\mathrm{yrs}`$), the minimum value of the r.h.s. of equation (16) (for $`v_w=c_s`$) is $`18.2`$ ($`11.0`$) and it occurs at $`t=1062`$ and $`Z=0.470`$ ($`t=1115`$ and $`Z=0.458`$). The main conclusion from this section is that for most elliptical PNs, density inhomogeneities as a result of different mass loss rates along different radial directions during the superwind phase, where $`\dot{M}\sim 10^{-5}M_{\odot }\mathrm{yr}^{-1}`$, will be amplified by the propagating ionization front if $`\dot{M}^{\prime }/\dot{M}\gtrsim 4`$. I recall that the derivation of the instability condition applies to a time-independent mass loss rate, and hence does not apply to small blobs, but only to inhomogeneities extending to large radial distances. For example, for the case $`t_F=2800\mathrm{yrs}`$, $`t_1=1000\mathrm{yrs}`$, and $`t_2=700\mathrm{yrs}`$, we found above that the instability is most likely to occur at $`t_i\simeq 1500\mathrm{yrs}`$ and $`Z=0.418`$. From equation (9) we find the ionization front to be at $`r_f=1.7r_{\mathrm{in}}`$, hence the inhomogeneous mass loss rate lasts for a long time. However, the instability can start much earlier if the density inhomogeneity is larger: at $`t=1100\mathrm{yrs}`$ ($`1200\mathrm{yrs}`$) the condition is $`\dot{M}^{\prime }/\dot{M}\gtrsim 15`$ ($`\gtrsim 8`$), for which $`r_f=1.06r_{\mathrm{in}}`$ ($`1.14r_{\mathrm{in}}`$). If the enhanced, or reduced, mass loss rate spans an angle of $`0.2`$ rad ($`12^{\circ }`$), i.e., the density inhomogeneity from the center of the $`12^{\circ }`$ region to its edge spans an angle of $`\mathrm{\Delta }\theta =0.1`$ rad ($`6^{\circ }`$), the instability condition is that the density will be enhanced, or reduced, by a factor of $`e^{15\mathrm{\Delta }\theta }=4.5`$ ($`e^{8\mathrm{\Delta }\theta }=2.2`$). We note that Soker (1998) finds that for a compressed tail to develop behind a dense clump, the condition is that the ionization front reach the clump within $`\sim 100\mathrm{yrs}`$ of the beginning of the ionization ($`t\lesssim t_1+100\mathrm{yrs}`$), and the density enhancement be by a factor of $`\gtrsim 5`$. The present results are compatible with those of Soker (1998). Dense clumps can be formed by the mass loss process itself, or from the interaction process of the fast and slow winds at early stages (e.g., Dwarkadas & Balick 1998). Dwarkadas & Balick (1998) conduct two-dimensional simulations of winds interaction, taking into account the evolution of the fast wind, and find that the interaction process is prone to instabilities, which may form clumps at early stages. Since the density enhancement is only within a clump, and does not extend to a large radial distance as with the inhomogeneous mass loss assumed here, the clump density enhancement should be larger, i.e., $`\gtrsim 5`$, to form a dense radially extended tail (Soker 1998).
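The quoted minima of the r.h.s. of condition (16) are straightforward to reproduce numerically; here is a short sketch (ours, not part of the paper; a simple grid scan with illustrative names):

```python
# Sketch: minimum of the r.h.s. of the instability condition (16), for v_w = c_s.

def Z(ti, t1=1000.0, t2=700.0, tF=700.0):
    """Eq. (17); all times in years."""
    return (ti - t1) * ti / (t2 * tF)

def rhs(ti, t1=1000.0, t2=700.0, tF=700.0):
    """R.h.s. of eq. (16) with v_w / c_s = 1."""
    z = Z(ti, t1, t2, tF)
    return (1.0 + ti ** 2 / (t2 * tF)) / (2.0 * z * (1.0 - z))

for tF in (700.0, 2800.0, 6300.0):
    grid = [1000.0 + 0.5 * k for k in range(1, 8000)]      # scan t_i > t_1
    valid = [t for t in grid if 0.0 < Z(t, tF=tF) < 1.0]   # keep 0 < Z < 1
    tmin = min(valid, key=lambda t: rhs(t, tF=tF))
    print(int(tF), round(tmin), round(Z(tmin, tF=tF), 3), round(rhs(tmin, tF=tF), 2))
# Expected output: (700, 1184, 0.445, 7.82), (2800, 1534, 0.418, 4.52),
# (6300, 1937, 0.412, 3.82), matching the values quoted in the text.
```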
Images of the inner regions of several PNs observed with HST, e.g., M1-26, He2-142, He2-138, and He2-131, all from Sahai & Trauger (1998), reveal many blobs, arcs and filaments with angular widths of $`0.2-0.5`$ rad ($`10^{\circ }-30^{\circ }`$). For these inhomogeneities to be amplified by ionization, the density differences between different regions should have been by a factor of $`2-5`$ as ionization started. This is quite plausible in these PNs. Very narrow radially extending structures, which span an angle of $`2^{\circ }`$ ($`0.04`$ rad), will be amplified even for a mass loss rate enhancement as small as a factor of $`1.2-2`$. Finally, I examine the numerical results of Mellema (1995). Mellema finds that the ionization front modifies the slow wind density profile along the azimuthal direction (from pole to equator) in his model B, but not in his model A. In his model B the density profile has a steep variation with the angle near the pole, where the ionization front modifies the density profile, while in model A the density variation with the angle is much shallower in all directions. From his density plots I find that within $`30^{\circ }`$ from the symmetry axis (i.e., the polar direction) the average value is $`\dot{M}^{\prime }/\dot{M}\simeq 2.5`$, with higher values nearer the symmetry axis. According to the results here, this density inhomogeneity can be amplified, as indeed happens in the simulation of Mellema (1995), but a shallower density variation with angle, as in his model A, will not develop the ionization front instability.
The condition for this ionization instability to develop is that the ionization time difference between two directions at the same radius is longer than the sound crossing time between these two regions (eq. 6). This can be expressed as a condition on the mass loss variation with the direction $`\dot{M}^{\prime }\equiv d\dot{M}/d\theta `$. Assuming a constant mass loss rate with time, and a constant wind velocity with time and direction, this condition was derived analytically (eq. 16). For typical parameters of elliptical PNs the ionization instability will increase density inhomogeneities when $`\left(\dot{M}^{\prime }/\dot{M}\right)\gtrsim 4`$. Therefore, the observed inhomogeneities in young PNs can be larger than the inhomogeneities of the mass loss process itself. ACKNOWLEDGMENTS: This research was supported in part by a grant from the Israel Science Foundation, and a grant from the Israel-USA Binational Science Foundation.
no-problem/9912/cond-mat9912418.html
ar5iv
text
# Microwave Electrodynamics of the Antiferromagnetic Superconductor GdBa2Cu3O7-δ. ## Abstract The temperature dependence of the microwave surface impedance and conductivity is used to study the pairing symmetry and properties of cuprate superconductors. However, the superconducting properties can be hidden by the effects of paramagnetism and antiferromagnetic long-range order in the cuprates. To address this issue we have investigated the microwave electrodynamics of GdBa2Cu3O7-δ, a rare-earth cuprate superconductor which shows long-range ordered antiferromagnetism below $`T_N`$=2.2 K, the Néel temperature of the Gd ion subsystem. We measured the temperature dependence of the surface resistance and surface reactance of $`c`$-axis oriented epitaxial thin films at 10.4, 14.7 and 17.9 GHz with the parallel plate resonator technique down to 1.4 K. Both the resistance and the reactance data show an unusual upturn at low temperature, and the resistance presents a strong peak around $`T_N`$, mainly due to a change in the magnetic permeability. The analysis of the temperature dependence of the microwave surface impedance and conductivity is one of the established methods used to extract information on the pairing symmetry and properties of high temperature cuprate superconductors. However, things get more complicated when the material also develops magnetic correlations, due to the localized moments of the rare-earth elements (RE). The effects of paramagnetism and antiferromagnetic long range order may hide the behavior of the superconducting screening length, influencing conclusions about the pairing symmetry, as has been suggested for the electron-doped Nd2-xCexCuO4 . To address this issue we have focused on the electrodynamic properties of GdBa2Cu3O7-δ (GBCO), where the Gd3+ ions carry magnetic moments which align parallel to the c-axis and order antiferromagnetically below $`T_N\simeq 2.2`$ K in the three crystallographic directions . The samples we have investigated are pairs of identical c-axis oriented GBCO epitaxial films, laser ablated on (100)-cut LaAlO3 single crystal substrates. The film thickness is 300 nm, the superconducting critical temperature measured by AC susceptibility is 92.5 K and the transition width is 0.3 K. We measured the effective (due to the finite film thickness) surface impedance $`Z_{Seff}(T,\omega )=R_{Seff}(T,\omega )+iX_{Seff}(T,\omega )=\sqrt{i\omega \mu (T,\omega )/\sigma \left(T\right)}\mathrm{coth}\left[t\sqrt{i\omega \mu (T,\omega )\sigma \left(T\right)}\right]`$ of the GBCO thin films from 1.4 K to $`T_c`$ with the parallel plate resonator (PPR) technique at three different resonance frequencies with the rf magnetic field in the $`ab`$ plane. Here the first factor on the right hand side is the bulk surface impedance $`Z_S`$ and the second factor is the finite thickness correction ($`t`$ is the film thickness), and $`\mu `$ and $`\sigma `$ are respectively the complex magnetic permeability and complex conductivity. The PPR resonance frequency $`f(T)`$ and quality factor $`Q(T)`$ data are first converted to changes in surface reactance and surface resistance and then to absolute values using $`X_{Seff}(77K)=49`$ m$`\mathrm{\Omega }`$ and $`R_{Seff}\left(77K\right)=0.48`$ m$`\mathrm{\Omega }`$ measured at 10 GHz by the variable spacing parallel plate resonator technique . In Fig.
1 we show $`R_{Seff}(T)`$ and $`X_{Seff}(T)`$ at 10.4 GHz over the entire measurement temperature range. The high temperature behavior is consistent with a $`d`$-wave temperature dependence for the surface impedance . The deviations from this behavior start below 30 K, where the magnetic effects due to $`\mu \left(T\right)`$ come into play . Both $`R_{Seff}(T)`$ and $`X_{Seff}(T)`$ show a minimum at two different temperatures, $`T\simeq 25`$ K and $`\simeq 7`$ K, respectively. Then $`R_{Seff}(T)`$ and $`X_{Seff}(T)`$ increase upon reducing the temperature, and a strong peak is observed in $`R_{Seff}(T)`$. The same behavior is found at the other two frequencies, with some extra frequency dependence other than the trivial $`X_S\propto \omega `$ and $`R_S\propto \omega ^2`$ observed for superconductors. This is clearly seen in Fig. 2, where we show the data at the three frequencies as a modified complex conductivity $`\sigma _m(T,\omega )`$, defined through $`Z_S(T,\omega )=\sqrt{i\omega \mu _0/\sigma _m(T,\omega )}`$. The real part, $`\sigma _{1m}=2R_S\omega \mu _0/X_S^3`$, presents frequency-dependent peaks around $`T_N`$. In the inset to Fig. 2 we show a rescaled imaginary part $`\lambda _m^{-2}=\sigma _{2m}\omega \mu _0=(\omega \mu _0/X_S)^2`$, where the frequency dependence is less pronounced. In conclusion, strong and unusual features are observed in the temperature dependence of the surface impedance and conductivity of GBCO. The effects of paramagnetism and antiferromagnetism are shown to have a significant influence on $`\lambda (T)`$ and $`R_S(T)`$. The authors want to acknowledge J. Claassen, M. Coffey, P. Fournier, H. Harshevarden, M. Pambianchi, A. Pique, A. Porch and A. Schwartz.
no-problem/9912/astro-ph9912179.html
ar5iv
text
# Contents ## 1 Introduction The purpose of Pégase is the study of galaxies by evolutionary synthesis. This version supersedes our previous model (Fioc & Rocca-Volmerange 1997; contributions in Leitherer *et al.* 1996). The main differences are the implementation of: * stellar evolutionary tracks with non-solar metallicities; * the library of stellar spectra of Lejeune *et al.* (1997, 1998); * radiative transfer computations to model the extinction. The extension to the far-infrared (Fioc & Dwek, in prep.) and a detailed modeling of the nebular emission (Moy, Rocca-Volmerange & Fioc, in prep.) are in progress. Synthetic spectra computed from standard star formation scenarios fitted on new statistical templates for nearby galaxies (Fioc & Rocca-Volmerange 1999; Fioc & Rocca-Volmerange, in prep.) will be proposed in the near future, as well as their colors, $`k`$\- and $`e`$-corrections. To be informed of the future developments of Pégase, to request specific computations, ask questions, or make comments or suggestions, mail us at pegase@iap.fr. ## 2 Contents of the directory ### 2.1 List of files | README.tex | IMF_Scalo98.dat | stellibLCBcor.dat | | --- | --- | --- | | README.ps | Spitzer.dat | SunLCB.dat | | SSPs.f | WW.dat | VegaLCB.dat | | calib.f | ages.dat | King.dat | | colors.f | calib.dat | slab.dat | | scenarios.f | dust.dat | tracksZ0.0001.dat | | spectra.f | filters.dat | tracksZ0.0004.dat | | IMF_Kennicutt.dat | list_IMFs.dat | tracksZ0.004.dat | | IMF_Kroupa.dat | list_tracks.dat | tracksZ0.008.dat | | IMF_MillerScalo.dat | HII.dat | tracksZ0.02.dat | | IMF_Salpeter.dat | BD+17o4708.dat | tracksZ0.05.dat | | IMF_Scalo86.dat | stellibCM.dat | tracksZ0.1.dat | ### 2.2 Codes * calib.f: code computing the calibrations of the filters. * colors.f: code computing colors and other quantities. * spectra.f: code computing synthetic spectra of galaxies and other quantities. * SSPs.f: code computing the properties of simple stellar populations (SSPs), i.e. populations of stars formed simultaneously with the same metallicity. * scenarios.f: code used to prepare the input file (star formation scenarios) to spectra. The codes are written in Fortran 77. Though available on most systems, some features are non-standard: * use of lowercase letters and underscore; * use of identifiers longer than 6 characters; * implicit none; * do while; * do … end do; * list-directed input/output in internal files. Fortran 90 should also work. To compile and execute the file name.f (name=calib/colors/spectra/SSPs/scenarios): ##### Unix: * f77 name.f -o name * name ##### VMS: * for name * link name * run name You may have to rename name.f as name.for and change the lowercase letters to uppercase. ### 2.3 Data files #### 2.3.1 Stellar evolutionary tracks Stellar evolutionary tracks for various metallicities ($`Z`$) and helium abundances ($`Y`$): * tracksZ0.0001.dat: $`Z=0.0001,Y=0.23`$. * tracksZ0.0004.dat: $`Z=0.0004,Y=0.23`$. * tracksZ0.004.dat: $`Z=0.004,Y=0.24`$. * tracksZ0.008.dat: $`Z=0.008,Y=0.25`$. * tracksZ0.02.dat: $`Z=0.02,Y=0.28`$. * tracksZ0.05.dat: $`Z=0.05,Y=0.35`$. * tracksZ0.1.dat: $`Z=0.1,Y=0.48`$. The names of these files are written in list_tracks.dat (read by SSPs). The tracks proposed here come mainly from the “Padova” group. At $`Z=0.1`$, pseudo-tracks for masses larger than 9 $`M_{\odot }`$ have been computed from the corresponding masses in the $`Z=0.02`$ and $`Z=0.05`$ sets.
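Since the track files are discovered through list_tracks.dat, the metallicity grid is encoded directly in the file names. A minimal sketch (ours; the Pégase codes themselves are Fortran 77, and this helper is purely illustrative) of how that grid can be recovered:

```python
# Sketch: recover the metallicity grid from the file names in list_tracks.dat.

def track_metallicities(list_file="list_tracks.dat"):
    """Return {filename: Z} for each tracksZ<Z>.dat entry of list_file."""
    tracks = {}
    with open(list_file) as f:
        for line in f:
            name = line.strip()
            if name.startswith("tracksZ") and name.endswith(".dat"):
                tracks[name] = float(name[len("tracksZ"):-len(".dat")])
    return tracks

print(track_metallicities())
# e.g. {'tracksZ0.0001.dat': 0.0001, ..., 'tracksZ0.1.dat': 0.1}
```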
For stars undergoing the helium flash, the zero-age main sequence tracks are connected to the zero-age horizontal branch tracks assuming a Reimers law for the mass loss along the first giant branch with $`\eta =0.4`$. The same law is used to describe the mass loss during the early asymptotic giant branch phase (EAGB). Pseudo-tracks are then computed for the thermally pulsing AGB (TPAGB) phase using the equations proposed by Groenewegen & de Jong (1993) with $`\eta =4`$ (van den Hoek & Groenewegen 1997). Hydrogen burning post-AGB and CO white dwarf tracks from Blöcker (1995) and Schönberner (1983) are then connected. For low-mass stars becoming helium white dwarfs, but for which the Padova group does not provide the tracks, we use the Althaus & Benvenuto (1997) models. The positions of (unevolving) low-mass stars in the HR diagram come from Chabrier & Baraffe (1997). #### 2.3.2 Initial mass functions Some initial mass functions (IMF) are already defined analytically in SSPs.f: * ln: lognormal IMF (Miller & Scalo 1979). * RB: Rana & Basu (1992). * Fe: Ferrini (1991). Others are given in specific files: * IMF_Kennicutt.dat: Kennicutt (1983). * IMF_Kroupa.dat: Kroupa *et al.* (1993). * IMF_MillerScalo.dat: Miller & Scalo (1979). * IMF_Salpeter.dat: Salpeter (1955). * IMF_Scalo86.dat: Scalo (1986). * IMF_Scalo98.dat: Scalo (1998). These files are listed in list_IMFs.dat. #### 2.3.3 Filters and calibrations ##### Filters: The passbands of the filters are provided in the file filters.dat. Since you may want to change it to add new filters, we detail its content here. The structure of filters.dat is the following: * $`1^{\mathrm{st}}`$ line: number of filters ($`N_{\mathrm{filters}}`$). * $`N_{\mathrm{filters}}`$ blocks, one for each filter, containing: + $`1^{\mathrm{st}}`$ line: - the number of wavelengths ($`N_{\mathrm{wavelengths}}`$); - the type of transmission (see below); - the type of calibration (see below); - a code between quotes identifying the filter, used in calib.dat; - reference, comments (optional). + $`N_{\mathrm{wavelengths}}`$ lines containing: - the wavelength in Å; - the transmission at this wavelength. ###### Type of transmission: * 0: the shape of the transmission curve ($`T_\lambda =T_\nu `$) corresponds to the *energy* transmitted. * 1: the shape of the transmission curve corresponds to the *number of photons* transmitted. * 2: used for $$D_{4000}=\frac{{\displaystyle \int _{4050\text{Å}}^{4250\text{Å}}}F_\nu d\lambda }{{\displaystyle \int _{3750\text{Å}}^{3950\text{Å}}}F_\nu d\lambda }\text{(Bruzual 1983).}$$ ###### Type of calibration: * 0: used for $`D_{4000}`$.
* 1: standard system $$m=-2.5\mathrm{log}_{10}\frac{{\displaystyle \int F_\lambda T_\lambda d\lambda }}{{\displaystyle \int F_\lambda (\mathrm{Vega})T_\lambda d\lambda }}+0.03\text{(i.e., the magnitude of Vega is 0.03).}$$ * 2: AB system; used for the SDSS filters ($`u^{\prime }`$, $`g^{\prime }`$, $`r^{\prime }`$, $`i^{\prime }`$, $`z^{\prime }`$) $$m_{\mathrm{AB}}=-2.5\mathrm{log}_{10}\frac{{\displaystyle \int F_\nu T_\nu d\nu }}{{\displaystyle \int T_\nu d\nu }}-48.60\text{(}F_\nu \text{ in erg.s}^{-1}\text{.cm}^{-2}\text{.Hz}^{-1}\text{).}$$ * 3: Thuan & Gunn system; used for the Thuan & Gunn filters ($`u`$, $`v`$, $`g`$, $`r`$) $$m_{\mathrm{TG}}=-2.5\mathrm{log}_{10}\frac{{\displaystyle \int F_\lambda T_\lambda d\lambda }}{{\displaystyle \int F_\lambda (\mathrm{BD}+17^{\circ }4708)T_\lambda d\lambda }}+9.50.$$ * 4: used for the WFPC2 filters ($`F300W`$, $`F450W`$, $`F606W`$, $`F814W`$) and the ultraviolet filters at 1650 Å, 2500 Å and 3150 Å $$m_{21.10}=-2.5\mathrm{log}_{10}\frac{{\displaystyle \int F_\lambda T_\lambda d\lambda }}{{\displaystyle \int T_\lambda d\lambda }}-21.10\text{(}F_\lambda \text{ in erg.s}^{-1}\text{.cm}^{-2}\text{.Å}^{-1}\text{).}$$ * 5: used for the FOCA filter at 2000 Å $$m_{21.175}=-2.5\mathrm{log}_{10}\frac{{\displaystyle \int F_\lambda T_\lambda d\lambda }}{{\displaystyle \int T_\lambda d\lambda }}-21.175\text{(}F_\lambda \text{ in erg.s}^{-1}\text{.cm}^{-2}\text{.Å}^{-1}\text{).}$$ ##### Calibrations: The calibrations of the filters are computed by the code calib.f and written in calib.dat. The structure of this file is the following: * $`1^{\mathrm{st}}`$ line: caption of the file. * One line for each filter containing: + the name of the filter; + the corresponding index used in colors; + the apparent flux of Vega in erg.s^-1.cm^-2.Å^-1: $`\int F_\lambda (\mathrm{Vega})T_\lambda d\lambda /\int T_\lambda d\lambda `$; + the “area” of the filter in Å: $`\int T_\lambda d\lambda `$; + the mean wavelength in Å: $`\overline{\lambda }=\int \lambda T_\lambda d\lambda /\int T_\lambda d\lambda `$; + the effective wavelength of Vega in Å: $`\int \lambda F_\lambda (\mathrm{Vega})T_\lambda d\lambda /\int F_\lambda (\mathrm{Vega})T_\lambda d\lambda `$; + the AB-magnitude of Vega; + the Thuan & Gunn-magnitude of Vega (99.999 if undefined); + the “monochromatic” luminosity of the Sun in erg.s^-1.Å^-1: $`\int L_\lambda (\odot )T_\lambda d\lambda /\int T_\lambda d\lambda `$. The AB-magnitude $`m_{\mathrm{AB}}`$ may be computed from the standard magnitude $`m`$ (in the Vega system) as: $`m_{\mathrm{AB}}=m+m_{\mathrm{AB}}(\mathrm{Vega})`$, and the same for the Thuan & Gunn magnitude. You may also directly modify the type of calibration in filters.dat to change the default ones used in colors.f. #### 2.3.4 Other files * stellibCMcor.dat: stellar library of Clegg & Middlemass (1987); $`T_{\mathrm{eff}}>50000`$ K. * stellibLCBcor.dat: stellar library of Lejeune *et al.* (1997, 1998; corrected version (BaSeL-2.0)); $`T_{\mathrm{eff}}\le 50000`$ K. * ages.dat: ages at which the synthetic spectra will be written. * HII.dat: used to compute the nebular emission (continuum and lines). * dust.dat: extinction properties of graphites and silicates (Draine & Lee 1993; Laor & Draine 1993). * slab.dat: results of the radiative transfer code for an homogeneous slab model for both stars and dust (Fioc 1997); used to model the extinction for disk galaxies.
* King.dat: results of the radiative transfer code for a spheroidal geometry, where the stars are distributed according to a King profile and the dust to a power $`\frac{1}{2}`$ of the King profile (Fioc & Rocca-Volmerange 1997); used to model the extinction for elliptical galaxies. * VegaLCB.dat: spectrum of Vega (Lejeune *et al.* 1997; computed by R.L. Kurucz). * BD+17o4708.dat: spectrum of the F subdwarf BD+17°4708 (Oke & Gunn 1983) used to calibrate the Thuan & Gunn (1976) photometric system. * SunLCB.dat: spectrum of the Sun (Lejeune *et al.* 1997; computed by R.L. Kurucz). * Spitzer.dat: table 5.4 of Spitzer (1978, p. 113). * WW.dat: stellar yields of Woosley & Weaver (1995). ## 3 Computing synthetic spectra ### 3.1 Preliminaries Except for the star formation rate, which takes also into account substellar objects, “star”, “stellar”, etc., refer only to luminous stars, to the exclusion of stellar remnants (old white dwarfs, neutron stars and black holes) and substellar objects. Gas means both the gas strictly speaking and the dust. All the metallicities are given in mass fraction. We consider only the baryonic matter (with the constant mass $`M_{\mathrm{tot}}`$) and distinguish two zones: * The galaxy itself (mass $`M_{\mathrm{gal}}`$). Unless otherwise specified, all the quantities refer only to this zone. * A reservoir of gas only surrounding the galaxy (mass $`M_{\mathrm{res}}`$). Initially, both zones contain only gas and we have either * $`M_{\mathrm{gal}}=M_{\mathrm{tot}}`$ and $`M_{\mathrm{res}}=0`$: the galaxy is already fully constituted; or * $`M_{\mathrm{gal}}=0`$ and $`M_{\mathrm{res}}=M_{\mathrm{tot}}`$: the galaxy forms entirely by infall from the reservoir. In both cases, the reservoir may be replenished by galactic winds occurring in the galaxy. These moreover interrupt the infall. Some quantities in the following are *normalized* to $`M_{\mathrm{tot}}=1M_{\odot }`$. To obtain the value for a given $`M_{\mathrm{tot}}`$, you have either: * to multiply them by $`M_{\mathrm{tot}}`$ \[in $`M_{\odot }`$\]: quantities denoted by a “†”; or * to add $`-2.5\mathrm{log}_{10}M_{\mathrm{tot}}`$ \[in $`M_{\odot }`$\]: quantities denoted by a “‡”. If, at any time $`t`$, the *normalized* star formation rate $`\text{SFR}(t)`$ exceeds $`\text{SFR}_{\mathrm{max}}`$, its maximal possible value given the amount of gas available, spectra sets $`\text{SFR}(t)`$ to $`\text{SFR}_{\mathrm{max}}`$ and, the first time it happens, also prints a warning on the screen and in the header of the output file. If there is some extinction in the disk geometry, the emission is not isotropic. If you choose to compute the spectra for a specific inclination, the monochromatic luminosity $`L_\lambda `$ is then defined as $`L_\lambda (\theta _0)=4\pi \mathrm{\Lambda }_\lambda (\theta _0)`$, where $`\mathrm{\Lambda }_\lambda (\theta _0)\mathrm{d}\lambda \mathrm{d}\omega (\theta _0)`$ is the energy radiated between $`\lambda `$ and $`\lambda +\mathrm{d}\lambda `$ and escaping from the galaxy in a solid angle $`\mathrm{d}\omega (\theta )=2\pi \mathrm{sin}\theta \mathrm{d}\theta `$ having an inclination $`\theta =\theta _0`$ to the axis of rotational symmetry. Inclination-dependent quantities (monochromatic or in-line luminosities, magnitudes, etc.) are denoted by a “§” in the following.
This does not apply to the bolometric luminosity, as computed here ($`L_{\mathrm{bol}}\equiv \int \int \mathrm{\Lambda }_\lambda (\theta )d\lambda d\omega (\theta )\ne \int L_\lambda (\theta _0)d\lambda `$), nor to the dust emission, which is supposed to be isotropic (negligible self-absorption). You may also output inclination-averaged spectra ($`L_\lambda =\int \mathrm{\Lambda }_\lambda (\theta )d\omega (\theta )`$) rather than for a specific inclination. Stars are formed with the same metallicity as the ISM. ### 3.2 Procedure The synthetic spectra are computed in three steps: 1. run SSPs to compute the properties of SSPs of different metallicities (note 1: you do not need to run SSPs every time if you keep the same IMF and the other parameters asked by SSPs); 2. run scenarios to prepare the input file to spectra containing the parameters of the star formation scenarios; 3. run spectra. #### 3.2.1 SSPs You will be asked: * The shape of the initial mass function (enter the corresponding number). * The lower mass of the IMF. * The upper mass. * The type of supernovae ejecta. * If you want to take into account the ejecta due to stellar winds in high-mass stars through a somewhat dubious procedure. * A prefix (e.g. *prefix*). The output files corresponding to the tracks tracksZ\*.dat will be named *prefix*_tracksZ\*.dat and will be listed in the file called *prefix*_SSPs.dat (note 2: spectra will interpolate between the resulting files; if the metallicity of the stars formed in spectra is lower (resp. higher) than the lowest (resp. highest) metallicity of every file, spectra does not extrapolate but uses the data of the file with the lowest (resp. highest) metallicity). Default values are proposed for some quantities. Type \<return\> to select them. #### 3.2.2 scenarios You will be asked: * the name of the output file, e.g. *scenarios.dat* (it must be a *new* name); * the name of the file (*prefix*_SSPs.dat in the example above) listing the names of the SSP files; * the fraction of close binary systems (this quantity is used to compute the number and the ejecta of SNIa, assuming the W7 model of Thielemann *et al.* (1986) and the formalism of Greggio & Renzini (1983) and Matteucci & Greggio (1986)). These data are common to all the star formation scenarios chosen later. Then for each scenario, you will be asked: * The name of the file containing the corresponding synthetic spectra (just type end to stop). * The initial metallicity of the interstellar medium (ISM). * Whether you want to build your galaxy by infall or prefer to start from a galaxy already constituted. The infall rate, computed as a function of the time $`t`$, is: $$M_{\mathrm{tot}}\frac{\mathrm{exp}(-t/t_{\mathrm{infall}})}{t_{\mathrm{infall}}}.$$ You will have to provide $`t_{\mathrm{infall}}`$ (Myr) and the metallicity of the infalling gas. * The type of star formation scenario (characterized by an integer) giving SFR \[†\] – the *normalized* star formation rate in $`M_{\odot }`$.Myr^-1 – as a function of the time in Myr, the *normalized* mass of gas $`M_{\mathrm{gas}}`$ \[†\] in $`M_{\odot }`$ and other quantities. + Types 0 to 9 are reserved for predefined laws of star formation implemented in spectra.f (a short sketch of types 1–3 is given at the end of this subsection): - 0: instantaneous burst: $`\text{SFR}(t)=\delta (t).`$ - 1: constant star formation rate: $$\begin{array}{cccc}\text{SFR}(t)\hfill & =& p_1\hfill & \text{ if }t\le p_2,\hfill \\ & =& 0\hfill & \text{ if }t>p_2.\hfill \end{array}$$ $`[p_1]=M_{\odot }.\mathrm{Myr}^{-1}`$; $`[p_2]=\mathrm{Myr}`$.
- 2: exponentially decreasing or increasing star formation rate: $$\text{SFR}(t)=p_2\frac{\mathrm{exp}(-t/p_1)}{p_1}.$$ $`[p_1]=\mathrm{Myr}`$; $`[p_2]=M_{\odot }`$. - 3: star formation rate proportional to some power of the mass of gas: $$\text{SFR}(t)=\frac{M_{\mathrm{gas}}^{p_1}(t)}{p_2}.$$ $`[p_1]=1`$; $`[p_2]=\mathrm{Myr}.M_{\odot }^{p_1-1}`$. - 4 to 9: not yet defined. You will then be asked the values of the parameters ($`p_1`$, $`p_2`$, etc.). They must be real. + Types $`\ge 10`$: you have to implement your star formation law in spectra.f (see section 5.2). You will be asked the number of parameters used by this law and the values (real) of each one. + Types -1 and -2 are for files containing the star formation rate as a function of time. You will be asked the name of the file (e.g. *SFRfile*): - -1: *SFRfile* must contain on each line the age in Myr and SFR separated by blanks. - -2: *SFRfile* must contain on each line the age in Myr, SFR and the metallicity of the forming stars separated by blanks. This metallicity may be inconsistent with that of the ISM. These quantities must be real. The first age in *SFRfile* must be $`0.`$ and the last must be higher than $`20000`$. The computation of the star formation rate at intermediate ages is performed by spectra. * If the type of star formation scenario is not -2, whether you want a consistent evolution or prefer to form stars with a constant metallicity (asked later). * The fraction (in mass) of the star formation rate used to form substellar objects. These objects lock the mass and are supposed to emit no light. * If you want galactic winds. Galactic winds expel all the interstellar medium from the galaxy after a given time (asked later) and prevent any further star formation. * If you want to take into account the nebular emission, i.e. the continuum and lines emitted by the ionized gas in star-forming regions. The emission in the continuum and the hydrogen lines is computed from the number of Lyman continuum photons in the case B of recombination. Typical observed ratios to H$`\beta `$ are taken for other lines. If you hereafter choose to have some extinction, a fraction of the Lyman continuum photons will be absorbed by the dust inside the HII region rather than by the gas. This fraction is computed according to the prescriptions of Spitzer (1978, p. 113) and assuming that 70% of the Lyman continuum photons are absorbed by the gas at solar metallicity. * If you want to introduce some extinction: + 0: No extinction. + 1: Extinction for a spheroidal geometry. + 2: Extinction for a disk geometry; inclination-averaged. + 3: Extinction for a disk geometry; specific inclination. You will then be asked the inclination in degrees relative to face-on. The optical depth is estimated from the mass of gas and the metallicity. The absorption, the albedo and the asymmetry parameter are computed from Draine & Lee (1984) and Laor & Draine (1993) data for a mixture of graphites and silicates depending on the metallicity and fitted on the Magellanic Clouds and the Milky Way (cf. Pei (1992)). In the cases 1 and 2, all the Lyman continuum photons not absorbed by the gas, as well as those emitted in the Ly$`\alpha `$ line, are absorbed by the dust as soon as the metallicity of the ISM is non-zero. Default answers are proposed for some questions. Just type \<return\> to select them. Default names of the output files are created by inserting the number of the scenario between the prefix spectra and the suffix .dat (see however note 3 in § 3.2.3). For the other questions, the default answers are those defined in scenarios.f the first time you answer a specific question; when you have already answered it for a previous scenario, the default is your last choice.
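To make the predefined laws concrete, here is the minimal sketch of types 1–3 referred to above (ours, not taken from the Pégase codes; type 0, the instantaneous burst SFR(t) = δ(t), has no smooth closed form):

```python
# Sketch: the predefined star formation laws of types 1-3 documented above.
# SFR is in Msun/Myr, t in Myr; p1, p2 as defined for each type.
from math import exp

def sfr(law, t, p1, p2, m_gas=None):
    if law == 1:                     # constant SFR p1 up to t = p2
        return p1 if t <= p2 else 0.0
    if law == 2:                     # exponential with timescale p1, mass p2
        return p2 * exp(-t / p1) / p1
    if law == 3:                     # proportional to a power of the gas mass
        return m_gas ** p1 / p2
    raise ValueError("only types 1-3 are sketched here")

print(sfr(1, t=500.0, p1=1.0, p2=1000.0))            # constant phase: 1.0
print(sfr(2, t=500.0, p1=1000.0, p2=1.0))            # ~ 6.07e-4
print(sfr(3, t=0.0, p1=1.0, p2=1000.0, m_gas=1.0))   # Schmidt-like: 1e-3
```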
For the other questions, the default answers are those defined in scenarios.f the first time you answer a given question. Once you have answered that question for a previous scenario, the default is your last choice.

#### 3.2.3 spectra

Type the name of the file of scenarios (*scenarios.dat* in the example above) when required<sup>3</sup><sup>3</sup>3If one of the files of spectra you want to create already exists, spectra appends one or more “+” to the name of the new file and prints a warning on the screen.. The structure of the output files is the following:

* A block describing the evolutionary scenario and ending with a line of asterisks (\*** ... \***) only.
* One line with:
  + the number of timesteps ($`N_{\mathrm{timesteps}}`$);
  + the number of wavelengths of the continuum ($`N_{\mathrm{continuum}}`$);
  + the number of emission lines ($`N_{\mathrm{lines}}`$).
* A block containing the $`N_{\mathrm{continuum}}`$ wavelengths (Å) of the continuum (5 per line).
* A block containing the $`N_{\mathrm{lines}}`$ wavelengths (Å) of the emission lines (5 per line).
* $`N_{\mathrm{timesteps}}`$ blocks (one for each timestep) containing:
  + $`1^{\mathrm{st}}`$ line:
    - the time (Myr, integer);
    - the *normalized* mass of the galaxy \[†\] ($`M_{\odot }`$);
    - the *normalized* mass in stars \[†\] ($`M_{\odot }`$);
    - the *normalized* mass in white dwarfs \[†\] ($`M_{\odot }`$);
    - the *normalized* mass in neutron stars and black holes \[†\] ($`M_{\odot }`$);
    - the *normalized* mass in substellar objects \[†\] ($`M_{\odot }`$);
    - the *normalized* mass in the gas \[†\] ($`M_{\odot }`$);
    - the metallicity of the interstellar medium (mass fraction);
    - the mean metallicity of stars averaged on the mass (i.e., the mean initial metallicity of the stars still alive averaged on their initial mass);
    - the mean metallicity of stars averaged on the bolometric luminosity (i.e., the mean initial metallicity of the stars still alive averaged on their present bolometric luminosity).
  + $`2^{\mathrm{nd}}`$ line:
    - the *normalized* bolometric luminosity \[†\] (erg.s<sup>-1</sup>);
    - the optical depth in the $`V`$-band (5500 Å) from side to side (through the center for the spheroidal geometry and along the axis of rotational symmetry for the disk geometry);
    - the ratio of the luminosity emitted by the dust to the bolometric luminosity;
    - the *normalized* star formation rate \[†\] ($`M_{\odot }`$.Myr<sup>-1</sup>);
    - the *normalized* number of Lyman continuum photons emitted \[†\] (s<sup>-1</sup>);
    - the *normalized* SNII rate \[†\] (Myr<sup>-1</sup>);
    - the *normalized* SNIa rate \[†\] (Myr<sup>-1</sup>);
    - the mean age of the stars averaged on the mass (Myr);
    - the mean age of stars averaged on the bolometric luminosity (Myr).
  + A block containing the *normalized* monochromatic luminosities \[†, §\] (erg.s<sup>-1</sup>.Å<sup>-1</sup>) at the $`N_{\mathrm{continuum}}`$ wavelengths of the continuum (5 per line).
  + A block containing the *normalized* luminosities \[†, §\] (erg.s<sup>-1</sup>) of the $`N_{\mathrm{lines}}`$ emission lines (5 per line).

## 4 Computing colors

To compute colors, luminosities, etc., for a given set of spectra, run colors and type the name of the input file (spectra) when required. You are then asked the name of the output file (colors). If you just type \<return\>, the name of the output file is created by adding the prefix colors$`\mathrm{\_}`$ to the name of the input file. The structure of the output file is the following:

* A block describing the evolutionary scenario and ending with a line of asterisks
(\*** ... \***) only.
* One line giving the number of timesteps ($`N_{\mathrm{timesteps}}`$).
* Eight blocks, each consisting of:
  + one line describing the quantity in each column;
  + $`N_{\mathrm{timesteps}}`$ lines giving these quantities<sup>4</sup><sup>4</sup>4If no stars have formed yet, all the quantities are set to $`0`$. This happens in particular at $`t=0`$ when the galaxy forms by infall. at each timestep.

The quantities printed in the output file are the following:

* $`1^{\mathrm{st}}`$ block: time Mgal M\* MWD MBHNS Msub Mgas Zgas \<Z\*\>mass \<Z\*\>Lbol
  + time: time (Myr, integer).
  + Mgal \[†\]: *normalized* mass of the galaxy ($`M_{\odot }`$).
  + M\* \[†\]: *normalized* mass in stars ($`M_{\odot }`$).
  + MWD \[†\]: *normalized* mass in white dwarfs ($`M_{\odot }`$).
  + MBHNS \[†\]: *normalized* mass in neutron stars and black holes ($`M_{\odot }`$).
  + Msub \[†\]: *normalized* mass in substellar objects ($`M_{\odot }`$).
  + Mgas \[†\]: *normalized* mass in the gas ($`M_{\odot }`$).
  + Zgas: metallicity of the gas.
  + \<Z\*\>mass: mean stellar metallicity averaged on the mass.
  + \<Z\*\>Lbol: mean stellar metallicity averaged on the bolometric luminosity.
* $`2^{\mathrm{nd}}`$ block: time Lbol tauV Ldust/Lbol SFR nSNII nSNIa \<t\*\>mass \<t\*\>Lbol
  + Lbol \[†\]: *normalized* bolometric luminosity (erg.s<sup>-1</sup>).
  + tauV: optical depth in the $`V`$-band.
  + Ldust/Lbol: ratio of the luminosity of the dust to the bolometric luminosity.
  + SFR \[†\]: *normalized* star formation rate ($`M_{\odot }`$.Myr<sup>-1</sup>).
  + nSNII \[†\]: *normalized* rate of type II supernovae (Myr<sup>-1</sup>).
  + nSNIa \[†\]: *normalized* rate of type Ia supernovae (Myr<sup>-1</sup>).
  + \<t\*\>mass: mean stellar age averaged on the mass (Myr).
  + \<t\*\>Lbol: mean stellar age averaged on the bolometric luminosity (Myr).
* $`3^{\mathrm{rd}}`$ block: time nLymcont L(Ha) W(Ha) L(Hb) W(Hb) LB/LBsol LV/LVsol D4000
  + nLymcont \[†\]: *normalized* number of Lyman continuum photons emitted (s<sup>-1</sup>).
  + L(Ha) \[†, §\]: *normalized* luminosity of the emission line H$`\alpha `$ (erg.s<sup>-1</sup>).
  + W(Ha) \[§\]: equivalent width of the emission line H$`\alpha `$ (Å).
  + L(Hb) \[†, §\]: *normalized* luminosity of the emission line H$`\beta `$ (erg.s<sup>-1</sup>).
  + W(Hb) \[§\]: equivalent width of the emission line H$`\beta `$ (Å).
  + LB/LBsol \[†, §\]: *normalized* blue<sup>5</sup><sup>5</sup>5For the sake of consistency with the stellar library of Lejeune *et al.* (1997, 1998), the $`U`$, $`B`$, $`V`$ filters used in the files of colors are from Buser & Kurucz (1978), not from Bessel (1990). The $`B`$ filter is always $`B3`$, except for $`U-B`$ where we use $`B2`$, which, as $`U3`$, is not corrected for the atmospheric absorption. luminosity ($`L_\mathrm{B}=\int _\mathrm{B}L_\lambda T_\lambda \,d\lambda /\int _\mathrm{B}T_\lambda \,d\lambda `$) in units of the solar blue luminosity (i.e., $`L_\mathrm{B}/L_\mathrm{B}(\odot )`$, which differs from $`\overline{\lambda }_\mathrm{B}L_\mathrm{B}/L_{\odot }`$, where $`L_{\odot }`$ is the bolometric luminosity of the Sun).
  + LV/LVsol \[†, §\]: *normalized* visual luminosity ($`L_\mathrm{V}/L_\mathrm{V}(\odot )`$).
  + D4000 \[§\]: intensity of the Balmer break ($`D_{4000}`$).
* $`4^{\mathrm{th}}`$ block: time Mbol V U-B B-V V-K V-RC V-IC J-H H-K
  + Mbol \[‡\]: *normalized* bolometric magnitude ($`M_{\mathrm{bol}}(\odot )=4.75`$).
  + V, U, B \[‡, §\]: *normalized* absolute magnitudes in the filters of Buser & Kurucz (1978) \[see note 5\].
  + RC and IC \[‡, §\]: *normalized* absolute magnitudes in the $`R`$ and $`I`$ Cousins filters (Bessel 1990).
  + J, H and K, as well as L and M (see below) \[‡, §\]: *normalized* absolute magnitudes in the filters of Bessel & Brett (1988).
* $`5^{\mathrm{th}}`$ block: time K-L L-M V-RJ V-IJ JK-V UK-JK JK-FK FK-NK 2000-V
  + RJ and IJ \[‡, §\]: *normalized* absolute magnitudes in the $`R`$ and $`I`$ Johnson filters (Johnson 1965).
  + UK, NK \[‡, §\]: *normalized* absolute magnitudes in the $`U`$ and $`N`$ filters of Koo (1986).
  + JK, FK \[‡, §\]: *normalized* absolute magnitudes in the $`J`$ and $`F`$ filters of Kron (1980).
  + 2000 \[‡, §\]: *normalized* absolute magnitude in the ultraviolet filter (José Donas, private communication) of the FOCA experiment (Milliard et al. 1991).
* $`6^{\mathrm{th}}`$ block: time V-ID ID-JD JD-KD BJ-V BJ-RF V-606 300-450 450-606 606-814
  + ID, JD, KD \[‡, §\]: *normalized* absolute magnitudes in the $`I`$, $`J`$ and $`K`$ DENIS filters (Éric Copet, private communication); the passband of the $`K`$ filter is the one determined at ambient temperature.
  + BJ, RF \[‡, §\]: *normalized* absolute magnitudes in the $`B_\mathrm{J}`$ and $`R_\mathrm{F}`$ photographic filters (Couch & Newell 1980).
  + 300, 450, 606, 814 \[‡, §\]: *normalized* absolute magnitudes in the $`F300W`$, $`F450W`$, $`F606W`$ and $`F814W`$ filters of the WFPC2 instrument on the Hubble Space Telescope.
* $`7^{\mathrm{th}}`$ block: time u’-g’ g’-r’ V-r’ r’-i’ i’-z’ u-v v-g g-V g-r
  + u’, g’, r’, i’, z’ \[‡, §\]: *normalized* absolute magnitudes in the Sloan Digital Sky Survey filters (Fukugita *et al.* 1996).
  + u, v, g, r \[‡, §\]: *normalized* absolute magnitudes in the Thuan & Gunn (1976) filters.
* $`8^{\mathrm{th}}`$ block: time 1650-B 1650-2500 3150-B
  + 1650, 2500 and 3150 \[‡, §\]: *normalized* absolute magnitudes in Gaussian filters centered on the corresponding wavelengths in Å of the Rifatto *et al.* (1995) data.

Note that not all the filters provided in filters.dat are used in the output file of colors. See section 5.5 if you want to use them.

## 5 Adaptations

### 5.1 IMF

You may define your IMF as a series of $`p`$ continuous piecewise power laws giving the number of stars $`n`$ as a function of their mass $`m`$:

$$\text{if }m\in [m_i,m_{i+1}],\frac{\mathrm{d}n}{\mathrm{d}\mathrm{ln}m}\propto m^{s_i}\text{ }(1\le i\le p).$$

Create a file like this (see for example IMF$`\mathrm{\_}`$Scalo86.dat):

$$\begin{array}{cc}p\hfill & \\ m_1\hfill & s_1\hfill \\ m_2\hfill & s_2\hfill \\ \vdots \hfill & \vdots \hfill \\ m_p\hfill & s_p\hfill \\ m_{p+1}\hfill & \end{array}$$

and add its name at the end of the file list$`\mathrm{\_}`$IMFs.dat (type \<return\> at the end of the file). The lower mass should preferably be larger than $`0.09M_{\odot }`$ and the upper mass less than $`120M_{\odot }`$ to be in agreement with the tracks. The continuity of the power laws and the normalization of $`\int _{m_1}^{m_{p+1}}\frac{\mathrm{d}n}{\mathrm{d}m}m\,\mathrm{d}m`$ to 1 $`M_{\odot }`$ are ensured by SSPs.

### 5.2 Star formation rate

You may define your own star formation rate. Search for the lines

c if (typeSFR.eq.n\>=10) then
c SFR(i)=your SFR law (note that i = time in Myr + 1)
c end if

in spectra.f. Uncomment and modify them; then express the *normalized* star formation rate \[†\] SFR(i) at the timestep i as a function of the age time(i)=i-1., the *normalized* mass of gas \[†\] (sigmagas(i)) or other quantities. SFR(i) may also depend on free parameters param(1), param(2), …, param(nparam) that you will have to provide when running scenarios.
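As an illustration of this step, here is a minimal sketch of a possible type-10 law (this example is not part of PEGASE: the delayed-exponential law and the meaning of its parameters are made up for the purpose of illustration, but the variable names SFR(i), time(i) and param(k) are those quoted above):

c Hypothetical example of a user law: a delayed exponential,
c SFR(t) = p2*t*exp(-t/p1)/p1**2, with [p1]=Myr and [p2]=Msol
      if (typeSFR.eq.10) then
         SFR(i) = param(2)*time(i)*exp(-time(i)/param(1))
     &            /param(1)**2
      end if

This law rises, peaks at $`t=p_1`$ and then decays; its integral over all ages is $`p_2`$, so $`p_2`$ would be the total mass of stars eventually formed.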
The maximal number of parameters nmaxparam is set to 99 in the declarations at the top of spectra.f; it should be enough!

### 5.3 Changing the output ages of the spectra

ages.dat contains the ages in Myr (one per line, integer) at which the spectra are printed. You may change these data (do not forget to type \<return\> at the end of the file). If necessary, modify the parameter nmaxtimesimpr at the beginning of spectra.f (maximal number of printed spectra).

### 5.4 Introducing other filters

You may include other filters (see section 2.3.3):

* Change the number of filters on the first line of filters.dat.
* At the end of the file, write on the same line (with blanks between them):
  + the number of wavelengths defining the passband of the filter;
  + the type of transmission;
  + the type of calibration;
  + the name between quotes;
  + comments (optional).
* Write each wavelength (Å) and the corresponding transmission on one line. Do not forget to type \<return\> at the end of the file.
* Run calib to obtain the calibrations of the filters in calib.dat.

### 5.5 Printing other quantities

colors may print other quantities:

* *Normalized* absolute magnitudes \[‡, §\] mag(j,i) (magnitude at time time(j) in the filter number i) and derived colors \[§\].
* *Normalized* “monochromatic” luminosities \[†, §\] fluxfilter(j,i) (erg.s<sup>-1</sup>.Å<sup>-1</sup>) or their ratio to the solar luminosity in the filter (fluxfilter(j,i)/fluxsol(i)).
* *Normalized* luminosities \[†, §\] Lumline(j,i) (erg.s<sup>-1</sup>) of the emission line number i (see HII.dat) at time time(j), or their equivalent widths \[§\] EW(j,i) (Å). The data for the nebular lines are given in HII.dat at lines 84 to 144. Each line contains the wavelength of the emission line, the ratio of its intensity to H$`\beta `$, the name and, finally, the index i used in colors.f.

To do this, add new lines in colors.f before the instruction close(50) in the following way:

do j=1,ntimes
write(50,*format*) *variable1(j), variable2(j)...*
end do

## 6 References

* Althaus L.G., Benvenuto O.G., 1997, ApJ 477, 313
* Bessel M., 1990, PASP 102, 1181
* Bessel M., Brett J., 1988, PASP 100, 1134
* Bruzual G., 1983, ApJ 273, 105
* Buser R., Kurucz R.L., 1978, A&A 70, 555
* Chabrier G., Baraffe I., 1997, A&A 327, 1039
* Clegg R.E.S., Middlemass D., 1987, MNRAS 228, 759
* Couch W.J., Newell E.B., 1980, PASP 92, 746
* Draine B.T., Lee H.M., 1984, ApJ 285, 89
* Ferrini F., 1991, in *Chemical and Dynamical Evolution of Galaxies*, F. Ferrini, F. Matteucci, J. Franco (eds.), p. 520
* Fioc M., 1997, Ph.D. thesis (Université Paris XI): *Évolution spectrale des galaxies de l’ultraviolet au proche infrarouge — Étude de l’histoire de la formation d’étoiles* (in French; available at http://www.iap.fr/users/fioc/)
* Fioc M., Rocca-Volmerange B., 1997, A&A 326, 950
* Fukugita M., Ichikawa T., Gunn J.E., Doi M., Shimasaku K., Schneider D.P., 1996, AJ 111, 1748
* Greggio L., Renzini A., 1983, A&A 118, 217
* Groenewegen M., de Jong T., 1993, A&A 267, 410
* Johnson H.L., 1965, ApJ 141, 923
* Kennicutt R.C., 1983, ApJ 272, 54
* Koo D.C., 1986, ApJ 311, 651
* Kron R.G., 1980, ApJS 43, 305
* Kroupa P., Tout C.A., Gilmore G., 1993, MNRAS 262, 545
* Laor A., Draine B.T., 1993, ApJ 402, 441
* Leitherer C. *et al.*, 1996, PASP 108, 996
* Lejeune T., Cuisinier F., Buser R., 1997, A&AS 125, 229
* Lejeune T., Cuisinier F., Buser R., 1998, A&AS 130, 65
* Matteucci F., Greggio L., 1986, A&A 154, 279
* Miller G.E., Scalo J.M., 1979, ApJS 41, 513
* Milliard B., Donas J., Laget M., 1991, AdSpR 11, 135
* Oke J.B., Gunn J.E., 1983, ApJ 266, 713
* “Padova”:
  + Bressan A., Fagotto F., Bertelli G., Chiosi C., 1993, A&AS 100, 647
  + Fagotto F., Bressan A., Bertelli G., Chiosi C., 1994a, A&AS 104, 365
  + Fagotto F., Bressan A., Bertelli G., Chiosi C., 1994b, A&AS 105, 29
  + Fagotto F., Bressan A., Bertelli G., Chiosi C., 1994c, A&AS 105, 39
  + Girardi L., Bressan A., Chiosi C., Bertelli G., Nasi E., 1996, A&AS 117, 113
* Pei Y.C., 1992, ApJ 395, 130
* Rana N., Basu S., 1992, A&A 265, 499
* Rifatto A., Longo G., Capaccioli M., 1995, A&AS 114, 257
* Salpeter E., 1955, ApJ 121, 161
* Scalo J.M., 1986, Fund. Cosm. Phys. 11, 1
* Scalo J.M., 1998, in *The Stellar Initial Mass Function*, G. Gilmore, D. Howell (eds.) \[ASP Conf. Ser. 142\], p. 201
* Spitzer L., 1978, *Physical Processes in the Interstellar Medium*, Wiley-Interscience
* Thielemann F.K., Nomoto K., Yokoi K., 1986, A&A 158, 17
* Thuan T.X., Gunn J.E., 1976, PASP 88, 543
* van den Hoek L., Groenewegen M., 1997, A&AS 123, 305
* Woosley S., Weaver T., 1995, ApJS 101, 181
# HMC algorithm for overlap fermions for any number of flavors

## 1 HMC algorithm for overlap fermions for any number of flavors

Overlap fermions represent a lattice discretization of fermions with the same chiral properties as continuum fermions . Properties of overlap fermions are reviewed in (see also ). In this contribution we would like to describe a Hybrid Monte Carlo (HMC) algorithm for the dynamical simulation of overlap fermions, which exploits some of their chiral properties.

We denote by $`H_o(\mu )`$ the hermitian overlap Dirac operator $`\gamma _5D(\mu )`$ and find $`D^{\dagger }(\mu )D(\mu )=H_o^2(\mu )`$. Since $`[H_o^2(\mu ),\gamma _5]=0`$ one can split $`H_o^2(\mu )`$ into two parts, each acting in one chirality sector only, $`H_o^2(\mu )=H_{o+}^2(\mu )+H_{o-}^2(\mu )`$ where, with $`P_\pm =\frac{1}{2}(1\pm \gamma _5)`$,

$$H_{o\pm }^2(\mu )=\frac{1+\mu ^2}{2}P_\pm \pm \frac{1-\mu ^2}{2}P_\pm ϵ(H_w)P_\pm .$$ (1)

The non-zero eigenvalues of $`H_o^2(\mu )`$ are equal in both chirality sectors and hence so are their contributions to the fermion determinant:

$$det^{\prime }(H_{o+}^2(\mu ))=det^{\prime }(H_{o-}^2(\mu ))>0$$ (2)

The prime indicates that the zero modes have been left out. For $`N_f`$ dynamical flavors the fermion determinant is thus

$`[det(D(\mu ))]^{N_f}=\mu ^{N_f|Q|}[det^{\prime }(D(\mu ))]^{N_f}=\mu ^{N_f|Q|}[det^{\prime }(H_o^2(\mu ))]^{N_f/2}=\mu ^{N_f|Q|}[det^{\prime }(H_{o\pm }^2(\mu ))]^{N_f}.`$ (3)

We can use this rewriting to get a Hybrid Monte Carlo algorithm for dynamical overlap fermions for any number of flavors. For each flavor we introduce one pseudo-fermion of a single chirality:

$$det(H_{o\pm }^2(\mu ))=\int d\varphi _\pm ^{\dagger }d\varphi _\pm \,e^{-S_p};S_p=\varphi _\pm ^{\dagger }\left[H_{o\pm }^2(\mu )\right]^{-1}\varphi _\pm .$$ (4)

The choice of the chirality is made such as to avoid zero modes: If the gauge configuration at the beginning of the trajectory has non-trivial topology, we choose the chirality that does not have an exact zero mode of the massless overlap Dirac operator. If the topology is trivial, we choose the chirality randomly. To take the zero mode contribution into account, we reweight to compute observables

$$\langle 𝒪\rangle =\langle \mu ^{N_f|Q|}𝒪\rangle _\pm /\langle \mu ^{N_f|Q|}\rangle _\pm .$$ (5)

Having introduced the pseudo-fermions, doing HMC is straightforward. We need the contribution from the pseudo-fermions to the force:

$$\frac{\delta S_p}{\delta U}=\mp \frac{1}{2}(1-\mu ^2)\chi _\pm ^{\dagger }\frac{\delta ϵ(H_w)}{\delta U}\chi _\pm ;\left[H_{o\pm }^2(\mu )\right]^{-1}\varphi _\pm =\chi _\pm .$$ (6)

We use a rational polynomial approximation for $`ϵ(H_w)`$ written as a sum over poles :

$$ϵ(x)\simeq x\frac{P(x^2)}{Q(x^2)}=x\left(c_0+\underset{k}{\sum }\frac{c_k}{x^2+b_k}\right).$$ (7)

Straightforward algebra then gives (see also )

$`\chi _\pm ^{\dagger }\frac{\delta ϵ(H_w)}{\delta U}\chi _\pm \simeq c_0\chi _\pm ^{\dagger }\frac{\delta H_w}{\delta U}\chi _\pm +\underset{k}{\sum }c_kb_k\chi _{k\pm }^{\dagger }\frac{\delta H_w}{\delta U}\chi _{k\pm }-\underset{k}{\sum }c_k\chi _{k\pm }^{\dagger }H_w\frac{\delta H_w}{\delta U}H_w\chi _{k\pm },`$ (8)

where we introduced

$$\chi _{k\pm }=[H_w^2+b_k]^{-1}\chi _\pm .$$ (9)

(For each pole this follows from $`\chi _\pm ^{\dagger }\delta [H_w(H_w^2+b_k)^{-1}]\chi _\pm =b_k\chi _{k\pm }^{\dagger }\delta H_w\chi _{k\pm }-\chi _{k\pm }^{\dagger }H_w\,\delta H_w\,H_w\chi _{k\pm }`$, using $`\chi _\pm =(H_w^2+b_k)\chi _{k\pm }`$.) The computation of the force thus requires one additional multi-shift “inner” CG inversion to obtain the $`\chi _{k\pm }`$.

A few remarks are in order:

(1) We anticipate that a straightforward HMC for dynamical overlap fermions will suffer even more than with staggered fermions from difficulties in changing topology due to the existence of exact zero modes.
By working only in one chiral sector, a change of topology is possible, unimpeded by the fermions, as long as the number of zero modes changes only in the opposite chirality sector.

(2) The accuracy of the approximation of $`ϵ(H_w)`$ can be enforced by projecting out the lowest few eigenvectors of $`H_w`$ and adding their correct contribution exactly . The molecular dynamics evolution of the eigenvector projectors $`P_\pm `$ in Eq. (1) can be included using ordinary first order perturbation theory. However, we have not yet included projections in our dynamical fermion code.

(3) The approximation of $`ϵ(H_w)`$ used in the molecular dynamics steps need not be the same as the approximation of $`ϵ(H_w)`$ for the Metropolis accept/reject step. E.g. an approximation which is smooth around the origin can be used for the HMD part, and the more accurate optimal rational approximation with projection for the accept/reject step.

## 2 Testing in the Schwinger model

We tested our HMC algorithm in the $`N_f=1`$ and 2 Schwinger model. We first look at time histories of the topological charge $`Q`$, determined via the number of exact zero modes. We see (Fig. 1) that the topological charge changes, even in the massless case. We compared our HMC results with fiducial results, obtained by a brute force approach (exact diagonalization, and then reweighting with the fermion determinant of quenched gauge fields). We notice that the acceptance rate does not drop rapidly and the number of CG iterations does not diverge as $`\mu \to 0`$.

## 3 Conclusions

The non-zero eigenvalues of $`H_o^2(\mu )`$ in each chirality sector contribute identically to the overlap fermion determinant. Utilizing this fact, and separating the contribution from the fermion zero modes in non-trivial gauge fields, we devised an HMC algorithm for any number of flavors of overlap fermions, with changes of topology possible even in the massless limit. The trick consists in working in the chirality sector without exact zero modes. Preliminary tests in the $`N_f=1`$ and 2 Schwinger model show that the algorithm works. The topological charge changes. The algorithm works even in the massless case. The acceptance rate does not go to zero, nor does the CG count go to infinity. Further tests on larger systems and in four dimensions are needed to better judge the usefulness of the algorithm for realistic dynamical simulations.

This work has been supported in part by DOE contracts DE-FG05-85ER250000 and DE-FG05-96ER40979. We would like to thank the organizers for the opportunity to present this poster during the workshop.
# Berry phase induced persistent current in mesoscopic systems

Since the discovery of the Berry phase , there has been much interest in the study of topological effects in the fields of quantum mechanics and condensed matter physics . The typical example used to illustrate the Berry phase is the Aharonov-Bohm (AB) effect in the mesoscopic ring \[4–10\], where a relative phase accumulates on the wave function of a charged particle due to the presence of an electromagnetic gauge potential. Similarly, when a quantum spin follows adiabatically a magnetic field that rotates slowly in time, the spin wave function acquires an additional geometric phase (Berry phase) besides the usual electromagnetic phase in static magnetic fields. In this paper we investigate the persistent current \[11–17\] of quasi-one-dimensional disordered rings in the presence of a static inhomogeneous magnetic field and show that the spin wave function accumulates the Berry phase when the spin of an electron traversing an AB ring adiabatically follows an inhomogeneous magnetic field with a tilt angle, and that this phase leads to a persistent equilibrium current .

We begin by considering a quasi-one-dimensional ring of circumference $`L_x=2\pi r`$ and volume $`V=L_xL_yL_z`$. The ring is embedded in a static inhomogeneous magnetic field $`𝐁`$. For spin-$`1/2`$ electrons of mass $`m`$ and charge $`e`$, the system may be described by the Hamiltonian

$`\mathcal{H}=\frac{1}{2m}\left[𝐩-\frac{e}{c}𝐀^{em}(𝐫)\right]^2+u(𝐫)-\frac{1}{2}g\mu _B𝐁(𝐫)\cdot 𝝈,`$ (1)

where $`𝐩`$, $`𝐫`$, $`g`$, $`\mu _B`$, and $`\hbar 𝝈/2`$ are the momentum, position, $`g`$ factor, Bohr magneton, and spin, respectively. The operator $`u(𝐫)`$ represents the spin-independent random impurity potential, and $`𝐀^{em}`$ is the electromagnetic gauge potential, with $`𝐁=\nabla \times 𝐀^{em}`$ relating it to the magnetic field. In the following we specialize to the case of inhomogeneous magnetic fields with constant magnitude $`B`$, and we parametrize $`𝐁`$ in terms of the spherical polar angles $`\chi `$ and $`\eta `$ so that it has Cartesian components $`B(\mathrm{sin}\chi (𝐫)\mathrm{cos}\eta (𝐫),\mathrm{sin}\chi (𝐫)\mathrm{sin}\eta (𝐫),\mathrm{cos}\chi (𝐫))`$, with the angles $`\chi `$ and $`\eta `$ being smooth functions of position.

Using the Green’s function, the canonical disorder-averaged persistent current is given by

$$I(\mathrm{\Phi }^{em})\simeq \frac{\mathrm{\Delta }V^2}{2}\frac{\partial }{\partial \mathrm{\Phi }^{em}}\int _{-\infty }^{\infty }d\epsilon _1\int _{-\infty }^{\infty }d\epsilon _2f(\epsilon _1)f(\epsilon _2)\underset{\alpha ,\alpha ^{}}{\sum }K_{\alpha ,\alpha ^{}}(\epsilon _1,\epsilon _2),$$ (2)

where $`\mathrm{\Delta }`$, $`\mathrm{\Phi }^{em}`$, $`\alpha `$ and $`f`$ are the mean level spacing, the electromagnetic flux, the spin index and the Fermi-Dirac distribution function, respectively.
In this equation, the two-point correlator of the density of states $`K_{\alpha ,\alpha ^{}}`$ is defined as

$`K_{\alpha ,\alpha ^{}}(\epsilon _1,\epsilon _2)=\frac{1}{2\pi ^2V^2\hbar ^2}\text{Re}\int d𝐱_1d𝐱_2𝒞_{\alpha ,\alpha ^{}}(𝐱_1,𝐱_2;\epsilon _1-\epsilon _2)𝒞_{\alpha ,\alpha ^{}}(𝐱_2,𝐱_1;\epsilon _1-\epsilon _2).`$ (3)

Here we have used the definition of the particle-particle pair propagator

$`𝒞_{\alpha ,\alpha ^{}}(𝐱_1,𝐱_2;\epsilon _1-\epsilon _2)=\frac{2\pi \rho (0)}{\hbar }G_{\alpha ,\alpha }^R(𝐱_2,𝐱_1;\epsilon _1)G_{\alpha ^{},\alpha ^{}}^A(𝐱_2,𝐱_1;\epsilon _2),`$ (4)

where $`G^{R(A)}`$ is the retarded (advanced) Green’s function and $`\rho (0)`$ is the density of states (per unit volume and spin) at the Fermi surface. We have evaluated the pair propagator $`𝒞_{\alpha ,\alpha ^{}}`$ using the diagrammatic method and obtained

$`I(\mathrm{\Phi }^{em})=\frac{\mathrm{\Delta }}{2\pi \beta }\underset{\alpha =\pm 1}{\sum }\text{Re}\frac{\partial }{\partial \mathrm{\Phi }^{em}}\underset{\nu _\ell >0}{\sum }\underset{n_x=-\infty }{\overset{\infty }{\sum }}\frac{\nu _\ell }{\left\{\nu _\ell +\frac{\hbar }{\tau _\phi }+4\pi ^2E_{Th}\left(n_x+2\mathrm{\Phi }_\alpha \right)^2\right\}^2},`$ (5)

where $`\beta =1/k_BT`$ and $`E_{Th}=\hbar D/L_x^2`$ is the Thouless energy, $`\nu _\ell =2\pi \ell /\beta `$ ($`\ell `$ is an integer) is the boson Matsubara frequency and $`\tau _\phi =L_\phi ^2/D`$ ($`L_\phi `$ is the phase coherence length) is the phase coherence time, respectively. In this equation $`\mathrm{\Phi }_\alpha =\mathrm{\Phi }^{em}/\mathrm{\Phi }_0+\alpha \mathrm{\Phi }^g`$, where $`\mathrm{\Phi }^{em}`$ is the electromagnetic flux through the area $`\pi r^2`$ and the geometric flux $`\mathrm{\Phi }^g`$, which corresponds to the Berry phase, is given by

$`\mathrm{\Phi }^g=\frac{1}{4\pi }\int _0^{2\pi }d\varphi \left[\mathrm{cos}\chi (\varphi )-1\right]\partial _\varphi \eta (\varphi ).`$ (6)

For example, for a field texture with constant tilt angle $`\chi `$ and $`\eta (\varphi )=\varphi `$, Eq. (6) gives $`\mathrm{\Phi }^g=(\mathrm{cos}\chi -1)/2=-\mathrm{\Omega }/4\pi `$, where $`\mathrm{\Omega }=2\pi (1-\mathrm{cos}\chi )`$ is the solid angle subtended by the field along the ring. The Berry phase arises from the adiabatic approximation for the spin (dynamical Zeeman) propagator. It should be noted that the expression Eq. (5) is only valid in the adiabatic regime, in which the spin of the electron adiabatically follows the local direction of the non-uniform magnetic field. This adiabaticity requires that the precession frequency $`\omega _B=g\mu _BB/2\hbar `$ be large compared to the reciprocal of the diffusion time $`\tau _d=L_x^2/D`$ ($`D`$ is the diffusion constant) around the ring, i.e., $`\omega _B\tau _d\gg 1`$, or equivalently $`B\gg B_c\equiv 2E_{Th}/g\mu _B`$ \[21–23\]. In the limit of zero temperature, the Matsubara sum turns into an integral, $`\sum _\nu 2\pi /\beta \to \int d\nu `$, which is easily evaluated.
This yields for the averaged persistent current at $`T=0`$

$`I(\mathrm{\Phi }^{em})=\frac{I_0}{M}\underset{\alpha =\pm 1}{\sum }\underset{n=1}{\overset{\infty }{\sum }}\mathrm{exp}\left(-n\frac{L_x}{L_\phi }\right)\mathrm{sin}\left(4\pi n\mathrm{\Phi }_\alpha \right)=\frac{I_0}{2M}\underset{\alpha =\pm 1}{\sum }\frac{\mathrm{sin}\left(4\pi \mathrm{\Phi }_\alpha \right)}{\mathrm{cosh}\left(\frac{L_x}{L_\phi ^B}\right)-\mathrm{cos}\left(4\pi \mathrm{\Phi }_\alpha \right)}.`$ (7)

$`I_0=e\upsilon _F/L_x`$ is the current carried by a single electron state in an ideal one-dimensional ring and $`M=k_F^2V/L_x`$ is the effective channel number, where $`\upsilon _F`$ is the Fermi velocity and $`k_F=m\upsilon _F/\hbar `$ is the Fermi wave number. The average current is a periodic function of $`\mathrm{\Phi }^{em}`$ and $`\mathrm{\Phi }^g`$ with periods $`\mathrm{\Phi }_0/2`$ and $`1/2`$, respectively. As for the half-flux periodicity of the electromagnetic flux in disordered rings , that of the geometric flux is ascribed to the ensemble averaging at fixed particle number: averaging eliminates the first Fourier component of the current, while the second component, which results from the interference between time-reversed trajectories, survives.

In summary, we have investigated the effect of the Berry phase on the persistent current in a static inhomogeneous magnetic field and showed that the disorder-averaged current oscillates as a function of the geometric flux.
## 1 Introduction

The important observation of the high redshift Type Ia supernovas in 1998 was the discovery that the expansion of the universe is accelerating, with a negative deceleration parameter<sup></sup>

$$q_0\equiv -\ddot{R}/(RH^2)=-0.33\pm 0.17.$$ (1)

The universe must therefore contain some new dark energy whose equation-of-state $`w=p/\rho `$ is negative. Although a small non-zero cosmological constant ($`w_\lambda =-1`$) is one of the possible explanations, this scheme is not satisfying because of its coincidence and fine-tuning problems. A popular candidate is the quintessence<sup></sup> ($`-1<w_q<0`$), a slowly varying scalar field $`\varphi `$ with an inverse power-law potential $`V=m^{4+\beta }\varphi ^{-\beta }`$, where we call the parameter $`\beta `$ the inverse power index of the quintessence. Its minimum, i.e., the vacuum, has zero energy; this tallies with the thought that the true cosmological constant should be zero due to some unknown profound reason. Zlatev et al. found that the quintessence has a nice property, the tracking behavior: potential initial values spanning almost 100 orders of magnitude converge to a common evolving final state, i.e., the tracking solution<sup></sup>. This scheme can well solve the coincidence problem. The idea of the quintessence has received much attention; many papers have studied the quintessence further<sup></sup> and developed many new ideas<sup></sup>.

The inverse power index $`\beta `$ is an important parameter for the quintessence: the quintessence will not be able to come into the tracking situation up to the present time if $`\beta `$ is too small, for example $`\beta <5`$ as shown by Ref.. An important problem is whether the present observations, more concretely the cosmic deceleration parameter, have placed some restriction on this inverse power index. In this paper we shall study this interesting problem in detail. We shall first find the equations-of-state of the quintessence in the various evolving stages of the universe, and then express the deceleration parameter in terms of these equations-of-state. The observed value of the deceleration parameter will constrain the inverse power index of the quintessence to $`\beta \lesssim 2`$. We shall also give the future evolution of the quintessence energy density, the cosmic scale factor and the equation-of-state of the quintessence.

## 2 Equation-of-states of quintessence

The evolution of the quintessence in the radiation dominated or the matter dominated epochs depends on its equation-of-state $`w_q=(\beta w_B-2)/(\beta +2)`$, where $`w_B`$ is the equation-of-state of the background. We cite this formula directly; it was obtained in Ref.. The best tracking behavior requires that the evolution velocity of the quintessence in the radiation dominated epoch ($`w_r=1/3`$) be equal to (or larger than) that of the matter ($`w_m=0`$), i.e., $`w_q^{(r)}=(\beta /3-2)/(\beta +2)=0`$, from which we obtain $`\beta =6`$. The equation-of-state of the quintessence in the matter dominated epoch is $`w_q^{(m)}=-(\beta /2+1)^{-1}`$, whose evolution is slower than that of the matter.
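For concreteness, this last statement can be checked in one line: a component with constant equation-of-state scales as $`\rho \propto a^{-3(1+w)}`$, so the tracker value $`w_q^{(m)}=-2/(\beta +2)`$ gives

$$\rho _q\propto a^{-3(1+w_q^{(m)})}=a^{-3\beta /(\beta +2)},$$

which for any finite $`\beta `$ dilutes more slowly than the matter density $`\rho _m\propto a^{-3}`$.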
As the universe expands, the quintessential potential energy will overweigh that of the matter and the universe becomes quintessence dominated; it is important to know the equation-of-state of the quintessence in the quintessence dominated epoch, which can be obtained by analyzing the equations of motion $`3H\dot{\varphi }=\beta m^{4+\beta }\varphi ^{-\beta -1}`$ and $`3M_p^2H^2=m^{4+\beta }\varphi ^{-\beta }`$, where $`M_p\equiv (8\pi G)^{-1/2}=2.4\times 10^{18}`$GeV is the Planck energy. Note that in this stage the field rolls slowly and the potential energy dominates; this allows us to take the above approximation. We suppose that the solution is a simple power law in time, $`\varphi \propto t^\alpha `$. Comparing the powers of the time $`t`$ on the two sides of the equations, we get $`\alpha =2/(\beta +4)`$. This power value further confirms the conditions of slow rolling, $`\ddot{\varphi }\ll 3H\dot{\varphi }`$, and potential domination, $`\dot{\varphi }^2/2\ll V`$, so the approximation is reasonable. This result shows that the quintessence field value increases slowly as the age of the universe increases. We then obtain the evolution of the quintessence energy density $`\rho _q`$ with the cosmic scale factor $`R`$,

$$\rho _q=\rho _{q0}\{1+\frac{6\tau }{\beta +4}\mathrm{ln}\frac{R}{R_0}\}^{-\beta /2},$$ (2)

and the expansion of the cosmic scale factor with time,

$$R=R_0\mathrm{exp}\{\frac{\beta +4}{6\tau }[(\frac{t}{t_0})^{4/(\beta +4)}-1]\},$$ (3)

where the constant $`\tau ^{-1}=\frac{3}{2}H_0t_0\simeq 1`$ due to matter domination<sup></sup>, $`H_0`$ is the Hubble constant at the beginning of quintessence domination and $`t_0`$ is the age of the universe at the same time. Obviously the decrease of the quintessence energy density with the increasing cosmic scale factor is very slow; its limiting behavior is similar to that of a cosmological constant. Eqs. (2-3) determine the future destiny of our universe. We can obtain the equation-of-state of the quintessence as a function of the cosmic scale factor,

$$w_q(R)\equiv -1-\frac{d\mathrm{ln}\rho }{3d\mathrm{ln}R}=-[(4+6\mathrm{ln}\frac{R}{R_0})^{-1}\beta +1]^{-1}.$$ (4)

The equation-of-state of the quintessence at the beginning of quintessence domination is thus $`w_q^{(q)}=w_q(R_0)=-(\beta /4+1)^{-1}`$. The Type Ia supernovas measured by Perlmutter et al. have average redshift $`z\simeq 0.4`$; in that epoch the universe undergoes the transition from matter domination to quintessence domination. Watching carefully the variation from the equation-of-state of the quintessence in the matter dominated epoch, $`w_q^{(m)}`$, to that in the quintessence dominated epoch, $`w_q^{(q)}`$, we see that at the intermediate time, when the matter density is nearly equal to the quintessence density, the equation-of-state of the quintessence should be $`w_q^{(e)}=-(\beta /\gamma +1)^{-1}`$; taking $`\gamma =3`$ here may be a reasonable approximation. It is this equation-of-state of the quintessence, $`w_q^{(e)}`$, that should be applied in the formula of the deceleration parameter for the recent observations of the high redshift supernovas.

## 3 Deceleration parameter

Now we consider the deceleration parameter

$$q_0=\frac{1}{2}\underset{i}{\sum }\mathrm{\Omega }_i+\frac{3}{2}\underset{i}{\sum }w_i\mathrm{\Omega }_i,$$ (5)

where $`\mathrm{\Omega }_i\equiv \rho _i/\rho _c`$ are the ratios of the various component densities to the critical density of the universe. We tested the correctness of this formula, even in the case of the existence of the quintessence.
We cite this correct formula directly; it appeared in Ref.. Considering the redshift effect, we have

$$q_0=\frac{1}{2}\mathrm{\Omega }_{m0}(1+z)^3+(\frac{1}{2}-\frac{3}{2}(\beta /\gamma +1)^{-1})\mathrm{\Omega }_{q0}(1+z)^{3-3/(\beta /\gamma +1)}.$$ (6)

Note that the curvature term $`\mathrm{\Omega }_u`$, with $`w_u=-1/3`$, has just been canceled. When $`\beta =0`$ and $`z=0.4`$ we obtain $`q_0\simeq \frac{4}{3}\mathrm{\Omega }_{m0}-\mathrm{\Omega }_{\lambda 0}`$, and Ref. gives the observational result $`0.6q_0=0.8\mathrm{\Omega }_{m0}-0.6\mathrm{\Omega }_{\lambda 0}=-0.20\pm 0.10`$. In the following we shall use the equivalent result, Eq. (1). Using Eqs. (6) and (1) one could discuss what values the density ratios should take in the $`\mathrm{\Omega }_{m0}`$–$`\mathrm{\Omega }_{\lambda 0}`$ plane, like the treatment in Ref.. However, as an approximation we can first take $`\mathrm{\Omega }_{m0}=0.3`$, $`\mathrm{\Omega }_{q0}=0.7`$, and then use the $`q_0`$ formula to constrain the parameter $`\beta `$. We thus obtain an important restriction: the parameter $`\beta `$ must be less than about $`2`$. Here we used the assumption of a flat universe, as predicted by inflation models<sup></sup>.

Let us look at some concrete numbers. For example, if $`\beta =1`$, then $`w_q^{(e)}=-0.75`$ and $`q_0=-0.15`$, which falls outside the lower end of the observed range; if $`\beta =1/2`$, then $`w_q^{(e)}=-0.86`$ and $`q_0=-0.22`$ according to Eq. (6). If we take $`\mathrm{\Omega }_{m0}=0.2`$, $`\mathrm{\Omega }_{q0}=0.8`$, $`z=0.4`$, $`\beta =2`$, then $`w_q^{(e)}=-0.60`$ and $`q_0=-0.20`$; if instead we take $`\beta =3`$ and the same other parameters, then $`w_q^{(e)}=-0.50`$ and $`q_0=-0.06`$. In any case the inverse power index must be less than $`3`$. In fact it seems impossible that the sum of the densities of the cold dark matter and the baryonic matter is as small as $`\mathrm{\Omega }_{m0}=0.2`$, given the matter density estimated from X-ray observations of clusters of galaxies<sup></sup>, $`\mathrm{\Omega }_{m0}=0.35\pm 0.07`$.

If the inverse power index of the quintessence is $`\beta =2`$, the equation-of-state of the quintessence in the radiation dominated era is $`w_q^{(r)}=-1/3`$, and the quintessence energy density, $`\rho _q\propto a^{-2}`$, decreases too slowly. In the early universe, since $`\rho _q/\rho _\gamma \propto (1+z)^{-2}`$ and the redshift is very high, the quintessence energy density must be very low. In this case it is very easy for the quintessence to overshoot and fail to begin tracking even at very late times; therefore the initial conditions cannot be adjusted over a wide range, and then the coincidence problem cannot be solved. In fact Ref. has concluded that the inverse power index must be larger than $`5`$. Thus the quintessence with a low inverse power index loses its attractive tracking property. During the derivation we used some reasonable approximations; we expect that more exact results will not affect our main conclusion.

## 4 Two term potential

Of course we can use a more complicated potential to overcome this difficulty, for example $`V=m_6^{10}\varphi ^{-6}+m_2^6\varphi ^{-2}`$. When the field $`\varphi `$ is small, the first term dominates; it has good tracking behavior and a wide adjustable range of initial conditions. When the field $`\varphi `$ becomes large, the second term dominates, and the quintessence has a suitable equation-of-state $`w_q`$ for the deceleration parameter of the supernovas.
However, since the second term has a lower inverse power index, the energy scale parameter $`m_2`$ has to be rather small, and this may induce a fine-tuning problem. On the other hand, in this scheme we must finely tune the relative ratio of the mass parameters $`m_6`$ and $`m_2`$, so that the transition from domination by the high inverse power term to domination by the low inverse power term happens just before the time of matter-quintessence equality. Let us consider at what redshift this transition should happen. The tracking solution should satisfy the equation<sup></sup>

$$V^{\prime \prime }=\frac{9}{2}\frac{\beta +1}{\beta }(1-w_q^2)H^2,$$ (7)

from which we obtain the relation between the quintessential field value $`\varphi `$ and the quintessential energy fraction $`\mathrm{\Omega }_q`$ in the matter-dominated epoch,

$$\varphi ^2=\frac{2}{3}\beta (\beta +2)^2(\beta +4)^{-1}\mathrm{\Omega }_{q0}(1+z)^{-2}M_p^2.$$ (8)

When $`\beta =6`$, $`\varphi _6\simeq 5\mathrm{\Omega }_{q0}^{1/2}(1+z)^{-1}M_p`$, and when $`\beta =2`$, $`\varphi _2\simeq 2\mathrm{\Omega }_{q0}^{1/2}M_p`$; the transition should happen when $`\varphi _6<0.6\varphi _2`$. Therefore we obtain the redshift $`z>3`$. The earlier this transition from the high to the low inverse power term happens, the narrower the adjustable range of the initial conditions is. Why does nature arrange these two transitions in such an order? The orders of magnitude of the mass parameters are $`m_6\sim 10^5`$GeV and $`m_2\sim 10`$MeV; if the low inverse power term is $`m_1^5\varphi ^{-1}`$, then $`m_1\sim 1`$keV. As a comparison, the cosmological constant $`\mathrm{\Lambda }=m_0^4`$ has $`m_0\sim 10^{-3}`$eV. We see that the fine-tuning problem is relaxed<sup></sup>.

## 5 Conclusion

The important problem is whether the observational data can constrain the models. We have obtained the equations-of-state of the quintessence in the different stages of the universe, especially the equation-of-state $`w_q^{(e)}=-(\beta /3+1)^{-1}`$ at the time of matter-quintessence equality. Using it, we derive the restriction $`\beta \lesssim 2`$ on the inverse power index of the quintessence potential from the deceleration parameter of the high redshift supernovas. The two tasks, tracking and supplying the present quintessential energy, have to be achieved by two separate terms in the quintessence potential. Why would nature take such a refined arrangement? It is hoped that the deceleration parameter, which will become more accurate in the future, combined with other observations, especially the total density ratio $`\mathrm{\Omega }_0=\mathrm{\Omega }_m+\mathrm{\Omega }_q`$ from the position of the first Doppler peak of the CMBR<sup></sup>, will give stricter constraints on the potential parameters of the quintessence. If one does not approve of the non-terseness of the two-term potential of the quintessence, one can explore exponential potentials or other more complicated ones. To search for new ideas to replace the cosmological constant is an interesting and challenging problem.

Acknowledgment: This work is supported by the National Nature Science Foundation of China, No. 19675038 and No. 19777103. The author would like to thank Profs. J.R. Bond, L. Kofman, U.-L. Pen, X.-M. Zhang, Y.-Z. Zhang and X.-H. Meng for useful discussions.

References:

* S.J. Perlmutter et al., Nature 391 (1998) 51.
* S.J. Perlmutter et al., astro-ph/9812133.
* A.G. Riess et al., Astron. J. 116 (1998) 1009.
* B. Ratra and P.J.E. Peebles, Phys. Rev. D 37 (1988) 3406.
* I. Zlatev, L. Wang and P.J. Steinhardt, Phys. Rev. Lett. 82 (1999) 896; P.J. Steinhardt, L. Wang and I. Zlatev, Phys. Rev. D 59 (1999) 123504.
* M.S. Turner and M. White, Phys. Rev. D 56 (1997) 4439.
* A.R. Liddle and R.J. Scherrer, Phys. Rev. D 59 (1999) 023509.
* R.R. Caldwell, R. Dave and P.J. Steinhardt, Phys. Rev. Lett. 80 (1998) 1582.
* P.J.E. Peebles and A. Vilenkin, astro-ph/9810509.
* E.W. Kolb and M.S. Turner, The Early Universe, Addison Wesley, 1990.
* M.S. Turner, astro-ph/9904049; astro-ph/9912211.
* A. Linde, Particle Physics and Inflationary Cosmology, Harwood Academic Publishers, 1990.
* J. Mohr et al., Astrophys. J., in press (1999) (astro-ph/9901281).
* S. Weinberg, Rev. Mod. Phys. 61 (1989) 1.
* M. Kamionkowski and A. Kosowsky, astro-ph/9904108.
* S. Dodelson and L. Knox, astro-ph/9909454.
# The concave X–ray spectrum of the blazar ON 231: the signature of intermediate BL Lac objects

## 1 Introduction

The continuum emission from Active Galactic Nuclei (AGN) is both highly luminous and rapidly variable, especially for the blazar class (BL Lac objects and violently variable quasars). Determining the continuum production mechanism is critical for understanding the central engine in AGNs, a fundamental goal in extragalactic astrophysics. The observed radiation of blazars is dominated by the emission of a jet whose plasma moves relativistically at small angles to the line of sight (Blandford & Rees 1978). Early multiwavelength studies provided the first strong evidence for bulk relativistic motion, later confirmed directly with VLBI observations (Vermeulen & Cohen 1994). However, single epoch spectra cannot constrain the models of variability in relativistic jets (e.g. Königl 1989; Ulrich et al. 1997).

The overall spectral energy distribution (SED) of blazars shows two broad emission peaks: the lower frequency peak is believed to be produced by synchrotron emission, while the higher frequency peak should be due to the inverse Compton process. The location of the synchrotron peak is used to define different classes of blazars: HBL (High frequency peak blazar, peaking at UV or X–ray frequencies) and LBL (Low frequency peak blazar, peaking in the IR or optical bands) (Giommi & Padovani 1994).

Since blazars emit over the entire electromagnetic spectrum, a key to understanding blazar variability is the acquisition of several wide band spectra in different luminosity states during major flaring episodes. Coupling spectral and temporal information greatly constrains the jet physics, since different models predict different variability as a function of wavelength. Important progress in this respect has been achieved recently for some of the brightest and most studied blazars, such as PKS 2155-304 (Chiappetti et al. 1999; Urry et al. 1997), BL Lac (Bloom et al. 1997), 3C 279 (Wehrle et al. 1998), Mkn 501 (Pian et al. 1998) and Mkn 421 (Maraschi et al. 1999). We successfully used the BeppoSAX satellite to perform observations of blazars that were known to be in a high state from observations carried out both in other bands (mainly optical and TeV) and in the X–ray band itself. The good BeppoSAX sensitivity and spectral resolution over a very wide X–ray energy range (0.1–200 keV) are ideal to constrain the existing models for the X–ray emission.

The BL Lac object ON 231 (W Com, B2 1219+28, $`z=0.102`$), which had been observed in the X–ray band by the Einstein IPC in June 1980 with a 1 keV flux of $`1\mu `$Jy (Worrall & Wilkes 1990) and by the ROSAT PSPC in June 1991 with a 1 keV flux of $`0.4\mu `$Jy and energy spectral index $`\alpha =1.2`$ (Lamer et al. 1996, Comastri et al. 1997), had an exceptional optical outburst in April–May 1998, reaching the most luminous state ever recorded, about 40 mJy in the R band. The optical broad band spectrum was strongly variable. In particular, it was very flat at the maximum, with a broad band energy spectral index of 0.52, while before the flare it was found to be 1.4; the peak frequency moved from the near IR to beyond the B band. During the flare a sudden and large increase of the linear polarisation, from about 3% to 10%, was also observed, and it remained high at least to the end of May, indicating a non-thermal origin of the burst (Massaro et al. 1999). An optical spectrum of ON 231 was obtained by Weistrop et al.
(1985) and it shows two weak emission features identified with H<sub>α</sub> (EW 1 or 2 Å) and O III, from which a redshift estimate of z=0.102 was derived. No more recent spectra, in particular during the flare, have been published.

Following the optical flare, we triggered our X–ray observation and ON 231 was observed by BeppoSAX in May, with a second pointing performed a month later, in June. We measured for the first time the hard X–ray spectrum of this source above 3 keV and in different brightness states. On these occasions simultaneous optical observations were also performed. Unfortunately the source was already close to the Sun and it was impossible to monitor it long enough to search for correlated variability at optical and X–ray frequencies. In this paper we present and discuss the results of these $`Beppo`$SAX observations together with simultaneous optical data.

## 2 X-ray Observations

### 2.1 Observations and Data Reduction

The BeppoSAX satellite is the result of an international collaboration between the Italian Space Agency (ASI), the Netherlands Agency for Aerospace Programs (NIVR) and the Space Science Department of the European Space Agency (SSD-ESA). It carries on board four Narrow Field Instruments (NFI) pointing in the same direction and covering a very large energy range, from 0.1 to 300 keV (Boella et al. 1997a). Two of the four instruments have imaging capability: the Low Energy Concentrator Spectrometer (LECS), sensitive in the range 0.1–10 keV (Parmar et al. 1997), and the three Medium Energy Concentrator Spectrometers (MECS), sensitive in the range 1.3–10 keV (Boella et al. 1997b). The LECS and the three MECS detectors are all Gas Scintillation Proportional Counters and are at the focus of four identical grazing incidence X–ray telescopes. The other two are passively collimated instruments: the High Pressure Proportional Counter (HPGSPC), sensitive in the range 4–120 keV (Manzo et al. 1997), and the Phoswich Detector System (PDS), sensitive in the range 13–300 keV (Frontera et al. 1997). For a full description of the BeppoSAX mission see Boella et al. (1997a).

The log of the ON 231 observations is given in Table 1, together with the exposures and the mean count rates in the various instruments. The data analysis for the LECS and MECS instruments was based on the linearized, cleaned event files obtained from the online archive (Giommi & Fiore 1998). Light curves and spectra were accumulated with the FTOOLS package (v. 4.0), using extraction regions of 8.5 and 4 arcmin radius for the LECS and MECS, respectively. At low energies the LECS has a broader Point Spread Function (PSF) than the MECS, while above 2 keV the PSFs are similar. The adopted regions provide more than 90% of the source counts at all energies, both for the LECS and the MECS. The LECS and MECS background is low and rather stable, but not uniformly distributed across the detectors. For this reason, it is better to evaluate the background from blank fields rather than in concentric rings around the source region. Thus, after having checked that the background was not varying during the whole observation by analyzing a light curve extracted from a source–free region, we used for the spectral analysis the background files accumulated from long blank field exposures and available from the SDC public ftp site (see Fiore et al. 1999, Parmar et al. 1999).
The PDS was operated in the customary collimator rocking mode, in which half of the collimator points at the source and half at the background, switching every 96 s. The PDS data were analysed using the XAS software (Chiappetti & Dal Fiume 1997) and the data reduction was performed according to the procedure described in Chiappetti et al. (1999), inclusive of spike filtering. In the case of the May data the attitude control software using a single gyro was in an unfavorable condition, resulting in significant gaps in the reconstructed attitude data, up to 20 min in each orbit, while the satellite pointed at the source (these intervals are rejected by the standard event file creation software). However, since the MECS is kept on during such intervals, one can use XAS to accumulate images from telemetry and verify that, despite a little blur, the satellite is not drifting significantly. Considering also the sizeable flat-top response of the PDS collimator, it is therefore justified to use such intervals as well in the PDS spectra accumulation, e.g. using a plain limit on the Earth elevation angle above 5 degrees (this explains why the PDS exposure times in Table 1 are longer). No such problem was present in the June data (attitude gaps were small and confined to Earth eclipses). The source was not detected by the HPGSPC detector, thus we will not discuss these data.

### 2.2 Spectral Analysis

For the spectral analysis, the LECS data have been considered only in the range 0.1–4 keV, due to still unsolved calibration problems at higher energies (Fiore et al. 1999). To fit the LECS, MECS and PDS spectra together, one has to introduce a constant rescaling factor to account for uncertainties in the inter–calibration of the instruments. The acceptable values for these constants are in the range 0.7–1.0 for the LECS and in the range 0.77–0.95 for the PDS, with respect to the MECS (Fiore et al. 1999). The spectral analysis was performed with the XSPEC 10.0 package.

As expected, during the May observation ON 231 was in a high state with respect to previous X–ray observations (Comastri et al. 1997) and the source was detected also with the PDS, up to 100 keV. In the fitting procedure we first considered only the LECS and MECS data. We fitted a single power law model plus absorption, with the column density fixed at the Galactic value $`N_\mathrm{H}=2\times 10^{20}`$ cm<sup>-2</sup>. This model does not give a good fit to the data, yielding a reduced $`\chi _r^2=1.9`$ (60 degrees of freedom). The fit did not improve by letting $`N_\mathrm{H}`$ free to vary. In particular, the data at energies above $`4`$ keV could not be fitted by the same power law fitting the lower energy data (see residuals in Fig. 1, lower panel). Instead, a broken power law (BPL) model provides a good fit to the data, with $`\chi _r^2=0.96`$ (58 degrees of freedom); the best fit values and errors at 90% (for three parameters of interest) are given in Table 2. Notice that the spectrum hardens significantly at higher energies.

We then considered also the PDS data, which approximately lie on the extrapolation of the BPL that fits the LECS and MECS data. By including also the PDS data in the fit procedure, the second photon spectral index ($`\mathrm{\Gamma }_2`$) is somewhat flatter than before, the break is at slightly higher energy and the errors on the second spectral index are smaller (see Table 2). The flux at 1 keV is $`1.7\mu `$Jy. In Fig. 1, top panel, we report the LECS-MECS-PDS spectra together with the BPL best fit.
In this fit the LECS intercalibration constant factor has an acceptable value of 0.80, while for the PDS we kept the constant fixed at 0.9. In the same figure, lower panel, we report the best–fit with a single power law for comparison.

During the June observation the source was detected in a state fainter than in May, although still brighter than in earlier X–ray observations. Again a BPL model was necessary to fit the LECS–MECS data (a fit with a simple power law gave $`\chi _r^2=3.3`$ for 47 dof). The first spectral index ($`\mathrm{\Gamma }_1`$) is very similar to that of the previous observation, while the break is at lower energies. This suggests that the break moves toward lower energies when the source is weaker. The second spectral index ($`\mathrm{\Gamma }_2`$) seems steeper, but at the 90% confidence level it is consistent with the value found for the May observation (due to the poorer statistics at high energies, the second spectral index $`\mathrm{\Gamma }_2`$ is not very well constrained). The LECS constant is 0.85. The PDS detection in June is less significant and does not add much information. Again we kept the PDS rescaling constant factor fixed to 0.9 in the best-fit procedure. The flux at 1 keV is $`1.2\mu `$Jy.

Given the concave shape of our two spectra, we then fitted the sum of two power law models, which is more physical than a concave broken power law. The results are also reported in Table 2, together with the ratio between the two power law normalizations. Formally the fit is as good as the BPL ones, with the first spectral index steeper and the second one flatter. The two power laws cross at about the values of the breaks found with the BPL model.

### 2.3 Time Variability

As apparent from the best-fit fluxes reported in Table 2, during the second observation the source was weaker. We checked whether the amount of variability was different at energies below or above the break. We considered the LECS and MECS detectors, which have much higher statistics than the PDS, and calculated the count rates in four different energy bands, 0.1–2.0, 0.1–4.0, 2.0–10 and 5.0–10 keV, for the two observations. The values for the May observation are: $`0.077\pm 0.002`$, $`0.086\pm 0.002`$, $`0.050\pm 0.002`$, $`0.014\pm 0.001`$. For the June observation we have: $`0.050\pm 0.003`$, $`0.059\pm 0.002`$, $`0.040\pm 0.001`$, $`0.013\pm 0.001`$. Thus, on a monthly time scale, the amount of variability in the 0.1–10 keV energy band seems to be greater at softer energies.

In the May observation rapid X–ray variability of about a factor of three in 4–5 hours was clearly detected, but only at energies below 3–4 keV, confirming the higher amount of variability at energies below the break. This can be seen from the light curves of Fig. 2: in the 0.1–4 keV band the flux increased by a factor $`\sim `$3 just after the start of the observation and reached the maximum level at about 30 h. This level was maintained for about 2–3 hours and then the count rate declined to $`\sim `$0.06 cts s<sup>-1</sup>, comparable to the level measured at the beginning of the observation. Above 4 keV this variability, if at all present, is much less pronounced. Note that our $`3\sigma `$ limit in the 4–10 keV band corresponds to a variability of 40%. Thus, at high energies, the source is much less variable than below 4 keV.
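For concreteness, the mean count rates quoted above give the month-scale change directly:

$$\frac{0.050}{0.077}\simeq 0.65\text{ (0.1–2 keV, June/May)},\frac{0.013}{0.014}\simeq 0.93\text{ (5–10 keV, June/May)},$$

i.e. a drop of about 35% in the softest band against only about 7% (within the errors) in the hardest one.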
We extracted LECS and MECS spectra during the flare (from the third to the ninth points of the X–ray light curve shown in Fig. 2) and outside the flare and performed the spectral analysis. Again, in both cases a power law did not fit the data and a BPL model was necessary. The first spectral index is steeper during the flare ($`\mathrm{\Gamma }_1=2.7\pm 0.06`$ vs $`2.4\pm 0.15`$ outside the flare). The break seems to move to higher energies (best fit values are 4.4 and 3.5 keV, respectively), although the two values are consistent within the 90% confidence errors for three parameters of interest. The second spectral index does not change at all. Thus the fast variability detected during the May observation also suggests that the break moves to higher energies when the source flux increases. In the June observation we did not detect significant variability, either at high or at low energies.

## 3 Optical Observations

Optical photometry of ON 231 during the BeppoSAX pointings was performed in the standard Johnson B, V and Cousins R, I bandpasses with telescopes in Italy operated by the Perugia and Torino Observatories and by the Istituto Astronomico of the University “La Sapienza” in Roma. The main results of the optical observations during the great 1998 outburst of ON 231 were already presented by Massaro et al. (1999). A detailed description of the instrumentation and data reduction, together with a complete data list up to 1998 June 9, can be retrieved from the article by Tosti et al. (1999). The mean V, R<sub>c</sub> and I<sub>c</sub> magnitudes are given in Table 1. From these data we also evaluated the optical (energy) spectral index (assuming $`A_V=0.19`$), which was found to be $`1.24\pm 0.08`$ for all observations. In Fig. 3 we show the optical light curve of ON 231 in the R band from the end of April to about the end of June with the data obtained at the three observing sites. The times of the two X–ray observations are marked. From this light curve we can see that the R magnitude in the period between the two $`Beppo`$SAX pointings was in the interval 13.0–13.5: the source remained quite bright, but at a mean level fainter than that of the great outburst at the end of April. In May and June the angular distance of ON 231 from the Sun was small and we were able to perform our optical observations only for a few hours: in particular, the observations of May were only at the beginning and at the end of the $`Beppo`$SAX pointing and thus missed the “flare” observed in the soft X–rays (see Fig. 2). In June, the weather conditions allowed us to observe ON 231 only at the beginning of the $`Beppo`$SAX pointing.

## 4 Discussion

### 4.1 SED

In Fig. 4 we show the spectral energy distribution (SED) of ON 231, including our simultaneous X–ray and optical data. The SED clearly shows that in the X–ray band we have detected, simultaneously and with the same instruments, both the synchrotron and the inverse Compton emission in the spectrum of a blazar. Simultaneous detection of both components in the X–ray spectrum of a blazar has already been reported by Kubo et al. (1998) and by Giommi et al. (1999) for S5 0716+714, although not as clearly as in ON 231. In the same figure we plot three other sets of quasi–simultaneous observations: i) the data of the 1996 multiwavelength campaign as reported by Maisack et al.
(1997); ii) the optical and $`\gamma `$–ray observations during 1995, when the source reached the brightest state in the EGRET band; iii) the infrared, optical, X–ray and $`\gamma `$–ray data during 1991–1992, when the source was first detected in the $`\gamma `$–ray band by EGRET. The last two sets of data are not strictly simultaneous (the $`\gamma `$–ray fluxes detected in 1991–1992 refer to the sum of various pointings), but can illustrate the different states of the source. The source was also detected by IRAS (Impey & Neugebauer 1988), and the corresponding IR fluxes are reported in Fig. 4, even if they are not simultaneous with any other observations. Note that there are some inconsistencies between the data in 1991–1992 as reported in Table 1 of von Montigny et al. (1995) and the fluxes reported in Fig. 5 of the same paper, which are consistent with the flux reported by Sreekumar et al. (1996). We have reported the data as shown in Fig. 5 of von Montigny et al. (1995). As can be seen, the 1991–1992 $`\gamma `$–ray spectrum is extremely hard ($`\alpha =0.4\pm 0.4`$). We could not find the spectral index for the 1995 $`\gamma `$–ray flux, but the shape of the spectrum combining all observations together (from 1991 to 1995) is $`\alpha =0.73\pm 0.18`$ (Hartman et al. 1999), indicating that the combined spectrum is steeper than the 1991–1992 one (and that the 1995 spectrum is steeper still). Also shown are the upper limits in the TeV band, as derived from WHIPPLE and HEGRA observations (Maisack et al. 1997) during Jan–Feb 1996. Like other blazars, ON 231 has a SED characterized by two broad components, the first peaking at IR–optical frequencies and the second in the $`\gamma `$–ray band. The first is believed to be synchrotron emission by a relativistic jet, while the second component has been interpreted as synchrotron self–Compton scattering, possibly including some contribution from seed photons produced externally to the jet (see e.g. Ghisellini & Madau 1996 and references therein), or as synchrotron emission by ultra–relativistic electron–positron pairs generated by relativistic protons (the proton blazar model, Mannheim 1993; see Maisack et al. 1997 for the application of this model to ON 231). The only other spectral information in the X–ray band is from ROSAT: Comastri et al. (1997), using a single power law model, found an energy spectral index $`\alpha _x=1.2\pm 0.05`$, in agreement with Lamer et al. (1996). An earlier determination of the spectral shape using Einstein data resulted in an unconstrained spectral index (Worrall & Wilkes 1990). The shape of the X–ray spectrum at the time of the ROSAT observation seems different from the one determined by $`Beppo`$SAX. This could be due both to the narrower spectral coverage of ROSAT and to the fact that the source was in a weaker state. The 0.1–2.5 keV ROSAT spectrum, which is flatter than that measured by us in the same energy band (see index $`\mathrm{\Gamma }_1`$ in Table 2), could be due to the contribution, in this energy band, of the flat component that $`Beppo`$SAX sees above 4 keV in May and above 2.5 keV in June. If the break moves to lower energies when the source is weaker, as suggested by the two $`Beppo`$SAX observations, then during the ROSAT observation, when the source was more than a factor of 4 weaker, the break should be inside the ROSAT band, or even at lower energies.
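This expectation can be made quantitative with a deliberately crude extrapolation: assuming, purely for illustration (no such scaling is derived in this paper), that the break energy scales as a power of the 1 keV flux, the two BeppoSAX epochs fix the exponent and the ROSAT state follows:

```python
import numpy as np

# two-point power-law toy: E_break ∝ flux^a, anchored on the two epochs
flux = np.array([1.7, 1.2])      # 1 keV flux, uJy (May, June)
e_b  = np.array([4.0, 2.5])      # approximate break energies, keV

a = np.log(e_b[0] / e_b[1]) / np.log(flux[0] / flux[1])   # ~1.35

flux_rosat = flux[0] / 4.0       # "more than a factor of 4 weaker"
e_b_rosat = e_b[0] * (flux_rosat / flux[0]) ** a
print(a, e_b_rosat)              # break extrapolates to ~0.6 keV,
                                 # i.e. well inside the ROSAT band
```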
In this case the spectrum detected by ROSAT would be either a combination of synchrotron and Compton emission or purely due to Compton scattering, explaining the relative flatness of the ROSAT spectrum. A steepening of the spectrum going to softer energies in the SED is also required by the quasi-simultaneous IR–optical data (Massaro et al. 1994).

### 4.2 Limits on magnetic field and particle energies

The observed “flare” in the soft X–ray band is symmetrical (equal rise and decay timescales), suggesting that the variability timescale is determined by the light crossing time of the emitting region, $`R/c`$ (see Chiaberge & Ghisellini 1999). This in turn implies that the cooling time is shorter than $`R/c`$, allowing us to put limits on the value of the magnetic field and on the energy of the electrons producing the variable flux at the observed frequency $`\nu _x`$. Using $`t_{\mathrm{var}}=5`$ hours and $`\nu _x=3\times 10^{16}`$ Hz, we derive $`B>0.4\delta ^{-1/3}`$ Gauss and $`\gamma _x<1.5\times 10^5\delta ^{-1/3}`$. Here $`\delta =[\mathrm{\Gamma }-\sqrt{\mathrm{\Gamma }^2-1}\mathrm{cos}\theta ]^{-1}`$ is the Doppler beaming factor, where $`\theta `$ is the viewing angle. Since the peak of the synchrotron emission must be at a frequency $`\nu _\mathrm{s}<3\times 10^{14}`$ Hz, the corresponding Lorentz factor of the electrons emitting at the peak is $`\gamma _\mathrm{s}<1.5\times 10^4\delta ^{-1/3}`$. This implies that the peak of the self Compton flux is at $`h\nu _\mathrm{c}=(4/3)\gamma _\mathrm{s}^2h\nu _\mathrm{s}<370\delta ^{-2/3}`$ MeV. Tighter constraints can be obtained assuming that the optical and X–ray emission are cospatial, and that the cooling time of the optical emitting electrons is also shorter than the light crossing time. Massaro et al. (1999) report intranight variations in the I and B bands of 0.2 magnitudes in 1 hour, with an approximately symmetric time profile. Assuming again $`t_{\mathrm{var}}=5`$ hours, we obtain $`B>1.5\delta ^{-1/3}`$ Gauss and $`\gamma _\mathrm{s}<9.7\times 10^3\delta ^{-1/3}`$.

### 4.3 Homogeneous SSC model

The far IR to $`\gamma `$–ray emission from blazars can be explained by simple homogeneous, one–zone synchrotron inverse Compton models, with the emitting region moving relativistically towards the observer. The inverse Compton emission may have two components: the first is produced by relativistic electrons scattering off locally produced synchrotron photons (SSC), while the second corresponds to the scattering of photons produced in other regions (EC), either elsewhere in the jet, or by the broad emission line clouds, or by some scattering plasma within these clouds. Ghisellini et al. (1998), analyzing all blazars of known redshift detected by EGRET with spectral information in the $`\gamma `$–ray band, found that the contribution of the EC component decreases as the total luminosity decreases, with the lowest-luminosity BL Lac objects requiring a negligible amount of EC. We applied a pure SSC model to the SED of ON 231, as shown in Fig. 4. In particular, we have tried to explain the different SEDs by changing the minimum number of parameters. The applied models assume that relativistic electrons with a power law energy distribution $`\gamma ^{-s}`$, between $`\gamma _{\mathrm{min}}`$ and $`\gamma _{\mathrm{max}}`$, are continuously injected in a spherical source of radius $`R`$ embedded in a tangled magnetic field $`B`$. The total luminosity injected in the form of relativistic electrons is $`L_{\mathrm{inj}}^{\prime }`$, calculated in the comoving frame.
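As a small numerical aid (this is not the radiative-transfer code actually used for Fig. 4), the normalization of such an injection spectrum follows directly from $`L_{\mathrm{inj}}^{\prime }`$; in the sketch below the quoted 1998 values of $`L_{\mathrm{inj}}^{\prime }`$ and $`\gamma _{\mathrm{min}}`$ are used, while the slope `s` and `g_max` are placeholders, since the actual Table 3 entries are not reproduced in this text:

```python
import numpy as np

ME_C2 = 8.187e-7   # electron rest energy, erg

def injection_norm(l_inj, s, g_min, g_max):
    """Normalization q0 of Q(gamma) = q0 * gamma**(-s) [electrons/s], such
    that the injected comoving power equals l_inj = int Q(g) g me c^2 dg."""
    g = np.logspace(np.log10(g_min), np.log10(g_max), 4096)
    return l_inj / (np.trapz(g ** (1.0 - s), g) * ME_C2)

# L'_inj and gamma_min as quoted in the text; s and g_max are illustrative
q0 = injection_norm(l_inj=2.2e42, s=3.5, g_min=3000.0, g_max=3.0e5)
```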
We also assumed that the source is observed at a viewing angle $`1/\mathrm{\Gamma }`$, so that the Doppler factor $`\delta =\mathrm{\Gamma }`$. The steady–state particle distribution is the result of the injection and cooling processes, and we also account for possible escape of the particles, which may be relevant for ON 231. It is assumed that particles escape at some velocity $`v_{\mathrm{esc}}=c\beta _{\mathrm{esc}}`$, independent of their energy. Further details about this model can be found in Ghisellini et al. (1998). The input parameters for the models shown in Fig. 4 are given in Table 3. The size of the source and the Doppler factor have been kept fixed; the slope $`s`$ of the injected electron distribution and $`\gamma _{\mathrm{max}}`$ are similar. The magnetic field does not change, while $`\gamma _{min}`$ changes by $`\sim 30\%`$, from 2300 to 3000. The largest change concerns the injected power, increasing from $`L_{\mathrm{inj}}^{\prime }=3.3\times 10^{41}`$ erg s<sup>-1</sup> (1991 model) to $`L_{\mathrm{inj}}^{\prime }=2.2\times 10^{42}`$ erg s<sup>-1</sup> (1998 model), an increase of a factor $`\sim 7`$. Within the SSC model, it is not possible to account for the very hard $`\gamma `$–ray spectrum of 1991–1992. The quasi–simultaneous optical data require the peak of the optical emission to be at frequencies between the near IR and the optical, while the hard EGRET spectrum indicates that the Compton peak is at energies greater than a few GeV. This translates into a lower limit on the energy of the electrons emitting at the peaks of $`\gamma _{\mathrm{peak}}>6\times 10^4`$. These electrons emit at $`\nu _{\mathrm{s},\mathrm{peak}}\simeq 2\times 10^{14}`$ Hz if the magnetic field $`B<1.4\times 10^{-2}\delta ^{-1}`$ Gauss. This is not consistent with the limits derived in the previous section. In addition, a small magnetic field would imply a very large radiation to magnetic energy density ratio, $`U_\mathrm{r}/U_\mathrm{B}`$, and hence an excessive self Compton flux, unless the Doppler factor is exceedingly large ($`\delta >100`$, see eqs. 2.5 and 2.6 in Ghisellini et al. 1996). We then conclude that the hard 1991–1992 $`\gamma `$–ray spectrum either is due to another source or, if confirmed to be associated with ON 231, is produced by another component (i.e. inverse Compton scattering off photons produced externally to the jet). Note that BL Lacertae showed a similar behavior (hardening of the $`\gamma `$–ray spectrum) during the flare of summer 1997 (Bloom et al. 1997). This was interpreted (Sambruna et al. 1999; Madejski et al. 1999; Bottcher & Bloom 1999) as due to an increased contribution of emission line photons to the inverse Compton scattering process. Something similar could have happened also to ON 231 (during the 1991–1992 EGRET observation), but the lack of spectroscopic observations precludes any further conclusions. What is interesting, and peculiar, in ON 231 is the sharp flattening, above 2–4 keV, of the X–ray spectrum. A population of electrons which cools only radiatively cannot account for spectra as flat as observed in ON 231: the flattest predicted spectrum in the case of radiative cooling has an energy spectral index $`\alpha =0.5`$ (see e.g. Ghisellini et al. 1998). We therefore must invoke an additional mechanism. One likely possibility is escape. In this case high energy electrons would cool before escaping, while low energy electrons would preferentially escape before cooling radiatively.
The corresponding steady–state particle distribution would then show a flattening towards the low energy part, accounting for the very flat inverse Compton component emerging above 4 keV. The model we have applied to ON 231 indicates that $`\beta _{\mathrm{esc}}\simeq 0.3`$$`0.4`$, corresponding to an escape time of the order of 2–3 light crossing times $`R/c`$. The variability predicted by the model can account for the observed variability in the soft X–ray band and for the much less variable hard X–ray flux, even if the bolometric luminosity does not change. This can be achieved by changing (even by a small amount) the slope $`s`$ of the injected electron distribution, without changing the total injected power. This will change the synchrotron spectrum above the synchrotron peak (characterized by $`\alpha \simeq s/2`$), but not the flux below, nor the self Compton flux below the Compton peak, which is produced by low energy electrons scattering low frequency synchrotron photons. The $`Beppo`$SAX data of ON 231 show that this source can be considered a BL Lac object intermediate between the HBL and LBL sources. The concave shape of the X–ray spectrum in the 0.1–10 keV band can be used to define the class of intermediate blazars, which should be characterized, in this band, by the presence of the steep tail of the synchrotron radiation and by the emergence of the hard Compton component.

###### Acknowledgements.

This research was financially supported by the Italian Space Agency. The Roma and Torino groups acknowledge financial support from the Italian Ministry for University and Research (MURST) under the grants Cofin98-02-03 and Cofin98-02-32. We thank the BeppoSAX Science Data Center (SDC) for their support in the data analysis.
# Ultra-High Energy Cosmic Rays from Young Neutron Star Winds

## Abstract

The long-held notion that the highest-energy cosmic rays are of distant extragalactic origin is challenged by observations that events above $`10^{20}`$ eV do not exhibit the expected high-energy cutoff from photopion production off the cosmic microwave background. We suggest that these unexpected ultra-high-energy events are due to iron nuclei accelerated from young strongly magnetized neutron stars through relativistic MHD winds. We find that neutron stars whose initial spin periods are shorter than $`10`$ ms and whose surface magnetic fields are in the $`10^{12}`$$`10^{14}`$ G range can accelerate iron cosmic rays to greater than $`10^{20}`$ eV. These ions can pass through the remnant of the supernova explosion that produced the neutron star without suffering significant spallation reactions or energy loss. For plausible models of the Galactic magnetic field, the trajectories of the iron ions curve sufficiently to be consistent with the observed, largely isotropic arrival directions of the highest energy events.

Fermilab-Pub-99-348-A

Subject headings: acceleration of particles, magnetic fields, MHD, plasmas

The detection of cosmic rays with energies above $`10^{20}`$ eV has triggered considerable interest in the origin and nature of these particles. Hundreds of events with energies above $`10^{19}`$ eV and about 20 events above $`10^{20}`$ eV have now been observed by a number of experiments such as HiRes (Kieda et al. 1999), AGASA (Takeda et al. 1998, 1999), Fly’s Eye (Bird et al. 1993, 1994, 1995), Haverah Park (Lawrence, Reid & Watson 1991), Yakutsk (1996), and Volcano Ranch (1963). Most unexpected is the large flux of events observed above $`5\times 10^{19}`$ eV (Takeda et al. 1998) with no sign of the Greisen-Zatsepin-Kuzmin (GZK) cutoff (Greisen 1966; Zatsepin & Kuzmin 1966). The cutoff should be present if these ultra-high energy particles are protons produced by sources distributed homogeneously throughout the universe. Cosmic ray protons of energy above $`5\times 10^{19}`$ eV lose their energy to photopion production off the cosmic microwave background and cannot originate farther than about $`50`$ Mpc away from us. Alternatively, if ultra-high-energy cosmic rays (UHECRs) are protons from sources closer than $`50`$ Mpc, the arrival directions of the events should point toward their sources. The present data show a mostly isotropic distribution and no sign of the local distribution of galaxies or of the Galactic disk above $`10^{19}`$ eV (Takeda et al. 1999). In sum, the origin of these particles, with energies tens of millions of times greater than any produced in terrestrial particle accelerators, remains a mystery. In addition to the difficulties with locating plausible sources of UHECRs in our nearby universe, there are great difficulties in finding plausible accelerators for such extremely energetic particles. Acceleration of cosmic rays in astrophysical plasmas occurs when the energy of large-scale macroscopic motion, such as shocks and turbulent flows, is transferred to individual particles. The maximum possible energy, $`E_{\mathrm{max}}`$, is estimated by requiring that the gyro-radius of the particle be contained in the acceleration region (Hillas 1984) and that the acceleration time be smaller than the time for energy losses.
The former condition relates $`E_{\mathrm{max}}`$ to the strength of the magnetic field, $`B`$, and the size of the acceleration region, $`L`$, such that $`E_{\mathrm{max}}\sim ZeBL`$, where $`Ze`$ is the charge of the particle. For instance, for $`E_{\mathrm{max}}\sim 10^{20}`$ eV and $`Z\sim 1`$, the known astrophysical sources with reasonable $`BL`$ products are neutron stars ($`B\sim 10^{12}`$ G and $`L\sim 10`$ km), active galactic nuclei ($`B\sim 10^4`$ G and $`L\sim 10`$ AU), radio galaxies ($`B\sim 10^5`$ G and $`L\sim 10`$ kpc), and clusters of galaxies ($`B\sim 10^6`$ G and $`L\sim 100`$ kpc) (Hillas 1984; Berezinsky et al. 1990). However, energy losses usually prevent acceleration to $`E_{\mathrm{max}}`$, and no effective mechanism for UHECR acceleration has been shown for any of these objects (Blandford 1999; Bhattacharjee & Sigl 1998; Venkatesan, Miller & Olinto 1997). Here we show that the early evolution of young magnetized neutron stars in our Galaxy may be responsible for the flux of cosmic rays beyond the GZK cutoff. A preliminary study of this idea can be found in Olinto, Epstein & Blasi (1999). Neutron stars have been previously suggested as possible sources of UHECRs, from an early attempt by Gunn & Ostriker (1969) to the more recent proposal of Bell (1992). Thus far, these attempts have failed either to reach the highest energies or to reproduce the spectrum or the apparent isotropy of the arrival directions of UHECRs. In the following we describe our alternative. We propose that young neutron stars may accelerate heavy nuclei to the highest observed energies by transferring their rotational energy to particle kinetic energy via a relativistic MHD wind. Some neutron stars may begin their life rotating rapidly ($`\mathrm{\Omega }\sim 3000\mathrm{rad}\mathrm{s}^{-1}`$) and with large surface magnetic fields ($`B_S\sim 10^{13}`$ G). The dipole component of the field decreases as the cube of the distance $`r`$ from the star, $`B(r)=B_S(R_S/r)^3`$, where the radius of the star is $`R_S\simeq 10^6`$ cm. As the distance from the star increases, the dipole field structure cannot be causally maintained, and beyond the light cylinder radius, $`R_{lc}=c/\mathrm{\Omega }`$, the field is mostly azimuthal, with field lines spiraling outwards (Michel 1991). For young, rapidly rotating neutron stars, the light cylinder is about ten times the stellar radius, $`R_{lc}=10^7\mathrm{\Omega }_{3k}^{-1}`$ cm, where $`\mathrm{\Omega }_{3k}\equiv \mathrm{\Omega }/(3000\mathrm{rad}\mathrm{s}^{-1})`$. The surface of young neutron stars is composed of iron peak elements formed during the supernova event. Iron ions can be stripped off the hot surface of a young neutron star by strong electric fields and be present throughout much of the magnetosphere (Ruderman & Sutherland 1975; Arons & Scharlemann 1979). Inside the light cylinder, the magnetosphere corotates with the star and the iron density has the Goldreich-Julian value: $`n_{GJ}(r)=B(r)\mathrm{\Omega }/(4\pi Zec)`$, where $`c`$ is the speed of light (Goldreich & Julian 1969). In this estimate, and in what follows, we do not include the trigonometric factors related to the relative orientation of the magnetic and rotational axes. The exact fate of the plasma outside the light cylinder is still a subject of debate (Gallant & Arons 1994; Begelman & Li 1994; Chiueh, Li, & Begelman 1998; Melatos & Melrose 1996).
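The basic light-cylinder quantities used below follow directly from these relations; a minimal numerical sketch in cgs units, with the coefficients as quoted in the text:

```python
import numpy as np

C = 2.998e10        # speed of light, cm/s
E_ESU = 4.803e-10   # elementary charge, esu

def light_cylinder_cm(omega):
    """R_lc = c / Omega."""
    return C / omega

def b_dipole(b_s, r, r_s=1.0e6):
    """Dipole field B(r) = B_S (R_S / r)^3."""
    return b_s * (r_s / r) ** 3

def n_gj(b, omega, z=26):
    """Goldreich-Julian number density of ions of charge Ze, in cm^-3."""
    return b * omega / (4.0 * np.pi * z * E_ESU * C)

omega, b_s = 3000.0, 1.0e13            # rad/s, G
r_lc = light_cylinder_cm(omega)        # 1e7 cm
b_lc = b_dipole(b_s, r_lc)             # 1e10 G
print(r_lc, b_lc, n_gj(b_lc, omega))   # ~6.5e9 cm^-3, i.e. 1.7e11/Z for iron
```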
Observations of the Crab Nebula indicate that most of the rotational energy emitted by the Crab pulsar is converted into the kinetic energy of particles in a relativistic wind (Kennel & Coroniti 1994; Begelman 1998; Emmering & Chevalier 1987). This conversion may be due to properties of the MHD flow related to magnetic reconnection (Coroniti 1990), or to a more gradual end of the MHD limit (Melatos & Melrose 1996). Some analytical and numerical studies show the development of kinetically dominated relativistic winds (see e.g. Begelman & Li 1994), but at present the theoretical understanding of the wind dynamics is far from complete. The basic idea of accelerating plasmas by the Poynting flux was proposed by Weber & Davis (1967) (then called the magnetic slingshot). Later, Michel (1969) showed that for a perfectly spherical flow the complete conversion of the magnetic energy into kinetic energy of the flow could not be achieved. However, Begelman & Li (1994) reconsidered the problem and showed that even small deviations from a spherical flow could imply an efficient conversion of the magnetic energy into kinetic energy of the wind through the so-called magnetic nozzle effect, provided the magnetic field lines have the right geometry. In the present study we assume that, at least for some neutron stars, most of the magnetic energy in the wind zone is converted into the flow kinetic energy of the particles in the wind, and that the rest mass density of the regions of the wind containing iron ions is not dominated by electron-positron pairs; that is, the electron-positron density is less than $`10^5`$ times that of the iron ions. With these assumptions, the magnetic field in the wind zone decreases as $`B(r)\simeq B_{lc}R_{lc}/r`$. For surface fields of $`B_S=10^{13}B_{13}`$ G, the field at the light cylinder is $`B_{lc}=10^{10}B_{13}\mathrm{\Omega }_{3k}^3`$ G. The maximum energy of particles that can be contained in the wind near the light cylinder is $$E_{max}=ZeB_{lc}R_{lc}\simeq 8\times 10^{20}Z_{26}B_{13}\mathrm{\Omega }_{3k}^2\mathrm{eV},$$ (1) where $`Z_{26}\equiv Z/26`$. In the rest frame of the wind, the plasma is relatively cold, while in the star’s rest frame the plasma moves with Lorentz factors $`10^9`$$`10^{10}`$. The typical energy of the accelerated cosmic rays, $`E_{cr}`$, can be estimated by considering the magnetic energy per ion at the light cylinder, $`E_{cr}\sim B_{lc}^2/(8\pi n_{GJ})`$. At the light cylinder $`n_{GJ}=1.7\times 10^{11}B_{13}\mathrm{\Omega }_{3k}^4/Z`$ cm<sup>-3</sup>, which gives $$E_{cr}\simeq 4\times 10^{20}Z_{26}B_{13}\mathrm{\Omega }_{3k}^2\mathrm{eV},$$ (2) similar to $`E_{max}`$ above (Gallant & Arons 1994; Begelman 1994). The spectrum of accelerated UHECRs is determined by the evolution of the rotational frequency: as the star spins down, the energy of the cosmic ray particles ejected with the wind decreases. The total fluence of UHECRs between energy $`E`$ and $`E+dE`$ is $$N(E)dE=\frac{\dot{𝒩}}{\dot{\mathrm{\Omega }}}\frac{d\mathrm{\Omega }}{dE}dE,$$ (3) where the particle luminosity is $$\dot{𝒩}=\xi n_{GJ}\pi R_{lc}^2c=6\times 10^{34}\xi \frac{B_{13}\mathrm{\Omega }_{3k}^2}{Z_{26}}\mathrm{s}^{-1}$$ (4) and $`\xi <1`$ is the efficiency for accelerating particles at the light cylinder. The rotation speed decreases due to electromagnetic and gravitational radiation (Lindblom, Owen & Morsink 1998; Andersson, Kokkotas & Schutz 1999).
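As a quick check of eqs. (1)–(2), a sketch evaluating both in cgs units for the fiducial values $`B_{13}=\mathrm{\Omega }_{3k}=1`$ and $`Z=26`$:

```python
import numpy as np

E_ESU = 4.803e-10       # elementary charge, esu
ERG_PER_EV = 1.602e-12  # erg per eV

def e_max_ev(z, b_lc, r_lc):
    """Eq. (1): confinement limit Z e B_lc R_lc, converted to eV."""
    return z * E_ESU * b_lc * r_lc / ERG_PER_EV

def e_cr_ev(b_lc, n_ion):
    """Eq. (2): magnetic energy per ion, B^2 / (8 pi n), in eV."""
    return b_lc ** 2 / (8.0 * np.pi * n_ion) / ERG_PER_EV

print(e_max_ev(26, 1.0e10, 1.0e7))   # ~8e20 eV
print(e_cr_ev(1.0e10, 6.5e9))        # ~4e20 eV, with n_GJ ~ 1.7e11/26 cm^-3
```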
For $`B_S\gtrsim 10^{13}`$ G, r-mode gravitational radiation is likely suppressed (Rezzolla, Lamb & Shapiro 1999) and the spin down may be dominated by magnetic dipole radiation, given by: $$I\mathrm{\Omega }\dot{\mathrm{\Omega }}=-\frac{B_S^2R_S^6\mathrm{\Omega }^4}{6c^3}.$$ (5) For a moment of inertia $`I=10^{45}`$ g cm<sup>2</sup>, the time derivative of the spin frequency is $`\dot{\mathrm{\Omega }}=-1.7\times 10^{-5}B_{13}^2\mathrm{\Omega }_{3k}^3`$ rad s<sup>-2</sup>, and Eq. (2) gives $$\frac{dE}{d\mathrm{\Omega }}=\frac{2E}{\mathrm{\Omega }}=6.7\times 10^{-4}\frac{E}{\mathrm{\Omega }_{3k}}\mathrm{s}.$$ (6) Substituting in Eq. (3), the particle spectrum from each neutron star is $$N(E)=\xi \frac{5.5\times 10^{31}}{B_{13}E_{20}Z_{26}}\mathrm{GeV}^{-1},$$ (7) where $`E=10^{20}E_{20}\mathrm{eV}`$. Neutron stars are produced in our Galaxy at a rate $`1/\tau `$, where $`\tau =100\tau _2`$ yr, and a fraction $`ϵ`$ of them have the required magnetic fields, initial spin rates and magnetic field geometry to allow efficient conversion of magnetic energy into kinetic energy of the flow. As discussed below, UHE iron nuclei scatter and diffuse in the Galactic magnetic field. Taking the confining volume for these particles to be $`V_c`$ and the lifetime for confinement to be $`t_c`$, the UHECR density is $`n(E)=ϵN(E)t_c/(\tau V_c)`$, and the flux at the surface of the Earth is $`F(E)=n(E)c/4`$. For a characteristic confinement dimension of $`R=10R_1`$ kpc we can write $`V_c=4\pi R^3/3`$ and $`t_c=QR/c`$, where $`Q>1`$ is a measure of how well the UHECRs are trapped. The predicted UHECR flux at the Earth is $$F(E)=10^{-24}\frac{\xi ϵQ}{\tau _2R_1^2B_{13}E_{20}Z_{26}}\mathrm{GeV}^{-1}\mathrm{cm}^{-2}\mathrm{s}^{-1}.$$ (8) By comparing with observations, we can estimate the required efficiency factor, $`\xi ϵ`$. The AGASA experiment finds that the flux at $`10^{20}\mathrm{eV}`$ at Earth is $`F(E)=4\times 10^{-30}\mathrm{GeV}^{-1}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$. Equating this flux with the estimate of Eq. (8), we find that the efficiency factor only needs to be $`\xi ϵ\simeq 4\times 10^{-6}Q^{-1}`$. The smallness of the required efficiency suggests that young, Galactic neutron stars can be the source of UHECRs even if only a small fraction of stars are born with very rapid spin frequencies and high magnetic fields. The observed energy spectrum of cosmic rays below the expected GZK cutoff (i.e., between $`10^8`$ eV and $`10^{19}`$ eV) has a steep energy dependence $`N(E)\propto E^{-\gamma }`$, with $`\gamma \simeq 2.7`$ for $`E\lesssim 10^{15}`$ eV and $`\gamma \simeq 3.1`$ for $`10^{15}\lesssim E(\mathrm{eV})\lesssim 10^{19}`$ (Gaisser 1990). The events with energy above $`10^{19.5}`$ eV, however, show a much flatter spectrum with $`1\lesssim \gamma \lesssim 2`$; the drastic change in slope suggests the emergence of a new component of cosmic rays at ultra-high energies. The predicted spectrum of Eq. (8) is very flat, $`\gamma =1`$, which agrees with the lower end of the plausible range of $`\gamma `$ observed at ultra-high energies. Propagation effects can produce an energy dependence of the confinement parameter $`Q`$ and, correspondingly, a steepening of the spectrum toward the middle of the observed range $`1\lesssim \gamma \lesssim 2`$. Even though a young neutron star is usually surrounded by the remnant of the presupernova star, the accelerated particles can escape the supernova remnant without significant degradation as the envelope expands.
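Before turning to the envelope, the arithmetic behind eqs. (7)–(8) can be checked in a few lines; a sketch using the fiducial parameters quoted above:

```python
import numpy as np

C, KPC, YR = 2.998e10, 3.086e21, 3.156e7   # cgs units

def flux_earth(e20, xi_eps, b13=1.0, z26=1.0, tau2=1.0, r1=1.0, q=1.0):
    """Eq. (8) assembled from eq. (7): F(E) in GeV^-1 cm^-2 s^-1."""
    n_per_star = 5.5e31 / (b13 * e20 * z26)   # eq. (7), with xi*eps pulled out
    tau = 100.0 * tau2 * YR                   # neutron-star birth interval, s
    r = 10.0 * r1 * KPC                       # confinement radius, cm
    v_c = 4.0 * np.pi * r ** 3 / 3.0          # confining volume
    t_c = q * r / C                           # confinement lifetime
    n_density = xi_eps * n_per_star * t_c / (tau * v_c)
    return n_density * C / 4.0

# an efficiency xi*eps ~ 4e-6 (for Q = 1) indeed reproduces the AGASA flux
print(flux_earth(e20=1.0, xi_eps=4e-6))       # ~4e-30 GeV^-1 cm^-2 s^-1
```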
A requirement for relativistic winds to supply UHECRs is that the envelope become transparent to UHE iron nuclei before the spin rate of the neutron star decreases to the level where the star is unable to emit particles of the necessary energy. To estimate the evolution of the column density of the envelope, consider a supernova that imparts $`E_{SN}=10^{51}\mathcal{E}_{51}`$ erg to the stellar envelope of mass $`M_{env}=10M_1\mathrm{M}_{\odot }`$. The envelope then disperses with a velocity $`v_e\simeq \left(2E_{SN}/M_{env}\right)^{1/2}=3\times 10^8\left(\mathcal{E}_{51}/M_1\right)^{1/2}\mathrm{cm}\mathrm{s}^{-1}`$. The column density of the envelope surrounding the neutron star is $`\mathrm{\Sigma }\simeq M_{env}/(4\pi R_{eff}^2)`$, where $`R_{eff}=R_0+v_et`$ and $`R_0\sim 10^{14}\mathrm{cm}`$ is the characteristic radius of the presupernova star. We now have $$\mathrm{\Sigma }\simeq \frac{M_{env}}{4\pi \left[R_0+v_et\right]^2}=1.6\times 10^{16}\frac{M_1^2\mathcal{E}_{51}^{-1}}{t^2(1+t_e/t)^2}\mathrm{g}\mathrm{cm}^{-2},$$ (9) where $`t`$ is in seconds, and $`t_e=R_0/v_e\simeq 3\times 10^5(M_1/\mathcal{E}_{51})^{1/2}`$ s. The condition for iron nuclei to traverse the supernova envelope without significant losses is that $`\mathrm{\Sigma }\lesssim 100`$ g cm<sup>-2</sup>. This “transparency” occurs at times $`t>t_{tr}=1.3\times 10^7M_1\mathcal{E}_{51}^{-1/2}\mathrm{s}\gg t_e`$. As the envelope is being ejected, the neutron star spin is slowing due to the magnetic dipole radiation, Eq. (5), so that $$\mathrm{\Omega }_{3k}^2(t)=\frac{\mathrm{\Omega }_{i3k}^2}{[1+t_8B_{13}^2\mathrm{\Omega }_{i3k}^2]},$$ (10) where $`3000\mathrm{\Omega }_{i3k}`$ rad s<sup>-1</sup> is the initial spin rate and $`t_8=t/10^8\mathrm{s}`$. The cosmic ray energy thus evolves according to $$E_{cr}(t)=4\times 10^{20}\mathrm{eV}\frac{Z_{26}B_{13}\mathrm{\Omega }_{i3k}^2}{[1+t_8B_{13}^2\mathrm{\Omega }_{i3k}^2]}.$$ (11) The condition that a young neutron star could produce the UHECRs is that $`E_{cr}`$ exceeds the needed energy when the envelope becomes transparent; i.e., $`E_{cr}(t_{tr})>10^{20}E_{20}`$ eV. This translates into the following condition: $$\mathrm{\Omega }_i>\frac{3000\mathrm{s}^{-1}}{B_{13}^{1/2}\left[4Z_{26}E_{20}^{-1}-0.13M_1B_{13}\mathcal{E}_{51}^{-1/2}\right]^{1/2}}.$$ (12) From this equation we obtain the allowed regions in the $`B_S`$-$`\mathrm{\Omega }_i`$ plane shown in Figure 1 for $`E_{20}=1`$ and 3 and $`M_{env}=5`$ and $`50\mathrm{M}_{\odot }`$. For the parameters within the allowed region, the acceleration and survival of UHE iron nuclei are not significantly affected by the ambient photon radiation. The most important source of radiation in the wind region is the thermal emission from the star’s surface. The low energy non-thermal radiation from the neutron star is not significant unless it is $`>10^4`$ times that of the Crab pulsar. In the time needed for the envelope to become transparent, the surface cools to $`\sim 3\times 10^6`$ K (Tsuruta 1998). For these temperatures, photodissociation (see, e.g., Protheroe, Bednarek & Luo 1998) and Compton drag have minor effects on the energy and composition of the accelerating iron nuclei. Furthermore, synchrotron losses are unimportant because the plasma is essentially cold in its own rest frame. The relativistic MHD wind from a rapidly spinning neutron star may impart more energy to the supernova remnant than the initial explosion. For initial spin rates $`\sim 1000`$ rad s<sup>-1</sup>, the rotational energy is $`\sim 10^{51}`$ erg, comparable to the kinetic energy of most supernova remnants.
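A sketch evaluating the transparency time of eq. (9) and the minimum spin rate of eq. (12), for the fiducial values $`M_1=\mathcal{E}_{51}=B_{13}=Z_{26}=E_{20}=1`$:

```python
import numpy as np

def t_transparent_s(m1=1.0, e51=1.0):
    """Eq. (9): time after which Sigma < ~100 g/cm^2,
    t_tr ~ 1.3e7 * M1 / sqrt(E51) seconds."""
    return 1.3e7 * m1 / np.sqrt(e51)

def omega_min(b13=1.0, z26=1.0, e20=1.0, m1=1.0, e51=1.0):
    """Eq. (12): minimum initial spin rate (rad/s); NaN if no rate suffices."""
    bracket = 4.0 * z26 / e20 - 0.13 * m1 * b13 / np.sqrt(e51)
    if bracket <= 0.0:
        return float("nan")
    return 3000.0 / np.sqrt(b13 * bracket)

print(t_transparent_s())           # ~1.3e7 s, i.e. about five months
w = omega_min()                    # ~1.5e3 rad/s
print(w, 2.0 * np.pi / w * 1e3)    # i.e. an initial period shorter than ~4 ms
```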
More rapidly spinning neutron stars may generate highly-energetic supernova events, possibly similar to SN 1998bw (Kulkarni 1998). In these cases, the right boundary of the allowed region in Figure 1 should be enlarged because the remnant expands more rapidly than assumed above. The iron ejected with energies $`\sim 10^{20}`$ eV will reach Earth after being deflected by the Galactic and halo magnetic fields (Zirakashvili, Pochepkin, Ptuskin & Rogovaya 1998). The gyroradius of these UHECRs in the Galactic field of strength $`B_{gal}`$ is $$r_B=\frac{E_{cr}}{ZeB}=\frac{1.4}{Z_{26}}\left(\frac{3\mu \mathrm{G}}{B_{gal}}\right)E_{20}\mathrm{kpc}$$ (13) which is considerably less than the typical distance to a young neutron star ($`\sim 8`$ kpc). Therefore, ultra-high energy iron arriving at the Earth would not point back at its source. A Galactic iron source is consistent with an approximately isotropic arrival direction distribution as observed by AGASA for UHECRs (Zirakashvili et al. 1998). In support of this interpretation, we note that the cosmic ray component at $`10^{18}`$ eV is nearly isotropic, with only a slight correlation with the Galactic disk and spiral arms (Hayashida et al. 1999). If these cosmic rays are protons of Galactic origin, their isotropy is indicative of the diffusive effect of the Galactic and halo magnetic fields. Since the iron arrival distribution at $`10^{20}`$ eV probes similar trajectories to protons at a few times $`10^{18}`$ eV, we expect the iron to show a nearly isotropic distribution with a slight correlation with the Galactic center and disk. This correlation should become apparent if the number of observed events grows by orders of magnitude or if events with energies higher than the present highest-energy events are detected. Although some indication of a correlation with the Galactic center for events above $`10^{20}`$ eV has been recently reported (Stanev & Hillas 1999), the small number of observed events limits the significance of this finding. In conclusion, we propose that ultra-high-energy cosmic ray events originate from iron nuclei accelerated by young, strongly magnetic, Galactic neutron stars. Iron nuclei from the surface of newborn neutron stars are accelerated to ultra-high energies by a relativistic MHD wind. Neutron stars whose initial spin periods are shorter than $`4(B_S/10^{13}\mathrm{G})^{1/2}`$ ms can accelerate iron nuclei to energies greater than $`10^{20}`$ eV. These ions can pass through the radiation field near the neutron star and the remnant of the supernova explosion that produced the neutron star without suffering significant deceleration or spallation reactions. The best test of this proposal is an unambiguous composition (mass/charge) determination and a correlation of the arrival directions of events with energies above $`10^{20}`$ eV with the Galactic center and disk. Both aspects will be well tested by future experiments such as the Auger Project (Cronin 1999) and OWL-Airwatch (Ormes et al. 1997). In addition, our model will be severely constrained if the indication of small-scale clustering among UHECR events (Uchihori et al. 1999) is confirmed by future experiments to be due to an isotropically distributed set of discrete sources.

Acknowledgments

We are grateful to A. Ferrari, C. Ho, F. K. Lamb and H. Li for helpful conversations.
This research was partly supported by NSF through grant AST 94-20759 at the University of Chicago; by NASA grant NAG 5-7092 at Fermilab; and by the DOE at Fermilab, at LANL through IGPP, and at the University of Chicago through grant DE-FG02-91ER40606.

Figure caption

Fig. 1: Parameter space for which acceleration and escape of the accelerated particles through the ejecta are allowed. The solid lines refer to particle energy $`E_{cr}=10^{20}`$ eV and dashed lines to $`E_{cr}=3\times 10^{20}`$ eV. The curves are plotted for two values of the envelope mass, $`M_{env}=50\mathrm{M}_{\odot }`$ and $`M_{env}=5\mathrm{M}_{\odot }`$, as indicated. The horizontal line at spin period $`0.3`$ ms indicates the minimum period (maximum angular speed) allowed for neutron stars (Haensel, Lasota & Zdunik 1999).
# Experimental Cosmic Statistics I: Variance

## 1 Introduction

Measurements of higher order statistics in galaxy catalogs test theories of structure formation, the nature of the initial fluctuations, and the processes of galaxy formation. The power of such measurements to constrain theories, however, depends crucially on the detailed understanding of the errors. Usually it is tacitly assumed that the underlying distribution of events is Gaussian and thus the term “errors” becomes synonymous with the “variance”. Knowledge of the variance is sufficient only when the error distribution is Gaussian. For statistics related to counts-in-cells a rigorous theory for the cosmic errors was presented in a suite of papers by Szapudi & Colombi 1996, hereafter SC, Colombi, Szapudi & Szalay 1998, and Szapudi, Colombi & Bernardeau 1999a, hereafter SCB. Nevertheless these calculations relied on approximations, for which the domain of validity could not be checked extensively until the arrival of the Virgo Hubble Volume Simulations. Moreover, the regime where the underlying cosmic distribution is Gaussian could not be examined previously. This paper addresses the first problem by studying the statistical errors and cross-correlations numerically, while a companion paper, Szapudi et al. (1999c, hereafter paper II), discusses the underlying distributions of statistics in their full splendour. Let us consider a statistic $`A`$ measured in a galaxy catalog of volume $`V`$. The corresponding estimator is denoted by $`\stackrel{~}{A}`$. In practice, only one sample of our local universe is accessible. However, a frequentist numerical experiment can be performed in a large numerical simulation if a sufficient number $`C_{\mathcal{E}}`$ of galaxy catalogs $`\mathcal{E}_i`$ can be extracted from it. In each of them a value $`\stackrel{~}{A}_i`$, $`1\le i\le C_{\mathcal{E}}`$, can be measured. For any statistic $`A`$ the cosmic distribution function $`\mathrm{\Upsilon }(\stackrel{~}{A})`$ is the probability density of measuring the value $`\stackrel{~}{A}`$ in a particular finite realization. This distribution function can be approximately extracted from the $`C_{\mathcal{E}}`$ subsamples under the ergodic hypothesis. For simplicity, we dispense with the (logical) notation $`\stackrel{~}{\mathrm{\Upsilon }}`$, and replace it in what follows with $`\mathrm{\Upsilon }`$. This expresses the fact that we do not wish to enter one more level of complexity by considering the “error on the error” problem (SC) in greater detail. The smoothness and regularity of our measurements suggest that the number of realizations, which represents a two orders of magnitude improvement over any previous work, is large enough to provide an adequate determination of the quantities measured. While in practice the function $`\mathrm{\Upsilon }(\stackrel{~}{A})`$ is the fundamental quantity underlying all measurements, this paper concentrates on its first two moments; paper II examines its shape and skewness in detail. In the following definitions, integrals are to be understood as summations of the estimator over the distribution function. The first moment of $`\mathrm{\Upsilon }(\stackrel{~}{A})`$ is the spatial average $$\int \stackrel{~}{A}\mathrm{\Upsilon }(\stackrel{~}{A})d\stackrel{~}{A}=\langle \stackrel{~}{A}\rangle \equiv A,$$ (1) where it is assumed that the estimator $`\stackrel{~}{A}`$ is unbiased. The bias is negligible compared to the relative cosmic error in most meaningful cases (SCB), as illustrated later by practical examples.
For completeness, however, the definition of the cosmic bias is $$b_A\equiv \frac{\langle \stackrel{~}{A}\rangle -A}{A}.$$ (2) The second (centered) moment of the cosmic distribution is called the cosmic error, $$\int (\stackrel{~}{A}-A)^2\mathrm{\Upsilon }(\stackrel{~}{A})d\stackrel{~}{A}=\langle (\stackrel{~}{A}-A)^2\rangle \equiv (\mathrm{\Delta }A)^2.$$ (3) For a biased statistic, the variance should be centered around the biased average and not the true value. It can however be shown formally (SCB) that the above definition is valid to second order in $`\mathrm{\Delta }A/A`$ for any biased statistic.<sup>1</sup><sup>1</sup>1More precisely, to first order in $`\langle (\stackrel{~}{x}_i-x_i)(\stackrel{~}{x}_j-x_j)\rangle `$ where $`\stackrel{~}{x}_i`$ denote the unbiased estimators from which $`\stackrel{~}{A}`$ is constructed in a non-linear fashion. Finally, the cosmic covariance can be defined analogously to the variance as $`\langle (\stackrel{~}{A}-A)(\stackrel{~}{B}-B)\rangle `$. The theoretical results for the errors and cross-correlations are summarized below. If $`v`$ and $`V`$ are the cell and catalog volumes respectively, the cosmic error can be approximately separated into three components to leading order in $`v/V`$ (SC):
* The discreteness or shot noise error, which is due to the finite number of objects $`N_{\mathrm{obj}}`$ in the catalog. It increases towards small scales and with the order of the statistics considered, but becomes negligible when $`N_{\mathrm{obj}}`$ is very large.
* The edge effect error is due to the uneven weight given to galaxies near the edges of the survey compared to those near the centre. It is especially significant on large scales, comparable to the size of the catalog.
* The finite volume error is due to fluctuations of the underlying density field on scales larger than the characteristic size of the catalog.

The next to leading order correction in $`v/V`$ is proportional to the perimeter $`\partial V`$ of the catalog. At this level of accuracy there are also correlations between the three sources of error (e.g., Colombi et al. 1999, hereafter CCDFS). Colombi, Bouchet & Schaeffer (1995, hereafter CBS) investigated in detail the cosmic error on the void probability function. The groundwork for error calculations of statistics related to counts-in-cells was laid in SC, where the cosmic error for factorial moments<sup>2</sup><sup>2</sup>2e.g., Appendix A for definitions and notations. was evaluated analytically. SCB extended the work of SC to cross-correlations, including perturbation theory predictions (e.g., Bernardeau 1996). The cosmic errors, biases (see also Hui & Gaztañaga 1998, hereafter HG) and covariances for cumulants $`\overline{\xi }`$ and $`S_N`$ were calculated as well. The main goal of this paper is to compare the analytical predictions of CBS, SC and SCB to measurements made in the VIRGO $`\tau `$CDM Hubble Volume simulation. The exhaustive nature of the comparison that follows warrants the questions: is it meaningful to strive for a detailed numerical understanding of the theory? How much of it is practically useful? Can it accurately estimate the errors on measurements in future surveys? While some of these questions were addressed in SCB, a brief account of supporting arguments is given next. The analytics do take into account all possible theoretical errors, but systematics, such as those resulting from cut-out holes, incompleteness from fiber separation, possible magnitude errors in the case of the 2dF, etc., could in principle corrupt the theory and introduce biases.
These effects might even require detailed simulation of the survey. In the case of the UKST and Stromlo surveys such simulations were performed and compared with the predictions: the spectacular agreement surprised even the present authors (Hoyle, Szapudi, & Baugh 1999). Thus systematics do not necessarily dominate; for another example, in which cut-out holes were found to have an insignificant effect on the cosmic probability distribution of the two-point correlation function, see Kerscher, Szapudi, & Szalay (1999). Moreover, the wide theoretical framework is flexible enough to incorporate all systematics which have the effect of altering certain parameters, such as the factorial moments. In such cases any bias can be corrected for. There might be unforeseen systematics which have such a complicated non-linear effect that it cannot even be modelled by an appropriate alteration of a set of parameters. While it would be difficult to anticipate whether these could dominate for a particular survey, it is still instructive to investigate the potential results in an ideal case, especially during the design phase of a survey. Error calculations help optimize geometry, sampling, and other survey parameters. During the design of the VIRMOS survey such considerations were taken into account (Colombi et al. 1999). These calculations, as well as maximum likelihood analyses, need to explore such a large region of parameter space that they would typically be impractical to carry out with simulations. In addition to applications to surveys, the theory can be applied reliably to assess the significance of measurements in simulations where multiple runs would be too costly (e.g., Szapudi et al. 1999d). All these present and potential future applications motivate the detailed investigations performed in this article. The exposition is organized as follows. § 2 describes the $`N`$-body data used for the purpose of our study. § 3 analyses the counts-in-cells distribution function $`P_N`$, its cumulants $`\overline{\xi }`$ and $`S_N`$’s, and the scaling function of the void probability distribution $`\sigma \equiv -\mathrm{ln}(P_0)/F_1`$. These quantities are measured in the full simulation as well as in $`C_{\mathcal{E}}=4096`$ subsamples. The accuracy of the simulation is assessed by comparing the measurements to the non-linear Ansatz of Hamilton et al. (1991) improved by Peacock and Dodds (1996, hereafter PD), and to perturbation theory predictions (hereafter PT). The model of Fosalba & Gaztañaga (1998) and extended perturbation theory (hereafter EPT, see Colombi, Bernardeau, Bouchet & Hernquist 1997) are considered as well. § 4 extends these investigations to the cosmic error and the variance of the cosmic distribution function. A preliminary investigation of the cross-correlations is done for factorial moments and cumulants. The measurements are compared where possible to the theoretical predictions of SC, SCB and CBS, including extended perturbation theory. Finally § 5 recapitulates the results and discusses their implications. In addition, Appendix A gives a summary of the definitions and notations used in this paper for counts-in-cells statistics. It will be useful for the reader unfamiliar with these concepts.

## 2 The $`N`$-body data

The $`\tau `$CDM Hubble volume simulation (e.g., Evrard et al. 1999) was carried out using a parallel $`\mathrm{P}^3\mathrm{M}`$ code described in MacFarland et al. (1998). The code was run on 512 processors of the Cray T3E-600 at the Rechenzentrum in Garching.
Initial conditions were laid down by imposing perturbations on an initially uniform state represented by a “glass” distribution of particles generated by the method of White (1996). Because of the size of the simulation, a glass file of $`10^6`$ particles was tiled 10 times in each direction. As the initial glass file was created with periodic boundary conditions, tiling does not create any non-uniformities at the interface between the tiles. A Gaussian random density field was set up by perturbing the positions of the particles and assigning velocities to them according to the growing mode linear theory solutions, using the algorithm described by Efstathiou et al. (1985). Individual modes were assigned random phases and the power for each mode was selected at random from an exponential distribution with mean power corresponding to the desired power spectrum $`\langle |\delta _k|^2\rangle `$. Unlike Efstathiou et al. (1985), however, the initial velocities were set up exactly proportional to the initial displacements, according to the Zel’dovich (1970) approximation. As shown by Scoccimarro (1998), this leads to larger initial transients. To compensate for this, the simulation was started at a high redshift, $`z=29`$. The $`\tau `$CDM cosmological model used for the simulation is described in more detail in Jenkins et al. (1998). The approximation to the linear CDM power spectrum (Bond & Efstathiou 1984) was used: $$\langle |\delta _k|^2\rangle =\frac{Ak}{\left[1+\left[aq+(bq)^{3/2}+(cq)^2\right]^\nu \right]^{2/\nu }},$$ (4) where $`q=k/\mathrm{\Gamma }`$, $`a=6.4h^{-1}`$ Mpc, $`b=3h^{-1}`$ Mpc, $`c=1.7h^{-1}`$ Mpc and $`\nu =1.13`$. The value of $`\mathrm{\Gamma }`$ was set equal to $`0.21`$. The normalization constant, $`A`$, is chosen by fixing the value of $`\sigma _8^2`$ (the linear variance of the matter distribution in a sphere of radius $`8h^{-1}`$ Mpc at $`z=0`$). A value of $`\sigma _8=0.6`$ was motivated by estimates based on cluster abundances (White, Efstathiou & Frenk 1993; Eke, Cole & Frenk 1996). The simulation was integrated using a leapfrog scheme as described in Hockney & Eastwood (1981), section 11-4-3. The simulation was completed in 500 equal steps in time. The softening used was 100 kpc/$`h`$ comoving, Plummer equivalent (see Jenkins et al. 1998).

## 3 Counts-in-cells analysis: the underlying statistics

The count probability distribution function (CPDF) $`P_N`$ is defined as the probability of finding $`N`$ objects in a cell of volume $`v`$ thrown at random in the catalog. The CPDF was measured in the whole simulation for cubic cells of size $`L_{\mathrm{box}}/512\le \mathrm{\ell }\le L_{\mathrm{box}}/8`$, where $`L_{\mathrm{box}}=2000h^{-1}`$ Mpc is the size of the simulation cube (see Table 1). Then the simulation cube was divided into $`16^3`$ contiguous cubic subsamples $`\mathcal{E}_i`$ of size $`L=125h^{-1}`$ Mpc. $`P_N`$ was evaluated in each of these for $`L/512\le \mathrm{\ell }\le L/2`$ (see Table 1). The successive convolution algorithm of Szapudi et al. (1999d, hereafter SQSL) allowed the determination of the CPDF on all scales simultaneously in only a few minutes of CPU on a workstation<sup>3</sup><sup>3</sup>3This estimate does not include the reading in of the file. with $`512^3`$ sampling cells. The accuracy is thus $`P_N\ge P_{\mathrm{min},1}=1/512^3\simeq 7.45\times 10^{-9}`$ for the measurement in the full cube and in each individual subsample $`\mathcal{E}_i`$; the accuracy increases by averaging over all subsamples: $`P_N\ge P_{\mathrm{min},2}=1/(512\times 16)^3\simeq 1.82\times 10^{-12}`$. For $`4\lesssim \mathrm{\ell }\lesssim 63h^{-1}`$ Mpc the measurements in the full cube and in the subsamples overlap (Table 1).
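A simplified version of such a counts-in-cells measurement can be written in a few lines; the sketch below bins particles on a disjoint grid of cubic cells, which differs from throwing cells at random and from the successive-convolution algorithm of SQSL, but illustrates the estimator:

```python
import numpy as np

def count_probabilities(pos, box, ncells):
    """CPDF P_N estimated from the counts in a regular grid of
    (box/ncells)^3 cubic cells; pos is an (n, 3) array of positions."""
    idx = np.floor(pos / box * ncells).astype(int) % ncells
    flat = np.ravel_multi_index(idx.T, (ncells,) * 3)
    counts = np.bincount(flat, minlength=ncells ** 3)
    return np.bincount(counts) / float(ncells ** 3)   # P_N for N = 0, 1, ...

# toy usage: 1e5 Poisson points in a unit box, 16^3 cells
rng = np.random.default_rng(1)
p_n = count_probabilities(rng.random((100_000, 3)), box=1.0, ncells=16)
```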
The overlap is illustrated by Fig. 1, displaying $`P_N`$ as a function of $`N`$: the figure presents the CPDF extracted from the full cube and the average over all the sub-cubes. In the overlap region, the difference can be detected as slight irregularities of the high-$`N`$ tail of the full-cube measurements. The figure suggests that, at least on the smallest scales considered in the full cube (or in each subsample), our sampling is probably insufficient by the standards of SC. However, this does not significantly affect the calculations, as indicated by the agreement between the moments measured in the full cube and those calculated from averages over the subsamples. Therefore measurement errors will be neglected in what follows, i.e. infinite sampling is assumed. Note that this ideal can be achieved in practice for two-dimensional and small three-dimensional catalogs via the algorithm of Szapudi (1998); however, the present simulation is too large for this method. The smallest scale considered is only $`2.4`$ times larger than the softening length $`\lambda _ϵ=100h^{-1}`$ kpc. As discussed extensively in Colombi, Bouchet & Hernquist (1996), contamination by softening restricts the validity of the simulation on small scales. For spherical cells of radius $`R`$, at least $`R\gtrsim 4\lambda _ϵ`$ should hold. For the cubic cells of the present simulation this condition translates to $`\mathrm{\ell }\gtrsim 6.5\lambda _ϵ\simeq 0.65h^{-1}`$ Mpc. Thus the two smallest cell sizes, i.e. the two leftmost points, could be contaminated by softening, a fact that should be borne in mind, especially when comparing with theoretical calculations which employ models motivated by dynamics. On the other hand, for statistical purposes the dynamics can be ignored and the simulation can be regarded as a set with prescribed statistics. Then the possible contamination is irrelevant at the level of the approximations taken in the next sections. Another possible source of contamination could in principle be the anticorrelation introduced by the glass initial positions. The effect of this is, however, extremely small, as evidenced by the measurement of $`\overline{\xi }`$ shown below. Figure 2 displays the average correlation function $`\overline{\xi }`$ as a function of scale. By definition $$\overline{\xi }\equiv \frac{1}{v^2}\int _vd^3r_1d^3r_2\xi (|r_1-r_2|),$$ (5) where $`\xi (r)`$ is the two-point correlation function. In practice, it is obtained as the variance of the counts-in-cells, corrected automatically for discreteness effects via the use of factorial moments (see SQSL and Appendix A for a detailed description of the method used in this paper to obtain the cumulants, including the variance, from counts-in-cells). The measured $`\overline{\xi }`$ is compared with linear theory (dots) and with the non-linear Ansatz of Hamilton et al. (1991) improved by PD (dashes). As expected, the agreement with linear theory in the regime $`\overline{\xi }\ll 1`$ is excellent, even on the largest scales where the anticorrelations introduced by the glass initial condition could cause contamination. The two leftmost points are slightly below the dashes because of softening effects, as discussed above; otherwise the results are in perfect accord with theory. Figure 3 plots the extracted cumulants $`S_N`$’s against $`\overline{\xi }`$. They are compared with predictions of various models, including perturbation theory (PT, dots).
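For reference, the route from the measured factorial moments to the plotted cumulants is short; a minimal sketch using the standard relations between factorial and connected moments (cf. Appendix A):

```python
def cumulants_from_factorial_moments(f1, f2, f3, f4):
    """xi_bar, S_3 and S_4 from the first four factorial moments F_k,
    which are automatically free of discreteness (shot noise) terms."""
    xi = f2 / f1**2 - 1.0
    xi3 = (f3 - 3.0 * f2 * f1 + 2.0 * f1**3) / f1**3
    xi4 = (f4 - 4.0 * f3 * f1 - 3.0 * f2**2
           + 12.0 * f2 * f1**2 - 6.0 * f1**4) / f1**4
    return xi, xi3 / xi**2, xi4 / xi**3
```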
By definition (e.g., Balian & Schaeffer 1989a) $$S_N=N^{N-2}Q_N\equiv \overline{\xi }_N/\overline{\xi }^{N-1},$$ (6) where $`\overline{\xi }_N`$ is the $`N`$-point correlation function averaged over a cell: $$\overline{\xi }_N=\frac{1}{v^N}\int _vd^3r_1\mathrm{\cdots }d^3r_N\xi _N(r_1,\mathrm{\ldots },r_N).$$ (7) Perturbation theory predictions have been calculated for spherical cells by Juszkiewicz, Bouchet & Colombi (1993) for $`S_3`$ and extended to arbitrary order by Bernardeau (1994): $$S_N(\mathrm{\ell })=f_N(\gamma _1,\mathrm{\ldots },\gamma _{N-2}),$$ (8) $$\gamma _i\equiv \frac{d^i\mathrm{log}\overline{\xi }}{(d\mathrm{log}\mathrm{\ell })^i}.$$ (9) For example $$S_3=\frac{34}{7}+\gamma _1,$$ (10) $$S_4=\frac{60712}{1323}+\frac{62}{3}\gamma _1+\frac{7}{3}\gamma _1^2-\frac{2}{3}\gamma _2.$$ (11) The dots on Fig. 3 assume $`\gamma _i=0`$, $`i\ge 2`$. While this is incorrect in principle for a scale-dependent spectrum such as $`\tau `$CDM, the long dashes on the left-hand panels prove that the contribution of $`\gamma _2`$ is insignificant. Higher-order $`\gamma _i`$ terms, as discussed also by Baugh, Gaztañaga & Efstathiou (1995), have an even smaller effect and can be rightly neglected. PT predictions are accurately fulfilled in the weakly non-linear regime. This again confirms numerous earlier works (see, e.g., Juszkiewicz, Bouchet & Colombi 1993; Bernardeau 1994; Juszkiewicz et al. 1995; Gaztañaga & Baugh 1995; Baugh, Gaztañaga & Efstathiou 1995; SQSL). In fact the textbook-quality agreement with PT demonstrates the accuracy of the $`\tau `$CDM Hubble Volume simulation. The dashes give the predictions obtained from extended perturbation theory (EPT, Colombi et al. 1997; see also Szapudi, Meiksin & Nichol 1996 for EPT applied to galaxy data, and Scoccimarro & Frieman 1998 for “hyperextended” perturbation theory). EPT assumes that the same forms of the higher order moments are preserved in the highly non-linear regime. There $`\gamma _1`$ above is simply an adjustable parameter without any particular meaning, i.e. $$\gamma _{1,\mathrm{eff}}=\gamma _1(S_3)=S_3-\frac{34}{7},$$ (12) where $`S_3`$ is the measured one. With this value of $`\gamma _1`$ the $`S_N`$’s, $`N\ge 4`$, can be computed using equation (8) (with $`\gamma _i=0`$, $`i\ge 2`$). The dashed curves match the measurements quite well even in the highly non-linear regime, thereby reconfirming the efficiency of EPT (see also SQSL). The agreement is not expected to be absolutely perfect from this Ansatz: on Fig. 3, EPT tends to slightly underestimate the measured values of $`S_N`$ when $`1\lesssim \overline{\xi }\lesssim 10`$. The dynamic range in the upper left panel of Fig. 3 is narrower than in the lower left panel: on large scales the agreement between PT and measurement becomes less accurate for the $`S_N`$’s, especially if $`N`$ is large. This might be related to transients due to the initial setup of the particles on a glass perturbed by using the Zel’dovich approximation. On the one hand, the transients related to pure Zel’dovich initial conditions should decrease the value of the $`S_N`$’s (e.g., Juszkiewicz et al. 1993 and Scoccimarro 1998) while, on the other hand, the anticorrelations due to the glass could have the opposite effect by decreasing $`\overline{\xi }^{N-1}`$ more than $`\overline{\xi }_N`$. Although this problem was not examined in detail, the glass contamination on $`\overline{\xi }`$ appears to be inconsequential. Alternatively, finite volume effects can degrade the high-$`N`$ tail of the CPDF (e.g., Colombi, Bouchet & Schaeffer 1994; CBS; Colombi et al. 1996).
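Eqs. (10)–(12) are trivially encoded; a minimal sketch, in which the value of $`\gamma _1`$ used in the example is purely illustrative:

```python
def s3_pt(gamma1):
    """Eq. (10): tree-level perturbation theory skewness."""
    return 34.0 / 7.0 + gamma1

def s4_pt(gamma1, gamma2=0.0):
    """Eq. (11): tree-level kurtosis for spherical cells."""
    return (60712.0 / 1323.0 + 62.0 * gamma1 / 3.0
            + 7.0 * gamma1 ** 2 / 3.0 - 2.0 * gamma2 / 3.0)

def gamma1_eff(s3_measured):
    """Eq. (12): effective EPT index from the measured S_3."""
    return s3_measured - 34.0 / 7.0

print(s3_pt(-1.2), s4_pt(-1.2))   # e.g. for an effective slope gamma1 = -1.2
```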
In addition, it is worth reemphasising that the two rightmost points are prone to errors caused by softening, as discussed earlier. The right-hand panels of Fig. 3 zoom in on the transition between the weakly and highly nonlinear regimes. For comparison, PT (with $`\gamma _i=0`$, $`i\ge 2`$, dots), EPT (dashes), and the one-loop perturbation theory of Fosalba & Gaztañaga (1998) (dots-long dashes) are displayed. The last model agrees with the extracted values of $`S_N`$ for $`\overline{\xi }\lesssim 1`$, or even beyond when the order $`N`$ is high enough (see upper right panel). This affirms the success of one-loop perturbation theory (see also Lokas et al. 1996; Scoccimarro et al. 1998). Interestingly, EPT produces almost identical results to the spherical model when $`\overline{\xi }\lesssim 1`$. Finally, Figure 4 shows $`\sigma =-\mathrm{ln}(P_0)/\overline{N}`$ as a function of scale, compared with EPT predictions. By definition (White 1979; Balian & Schaeffer 1989a; see also Appendix A) $$\sigma =\sum _{N=1}^{\mathrm{\infty }}(-1)^{N-1}\frac{S_N}{N!}\left(\overline{N}\overline{\xi }\right)^{N-1},$$ (13) where $`\overline{N}`$ is the average count in a cell. This function is thus sensitive to low order statistics when $`N_\mathrm{c}\equiv \overline{N}\overline{\xi }\ll 1`$, and to high order statistics when $`N_\mathrm{c}\gg 1`$. According to Fig. 4, EPT is an accurate Ansatz on small scales, where $`\sigma `$ is close to unity and is dominated by the low order $`S_N`$. It is a less precise approximation on the largest scales probed, as expected. Indeed, the rightmost point of Fig. 4 corresponds to where $`\overline{\xi }\lesssim 1`$ in Fig. 3; there EPT increasingly underestimates the $`S_N`$’s when $`N`$ is high. Note the remarkable power-law behavior $`\sigma \propto \ell ^{D_0}`$, $`D_0\simeq 0.25`$, in agreement with the predictions of the scaling model of Balian & Schaeffer (1989a). This reflects a non-trivial (multi)fractal particle distribution (Balian & Schaeffer 1989b) with a Hausdorff dimension $`D_0`$. Such behavior was found in a standard CDM model by Bouchet, Schaeffer & Davis (1991). Subsequently, a fractal distribution with $`D_0\simeq 0.5`$ was established by Colombi, Bouchet & Schaeffer (1992).

## 4 The Cosmic Error

In the previous section we demonstrated that good agreement was obtained when comparing measurements made on the $`\tau `$CDM Hubble Volume dataset with previous work on higher order clustering statistics. Having established the accuracy of the dataset, this section concentrates on the determination of cosmic errors and their comparison to the available theoretical predictions, where possible. In § 4.1 we summarise analytic calculations of the cosmic errors and their cross-correlations. From this follows a systematic study of the experimental cosmic error of low-order statistics, i.e. factorial moments $`F_k`$, $`1\le k\le 4`$ (§ 4.2), and cumulants $`\overline{\xi }`$, $`S_3`$ and $`S_4`$ (§ 4.3), together with a thorough comparison with the theoretical predictions. Also in § 4.3 we discuss the cosmic bias of the cumulants. Then the void probability and its scaling function $`\sigma `$ are explored (§ 4.4), followed by the cosmic error on the CPDF itself (§ 4.5). Finally, in § 4.6, there is a preliminary investigation of the cosmic cross-correlations of factorial moments and cumulants. In all subsequent figures, except for the cross-correlations, errorbars are plotted on the symbols, corresponding to the measurement uncertainty due to the finite number of realizations, $`C=4096`$.
These measurement errors, proportional to $`1/\sqrt{C}`$ (SC), are negligible for our simulation, and the errorbars are smaller than the size of the symbols in most cases. As discussed in the Introduction, we neglect the cosmic error on the determination of the cosmic error (which is due to the finite size of the Hubble Volume itself) because in practice it is insignificant.

### 4.1 Cosmic Error: Theoretical Predictions

Before making any comparison with the analytic predictions, we outline the main ideas in CBS, SC, and SCB; more details can be found in these papers. Spherical cells of radius $`\ell `$ are assumed throughout for simplicity. The bivariate CPDF $`P_{N,M}(\ell ,r)`$ is the probability of finding $`N`$ and $`M`$ points in two cells of size $`\ell `$ at distance $`r=|r_1-r_2|`$ from each other. According to SC the cosmic error is computed via a double integral of $`P_{N,M}(\ell ,r)`$ over $`r_1`$ and $`r_2`$, conveniently split according to whether the cells overlap or not. Overlapping cells give rise to the discreteness and edge effect errors (see Introduction); for these, the locally Poissonian assumption (CBS, SC) enables the approximate representation of the generating function $`P(x,y)`$ by using only the monovariate generating function $`P(x)`$, i.e. the calculation depends on $`\overline{\xi }`$, $`S_N`$, $`N\ge 3`$, and the average count $`\overline{N}`$. Disjoint cells generate the finite volume error (see Introduction); here, to simplify the writing of $`P_{N,M}(\ell ,r)`$, the distance $`r`$ is assumed to be large enough compared to the cell size such that the bivariate CPDF can be Taylor expanded (to first order) in terms of $`\xi (r)/\overline{\xi }`$. This approximation is surprisingly accurate even when the cells touch each other (Szapudi, Szalay & Boschán 1992; Bernardeau 1996, hereafter B96). Three models are used: two particular but still quite general forms of the hierarchical model, SS and BeS, introduced by Szapudi & Szalay (1993a, hereafter SSa; 1993b) and by Bernardeau & Schaeffer (1992), respectively, and perturbation theory, hereafter PT (B96). See SC and SCB for more details. The former two models depend only on monovariate statistics, i.e. on $`\overline{\xi }`$, $`S_N`$, $`N\ge 3`$, and $`\overline{N}`$. PT, on the other hand, is expressed in terms of $`\gamma _i`$, $`\overline{\xi }`$ and $`\overline{N}`$ (B96). In principle, PT is accurate only in the weakly non-linear regime, for which it was originally designed, but it can be extended to the nonlinear regime as well: for monovariate distributions, EPT was proposed by Colombi et al. (1997), as discussed and tested against measurements in § 3. This Ansatz can actually be naturally generalized to the bivariate CPDF (Szapudi & Szalay 1997; SCB). Our version, denoted by E<sup>2</sup>PT, takes the measured (non-linear) value for $`\overline{\xi }`$ and $`\gamma _{1,\mathrm{eff}}`$ from equation (12), and assumes, as EPT, $`\gamma _i=0`$ for $`i\ge 2`$. Except for the error on the void probability and its scaling function $`\sigma `$, detailed in CBS, the theoretical results shown in this section were computed to leading order in $`v/V`$, where $`v`$ is the cell volume and $`V=L^3`$ is the sample volume. The calculation of the error on a statistic of order $`k`$ depends on $`\overline{N}\equiv F_1`$, $`\overline{\xi }`$, $`\overline{\xi }(\widehat{L})`$, the average of the correlation function over the survey (see below), and $`S_N`$, $`3\le N\le 2k`$.
PT is determined by $`\gamma _i`$, $`i\le 2k-2`$ (§ 3), and E<sup>2</sup>PT by $`\gamma _{1,\mathrm{eff}}`$ as explained above. In all cases, we use the measured value of $`\overline{N}`$. Other parameters are chosen as follows: 1. PT: linear theory is employed to compute $`\overline{\xi }`$ and $`\overline{\xi }(\widehat{L})`$ [the catalog is assumed to be spherical to simplify the calculation of integral (16) below], while higher order statistics are evaluated according to eq. (8) with $`\gamma _i=0`$, $`i\ge 2`$. 2. SS, BeS and E<sup>2</sup>PT: the experimental $`\overline{\xi }`$ is used (open symbols on Fig. 2). The quantity $`\overline{\xi }(\widehat{L})`$ is computed numerically with the non-linear Ansatz of PD discussed in § 3 (assuming that the catalog is spherical). For the $`S_N`$’s, the measurements (open symbols on the left-hand panels of Fig. 3) are used for $`\ell \le 15h^{-1}`$ Mpc. On larger scales, EPT is more appropriate to determine $`S_N`$, $`N\ge 4`$: the increasing inaccuracy of the measured $`S_N`$’s on large scales and for large $`N`$ requires this procedure. It is justified all the more since, when $`\overline{\xi }\lesssim 0.27`$, EPT matches the PT predictions quite well (see Fig. 3).

There is a subtlety worth mentioning which concerns the finite volume error, proportional to the integral $$\overline{\xi }(\widehat{L})=\frac{1}{\widehat{V}^2}\int _{r_{12}\ge 2\ell }d^3r_1d^3r_2\,\xi (|r_1-r_2|).$$ (14) To leading order in $`v/V`$, this integral reads (CCDFS) $$\overline{\xi }(\widehat{L})=\overline{\xi }_0(\widehat{L})-\frac{8v}{\widehat{V}}\overline{\xi }_1(2\ell ),$$ (15) with $$\overline{\xi }_0(\widehat{L})=\frac{1}{\widehat{V}^2}\int _{r_1,r_2\in \widehat{V}}d^3r_1d^3r_2\,\xi (|r_1-r_2|),$$ (16) $$\overline{\xi }_1(\ell )\equiv \frac{1}{v}\int _{r\le \ell }4\pi r^2\xi (r)\,dr.$$ (17) In the above equations, $`\widehat{V}`$ corresponds to the volume covered by cells of volume $`v`$ included in the catalog. The next-to-leading order correction, $`\overline{\xi }_1`$, can be identified as a negligible correction to the edge effects for most practical purposes. Although it did not make a significant difference, we included this correction nonetheless.

### 4.2 Cosmic Error: Factorial Moments

Figure 5 presents the cosmic error measured for the factorial moments $`F_k`$, $`1\le k\le 4`$. By definition $$F_k\equiv \langle (N)_k\rangle \equiv \langle N(N-1)\mathrm{\cdots }(N-k+1)\rangle =\sum _N(N)_kP_N.$$ (18) The factorial moments directly estimate the moments of the underlying continuous density field: $`F_k=\overline{N}^k\langle \rho ^k\rangle `$, where $`\overline{N}=F_1`$ is the average count (e.g., SSa). On Fig. 5, the dotted, dashed, long-dashed and dotted-long-dashed curves correspond to SS, BeS, E<sup>2</sup>PT and PT, respectively. All the models converge and agree quite well with the measurements on large scales $`\ell \ge \ell _0=7.1h^{-1}`$ Mpc, as expected, since PT predictions should be valid there. In contrast, on small scales $`\ell <\ell _0`$ the models slightly overestimate the numerically obtained error, E<sup>2</sup>PT being the most accurate. It is worth remembering that the two leftmost points may be contaminated by softening effects and should not be over-interpreted. Nevertheless, the decrease of precision on small scales suggests that our assumptions (i) or (ii) in § 4.1 are becoming more and more approximate in the non-linear regime, i.e. either the local Poisson assumption or the particular hierarchical decompositions lose their accuracy.
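Operationally, the "measured" cosmic error shown in Fig. 5 is nothing more than the scatter of the per-subsample estimates. A schematic sketch follows (ours, not the actual pipeline; array names are illustrative, and the toy counts are Poissonian rather than clustered):

```python
import numpy as np

def falling(N, k):
    """Falling factorial (N)_k = N(N-1)...(N-k+1), elementwise."""
    out = np.ones_like(N, dtype=float)
    for j in range(k):
        out = out * (N - j)
    return out

def cosmic_error_Fk(counts_per_subcube, kmax=4):
    """Relative cosmic error Delta F_k / F_k from the subsample scatter.

    counts_per_subcube: one integer array of counts-in-cells per subcube
    (4096 of them in the paper) at a fixed scale ell."""
    Ftilde = np.array([[falling(c, k).mean() for k in range(1, kmax + 1)]
                       for c in counts_per_subcube])
    return Ftilde.std(axis=0, ddof=1) / Ftilde.mean(axis=0)

# Toy usage: Poisson counts, so only discreteness scatter contributes.
rng = np.random.default_rng(0)
subcubes = [rng.poisson(5.0, size=1000) for _ in range(64)]
rel_err = cosmic_error_Fk(subcubes)
```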
To test whether these assumptions are indeed the culprit, the contribution of overlapping cells (edge $`+`$ discreteness effects) was separated from the contribution of disjoint cells (finite volume effects), shown respectively as solid and dash-long dash curves in Figure 6, which concentrates on E<sup>2</sup>PT (long dashes). Note that the solid curve represents the SS and BeS models as well. Finite volume effects appear to dominate on small scales because our subsamples are dense enough to suppress the discreteness error, as expected (SC). This pinpoints assumption (ii) as the source of inaccuracy. Note that naively one would suspect additional loss of precision in the Taylor expansion of the bivariate CPDF. However, the finite volume error is a double integral over all the cells included in the catalog and separated by more than $`2\ell `$. The contribution of close cells is small, especially when $`\ell /L`$ is small. Thus E<sup>2</sup>PT itself appears to break down in the non-linear regime (SS and BeS are even less accurate), at least for the particular experiment we are analysing. Although EPT itself fares quite well (Fig. 3), its simplest natural extension to bivariate distributions, E<sup>2</sup>PT, is less accurate, as noticed earlier by Szapudi & Szalay (1997) in connection with the cumulant correlators of the APM galaxy catalog. However, the accuracy of the calculation based on E<sup>2</sup>PT should be adequate for most practical uses, and future work on the representation of the bivariate distribution in the highly non-linear regime will result in increased precision. The solid curves in Fig. 6 represent the main contribution to the cosmic error on large scales. Here, as expected (SC), the cosmic error is dominated by edge effects. Despite the fact that the theoretical predictions were determined to leading order in $`v/V`$ and the largest scale considered is $`\ell =L/2`$, i.e. $`v/V=1/8`$, the agreement between theory and measurement is surprisingly good. CCDFS have computed the next-to-leading order contribution, proportional to the perimeter $`\partial V`$ of the survey. With this correction, which increases the cosmic error especially on the largest scales, the next-to-leading order theory would be inferior to the leading order one. The reason is that the calculation of CCDFS assumes a perimetric curvature radius much larger than the cell size. This assumption, which is useful for deep galaxy surveys with small sky coverage, obviously fails for a compact catalogue such as this one, where the cell size $`\ell `$ becomes comparable to $`L`$.

### 4.3 Cosmic Error and Cosmic Bias: Variance and Cumulants

So far only the full moments $`F_k`$ have been examined. The cumulants $`\overline{\xi }`$ and $`S_N`$, however, are the more physically motivated quantities, but their statistics are complicated by the fact that they are ratios. For example (see Appendix A) $$\overline{\xi }=F_2/F_1^2-1.$$ (19) As is well known in statistics (e.g., HG, SCB), $`\langle A/B\rangle \ne \langle A\rangle /\langle B\rangle `$. In other words, the estimator $$\stackrel{~}{\overline{\xi }}=\stackrel{~}{F}_2/\stackrel{~}{F}_1^2-1$$ (20) is biased. Note that this is a general feature of any statistic constructed from unbiased estimators in a non-linear fashion (e.g. SCB).
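This generic effect is easy to reproduce in a toy Monte Carlo. In the sketch below (all numbers illustrative, and ours), counts are drawn from a gamma-Poisson mixture for which $`\overline{\xi }=0.25`$ exactly, and the mean of the per-subsample ratio estimator is compared with the estimate built from the averaged moments:

```python
import numpy as np

rng = np.random.default_rng(1)
ncells, nreal = 200, 20000        # cells per "subsample", number of subsamples

# Gamma-Poisson mixture: <lam> = 8, Var(lam) = 16, hence xi_bar = 16/64 = 0.25.
lam = rng.gamma(shape=4.0, scale=2.0, size=(nreal, ncells))
N = rng.poisson(lam)

F1 = N.mean(axis=1)                        # per-subsample F_1 estimate
F2 = (N * (N - 1.0)).mean(axis=1)          # per-subsample F_2 estimate
xib_tilde = F2 / F1**2 - 1.0               # biased ratio estimator, eq. (20)

xib_pooled = F2.mean() / F1.mean()**2 - 1.0   # moments averaged first
print(xib_tilde.mean() - xib_pooled)          # systematic offset: the cosmic bias
```

The offset shrinks as the number of cells per subsample grows, in line with the perturbative argument quoted next.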
However, SCB showed theoretically that the cosmic bias defined in the Introduction, given here by $$b_{\overline{\xi }}\equiv (\langle \stackrel{~}{\overline{\xi }}\rangle -\overline{\xi })/\overline{\xi },$$ (21) is of the same order as $`(\mathrm{\Delta }\overline{\xi }/\overline{\xi })^2`$ in the regime $`\mathrm{\Delta }\overline{\xi }/\overline{\xi }\ll 1`$. Similar reasoning applies to the $`S_N`$’s. Thus leading order theoretical calculations neglect the bias. This can be done safely in the domain of validity of the perturbative approach used to expand a non-linear combination of biased estimators. A reasonable criterion proposed by SCB for this domain is that the cosmic bias be small compared to the relative cosmic error, which itself should be small compared to unity. For an arbitrary (possibly biased) statistic $`A`$ this reads $$b_A\ll \mathrm{\Delta }A/A\ll 1.$$ (22)

The left panels of Figure 7 are analogous to Fig. 5 and show the measured cosmic error as a function of scale for the biased estimators of $`\overline{\xi }`$, $`S_3`$ and $`S_4`$. The middle panels show the absolute value of the cosmic bias (open symbols) compared to the cosmic error (filled symbols). For additional clarity, the cosmic bias is plotted in linear coordinates as well in the right-hand panels. It is interesting first to compare the cosmic errors of factorial moments and cumulants of the same order. The discreteness error is negligible for the scaling regime and the statistics considered here. The cumulants fare better/worse than the factorial moments in the non-linear/weakly nonlinear regimes, respectively. The finite volume error, dominating on small scales, is the limiting factor for factorial moments, while the edge effect error, dominating on large scales, drives the errors of the cumulants. This is in full accord with the predictions of SCB, which can be consulted for more details. The theoretical models in Fig. 7 use the analytic calculations of SCB and are computed analogously to Fig. 5, as explained in § 4.1. Only E<sup>2</sup>PT is presented in the middle and right-hand panels. Again, it is worth remembering that the leftmost points are dangerously close to the limit of possible contamination from the artificial smoothing introduced by the force softening. For the variance $`\overline{\xi }`$, the theory systematically overestimates the errors and the cosmic bias, except for the latter on large scales. This is not at all unexpected in light of the previous findings on small scales, where the three models SS, BeS and E<sup>2</sup>PT lose precision. In the weakly non-linear regime, $`\ell >\ell _0=7.1h^{-1}`$ Mpc, where perturbation theory is valid, this is somewhat disappointing. However, the dynamic range is limited by the criterion (22), which is hardly, if at all, fulfilled here. Hence the leading order perturbative approach is likely to be insufficient. For the higher order statistics $`S_3`$ and $`S_4`$, the theory again tends to overestimate the amplitude of the measured cosmic bias on small scales. On large scales, where the predicted $`|b_{S_k}|`$ presents a sudden turn-up, condition (22) breaks down, and thus the theory is inapplicable. The measured cosmic errors, on the other hand, are in accord with the theory within the range of its validity. The agreement on small scales is even better for $`\mathrm{\Delta }S_k/S_k`$ than for $`\mathrm{\Delta }F_k/F_k`$, $`k=3,4`$.
This, however, should not be over-interpreted, as it is probably a coincidence due to cancellation effects in the ratios $`S_3=\overline{\xi }_3/\overline{\xi }^2`$ and $`S_4=\overline{\xi }_4/\overline{\xi }^3`$. The cosmic bias is always negative (right-hand panels of Fig. 7), i.e. the biased estimators tend to underestimate the real values (SCB; HG). In this particular experiment, the measured cosmic bias is always dominated by the measured cosmic error, as predicted by the perturbative approach, except on the largest scales, where the cosmic bias can become of the same order as the cosmic error. HG suggested that the cosmic bias should be corrected for when measuring cumulants. Whether this makes sense depends on the magnitude of the cosmic skewness, i.e., the skewness of the cosmic distribution function itself. This will be discussed in more detail in paper II. However, it is worth noting that the function $`\mathrm{{\rm Y}}(\stackrel{~}{A})`$ is positively skewed and that its maximum corresponds to the most likely measurement. This is in general smaller than the average, $`\langle \stackrel{~}{A}\rangle `$. Thus, as pointed out already by SC, the value $`\stackrel{~}{A}`$ measured in a finite sample is likely to underestimate the real value $`A`$ even if $`\stackrel{~}{A}`$ is unbiased. If the cosmic skewness and/or the cosmic variance are large compared to the cosmic bias, it is pointless to correct for the cosmic bias. Either of the above is true for most surveys, including upcoming wide-field surveys such as the 2dF and SDSS; thus bias-corrected estimators are unlikely to be useful in the future.

### 4.4 Cosmic Error and Cosmic Bias: Void Probability and Scaling Function

The upper panel of Fig. 8 shows $`\mathrm{\Delta }P_0/P_0`$ as a function of scale, compared to the prediction of CBS (long dashes), with the finite volume error contribution (dashes-long dashes) and the edge $`+`$ discreteness contribution (solid curve). The agreement between theory and measurement is excellent. The lower panel of Figure 8 corresponds to the scaling function $`\sigma `$. As for $`\overline{\xi }`$ and $`S_N`$, the indicator $`\stackrel{~}{\sigma }=-\mathrm{ln}(\stackrel{~}{P}_0)/\stackrel{~}{\overline{N}}`$ is biased. This bias (open symbols) is of order $`(\mathrm{\Delta }\sigma /\sigma )^2`$ and can be neglected.<sup>4</sup><sup>4</sup>4The theoretical and measured errors displayed in the bottom part of Fig. 8 correspond to the biased indicator. The agreement between theory and measurement is less impressive than for $`P_0`$, but this is mostly due to the difference in dynamic range covered by the error in the upper and lower panels of Fig. 8. Moreover, the calculation of $`\mathrm{\Delta }\sigma /\sigma `$ by CBS is only approximate and could certainly be improved (see the discussion in CBS). The errorbars on $`\sigma `$ are quite small: nearly an order of magnitude smaller than in Figs. 5 and 7. According to equation (13), $`\sigma `$ reflects the low order statistics when $`N_\mathrm{c}=\overline{N}\overline{\xi }\ll 1`$ ($`\sigma \simeq 1`$ in Fig. 4) and the high order statistics when $`N_\mathrm{c}\gg 1`$ ($`\sigma <1`$). From the point of view of the errors, the function $`\sigma `$ is an excellent higher order indicator (as discussed earlier by CBS); it is better than the low order factorial moments or cumulants, at least in the non-linear regime $`\ell \lesssim \ell _0`$.
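The indicator itself is also trivial to implement, which adds to its appeal. A minimal sketch (ours; Poisson counts are used as a toy, for which $`\sigma =1`$ exactly) of the biased indicator and its cosmic error is:

```python
import numpy as np

def sigma_tilde(counts):
    """Biased indicator sigma~ = -ln(P0~)/Nbar~ from counts-in-cells."""
    P0 = np.mean(counts == 0)
    return -np.log(P0) / counts.mean()   # requires at least one empty cell

# Cosmic error from the scatter of the indicator over subcubes.
rng = np.random.default_rng(2)
sig = np.array([sigma_tilde(rng.poisson(1.5, 5000)) for _ in range(256)])
rel_err = sig.std(ddof=1) / sig.mean()   # Delta sigma / sigma; sig.mean() ~ 1
```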
The small errors alone unfortunately do not guarantee the usefulness of this statistic, as various models of large scale structure formation could be degenerate with respect to the void probability. The thorough work of Little & Weinberg (1994) suggests that this is indeed the case. It is tempting, although dangerous, to extrapolate the results of their analysis to the function $`\sigma `$.

### 4.5 Cosmic Error: Counts-in-Cells

The upper panel of Fig. 9 shows the cosmic error on the CPDF as a function of $`N`$ for the various scales considered in the sub-cubes. The scale increases with the $`x`$-coordinate of the upper right part of each curve. In the lower panel $`\mathrm{\Delta }P_N/P_N`$ is represented in a similar manner as a function of $`N/N_{\mathrm{max}}`$, where $`N_{\mathrm{max}}`$ is the value of $`N`$ for which $`P_N`$ is a maximum. [We did not display the (small) scales corresponding to $`N_{\mathrm{max}}=0`$ or $`N_{\mathrm{max}}=1`$.] In agreement with intuition, the cosmic error reaches its minimum in the vicinity of $`N\simeq N_{\mathrm{max}}`$ and becomes increasingly large in the tails. Thus the shape of the CPDF near its maximum has the most constraining power in terms of errors. Kim & Strauss (1998) have measured the cumulants $`S_3`$ and $`S_4`$ by fitting an Edgeworth expansion convolved with a Poisson distribution to the measured CPDF in the 1.2 Jy IRAS galaxy catalog. According to their recipe, the best-determined part of the CPDF, near the maximum, was kept for the fit. Their maximum likelihood approach uses a simple model for the cosmic error, but their method is promising. Its main weakness is the necessity of a strong prior assumption for the shape of the CPDF. A natural consequence is that the estimated errorbars on the measured cumulants are considerably smaller than with the standard methods.

### 4.6 Cosmic Correlations

So far this section has dealt only with the second moment of the cosmic distribution function, i.e. with the cosmic errors. For a full description in the Gaussian limit, however, the moments of the joint distribution function are needed. These moments form the cosmic (cross-correlation) matrix (SCB). It is defined as $`\langle (\stackrel{~}{A}-\langle \stackrel{~}{A}\rangle )(\stackrel{~}{B}-\langle \stackrel{~}{B}\rangle )\rangle `$, where $`\stackrel{~}{A}`$ and $`\stackrel{~}{B}`$ are any counts-in-cells related indicators, for example $`A=F_k(\ell )`$ and $`B=F_k^{\prime }(\ell ^{\prime })`$, or $`A=\overline{\xi }(\ell )`$ and $`B=S_N(\ell ^{\prime })`$, etc. A detailed theoretical investigation can be found in SCB (for $`\ell =\ell ^{\prime }`$). By definition, for two statistics $`A`$ and $`B`$, the correlation coefficient $`-1\le \rho \le 1`$ reads $$\rho \equiv \frac{\langle \delta \stackrel{~}{A}\delta \stackrel{~}{B}\rangle }{\mathrm{\Delta }A\mathrm{\Delta }B}\equiv \frac{\langle (\stackrel{~}{A}-\langle \stackrel{~}{A}\rangle )(\stackrel{~}{B}-\langle \stackrel{~}{B}\rangle )\rangle }{\mathrm{\Delta }A\mathrm{\Delta }B}.$$ (23) The cosmic cross-correlation coefficients together with the errors form the full correlation matrix. The inverse of this is the central quantity for the joint probability distribution function in the Gaussian limit. As a preliminary numerical analysis, Figs. 10 and 11 present the correlation coefficients as functions of scale ($`\ell ^{\prime }=\ell `$) for factorial moments and cumulants, respectively. As in Fig. 5, the dots, dashes and long dashes show the theoretical predictions given by the SS, BeS and E<sup>2</sup>PT models, respectively, as computed by SCB. The computation of $`\langle \delta \stackrel{~}{A}\delta \stackrel{~}{B}\rangle `$ in eq.
(23) is analogous to that of the cosmic error (see SCB for more details). [For $`\mathrm{\Delta }A`$ and $`\mathrm{\Delta }B`$, and to have completely self-consistent calculations, we take the theoretical results as well in eq. (23).] The agreement between theory and measurement is less convincing for the cosmic cross-correlations than for the cosmic error. This appearance is partly due to the linear coordinates of the figures, which emphasize deviations, but the discrepancy is nonetheless real. In Fig. 10 there is significant discord between theory and measurements for the factorial moments in the middle top, middle bottom, and top right panels. On small scales, this result is quite natural: it is probably due to the inaccuracy of the models SS, BeS and E<sup>2</sup>PT employed to describe the underlying bivariate distributions (§ 4.2). In the weakly nonlinear regime, this discrepancy is at first sight puzzling, since the predicted cosmic error matches the measurements perfectly (Fig. 5). The disagreement increases with $`|k-l|`$, where $`k`$ and $`l`$ are the corresponding orders. On large scales, the cross-correlations are dominated by edge effects, leading to the suspicion that the local Poisson approximation (SC, § 4.1) becomes increasingly inaccurate with $`|k-l|`$.<sup>5</sup><sup>5</sup>5This is not surprising: this approximation neglects local correlations. It is all the more inaccurate as the difference between the “weights” given to two overlapping cells, i.e. $`(N)_k`$ and $`(N)_l`$ for factorial moments, increases. Another, although less likely, possibility is that the leading order approach in $`v/V`$ is insufficient and higher order corrections are necessary to calculate cross-correlations. A detailed analysis of these effects would go beyond the scope of this paper and is left for future research. For the cumulants, in addition to the above arguments, our perturbative approach to computing cross-correlations allows only a narrow dynamic range for analytic predictions, defined by criterion (22). In Fig. 11, this condition is chosen for practical purposes to be $$|b_A|\le \mathrm{\Delta }A/A\le 1.$$ (24) This is necessary but not sufficient: the theory appears to disagree significantly with the measurements on large scales in the top left, lower left and lower middle panels of Fig. 11. Despite some of the discrepancies, the general features of the cross-correlations are well described by the theoretical predictions. For instance, the cross-correlation between two statistics $`A_k`$ and $`A_l`$ decreases with the difference between the orders $`|k-l|`$, as predicted (SCB). In our particular experiment $`\overline{N}`$ is significantly correlated with $`\overline{\xi }`$, but only weakly anticorrelated with $`S_k`$, $`k=3,4`$. Similarly, $`\overline{\xi }`$ and $`S_3`$ are weakly correlated, while $`S_3`$ and $`S_4`$ are strongly correlated. A detailed discussion of these effects can be found in SCB.

## 5 Summary and Discussion

In this paper we have studied experimentally the properties of the moments of the cosmic distribution function of measurements $`\mathrm{{\rm Y}}(\stackrel{~}{A})`$, where $`\stackrel{~}{A}`$ is an indicator of a counts-in-cells statistic. For a thorough examination of $`\mathrm{{\rm Y}}(\stackrel{~}{A})`$ itself the reader is referred to paper II, also in this volume. We examined the factorial moments $`F_k`$, the cumulants $`\overline{\xi }`$ and $`S_N`$’s, the void probability $`P_0`$, its scaling function $`\sigma \equiv -\mathrm{ln}(P_0)/F_1`$, and the counts-in-cells themselves, $`P_N`$.
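Operationally, all of the quantities studied here reduce to low-order moments of the subsample measurements. A schematic sketch (ours; array names illustrative, not the FORCE code) of how the cosmic error, cosmic bias and the correlation coefficient of equation (23) are extracted reads:

```python
import numpy as np

def cosmic_moments(A_tilde, A_true):
    """Cosmic error and cosmic bias of an indicator measured in subsamples."""
    dA = A_tilde.std(ddof=1)                   # Delta A
    bA = (A_tilde.mean() - A_true) / A_true    # cosmic bias, eq. (21)
    return dA, bA

def cross_corr(A_tilde, B_tilde):
    """Correlation coefficient rho of eq. (23) between two indicators.

    Population normalisation (ddof=0) is used consistently in the numerator
    and denominator, so -1 <= rho <= 1 is guaranteed."""
    dA = A_tilde - A_tilde.mean()
    dB = B_tilde - B_tilde.mean()
    return np.mean(dA * dB) / (A_tilde.std() * B_tilde.std())

# Example: rho between the xi_bar and S_3 indicators measured per subcube,
# given arrays xibar_tilde and S3_tilde of length 4096:
# rho = cross_corr(xibar_tilde, S3_tilde)
```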
$`\mathrm{{\rm Y}}(\stackrel{~}{A})`$ was measured in the largest available $`\tau `$CDM simulation, divided into 4096 cubical subsamples. In each of these many subsamples, $`\stackrel{~}{A}`$ was extracted, and its probability distribution function $`\mathrm{{\rm Y}}`$ was estimated with great accuracy. The main results of our analysis are the following:

1. The measured counts-in-cells in the whole simulation, in particular the cumulants $`S_N`$, are in excellent agreement with perturbation theory predictions in the weakly nonlinear regime. This confirms the results of numerous previous investigations over an unprecedented dynamic range. The textbook quality agreement demonstrates the state of the art accuracy of the simulation. Similarly, the measurements confirm extended perturbation theory (EPT) in the full available dynamic range $`0.05\lesssim \overline{\xi }\lesssim 50`$, for $`S_N`$, $`N\le 10`$. In addition, one-loop perturbation theory predictions based on the spherical model (Fosalba & Gaztañaga 1998) were found to be an excellent description of the measured $`S_N`$ up to $`\overline{\xi }\simeq 1`$.

2. The variance of $`\mathrm{{\rm Y}}`$ is the square of the expected cosmic error, $`\mathrm{\Delta }A`$, in the measurement of $`A`$ in a subsample, identified with a realization of the local observed universe. The measurement of $`\mathrm{\Delta }A/A`$, for $`A=P_0`$, $`\sigma `$, $`F_k`$ and $`S_N`$, appears to be globally in good accord with the theoretical predictions of Colombi, Bouchet & Schaeffer (1995), Szapudi & Colombi (1996, SC) and Szapudi, Bernardeau & Colombi (1998a, SCB). In the highly non-linear regime, the theoretical predictions of SC and SCB tend to overestimate the cosmic error slightly, except for the ratios $`S_3=\overline{\xi }_3/\overline{\xi }^2`$ and $`S_4=\overline{\xi }_4/\overline{\xi }^3`$. In the latter case, there are some cancellations and the agreement between theory and measurement is good even on small scales, but this is probably a coincidence. It thus appears that none of the three variants of the hierarchical model in SC and SCB can give an accurate enough account of the non-linear behavior of gravitational dynamics for the bivariate distribution functions.<sup>6</sup><sup>6</sup>6As discussed in § 4.1, the analysis of the cosmic error indirectly probes the bivariate probability distribution function $`P_{N,M}(r,\ell )`$ of having $`N`$ and $`M`$ galaxies respectively in two cells of size $`\ell `$ separated by distance $`r`$ (see, e.g. SC). In the weakly non-linear regime, the agreement between theory and measurements is excellent for the factorial moments, but less good for the cumulants, due to the limitations of the perturbative approach used to expand such ratios. Nonetheless E<sup>2</sup>PT yields the most precise overall agreement with the measurements for our particular experiment. On small scales, $`1h^{-1}`$ Mpc $`\lesssim \ell \lesssim 4h^{-1}`$ Mpc, E<sup>2</sup>PT overestimates the errors by perhaps a factor of two in the worst case.

3. In addition to the cosmic errors, the cosmic bias, $`b_A`$, was studied in detail as well. An estimator is biased when its ensemble average differs from the real value: $`b_A\equiv \langle \stackrel{~}{A}\rangle /A-1\ne 0`$. This is always the case when unbiased estimators are combined in a non-linear fashion to form a new estimator (SCB; Hui & Gaztañaga 1998, HG), as for the cumulants. In agreement with SCB, the measured cosmic bias is of order $`(\mathrm{\Delta }A/A)^2`$ and thus negligible when the cosmic error is small.
However, as for the errors, the theory tends to overestimate the bias in the non-linear regime. On large scales, where the cosmic bias becomes significant because of edge effects, the perturbative approach used by SCB to compute theoretical predictions is outside its domain of validity. Note that in the regime where the cosmic bias is significant, the cosmic error is likely to be large. For instance, in the particular numerical experiment used in this paper, the cosmic bias was always smaller than the cosmic error and in most cases negligible. Moreover, in the regime where the bias could be significant, the cosmic distribution function $`\mathrm{{\rm Y}}(\stackrel{~}{A})`$ is significantly positively skewed (paper II). This implies that the measured $`\stackrel{~}{A}`$ is likely to underestimate the true value even for an unbiased estimator. The result is an effective cosmic bias, at most of order $`\mathrm{\Delta }A/A`$. As already shown by SC, this effective bias can contaminate even unbiased estimators such as $`\stackrel{~}{F}_k`$ and $`\stackrel{~}{P}_N`$. As a consequence, it is pointless to correct for the cosmic bias, in contrast with the proposition of HG, unless it is done in the framework of a maximum likelihood approach which fully takes into account the shape of the cosmic distribution function.

4. To complete the analysis of second moments, a preliminary investigation of the cosmic correlation coefficients for factorial moments and cumulants was conducted. Together with the cosmic errors, these coefficients form the cosmic cross-correlation matrix which underlies maximum likelihood analysis in the Gaussian limit. The theoretical predictions of SCB give a good qualitative account of the measured correlation coefficients, although they become increasingly approximate as the difference between the corresponding orders grows. This is likely to be a consequence of the local Poisson assumption (SC) employed for the analytic predictions.

Provided that the Gaussian limit is reached in terms of the error distribution, the formalism of SC and SCB allows for a maximum likelihood analysis of the CPDF measured in three-dimensional galaxy catalogs. Two preliminary investigations are currently being undertaken. Szapudi, Colombi & Bernardeau (1999b) reanalyse already existing joint measurements of $`F_1`$ and $`\overline{\xi }`$, and Bouchet, Colombi & Szapudi (1999) perform a likelihood analysis of the counts-in-cells measured in the 1.2 Jy IRAS survey (Bouchet et al. 1993). Paper II probes the domain of validity of the Gaussian approximation for the cosmic distribution function, together with preliminary investigations of the bivariate cosmic distributions $`\mathrm{{\rm Y}}(\stackrel{~}{A},\stackrel{~}{B})`$. As shown there, the Gaussian limit is reached when the relative cosmic error is small compared to unity. This is expected to hold over a large dynamic range in future large galaxy surveys such as the 2dF and the SDSS (Colombi et al. 1998). Statistical analyses of weak lensing surveys are similar to counts-in-cells measurements (e.g., Bernardeau, Van Waerbeke & Mellier 1997; Mellier 1999; Jain, Seljak & White 1999). As a result, a slight modification of the formalism of SC and SCB can fruitfully be applied to compute theoretical cosmic errors and cross-correlations (Bernardeau, Colombi & Szapudi 1999). Finally, it is worth mentioning a few questions which were not addressed by the investigations presented in this paper.
As light might not trace mass, the distribution of galaxies may be biased (not to be confused with the cosmic bias), and realistic galaxy surveys are also subject to redshift distortions. While the above results were obtained for the mass, note that the theory which served as a basis of comparison is quite general and was formulated to describe phenomenologically either the mass or the galaxies. It appears that there should be no qualitative changes introduced by biasing or redshift distortions (e.g., Szapudi et al. 1999e); thus the same theory can be used for the galaxies as for the mass, except perhaps with slightly different parameters or underlying statistical models. In fact, two of the models (SS, BeS) were entirely motivated by the galaxy and not by the mass distribution; they are expected to be more accurate for realistic catalogs if used in a self-consistent fashion. The scaling properties underlying these models are even more accurate in redshift space, as is well known. EPT, on the other hand, was originally motivated by theoretical considerations of the mass distribution and by numerical simulations (Colombi et al. 1997), and it is therefore no wonder that it is the most successful model for the mass (but see also Scoccimarro & Frieman 1998). Nonetheless, even EPT was found to be a fairly good model for the galaxy distribution, at least in the EDSGC survey (Szapudi, Meiksin & Nichol 1996), a possible indication that galaxies approximately trace mass after all. In addition, it is worth mentioning that biasing models can be nondeterministic, i.e. stochastic in nature, but this again does not introduce anything qualitatively new which could not be handled in the framework of the theory of SCB. Finally, the theory outlined in this paper was contrasted against measurements in a $`\tau `$CDM simulation. However, the analytical framework is general enough to accommodate any cosmological model, and there are no qualitative differences in this respect between different cosmologies with Gaussian initial conditions and hierarchical clustering. Thus repeating the same analysis for a different CDM-like cosmogony would be superfluous.

## Acknowledgments

The FORTRAN routine for computing $`S_N`$, $`3\le N\le 10`$, using one-loop perturbation theory predictions based on the spherical model was provided by P. Fosalba (see the right-hand panels of Fig. 3). We thank F. Bernardeau, P. Fosalba, C. Frenk, R. Scoccimarro, A. Szalay and S. White for useful discussions. It is a pleasure to acknowledge support for visits by IS and SC to the MPA, Garching, and by SC to the Department of Physics, Durham, during which part of this work was completed. IS and AJ were supported by the PPARC rolling grant for Extragalactic Astronomy and Cosmology at Durham. The Hubble Volume simulation data were made available by the Virgo Supercomputing Consortium (http://star-www.dur.ac.uk/frazerp/virgo/virgo.html). The simulation was performed on the T3E at the Computing Centre of the Max-Planck Society in Garching. We would like to thank the many staff at the Rechenzentrum who have helped us to bring this project to fruition. The FORCE package (FORtran for Cosmic Errors) used for the error calculations in this paper is available on request from its authors SC and IS.

## Appendix A Definitions and Notations

The count probability distribution function (CPDF), $`P_N`$, gives the probability of finding $`N`$ objects in a cell of volume $`v`$ thrown at random in the catalog.
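A hedged sketch of this basic measurement may be useful (ours, assuming a periodic cubic box; "massive oversampling" as in § 3 simply means a large number of thrown cells):

```python
import numpy as np

def measure_cpdf(pos, box, cell, nthrow, rng):
    """Estimate P_N by throwing nthrow cubic cells of side `cell` at random
    positions in a periodic box of side `box`; pos is an (npart, 3) array."""
    corners = rng.uniform(0.0, box, size=(nthrow, 3))
    counts = np.zeros(nthrow, dtype=int)
    for i, c in enumerate(corners):
        d = (pos - c) % box                  # periodic offsets into [0, box)
        counts[i] = np.all(d < cell, axis=1).sum()
    return np.bincount(counts) / nthrow      # P_N for N = 0 ... max count

# Toy usage with an unclustered (Poisson) "catalog".
rng = np.random.default_rng(3)
pos = rng.uniform(0.0, 100.0, size=(20000, 3))
P_N = measure_cpdf(pos, box=100.0, cell=5.0, nthrow=2000, rng=rng)
```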
Factorial moments, $`F_k`$, are defined as follows: $$F_k\equiv \langle (N)_k\rangle \equiv \langle N(N-1)\mathrm{\cdots }(N-k+1)\rangle =\sum _N(N)_kP_N,$$ (25) where the falling factorial $`(N)_k`$ is defined in the first part of the equation. The $`F_k`$ are proportional to the moments of the underlying density field $`\rho `$ smoothed over the cell of volume $`v`$: $`F_k=\overline{N}^k\langle \rho ^k\rangle `$ (SSa; assuming the normalization $`\langle \rho \rangle =1`$), where $`\overline{N}`$ is the average count in a cell: $$\overline{N}\equiv \langle N\rangle =F_1.$$ (26) Counts-in-cells are related to quantities of dynamical interest, such as the (connected) $`N`$-point correlation functions, $`\xi _N`$ (e.g., Peebles 1980). The averaged $`N`$-point correlation function over a cell is given by $$\overline{\xi }_N\equiv \frac{1}{v^N}\int _vd^3r_1\mathrm{\cdots }d^3r_N\,\xi _N(r_1,\mathrm{\ldots },r_N).$$ (27) This is the connected moment of the smoothed density field, $`\overline{\xi }_N=\langle \delta ^N\rangle _\mathrm{c}`$ (with $`\delta \equiv \rho -1`$). The connected moments, or cumulants, of a Gaussian field are identically zero for $`N\ge 3`$. In this paper, normalized cumulants are defined as $$S_N\equiv \frac{\overline{\xi }_N}{\overline{\xi }^{N-1}},$$ (28) with the short-hand notation $`\overline{\xi }\equiv \overline{\xi }_2`$. By definition, $`S_1\equiv S_2\equiv 1`$; thus at second order $`\overline{\xi }`$ is used. The quantities $`S_3`$ and $`S_4`$ are often called skewness and kurtosis in the astrophysical literature, although their definition differs slightly from the original usage in statistics. The reason for the normalization in eq. (28) is dynamical: the $`S_N`$’s exhibit only a weak scale dependence, due to the scale-free nature of gravity. In the highly nonlinear regime stable clustering is expected to set in (e.g., Peebles 1980), and in the weakly nonlinear regime perturbation theory predicts approximate scaling depending on the initial fluctuation spectrum (e.g., Juszkiewicz, Bouchet & Colombi 1993; Bernardeau 1994). The counts-in-cells generating function, $$P(x)\equiv \sum _{N=0}^{\mathrm{\infty }}x^NP_N,$$ (29) can be written as (White 1979; Balian & Schaeffer 1989a; SSa) $$P(x)=\mathrm{exp}\{-\overline{N}(1-x)\sigma [N_\mathrm{c}(1-x)]\},$$ (30) where $$N_\mathrm{c}\equiv \overline{N}\overline{\xi }$$ (31) is the typical number of objects in an overdense cell (e.g., Balian & Schaeffer 1989a), and $$\sigma (N_\mathrm{c})=\sum _{N=1}^{\mathrm{\infty }}(-1)^{N-1}\frac{S_N}{N!}N_\mathrm{c}^{N-1}.$$ (32) It is worth noticing that (White 1979; Balian & Schaeffer 1989a; SSa) $$P(x)=P_0[\overline{N}(1-x)],$$ (33) if the void probability is expressed in terms of the average count $`\overline{N}`$. The measurement of $`P_0`$ is particularly interesting since it probes directly the count probability generating function: $$\sigma (N_\mathrm{c})=-\mathrm{ln}(P_0)/\overline{N}.$$ (34) The exponential generating function for factorial moments, $$F(x)=\sum _{k\ge 0}F_k\frac{x^k}{k!},$$ (35) is directly related to $`P(x)`$ (SSa) through $$F(x)=P(x+1).$$ (36) Combining eqs. (30), (32) and (36), one can obtain a useful relation between cumulants and factorial moments (SSa): $$S_N=\frac{\overline{\xi }F_N}{N_\mathrm{c}^N}-\frac{1}{N}\sum _{k=1}^{N-1}\left(\begin{array}{c}N\\ k\end{array}\right)\frac{(N-k)S_{N-k}F_k}{N_\mathrm{c}^k}.$$ (37) The state of the art practical recipe consists of measuring the CPDF with high oversampling (Sect. 3), computing the factorial moments from eq. (25), and finally calculating the cumulants from the above recursion, eq. (37). This procedure eliminates the need for explicit discreteness correction.
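This recipe fits in a few lines of code. The sketch below is a minimal illustration (ours, not the FORCE package) of equations (25) and (37); note that it assumes clustered counts, so that $`N_\mathrm{c}\ne 0`$:

```python
import numpy as np
from math import comb

def cumulants_from_cpdf(P_N, nmax=4):
    """P_N -> F_k (eq. 25) -> xi_bar and S_N via the recursion of eq. (37).

    As a consistency check, the recursion returns S[1] = S[2] = 1 exactly."""
    P_N = np.asarray(P_N, dtype=float)
    N = np.arange(len(P_N), dtype=float)
    F = {}
    for k in range(1, nmax + 1):
        falling = np.ones_like(N)
        for j in range(k):
            falling *= (N - j)
        F[k] = float(np.sum(falling * P_N))
    Nbar = F[1]
    xib = F[2] / Nbar**2 - 1.0          # eq. (19); must be nonzero below
    Nc = Nbar * xib                      # eq. (31)
    S = {}
    for n in range(1, nmax + 1):
        S[n] = xib * F[n] / Nc**n - sum(
            comb(n, k) * (n - k) * S[n - k] * F[k] / Nc**k
            for k in range(1, n)) / n
    return xib, S

# Toy clustered CPDF: xib ~ 0.17, and S[1], S[2] come out as 1 as required.
xib, S = cumulants_from_cpdf([0.30, 0.25, 0.20, 0.15, 0.07, 0.03])
```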
no-problem/9912/hep-ph9912301.html
ar5iv
text
# 1 SP

## Acknowledgements

The work of A.R. was supported by the TMR Network under the EEC Contract No. ERBFMRX–CT960090. We thank R. Barbieri, G. Giudice and G.G. Ross for useful discussions.

## Appendix A Naturalness bound on $`m_0`$

As said, naturalness disfavours heavy $`m_0`$ because very strong cancellations (either between different soft terms, or between the tree level $`m_0^2`$ term and the radiative corrections to it) are needed in order to accommodate very large values of $`m_0`$. Setting a naturalness upper bound on $`m_0`$ amounts to estimating how unlikely the required cancellation is in the light of our experimental and theoretical knowledge. If we assign to the parameter space an arbitrary probability distribution function (pdf) we can compute the probability of any event, for example of the required cancellation. The pdf is however totally arbitrary in the absence of experimental data. This same assumption (the choice of an arbitrary pdf, called ‘Bayes prior’ in statistical inference) is the crucial ingredient that allows one to convert experimental data into measured ranges of fundamental parameters, like the top mass. Starting from an arbitrary pdf and using simple properties of probability, it is possible to follow how experimental data modify the probability of different values. When experimental information is sufficiently strong, the final pdf does not depend on the arbitrary pdf needed to start with. This is why we can today assume that the pole top mass is distributed according to a $`175\pm 5`$ GeV gaussian. Since the soft terms are totally unknown we assume some broad pdf for them. Our results have only a mild dependence on the pdf, unless some crazy pdf is chosen. Since $`M_Z`$ (that is one combination of soft terms) has already been measured with practically infinite precision, it is simpler to take this experimental constraint into account with the procedure used in : we assume a probability distribution for the dimensionless ratios of the soft terms, and compute the overall scale of the soft terms from the EWSB condition. Since in this way we never specify how heavy the sparticles are, the connection of this procedure with naturalness is quite transparent. Sampling all parameters, like $`M_t`$ and $`m_0/m_{1/2}`$, according to their assumed pdf, we estimated that only in $`p\simeq 5\%`$ of the cases does a cancellation in the EWSB conditions generate sparticle masses above all experimental bounds in mSUGRA. In order to set upper bounds on $`m_0`$ we repeat the analysis in , but without averaging $`p`$ over the distribution of $`m_0/m_{1/2}`$: we here compute $`p`$ as a function of $`m_0/m_{1/2}`$ at fixed $`\mathrm{tan}\beta =10`$. (We could also study $`p`$ as a function of $`m_0/M_Z`$. However, $`m_0\gg M_Z`$ is possible either because $`|a_0|\ll 1`$, or due to a cancellation between different soft terms. We study $`p(m_0/m_{1/2})`$ rather than $`p(m_0/M_Z)`$ because we here want to concentrate our attention on the first possibility. Bounds on $`m_0/M_Z`$ have a more direct impact on phenomenology; bounds on $`m_0/m_{1/2}`$ have a more direct impact on theoretical attempts to predict $`m_0/m_{1/2}`$.) We find that $`p\left(m_0/m_{1/2}\right)`$ has a maximum at $`m_0\simeq 3m_{1/2}`$, decreases when $`m_0\ll m_{1/2}`$ (because too small values of $`m_0`$ give light right-handed sleptons) and becomes negligibly small when $`m_0\gg m_{1/2}`$ (more precisely when $`m_0\gtrsim 3M_3`$).
We again conclude that values of $`m_0`$ significantly above $`1\mathrm{TeV}`$ require very unlikely cancellations in the EWSB condition. A certain minimal amount of cancellation is however required even for $`m_0`$ below $`1\mathrm{TeV}`$ in order to accommodate the experimental bounds, as recalled in appendix C.

## Appendix B Heavy $`m_0`$ and the naturalness problem

The $`Z`$ mass is given, as a function of the soft terms, by a potential minimization condition that in mSUGRA with vanishing $`A_0`$ and large $`\mathrm{tan}\beta \gtrsim 10`$ can be approximated as $$M_Z^2=-2\left(a_0m_0^2+a_{1/2}m_{1/2}^2+\mu ^2\right).$$ (3) One important success of supersymmetry is the prediction that RGE effects typically induce negative $`a_i`$ coefficients, thus establishing a direct link between SUSY-breaking and EW-breaking. This nice feature is due to the $`\lambda _t`$ and $`g_3`$ interactions: SUSY breaking most naturally induces a non-vanishing $`Z`$-boson mass comparable to the gluino and top-squark masses, which are typically heavier than the other non-coloured sparticles. On the contrary, experiments now tell us that the $`Z`$ boson is lighter than (almost) all sparticles. This kind of naturalness problem manifests itself in eq. (3) if the bounds on sparticle masses imply that the individual contributions to $`M_Z^2`$ are much larger than $`M_Z^2`$ itself. What happens is that the $`m_0^2`$ contribution gives no problems, while the $`m_{1/2}^2`$ term gives an unpleasantly large contribution to $`M_Z^2`$, which can be canceled by the $`\mu ^2`$ term. The $`m_0^2`$ contribution does not pose naturalness problems because the experimental bound on $`m_0`$ is weak ($`m_0`$ could even be zero), and because the coefficient $`a_0`$ is typically small, $`a_0<1/3`$. The particular structure of the SUSY RGE protects the $`m_0^2`$ contribution from QCD corrections, which instead affect the $`m_{1/2}^2`$ contribution. This well known fact can be easily understood with the techniques of . The $`m_{1/2}^2`$ term is problematic because it has a large coefficient $`a_{1/2}\simeq -\left(3÷5\right)/2`$ and because LEP and Tevatron experiments provide significant lower bounds on $`m_{1/2}`$. The $`m_{1/2}^2`$ contribution to $`M_Z^2`$ is approximately given by $$\frac{M_Z^2}{\left(91\mathrm{GeV}\right)^2}=\left(5÷11\right)\left(\frac{M_3}{290\mathrm{GeV}}\right)^2+\mathrm{\cdots }$$ where $`M_3\simeq 2.5m_{1/2}`$ is renormalized at $`Q=500\mathrm{GeV}`$ and lower values in the given range can be obtained for higher $`\mathrm{tan}\beta `$ and lower $`\lambda _t\left(M_{\mathrm{GUT}}\right)`$. The LEP limit on the chargino masses gives rise, due to our assumption of gaugino mass unification, to a strong but indirect bound on the gluino mass, $`M_3\gtrsim 290\mathrm{GeV}`$. If gaugino mass unification is abandoned, only the Tevatron direct bound on the gluino mass applies ($`M_3\gtrsim \left(180÷280\right)\mathrm{GeV}`$, depending on the squark spectrum), so that the situation can be partially improved. The value of $`m_0`$ has only a small indirect impact on the naturalness problem: since $`\mathrm{tan}\beta `$ is determined by minimizing the potential, a moderately large $`m_0`$ allows one to obtain naturally the moderately large values of $`\mathrm{tan}\beta \simeq 10`$ for which the $`m_{1/2}^2`$ problem is minimized.
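As a quick numerical illustration of the last displayed relation (our back-of-the-envelope sketch; the coefficient range $`5÷11`$ and the reference gluino mass of 290 GeV are taken from the text above):

```python
# Contribution of the m_1/2 term to M_Z^2, in units of the observed value,
# using M_Z^2/(91 GeV)^2 = (5..11) (M_3 / 290 GeV)^2 from the text.
for coeff in (5.0, 11.0):
    for M3 in (290.0, 500.0, 1000.0):    # gluino mass in GeV
        ratio = coeff * (M3 / 290.0) ** 2
        print(f"coeff={coeff:4.1f}  M3={M3:6.0f} GeV  -> {ratio:6.1f} x M_Z^2")
```

Even at the lower experimental bound on $`M_3`$, the contribution exceeds $`M_Z^2`$ by a factor of at least five, which is the tuning discussed above.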